Ansible, Redis

Building an Automated Config Management Server using Ansible+Flask+Redis

It’s been almost 2 months since I started playing full time with Ansible. Like most sysadmins, I’ve been using Ansible via the CLI most of the time. Unlike Salt or Puppet, Ansible is agentless, so everything has to be invoked from the box where Ansible and the respective playbooks are installed. Also, if you want to use Ansible with EC2 features like Auto Scaling, you need to either buy Ansible Tower or use ansible-pull along with the user-data script. I’ve also seen people use custom scripts that fetch their repo and execute the playbooks locally to bootstrap a node.

Being a big fan of Flask, I’ve used it to build many backend APIs that automate a bunch of my tasks. So this time I decided to write a simple Flask API for executing Ansible playbooks and ad hoc commands. Ansible also provides a Python API, which made my work easier. Like most Ansible users, I use roles for all my playbooks. We could expose Ansible directly through an API and execute playbooks synchronously, but there are cases where the playbook execution takes more than 5 minutes, and of course any network latency will slow down package downloads and the like. I don’t want to force my HTTP clients to wait for the final output of the playbook execution before getting a response back.

So I decided to go with a job queue. Each time a request comes to my API, the job is queued in Redis and the job ID is returned to the client. My job workers then pick jobs from the Redis queue, perform the execution on the backend, and keep updating the job status. So I need to expose two APIs: one for receiving jobs and one for reporting job status. For the Redis queue there is an awesome library called rq, which I’ve been using for all my queuing tasks.

Flask API

The job accepts a bunch of parameters like host, role and env via an HTTP POST. Since the role, host, etc. have to be retrieved from the HTTP request, my playbook YAML file has to be generated dynamically. So I decided to use Jinja templating to create the playbook YAML file on the fly.
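Before the endpoints, the app needs a Flask instance, a Redis connection and an rq queue. Here is a minimal sketch of that wiring (the localhost Redis and the single default queue are my assumptions, not details from the original setup):

import json
import random
import string

import ansible.playbook                               # Ansible 1.x Python API
import jinja2
import redis
from ansible import callbacks, utils
from flask import Flask, request
from rq import Queue
from rq.job import Job

app = Flask(__name__)
conn = redis.Redis(host='localhost', port=6379)       # Redis server backing the job queue
q = Queue(connection=conn)                            # default rq queue the API pushes jobs onto

The workers are separate processes, started with rq’s worker command, that pull jobs off this queue and run them. With that wiring in place, below is my sample API for role-based playbook execution.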

@app.route('/ansible/role/', methods=['POST'])
def role():
  inst_ip = request.form['host']                          # Host to which the playbook has to be applied
  inst_role = request.form['role']                        # Role(s) to be applied in the playbook
  env = request.form['env']                               # Extra env variables to be passed while executing the playbook
  ans_remote_user = "ubuntu"                              # Default remote user
  ans_private_key = "/home/ubuntu/.ssh/id_rsa"            # Default ssh private key
  job = q.enqueue_call(                                   # Queuing the job on to Redis
            func=ansible_run, args=(inst_ip, inst_role, env, ans_remote_user, ans_private_key,), result_ttl=5000, timeout=2000
        )
  return job.get_id()                                     # Returns the job id if the job is successfully queued to Redis

Below is a sample templating function that generates the playbook YAML file via Jinja2 templating.

def gen_pbook_yml(ip, role):
  templateLoader = jinja2.FileSystemLoader( searchpath="/" )
  templateEnv = jinja2.Environment( loader=templateLoader )
  TEMPLATE_FILE = "/opt/ansible/playbook.jinja"                # Jinja template file location
  template = templateEnv.get_template( TEMPLATE_FILE )
  role = role.split(',')                       # Turn role into a list if multiple roles are mentioned in the POST request
  r_text = ''.join([random.choice(string.ascii_letters + string.digits) for n in xrange(32)])
  temp_file = "/tmp/" + "ans-" + r_text + ".yml"           # Creating a unique playbook yml file
  templateVars = { "hst": ip,
                   "roles": role
                 }
  outputText = template.render( templateVars )             # Rendering the template
  text_file = open(temp_file, "w")
  text_file.write(outputText)                      # Saving the rendered output to the temp file
  text_file.close()
  return temp_file
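For reference, a playbook.jinja that matches the templateVars above could look like the sketch below (my assumption; the original template file isn’t shown in the post):

---
- hosts: {{ hst }}
  roles:
{% for role in roles %}
    - {{ role }}
{% endfor %}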

Once the playbook file is ready, we need to invoke Ansible’s API to perform the bootstrapping. This is actually done by the job workers. Below is a sample function that invokes the playbook API from Ansible core.

def ansible_run(ans_inst_ip, ans_inst_role, ans_env, ans_user, ans_key_file):
  yml_pbook = gen_pbook_yml(ans_inst_ip, ans_inst_role)   # Generating the playbook yml file
  stats = callbacks.AggregateStats()                      # Collects the per-host run results
  playbook_cb = callbacks.PlaybookCallbacks(verbose=utils.VERBOSITY)
  runner_cb = callbacks.PlaybookRunnerCallbacks(stats, verbose=utils.VERBOSITY)
  run_pbook = ansible.playbook.PlayBook(                  # Invoking Ansible's playbook API
                 playbook=yml_pbook,
                 callbacks=playbook_cb,
                 runner_callbacks=runner_cb,
                 stats=stats,
                 remote_user=ans_user,
                 private_key_file=ans_key_file,
                 host_list="/etc/ansible/hosts",          # use either host_list or inventory
#                inventory='path/to/inventory/file',
                 extra_vars={
                    'env': ans_env
                 }
                 ).run()
  return run_pbook                    # We can tune the output that has to be returned

Now the job workers execute the playbooks and update the status in Redis. Next we need to expose our job status API. Below is a sample Flask API for the same.

@app.route("/ansible/results/<job_key>", methods=['GET'])
def get_results(job_key):

    job = Job.fetch(job_key, connection=conn)
    if job.is_finished:
        ret = job.return_value
    elif job.is_queued:
        ret = {'status':'in-queue'}
    elif job.is_started:
        ret = {'status':'waiting'}
    elif job.is_failed:
        ret = {'status': 'failed'}

    return json.dumps(ret), 200
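A quick way to exercise both endpoints with curl (assuming Flask’s default port 5000; the host, roles and job ID below are just illustrative):

$ curl -X POST -d "host=10.0.1.21" -d "role=nginx,redis" -d "env=staging" http://localhost:5000/ansible/role/
<job-id>

$ curl http://localhost:5000/ansible/results/<job-id>
{"status": "in-queue"}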

Now we have a fully fledged API server for executing role-based playbooks. This API can also be used with user-data scripts in Auto Scaling, where we just need to perform an HTTP POST request to the API server and it will start the bootstrapping. I’ve tested this app locally with various scenarios and the results are promising. As a next step, I’m planning to extend the API to do more jobs, like automating code pushes and running ad hoc commands via the API. With applications like Ansible, Redis and Flask, I’m sure sysadmins can attain DevOps nirvana :). I’ll be pushing the latest working code to my GitHub account soon.

CollectD, Elasticsearch, Kibana, logstash, Monitoring, Redis

Monitoring Redis Using CollectD and ELK

Redis is an open-source, networked, in-memory key-value data store. It’s heavily used everywhere, from web stacks to monitoring to message queues. Monitoring tools like Sensu already have some good scripts to monitor Redis. Last month at PyCon 2014, Plivo open-sourced a new rate-limited queue called SHARQ, which is built on Redis. So apart from plain monitoring checks, we decided to keep a time-series record of what’s happening in our Redis cluster. Since we are already heavily using the ELK stack to visualize our infrastructure, we decided to go ahead with the same.

CollectD Redis Plugin

There is a cool CollectD plugin for Redis. It pulls a variety of data from Redis, including memory used, commands processed, number of connected clients and slaves, number of blocked clients, number of keys stored per db, uptime and changes since the last save. The installation is pretty simple and straightforward.

$ apt-get update && apt-get install collectd

$ git clone https://github.com/powdahound/redis-collectd-plugin.git /tmp/redis-collectd-plugin

Now place the redis_info.py file in the collectd plugin folder and enable the Python plugin so that collectd can load it. Below is our collectd conf.

Hostname    "<redis-server-fqdn>"
Interval 10
Timeout 4
Include "/etc/collectd/filters.conf"
Include "/etc/collectd/thresholds.conf"

LoadPlugin syslog
<Plugin syslog>
        LogLevel info
</Plugin>

LoadPlugin network
<Plugin network>
    Server "<logstash-fqdn>" "<logstash-collectd-port>"
    ReportStats true
</Plugin>

Include "/etc/collectd/redis.conf"      # This is the configuration for the Redis plugin

Now copy the Redis Python plugin and its conf file into the collectd folder.

$ mkdir /etc/collectd/plugin            # This is where we are going to place our custom plugins

$ cp /tmp/redis-collectd-plugin/redis_info.py /etc/collectd/plugin/

$ cp /tmp/redis-collectd-plugin/redis.conf /etc/collectd/
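The plugin ships with a redis.conf that looks roughly like this (a sketch based on the plugin’s bundled config; the localhost/6379 values are assumptions for a local Redis):

<LoadPlugin python>
    Globals true
</LoadPlugin>

<Plugin python>
    ModulePath "/opt/collectd/lib/collectd/plugins/python"
    Import "redis_info"

    <Module redis_info>
        Host "localhost"
        Port 6379
        Verbose false
    </Module>
</Plugin>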

As shown above, the module path in redis.conf defaults to ‘/opt/collectd/lib/collectd/plugins/python’. Make sure to replace this with the location where we copied the plugin file, in our case “/etc/collectd/plugin”. Now let’s restart the collectd daemon to enable the Redis plugin.

$ /etc/init.d/collectd stop

$ /etc/init.d/collectd start

In my previous blog, I mentioned how to enable and use the CollectD input in Logstash and how to use Kibana to plot the data coming from collectd (a sample Logstash input is sketched right after the list below). These are the data points we are receiving from CollectD in Logstash:

  1) type_instance: blocked_clients
  2) type_instance: evicted_keys
  3) type_instance: connected_slaves
  4) type_instance: commands_processed
  5) type_instance: connected_clients
  6) type_instance: used_memory 
  7) type_instance: <dbname>-keys
  8) type_instance: changes_since_last_save
  9) type_instance: uptime_in_seconds
10) type_instance: connections_received
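The Logstash side that receives these metrics can look like the sketch below (the port is collectd’s default network port; on older Logstash releases a dedicated collectd input plugin was used instead of the codec):

input {
  udp {
    port => 25826
    buffer_size => 1452
    codec => collectd { }
    type => "collectd"
  }
}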

Now we need to visualize these via Kibana. Let’s create some Elasticsearch queries so that we can visualize them directly. Below are some sample queries created in the Kibana UI.

1) type_instance: "commands_processed" AND host: "<redis-host-fqdn>"
2) type_instance: "used_memory" AND host: "<redis-host-fqdn>"
3) type_instance: "connections_received" AND host: "<redis-host-fqdn>"
4) type_instance: "<dbname>-keys" AND host: "<redis-host-fqdn>"

Now that we have some sample queries, let’s visualize them.

Now create histograms following the same procedure, just changing the selected queries.

HipChat, Plivo, Redis, Robut

Make Text to Speech Calls From Hip Chat

I’ve been using HipChat for the last month. Before that it was always IRC. When I joined Plivo’s DevOps family, I got a lot of places where I could integrate Plivo to automate stuff, so I was planning to do something with HipChat. There are a couple of HipChat bots available; I decided to use ‘robut’, as it is simple, written in Ruby, and easy to write plugins for. Robut needs only one “Chatfile” as its config file. All the HipChat account and room details are mentioned in this file, and plugins are enabled there as well.

First we need to clone the repository.

$ git clone https://github.com/justinweiss/robut.git

Now I need the ‘redis’ and ‘plivo’ gems for my new plugin, so I added them to the Gemfile. Once the gems are added, we need to run “bundle install” to install all the necessary gems. The path to the Chatfile can be set in the “bin/robut” file. Before starting robut, we need to add the basic HipChat settings, i.e. the jabber ID, nick, password and room. Now we need to create a plugin to use with robut; the default plugins are available in the “lib/robut/plugin” folder.
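A minimal Chatfile, based on robut’s sample config (the IDs, nick and room below are placeholders), looks like this:

# Chatfile: enable our plugin and configure the HipChat connection
Robut::Plugin.plugins = [Robut::Plugin::Call]

Robut::Connection.configure do |config|
  config.jid      = 'xxxxx_xxxxxx@chat.hipchat.com/bot'   # HipChat jabber ID
  config.password = '<hipchat-password>'
  config.nick     = 'Ops Bot'
  config.rooms    = ['xxxxx_devops@conf.hipchat.com']
end

Below is the plugin I created for making text-to-speech calls.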

require 'json'
require 'timeout'
require 'securerandom'
require 'plivo'
require 'redis'

class Robut::Plugin::Call
  include Robut::Plugin

  def usage
    "#{at_nick} call <number> <message> "
  end

  def handle(time, sender_nick, message)
    new_msg = message.split(' ')
    if sent_to_me?(message) && new_msg[1] == 'call'
      num = new_msg[2]
      textt = new_msg.drop(3).join(' ')
      begin
        reply("Calling #{num}")
        plivo_auth_id = "XXXXXXXXXXXXXXXXX"
        plivo_auth_token = "XXXXXXXXXXXXXXXXX"
        uuid = SecureRandom.hex
        r = Redis.new(:host => "<redis_host>", :port => <redis_port>, :password => "<redis_pass>")
        temp = {
               'text' => "#{textt}"
               }
        plivo_number = "plivo_number"
        to_number = "#{num}"
        answer_url = "http://polar-everglades-1062.herokuapp.com/#{uuid}"

        call_params = {
                      'from' => plivo_number,
                      'to' => to_number,
                      'answer_url' => answer_url,
                      'answer_method' => 'GET'
                      }
        r.set "#{uuid}", temp.to_json
        r.expire "#{uuid}", 3000
        sleep 5
        puts "Initiating plivo call"
        plivo = Plivo::RestAPI.new(plivo_auth_id, plivo_auth_token)
        details = plivo.make_call(call_params)
        reply("Summary #{details}")
      rescue
        reply("Sorry Unable to Make initiate the Call.")
      end
    end
  end
end
Monitoring, Plivo, Redis, Sensu, Sinatra

Phone Call Notification for Sensu Using Plivo

It’s been almost 2 weeks since I joined Plivo’s DevOps family. I was spending a lot of time understanding their APIs, as I’m new to telephony. Plivo’s API makes telephony so easy that even a person with basic programming skills can build powerful telephony apps to suit their infrastructure. So when I started playing around with the API, I decided to take it to a different level. I’m a strong lover of the Sensu monitoring tool, so I decided to integrate Plivo with Sensu to build a new notification system based on phone calls. Many people rely on PagerDuty for the same feature, but that part is completely managed by PagerDuty: for the alerts we send them, they notify us via phone call to the numbers registered on PagerDuty. So I decided to build a similar handler for Sensu, so that we can have the same feature there. In this blog I will explain how I achieved that with the Plivo framework.

Plivo provides Application Programming Interfaces (APIs) to make and receive calls, send SMS, make conference calls, and more. These APIs are used in conjunction with XML responses to control the flow of a call or a message. Developers can create Session Initiation Protocol (SIP) endpoints to perform the telephony operations. The APIs are platform independent and can be used in any programming environment such as PHP, Ruby or Python, and Plivo also provides helper libraries for these languages.

Here, Sensu will initiate outbound calls to the required numbers using Plivo’s Call API and read out the alert using Plivo’s text-to-speech engine. First we need an account on Plivo Cloud, so go to the signup page and create one. With the default credits we can make test calls, but for longer usage I suggest buying some credits to use the service extensively. Once we log in, the dashboard shows the Plivo AuthID and Plivo AuthToken, which are required to access Plivo’s API. For making outbound calls we need the Call API. We also need to provide an “answer_url” containing XML instructions to direct the progress of the call; Plivo fetches the “answer_url” and executes the XML instructions. I will be using Sinatra to create a web app that returns this XML. The main challenge is that the text in the XML has to be retrieved from the alert, so we cannot predefine the text or the request URL, because both are dynamic in nature.

Solution: I’m going to use a publicly accessible Redis server protected with a password. When the Sensu handler receives an alert, it stores the alert details in Redis under a random UUID as the key, and the same UUID is used as the request path of the answer_url. When Sinatra receives the request, the requested path won’t exist in Sinatra’s routes, since it is dynamically generated, so by default Sinatra would return a 404. But Sinatra has an option called “not_found” where we can customize the response instead of returning a 404 directly. So I make the “not_found” block talk to my Redis server: since the UUID can be extracted from the request URL, and the same UUID is the key in Redis, Sinatra can fetch the details of the alert from Redis. These details are then used to create the XML, which is returned as the response to Plivo. It’s worth going through Plivo’s API documentation to understand more about XML responses and outbound calls.

For those who don’t have a dedicated server to host the Sinatra app and the Redis server, Heroku can be used to host the Sinatra app, and the Redis add-on available on Heroku can serve as the data store.

Sinatra App

Below is the code for the Sinatra app. The “not_found” block is the one that talks to Redis.

require 'plivo'
require 'sinatra'
require 'redis'
require 'json'
require 'rest_client'

get '/' do
  play_loop = 1
  lang = "en-US"
  voice = "WOMAN"
  text = "Congratulations! You just made a text to speech app on Plivo cloud!"
  @path = request.path
  puts @path
  speak_params = {
                     'loop' => play_loop,
                     'language' => lang,
                     'voice' => voice,
                 }

  speak = Plivo::Speak.new(text, speak_params)

  response = Plivo::Response.new()
  response.add(speak)                # Attach the Speak element to the XML response
  return response.to_xml
end

not_found do
  @path = request.path
  keyword = @path[1..-1]             # Strip the leading "/" to get the UUID
  redis = Redis.new(:host => "<redis_host_name>", :port => <redis_port>, :password => "<redis_pass>")
  data_red = redis.get("#{keyword}")
  if data_red == nil
    return "404 Not Found"
  else
    data = JSON.parse(data_red)
    text = data["text"]
    play_loop = 1
    lang = "en-US"
    voice = "WOMAN"
    speak_params = {
                   'loop' => play_loop,
                   'language' => lang,
                   'voice' => voice,
                   }

    speak = Plivo::Speak.new(text, speak_params)

    response = Plivo::Response.new()
    response.add(speak)              # Attach the Speak element to the XML response
    return response.to_xml
  end
end

The above app routes every not-found request through the “not_found” block and queries the Redis server using request.path as the key. If a matching key exists, it generates the Plivo XML; otherwise it returns “404 Not Found”.
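To deploy this on Heroku, a minimal rackup file is enough; a sketch, assuming the app above is saved as app.rb:

# config.ru: minimal rackup file for Heroku
require './app'
run Sinatra::Application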

Plivo Handler

Below is the Plivo handler code. This is the initial version, and it needs to be tweaked to match Sensu’s coding standards. A settings option also needs to be added so that all the parameters can be provided in the handler’s JSON config file. But as of now, this handler has been tested with Sensu and is working perfectly fine.

require 'rubygems'
require 'sensu-handler'
require 'timeout'
require 'securerandom'
require 'plivo'
require 'redis'


class PLIVO < Sensu::Handler

  def short_name
    @event['client']['name'] + '/' + @event['check']['name']
  end

  def action_to_string
    @event['action'].eql?('resolve') ? "RESOLVED" : "ALERT"
  end

  def handle
    plivo_auth_id = "XXXXXXXXXXXXXXXX"          # => available at the plivo dashboard
    plivo_auth_token = "XXXXXXXXXXXXXXXX"       # => available at the plivo dashboard
    body = <<-BODY.gsub(/^ {14}/, '')
            #{@event['check']['output']}
            Host: #{@event['client']['name']}
           BODY
    uuid = SecureRandom.hex
    r = Redis.new(:host => "<redis_host_name>", :port => <redis_port>, :password => "<redis_pass>")
    temp = {
            'text' => "#{body}"
            }
    plivo_number = "<YOUR PLIVO NUMBER"
    to_number = "<DESTINATION NUMBER>"
    answer_url = "<YOUR HEROKU APP URL>/#{uuid}"
    call_params = {
                     'from' => plivo_number,
                     'to' => to_number,
                     'answer_url' => answer_url,
                     'answer_method' => 'GET'
                     }
         r.set "#{uuid}", temp.to_json
         r.expire "#{uuid}", <EXPIRY TTL>
         sleep 5
     begin
       timeout 10 do
         puts "Connecting to Plivo Cloud.."
         plivo = Plivo::RestAPI.new(plivo_auth_id, plivo_auth_token)
     details = plivo.make_call(call_params)
       end
       rescue Timeout::Error
       puts "timed out while attempting to contact Plivo Cloud"
         end
  end
end

The above handler will make an outbound call to the destination number, and Plivo’s text-to-speech engine will read out the alert summary on the call. If you want to call multiple users, simply create an array or hash with the contact numbers and iterate over it, as in the sketch below.
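A minimal sketch of that loop (the numbers are placeholders, and it reuses the call_params hash and the RestAPI client from the handler above):

oncall_numbers = ['<NUMBER-1>', '<NUMBER-2>']
oncall_numbers.each do |number|
  call_params['to'] = number          # point the same call at the next person
  plivo.make_call(call_params)
end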

This handler gives us a phone-call notification feature similar to PagerDuty; we just need to pay for the phone call usage. Dedicated to all those who want an affordable yet good notification system with Sensu. I will be merging my code into the sensu-community repository soon.

Redis, Sinatra, TwitterAPI

TweetGrabber – a Live Tweet Capturer using REDIS+SINATRA+TwitterAPI

Twitter has become one of the most powerful social networking sites, and people share a lot of good info there. People even report outages of various sites and services on Twitter. Many tech companies keep accounts to track users’ comments on their services and products, and even interact with users. Websites like “downrightnow.com” use Twitter as an information source to identify outages of many famous websites and services. Even on our personal sites we use JavaScript to display our tweets and other statuses. But since Twitter API version 1.1, which requires OAuth, many of those jQuery plugins have become obsolete. This time I got a requirement to build a system that keeps track of tweets based on custom keywords. I wanted to show all the tweets by category, but on a single page, with some scrolling effects, and at the same time the scroller should keep updating with new tweets at a regular interval. In other words, I’m mostly interested in the new trends going on on Twitter.

Since I’m a Rubyist, I decided to build a Ruby app with a Sinatra web frontend, one of my favourite web frameworks. I’m a hard lover of Redis (it’s fast, since it runs in RAM), and I don’t want to keep my old tweets anyway. So my app is pretty simple: a Feeder talks to Twitter’s API and gets the tweets, the tweets are stored in Redis, and the Sinatra frontend fetches them and displays them in a scrolling fashion. Since I’m a CLI junkie and not familiar with HTML and UI work, I decided to go with Twitter Bootstrap to build the HTML pages.

There is a Ruby gem called “TweetStream” which works very well with Twitter API v1.1, so I’m going to use this gem to talk to Twitter. Below is the simple architecture diagram for my app.

Let’s see each components in detail.

Feeder

The Feeder is a Ruby script that constantly talks to the Twitter API and grabs the latest streaming tweets matching the search keywords. All the grabbed tweets are then stored in Redis. Since I have to display the tweets for each keyword in a separate scrolling pane, I’m using a separate Redis list for each keyword. So the Feeder has multiple Ruby methods, where each method tracks one keyword and writes into the corresponding Redis list. Below is one of the Ruby methods in the Feeder script.

                        #######  FEEDER #######

require 'tweetstream'
require 'redis'
require 'json'

TweetStream.configure do |config|
  config.consumer_key       = 'xxxxxxxxxxxxxx'   # The consumer key/secret and oauth tokens have to be
  config.consumer_secret    = 'xxxxxxxxxxxxxx'   # generated from Twitter's API site, dev.twitter.com
  config.oauth_token        = 'xxxxxxxxxxxxxx'
  config.oauth_token_secret = 'xxxxxxxxxxxxxx'
end

def tweet_general
  TweetStream::Client.new.track('opensource') do |status|   # This TweetStream client keeps tracking the keyword "opensource"
    if ( status.lang == 'en' )
      push(
               'id' => status[:id],
               'text' => status.text,
               'username' => status.user.screen_name,
               'userid' => status.user[:id],
               'name' => status.user.name,
               'profile_image_url' => status.user.profile_image_url,
               'received_at' => Time.new.to_i,
               'user_link' => "http://twitter.com/"
           )
    end
  end
end

def push(data)
  @db = Redis.new
  @db.lpush('tweets_general', data.to_json)      # LPUSHing tweets into the Redis list
end

Redis DB

In Redis, I’m going to use the LIST data type. Lists are simply lists of strings, sorted by insertion order. It is possible to add elements to a Redis list by pushing new elements onto the head (on the left) or onto the tail (on the right) of the list.

So I push new tweets in from the head, and to prevent over-population I periodically call the LTRIM operation to clear old tweets from the database. All these operations are done from the Feeder by corresponding Ruby methods, like the sketch below.
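A minimal trim sketch (the 100-entry cap is my assumption, not a value from the original app):

# Keep only the newest 100 tweets in a list; run periodically from the Feeder
def trim(list_name, max = 100)
  db = Redis.new
  db.ltrim(list_name, 0, max - 1)   # LTRIM keeps indexes 0..max-1, i.e. the newest entries
end

trim('tweets_general')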

Sinatra Frontend

I’m using Sinatra to build the frontend. Since I’m not much familiar with HTML and UI work, I decided to use Twitter Bootstrap for the layout. For each category I’ve created a collapsible table, so that we can expand and collapse the required tables. The next task was scrolling the tweets; for that I found a jQuery plugin called Totem Ticker. I also enabled refreshing for the div element that contains the scroller, so that after each refresh the variable supplying tweets to the scroller gets updated with newer tweets from the corresponding Redis list. A minimal sketch of a route that hands tweets to the frontend follows.
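This sketch is my assumption of how such a route can look (the route name and the 20-tweet window are not from the original code):

require 'sinatra'
require 'redis'
require 'json'

get '/tweets/general' do
  db = Redis.new
  tweets = db.lrange('tweets_general', 0, 19).map { |t| JSON.parse(t) }   # newest 20 tweets
  content_type :json
  tweets.to_json
end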

As of now the app is working fine, but I’m planning to extend it with more features, like adding keywords dynamically from the web frontend and displaying only the tables for those keywords. I will keep digging to make it more powerful :-). I’m going to push the working code to my GitHub account soon, so that others can also play around with it and extend it for their own requirements.
