Ansible, Redis

Building an Automated Config Management Server using Ansible+Flask+Redis

It's been almost two months since I started working full time with Ansible. Like most sysadmins, I've mostly been using Ansible via the CLI. Unlike Salt/Puppet, Ansible is agentless, so everything has to be invoked from the box where Ansible and the respective playbooks are installed. Also, if you want to use Ansible with EC2 features like auto-scaling, you either need to buy Ansible Tower or use ansible-pull along with a user-data script. I've also seen people use custom scripts that fetch their repo and execute the playbook locally to bootstrap the machine.

Being a big fan of Flask, I've used it to build many backend APIs that automate a bunch of my tasks. So this time I decided to write a simple Flask API for executing Ansible playbooks and ad-hoc commands. Ansible also provides a Python API, which made the work easier. Like most Ansible users, I use roles for all my playbooks. We could directly expose an API that runs playbooks synchronously, but there are cases where the playbook execution takes more than five minutes, and of course any network latency will slow down package downloads and the like. I don't want to force my HTTP clients to wait for the final output of the playbook execution before they get a response back.

So I decided to go with a job queue. Each time a request comes to the API, the job is queued in Redis and the job ID is returned to the client. Job workers then pick the jobs from the Redis queue, execute them on the backend, and keep updating the job status. That means I need to expose two APIs: one for receiving jobs and one for reporting job status. For the Redis queue there is an awesome library called rq, which I've been using for all my queuing tasks.
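
The snippets below rely on a bit of glue that is easy to guess but worth spelling out: the Flask app object, the Redis connection and the rq queue (the names conn and q are what the later snippets use), plus the imports for the classic pre-2.0 Ansible Python API. A minimal setup sketch, assuming a local Redis on the default port, might look like:

# Assumed imports and app/queue setup glue
import json
import random
import string

import jinja2
import redis
import ansible.playbook
import ansible.callbacks
import ansible.utils
from flask import Flask, request
from rq import Queue
from rq.job import Job

app = Flask(__name__)

conn = redis.Redis(host='localhost', port=6379)    # Redis server backing the job queue
q = Queue(connection=conn)                         # default rq queue the API pushes jobs onto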

Flask API

The job accepts a bunch of parameters like host, role and env via an HTTP POST request. Since the role, host, etc. have to be retrieved from the request, the playbook YAML file has to be generated dynamically. So I decided to use Jinja templating to create the playbook YAML file on the fly. Below is my sample API for role-based playbook execution.

@app.route('/ansible/role/', methods=['POST'])
def role():
  inst_ip = request.form['host']                          # Host to which the playbook has to be executed
  inst_role = request.form['role']                        # Role to be applied on the Playbook
  env = request.form['env']               # Extra env variables to be passed while executing the playbook
  ans_remote_user = "ubuntu"                  # Default remote user
  ans_private_key = "/home/ubuntu/.ssh/id_rsa"        # Default ssh private key
  job = q.enqueue_call(                   # Queuing the job on to Redis
            func=ansible_run, args=(inst_ip, inst_role, env, ans_remote_user, ans_private_key,), result_ttl=5000, timeout=2000
        )
  return job.get_id()                     # Returns job id if the job is successfully queued to Redis

Below is a sample templating function that generates the playbook YAML file via Jinja2 templating.

def gen_pbook_yml(ip, role):
  r_text = ''
  templateLoader = jinja2.FileSystemLoader( searchpath="/" )
  templateEnv = jinja2.Environment( loader=templateLoader )
  TEMPLATE_FILE = "/opt/ansible/playbook.jinja"                # Jinja template file location
  template = templateEnv.get_template( TEMPLATE_FILE )
  role = role.split(',')                       # Convert role into a list if multiple roles are given in the POST request
  r_text = ''.join([random.choice(string.ascii_letters + string.digits) for n in xrange(32)])
  temp_file = "/tmp/" + "ans-" + r_text + ".yml"           # Creating a unique playbook yml file
  templateVars = { "hst": ip,
                   "roles": role
                 }
  outputText = template.render( templateVars )             # Rendering template
  text_file = open(temp_file, "w")
  text_file.write(outputText)                      # Saving the template output to the temp file
  text_file.close()
  return temp_file
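
The playbook.jinja template itself is not shown here; since the rendering function only passes in hst and roles, a minimal template matching those variables could be as simple as the sketch below (adjust it to your own playbook layout):

---
- hosts: "{{ hst }}"
  roles: [{{ roles | join(', ') }}]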

Once the playbook file is ready, we need to invoke Ansible's Python API to perform the bootstrapping. This is actually done by the job workers. Below is a sample function that invokes the PlayBook API from Ansible core.

def ansible_run(ans_inst_ip, ans_inst_role, ans_env, ans_user, ans_key_file):
  yml_pbook = gen_pbook_yml(ans_inst_ip, ans_inst_role)   # Generating the playbook yml file
  stats = ansible.callbacks.AggregateStats()              # Callback and stats objects expected by the pre-2.0 PlayBook API
  playbook_cb = ansible.callbacks.PlaybookCallbacks(verbose=ansible.utils.VERBOSITY)
  runner_cb = ansible.callbacks.PlaybookRunnerCallbacks(stats, verbose=ansible.utils.VERBOSITY)
  run_pbook = ansible.playbook.PlayBook(          # Invoking Ansible's playbook API
                 playbook=yml_pbook,
                 callbacks=playbook_cb,
                 runner_callbacks=runner_cb,
                 stats=stats,
                 remote_user=ans_user,
                 private_key_file=ans_key_file,
                 host_list="/etc/ansible/hosts",          # use either host_file or inventory
#                Inventory='path/to/inventory/file',
                 extra_vars={
                    'env': ans_env
                 }
                 ).run()
  return run_pbook                    # We can tune the output that has to be returned
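
These functions don't run inside the Flask process; an rq worker picks the jobs off the Redis queue and executes them. A minimal worker sketch, assuming the same local Redis instance as above, could be as simple as:

# worker.py - run with: python worker.py (from the project directory, so ansible_run is importable)
import redis
from rq import Worker, Queue, Connection

conn = redis.Redis(host='localhost', port=6379)

if __name__ == '__main__':
    with Connection(conn):
        Worker([Queue('default')]).work()    # listen on the default queue used by the API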

The job workers execute these functions and update the status in Redis. Now we need to expose the job status API. Below is a sample Flask endpoint for the same.

@app.route("/ansible/results/<job_key>", methods=['GET'])
def get_results(job_key):

    job = Job.fetch(job_key, connection=conn)
    if job.is_finished:
        ret = job.return_value
    elif job.is_queued:
        ret = {'status':'in-queue'}
    elif job.is_started:
        ret = {'status':'waiting'}
    elif job.is_failed:
        ret = {'status': 'failed'}
    else:
        ret = {'status': 'unknown'}          # fallback so that ret is always defined

    return json.dumps(ret), 200
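
To see the whole flow end to end, a client queues a job and then polls the status endpoint with the returned job ID. A quick sketch using requests, assuming the Flask app runs on its default port 5000 and with placeholder host/role/env values:

import time
import requests

api = "http://localhost:5000"

# Queue a playbook run; the response body is the rq job ID
job_id = requests.post(api + "/ansible/role/",
                       data={"host": "10.0.0.10", "role": "web,redis", "env": "staging"}).text

# Poll the status endpoint until the job leaves the queued/waiting states
while True:
    status = requests.get(api + "/ansible/results/" + job_id).text
    print(status)
    if "in-queue" not in status and "waiting" not in status:
        break
    time.sleep(5)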

Now we have a fully fledged API server for executing role-based playbooks. This API can also be used with user-data scripts in auto-scaling, where the new instance makes an HTTP POST request to the API server and the API server starts the bootstrapping. I've tested this app locally with various scenarios and the results are promising. As a next step, I'm planning to extend the API to do more jobs, like automating code pushes and running ad-hoc commands. With applications like Ansible, Redis and Flask, I'm sure sysadmins can attain DevOps nirvana :). I'll be pushing the latest working code to my GitHub account soon.

Docker, Jenkins

Virtual Cluster Testing Using Jenkins and Docker

Nowadays CI, or Continuous Integration, is implemented in almost all IT companies, and much of the DevOps work is related to it. The common scenario is: developers push code to the Git/SVN repo, which triggers Jenkins to run tests and sometimes packaging, and if it's a fully automated setup the new changes are deployed to staging, where the QA team takes over the testing. But when you are in a small team, all of this has to be achieved with minimal people. So before a new change is pushed all the way to staging, I decided to have a quick test of all the components together. I've read blogs where DevOps engineers spin up new instances as a full replica of their entire architecture, deploy the new code onto this cluster and load test it, and only if all the components behave properly with the change is it deployed to staging for the next level of full-scale QA.

Though the above approach is interesting, I didn't want to waste resources by spinning up a new set of instances each time. Being a hardcore Docker fan, I decided to replace the instance launch with Docker containers. So instead of launching new instances, Jenkins launches new Docker containers connected by an SDN (Software Defined Network). Below is a simple architecture diagram of my new design.

So the workflow goes like this:

1) Developers push the new code changes, along with a new tag, to the corresponding repositories.

2) A GitHub webhook then triggers Jenkins to start the build jobs.

3) Jenkins performs the build and, if it succeeds, triggers Debian packaging for the application.

4) Once the packaging is complete, Jenkins triggers Docker image creation for the application using the newly built packages.

5) Once the image build is complete, Jenkins uses Docker Compose to bring up our virtual cluster, which is an exact replica of our prod/staging environment.

6) Once the cluster is up, we run automated tests against all the components and make sure they behave normally with the new code changes.

Once the test results look good, we can deploy the code to staging and start the full-scale QA.

By using Docker, I was able to reduce resource usage: all of these containers run on a single m3.medium box. Since I'm concentrating on whether the components work together rather than on load testing, this smaller box was enough to get proper results.

A bit about docker-compose: I'm using it to manage the Docker cluster. Compose is a tool for defining and running multi-container applications with Docker. We define the whole application in a single file and then spin it up with a single command that does everything needed to get it running. Below is my docker-compose.yml content.

  web:
    image:        web:latest
    links:
      - redis
    ports:
      - "8080:80"
    environment:
      - ENV1
      - ENV2
  redis:
    image:        my_redis:latest
    ports:
      - "172.16.16.17:6379:6379"
  backend:
    image:        my_backend:latest
    net:          "host"
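
With the compose file in place, bringing the whole virtual cluster up or tearing it down from a Jenkins job is just a couple of commands, for example:

$ docker-compose up -d          # start all the containers defined in the compose file, in the background

$ docker-compose ps             # check that every container is up before running the tests

$ docker-compose stop           # stop the cluster once the test run is finished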

I was very satisfied with the initial test results. Now I'm planning to take this setup to the next level, including a fully automated load test.

pkgr

Packaging Node/Python App Using Pkgr

pkgr is a tool for building deb/rpm packages for Ruby/Node/Go/Python applications. It uses Heroku buildpacks and embeds all the dependencies related to the application runtime within the package. It also gives us a nice executable that closely replicates the Heroku toolbelt utility. There are only two requirements for pkgr: 1) the application must have a Procfile, and 2) it should be Heroku compatible.
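
A Procfile just declares how each process type of the app is started. For a typical Node application it could be a single line like the one below (adjust the entry point to your app):

web: node server.js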

By default, pkgr supports packaging Ruby/Go/Node apps. But it also supports custom buildpacks, so we can use the heroku-python buildpack to package Python apps too.

Installing pkgr

$ apt-get update

$ apt-get install -y build-essential ruby1.9.1-full rubygems1.9.1

$ gem install pkgr

Packaging a Node application

For packaging a Node application, run the below command:

$ pkgr package <path-to-node-app-source> --verbose --debug --env "HOME=/tmp" --auto

Packaging a Python application

For packaging a Python application, run the below command:

$ pkgr package <path-to-python-app-source> --verbose --debug --env HOME=/tmp --auto --buildpack=https://github.com/heroku/heroku-buildpack-python

Note: for the Python buildpack, we need to have libssl0.9.8 installed; otherwise pip install will throw hashlib errors.

mongodb

Promoting a MongoDB Slave

MongoDB is one of the most commonly used NoSQL document stores. For smaller use cases we might not need a full-scale replica set; instead we can run MongoDB in the traditional master-slave architecture. In this blog I'm going to explain how to convert a standalone MongoDB server to a master-slave model, and how to promote a slave instance to master in case the master crashes.

Standalone to Master-Slave Model

First, on the master node, we need to add master=true to the mongodb config file and restart the mongo service. On the new mongo node, which is going to be the slave, add the below config options to the mongodb configuration file.

slave=true
source=xxx.xxx.xxx.xxx:yyyyy        # replace with Mongo Master IP:PORT
autoresync=true

Now restart the mongo service on the slave node and tail the mongo logs; we should see the replication info in them. Below are sample replication details from the mongo logs.

2015-04-18T05:09:41.800+0000 I STORAGE  [replslave] copying indexes for: { name: "xxxxxxxx" }
2015-04-18T05:09:41.801+0000 I STORAGE  [replslave] copying indexes for: { name: "xxxxxxxx" }
2015-04-18T05:09:41.801+0000 I STORAGE  [replslave] copying indexes for: { name: "xxxxxxx" }
2015-04-18T05:09:41.802+0000 I STORAGE  [replslave] copying indexes for: { name: "xxxxxxx" }
2015-04-18T05:09:41.802+0000 I REPL     [replslave] resync: done with initial clone for db: testdb
2015-04-18T05:09:51.135+0000 I REPL     [replslave] repl:   applied 1 operations
2015-04-18T05:09:51.135+0000 I REPL     [replslave] repl:  end sync_pullOpLog syncedTo: Apr 18 05:15:40 5531e87c:1
2015-04-18T05:09:51.135+0000 I REPL     [replslave] repl: sleep 1 sec before next pass
2015-04-18T05:09:52.135+0000 I REPL     [replslave] repl: syncing from host:xxx.xxx.xxx.xxx:yyyyy
2015-04-18T05:10:01.135+0000 I REPL     [replslave] repl:   applied 1 operations
2015-04-18T05:10:01.135+0000 I REPL     [replslave] repl:  end sync_pullOpLog syncedTo: Apr 18 05:15:50 5531e886:1
2015-04-18T05:10:01.135+0000 I REPL     [replslave] repl: syncing from host:xxx.xxx.xxx.xxx:yyyyy
2015-04-18T05:10:11.135+0000 I REPL     [replslave] repl:   applied 1 operations

We can also check the replication status from the mongo shell on the master via rs.printReplicationInfo() or db.serverStatus( { repl: 1 } ). The same can be checked on the slave nodes, but by default read queries are not allowed on a slave and will throw an error. We can allow reads by running db.getMongo().setSlaveOk() in the slave's mongo shell. This overrides the restriction, and we can then use rs.printReplicationInfo() or db.serverStatus( { repl: 1 } ) to see the replication status.

Promoting a Slave node to Master

This is one of the main reasons we keep slave nodes: in case of a master crash, we can easily promote a slave and minimize the interruption. To promote a slave node to master, follow the steps below.

1) Stop the mongo service on the slave

2) Remove all the local files from the mongo data directory:

$ cd <mongo_data_directory> && rm -rvf local*

3) Remove the slave configuration from the mongo config file and set `master=true` (the latter is required if we have more than one slave, so that the remaining slaves can connect to the new master).

4) Restart the mongo service; the new master is now ready to accept writes.

If we have multiple slaves, we need to change the slave source IP so that they point to the new master. But even after they connect to the new master, replication can fail. So we have two options: either remove the data and perform a fresh replication, or force a complete resync on the slaves using the command below.

# On each slave's mongo shell, run

$ use admin

$ db.runCommand( { resync: 1 } )      # This forces a complete resync of that slave.

This procedure is useful if you are using a standalone/master-slave setup. For a real HA/fault-tolerant design, a replica set is the better choice, since primary election happens automatically if the current primary node crashes, keeping downtime to a minimum.

kannel

Kannel: Open Source SMS Gateway

It's been quite a while since my last blog. This time I'm coming with a bunch of topics to write about, starting with Kannel. After moving to my new role, the first task I got was to set up an SMPP connection with one of our carriers. After digging around the internet for some time I found the Kannel project, which turned out to be a perfect fit. So in this blog I'll be explaining how to set up an SMPP SMS gateway locally.

Installing Kannel

Download the latest source code from the Kannel site:

$ wget http://www.kannel.org/download/kannel-snapshot.tar.gz -O /opt/kannel-snapshot.tar.gz

$ cd /opt && tar xvzf kannel-snapshot.tar.gz && cd kannel-snapshot

$ apt-get install -y libxml2-dev libxml2 openssl libssl-dev build-essential      # installing dependencies

$ ./configure --prefix=/usr/local/kannel/

$ make && make install

$ adduser --system --home /usr/local/kannel/lib/kannel/ --no-create-home --gecos "Kannel" kannel

$ mkdir /var/log/kannel && mkdir /var/run/kannel

$ chown -c kannel.root /var/log/kannel && chown -c kannel.root /var/run/kannel 

Now we have Kannel installed under our custom prefix folder. Let's go ahead and configure the Kannel application.

Setting up Kannel

Kannel comprises two processes: bearerbox and smsbox. The bearerbox service is the one in contact with the carrier gateways and is responsible for sending and receiving SMS. smsbox is the service that sits between our application and the bearerbox, i.e. it receives incoming SMS from the bearerbox and forwards them to our application, and vice versa. The Kannel config consists of multiple parts, which are explained below.

1) Basic configuration: we define basic details like the bind IP, log file paths, admin port, admin password, whitelisted IPs for accessing the admin interface, smsbox port, etc.

# sample configuration
group = core
admin-port = 13000
smsbox-port = 13001
admin-password = changeme
admin-deny-ip = "*.*.*.*"
admin-allow-ip = "127.0.0.1"
wdp-interface-name = "*"
log-file = "/var/log/kannel/bearerbox.log"
access-log = "/var/log/kannel/access.log"
box-deny-ip = "*.*.*.*"
box-allow-ip = "127.0.0.1"

2) SMSC configuration: we define the SMPP details of our carrier, which include the carrier's SMPP IP, SMPP port, auth credentials, etc.

# sample configuration
group = smsc            # Default group name, no need to modify
smsc = smpp         
host = x.x.x.x
port = yyyy
smsc-id = fake-carrier          # Unique name for this connection
smsc-username = xxxxxxxx
smsc-password = yyyyyyyy

3) smsbox configuration: we define the settings for the smsbox, which include the bind IP, a unique ID for the smsbox, etc.

# sample configuration
group = smsbox
bearerbox-host = localhost
sendsms-interface = 0.0.0.0
smsbox-id = mysmpp
sendsms-port = 10200                     # Applications make HTTP request to this port
log-file = "/var/log/kannel/smsbox.log"

4) Send-SMS user configuration: we define the username, password, rate limit, etc. for applications sending messages through the smsbox.

# sample configuration
group=sendsms-user      # default group name, no need to change
username=xxxxxxx
password=yyyyyyy
max-messages=3          # sms rate limit

5) SMS service configuration: we define settings for incoming SMS from the bearerbox, including the application URL to which the SMS details have to be forwarded.

# sample configuration
group = sms-service                 # Default group name
keyword = default
post-url = http://localhost:5000/incoming # When a message is received from SMS center this URL is called. Refer manual for wildcards details.
catch-all = 1
max-messages = 0
omit-empty = true
send-sender = true

Add all the above configuration sections, adjusted to your requirements, to the kannel.conf file. A sample init script for Debian/Ubuntu is available here.

Once the Kannel services are started, check the bearerbox logs for connectivity with the carrier's SMPP gateway. Once the connection is up, we can start sending/receiving SMS. For incoming SMS, smsbox will make an HTTP request to our application based on the sms-service configuration. For example, if we are using the POST method, SMS details like From and To can be retrieved from the POST headers and the SMS text from the request body. Below are some of the headers that come along with the POST requests.

X-Kannel-From   => sender id
X-Kannel-To     => recipient id
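
On the application side, the post-url configured above (http://localhost:5000/incoming in the sample sms-service block) just needs a small handler that reads these headers and the request body. A minimal Flask sketch of such a handler:

from flask import Flask, request

app = Flask(__name__)

@app.route('/incoming', methods=['POST'])
def incoming_sms():
    sender = request.headers.get('X-Kannel-From')       # sender id
    recipient = request.headers.get('X-Kannel-To')      # recipient id
    text = request.get_data()                           # the SMS body
    # hand the message over to the rest of the application here
    return 'OK', 200

if __name__ == '__main__':
    app.run(port=5000)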

Similarly, for outbound SMS, our application makes an HTTP GET request to the smsbox sendsms URL, and smsbox hands the message over to the bearerbox, which then carries it to the carrier for delivery.
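
For example, using the sendsms-port (10200) and the sendsms-user credentials from the sample configs above, an outbound request from a Python application could look roughly like this (the number and credentials are placeholders):

import requests

params = {
    'username': 'xxxxxxx',          # sendsms-user username from the config
    'password': 'yyyyyyy',          # sendsms-user password from the config
    'to': '+15551234567',           # destination number
    'text': 'Hello from Kannel',    # message body
}

# smsbox exposes the send-sms HTTP interface on the configured sendsms-port
resp = requests.get('http://localhost:10200/cgi-bin/sendsms', params=params)
print(resp.text)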
