
Stepping into Ansible

For the past two years, I have been playing with config management tools like Puppet and Salt. Most of these tools follow a client-server model, except Salt, which also supports a push model. Over the last six months, though, Ansible has been gaining a lot of popularity. Ansible is a push-model system that relies on SSH. So before adopting Ansible completely, I decided to give it a try and make sure it supports all the basic features its competitors offer, which also helps a lot during migration.

Installation

Ansible is pretty easy to install. We can install it from source, via package managers, or even via pip. Here we use the official Ubuntu PPA for installing Ansible.

apt-get install software-properties-common
apt-add-repository ppa:ansible/ansible
apt-get update
apt-get install ansible

Since Ansible relies on SSH, host key verification errors will block the SSH connections and cause failures. We can disable the host key verification check in the ansible.cfg file.

host_key_checking = False      # add this option to the config file

Or we can set an environment variable, export ANSIBLE_HOST_KEY_CHECKING=False, for the current session. By default Ansible uses the static inventory file /etc/ansible/hosts, so we can define our machines there, using either the IP or a DNS-resolvable FQDN. Once the IP/FQDN is added, we can test the connectivity via the ping module. Make sure that the Ansible server's SSH key is added to the authorized_keys on the remote machines.
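A minimal static inventory might look like the sketch below (the group names and hosts are just placeholders):

# /etc/ansible/hosts
[webservers]
web01.example.com
192.168.1.50

[dbservers]
db01.example.com ansible_ssh_user=ubuntu    # per-host variables can be set inline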

ansible all -m ping

# Sample output

ansible-ubuntu | success >> {
    "changed": false,
    "ping": "pong"
  }

Managing Custom Facts

Config management tools like Puppet and Salt support custom facts defined on the remote machines, which the config management server can then use. Even though Ansible is agentless, we can still define custom facts on the remote systems. Whenever we query for facts, Ansible connects to the remote machines and fetches the facts using its default library, but it also looks for custom facts in /etc/ansible/facts.d/. We need to put our custom fact files in this directory. A fact file must have a .fact extension and, in the case of a script, must be executable and return valid JSON (a script example is sketched at the end of this section). If we just want to define some facts statically, we can simply create a file like the one below.

[myfact]
role=test
profile=staging

The above fact file will add two fact variables called role and profile with the values mentioned in the file. Now let's use the setup module and see if we are able to retrieve the new custom facts.

ansible <remote_host_name> -m setup

Below is the part of the output showing the custom facts

"ansible_local": {
       "myfacts": {
        "myfact": {
           "profile": "staging",
           "role": "test"
           }
       }
    },
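If the facts need to be computed at runtime, the file can instead be an executable script that prints JSON. A minimal sketch (the file name and keys here are just examples):

#!/bin/bash
# /etc/ansible/facts.d/sysinfo.fact -- must be executable (chmod +x)
# whatever JSON this prints shows up under ansible_local.sysinfo
echo "{ \"kernel\": \"$(uname -r)\", \"cpu_count\": $(nproc) }"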

Managing Dynamic Inventory

In a cloud environment it's difficult to maintain a static inventory. Ansible does support dynamic inventory for vendors including AWS EC2, and it ships an inventory script (ec2.py). We can use this script directly to query EC2 and get the list of all instances. To successfully make an API call to AWS, we need to configure Boto. The simplest way is just to export two environment variables:

export AWS_ACCESS_KEY_ID='AK123'
export AWS_SECRET_ACCESS_KEY='abc123'

./ec2.py --list   # Displays the list of all instances
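If we prefer not to export the keys in the shell, Boto can also read them from a config file such as ~/.boto (the values here are placeholders):

[Credentials]
aws_access_key_id = AK123
aws_secret_access_key = abc123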

Now we have the inventory script ready. Let's run some fact queries against the EC2 instances. We can use many filters here, such as tags.

ansible tag_Name_test -i /etc/ansible/plugins/inventory/ec2.py -m ping # Querying all instances with tag "Name=test"

For Rackspace users there is an official module called rax that works perfectly with Ansible. We can also match multiple hosts with wildcard patterns such as tag_Name_test*, as shown below.
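A quick sketch of such a pattern (quoted so the shell does not expand the asterisk):

ansible "tag_Name_test*" -i /etc/ansible/plugins/inventory/ec2.py -m ping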

Encrypting YAML Data Files

This is an important feature that most config management systems lack. In most of the current systems, we need to define sensitive data like SSH keys, API auth IDs/tokens etc. in plain text, which increases the security risk. Ansible Vault comes to the rescue here. The Vault feature can encrypt any structured data file used by Ansible. This can include "group_vars/" or "host_vars/" inventory variables, variables loaded by "include_vars" or "vars_files", or variable files passed on the ansible-playbook command line with "-e @file.yml" or "-e @file.json". Role variables and defaults are also included. While invoking any playbook, we can pass --ask-vault-pass along with the vault password, so Ansible can decrypt the file and use its contents during execution.

ansible-vault encrypt foo.yml    # Encrypting a file

ansible-vault edit foo.yml       # Editing an encrypted file

ansible-vault decrypt foo.yml    # Decrypting a file
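To tie this into a playbook run, a quick sketch (the file names here are just examples):

ansible-vault create secret_vars.yml                             # create a new encrypted vars file

ansible-playbook site.yml -e @secret_vars.yml --ask-vault-pass   # prompts for the vault password at run time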

Ansible is truly an awesome product. It has features, like Vault, that its competitors lack, and it's backed by an awesome community, so we can expect more exciting features in the future.


Setting Up Docker Private Registry

Last year container-based technology saw a big boom, and a lot of open source projects and startups were built around Docker. Docker has become a favourite tool for both Dev and Ops folks. I'm a big fan of Docker and I do all my hacks on containers. This time I decided to play with the Docker private registry, so that I can sync all my Docker clients with a central registry. In this test setup I'm using an Ubuntu 12.04 server with Nginx as a reverse proxy. With the Nginx proxy I can easily enforce basic auth and protect my private Docker registry from unauthorized access.

Installing Docker Registry

Download the latest release of Docker Registry from Docker's GitHub repo.

$ wget https://github.com/docker/docker-registry/archive/0.9.0.tar.gz -O /usr/local/src/0.9.0.tar.gz

$ cd /usr/local/src && tar xvzf 0.9.0.tar.gz && mv docker-registry-0.9.0 docker-registry


Let's install the dependencies,

$ apt-get update && apt-get install swig python-pip python-dev libssl-dev liblzma-dev libevent1-dev patch


Once the dependencies are installed, let's go ahead and install the docker-registry app.

$ cat /usr/local/src/docker-registry/config/boto.cfg > /etc/boto.cfg

$ pip install /usr/local/src/docker-registry/depends/docker-registry-core/

$ pip install file:///usr/local/src/docker-registry

$ patch $(python -c 'import boto; import os; print os.path.dirname(boto.__file__)')/connection.py < /usr/local/src/docker-registry/contrib/boto_header_patch.diff

$ cp /usr/local/src/docker-registry/config/config_sample.yml /usr/local/src/docker-registry/config/config.yml 


We can edit the `config.yml` file if we want to change the local storage path (default => /tmp/registry), and we can also use Redis as a local cache plus a SQLite-based search backend. The repo already contains a sample [init script](https://raw.githubusercontent.com/docker/docker-registry/master/contrib/docker-registry_debian.sh) that can be used directly.

$ cp -rvf /usr/local/src/docker-registry/contrib/docker-registry_debian.sh /etc/init.d/docker-registry

Also, let's set up a defaults file, `/etc/default/docker-registry`, so that the init script can read the necessary environment variables. Below is the content of my default file.

DOCKER_REGISTRY_HOME=/usr/local/src/docker-registry
DOCKER_REGISTRY_CONFIG=/usr/local/src/docker-registry/config/config.yml
SETTINGS_FLAVOR=dev
GUNICORN_OPTS=[--preload]
LOGLEVEL=debug

Let's start the registry service,

$ /etc/init.d/docker-registry start

Setting up Docker Client

Now that we have a private Docker registry running, let's set up an Nginx proxy so that we don't have to expose the registry directly to the outside world.

$ apt-get install nginx-extras

The docker-registry repo also contains a basic nginx config that can be used directly.

$ cat /usr/local/src/docker-registry/contrib/nginx/nginx.conf > /etc/nginx/sites-enabled/default

$ cp /usr/local/src/docker-registry/contrib/nginx/docker-registry.conf /etc/nginx/

Also create a basic auth file that contains the username and password. We can use htpasswd to generate the password; the filename mentioned in the nginx config is docker-registry.htpasswd.

$ echo "dockeradmin:$apr1$.BzsRrxN$fng.12mJL/TJenKjkZSMS0" >> /etc/nginx/docker-registry.htpasswd  # replace the username and password with the one generated by htpasswd

Now let's generate a self-signed SSL certificate that can be used with nginx. Alternatively, there are websites like StartSSL which provide free 1-year SSL certificates.

$ mkdir /opt/certs && cd /opt/certs

$ openssl genrsa -out devdockerCA.key 2048

$ openssl req -x509 -new -nodes -key devdockerCA.key -days 10000 -out devdockerCA.crt

$ openssl genrsa -out dev-docker-registry.com.key 2048

$ openssl req -new -key dev-docker-registry.com.key -out dev-docker-registry.com.csr

    Country Name (2 letter code) [AU]: US
        State or Province Name (full name) [Some-State]: CA
        Locality Name (eg, city) []: SF
        Organization Name (eg, company) [Internet Widgits Pty Ltd]: Beingasysadmin
        Organizational Unit Name (eg, section) []: tech
        Common Name (e.g. server FQDN or YOUR name) []: docker.example.com
        Email Address []: docker@example.com

        Please enter the following 'extra' attributes
        to be sent with your certificate request
        A challenge password []:                          # leave the password blank
        An optional company name []:

$ openssl x509 -req -in dev-docker-registry.com.csr -CA devdockerCA.crt -CAkey devdockerCA.key -CAcreateserial -out dev-docker-registry.com.crt -days 10000

Copy the certificates to the SSL paths mentioned in the nginx config,

$ cp dev-docker-registry.com.crt /etc/ssl/certs/docker-registry

$ cp dev-docker-registry.com.key /etc/ssl/private/docker-registry

Now let’s restart the nginx process to reflect the changes

$ service nginx restart

Once nginx is up, we can check the connectivity between the Docker client and the registry server. Since the registry is using a self-signed certificate, we need to whitelist the CA on the Docker client machine.

$ cd /opt/certs

$ mkdir /usr/local/share/ca-certificates/docker-dev-cert

$ cp devdockerCA.crt /usr/local/share/ca-certificates/docker-dev-cert

$ update-ca-certificates

Note: If the CA is not added to the trusted list, the Docker client won't be able to authenticate against the registry server. Once the CA is added, we can test the connectivity between the Docker client and the registry server. If the Docker daemon was running before adding the CA, we need to restart the Docker daemon.

root@docker:~# docker login https://docker.example.com      # use the nginx basic auth creds here, email can be blank
    Username: dockeradmin
    Password:xxxxxxxx
    Email:                          # This we can leave blank
    Login Succeeded                                         # Successful login message

Once the login has succeeded, let's add some base Docker images to our private registry.

$ docker pull ubuntu:12.04                             #  Pulling the latest image of Ubuntu 12.04
    ubuntu:12.04: The image you are pulling has been verified
    ed52aaa56e98: Pull complete
    b875af6dcb23: Pull complete
    41959ee20b93: Pull complete
    f959d044ebdf: Pull complete
    511136ea3c5a: Already exists
    Status: Downloaded newer image for ubuntu:12.04


$ docker images                             # Check the downloaded images
    REPOSITORY                  TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
    ubuntu                      12.04               f959d044ebdf        5 days ago          130.1 MB

So, as per the documentation, we need to create a tag for the image that we are going to push to our private registry. The tag must be of the form "<registry-server-fqdn>/<image-name>:<optional-tag>".

$ docker tag ubuntu:12.04 docker.example.com/ubuntu:12.04     # We need to create a tag of format "<registry-server-fqdn>/<image-name>:<optional-tag>"


$ docker push docker.example.com/ubuntu:12.04                 # push the image to our new private registry
    The push refers to a repository [docker.example.com/ubuntu] (len: 1)
    Sending image list
    Pushing repository docker.example.com/ubuntu (1 tags)
    Image 511136ea3c5a already pushed, skipping
    ed52aaa56e98: Image successfully pushed
    b875af6dcb23: Image successfully pushed
    41959ee20b93: Image successfully pushed
    f959d044ebdf: Image successfully pushed
    Pushing tag for rev [f959d044ebdf] on {https://docker.example.com/v1/repositories/ubuntu/tags/12.04}

Let’s query the registry API for the pushed image

curl http://localhost:5000/v1/search
    {"num_results": 2, "query": "", "results": [{"description": "", "name": "library/wheezy"}, {"description": "", "name": "library/ubuntu"}]}

Currently both the Docker client and the registry reside on the same machine, but we can test pushing/pulling images from a remote machine as well. The only dependency is that we need to add the self-signed CA to the remote machine's trusted CA list; otherwise the Docker client will raise an SSL error while trying to log in to the private registry.

Now let’s try pulling the images from the private registry.

$ docker pull docker.example.com/ubuntu:12.04
    Pulling repository docker.example.com/ubuntu
    f959d044ebdf: Download complete
    511136ea3c5a: Download complete
    ed52aaa56e98: Download complete
    b875af6dcb23: Download complete
    41959ee20b93: Download complete
    Status: Downloaded newer image for docker.example.com/ubuntu:12.04

$ docker images
    REPOSITORY                  TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
    debian                      wheezy              479215127fa7        5 days ago          84.99 MB
    docker.example.com/wheezy   latest              479215127fa7        5 days ago          84.99 MB
    docker.example.com/ubuntu   12.04               f959d044ebdf        5 days ago          130.1 MB

Setting up S3 Backend for Docker Registry

Docker Registry supports an S3 backend for storing the images out of the box. But if we are using S3, it's better to cache the images locally so that we don't have to hit S3 all the time. Redis really comes to the rescue here. We can set up a Redis server as an LRU cache and define the settings in the registry's config.yml or as environment variables.

$ apt-get install redis-server

Once the Redis server is installed, we need to define the maxmemory to be allocated for the cache and the maxmemory-policy, which tells Redis how to evict old cache entries when the maxmemory limit is reached. Add the settings below to the redis.conf file.

maxmemory 2000mb              # i'm allocating 2GB of cache size

maxmemory-policy volatile-lru     # removes the key with an expire set using an LRU algorithm

Now let’s define the env variables so that docker-registry can use them while starting up. Add the below variables to the /etc/default/docker-registry file.

CACHE_REDIS_HOST=localhost
CACHE_REDIS_PORT=6379
CACHE_REDIS_DB=0

CACHE_LRU_REDIS_HOST=localhost
CACHE_LRU_REDIS_PORT=6379
CACHE_LRU_REDIS_DB=0

Let’s start the Docker Registry in foreground and see if it’s starting with Redis Cache.

root@docker:~# docker-registry

   13/Jan/2015:19:07:42 +0000 INFO: Enabling storage cache on Redis
   13/Jan/2015:19:07:42 +0000 INFO: Redis host: localhost:6379 (db0)
   13/Jan/2015:19:07:42 +0000 INFO: Enabling lru cache on Redis
       13/Jan/2015:19:07:42 +0000 INFO: Redis lru host: localhost:6379 (db0)
       13/Jan/2015:19:07:42 +0000 INFO: Enabling storage cache on Redis

The above logs show us that the registry has started with the Redis cache. Now we need to set up the S3 backend storage. By default, the dev flavor uses file storage as the backend; we need to change it to S3 in the config.yml.

dev: &dev
    <<: *s3                            #by default this will be local, which is local file storage
    loglevel: _env:LOGLEVEL:debug
    debug: _env:DEBUG:true
    search_backend: _env:SEARCH_BACKEND:sqlalchemy

Now if we check the config.yml, in the S3 backend section, the mandatory variables are the ones mentioned below. The boto variables are needed only if we are using any non-Amazon S3-compliant object store.

AWS_REGION   => S3 region where the bucket is located
AWS_BUCKET   => S3 bucket name
STORAGE_PATH => the sub "folder" where image data will be stored
AWS_ENCRYPT  => if true, the container will be encrypted on the server-side by S3 and will be stored in an encrypted form while at rest in S3. Default value is `True`
AWS_SECURE   => true for HTTPS to S3
AWS_KEY      => S3 Access key
AWS_SECRET   => S3 secret key

We can define the above variables in the /etc/default/docker-registry file (a full example follows), and we need to restart the registry process to make the changes effective.
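For example, the complete defaults file might end up looking roughly like this (the bucket, region, storage path and keys below are placeholders matching the sample logs):

DOCKER_REGISTRY_HOME=/usr/local/src/docker-registry
DOCKER_REGISTRY_CONFIG=/usr/local/src/docker-registry/config/config.yml
SETTINGS_FLAVOR=dev
GUNICORN_OPTS=[--preload]
LOGLEVEL=debug

CACHE_REDIS_HOST=localhost
CACHE_REDIS_PORT=6379
CACHE_REDIS_DB=0
CACHE_LRU_REDIS_HOST=localhost
CACHE_LRU_REDIS_PORT=6379
CACHE_LRU_REDIS_DB=0

AWS_REGION=us-west-2
AWS_BUCKET=my-docker
STORAGE_PATH=/registry1
AWS_ENCRYPT=true
AWS_SECURE=true
AWS_KEY=AK123
AWS_SECRET=abc123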

$ docker-registry         

    13/Jan/2015:23:40:39 +0000 INFO: Enabling storage cache on Redis
    13/Jan/2015:23:40:39 +0000 INFO: Redis host: localhost:6379 (db0)
    13/Jan/2015:23:40:39 +0000 INFO: Enabling lru cache on Redis
    13/Jan/2015:23:40:39 +0000 INFO: Redis lru host: localhost:6379 (db0)
    13/Jan/2015:23:40:39 +0000 INFO: Enabling storage cache on Redis
    13/Jan/2015:23:40:39 +0000 INFO: Redis config: {'path': '/registry1', 'host': 'localhost', 'password': None, 'db': 0, 'port': 6379}
    13/Jan/2015:23:40:39 +0000 DEBUG: Will return docker-registry.drivers.s3.Storage
    13/Jan/2015:23:40:39 +0000 DEBUG: Using access key provided by client.
    13/Jan/2015:23:40:39 +0000 DEBUG: Using secret key provided by client.
    13/Jan/2015:23:40:39 +0000 DEBUG: path=/
    13/Jan/2015:23:40:39 +0000 DEBUG: auth_path=/my-docker/
    13/Jan/2015:23:40:39 +0000 DEBUG: Method: HEAD
    13/Jan/2015:23:40:39 +0000 DEBUG: Path: /
    13/Jan/2015:23:40:39 +0000 DEBUG: Data:
    13/Jan/2015:23:40:39 +0000 DEBUG: Headers: {}
    13/Jan/2015:23:40:39 +0000 DEBUG: Host: my-docker.s3-us-west-2.amazonaws.com
    13/Jan/2015:23:40:39 +0000 DEBUG: Port: 443
    13/Jan/2015:23:40:39 +0000 DEBUG: Params: {}
    13/Jan/2015:23:40:39 +0000 DEBUG: establishing HTTPS connection: host=my-docker.s3-us-west-2.amazonaws.com, kwargs={'port': 443, 'timeout': 70}
    13/Jan/2015:23:40:39 +0000 DEBUG: Token: None
    13/Jan/2015:23:40:39 +0000 DEBUG: StringToSign:
    HEAD


    Tue, 13 Jan 2015 23:40:39 GMT
    /my-docker/
    13/Jan/2015:23:40:39 +0000 DEBUG: Signature:
    AWS XXXXXXXXXXXXXXXXXXXX:********************
    13/Jan/2015:23:40:39 +0000 DEBUG: Final headers: {'Date': 'Tue, 13 Jan 2015 23:40:39 GMT', 'Content-Length': '0', 'Authorization': u'AWS XXXXXXXXXXXXXXXXXXXX:********************', 'User-Agent': 'Boto/2.34.0 Python/2.7.3 Linux/3.8.0-44-generic'}
    13/Jan/2015:23:40:39 +0000 DEBUG: Response headers: [('x-amz-id-2', '*************************************************'), ('server', 'AmazonS3'), ('transfer-encoding', 'chunked'), ('x-amz-request-id', 'XXXXXXXXXXXXXX'), ('date', 'Tue, 13 Jan 2015 23:40:40 GMT'), ('content-type', 'application/xml')]
    13/Jan/2015:23:40:39 +0000 INFO: Boto based storage initialized
    2015-01-13 23:40:39 [21909] [INFO] Starting gunicorn 19.1.0
    2015-01-13 23:40:39 [21909] [INFO] Listening at: http://0.0.0.0:5000 (21909)
    2015-01-13 23:40:39 [21909] [INFO] Using worker: gevent
    2015-01-13 23:40:39 [21919] [INFO] Booting worker with pid: 21919
    2015-01-13 23:40:39 [21920] [INFO] Booting worker with pid: 21920
    2015-01-13 23:40:39 [21921] [INFO] Booting worker with pid: 21921
    2015-01-13 23:40:39 [21922] [INFO] Booting worker with pid: 21922
    2015-01-13 23:40:39 [21909] [INFO] 4 workers

So now we have the Docker registry with the S3 backend and Redis cache. Let's push one of our local images and see if the registry uploads it to the S3 bucket.

$ docker push docker.example.com/mydocker/debian:wheezy

    The push refers to a repository [docker.example.com/mydocker/debian] (len: 1)
    Sending image list
    Pushing repository docker.example.com/mydocker/debian (1 tags)
    511136ea3c5a: Image successfully pushed
    1aeada447715: Image successfully pushed
    479215127fa7: Image successfully pushed
    3192d5ea7137: Image successfully pushed
    Pushing tag for rev [3192d5ea7137] on {https://docker.example.com/v1/repositories/mydocker/debian/tags/wheezy}

Now let's check the debug logs to get a better glimpse of what's happening in the background.

# Sample output of S3 image upload

    Tue, 13 Jan 2015 20:05:05 GMT
    /my-docker/registry1/images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/json
    13/Jan/2015:20:05:05 +0000 DEBUG: Signature:
    AWS XXXXXXXXXXXXXXXXXXXX:********************
    13/Jan/2015:20:05:05 +0000 DEBUG: Final headers: {'Date': 'Tue, 13 Jan 2015 20:05:05 GMT', 'Content-Length': '0', 'Authorization': u'AWS   XXXXXXXXXXXXXXXXXXXX:********************', 'User-Agent': 'Boto/2.34.0 Python/2.7.3 Linux/3.8.0-44-generic'}
    13/Jan/2015:20:05:05 +0000 DEBUG: Response headers: [('x-amz-id-2', '*****************************************************'), ('server', 'AmazonS3'), ('transfer-encoding', 'chunked'), ('x-amz-request-id', 'XXXXXXXXXXXXX'), ('date', 'Tue, 13 Jan 2015 20:05:05 GMT'), ('content-type', 'application/xml')]
    13/Jan/2015:20:05:05 +0000 DEBUG: path=/registry1/images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/_inprogress
    13/Jan/2015:20:05:05 +0000 DEBUG: auth_path=/my-docker/registry1/images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/_inprogress
    13/Jan/2015:20:05:05 +0000 DEBUG: Method: PUT
    13/Jan/2015:20:05:05 +0000 DEBUG: Path: /registry1/images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/_inprogress
    13/Jan/2015:20:05:05 +0000 DEBUG: Data:
    13/Jan/2015:20:05:05 +0000 DEBUG: Headers: {'Content-MD5': u'xxxxxxxxxxxxxxx', 'Content-Length': '4', 'Expect': '100-Continue', 'x-amz-server-side-encryption': 'AES256', 'Content-Type': 'application/octet-stream', 'User-Agent': 'Boto/2.34.0 Python/2.7.3 Linux/3.8.0-44-generic'}
    13/Jan/2015:20:05:05 +0000 DEBUG: Host: my-docker.s3-us-west-2.amazonaws.com
    13/Jan/2015:20:05:05 +0000 DEBUG: Port: 443
    13/Jan/2015:20:05:05 +0000 DEBUG: Params: {}
    13/Jan/2015:20:05:05 +0000 DEBUG: Token: None
    13/Jan/2015:20:05:05 +0000 DEBUG: StringToSign:
    PUT
    xxxxxxxxxxxxxxx
    application/octet-stream
    Tue, 13 Jan 2015 20:05:05 GMT
    x-amz-server-side-encryption:AES256
    /my-docker/registry1/images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/_inprogress
    13/Jan/2015:20:05:05 +0000 DEBUG: Signature:
    AWS XXXXXXXXXXXXXXXXXXXX:********************
    13/Jan/2015:20:05:05 +0000 DEBUG: Final headers: {'Content-MD5': 'xxxxxxxxxxxxx', 'Content-Length': '4', 'Expect': '100-Continue', 'Date': 'Tue,              13 Jan 2015 20:05:05 GMT', 'x-amz-server-side-encryption': 'AES256', 'Content-Type': 'application/octet-stream', 'Authorization': u'AWS XXXXXXXXXXXXXXXXXXXX:********************', 'User-Agent': 'Boto/2.34.0 Python/2.7.3 Linux/3.8.0-44-generic'}
    13/Jan/2015:20:05:05 +0000 DEBUG: Response headers: [('content-length', '0'), ('x-amz-id-2', '*************************************'), ('server', 'AmazonS3'), ('x-amz-request-id', 'xxxxxxxxxxxxxx'), ('etag', '"b326b5062b2f0e69046810717534cb09"'), ('date', 'Tue, 13 Jan 2015 20:05:06 GMT'), ('x-amz-server-side-encryption', 'AES256')]
    13/Jan/2015:20:05:05 +0000 DEBUG: path=/registry1/images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/_checksum
    13/Jan/2015:20:05:05 +0000 DEBUG: auth_path=/my-docker/registry1/images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/_checksum
    13/Jan/2015:20:05:05 +0000 DEBUG: Method: HEAD
    13/Jan/2015:20:05:05 +0000 DEBUG: Path: /registry1/images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/_checksum
    13/Jan/2015:20:05:05 +0000 DEBUG: Data:
    13/Jan/2015:20:05:05 +0000 DEBUG: Headers: {}
    13/Jan/2015:20:05:05 +0000 DEBUG: Host: my-docker.s3-us-west-2.amazonaws.com
    13/Jan/2015:20:05:05 +0000 DEBUG: Port: 443
    13/Jan/2015:20:05:05 +0000 DEBUG: Params: {}
    13/Jan/2015:20:05:05 +0000 DEBUG: Token: None
    13/Jan/2015:20:05:05 +0000 DEBUG: StringToSign:
    HEAD

    Tue, 13 Jan 2015 20:05:05 GMT
    /my-docker/registry1/images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/_checksum
    13/Jan/2015:20:05:05 +0000 DEBUG: Signature:
    AWS XXXXXXXXXXXXXXXXXXXX:********************
    13/Jan/2015:20:05:05 +0000 DEBUG: Final headers: {'Date': 'Tue, 13 Jan 2015 20:05:05 GMT', 'Content-Length': '0', 'Authorization': u'AWS   XXXXXXXXXXXXXXXXXXXX:********************', 'User-Agent': 'Boto/2.34.0 Python/2.7.3 Linux/3.8.0-44-generic'}
    13/Jan/2015:20:05:05 +0000 DEBUG: Response headers: [('x-amz-id-2', '***************************************************'), ('server', 'AmazonS3'), ('transfer-encoding', 'chunked'), ('x-amz-request-id', 'xxxxxxxxxxxxxxx'), ('date', 'Tue, 13 Jan 2015 20:05:05 GMT'), ('content-type', 'application/xml')]
    13/Jan/2015:20:05:05 +0000 DEBUG: path=/
    13/Jan/2015:20:05:05 +0000 DEBUG: auth_path=/my-docker/
    13/Jan/2015:20:05:05 +0000 DEBUG: path=/?delimiter=/&prefix=registry1/images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/_checksum/
    13/Jan/2015:20:05:05 +0000 DEBUG: auth_path=/my-docker/? delimiter=/&prefix=registry1/images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/_checksum/
    13/Jan/2015:20:05:05 +0000 DEBUG: Method: GET
    13/Jan/2015:20:05:05 +0000 DEBUG: Path: /?delimiter=/&prefix=registry1/images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/_checksum/
    13/Jan/2015:20:05:05 +0000 DEBUG: Data:
    13/Jan/2015:20:05:05 +0000 DEBUG: Headers: {}
    13/Jan/2015:20:05:05 +0000 DEBUG: Host: my-docker.s3-us-west-2.amazonaws.com
    13/Jan/2015:20:05:05 +0000 DEBUG: Port: 443
    13/Jan/2015:20:05:05 +0000 DEBUG: Params: {}
    13/Jan/2015:20:05:05 +0000 DEBUG: Token: None
    13/Jan/2015:20:05:05 +0000 DEBUG: StringToSign:
    13/Jan/2015:20:05:07 +0000 DEBUG: args = {'image_id': u'3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63'}
    13/Jan/2015:20:05:07 +0000 DEBUG: path=/registry1/images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/layer
    13/Jan/2015:20:05:07 +0000 DEBUG: auth_path=/my-docker/registry1/images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/layer
    13/Jan/2015:20:05:07 +0000 DEBUG: Method: HEAD
    13/Jan/2015:20:05:07 +0000 DEBUG: Path: /registry1/images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/layer
    13/Jan/2015:20:05:07 +0000 DEBUG: Data:
    13/Jan/2015:20:05:07 +0000 DEBUG: Headers: {}
    13/Jan/2015:20:05:07 +0000 DEBUG: Host: my-docker.s3-us-west-2.amazonaws.com
    13/Jan/2015:20:05:07 +0000 DEBUG: Port: 443
    13/Jan/2015:20:05:07 +0000 DEBUG: Params: {}
    13/Jan/2015:20:05:07 +0000 DEBUG: Token: None
    13/Jan/2015:20:05:07 +0000 DEBUG: StringToSign:
    HEAD

    Tue, 13 Jan 2015 20:05:07 GMT
    /my-docker/registry1/images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/layer
    13/Jan/2015:20:05:07 +0000 DEBUG: Signature:
    AWS XXXXXXXXXXXXXXXXXXXX:********************
    13/Jan/2015:20:05:07 +0000 DEBUG: Final headers: {'Date': 'Tue, 13 Jan 2015 20:05:07 GMT', 'Content-Length': '0', 'Authorization': u'AWS   XXXXXXXXXXXXXXXXXXXX:********************', 'User-Agent': 'Boto/2.34.0 Python/2.7.3 Linux/3.8.0-44-generic'}   
    13/Jan/2015:20:05:07 +0000 DEBUG: Token: None
    13/Jan/2015:20:05:07 +0000 DEBUG: StringToSign:
    HEAD

Now let’s query the registry for the newly uploaded image

$ curl -k https://dockeradmin:xxxxxxx@docker.example.com/v1/repositories/mydocker/debian/tags/wheezy

    "3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63"

$ curl http://localhost:5000/v1/search
    {"num_results": 1, "query": "", "results": [{"description": "", "name": "mydocker/debian"}]}

Also let’s see if our Redis server is caching the image.

redis 127.0.0.1:6379> keys *
 1) "cache_path:/tmp/registryrepositories/mydocker/debian/tag_wheezy"
 2) "cache_path:/registry1images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/ancestry"
 3) "cache_path:/registry1images/1aeada4477158496dc31ee5c6e7174240140d83fddf94bc57fc02bee1b04e44f/json"
 4) "cache_path:/registryimages/511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158/ancestry"
 5) "diff-worker"
 6) "cache_path:/registry1images/511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158/json"
 7) "cache_path:/registry1repositories/mydocker/debian/_index_images"
 8) "cache_path:/registryimages/511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158/_inprogress"
 9) "cache_path:/registry1images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/_checksum"
10) "cache_path:/registry1repositories/mydocker/debian/tagwheezy_json"
11) "cache_path:/registryimages/511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158/_checksum"
12) "cache_path:/registry1images/479215127fa7b852902ed734f3a7ac69177c0d4d9446ad3a1648938230c3c8ab/ancestry"
13) "cache_path:/registry1repositories/mydocker/debian/tag_wheezy"
14) "cache_path:/registry1images/1aeada4477158496dc31ee5c6e7174240140d83fddf94bc57fc02bee1b04e44f/ancestry"
15) "cache_path:/registry1images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/json"
16) "cache_path:/registryrepositories/mydocker/debian/_index_images"
17) "cache_path:/registryimages/511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158/json"
18) "cache_path:/registry1images/511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158/ancestry"
19) "cache_path:/registry1images/479215127fa7b852902ed734f3a7ac69177c0d4d9446ad3a1648938230c3c8ab/_checksum"
20) "cache_path:/registry1images/1aeada4477158496dc31ee5c6e7174240140d83fddf94bc57fc02bee1b04e44f/_checksum"
21) "cache_path:/registry1images/511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158/_checksum"
22) "cache_path:/registry1images/479215127fa7b852902ed734f3a7ac69177c0d4d9446ad3a1648938230c3c8ab/json"

Now, for those who want a continuous integration system: we can set up Jenkins to build the images automatically, upload them to our private registry, and use Mesos/CoreOS to deploy the images throughout our infrastructure in a fully automated fashion.


Automating Debian Package Management

With the rise of CI tools like Jenkins/GitLab and config management tools like Salt/Ansible, continuous integration has become very flexible. Most projects now use Git for version control and CI tools like Jenkins to build and test the packages automatically whenever a change is pushed to the repo. Finally, once the build is successful, the packages are pushed to a repository so that config management systems like Salt/Puppet/Ansible can go ahead and perform the upgrade. In my previous blogs, I've explained how to build a Debian package and how to create and manage APT repos via aptly. In this blog I'll explain how to automate these two processes.

So the flow is like this: we have a GitHub repo, and once a change is pushed to it, GitHub sends a hook to our Jenkins server, which in turn triggers the Jenkins package build. Once the package has been built successfully, Jenkins automatically adds the new packages to our repo and publishes it to our APT repo via aptly.

Installing Jenkins

First, let’s setup a Jenkins build server.

$ wget -q -O - https://jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -

$ echo "deb http://pkg.jenkins-ci.org/debian binary/" > /etc/apt/sources.list.d/jenkins.list

$ apt-get update && apt-get install jenkins

$ /etc/init.d/jenkins restart

Once the Jenkins service is started, we can access the Jenkins UI via ”http://jenkins-server-ip:8080”. By default there is no authentication for this URL, so accessing the URL will open up the Jenkins UI.

Creating a Build Job in Jenkins

In order to use a Git repo, we have to install the Git plugin first. In Jenkins UI, Go to ”Manage Jenkins” – > ”Manage Plugins” – > ”Available” and search for ”GIT plugin” and install it. Once the Git plugin has been installed we can create a new build job.

Click on "New Item" on the home page, select "Freestyle Project" and click "OK". On the next page, we need to configure all the necessary steps for the build job. Fill in the details like project name, description etc. Under "Source Code Management", select Git and enter the repo URL. Make sure that the jenkins user has access to the repo. We can also use deploy keys, but I've generated a separate SSH key for the Jenkins user and added it to GitHub. Under "Build Triggers", select 'Build when a change is pushed to GitHub' so that Jenkins starts the build job every time a change is pushed to the repo.

Under the Build section, click on "Add build step", select 'Execute shell', and add our package build script, which is stage 1.

set -e
set -x
debuild -us -uc

In stage 2, I'm going to publish my newly built packages to my APT repo.

aptly repo add myapt ../openvpn*.deb
/usr/bin/env script -qfc "aptly publish -passphrase=<GPG passphrase> update myapt"

As you can see in the above command, I've used the script command. This is because I was getting the error "aptly stderr: gpg: cannot open tty `/dev/tty': No such device or address" whenever I tried to update a repo via aptly from Jenkins. This is due to a bug in aptly; the fix has landed on the master branch but is not yet released. The script command is a temporary workaround for this bug.

Now we have a build job ready. We can trigger a build manually to test if the job is working fine. If the build is successful, we are done with our build server. The final step is configuring GitHub to send a trigger whenever a change is pushed to GitHub.

Configuring Github Triggers

Go to the GitHub repo and click on the repo settings. Open "Webhooks and Services", select "Add Service" and choose "GitHub plugin". It will then ask for Jenkins' hook URL, which is "http://<jenkins-server-ip>:8080/github-webhook/"; add the service. Once the service is set, we can click on "Test service" to check if the webhook is working fine.

Once the test hook is created, go to the Jenkins job page and select "GitHub Hook Log". The test hook should be displayed there; if not, something is wrong with the config.

Now we have a fully automated build and release management. Config management tools like Salt/Ansible etc.. can go ahead and start the deployment process.


Managing Debian APT Repository via Aptly

In my previous blog, I've explained how to build a Debian package from source. In this blog I'm going to explain how to create and manage our own APT repository. Enter aptly, a swiss army knife for Debian repository management: it allows us to mirror remote repositories, manage local package repositories, take snapshots, pull new versions of packages along with dependencies, and publish them as a Debian repository. Aptly can also upload the repo to Amazon S3, but we need to install APT S3 support in order to consume it from S3.

First, let's install aptly on our build server. More detailed installation documentation is available on the website.

$ echo "deb http://repo.aptly.info/ squeeze main" > /etc/apt/sources.list

$ gpg --keyserver keys.gnupg.net --recv-keys 2A194991

$ gpg -a --export 2A194991 | sudo apt-key add -

$ apt-get update && apt-get install aptly

Let’s create a repo,

$ aptly repo create -distribution=wheezy -component=main my-repo    # where my-repo is the name of the repository

Once the repo is created, we can start adding our newly created packages to our new repo.

$ aptly repo add <repo name> <your debian file>    # in my case aptly repo add myrepo openvpn_2.3.6_amd64.deb

The above command will add the new package to the repo. Now, in order to make this repo usable, we need to publish it. A valid GPG key is required for publishing the repo, so let's create the GPG key for aptly.

$ gpg --gen-key

$ gpg --export --armor <email-id-used-for-gpg-key-creation> > myrepo-pubkey.asc   # creates a pubkey that can be distributed

$ gpg --send-key KEYNAME     # use this if we want to send the key to a public keyserver; we can also pass --keyserver <server-url> to pick a specific keyserver

Once we have our GPG key, we can publish our repo. By default aptly can publish the repo to S3, or it can publish it locally and we can use any webserver to serve the repo.

$ aptly publish repo -distribution="wheezy" my-repo

Once published, we can point the webserver to the aptly public directory ("~/.aptly/public" by default), where the published repo files are created. Aptly also comes with an embedded webserver which can be invoked by running aptly serve. On the consumer side, machines just need to trust the signing key and add the repo to their APT sources, as sketched below. Aptly really makes repo management easy; we can integrate this into our Jenkins job so that each time we build a package, we can directly add and upload it to our repository.
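A rough sketch of the consumer-side setup (replace <repo-server> with wherever the webserver or aptly serve is listening):

$ wget -qO - http://<repo-server>/myrepo-pubkey.asc | apt-key add -     # trust the repo signing key we exported earlier

$ echo "deb http://<repo-server>/ wheezy main" > /etc/apt/sources.list.d/my-repo.list

$ apt-get update && apt-get install openvpn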


Building a Debian Package

Installing applications via packages saves us a lot of time. Being an ops-oriented guy, compiling applications from source is sometimes painful and time consuming, especially the dependencies. After the rise of config management systems, people started writing automated scripts that install the necessary dependencies and run the usual make && make install. But applications like FreeSWITCH were taking 15+ minutes to finish compiling, which is definitely a bad idea when you want to deploy a new patch on a cluster. In such cases packages are really a life saver: build the packages once as per our requirements and deploy them throughout the infrastructure. Now, with tools like Jenkins, Travis CI etc., we can attain a good level of CI.

In this blog, I'm going to explain how to build a Debian package from scratch. First let's install the two main dependencies for a build machine.

$ apt-get install devscripts build-essential

For the past few days I was playing with OpenVPN and SoftEther. I'm going to build a simple Debian package for OpenVPN from source. The current stable version of OpenVPN is 2.3.6. First let's get the OpenVPN source code.

$ wget http://swupdate.openvpn.org/community/releases/openvpn-2.3.6.tar.gz

$ tar xvzf openvpn-2.3.6.tar.gz && cd openvpn-2.3.6

To build a package, we first need to create a debian folder; in this folder we place all the files required for building the package.

$ mkdir debian

As per the Debian packaging documentation, the mandatory files are rules, control and changelog. The changelog content must match the exact syntax, otherwise packaging will fail at the initial stage itself. There are some more optional files that we can use. Below are the files present in my debian folder.

changelog           => Changelog for my package
control             => Contains details about the package, including its dependencies (see the sketch below)
dirs                => Specifies any directories which we need but which are not created by the normal installation procedure, handled by 'dh_installdirs'
openvpn.default     => This file will be copied to /etc/default/openvpn
openvpn.init        => This file will be copied to /etc/init.d/openvpn, handled by 'dh_installinit'
postinst.debhelper  => Any action that needs to be performed once the package installation is completed, like creating a specific user, starting a service etc.
postrm.debhelper    => Any action that needs to be performed once the package removal is completed, like deleting a specific user
prerm.debhelper     => Any action that needs to be performed before the package removal is initiated, like stopping the service
rules               => Contains the rules for the build procedure
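For reference, a minimal control file for this package might look roughly like the sketch below (the maintainer and dependency names are illustrative, not taken from the official OpenVPN packaging):

Source: openvpn
Section: net
Priority: optional
Maintainer: Your Name <you@example.com>
Build-Depends: debhelper (>= 9), libssl-dev, liblzo2-dev, libpam0g-dev
Standards-Version: 3.9.5

Package: openvpn
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: OpenVPN built from source with a custom prefix
 Custom build of OpenVPN 2.3.6 installed under /opt/openvpn.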

In my case I wanted to install OpenVPN in a custom location, say '/opt/openvpn'. If we were building manually from scratch, we could pass the prefix like './configure --prefix=/opt/openvpn'. But in the build process, dh_auto_configure runs the './configure' step with the default options, i.e. no custom prefix. So we need to override this step if we want a custom prefix. Below is the content of my rules file.

# rules file

    #!/usr/bin/make -f
    # vim: tabstop=4 softtabstop=4 noexpandtab fileencoding=utf-8

    # Uncomment this to turn on verbose mode.
    export DH_VERBOSE=1

    DEB_DIR=$(CURDIR)/debian/openvpn

    %:
        dh $@
    override_dh_auto_configure:                      # override of configure
        ./configure --prefix=/opt/openvpn

Once we have all the necessary files in place, we can start the build process. Make sure that all the dependency packages mentioned in the control file are installed on the build server.

    $ debuild -us -uc

If the build command completes successfully, we will see the deb package as well as the source package just above our openvpn source folder, which is the default path where dh_builddeb places the files. We can override that too, as shown below.

#!/usr/bin/make -f
# vim: tabstop=4 softtabstop=4 noexpandtab fileencoding=utf-8

# Uncomment this to turn on verbose mode.
export DH_VERBOSE=1

DEB_DIR=$(CURDIR)/debian/openvpn

%:
    dh $@
override_dh_auto_configure:
    ./configure --prefix=/opt/openvpn
override_dh_builddeb:
    dh_builddeb --destdir=./deb-pkg/

So now we have the Debian package. We can test installing it manually via 'dpkg -i', as sketched below. This was just a walkthrough of how to build a simple Debian package. In my next blog, I'll be discussing how to create and manage a private APT repository using an awesome tool called aptly.
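A quick manual test of the freshly built package might look like this (the exact file name depends on the version set in debian/changelog):

$ dpkg -i deb-pkg/openvpn_2.3.6-1_amd64.deb

$ dpkg -L openvpn | head        # list a few installed files to confirm the /opt/openvpn prefix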


Sippy_cup – FreeSwitch Load Test Simplified

Ever since the entry of Docker, everyone has been busy porting their applications to Docker containers. Now with tools like Mesos, CoreOS etc. we can easily achieve scalability too. At Plivo we always dedicate ourselves to playing with such new technologies. In my previous blog posts, I've explained how to containerize FreeSWITCH and how to perform some basic load tests using simple dialplans. My previous load tests required a bunch of basic FreeSWITCH servers to originate calls and flood the FreeSWITCH container. This time I'm going to use a simpler method, which everyone can use even from their laptops.

Enter SIPp. SIPp is a free open source test tool / traffic generator for the SIP protocol. But the main issue for a beginner like me is generating a proper XML scenario for SIPp that matches my exact production setup. After googling, I came across a super simple Ruby wrapper over SIPp called sippy_cup. With sippy_cup we just need to create a simple YAML file; sippy_cup parses this yml file and generates the equivalent XML, which is then used to generate calls. sippy_cup can also be used to generate only the XML file for SIPp.

Setting up sippy_cup is very simple. There are only two dependencies

      1) ruby (2.1.2 recommended)
      2) SIPp

Another important dependency is our local internet bandwidth. Flooding too many calls will definitely result in network bottlenecks, which I experienced when I generated 1k calls from my laptop. Now let's install SIPp.

sudo apt-get install pcaputils libpcap-dev libncurses5-dev

wget 'http://sourceforge.net/projects/sipp/files/sipp/3.2/sipp.svn.tar.gz/download' -O sipp.svn.tar.gz

tar zxvf sipp.svn.tar.gz

# cd into the extracted source directory before compiling (the directory name may vary with the tarball)
cd sipp.svn

# compile sipp
make

# compile sipp with pcapplay support
make pcapplay

Once we have installed SIPp and ruby, we can install sippy_cup via ruby gems.

gem install sippy_cup

Configuring sippy_cup

First we need to create a yml file for our call flow. There is good documentation available in the README on the various options that can be used to tailor the yml to our call flow. My call flow is pretty simple: I have a dialplan in my Docker FS which plays an mp3 file. Below is a simple yml config for this call flow.

source: <local_machine_ip>
destination: <docker_fs_ip>:<fs_port>
max_concurrent: <no_of_concurrent_calls>
calls_per_second: <calls_per_second>
number_of_calls: <total_no_of_calls>
to_user: <to_number>            # => should match the FS Dialplan
steps:                      # call flow steps
  - invite                  # Initial Call INVITE
  - wait_for_answer         # Waiting for Answer, handles 100, 180/183 and finally 200 OK
  - ack_answer              # ACK for the 200 OK
  - sleep 1000              # Sleeps for 1000 seconds
  - send_bye                # Sends BYE signal to FS

Now let’s run sippy_cup using our config yml

sippy_cup -r test.yml

Below is the output of a sample load test: 20 calls in total, with 10 concurrent calls.

         INVITE ---------->      20        1         0
         100 <----------         20        0         0         0
         180 <----------         0         0         0         0
         183 <----------         0         0         0         0
         200 <----------  E-RTD1 20        0         0         0
         ACK ---------->         20        0
              [ NOP ]
         Pause [    30.0s]       20                            0
         BYE ---------->         20        0
------------------------------ Test Terminated --------------------------------


----------------------------- Statistics Screen ------- [1-9]: Change Screen --
  Start Time             | 2014-10-22   19:12:40.494470 1414030360.494470
  Last Reset Time        | 2014-10-22   19:13:45.355358 1414030425.355358
  Current Time           | 2014-10-22   19:13:45.355609 1414030425.355609
-------------------------+---------------------------+--------------------------
  Counter Name           | Periodic value            | Cumulative value
-------------------------+---------------------------+--------------------------
  Elapsed Time           | 00:00:00:000000           | 00:01:04:861000
  Call Rate              |    0.000 cps              |    0.308 cps
-------------------------+---------------------------+--------------------------
  Incoming call created  |        0                  |        0
  OutGoing call created  |        0                  |       20
  Total Call created     |                           |       20
  Current Call           |        0                  |
-------------------------+---------------------------+--------------------------
  Successful call        |        0                  |       20
  Failed call            |        0                  |        0
-------------------------+---------------------------+--------------------------
  Response Time 1        | 00:00:00:000000           | 00:00:01:252000
  Call Length            | 00:00:00:000000           | 00:00:31:255000
------------------------------ Test Terminated --------------------------------


I, [2014-10-22T19:13:45.357508 #17234]  INFO -- : Test completed successfully!

I tried to perform a larger load test by making 1k calls with 250 concurrent calls. My local internet connection was flooded with traffic, as real media packets were coming back from the servers. Though it bottlenecked my connection, I was still able to make 994 successful calls. I suggest running such heavy load tests on machines with good network throughput. Below is the output for this test.

------------------------------ Scenario Screen -------- [1-9]: Change Screen --
  Call-rate(length)   Port   Total-time  Total-calls  Remote-host
   5.0(0 ms)/1.000s   8836     585.61 s         1000  54.235.170.44:5060(UDP)

  Call limit reached (-m 1000), 0.507 s period  1 ms scheduler resolution
  6 calls (limit 250)                    Peak was 176 calls, after 150 s
  0 Running, 8 Paused, 1 Woken up
  604 dead call msg (discarded)          0 out-of-call msg (discarded)
  3 open sockets
  1490603 Total RTP pckts sent           0.000 last period RTP rate (kB/s)

                                 Messages  Retrans   Timeout   Unexpected-Msg
         INVITE ---------->      1000      332       0
         100 <----------         954       53        0         0
         180 <----------         0         0         0         0
         183 <----------         0         0         0         0
         200 <------2014-10-22  19:19:23.202714 1414030763.202714: Dead call 990-17510@192.168.1.146 (successful), 

received 'SIP/2.0 200 OK
Via: SIP/2.0/UDP 192.168.1.146:8836;received=208.66.27.62;branch=z9hG4bK-17510-990-8
From: "sipp" <sip:sipp@192.168.1.146>;tag=990
To: <sip:14158872327@54.235.170.44:5060>;tag=9p6t351mvXZXg
Call-ID: 990-17510@192.168.1.146
CSeq: 2 BYE
User-Agent: Plivo
Allow: INVITE, ACK, BYE, CANCEL, OPTIONS, MESSAGE, INFO, UPDATE, REFER, NOTIFY
Supported: timer, precondition, path, replaces
Conte----  E-RTD1        994       126       0         0
         ACK ---------->         994       126
              [ NOP ]
       Pause [    30.0s]         994                           0
         BYE ---------->         994       0
------------------------------ Test Terminated --------------------------------


----------------------------- Statistics Screen ------- [1-9]: Change Screen --
  Start Time             | 2014-10-22   19:15:29.941276 1414030529.941276
  Last Reset Time        | 2014-10-22   19:25:15.056475 1414031115.056475
  Current Time           | 2014-10-22   19:25:15.564038 1414031115.564038
-------------------------+---------------------------+--------------------------
  Counter Name           | Periodic value            | Cumulative value
-------------------------+---------------------------+--------------------------
  Elapsed Time           | 00:00:00:507000           | 00:09:45:622000
  Call Rate              |    0.000 cps              |    1.708 cps
-------------------------+---------------------------+--------------------------
  Incoming call created  |        0                  |        0
  OutGoing call created  |        0                  |     1000
  Total Call created     |                           |     1000
  Current Call           |        6                  |
-------------------------+---------------------------+--------------------------
  Successful call        |        0                  |      994
  Failed call            |        0                  |        0
-------------------------+---------------------------+--------------------------
  Response Time 1        | 00:00:00:000000           | 00:00:01:670000
  Call Length            | 00:00:00:000000           | 00:00:31:673000
------------------------------ Test Terminated --------------------------------

sippy_cup is definitely a good tool for all beginners who find it really hard to work with SIPp XMLs. I'm really excited to see how Docker is going to contribute to the VoIP world.


Monitoring Redis Using CollectD and ELK

Redis is an open-source, networked, in-memory, key-value data store. It's heavily used everywhere, from web stacks to monitoring to message queues. Monitoring tools like Sensu already have some good scripts to monitor Redis. Last month during PyCon 2014, Plivo open-sourced a new rate-limited queue called SHARQ, which is based on Redis. So apart from plain monitoring checks, we decided to keep a TSDB of what's happening in our Redis cluster. Since we are heavily using the ELK stack to visualize our infrastructure, we decided to go ahead with the same.

CollectD Redis Plugin

There is a cool CollectD plugin for Redis. It pulls a variety of data from Redis, including memory used, commands processed, number of connected clients and slaves, number of blocked clients, number of keys stored per db, uptime and changes since last save. The installation is pretty simple and straightforward.

$ apt-get update && apt-get install collectd

$ git clone https://github.com/powdahound/redis-collectd-plugin.git /tmp/redis-collectd-plugin

Now place the redis_info.py file in the collectd folder and enable the Python plugin so that collectd can use this Python file. Below is our collectd conf.

Hostname    "<redis-server-fqdn>"
Interval 10
Timeout 4
Include "/etc/collectd/filters.conf"
Include "/etc/collectd/thresholds.conf"

LoadPlugin syslog
<Plugin syslog>
    LogLevel info
</Plugin>

LoadPlugin network
Include "/etc/collectd/redis.conf"      # This is the configuration for the Redis plugin
<Plugin network>
    Server "<logstash-fqdn>" "<logstash-collectd-port>"
    ReportStats true
</Plugin>
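On the Logstash side, the receiving input is roughly the sketch below (this assumes the collectd codec is available and that the port matches the one the collectd network plugin sends to):

input {
  udp {
    port  => 25826                # same port as the collectd network plugin sends to
    codec => collectd { }
    type  => "collectd"
  }
}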

Now copy the redis python plugin and the conf file to collectd folder.

$ mkdir /etc/collectd/plugin            # This is where we are going to place our custom plugins

$ cp /tmp/redis-collectd-plugin/redis_info.py /etc/collectd/plugin/

$ cp /tmp/redis-collectd-plugin/redis.conf /etc/collectd/

By default, the ModulePath defined in redis.conf is '/opt/collectd/lib/collectd/plugins/python'. Make sure to replace this with the location where we copied the plugin file, in our case "/etc/collectd/plugin"; the adjusted file looks roughly like the sketch below. Once that is in place, restart the collectd daemon to enable the redis plugin.
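A sketch of the redis.conf shipped with the plugin, adjusted for our path (options may differ slightly between plugin versions):

<LoadPlugin python>
    Globals true
</LoadPlugin>

<Plugin python>
    ModulePath "/etc/collectd/plugin"
    Import "redis_info"
    <Module redis_info>
        Host "localhost"
        Port 6379
        Verbose false
    </Module>
</Plugin>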

$ /etc/init.d/collectd stop

$ /etc/init.d/collectd start

In my previous blog, I've explained how to enable and use the CollectD input plugin in Logstash and how to use Kibana to plot the data coming from collectd. Below are the data points that we are receiving from CollectD on Logstash:

  1) type_instance: blocked_clients
  2) type_instance: evicted_keys
  3) type_instance: connected_slaves
  4) type_instance: commands_processed
  5) type_instance: connected_clients
  6) type_instance: used_memory 
  7) type_instance: <dbname>-keys
  8) type_instance: changes_since_last_save
  9) type_instance: uptime_in_seconds
10) type_instance: connections_received

Now we need to visualize these via Kibana. Let's create some Elasticsearch queries so that we can visualize them directly. Below are some sample queries created in the Kibana UI.

1) type_instance: "commands_processed" AND host: "<redis-host-fqdn>"
2) type_instance: "used_memory" AND host: "<redis-host-fqdn>"
3) type_instance: "connections_received" AND host: "<redis-host-fqdn>"
4) type_instance: "<dbname>-keys" AND host: "<redis-host-fqdn>"

Now that we have some sample queries, let's visualize them.

Now create histograms in the same way by changing the selected queries.
