
Setting Up Docker Private Registry

Last year, container-based technology saw a huge boom. A lot of open source projects and startups built on top of Docker, and Docker has now become a favourite tool for both Dev and Ops folks. I'm a big fan of Docker and do all my hacks on containers. This time I decided to play with a private Docker registry, so that I can sync all my Docker clients with a central registry. In this test setup I'm using an Ubuntu 12.04 server with Nginx as a reverse proxy. With the Nginx proxy I can easily enforce basic auth and protect my private Docker registry from unauthorized access.

Installing Docker Registry

Download the latest release of Docker Registry from Docker's GitHub repo.

$ wget https://github.com/docker/docker-registry/archive/0.9.0.tar.gz -O /usr/local/src/0.9.0.tar.gz

$ cd /usr/local/src && tar xvzf 0.9.0.tar.gz && mv docker-registry-0.9.0 docker-registry


Let's install the dependencies,

$ apt-get update && apt-get install swig python-pip python-dev libssl-dev liblzma-dev libevent1-dev patch


Once the dependencies are installed, let's go ahead and install the docker-registry app.

$ cat /usr/local/src/docker-registry/config/boto.cfg > /etc/boto.cfg

$ pip install /usr/local/src/docker-registry/depends/docker-registry-core/

$ pip install file:///usr/local/src/docker-registry

$ patch $(python -c 'import boto; import os; print os.path.dirname(boto.__file__)')/connection.py < /usr/local/src/docker-registry/contrib/boto_header_patch.diff

$ cp /usr/local/src/docker-registry/config/config_sample.yml /usr/local/src/docker-registry/config/config.yml 


We can edit the `config.yml` file if we want to change the local storage path (default: /tmp/registry), and we can also use Redis as a local cache plus an SQLite-based search backend. The repo already contains a sample [init script](https://raw.githubusercontent.com/docker/docker-registry/master/contrib/docker-registry_debian.sh) that can be used directly.

$ cp -rvf /usr/local/src/docker-registry/contrib/docker-registry_debian.sh /etc/init.d/docker-registry
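For reference, the local-storage section of the sample config looks roughly like the sketch below (key names as in the 0.9 sample config; the exact defaults may differ slightly):

local: &local
    <<: *common
    storage: local
    storage_path: _env:STORAGE_PATH:/tmp/registry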

Also let's set up a default file, `/etc/default/docker-registry`, so that the init script can read the necessary env variables. Below is the content of my default file.

DOCKER_REGISTRY_HOME=/usr/local/src/docker-registry
DOCKER_REGISTRY_CONFIG=/usr/local/src/docker-registry/config/config.yml
SETTINGS_FLAVOR=dev
GUNICORN_OPTS=[--preload]
LOGLEVEL=debug

Let's start the registry service,

$ /etc/init.d/docker-registry start
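If the service came up cleanly, the registry listens on port 5000 by default, so a quick sanity check from the registry host is to hit the v1 _ping endpoint, which should simply return true:

$ curl http://localhost:5000/v1/_ping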

Setting up Nginx Proxy and Docker Client

Now that we have a private Docker registry running, let's set up an Nginx proxy so that we don't have to expose the registry directly to the outside world.

$ apt-get install nginx-extras

The docker-registry repo also contains basic nginx config that can be used directly.

$ cat /usr/local/src/docker-registry/contrib/nginx/nginx.conf > /etc/nginx/sites-enabled/default

$ cp /usr/local/src/docker-registry/contrib/nginx/docker-registry.conf /etc/nginx/

Also create a basic auth file that contains the username and password. We can use htpasswd to generate it; the filename mentioned in the nginx config is docker-registry.htpasswd.

$ echo "dockeradmin:$apr1$.BzsRrxN$fng.12mJL/TJenKjkZSMS0" >> /etc/nginx/docker-registry.htpasswd  # replace the username and password with the one generated by htpasswd
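If htpasswd is not already available, it ships with the apache2-utils package; generating the entry directly (the username here is just an example) avoids hand-crafting the hash:

$ apt-get install apache2-utils

$ htpasswd -c /etc/nginx/docker-registry.htpasswd dockeradmin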

Now let's generate a self-signed SSL certificate that can be used with nginx. (Alternatively, sites like StartSSL provide free 1-year SSL certificates.)

$ mkdir /opt/certs && cd /opt/certs

$ openssl genrsa -out devdockerCA.key 2048

$ openssl req -x509 -new -nodes -key devdockerCA.key -days 10000 -out devdockerCA.crt

$ openssl genrsa -out dev-docker-registry.com.key 2048

$ openssl req -new -key dev-docker-registry.com.key -out dev-docker-registry.com.csr

        Country Name (2 letter code) [AU]: US
        State or Province Name (full name) [Some-State]: CA
        Locality Name (eg, city) []: SF
        Organization Name (eg, company) [Internet Widgits Pty Ltd]: Beingasysadmin
        Organizational Unit Name (eg, section) []: tech
        Common Name (e.g. server FQDN or YOUR name) []: docker.example.com
        Email Address []: docker@example.com

        Please enter the following 'extra' attributes
        to be sent with your certificate request
        A challenge password []:                          # leave the password blank
        An optional company name []:

$ openssl x509 -req -in dev-docker-registry.com.csr -CA devdockerCA.crt -CAkey devdockerCA.key -CAcreateserial -out dev-docker-registry.com.crt -days 10000

Copy the certificates to the SSL paths mentioned in the nginx config:

$ cp dev-docker-registry.com.crt /etc/ssl/certs/docker-registry

$ cp dev-docker-registry.com.key /etc/ssl/private/docker-registry

Now let’s restart the nginx process to reflect the changes

$ service nginx restart

Now, once nginx is up, we can check the connectivity between the Docker client and the registry server. Since the registry is using a self-signed certificate, we need to whitelist the CA on the Docker client machine.

$ cd /opt/certs

$ mkdir /usr/local/share/ca-certificates/docker-dev-cert

$ cp devdockerCA.crt /usr/local/share/ca-certificates/docker-dev-cert

$ update-ca-certificates

Note: If the CA is not added to the trusted list, the Docker client won't be able to authenticate against the registry server. Once the CA is added, we can test the connectivity between the Docker client and the registry server. If the Docker daemon was already running before adding the CA, we need to restart it.
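On this Ubuntu setup the daemon restart is a simple init job; depending on how Docker was installed, the service may be named docker or docker.io, so adjust accordingly:

$ service docker restart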

root@docker:~# docker login https://docker.example.com      # use the nginx basic auth creds here, email can be blank
    Username: dockeradmin
    Password: xxxxxxxx
    Email:                          # This we can leave blank
    Login Succeeded                                         # Successful login message

Once the login succeeds, let's add some base Docker images to our private registry.

$ docker pull ubuntu:12.04                             #  Pulling the latest image of Ubuntu 12.04
    ubuntu:12.04: The image you are pulling has been verified
    ed52aaa56e98: Pull complete
    b875af6dcb23: Pull complete
    41959ee20b93: Pull complete
    f959d044ebdf: Pull complete
    511136ea3c5a: Already exists
    Status: Downloaded newer image for ubuntu:12.04


$ docker images                             # Check the downloaded images
    REPOSITORY                  TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
    ubuntu                      12.04               f959d044ebdf        5 days ago          130.1 MB

So, as per the documentation, we need to create a tag for the image that we are going to push to our private registry. The tag must be of the form "<registry-server-fqdn>/<image-name>:<optional-tag>".

$ docker tag ubuntu:12.04 docker.example.com/ubuntu:12.04     # We need to create a tag of format "<registry-server-fqdn>/<image-name>:<optional-tag>"


$ docker push docker.example.com/ubuntu:12.04                 # push the image to our new private registry
    The push refers to a repository [docker.example.com/ubuntu] (len: 1)
    Sending image list
    Pushing repository docker.example.com/ubuntu (1 tags)
    Image 511136ea3c5a already pushed, skipping
    ed52aaa56e98: Image successfully pushed
    b875af6dcb23: Image successfully pushed
    41959ee20b93: Image successfully pushed
    f959d044ebdf: Image successfully pushed
    Pushing tag for rev [f959d044ebdf] on {https://docker.example.com/v1/repositories/ubuntu/tags/12.04}

Let’s query the registry API for the pushed image

$ curl http://localhost:5000/v1/search
    {"num_results": 2, "query": "", "results": [{"description": "", "name": "library/wheezy"}, {"description": "", "name": "library/ubuntu"}]}

Currently both the Docker client and the registry reside on the same machine, but we can also test pushing/pulling images from a remote machine. The only dependency is that we need to add the self-signed CA to the remote machine's trusted CA list; otherwise the Docker client will raise an SSL error while trying to log in to the private registry.

Now let’s try pulling the images from the private registry.

$ docker pull docker.example.com/ubuntu:12.04
    Pulling repository docker.example.com/ubuntu
    f959d044ebdf: Download complete
    511136ea3c5a: Download complete
    ed52aaa56e98: Download complete
    b875af6dcb23: Download complete
    41959ee20b93: Download complete
    Status: Downloaded newer image for docker.example.com/ubuntu:12.04

$ docker images
    REPOSITORY                  TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
    debian                      wheezy              479215127fa7        5 days ago          84.99 MB
    docker.example.com/wheezy   latest              479215127fa7        5 days ago          84.99 MB
    docker.example.com/ubuntu   12.04               f959d044ebdf        5 days ago          130.1 MB

Setting up S3 Backend for Docker Registry

The Docker registry supports an S3 backend for storing images out of the box. But if we are using S3, it's better to cache image data locally so that we don't have to hit S3 all the time. Redis really comes to the rescue here: we can set up a Redis server as an LRU cache and define the settings in the registry's config.yml or as environment variables.

$ apt-get install redis-server

Once the Redis server is installed, we need to define the maxmemory to be allocated for the cache and the maxmemory-policy, which tells Redis how to evict old cache entries when the maxmemory limit is reached. Add the settings below to the redis.conf file:

# allocating 2GB for the cache
maxmemory 2000mb

# evict keys that have an expire set, using an LRU algorithm
maxmemory-policy volatile-lru

Now let’s define the env variables so that docker-registry can use them while starting up. Add the below variables to the /etc/default/docker-registry file.

CACHE_REDIS_HOST=localhost
CACHE_REDIS_PORT=6379
CACHE_REDIS_DB=0

CACHE_LRU_REDIS_HOST=localhost
CACHE_LRU_REDIS_PORT=6379
CACHE_LRU_REDIS_DB=0

Let's start the Docker registry in the foreground and see if it starts with the Redis cache.

root@docker:~# docker-registry

    13/Jan/2015:19:07:42 +0000 INFO: Enabling storage cache on Redis
    13/Jan/2015:19:07:42 +0000 INFO: Redis host: localhost:6379 (db0)
    13/Jan/2015:19:07:42 +0000 INFO: Enabling lru cache on Redis
    13/Jan/2015:19:07:42 +0000 INFO: Redis lru host: localhost:6379 (db0)
    13/Jan/2015:19:07:42 +0000 INFO: Enabling storage cache on Redis

The above logs show us that the registry has started with the Redis cache. Now we need to set up the S3 backend storage. By default, for the dev flavor, the backend is local file storage; we need to change it to S3 in config.yml:

dev: &dev
    <<: *s3                            #by default this will be local, which is local file storage
    loglevel: _env:LOGLEVEL:debug
    debug: _env:DEBUG:true
    search_backend: _env:SEARCH_BACKEND:sqlalchemy

Now if we check config.yml, in the S3 backend section, the mandatory variables are the ones mentioned below. (The boto variables are needed only if we are using a non-Amazon, S3-compatible object store.)

AWS_REGION   => S3 region where the bucket is located
AWS_BUCKET   => S3 bucket name
STORAGE_PATH => the sub "folder" where image data will be stored
AWS_ENCRYPT  => if true, the data is encrypted server-side by S3 and stored in encrypted form while at rest. Default value is `True`
AWS_SECURE   => true for HTTPS to S3
AWS_KEY      => S3 Access key
AWS_SECRET   => S3 secret key

We can define the above variables in the /etc/default/docker-registry file, and we need to restart the registry process for the changes to take effect.
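For example, a sketch of the relevant lines in /etc/default/docker-registry with placeholder credentials (the bucket, region and storage path here simply mirror the values that appear in the debug logs below):

AWS_REGION=us-west-2
AWS_BUCKET=my-docker
STORAGE_PATH=/registry1
AWS_ENCRYPT=true
AWS_SECURE=true
AWS_KEY=XXXXXXXXXXXXXXXXXXXX
AWS_SECRET=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx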

$ docker-registry         

    13/Jan/2015:23:40:39 +0000 INFO: Enabling storage cache on Redis
    13/Jan/2015:23:40:39 +0000 INFO: Redis host: localhost:6379 (db0)
    13/Jan/2015:23:40:39 +0000 INFO: Enabling lru cache on Redis
    13/Jan/2015:23:40:39 +0000 INFO: Redis lru host: localhost:6379 (db0)
    13/Jan/2015:23:40:39 +0000 INFO: Enabling storage cache on Redis
    13/Jan/2015:23:40:39 +0000 INFO: Redis config: {'path': '/registry1', 'host': 'localhost', 'password': None, 'db': 0, 'port': 6379}
    13/Jan/2015:23:40:39 +0000 DEBUG: Will return docker-registry.drivers.s3.Storage
    13/Jan/2015:23:40:39 +0000 DEBUG: Using access key provided by client.
    13/Jan/2015:23:40:39 +0000 DEBUG: Using secret key provided by client.
    13/Jan/2015:23:40:39 +0000 DEBUG: path=/
    13/Jan/2015:23:40:39 +0000 DEBUG: auth_path=/my-docker/
    13/Jan/2015:23:40:39 +0000 DEBUG: Method: HEAD
    13/Jan/2015:23:40:39 +0000 DEBUG: Path: /
    13/Jan/2015:23:40:39 +0000 DEBUG: Data:
    13/Jan/2015:23:40:39 +0000 DEBUG: Headers: {}
    13/Jan/2015:23:40:39 +0000 DEBUG: Host: my-docker.s3-us-west-2.amazonaws.com
    13/Jan/2015:23:40:39 +0000 DEBUG: Port: 443
    13/Jan/2015:23:40:39 +0000 DEBUG: Params: {}
    13/Jan/2015:23:40:39 +0000 DEBUG: establishing HTTPS connection: host=my-docker.s3-us-west-2.amazonaws.com, kwargs={'port': 443, 'timeout': 70}
    13/Jan/2015:23:40:39 +0000 DEBUG: Token: None
    13/Jan/2015:23:40:39 +0000 DEBUG: StringToSign:
    HEAD


    Tue, 13 Jan 2015 23:40:39 GMT
    /my-docker/
    13/Jan/2015:23:40:39 +0000 DEBUG: Signature:
    AWS XXXXXXXXXXXXXXXXXXXX:********************
    13/Jan/2015:23:40:39 +0000 DEBUG: Final headers: {'Date': 'Tue, 13 Jan 2015 23:40:39 GMT', 'Content-Length': '0', 'Authorization': u'AWS XXXXXXXXXXXXXXXXXXXX:********************', 'User-Agent': 'Boto/2.34.0 Python/2.7.3 Linux/3.8.0-44-generic'}
    13/Jan/2015:23:40:39 +0000 DEBUG: Response headers: [('x-amz-id-2', '*************************************************'), ('server', 'AmazonS3'), ('transfer-encoding', 'chunked'), ('x-amz-request-id', 'XXXXXXXXXXXXXX'), ('date', 'Tue, 13 Jan 2015 23:40:40 GMT'), ('content-type', 'application/xml')]
    13/Jan/2015:23:40:39 +0000 INFO: Boto based storage initialized
    2015-01-13 23:40:39 [21909] [INFO] Starting gunicorn 19.1.0
    2015-01-13 23:40:39 [21909] [INFO] Listening at: http://0.0.0.0:5000 (21909)
    2015-01-13 23:40:39 [21909] [INFO] Using worker: gevent
    2015-01-13 23:40:39 [21919] [INFO] Booting worker with pid: 21919
    2015-01-13 23:40:39 [21920] [INFO] Booting worker with pid: 21920
    2015-01-13 23:40:39 [21921] [INFO] Booting worker with pid: 21921
    2015-01-13 23:40:39 [21922] [INFO] Booting worker with pid: 21922
    2015-01-13 23:40:39 [21909] [INFO] 4 workers

So now we have the Docker registry with the S3 backend and Redis cache. Let's push one of our local images and see if the registry uploads it to the S3 bucket.

$ docker push docker.example.com/mydocker/debian:wheezy

    The push refers to a repository [docker.example.com/mydocker/debian] (len: 1)
    Sending image list
    Pushing repository docker.example.com/mydocker/debian (1 tags)
    511136ea3c5a: Image successfully pushed
    1aeada447715: Image successfully pushed
    479215127fa7: Image successfully pushed
    3192d5ea7137: Image successfully pushed
    Pushing tag for rev [3192d5ea7137] on {https://docker.example.com/v1/repositories/mydocker/debian/tags/wheezy}

Now let's check the debug logs to get a better glimpse of what's happening in the background.

# Sample output of S3 image upload

    Tue, 13 Jan 2015 20:05:05 GMT
    /my-docker/registry1/images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/json
    13/Jan/2015:20:05:05 +0000 DEBUG: Signature:
    AWS XXXXXXXXXXXXXXXXXXXX:********************
    13/Jan/2015:20:05:05 +0000 DEBUG: Final headers: {'Date': 'Tue, 13 Jan 2015 20:05:05 GMT', 'Content-Length': '0', 'Authorization': u'AWS   XXXXXXXXXXXXXXXXXXXX:********************', 'User-Agent': 'Boto/2.34.0 Python/2.7.3 Linux/3.8.0-44-generic'}
    13/Jan/2015:20:05:05 +0000 DEBUG: Response headers: [('x-amz-id-2', '*****************************************************'), ('server', 'AmazonS3'), ('transfer-encoding', 'chunked'), ('x-amz-request-id', 'XXXXXXXXXXXXX'), ('date', 'Tue, 13 Jan 2015 20:05:05 GMT'), ('content-type', 'application/xml')]
    13/Jan/2015:20:05:05 +0000 DEBUG: path=/registry1/images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/_inprogress
    13/Jan/2015:20:05:05 +0000 DEBUG: auth_path=/my-docker/registry1/images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/_inprogress
    13/Jan/2015:20:05:05 +0000 DEBUG: Method: PUT
    13/Jan/2015:20:05:05 +0000 DEBUG: Path: /registry1/images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/_inprogress
    13/Jan/2015:20:05:05 +0000 DEBUG: Data:
    13/Jan/2015:20:05:05 +0000 DEBUG: Headers: {'Content-MD5': u'xxxxxxxxxxxxxxx', 'Content-Length': '4', 'Expect': '100-Continue', 'x-amz-server-side-encryption': 'AES256', 'Content-Type': 'application/octet-stream', 'User-Agent': 'Boto/2.34.0 Python/2.7.3 Linux/3.8.0-44-generic'}
    13/Jan/2015:20:05:05 +0000 DEBUG: Host: my-docker.s3-us-west-2.amazonaws.com
    13/Jan/2015:20:05:05 +0000 DEBUG: Port: 443
    13/Jan/2015:20:05:05 +0000 DEBUG: Params: {}
    13/Jan/2015:20:05:05 +0000 DEBUG: Token: None
    13/Jan/2015:20:05:05 +0000 DEBUG: StringToSign:
    PUT
    xxxxxxxxxxxxxxx
    application/octet-stream
    Tue, 13 Jan 2015 20:05:05 GMT
    x-amz-server-side-encryption:AES256
    /my-docker/registry1/images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/_inprogress
    13/Jan/2015:20:05:05 +0000 DEBUG: Signature:
    AWS XXXXXXXXXXXXXXXXXXXX:********************
    13/Jan/2015:20:05:05 +0000 DEBUG: Final headers: {'Content-MD5': 'xxxxxxxxxxxxx', 'Content-Length': '4', 'Expect': '100-Continue', 'Date': 'Tue,              13 Jan 2015 20:05:05 GMT', 'x-amz-server-side-encryption': 'AES256', 'Content-Type': 'application/octet-stream', 'Authorization': u'AWS XXXXXXXXXXXXXXXXXXXX:********************', 'User-Agent': 'Boto/2.34.0 Python/2.7.3 Linux/3.8.0-44-generic'}
    13/Jan/2015:20:05:05 +0000 DEBUG: Response headers: [('content-length', '0'), ('x-amz-id-2', '*************************************'), ('server', 'AmazonS3'), ('x-amz-request-id', 'xxxxxxxxxxxxxx'), ('etag', '"b326b5062b2f0e69046810717534cb09"'), ('date', 'Tue, 13 Jan 2015 20:05:06 GMT'), ('x-amz-server-side-encryption', 'AES256')]
    13/Jan/2015:20:05:05 +0000 DEBUG: path=/registry1/images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/_checksum
    13/Jan/2015:20:05:05 +0000 DEBUG: auth_path=/my-docker/registry1/images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/_checksum
    13/Jan/2015:20:05:05 +0000 DEBUG: Method: HEAD
    13/Jan/2015:20:05:05 +0000 DEBUG: Path: /registry1/images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/_checksum
    13/Jan/2015:20:05:05 +0000 DEBUG: Data:
    13/Jan/2015:20:05:05 +0000 DEBUG: Headers: {}
    13/Jan/2015:20:05:05 +0000 DEBUG: Host: my-docker.s3-us-west-2.amazonaws.com
    13/Jan/2015:20:05:05 +0000 DEBUG: Port: 443
    13/Jan/2015:20:05:05 +0000 DEBUG: Params: {}
    13/Jan/2015:20:05:05 +0000 DEBUG: Token: None
    13/Jan/2015:20:05:05 +0000 DEBUG: StringToSign:
    HEAD

    Tue, 13 Jan 2015 20:05:05 GMT
    /my-docker/registry1/images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/_checksum
    13/Jan/2015:20:05:05 +0000 DEBUG: Signature:
    AWS XXXXXXXXXXXXXXXXXXXX:********************
    13/Jan/2015:20:05:05 +0000 DEBUG: Final headers: {'Date': 'Tue, 13 Jan 2015 20:05:05 GMT', 'Content-Length': '0', 'Authorization': u'AWS   XXXXXXXXXXXXXXXXXXXX:********************', 'User-Agent': 'Boto/2.34.0 Python/2.7.3 Linux/3.8.0-44-generic'}
    13/Jan/2015:20:05:05 +0000 DEBUG: Response headers: [('x-amz-id-2', '***************************************************'), ('server', 'AmazonS3'), ('transfer-encoding', 'chunked'), ('x-amz-request-id', 'xxxxxxxxxxxxxxx'), ('date', 'Tue, 13 Jan 2015 20:05:05 GMT'), ('content-type', 'application/xml')]
    13/Jan/2015:20:05:05 +0000 DEBUG: path=/
    13/Jan/2015:20:05:05 +0000 DEBUG: auth_path=/my-docker/
    13/Jan/2015:20:05:05 +0000 DEBUG: path=/?delimiter=/&prefix=registry1/images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/_checksum/
    13/Jan/2015:20:05:05 +0000 DEBUG: auth_path=/my-docker/? delimiter=/&prefix=registry1/images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/_checksum/
    13/Jan/2015:20:05:05 +0000 DEBUG: Method: GET
    13/Jan/2015:20:05:05 +0000 DEBUG: Path: /?delimiter=/&prefix=registry1/images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/_checksum/
    13/Jan/2015:20:05:05 +0000 DEBUG: Data:
    13/Jan/2015:20:05:05 +0000 DEBUG: Headers: {}
    13/Jan/2015:20:05:05 +0000 DEBUG: Host: my-docker.s3-us-west-2.amazonaws.com
    13/Jan/2015:20:05:05 +0000 DEBUG: Port: 443
    13/Jan/2015:20:05:05 +0000 DEBUG: Params: {}
    13/Jan/2015:20:05:05 +0000 DEBUG: Token: None
    13/Jan/2015:20:05:05 +0000 DEBUG: StringToSign:
    13/Jan/2015:20:05:07 +0000 DEBUG: args = {'image_id': u'3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63'}
    13/Jan/2015:20:05:07 +0000 DEBUG: path=/registry1/images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/layer
    13/Jan/2015:20:05:07 +0000 DEBUG: auth_path=/my-docker/registry1/images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/layer
    13/Jan/2015:20:05:07 +0000 DEBUG: Method: HEAD
    13/Jan/2015:20:05:07 +0000 DEBUG: Path: /registry1/images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/layer
    13/Jan/2015:20:05:07 +0000 DEBUG: Data:
    13/Jan/2015:20:05:07 +0000 DEBUG: Headers: {}
    13/Jan/2015:20:05:07 +0000 DEBUG: Host: my-docker.s3-us-west-2.amazonaws.com
    13/Jan/2015:20:05:07 +0000 DEBUG: Port: 443
    13/Jan/2015:20:05:07 +0000 DEBUG: Params: {}
    13/Jan/2015:20:05:07 +0000 DEBUG: Token: None
    13/Jan/2015:20:05:07 +0000 DEBUG: StringToSign:
    HEAD

    Tue, 13 Jan 2015 20:05:07 GMT
    /my-docker/registry1/images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/layer
    13/Jan/2015:20:05:07 +0000 DEBUG: Signature:
    AWS XXXXXXXXXXXXXXXXXXXX:********************
    13/Jan/2015:20:05:07 +0000 DEBUG: Final headers: {'Date': 'Tue, 13 Jan 2015 20:05:07 GMT', 'Content-Length': '0', 'Authorization': u'AWS   XXXXXXXXXXXXXXXXXXXX:********************', 'User-Agent': 'Boto/2.34.0 Python/2.7.3 Linux/3.8.0-44-generic'}   
    13/Jan/2015:20:05:07 +0000 DEBUG: Token: None
    13/Jan/2015:20:05:07 +0000 DEBUG: StringToSign:
    HEAD

Now let’s query the registry for the newly uploaded image

$ curl -k https://dockeradmin:xxxxxxx@docker.example.com/v1/repositories/mydocker/debian/tags/wheezy

    "3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63"

$ curl http://localhost:5000/v1/search
    {"num_results": 1, "query": "", "results": [{"description": "", "name": "mydocker/debian"}]}

Also let’s see if our Redis server is caching the image.

redis 127.0.0.1:6379> keys *
 1) "cache_path:/tmp/registryrepositories/mydocker/debian/tag_wheezy"
 2) "cache_path:/registry1images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/ancestry"
 3) "cache_path:/registry1images/1aeada4477158496dc31ee5c6e7174240140d83fddf94bc57fc02bee1b04e44f/json"
 4) "cache_path:/registryimages/511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158/ancestry"
 5) "diff-worker"
 6) "cache_path:/registry1images/511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158/json"
 7) "cache_path:/registry1repositories/mydocker/debian/_index_images"
 8) "cache_path:/registryimages/511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158/_inprogress"
 9) "cache_path:/registry1images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/_checksum"
10) "cache_path:/registry1repositories/mydocker/debian/tagwheezy_json"
11) "cache_path:/registryimages/511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158/_checksum"
12) "cache_path:/registry1images/479215127fa7b852902ed734f3a7ac69177c0d4d9446ad3a1648938230c3c8ab/ancestry"
13) "cache_path:/registry1repositories/mydocker/debian/tag_wheezy"
14) "cache_path:/registry1images/1aeada4477158496dc31ee5c6e7174240140d83fddf94bc57fc02bee1b04e44f/ancestry"
15) "cache_path:/registry1images/3192d5ea7137e4f47f4624a5cc7786af2159a44f49511aeed28aa672416cec63/json"
16) "cache_path:/registryrepositories/mydocker/debian/_index_images"
17) "cache_path:/registryimages/511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158/json"
18) "cache_path:/registry1images/511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158/ancestry"
19) "cache_path:/registry1images/479215127fa7b852902ed734f3a7ac69177c0d4d9446ad3a1648938230c3c8ab/_checksum"
20) "cache_path:/registry1images/1aeada4477158496dc31ee5c6e7174240140d83fddf94bc57fc02bee1b04e44f/_checksum"
21) "cache_path:/registry1images/511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158/_checksum"
22) "cache_path:/registry1images/479215127fa7b852902ed734f3a7ac69177c0d4d9446ad3a1648938230c3c8ab/json"

Now, for those who want a Continuous Integration system, we can set up Jenkins to build automated images and upload them to our private registry, and use Mesos/CoreOS to deploy the images throughout our infrastructure in a fully automated fashion.


SIP Trunking With PLIVO and FreeSwitch

It's been more than a month since I joined the DevOps family at Plivo. Since I'm pretty new to telecom technology, I've been digging around it, and this time I decided to play around with FreeSwitch, a free and open source communications platform for building voice and messaging products. Thanks to Anthony Minessale for designing and open-sourcing such a powerful application. FreeSwitch is well documented, and there are pretty good blogs available on how to set up a PBX using FreeSwitch. This time I'm going to explain how to make a private FreeSwitch server use Plivo as a SIP trunking service.

A bit about Plivo: Plivo is a cloud-based API platform for building voice- and SMS-enabled applications. Plivo provides Application Programming Interfaces (APIs) to make and receive calls, send SMS, make conference calls, and more. These APIs are used in conjunction with XML responses to control the flow of a call or a message. We can create Session Initiation Protocol (SIP) endpoints to perform the telephony operations, and the APIs are platform independent and can be used from any programming environment such as PHP, Ruby, Python, etc. It also provides helper libraries for these languages.

First we need a valid Plivo account. Once we have it, we can log into the Plivo cloud service, go to the "Endpoints" tab, create a SIP endpoint, and attach a DirectDial app to it. Once this is done we can go ahead and start setting up the FreeSwitch instance.

Installing FreeSwitch

Clone the official FreeSwitch GitHub repo and compile from source.

$ git clone git://git.freeswitch.org/freeswitch.git && cd freeswitch

$ ./bootstrap.sh && ./configure --prefix=/usr/local/freeswitch

$ make && make install

$  make all cd-sounds-install cd-moh-install    # optional, run this if you want IVR and Music on Hold features

Now, if we have more than one IP address on the machine and want to bind to a particular IP, we need to modify two files: "/usr/local/freeswitch/conf/sip_profiles/external.xml" and "/usr/local/freeswitch/conf/sip_profiles/internal.xml". In both files, set the "rtp-ip" and "sip-ip" params to the bind IP.
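For example, assuming the box should bind to 192.168.56.11 (by default these params point at the $${local_ip_v4} variable), the relevant lines in each profile would look roughly like this:

<param name="rtp-ip" value="192.168.56.11"/>
<param name="sip-ip" value="192.168.56.11"/>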

By default, FreeSwitch will create a set of users with numerical usernames, i.e. 1000-1019, so we can test basic connectivity by making a call between two of these user accounts: register two of the accounts in two softphones, make a test call, and make sure that FreeSwitch is working fine. We can use the FS binary to start the FreeSwitch service in the foreground.

$ /usr/local/freeswitch/bin/freeswitch

Configuring Gateway

Once FreeSwitch is working fine, we can start configuring the SIP trunking via Plivo. First we need to create an external gateway to connect to Plivo; I'm going to use the SIP endpoint created on the Plivo cloud to initiate the connection. The SIP domain for Plivo is "phone.plivo.com". We need to create a gateway config: go to "/usr/local/freeswitch/conf/sip_profiles/external/" and create an XML gateway config file there. My config file name is plivo. Below is its content.

<include>
  <gateway name="plivo">
  <param name="realm" value="phone.plivo.com" />
  <param name="username" value="<Plivo_SIP_Endpoint_User_Name" />
  <param name="password" value="<Plivo_SIP_EndPoint_Password" />
  <param name="register" value="false" />
  <param name="ping" value="5" />
  <param name="ping-max" value="3" />
  <param name="retry-seconds" value="5" />
  <param name="expire-seconds" value="60" />
  <variables>
        <variable name="verbose_sdp" value="true"/>
  </variables>
  </gateway>
</include>

There are a lot of other parameters which we can add here, like caller ID etc. Replace the username and password with the Plivo endpoint credentials. If we want to keep this endpoint registered, we can set the register param to true and set the expiry time in expire-seconds, so that FS will keep re-registering the endpoint with Plivo's registrar server. Once the gateway file is created, we can either restart the service or run "reload mod_sofia" from the FS CLI. If the FreeSwitch service is started in the foreground, we get the FS CLI directly, so we can run the reload command on it.
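If FreeSwitch is running in the background instead, the same reload can be issued non-interactively with fs_cli (assuming the install prefix used above):

$ /usr/local/freeswitch/bin/fs_cli -x "reload mod_sofia"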

Setting up Dialplan

Now we have the gateway added; next we need to set up the dialplan to route outgoing calls through Plivo. Go to the "/usr/local/freeswitch/conf/dialplan/" folder and add an extension in the "public.xml" file. Below is a sample extension config.

    <extension name="Calls to Plivo">
      <condition field="destination_number" expression="^(<ur_regex_here>)$">
        <action application="transfer" data="$1 XML default"/>
      </condition>
    </extension>

So now all calls matching the regex will be transferred to the default dialplan. On the default dialplan, I'm creating an extension and will use FreeSwitch's "bridge" application to bridge the call with Plivo using the Plivo gateway. So in "default.xml" add the below extension.

       <extension name="Dial through Plivo">
         <condition field="destination_number" expression="^(<ur_regex_here>)$">
           <action application="bridge" data="sofia/gateway/plivo/$1"/>
         </condition>
       </extension>

Now we can restart the FS service, or reload "mod_dialplan_xml" from the FS CLI. Once the changes are in effect, we can test whether the call is getting routed via Plivo: configure a softphone with a default FS user and make an outbound call matching the regex we set for routing to Plivo. If all works, we should get a call on the destination number. We can check the FS logs at "/usr/local/freeswitch/log/freeswitch.log".
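For example, tailing the log while placing the test call makes it easy to watch the routing decisions:

$ tail -f /usr/local/freeswitch/log/freeswitch.log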

If the Regex is matched, we can see the below lines in the log.

1f249a72-9abf-4713-ba69-c2881111a0e8 Dialplan: sofia/internal/1001@192.168.56.11 parsing [public->Calls from BoxB] continue=false
1f249a72-9abf-4713-ba69-c2881111a0e8 EXECUTE sofia/internal/1001@192.168.56.11 transfer(xxxxxxxxxxxx XML default)
1f249a72-9abf-4713-ba69-c2881111a0e8 Dialplan: sofia/internal/1001@192.168.56.11 Regex (PASS) [Dial through Plivo] destination_number(xxxxxxxxxxxx) =~ /^(xxxxxxxxxxxx)$/ break=on-false
1f249a72-9abf-4713-ba69-c2881111a0e8 Dialplan: sofia/internal/1001@192.168.56.11 Action bridge(sofia/external/plivo/xxxxxxxxxxxx@phone.plivo.com)   
1f249a72-9abf-4713-ba69-c2881111a0e8 EXECUTE sofia/internal/1001@192.168.56.11 bridge(sofia/external/plivo/xxxxxxxxxxxx@phone.plivo.com)
1f249a72-9abf-4713-ba69-c2881111a0e8 2014-02-14 06:32:48.244757 [DEBUG] mod_sofia.c:4499 [zrtp_passthru] Setting a-leg inherit_codec=true
1f249a72-9abf-4713-ba69-c2881111a0e8 2014-02-14 06:32:48.244757 [DEBUG] mod_sofia.c:4502 [zrtp_passthru] Setting b-leg absolute_codec_string=GSM@8000h@20i@13200b,PCMA@8000h@20i@64000b,PCMU@8000h@20i@64000b

We can also set the caller ID on the DirectDial app which we have mapped to the SIP endpoint. For incoming calls, create an app that forwards the calls to one of the users present in FreeSwitch, using Plivo's Dial XML, so the XML should look something like the snippet below. I will be writing a more detailed blog about inbound calls once I've tested it out completely.

<Response>
  <Dial>
    <User>FSuser@FSserverIP</User>
  </Dial>
</Response>

But for security, we need to allow connections only from Plivo's servers, so we need to whitelist those IPs in the FS ACL. We can allow the IPs in the "acl.conf.xml" file at "/usr/local/freeswitch/conf/autoload_configs". Also make sure that the FS server is reachable on a public IP, at least for the Plivo servers that will forward the calls.
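A rough sketch of such an ACL list in acl.conf.xml (the list name and CIDR are placeholders; the sip profile then has to reference the list, e.g. via its apply-inbound-acl param):

<list name="plivo" default="deny">
  <node type="allow" cidr="x.x.x.x/32"/>
</list>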


Using Mongo Discovery Method in MCollective

I've been playing around with MCollective for the past few months, but this time I wanted to try out the Mongo discovery method. The response time is quite a bit faster with the Mongo discovery method, so I really wanted to try it out. Setting up an MCollective server/client is pretty simple; you can go through my previous blog. Now we need to install the Meta registration plugin on all the MCollective servers: just download and copy meta.rb into the MCollective registration plugin folder. In my case I have Debian-based machines, so the location is /usr/share/mcollective/plugins/mcollective/registration/. This will make the metadata available to other nodes.

Now add the below three lines into the server.cfg of all the MCollective servers.

registration = Meta
registerinterval = 300
factsource = facter

Now install the mongodb registration agent on one of the nodes, which will be our slave node. Do not install this on all the nodes. There is a small bug in this agent, so follow the steps mentioned here and modify the registration.rb file. Now install the MongoDB server on the slave node, and add the below lines to the server.cfg on the slave machine.

plugin.registration.mongohost = localhost
plugin.registration.mongodb = puppet
plugin.registration.collection = nodes

Now restart the mcollective service. If we increase the log level to debug, we can see the below lines in mcollective.log. This indicates that the plugin is getting activated and is receiving requests from the machines whose FQDNs are shown in the lines below.

D, [2012-11-29T15:51:34.391762 #12731] DEBUG -- : registration.rb:97:in `handlemsg' Updated data for host vagrant-debian-squeeze.vagrantup.com with id 50b650d4454bc346e4000002 in 0.0027310848236084s
D, [2012-11-29T15:50:05.810180 #12731] DEBUG -- : registration.rb:97:in `handlemsg' Updated data for host ubuntults.vargrantup.com with id 50b650c0454bc346e4000001 in 0.00200200080871582s

Initially I used the default registration.rb file which I downloaded from GitHub, but it was giving me an error: handlemsg Got stats without a FQDN in facts. So don't forget to modify the registration.rb.

Now connect to MongoDB and verify that the nodes are getting registered in it.

$ mongo
 MongoDB shell version: 2.0.4
 connecting to: test
 > use puppet
 switched to db puppet
 > db.nodes.find().count()
 2

So now both my master and slave have been registered in MongoDB. In order to use the Mongo discovery method, we need to install the mongodb discovery plugin and also enable direct addressing mode, so we need to add direct_addressing = 1 in the server.cfg file.

Now we can use the --dm option to specify the discovery method.

$ mco rpc rpcutil ping --dm=mongo -v
  Discovering hosts using the mongo method .... 2

  * [ ========================================================> ] 2 / 2


  vagrant-debian-squeeze                  : OK
      {:pong=>1354187911}

  ubuntults                               : OK
      {:pong=>1354187880}


  ---- rpcutil#ping call stats ----
            Nodes: 2 / 2
      Pass / Fail: 2 / 0
       Start Time: Thu Nov 29 16:48:00 +0530 2012
   Discovery Time: 68.48ms
       Agent Time: 108.35ms
       Total Time: 176.83ms

$ mco rpc rpcutil ping --dm=mc -v
  Discovering hosts using the mc method for 2 second(s) .... 2

  * [ ========================================================> ] 2 / 2


  vagrant-debian-squeeze                  : OK
      {:pong=>1354188083}

  ubuntults                               : OK
      {:pong=>1354188053}


  ---- rpcutil#ping call stats ----
            Nodes: 2 / 2
      Pass / Fail: 2 / 0
       Start Time: Thu Nov 29 16:50:52 +0530 2012
   Discovery Time: 2004.24ms
       Agent Time: 104.28ms
       Total Time: 2108.51ms

From the above commands, we can see the difference in the Discovery Time.

Now, for those who want a GUI, R.I. Pienaar has developed a web GUI called Mco-Rpc-Web. He has uploaded a few screencasts which give a short demo of all of these.


Hubot @ your service

A few months back I got a chance to attend rootconf2012, where I first came to know about "Hubot", developed by GitHub. I was very much interested, especially in the gtalk plugin, with which we can integrate Hubot with a Gmail account. We can make Hubot listen to every word and make it respond back. There are plenty of default hubot-scripts which we can use to play around with it.

Configuring Hubot is very simple.

First, we’ll install all of the dependencies necessary to get Hubot up and running.

apt-get install build-essential libssl-dev git-core redis-server libexpat1-dev curl libcurl4-nss-dev libcurl4-openssl-dev

 

Now, download and extract Node.js.

wget http://nodejs.org/dist/v0.9.2/node-v0.9.2.tar.gz

tar xf node-v0.9.2.tar.gz -C /opt && cd /opt/node-v0.9.2

./configure && make && make install

For the Gtalk plugin we need "node-xmpp", so we can use "npm" to install it. We also need CoffeeScript.

npm install node-xmpp

npm install -g coffee-script

Now clone the Hubot repository from GitHub.

git clone git://github.com/github/hubot.git

Now go inside the hubot folder and, using the "hubot" binary inside the bin folder, create a deployable hubot.

./bin/hubot -c /opt/hubot

Now go inside the new hubot folder, open "package.json" in a text editor, and add the hubot-gtalk dependency package.

"dependencies": {
    "hubot": "2.3.2",
    "hubot-gtalk": ">= 0.0.1",
    "hubot-scripts": ">= 2.1.0",
    "coffee-script": "1.3.3",
    "optparse": "1.0.3",
    "scoped-http-client": "0.9.7",
    "log": "1.3.0",
    "connect": "2.3.4",
    "connect_router": "1.8.6"
}

Now use "npm install" to install the dependencies.

Before starting Hubot, we need to configure the below parameters for the Gtalk adapter.

The GTalk adapter requires only the following environment variables.

  • HUBOT_GTALK_USERNAME (should be the full email address, e.g. username@gmail.com)
  • HUBOT_GTALK_PASSWORD

And the following are optional.

  • HUBOT_GTALK_WHITELIST_DOMAINS
  • HUBOT_GTALK_WHITELIST_USERS
  • HUBOT_GTALK_REGEXP_TRANSFORMATIONS

Once all the parameters are set, we can start Hubot with the Gtalk adapter.
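For example, the required variables can simply be exported in the shell before launching Hubot (placeholder values shown):

export HUBOT_GTALK_USERNAME=hubot@example.com
export HUBOT_GTALK_PASSWORD=xxxxxxxx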

./bin/hubot -a gtalk

Now Hubot is online with Gtalk. We can add Hubot's Gmail account to our Gtalk account and start playing around with it. Hubot comes with a bunch of default scripts; if we type "help", we get a list of options for each of these scripts.

Today I was able to execute some Bash commands using my custom coffee scripts, which gave me some weird ideas about using Hubot for "ChatOps". Let's see how it works; once it's done I'll update it in my blog. Wait for more…


A Small Munin-Graphite Client

Yesterday I found a munin-graphite client, which is used in the carnin-eye project. It just needs one simple "client.yml" file, whose location can be mentioned in the munin-graphite.rb file. You can get the munin-graphite.rb file from the carnin-eye GitHub page.

We just have to mention the munin-node details in the client.yml file.


Finally, we have to create a cron job to execute the munin-graphite.rb file, which will populate our munin data into Graphite.
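A minimal crontab entry might look like the following, assuming the script was dropped into /opt/munin-graphite and a five-minute munin polling interval:

*/5 * * * * /usr/bin/ruby /opt/munin-graphite/munin-graphite.rb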


Setting Up MCollective with ActiveMQ in Ubuntu 12.04

Setting up MCollective with ActiveMQ is very easy; the following steps worked perfectly on Ubuntu 12.04. The only dependency is that either OpenJDK or Sun Java must be installed.

ActiveMQ

Download the latest ActiveMQ tarball from the ActiveMQ website; I've used ActiveMQ 5.6 from the link below.

http://apache.techartifact.com/mirror/activemq/apache-activemq/5.6.0/apache-activemq-5.6.0-bin.tar.gz

Untar the file to any folder, say /opt, and rename the extracted directory to "activemq", as shown below.
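For example (adjust the filename and path to wherever the tarball was downloaded):

tar xzf apache-activemq-5.6.0-bin.tar.gz -C /opt
mv /opt/apache-activemq-5.6.0 /opt/activemq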

Now create a symlink to the "activemq" binary inside "/etc/init.d":

ln -s /opt/activemq/bin/activemq /etc/init.d/activemq

MCollective requires ActiveMQ with "STOMP" support. Edit the activemq.xml inside the activemq directory and add the stomp transport to it. You can get a sample from the Puppet Labs website:

http://docs.puppetlabs.com/mcollective/reference/basic/gettingstarted.html

Copy the contents and paste them into the activemq.xml inside the activemq directory — in my case, /opt/activemq/conf/activemq.xml.
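The key piece is a stomp transport connector; a rough sketch of the relevant element (port 6163 is what the MCollective docs of that era, and the server.cfg below, use):

<transportConnectors>
    <transportConnector name="stomp" uri="stomp://0.0.0.0:6163"/>
</transportConnectors>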

Now start the activemq service using the init script (symlink we created) inside the init.d.

MCollective Server

For the MCollective server we need to install two packages: mcollective-common and mcollective. Download the latest packages from "http://downloads.puppetlabs.com/mcollective/" and install them.

The config file will be present at "/etc/mcollective/server.cfg". Edit the file; the stomp host should be the machine where we installed ActiveMQ, and the stomp port will be "6163" (this can be changed by modifying the activemq.xml file).

Also modify the stomp user and password to the following:

plugin.stomp.user = mcollective

plugin.stomp.password = marionette

The above password can be changed by modifying the activemq.xml file.
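Putting the pieces together, the connector-related part of server.cfg would look roughly like this (a sketch; the hostname is a placeholder):

connector = stomp
plugin.stomp.host = activemq.example.com
plugin.stomp.port = 6163
plugin.stomp.user = mcollective
plugin.stomp.password = marionette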

And restart the mcollective service.

MCollective Client

For the MCollective client, download and install the mcollective-common and mcollective-client packages, and edit the client.cfg file inside the /etc/mcollective folder.

Now we can use the "mco" command to check connectivity; we can use mco find to find the MCollective servers.
