
Scaling HAProxy with Docker, confd, Serf, and etcd

Containers and microservices are among the hottest topics in the IT world right now, and container technologies like LXC, Docker, and rkt are being used heavily in production. These technologies are easy to apply to stateless services: even if a container crashes, a new one can be launched within seconds and start serving traffic. Containers are widely used for serving web traffic. Tools like Marathon provide HAProxy-based service discovery, reloading the HAProxy service as containers start and stop in the Mesos cluster. A simpler version of the same approach is to use etcd with Docker and HAProxy.

We can apply the same logic here: when a container is launched, we save the IP and port that is mapped from the host machine, update the HAProxy config file dynamically using confd, and reload the HAProxy service. Now imagine a container is terminated for some reason: the HAProxy health check will fail, but the entry will still be there in haproxy.cfg. Since we will be launching new containers either manually or using frameworks like Marathon/Helios, there is no need to keep the old entries in haproxy.cfg or in etcd.

But we need a way to keep track of containers that get terminated, so that we can remove those entries. Enter Serf. Serf is a decentralized solution for cluster membership, failure detection, and orchestration. Serf relies on an efficient and lightweight gossip protocol to communicate with nodes. The Serf agents periodically exchange messages with each other in much the same way that a zombie apocalypse would occur: it starts with one zombie but soon infects everyone. Serf is able to quickly detect failed members and notify the rest of the cluster. This failure detection is built into the heart of the gossip protocol used by Serf.

We will be running a Serf agent on each container. These Serf clusters are isolated, i.e. we can control the joining members, thereby maintaining separate Serf clusters. So if a container goes down, the other agents will receive a member-failed event, and we can define an event handler that performs the key removal from etcd, which in turn triggers confd. confd will then remove the server entry from the HAProxy config file.

Now let's see this in action. I'm using Ubuntu 14.04 as the base host.

Setting up HAProxy and the Docker Web Containers

Let's install HAProxy on the base host:

$ apt-get install haproxy

Enable the HAProxy service in /etc/default/haproxy and start it.
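On Ubuntu 14.04 the init script won't start HAProxy until it is enabled in /etc/default/haproxy; if I recall correctly the flag defaults to 0, so flip it first:

$ sed -i 's/ENABLED=0/ENABLED=1/' /etc/default/haproxy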

$ /etc/init.d/haproxy start

Now install Docker:

$ curl -sSL https://get.docker.com/ | sh

Once Docker is installed, let's pull an Ubuntu image from the Docker registry:

$ docker pull ubuntu:14.04

Setting up etcd

Download the latest version of etcd:

$ curl -L  https://github.com/coreos/etcd/releases/download/v2.1.3/etcd-v2.1.3-linux-amd64.tar.gz -o etcd-v2.1.3-linux-amd64.tar.gz

$ tar xvzf etcd-v2.1.3-linux-amd64.tar.gz

Let's start the etcd service:

$ ./etcd -name default --listen-peer-urls "http://172.17.42.1:2380,http://172.17.42.1:7001" --listen-client-urls "http://172.17.42.1:2379,http://172.17.42.1:4001" --advertise-client-urls "http://172.17.42.1:2379,http://172.17.42.1:4001" --initial-advertise-peer-urls "http://172.17.42.1:2380,http://172.17.42.1:7001" --initial-cluster 'default=http://172.17.42.1:2380,default=http://172.17.42.1:7001'

where 172.17.42.1 is my host machine's IP.
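Before wiring anything else up, a quick sanity check against etcd's v2 keys API doesn't hurt (response trimmed):

$ curl -s -X PUT http://172.17.42.1:4001/v2/keys/ping -d value=pong
{"action":"set","node":{"key":"/ping","value":"pong",...}}

$ curl -s -X DELETE http://172.17.42.1:4001/v2/keys/ping > /dev/null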

Setting up confd

Let's set up the confd service. For confd, we basically need two files: 1) a toml file, which contains info like which keys to look up in etcd, which etcd cluster to query, the path to the template file, etc., and 2) a tmpl (template) file.

Download the latest version of confd

$ wget https://github.com/kelseyhightower/confd/releases/download/v0.10.0/confd-0.10.0-linux-amd64

$ cp confd-0.10.0-linux-amd64 /usr/local/bin/confd && chmod +x /usr/local/bin/confd

$ mkdir -p /etc/confd/{conf.d,templates}

Now let's create the toml and tmpl files.

toml: /etc/confd/conf.d/myconfig.toml

[template]
src = "haproxy.cfg.tmpl"
dest = "/etc/haproxy/haproxy.cfg"
keys = [
  "/proxy/frontend",
]
reload_cmd = "/etc/init.d/haproxy reload"

tmpl: /etc/confd/templates/haproxy.cfg.tmpl

global
    log /dev/log    local0
    log /dev/log    local1 notice
    chroot /var/lib/haproxy
    user haproxy
    group haproxy
    daemon

defaults
    log    global
    mode    http
    option    httplog
    option    dontlognull
        contimeout 5000
        clitimeout 50000
        srvtimeout 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend localnodes
    bind *:80
    mode http
    default_backend nodes

backend nodes
    mode http
    balance roundrobin
    option forwardfor
{{range gets "/proxy/frontend/*"}}
    server {{base .Key}} {{.Value}} check
{{end}}

Start the service in the foreground by running the command below:

$ confd -backend etcd -node 172.17.42.1:4001 --log-level="debug" -interval=10  # confd polls etcd every 10 seconds

For now, there is no key present in etcd. confd compares the md5sum of the current file with that of the expected state of the file, and if there is any difference, it updates the file. Below are the debug logs from the confd service:

2015-09-07T07:09:48Z vagrant-ubuntu-trusty-64 ./confd[14378]: INFO Backend set to etcd
2015-09-07T07:09:48Z vagrant-ubuntu-trusty-64 ./confd[14378]: INFO Starting confd
2015-09-07T07:09:48Z vagrant-ubuntu-trusty-64 ./confd[14378]: INFO Backend nodes set to 172.17.42.1:4001
2015-09-07T07:09:48Z vagrant-ubuntu-trusty-64 ./confd[14378]: DEBUG Loading template resources from confdir /etc/confd
2015-09-07T07:09:48Z vagrant-ubuntu-trusty-64 ./confd[14378]: DEBUG Loading template resource from /etc/confd/conf.d/myconfig.toml
2015-09-07T07:09:48Z vagrant-ubuntu-trusty-64 ./confd[14378]: DEBUG Retrieving keys from store
2015-09-07T07:09:48Z vagrant-ubuntu-trusty-64 ./confd[14378]: DEBUG Key prefix set to /
2015-09-07T07:09:48Z vagrant-ubuntu-trusty-64 ./confd[14378]: DEBUG Using source template /etc/confd/templates/haproxy.cfg.tmpl
2015-09-07T07:09:48Z vagrant-ubuntu-trusty-64 ./confd[14378]: DEBUG Compiling source template /etc/confd/templates/haproxy.cfg.tmpl
2015-09-07T07:09:48Z vagrant-ubuntu-trusty-64 ./confd[14378]: DEBUG Comparing candidate config to /etc/haproxy/haproxy.cfg
2015-09-07T07:09:48Z vagrant-ubuntu-trusty-64 ./confd[14378]: INFO /etc/haproxy/haproxy.cfg has md5sum aa00603556cb147c532d5d97f90aaa17 should be fadab72f5cef00c13855a27893d6e39c
2015-09-07T07:09:48Z vagrant-ubuntu-trusty-64 ./confd[14378]: INFO Target config /etc/haproxy/haproxy.cfg out of sync
2015-09-07T07:09:48Z vagrant-ubuntu-trusty-64 ./confd[14378]: DEBUG Overwriting target config /etc/haproxy/haproxy.cfg
2015-09-07T07:09:48Z vagrant-ubuntu-trusty-64 ./confd[14378]: DEBUG Running /etc/init.d/haproxy reload
2015-09-07T07:09:48Z vagrant-ubuntu-trusty-64 ./confd[14378]: DEBUG " * Reloading haproxy haproxy\n   ...done.\n"

Configure the Docker Web Containers

Since our containers will eventually be running on a different host, HAProxy needs to know the host IP where the containers reside. If the host has a public IP, we can easily perform a lookup and get it; in my current scenario I'm running in a Vagrant VM, so I'll be passing the host IP via a Docker ENV variable at startup. If we do strict port mapping like 80:80 or 443:443, we won't be able to run more than one web container on a single host. So if we go with random ports, HAProxy needs to know these random ports. The port info will be stored in the etcd key store, so confd can make use of it.
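As an aside, if we let Docker pick the random ports itself (with -P, or -p with no host port), we can look up what it chose with docker port; assuming an image that EXPOSEs port 80, something like:

$ docker run -d -P --name web-test ubuntu:apache
$ docker port web-test 80
0.0.0.0:32768     # the randomly assigned host port

In this post, though, I'm assigning the ports explicitly so they are known up front.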

Let's start our first container:

  $ docker run --rm=true -it -p 9000:80 -p 8010:7373 -p 8011:5000 -h=web1 -e "FD_IP=192.168.33.102" -e "FD_PORT=9000" -e "SERF_RPC_PORT=8010" -e "SERF_PORT=8011" ubuntu:apache

Here, host port 9000 is mapped to the container's port 80, and ports 8010 and 8011 are for the Serf agent (RPC and gossip, respectively); FD_IP is the host IP.

Now, let's set up the services inside the container:

  $ apt-get update && apt-get install -y apache2 wget unzip

  $ wget https://dl.bintray.com/mitchellh/serf/0.6.4_linux_amd64.zip

  $ unzip 0.6.4_linux_amd64.zip

Let's test Serf in the foreground in a screen/tmux session:

$ ./serf agent -node=web1 -bind=0.0.0.0:5000 -rpc-addr=0.0.0.0:7373 -log-level=debug

If the Serf agent starts fine, let's go ahead and create an event handler for the member-failed event, which fires when a container crashes. Below is a simple event handler script:

#!/usr/bin/python
# serf_handler.py
import sys
import os
import requests

base_url = "http://172.17.42.1:4001"   # Use any config mgmt tool or user-data script to populate this value for each host
base_key = "/v2/keys"
cluster_keys = ["/proxy/frontend/", "/proxy/serf/", "/proxy/serf-rpc/"]    # etcd keys that have to be removed

for line in sys.stdin:
    event_type = os.environ['SERF_EVENT']
    if event_type == 'member-failed':
        address = line.split('\t')[0]          # first field is the node name of the failed member
        for key in cluster_keys:
            full_url = base_url + base_key + key + address
            print full_url
            r = requests.get(full_url)
            if r.status_code == 200:
                r = requests.delete(full_url)
                print "Key Successfully removed from ETCD"
            elif r.status_code == 404:
                print "Key already removed by another Serf Agent"

We also need a bootstrapping script that adds the necessary keys to etcd when the container starts for the first time. This script has to be invoked on every container start, so that confd is aware of new containers as they come up. Below is a simple shell script for the same:

#!/bin/bash
# etcd_bootstrap.sh
HOST=$HOSTNAME
ETCD_VAL="$FD_IP:$FD_PORT"
ETCD_SERF_PORT="$FD_IP:$SERF_PORT"
ETCD_SERF_RPC_PORT="$FD_IP:$SERF_RPC_PORT"
# curl -f exits non-zero on HTTP errors, so the status checks below actually work
curl -s -f -X PUT "http://172.17.42.1:4001/v2/keys/proxy/frontend/$HOST" -d value=$ETCD_VAL > /dev/null
exit_stat=$?
if [ $exit_stat -eq 0 ]; then
  curl -s -X POST --data-urlencode 'payload={"channel": "#docker", "username": "etcdbot", "text": "`Host: '"$HOST"'`, FD key has been added successfully to ETCD cluster", "icon_emoji": ":ghost:"}' https://hooks.slack.com/services/xxxxxxx/yyyyyyy/xyxyxyxyxyxyxyxyxyxyxyx
else
  curl -s -X POST --data-urlencode 'payload={"channel": "#docker", "username": "etcdbot", "text": "`Host: '"$HOST"'`, Failed to add FD key to the ETCD cluster", "icon_emoji": ":ghost:"}' https://hooks.slack.com/services/xxxxxxx/yyyyyyy/xyxyxyxyxyxyxyxyxyxyxyx
fi
curl -s -f -X PUT "http://172.17.42.1:4001/v2/keys/proxy/serf/$HOST" -d value=$ETCD_SERF_PORT > /dev/null
exit_stat=$?
if [ $exit_stat -eq 0 ]; then
  curl -s -X POST --data-urlencode 'payload={"channel": "#docker", "username": "etcdbot", "text": "`Host: '"$HOST"'`, SERF PORT key has been added successfully to ETCD cluster", "icon_emoji": ":ghost:"}' https://hooks.slack.com/services/xxxxxxx/yyyyyyy/xyxyxyxyxyxyxyxyxyxyxyx
else
  curl -s -X POST --data-urlencode 'payload={"channel": "#docker", "username": "etcdbot", "text": "`Host: '"$HOST"'`, Failed to add SERF PORT key to the ETCD cluster", "icon_emoji": ":ghost:"}' https://hooks.slack.com/services/xxxxxxx/yyyyyyy/xyxyxyxyxyxyxyxyxyxyxyx
fi
curl -s -f -X PUT "http://172.17.42.1:4001/v2/keys/proxy/serf-rpc/$HOST" -d value=$ETCD_SERF_RPC_PORT > /dev/null
exit_stat=$?
if [ $exit_stat -eq 0 ]; then
  curl -s -X POST --data-urlencode 'payload={"channel": "#docker", "username": "etcdbot", "text": "`Host: '"$HOST"'`, SERF RPC PORT key has been added successfully to ETCD cluster", "icon_emoji": ":ghost:"}' https://hooks.slack.com/services/xxxxxxx/yyyyyyy/xyxyxyxyxyxyxyxyxyxyxyx
else
  curl -s -X POST --data-urlencode 'payload={"channel": "#docker", "username": "etcdbot", "text": "`Host: '"$HOST"'`, Failed to add SERF RPC PORT key to the ETCD cluster", "icon_emoji": ":ghost:"}' https://hooks.slack.com/services/xxxxxxx/yyyyyyy/xyxyxyxyxyxyxyxyxyxyxyx
fi

Now let's add the necessary keys to etcd:

$ bash /opt/scripts/etcd_bootstrap.sh
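We can quickly confirm the frontend key landed in etcd; the value should be the HOST:PORT pair we exported (response trimmed):

$ curl -s http://172.17.42.1:4001/v2/keys/proxy/frontend/web1
{"action":"get","node":{"key":"/proxy/frontend/web1","value":"192.168.33.102:9000",...}}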

Once the keys are added, let's start the Serf agent with the event handler script:

$ ./serf agent -node=web1 -bind=0.0.0.0:5000 -rpc-addr=0.0.0.0:7373 -log-level=debug -event-handler=/opt/scripts/serf_handler.py

# Script output
==> Starting Serf agent...
==> Starting Serf agent RPC...
==> Serf agent running!
    Node name: 'web1'
    Bind addr: '0.0.0.0:5000'
     RPC addr: '0.0.0.0:7373'
    Encrypted: false
     Snapshot: false
      Profile: lan

==> Log data will now stream in as it occurs:

  2015/09/07 07:37:12 [INFO] agent: Serf agent starting
  2015/09/07 07:37:12 [INFO] serf: EventMemberJoin: web1 172.17.0.20
  2015/09/07 07:37:13 [INFO] agent: Received event: member-join

Now we have a running web container, and since we have added the keys to etcd, let's verify that confd has detected the changes. Below are the logs from confd:

2015-09-07T07:40:17Z vagrant-ubuntu-trusty-64 ./confd[14970]: DEBUG Retrieving keys from store
2015-09-07T07:40:17Z vagrant-ubuntu-trusty-64 ./confd[14970]: DEBUG Key prefix set to /
2015-09-07T07:40:17Z vagrant-ubuntu-trusty-64 ./confd[14970]: DEBUG Using source template /etc/confd/templates/haproxy.cfg.tmpl
2015-09-07T07:40:17Z vagrant-ubuntu-trusty-64 ./confd[14970]: DEBUG Compiling source template /etc/confd/templates/haproxy.cfg.tmpl
2015-09-07T07:40:17Z vagrant-ubuntu-trusty-64 ./confd[14970]: DEBUG Comparing candidate config to /etc/haproxy/haproxy.cfg
2015-09-07T07:40:17Z vagrant-ubuntu-trusty-64 ./confd[14970]: INFO /etc/haproxy/haproxy.cfg has md5sum fe4fbaa3f5782c7738a365010a5f6d48 should be fadab72f5cef00c13855a27893d6e39c
2015-09-07T07:40:17Z vagrant-ubuntu-trusty-64 ./confd[14970]: INFO Target config /etc/haproxy/haproxy.cfg out of sync
2015-09-07T07:40:17Z vagrant-ubuntu-trusty-64 ./confd[14970]: DEBUG Overwriting target config /etc/haproxy/haproxy.cfg
2015-09-07T07:40:17Z vagrant-ubuntu-trusty-64 ./confd[14970]: DEBUG Running /etc/init.d/haproxy reload
2015-09-07T07:40:17Z vagrant-ubuntu-trusty-64 ./confd[14970]: DEBUG " * Reloading haproxy haproxy\n   ...done.\n"
2015-09-07T07:40:17Z vagrant-ubuntu-trusty-64 ./confd[14970]: INFO Target config /etc/haproxy/haproxy.cfg has been updated

Also, let's confirm that the haproxy.cfg file has been updated:

# Updated part from the file
backend nodes
    mode http
    balance roundrobin
    option forwardfor

    server web1 192.168.33.102:9000 check    # confd has updated the server entry based on the key present in etcd

Now let's add the second web container using the same steps we followed for the first one, making sure the ports are updated in the run command; for the second container, I'm using 9001 as the web port and 8020/8021 as the Serf ports.
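For reference, the run command would look something like this, assuming the same ubuntu:apache image and env variables as before, with only the hostname and ports changed:

  $ docker run --rm=true -it -p 9001:80 -p 8020:7373 -p 8021:5000 -h=web2 -e "FD_IP=192.168.33.102" -e "FD_PORT=9001" -e "SERF_RPC_PORT=8020" -e "SERF_PORT=8021" ubuntu:apache

Once the bootstrapping script has been executed successfully inside it, haproxy.cfg should be updated with the second entry: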

backend nodes
    mode http
    balance roundrobin
    option forwardfor

    server web1 192.168.33.102:9000 check

    server web2 192.168.33.102:9001 check

For now, the Serf agents on the two containers are running as standalone services and are not aware of each other. Let's execute the serf join command to form the cluster. Below is a simple Python script for the same:

#!/usr/bin/python
# serf_starter.py
import os
import etcd
import random
import subprocess

node_keys = []
client = etcd.Client(host='172.17.42.1', port=4001)

r = client.read('/proxy/serf-rpc', recursive=True)
for child in r.children:
    node_keys.append(child.key)
node_key = random.choice(node_keys)   # Pick one existing node at random
remote_serf = client.read(node_key)
print remote_serf.value
local_serf = os.environ['FD_IP'] + ":" + os.environ['SERF_PORT']
# Ask the remote agent (via its RPC address) to join our local gossip address
print subprocess.call(["/serf", "join", "-rpc-addr=%s" % remote_serf.value, local_serf])

Run the join script on the second container and watch for the events in the first container's Serf agent logs:

$ /opt/scripts/serf_starter.py

Below are the event logs on the first container,

2015/09/07 07:53:02 [INFO] agent.ipc: Accepted client: 172.17.42.1:37326         # Connection from Web2
2015/09/07 07:53:02 [INFO] agent: joining: [192.168.33.102:8021] replay: false
2015/09/07 07:53:02 [DEBUG] memberlist: Initiating push/pull sync with: 192.168.33.102:8021
2015/09/07 07:53:02 [INFO] serf: EventMemberJoin: web2 172.17.0.35
2015/09/07 07:53:02 [INFO] agent: joined: 1 nodes
2015/09/07 07:53:02 [DEBUG] serf: messageJoinType: web2

Now both agents are connected; let's verify the same:

./serf members     # we can run this command on any one of the containers

web2  172.17.0.35:5000  alive
web1  172.17.0.20:5000  alive

Our Serf cluster is up and running. Now let's terminate one container and see if Serf is able to detect the event. Once Serf detects it, it will call the event handler script, which should remove the keys from etcd. In this case, I'm terminating the second container. We can see the event popping up in the first container's Serf agent logs:

2015/09/07 07:57:37 [INFO] memberlist: Suspect web2 has failed, no acks received
2015/09/07 07:57:39 [INFO] memberlist: Suspect web2 has failed, no acks received
2015/09/07 07:57:41 [INFO] memberlist: Suspect web2 has failed, no acks received
2015/09/07 07:57:42 [INFO] memberlist: Suspect web2 has failed, no acks received
2015/09/07 07:57:42 [INFO] memberlist: Marking web2 as failed, suspect timeout reached
2015/09/07 07:57:42 [INFO] serf: EventMemberFailed: web2 172.17.0.35
2015/09/07 07:57:42 [INFO] serf: attempting reconnect to web2 172.17.0.35:5000
2015/09/07 07:57:43 [INFO] agent: Received event: member-failed
2015/09/07 07:57:43 [DEBUG] agent: Event 'member-failed' script output: http://172.17.42.1:4001/v2/keys/proxy/frontend/web2
Key Successfully removed from ETCD
http://172.17.42.1:4001/v2/keys/proxy/serf/web2
Key Successfully removed from ETCD
http://172.17.42.1:4001/v2/keys/proxy/serf-rpc/web2
Key Successfully removed from ETCD

As per the event handler script output, the keys have been successfully removed from etcd. Let's check the confd logs:

# Confd logs
2015-09-07T07:57:47Z vagrant-ubuntu-trusty-64 ./confd[14970]: DEBUG Retrieving keys from store
2015-09-07T07:57:47Z vagrant-ubuntu-trusty-64 ./confd[14970]: DEBUG Key prefix set to /
2015-09-07T07:57:47Z vagrant-ubuntu-trusty-64 ./confd[14970]: DEBUG Using source template /etc/confd/templates/haproxy.cfg.tmpl
2015-09-07T07:57:47Z vagrant-ubuntu-trusty-64 ./confd[14970]: DEBUG Compiling source template /etc/confd/templates/haproxy.cfg.tmpl
2015-09-07T07:57:47Z vagrant-ubuntu-trusty-64 ./confd[14970]: DEBUG Comparing candidate config to /etc/haproxy/haproxy.cfg
2015-09-07T07:57:47Z vagrant-ubuntu-trusty-64 ./confd[14970]: INFO /etc/haproxy/haproxy.cfg has md5sum fadab72f5cef00c13855a27893d6e39c should be df9f09eb110366f2ecfa43964a3c862d
2015-09-07T07:57:47Z vagrant-ubuntu-trusty-64 ./confd[14970]: INFO Target config /etc/haproxy/haproxy.cfg out of sync
2015-09-07T07:57:47Z vagrant-ubuntu-trusty-64 ./confd[14970]: DEBUG Overwriting target config /etc/haproxy/haproxy.cfg
2015-09-07T07:57:47Z vagrant-ubuntu-trusty-64 ./confd[14970]: DEBUG Running /etc/init.d/haproxy reload
2015-09-07T07:57:47Z vagrant-ubuntu-trusty-64 ./confd[14970]: DEBUG " * Reloading haproxy haproxy\n   ...done.\n"
2015-09-07T07:57:47Z vagrant-ubuntu-trusty-64 ./confd[14970]: INFO Target config /etc/haproxy/haproxy.cfg has been updated

# haproxy.cfg file

backend nodes
    mode http
    balance roundrobin
    option forwardfor

    server web1 192.168.33.102:9000 check

Bingo 🙂, confd has detected the key removal and has updated the haproxy.cfg file. We could even remove the bootstrap script altogether and use a Serf event handler to populate the keys when the member-join event is triggered.
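A minimal sketch of what that member-join branch could look like inside the same stdin loop of serf_handler.py, assuming the handler reacts only to the agent's own join event (Serf exports SERF_SELF_NAME to handlers) and that FD_IP/FD_PORT are visible in its environment:

# Hypothetical member-join branch, replacing etcd_bootstrap.sh
if event_type == 'member-join':
    name = line.split('\t')[0]
    if name == os.environ.get('SERF_SELF_NAME'):      # publish only our own key
        full_url = base_url + base_key + "/proxy/frontend/" + name
        value = os.environ['FD_IP'] + ":" + os.environ['FD_PORT']
        r = requests.put(full_url, data={'value': value})
        if r.status_code in (200, 201):
            print "FD key published to ETCD"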

With confd, Serf, and etcd we can keep our service config files up to date without any human intervention. No matter whether we scale our Docker containers up or down, our system gets updated automatically. Add an option to log the events to Slack and that's it: we won't miss any events, and the whole team can keep track of what's happening under the hood.


Building Self-Discovering Clusters with etcd and Serf

Nowadays it's very difficult to contain all services in a single box, and companies have started adopting microservice-based architectures. Docker users can achieve a highly scalable system, like an Amazon ASG, using Apache Mesos and supporting frameworks like Marathon. But there are a lot of services where a new node needs to know the existing members of a cluster in order to join successfully. A common method is to give the machines static IPs, but we have started slowly moving all our clusters toward being immutable via ASGs. We could go for Elastic IPs and use those as static IPs, but then the next challenge would be: what if we want to scale the cluster up? Also, if these instances are inside a private network, assigning static local IPs is difficult too. So when a new machine is launched, or the ASG relaunches a faulty machine, the machine needs to know the existing cluster nodes. And not all applications provide service-discovery features. I was testing a new Erlang-based MQTT broker called VerneMQ. For a new VerneMQ node to join the cluster, we need to pass the IPs of all existing nodes to the JOIN command.

A quick solution that came to mind was using a distributed key-value store like etcd. The idea was to keep keys in etcd that tell us which nodes are present in a cluster. But I quickly found another hurdle. Imagine a node belonging to a cluster goes down: its key will still be present in etcd. With something like an ASG, a new machine will be launched, but this new machine's IP will not be present in the etcd key. We could delete the stale key manually, launch a new machine, and then add its IP to etcd, but our ultimate goal is to automate all of this; otherwise our clusters can never become fully immutable and self-healing.

So we need a system that can keep track of our nodes' activity, at least when they join or leave the cluster.

Serf

Enter Serf. Serf is a decentralized solution for cluster membership, failure detection, and orchestration. Serf relies on an efficient and lightweight gossip protocol to communicate with nodes. The Serf agents periodically exchange messages with each other in much the same way that a zombie apocalypse would occur: it starts with one zombie but soon infects everyone. Serf is able to quickly detect failed members and notify the rest of the cluster. This failure detection is built into the heart of the gossip protocol used by Serf.

Serf also needs to know the IPs of the other Serf agents in the cluster. So initially, when we start the cluster, we know the IPs of the other Serf agents, and the Serf agents then populate our cluster keys in etcd. Now if a new machine is launched (or relaunched), etcd will use the discovery URL to connect to the etcd cluster. Once it has joined the existing cluster, our Serf agent starts up, talks to the etcd agent running locally, and creates the necessary key for the node. Once the key is created, the Serf agent joins the Serf cluster using the IP of one of the existing nodes found in etcd.
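Putting that flow together, the boot sequence on each node would be roughly the following (a sketch only; the discovery token, cluster name, and paths are placeholders):

# Hypothetical boot/user-data script for a new node
./etcd -name $NODE_NAME ... --discovery http://172.16.16.99:4001/v2/keys/discovery/<token>/ &
./serf agent -node=$NODE_NAME -bind=$LOCAL_IP:5000 -rpc-addr=$LOCAL_IP:7373 -event-handler=/root/handler.py &
# pick an existing member from etcd (skipping our own IP) and join it
PEER=$(./etcdctl -C $LOCAL_IP:4001 ls /cluster/<cluster-name>/ | grep -v $LOCAL_IP | head -1)
./serf join -rpc-addr=$LOCAL_IP:7373 $(basename $PEER):5000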

etcd Discovery URL

A bit about the etcd discovery URL: it's a built-in discovery method in etcd for identifying the dynamic members of an etcd cluster. First, we need an etcd server; this server won't be part of our cluster, and is instead used only for providing the discovery URL. We can use this server for multiple etcd clusters, provided each cluster has a unique URL. All the new nodes can then talk to their unique URL and join the rest of their etcd nodes.

# For starting the ETCD Discovery Server

$ ./etcd -name test --listen-peer-urls "http://172.16.16.99:2380,http://172.16.16.99:7001" --listen-client-urls "http://172.16.16.99:2379,http://172.16.16.99:4001" --advertise-client-urls "http://172.16.16.99:2379,http://172.16.16.99:4001" --initial-advertise-peer-urls "http://172.16.16.99:2380,http://172.16.16.99:7001" --initial-cluster 'test=http://172.16.16.99:2380,test=http://172.16.16.99:7001'

Now we need to create a unique Discovery URL for our clusters.

$ curl -X PUT 172.16.16.99:4001/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83/_config/size -d value=3

where 6c007a14875d53d9bf0ef5a6fc0257c817f0fb83 is a unique token and value is the size of the cluster: if the number of etcd nodes that try to join via a given URL exceeds this value, the extra etcd processes fall back to being proxies by default.
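The token just needs to be unique per cluster; any random hex string will do, for example:

$ TOKEN=$(openssl rand -hex 20)
$ curl -X PUT 172.16.16.99:4001/v2/keys/discovery/$TOKEN/_config/size -d value=3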

Serf Events

Serf comes with a built-in event handler system, including handlers for member events like member-join and member-leave. We are going to concentrate on these two events. Below is a simple handler script that adds/removes keys in etcd whenever a node joins/leaves the cluster.

#!/usr/bin/python
# handler.py
import sys
import os
import requests

base_url = "http://<ip-of-the-node>:4001"   # Use any config mgmt tool or user-data script to populate this value for each host
base_key = "/v2/keys"
cluster_key = "/cluster/<cluster-name>/"    # Use any config mgmt tool or user-data script to populate this value for each host

for line in sys.stdin:
    event_type = os.environ['SERF_EVENT']
    address = line.split('\t')[1]           # second field is the member's IP address
    full_url = base_url + base_key + cluster_key + address
    if event_type == 'member-join':
        r = requests.get(full_url)
        if r.status_code == 404:
            r = requests.put(full_url, data={'value': 'member'})   # If the key doesn't exist, create it
            if r.status_code == 201:
                print "Key Successfully created in ETCD"
            else:
                print "Failed to create the Key in ETCD"
        else:
            print "Key Already exists in ETCD"
    if event_type == 'member-leave':
        r = requests.get(full_url)
        if r.status_code == 200:             # If the node left the cluster and the key still exists, remove it
            r = requests.delete(full_url)
            print "Key Successfully removed from ETCD"
        elif r.status_code == 404:
            print "Key already removed by another Serf Agent"

Design

Now we have some idea of Serf and etcd, so let's start building our self-discovering cluster. Make sure the etcd discovery node is up and a unique URL has been generated for it. Once the URL is ready, let's start our etcd cluster.

# Node1

$ /etcd -name test1 --listen-peer-urls "http://172.16.16.102:2380,http://172.16.16.102:7001" --listen-client-urls "http://172.16.16.102:2379,http://172.16.16.102:4001" --advertise-client-urls "http://172.16.16.102:2379,http://172.16.16.102:4001" --initial-advertise-peer-urls "http://172.16.16.102:2380,http://172.16.16.102:7001" --discovery http://172.16.16.99:4001/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83/

# Sample Output

2015/06/01 19:51:10 etcd: listening for peers on http://172.16.16.102:2380
2015/06/01 19:51:10 etcd: listening for peers on http://172.16.16.102:7001
2015/06/01 19:51:10 etcd: listening for client requests on http://172.16.16.102:2379
2015/06/01 19:51:10 etcd: listening for client requests on http://172.16.16.102:4001
2015/06/01 19:51:10 etcdserver: datadir is valid for the 2.0.1 format
2015/06/01 19:51:10 discovery: found self a9e216fb7b8f3283 in the cluster
2015/06/01 19:51:10 discovery: found 1 peer(s), waiting for 2 more      # Since our cluster Size is 3, and this is the first node, its waiting for other 2 nodes to join

Let's start the other two nodes:

./etcd -name test2 --listen-peer-urls "http://172.16.16.101:2380,http://172.16.16.101:7001" --listen-client-urls "http://172.16.16.101:2379,http://172.16.16.101:4001" --advertise-client-urls "http://172.16.16.101:2379,http://172.16.16.101:4001" --initial-advertise-peer-urls "http://172.16.16.101:2380,http://172.16.16.101:7001" --discovery http://172.16.16.99:4001/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83/

./etcd -name test3 --listen-peer-urls "http://172.16.16.100:2380,http://172.16.16.100:7001" --listen-client-urls "http://172.16.16.100:2379,http://172.16.16.100:4001" --advertise-client-urls "http://172.16.16.100:2379,http://172.16.16.100:4001" --initial-advertise-peer-urls "http://172.16.16.100:2380,http://172.16.16.100:7001" --discovery http://172.16.16.99:4001/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83/

Now that we have the etcd cluster up and running, let's configure Serf:

$ ./serf agent -node=test1 -bind=172.16.16.102:5000 -rpc-addr=172.16.16.102:7373 -log-level=debug -event-handler=/root/handler.py

$ ./serf agent -node=test2 -bind=172.16.16.101:5000 -rpc-addr=172.16.16.101:7373 -log-level=debug -event-handler=/root/handler.py

$ ./serf agent -node=test3 -bind=172.16.16.100:5000 -rpc-addr=172.16.16.100:7373 -log-level=debug -event-handler=/root/handler.py

Now these three Serf agents are running in standalone mode and are not aware of each other. Let's make the agents join and form the Serf cluster.

$ ./serf join -rpc-addr=172.16.16.100:7373 172.16.16.101:5000

$ ./serf join -rpc-addr=172.16.16.102:7373 172.16.16.100:5000

Let's query the Serf agents and see if the cluster has been created.

root@sentinel:~# ./serf members -rpc-addr=172.16.16.100:7373
  test3  172.16.16.100:5000  alive
  test2  172.16.16.101:5000  alive
  test1  172.16.16.102:5000  alive

root@sentinel:~# ./serf members -rpc-addr=172.16.16.101:7373
  test3  172.16.16.100:5000  alive
  test2  172.16.16.101:5000  alive
  test1  172.16.16.102:5000  alive

root@sentinel:~# ./serf members -rpc-addr=172.16.16.102:7373
  test3  172.16.16.100:5000  alive
  test2  172.16.16.101:5000  alive
  test1  172.16.16.102:5000  alive

Now our Serf nodes are up; let's check our etcd cluster to see if Serf has created the keys.

$ ./etcdctl -C 172.16.16.100:4001 ls /cluster/test/
  /cluster/test/172.16.16.100
  /cluster/test/172.16.16.101
  /cluster/test/172.16.16.102

Serf has successfully created the keys. Now let's terminate one of the nodes. One of our Serf agents will immediately receive a member-leave event and will invoke the handler. Below is the log from the first agent that received the event:

2015/06/01 22:56:43 [INFO] serf: EventMemberLeave: test1 172.16.16.102
2015/06/01 22:56:44 [INFO] agent: Received event: member-leave
2015/06/01 22:56:45 [DEBUG] agent: Event 'member-leave' script output: member-leave => 172.16.16.102
Key Successfully removed from ETCD                                
2015/06/01 22:56:45 [DEBUG] memberlist: Initiating push/pull sync with: 172.16.16.100:5000

This event later gets passed on to the other agent:

2015/06/01 22:56:43 [INFO] serf: EventMemberLeave: test1 172.16.16.102
2015/06/01 22:56:44 [INFO] agent: Received event: member-leave
2015/06/01 22:56:45 [DEBUG] agent: Event 'member-leave' script output: member-leave => 172.16.16.102
Key already removed by another Serf Agent

Let's make sure that the key was successfully removed from etcd:

$ ./etcdctl -C 172.16.16.100:4001 ls /cluster/test/
  /cluster/test/172.16.16.100
  /cluster/test/172.16.16.101

Now let's bring the old node back online:

$ ./serf agent -node=test1 -bind=172.16.16.102:5000 -rpc-addr=172.16.16.102:7373 -log-level=debug -event-handler=/root/handler.py
  ==> Starting Serf agent...
  ==> Starting Serf agent RPC...
  ==> Serf agent running!
           Node name: 'test1'
           Bind addr: '172.16.16.102:5000'
            RPC addr: '172.16.16.102:7373'
           Encrypted: false
            Snapshot: false
             Profile: lan

  ==> Log data will now stream in as it occurs:

      2015/06/01 23:01:55 [INFO] agent: Serf agent starting
      2015/06/01 23:01:55 [INFO] serf: EventMemberJoin: test1 172.16.16.102
      2015/06/01 23:01:56 [INFO] agent: Received event: member-join
      2015/06/01 23:01:56 [DEBUG] agent: Event 'member-join' script output: member-join => 172.16.16.102
  Key Successfully created in ETCD

The standalone agent has already created its key. Now let's make the Serf agent join the cluster. It can use any one of the IPs from the key-value store, apart from its own IP:

$ ./serf join -rpc-addr=172.16.16.102:7373 172.16.16.100:5000
  Successfully joined cluster by contacting 1 nodes.

Once the node joins the cluster, the other existing Serf nodes receive the member-join event and also try to execute the handler. But since the key has already been created, they won't recreate it:

2015/06/01 23:04:56 [INFO] agent: Received event: member-join
2015/06/01 23:04:57 [DEBUG] agent: Event 'member-join' script output: member-join => 172.16.16.102
Key Already exists in ETCD

Let's check our etcd cluster to see if the key has been recreated.

$ ./etcdctl -C 172.16.16.100:4001 ls /cluster/test/
  /cluster/test/172.16.16.100
  /cluster/test/172.16.16.101
  /cluster/test/172.16.16.102

So now we have the IPs of our cluster's active nodes in etcd. Next we need to extract the IPs from etcd and use them with our applications. Below is a simple Python script that returns the IPs as a Python list.

import requests
import json

base_url = "http://<local_ip_of_server>:4001"
base_key = "/v2/keys"
cluster_key = "/cluster/<cluster_name>/"
full_url = base_url + base_key + cluster_key
r = requests.get(full_url)
json_data = json.loads(r.text)
node_list = json_data['node']['nodes']
nodes = []
for i in node_list:
    nodes.append(i['key'].split(cluster_key)[1])
print nodes
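Run against the cluster above, this prints the member IPs, e.g.:

['172.16.16.100', '172.16.16.101', '172.16.16.102']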

We can then use etcd and confd to generate the configs dynamically for our applications.

Conclusion

etcd and Serf provide a powerful combination for bringing self-discovery to our clusters. But the setup still needs heavy QA testing before being used in production. In particular, we need to test the confd automation part more, since we are not using a single key-value pair: Serf creates a separate key for each node, and the IP discovery from etcd is done by iterating over the key directory. But these tools are backed by an awesome community, so I'm sure they will soon help us reach a whole new level of clustering for our applications.
