SIP Trunking With PLIVO and FreeSwitch

It's been more than a month since I joined the DevOps family at Plivo, and since I'm pretty new to telecom technology, I've been digging around it. This time I decided to play around with FreeSwitch, a free and open source communications platform for building voice and messaging products. Thanks to Anthony Minessale for designing and open-sourcing such a powerful application. FreeSwitch is well documented, and there are pretty good blogs available on how to set up a PBX with it. This time I'm going to explain how to make a private FreeSwitch server use Plivo as a SIP trunking service.

A bit about Plivo. Plivo is a cloud-based API platform for building voice and SMS enabled applications. Plivo provides Application Programming Interfaces (APIs) to make and receive calls, send SMS, make conference calls, and more. These APIs are used in conjunction with XML responses to control the flow of a call or a message. We can create Session Initiation Protocol (SIP) endpoints to perform the telephony operations, and the APIs are platform independent and can be used in any programming environment such as PHP, Ruby, Python, etc. Plivo also provides helper libraries for these languages.
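For example, when a call comes in, Plivo fetches the answer URL of the application and the application replies with an XML document telling Plivo what to do. A minimal illustrative sketch using Plivo's Speak element (the greeting text is made up):

```xml
<Response>
  <!-- Plivo reads this text to the caller using text-to-speech -->
  <Speak>Hello, thanks for calling!</Speak>
</Response>
```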

First we need a valid Plivo account. Once we have it, we can log in to the Plivo Cloud service. Now go to the "Endpoints" tab, create a SIP endpoint, and attach a DirectDial app to it. Once this is done we can go ahead and start setting up the FreeSwitch instance.

Installing FreeSwitch

Clone the official FreeSwitch GitHub repo and compile from source.

$ git clone git:// && cd freeswitch

$ ./ && ./configure --prefix=/usr/local/freeswitch

$ make && make install

$  make all cd-sounds-install cd-moh-install    # optional, run this if you want IVR and Music on Hold features

Now, if the machine has more than one IP address and we want FreeSwitch to bind to a particular one, we need to modify two files: "/usr/local/freeswitch/conf/sip_profiles/external.xml" and "/usr/local/freeswitch/conf/sip_profiles/internal.xml". In both files, set the "rtp-ip" and "sip-ip" params to the bind IP.
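For example, assuming the bind IP is, the two params in each profile would look like:

```xml
<param name="rtp-ip" value=""/>
<param name="sip-ip" value=""/>
```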

By default, FreeSwitch creates a set of users, which includes numerical usernames, i.e. 1000-1019. So we can test basic connectivity by making a call between two user accounts: register two of the accounts in two softphones, make a test call, and make sure that FreeSwitch is working fine. We can use the FS binary to start the FreeSwitch service in the foreground.

$ /usr/local/freeswitch/bin/freeswitch

Configuring Gateway

Once FreeSwitch is working fine, we can start configuring the SIP trunking via Plivo. First we need to create an external gateway to connect to Plivo; I'm going to use the SIP endpoint created on the Plivo Cloud to initiate the connection. The SIP domain for Plivo is "". We need to create a gateway config: go to "/usr/local/freeswitch/conf/sip_profiles/external/" and create an XML gateway config file there. My config file name is plivo. Below is the content for the same.

  <gateway name="plivo">
    <param name="realm" value="" />
    <param name="username" value="<Plivo_SIP_Endpoint_User_Name>" />
    <param name="password" value="<Plivo_SIP_EndPoint_Password>" />
    <param name="register" value="false" />
    <param name="ping" value="5" />
    <param name="ping-max" value="3" />
    <param name="retry-seconds" value="5" />
    <param name="expire-seconds" value="60" />
    <variables>
      <variable name="verbose_sdp" value="true"/>
    </variables>
  </gateway>

There are a lot of other parameters which we can add here, like caller ID etc. Replace the username and password with the Plivo endpoint credentials. If we want to keep this endpoint registered, we can set the register param to true and set the expiry time in expire-seconds, so that FS will keep re-registering the endpoint with Plivo's registrar server. Once the gateway file is created, we can either restart the service or run "reload mod_sofia" on the FScli. If the FreeSwitch service is started in the foreground, we get the FScli directly, so we can run the reload command on it.
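If FS is running as a daemon instead, the reload can be issued from the shell via fs_cli (a sketch; I'm assuming fs_cli was installed next to the freeswitch binary under the same prefix):

```
$ /usr/local/freeswitch/bin/fs_cli -x "reload mod_sofia"
```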

Setting up Dialplan

Now we have the gateway added, so we need to set up the dialplan to route outgoing calls through Plivo. Go to the "/usr/local/freeswitch/conf/dialplan/" folder and add an extension in the "public.xml" file. Below is a sample extension config.

    <extension name="Calls to Plivo">
      <condition field="destination_number" expression="^(<ur_regex_here>)$">
        <action application="transfer" data="$1 XML default"/>
      </condition>
    </extension>

So now all calls matching the regex will be transferred to the default dialplan. On the default dialplan, I'm creating an extension and will use FreeSwitch's "bridge" application to bridge the call with Plivo using the Plivo gateway. So in "default.xml" add the below extension.

    <extension name="Dial through Plivo">
      <condition field="destination_number" expression="^(<ur_regex_here>)$">
        <action application="bridge" data="sofia/gateway/plivo/$1"/>
      </condition>
    </extension>

Now we can restart the FS service, or reload "mod_dialplan_xml" from the FScli. Once the changes take effect, we can test whether the call gets routed via Plivo: configure a softphone with a default FS user and make an outbound call matching the regex we set for routing to Plivo. If all works, we should get a call on the destination number. We can check the FS logs at "/usr/local/freeswitch/log/freeswitch.log".

If the Regex is matched, we can see the below lines in the log.

1f249a72-9abf-4713-ba69-c2881111a0e8 Dialplan: sofia/internal/1001@ parsing [public->Calls from BoxB] continue=false
1f249a72-9abf-4713-ba69-c2881111a0e8 EXECUTE sofia/internal/1001@ transfer(xxxxxxxxxxxx XML default)
1f249a72-9abf-4713-ba69-c2881111a0e8 Dialplan: sofia/internal/1001@ Regex (PASS) [Dial through Plivo] destination_number(xxxxxxxxxxxx) =~ /^(xxxxxxxxxxxx)$/ break=on-false
1f249a72-9abf-4713-ba69-c2881111a0e8 Dialplan: sofia/internal/1001@ Action bridge(sofia/external/plivo/   
1f249a72-9abf-4713-ba69-c2881111a0e8 EXECUTE sofia/internal/1001@ bridge(sofia/external/plivo/
1f249a72-9abf-4713-ba69-c2881111a0e8 2014-02-14 06:32:48.244757 [DEBUG] mod_sofia.c:4499 [zrtp_passthru] Setting a-leg inherit_codec=true
1f249a72-9abf-4713-ba69-c2881111a0e8 2014-02-14 06:32:48.244757 [DEBUG] mod_sofia.c:4502 [zrtp_passthru] Setting b-leg absolute_codec_string=GSM@8000h@20i@13200b,PCMA@8000h@20i@64000b,PCMU@8000h@20i@64000b

We can also set the caller ID on the DirectDial app which we have mapped to the SIP endpoint. Now for incoming calls, create an app that forwards the calls to one of the users present in FreeSwitch, using Plivo's Dial XML. I will be writing a more detailed blog about inbound calls once I've tested it out completely.
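Such a Dial XML response might look something like the sketch below; the endpoint username is a placeholder, and the phone.plivo.com SIP domain is an assumption you should verify against Plivo's docs:

```xml
<Response>
  <Dial>
    <!-- forward the inbound call to a registered SIP endpoint user -->
    <User>sip:your_endpoint_username@phone.plivo.com</User>
  </Dial>
</Response>
```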


But for security, we need to allow connections from the Plivo servers, so we have to allow those IPs in the FS ACL. We can allow the IPs in the "acl.conf.xml" file at "/usr/local/freeswitch/conf/autoload_configs". Also make sure that the FS server is accessible via a public IP, at least for the Plivo servers which will forward the calls.
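A minimal sketch of such an ACL list in acl.conf.xml (the list name and CIDR below are placeholders, not actual Plivo address ranges):

```xml
<list name="providers" default="deny">
  <!-- replace the CIDR with the actual Plivo signalling/media IP ranges -->
  <node type="allow" cidr=""/>
</list>
```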

Using Mongo Discovery Method in MCollective

I've been playing around with MCollective for the past few months, but this time I wanted to try out the Mongo discovery method, since its response time is quite a bit faster. Setting up an MCollective server/client is pretty simple; you can go through my previous blog. Now we need to install the Meta registration plugin on all the MCollective servers: just download and copy meta.rb into the MCollective registration plugin folder. In my case I have Debian-based machines, so the location is /usr/share/mcollective/plugins/mcollective/registration/. This makes the metadata available to other nodes.

Now add the below three lines to the server.cfg of all the MCollective servers.

registration = Meta
registerinterval = 300
factsource = facter

Now install the MongoDB registration agent on one of the nodes, which will be our slave node. Do not install this on all the nodes. There is a small bug in this agent, so follow the steps mentioned here and modify the registration.rb file. Then install the MongoDB server on the slave node, and add the below lines to the server.cfg on the slave machine.

plugin.registration.mongohost = localhost
plugin.registration.mongodb = puppet
plugin.registration.collection = nodes

Now restart the mcollective service. If we increase the log level to debug, we can see lines like the below in mcollective.log. This indicates that the plugin is active and is receiving registration data from the nodes.

D, [2012-11-29T15:51:34.391762 #12731] DEBUG -- : registration.rb:97:in `handlemsg' Updated data for host with id 50b650d4454bc346e4000002 in 0.0027310848236084s
D, [2012-11-29T15:50:05.810180 #12731] DEBUG -- : registration.rb:97:in `handlemsg' Updated data for host with id 50b650c0454bc346e4000001 in 0.00200200080871582s

Initially, I used the default registration.rb file which I downloaded from GitHub, but it was giving me an error: "handlemsg Got stats without a FQDN in facts". So don't forget to modify the registration.rb file.

Now connect to MongoDB and verify that the nodes are getting registered in it.

$ mongo
 MongoDB shell version: 2.0.4
 connecting to: test
 > use puppet
 switched to db puppet
 > db.nodes.find().count()

So now both my master and slave have been registered in MongoDB. In order to use the Mongo discovery method, we need to install the mongodb discovery plugin and enable direct addressing mode, i.e. add direct_addressing = 1 to the server.cfg file.

Now we can use the --dm option to specify the discovery method.

$ mco rpc rpcutil ping --dm=mongo -v
  Discovering hosts using the mongo method .... 2

  * [ ========================================================> ] 2 / 2

  vagrant-debian-squeeze                  : OK

  ubuntults                               : OK

  ---- rpcutil#ping call stats ----
            Nodes: 2 / 2
      Pass / Fail: 2 / 0
      Start Time: Thu Nov 29 16:48:00 +0530 2012
  Discovery Time: 68.48ms
  	  Agent Time: 108.35ms
  	  Total Time: 176.83ms

$ mco rpc rpcutil ping --dm=mc -v
  Discovering hosts using the mc method for 2 second(s) .... 2

  * [ ========================================================> ] 2 / 2

  vagrant-debian-squeeze                  : OK

  ubuntults                               : OK

  ---- rpcutil#ping call stats ----
            Nodes: 2 / 2
   	  Pass / Fail: 2 / 0
   	  Start Time: Thu Nov 29 16:50:52 +0530 2012
  Discovery Time: 2004.24ms
  	  Agent Time: 104.28ms
  	  Total Time: 2108.51ms

From the above commands, we can see the difference in the Discovery Time.

Now, for those who want a GUI, R.I. Pienaar has developed a web GUI called Mco-Rpc-Web. He has uploaded a few screencasts which give a short demo of all of these.

Hubot @ your service

A few months back, I got a chance to attend RootConf 2012, where I first came to know about "Hubot", developed by GitHub. I was very much interested, especially in the GTalk adapter, with which we can integrate Hubot with a Gmail account. We can make Hubot listen to every word and make it respond back. There are many default hubot-scripts which we can use to play around with it.

Configuring Hubot is very simple.

First, we’ll install all of the dependencies necessary to get Hubot up and running.

apt-get install build-essential libssl-dev git-core redis-server libexpat1-dev curl libcurl4-nss-dev libcurl4-openssl-dev


Now, download and extract Node.js:

tar xf node-v0.9.2.tar.gz -C /opt && cd /opt/node-v0.9.2

./configure && make && make install

For the GTalk adapter we need "node-xmpp", which we can install with npm. We also need CoffeeScript.

npm install node-xmpp

npm install -g coffee-script

Now clone the Hubot repository from GitHub.

git clone git://

Now go inside the hubot folder and, using the "hubot" binary inside the bin folder, create a deployable hubot.

./bin/hubot -c /opt/hubot

Now go inside the new hubot folder, open "package.json" in a text editor, and add the hubot-gtalk package to the dependencies.

"dependencies": {
  "hubot": "2.3.2",
  "hubot-gtalk": ">= 0.0.1",
  "hubot-scripts": ">= 2.1.0",
  "coffee-script": "1.3.3",
  "optparse": "1.0.3",
  "scoped-http-client": "0.9.7",
  "log": "1.3.0",
  "connect": "2.3.4",
  "connect_router": "1.8.6"
},

Now run "npm install" to install the dependencies.

Before starting Hubot, we need to configure the GTalk adapter. It requires the following environment variables.

  • HUBOT_GTALK_USERNAME (should be a full email address, e.g.

And the following are optional.


Once all the parameters are set, we can start the Hubot with Gtalk adapter.

./bin/hubot -a gtalk

Now Hubot is online with GTalk. We can add Hubot's Gmail account to our GTalk account and start playing around with it. Hubot comes with a bunch of default scripts; if we type "help", we will get a list of options for each of these scripts.

Today I was able to execute some Bash commands using my custom coffee scripts, which gave me some interesting ideas about using Hubot for "ChatOps". Let's see how it works; once it's done I'll update it in my blog. Wait for more…
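A trivial custom script of that kind follows the standard hubot-scripts shape; the sketch below is my own illustration (the command name and file name are made up, not from any shipped script):

```coffeescript
# scripts/uptime.coffee — hypothetical example script
# Replies with the output of the shell command `uptime`
# when someone says "hubot uptime" in chat.
child_process = require 'child_process'

module.exports = (robot) ->
  robot.respond /uptime$/i, (msg) ->
    child_process.exec 'uptime', (err, stdout, stderr) ->
      if err
        msg.send "Failed: #{stderr}"
      else
        msg.send stdout
```

Drop a file like this into the deployed hubot's scripts/ folder and restart Hubot to pick it up.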

A Small Munin-Graphite Client

Yesterday I found a munin-graphite client, which is used in the carnin-eye project. It just needs one simple "client.yml" file, whose location can be mentioned in the munin-graphite.rb file. You can get munin-graphite.rb from the carnin-eye GitHub page.

We just have to mention the munin-node details in the client.yml file.
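Since the exact schema comes from the carnin-eye project, the following is only a guess at its shape; every key and value here is an assumption, so check it against the project's README:

```yaml
# hypothetical client.yml — keys are assumed, not taken from carnin-eye
munin:
  host: localhost          # munin-node to poll
  port: 4949
graphite:
  host: graphite.example.com
  port: 2003               # carbon plaintext listener
  prefix: munin            # metric namespace in graphite
```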


Finally, we have to create a cron job to execute the munin-graphite.rb file, which will populate our Munin data into Graphite.

Setting Up MCollective with ActiveMQ in Ubuntu 12.04

Setting up MCollective with ActiveMQ is very easy. The following steps worked perfectly on Ubuntu 12.04. The only dependency is that either OpenJDK or Sun Java must be installed.


Download the latest ActiveMQ tarball from the ActiveMQ website; I've used ActiveMQ 5.6.

Untar the file into any folder, say /opt, and rename the directory to "activemq".

Now create a symlink of the "activemq" binary inside "/etc/init.d":

ln -s /opt/activemq/bin/activemq /etc/init.d/activemq

MCollective requires ActiveMQ with STOMP support. Edit the activemq.xml inside the ActiveMQ directory and add the STOMP transport to it. You can get a sample from the Puppet Labs website.
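The relevant piece is a STOMP transport connector inside the broker's transportConnectors section; a minimal sketch (the bind-all address is an assumption, and 6163 matches the port used later in this post):

```xml
<transportConnectors>
  <!-- MCollective speaks STOMP to ActiveMQ on port 6163 -->
  <transportConnector name="stomp" uri="stomp://"/>
</transportConnectors>
```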

Copy the contents and paste them into the activemq.xml present inside the ActiveMQ directory, in my case /opt/activemq/config/activemq.xml.

Now start the activemq service using the init script (the symlink we created) inside init.d.

MCollective Server

For the MCollective server we need to install two packages: mcollective-common and mcollective. Download the latest packages from "" and install them.

The config file will be present at "/etc/mcollective/server.cfg". Edit the file: the STOMP host should be the machine where we have installed ActiveMQ, and the STOMP port will be "6163" (this can be changed in the activemq.xml file).

Also modify the STOMP user and password to the following:

plugin.stomp.user = mcollective

plugin.stomp.password = marionette

The above password can be changed by modifying the activemq.xml file.
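Putting those settings together, the STOMP section of server.cfg would look something like this sketch (the hostname is a placeholder for wherever ActiveMQ runs):

```
connector = stomp
plugin.stomp.host = activemq.example.com
plugin.stomp.port = 6163
plugin.stomp.user = mcollective
plugin.stomp.password = marionette
```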

And restart the mcollective service.

MCollective Client

For the MCollective client, download and install the mcollective-common and mcollective-client packages, and edit the client.cfg file present inside the /etc/mcollective folder.

Now we can use the "mco" command to check connectivity; "mco find" will list the MCollective servers.

Using Logstash + Statsd + Graphite – Part 2 (Graphite & Statsd)

Now we can set up Graphite and StatsD, so Logstash can start feeding data into StatsD. I found a method which uses pip to install the Python packages, which makes the steps easier.

# installing nodejs

sudo apt-get install python-software-properties
sudo apt-add-repository ppa:chris-lea/node.js
sudo apt-get update
sudo apt-get install nodejs npm
# Now install dependencies
sudo apt-get install memcached python-dev python-pip sqlite3 libcairo2 libcairo2-dev python-cairo pkg-config
# Install latest pip
pip install --upgrade pip
# Install carbon and graphite deps
cat >> /tmp/graphite_reqs.txt << EOF
sudo pip install -r /tmp/graphite_reqs.txt

# Configure carbon
cd /opt/graphite/conf/
sudo cp carbon.conf.example carbon.conf

# Create storage schema and copy it over
# Using the sample as provided in the statsd README
cat >> /tmp/storage-schemas.conf << EOF
# Schema definitions for Whisper files. Entries are scanned in order,
# and first match wins. This file is scanned for changes every 60 seconds.
# [name]
# pattern = regex
# retentions = timePerPoint:timeToStore, timePerPoint:timeToStore, ...
[stats]
priority = 110
pattern = ^stats\..*
retentions = 10s:6h,1m:7d,10m:1y
EOF
sudo cp /tmp/storage-schemas.conf storage-schemas.conf
# Make sure log dir exists for webapp
sudo mkdir -p /opt/graphite/storage/log/webapp
# Copy over the local settings file and initialize database
cd /opt/graphite/webapp/graphite/
sudo cp
sudo python syncdb # Follow the prompts, creating a superuser is optional
# statsd
cd /opt && sudo git clone git://
# StatsD configuration
cat >> /tmp/localConfig.js << EOF
{
  graphitePort: 2003
, graphiteHost: ""
, port: 8125
}
EOF
sudo cp /tmp/localConfig.js /opt/statsd/localConfig.js
# Starting statsd
node /path/to/statsd-folder/stats.js /path/to/statsd-folder/localConfig.js
# Starting the Carbon cache: go to the Graphite bin directory and run
./ start
# Starting the Graphite dashboard: go to the Graphite bin directory and run
./ /opt/graphite
Now we can access the Graphite dashboard by,