
Setting Up a Puppet Master on a Debian-Based Machine

I still do not know why I like Puppet so much, but I always love to play around with it. It is a very powerful tool for system admins. I have never tried Chef, but I am very happy with Puppet. Thanks to Luke and Puppet Labs for designing such a good tool. My colleague sarguru is working on a newer release of our deepOfix Mail Server, which will use Puppet for configuration management. In this blog I will explain how to set up Puppet in "standalone" as well as in "server-client" mode.

Puppet Master

We can install Puppet from APT. For testing I am not going to daemonize my puppet master; I will just install the two basic packages.

"apt-get install puppet puppet-common"

Now, if you have already installed Puppet and the old SSL certificates still exist, they can be removed using the command below.

"puppet cert clean --all"

If you are not running a DNS server, ensure you have proper FQDN entries in "/etc/hosts" on the server and the clients, if any. Now we can start the puppet master. It is always better to use the debug mode during testing.
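As a sketch, assuming a master and one client on a private network (the addresses and hostnames here are hypothetical placeholders), the "/etc/hosts" entries could look like:

```
# /etc/hosts -- hypothetical example entries for master and client
192.168.1.10   puppetmaster.example.com   puppetmaster
192.168.1.11   puppetclient.example.com   puppetclient
```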

"puppet master --debug --no-daemonize"

This will start the puppet daemon, and will automatically create a self signed certificate for the puppet master.


Now we have the puppet master running on the machine. In standalone mode, we can create a "*.pp" file with all the resources and invoke Puppet to apply them locally.
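As a minimal sketch, a standalone manifest (say "test.pp"; the file path and content here are just placeholders) could look like:

```puppet
# test.pp -- a minimal standalone manifest (hypothetical example)
file { '/tmp/hello.txt':
  ensure  => file,
  content => "managed by puppet\n",
}
```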

"puppet apply *.pp --debug"

This will apply the resources that we mentioned in our Puppet policy file.

If a module has to be applied, we have to mention the module path as well as the module that we are going to apply.

"puppet apply --debug --modulepath=/etc/puppet/modules -e "include modulename""
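A module here is just a directory under the module path containing a class of the same name. As a sketch, assuming a hypothetical "ntp" module:

```puppet
# /etc/puppet/modules/ntp/manifests/init.pp (hypothetical module)
class ntp {
  package { 'ntp':
    ensure => installed,
  }
  service { 'ntp':
    ensure  => running,
    require => Package['ntp'],
  }
}
```

It could then be applied with puppet apply --modulepath=/etc/puppet/modules -e "include ntp".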

Note: Puppet has a very good feature where we can simulate the changes without actually applying the resources. For that we have to pass the "--noop" (no operation) option while executing puppet apply. This is very helpful to simulate a run and see whether the resources would be applied properly.

"puppet apply --debug --modulepath=/etc/puppet/modules -e "include modulename" --noop"

Client-Server Model

So now we have the puppet master running on one machine. On the client machine, similarly install the "puppet" and "puppet-common" packages from APT. Ensure that the machine can resolve the FQDN of the puppet master.

Now run the puppet agent.

"puppet agent --debug --no-daemonize"

If it throws any name resolution error, we can mention the server manually:

"puppet agent --server fqdnofmaster --debug --no-daemonize"

So now on the terminal where we are running the puppet master, we can see a certificate request from the client.

We can also see the CSR by running the following command on the puppet master.

"puppet cert list"

We can sign the CSR by running:

"puppet cert sign fqdnofclient"

Now we can run the puppet agent again to check whether the agent successfully talks to the master.

In the client-server model, we need to specify which modules are to be applied on specific nodes, and on default nodes if any. These are specified in "/etc/puppet/manifests/site.pp". Here we mention the node and the modules to be applied.

The syntax is:

node 'node name in fqdn' {

include modulename

. . . . . .

}

Now when we run the puppet agent, the appropriate modules will be applied to the clients as per the site.pp file. We can always verify the syntax of a Puppet policy file using "puppet parser validate *.pp".


A Small Munin-Graphite Client

Yesterday I found a munin-graphite client, which is used in the carnin-eye project. It just needs one simple "client.yml" file, whose location can be mentioned in the munin-graphite.rb file. You can get the munin-graphite.rb file from the carnin-eye GitHub page.

We just have to mention the munin-node details in the client.yml. Below is the content of the client.yml file,


Finally, we have to create a cron job to execute the munin-graphite.rb file, which will populate our munin data into Graphite.
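As a sketch, assuming the script lives at /opt/munin-graphite/munin-graphite.rb (a hypothetical path), a crontab entry running it every five minutes could look like:

```
# run the munin-graphite poller every 5 minutes (path is hypothetical)
*/5 * * * * ruby /opt/munin-graphite/munin-graphite.rb
```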


Setting Up MCollective with ActiveMQ in Ubuntu 12.04

Setting up MCollective with ActiveMQ is very easy. The following steps worked perfectly on Ubuntu 12.04. The only dependency is that either OpenJDK or Sun Java must be installed.


Download the latest ActiveMQ tar file from the ActiveMQ website; I have used ActiveMQ 5.6.

Untar the file to any folder, say inside /opt, and rename it to "activemq".

Now create a symlink to the "activemq" binary inside "/etc/init.d":

ln -s /opt/activemq/bin/activemq /etc/init.d/activemq

MCollective requires ActiveMQ with "STOMP" support. Edit the activemq.xml inside the ActiveMQ directory and add the STOMP option to it. You can get a sample from the Puppet Labs website.

Copy the contents and paste them into the activemq.xml present inside the ActiveMQ directory, in my case /opt/activemq/config/activemq.xml.
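The relevant part is the STOMP transport connector. As a sketch, it looks something like this inside the <transportConnectors> section of activemq.xml (the bind address is an assumption; port 6163 matches the MCollective default used later):

```xml
<!-- inside <broker> ... <transportConnectors> in activemq.xml -->
<!-- binding to 0.0.0.0 (all interfaces) is an assumption; adjust to taste -->
<transportConnector name="stomp" uri="stomp://0.0.0.0:6163"/>
```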

Now start the ActiveMQ service using the init script (the symlink we created) inside init.d.

"/etc/init.d/activemq start"

MCollective Server

For the MCollective server we need to install two packages, mcollective-common and mcollective. Download the latest packages and install them.

The config file will be present at "/etc/mcollective/server.cfg". Edit the file; the stomp host should be the machine where we installed ActiveMQ. The stomp port will be "6163" (it can be changed by modifying the activemq.xml file).

Also modify the stomp user and password to the following:

plugin.stomp.user = mcollective

plugin.stomp.password = marionette

The above password can be changed by modifying the activemq.xml file.
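Putting it together, the STOMP-related portion of server.cfg could look like this sketch (the hostname is a placeholder for wherever ActiveMQ runs):

```
connector = stomp
plugin.stomp.host = activemq.example.com
plugin.stomp.port = 6163
plugin.stomp.user = mcollective
plugin.stomp.password = marionette
```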

Then restart the mcollective service.

MCollective Client

For the MCollective client, download and install the mcollective-common and mcollective-client packages, and edit the client.cfg file present inside the /etc/mcollective folder.

Now we can use the "mco" command to check the connectivity. We can use "mco find" to find the MCollective servers.
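For instance (the output will of course depend on which nodes are connected to the broker):

```
# ping all mcollective servers and show round-trip times
mco ping

# list the identities of all discovered servers
mco find
```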


Using Logstash + Statsd + Graphite – Part 2 (Graphite & Statsd)

Now we can set up Graphite and Statsd, so Logstash can start feeding data into Statsd. I found a method that uses pip for installing the Python packages, which makes the steps easier.

# installing nodejs

sudo apt-get install python-software-properties
sudo apt-add-repository ppa:chris-lea/node.js
sudo apt-get update
sudo apt-get install nodejs npm
# Now install dependencies
sudo apt-get install memcached python-dev python-pip sqlite3 libcairo2 libcairo2-dev python-cairo pkg-config
# Install latest pip
pip install --upgrade pip
# Install carbon and graphite deps
cat >> /tmp/graphite_reqs.txt << EOF
# core Graphite components (the original list was lost; versions may vary)
whisper
carbon
graphite-web
EOF
sudo pip install -r /tmp/graphite_reqs.txt

# Configure carbon
cd /opt/graphite/conf/
sudo cp carbon.conf.example carbon.conf

# Create storage schema and copy it over
# Using the sample as provided in the statsd README
cat >> /tmp/storage-schemas.conf << EOF
# Schema definitions for Whisper files. Entries are scanned in order,
# and first match wins. This file is scanned for changes every 60 seconds.
# [name]
# pattern = regex
# retentions = timePerPoint:timeToStore, timePerPoint:timeToStore, ...
[stats]
priority = 110
pattern = ^stats\..*
retentions = 10s:6h,1m:7d,10m:1y
EOF
sudo cp /tmp/storage-schemas.conf storage-schemas.conf
# Make sure log dir exists for webapp
sudo mkdir -p /opt/graphite/storage/log/webapp
# Copy over the local settings file and initialize database
cd /opt/graphite/webapp/graphite/
sudo cp local_settings.py.example local_settings.py
sudo python manage.py syncdb # Follow the prompts; creating a superuser is optional
# statsd
cd /opt && sudo git clone git://
# StatsD configuration
cat >> /tmp/localConfig.js << EOF
{
  graphitePort: 2003
, graphiteHost: ""
, port: 8125
}
EOF
sudo cp /tmp/localConfig.js /opt/statsd/localConfig.js
# Starting statsd
node /path/to/statsd-folder/stats.js /path/to/statsd-folder/localConfig.js

To start the Carbon cache, go to the Graphite bin directory and run,

"./ start"

To start the Graphite dashboard, again from the Graphite bin directory, run,

"./ /opt/graphite"

Now we can access the Graphite dashboard in the browser.

Using Logstash + Statsd + Graphite – Part 1 (Logstash)

This blog helps to set up a simple Logstash + Statsd + Graphite stack, which we have currently deployed in our company. Thanks to @jordansissel for building such a simple and powerful tool, Logstash. We just need the Logstash jar file and a simple config to run it.

Setting up LOGSTASH

First, download the latest Logstash jar file.

Next we need to create a config file, e.g. logstash.conf, which should contain two mandatory parts, "input" and "output", and an optional "filter" part where we can mention filter rules.
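As a minimal sketch (the log path and metric name are placeholders, and the statsd output assumes the plugin bundled with the 1.1.x monolithic jar):

```
input {
  file {
    type => "syslog"
    path => "/var/log/syslog"
  }
}
output {
  # ship event counts to the local statsd daemon
  statsd {
    host      => "localhost"
    increment => "syslog.events"
  }
}
```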


Starting logstash is very easy, just execute the below command.

"java -jar logstash-1.1.1-monolithic.jar agent -f logstash.conf -- web --backend elasticsearch:///?local"

The Elasticsearch-backed web UI can be accessed at "http://ip-address:9292".