Debian, DKIM, Qmail

DKIM signing in Qmail

DKIM and SPF have become the most widely adopted methods for email validation. Even if we want to use DMARC (Domain-based Message Authentication, Reporting & Conformance), we need to configure SPF and DKIM first, since DMARC acts as a layer above them. DMARC allows the receiver's mail server to check whether an email is aligned properly as per the DMARC policy, and it queries the sender's DNS server for the DMARC action, i.e., whether to reject or quarantine the mail if alignment fails. The action is published in a TXT record on the sender's DNS server. There is a good collection of DMARC training videos available on the MAAWG site; those videos give a clear idea of how DMARC works.
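For illustration, a DMARC policy published for a hypothetical domain example.com looks like the record below; `p=` sets the action on alignment failure (none, quarantine, or reject) and `rua=` is where aggregate reports are mailed:

```
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```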

In this post, I will explain how to make Qmail DKIM-sign outgoing mails. There is a qmail patch for this, but since I'm using qmail-1.03 with a custom patch, I was not able to apply the DKIM patch on top of mine. The alternative is a wrapper around "qmail-remote": since qmail-remote is responsible for delivering remote mails, a wrapper around it lets us sign the email and then start the remote delivery. There are a few wrappers mentioned on this site; I'm going to use this qmail-remote wrapper.

Initial Settings

First, move the current "qmail-remote" binary to "qmail-remote.orig". Then download the wrapper and move it to the /var/qmail/bin/ folder.

$ mv /var/qmail/bin/qmail-remote /var/qmail/bin/qmail-remote.orig

$ wget -O /var/qmail/bin/qmail-remote ""

$ chmod 755 /var/qmail/bin/qmail-remote

This wrapper depends on two programs: 1) dktest, which comes with libdomainkeys, and 2) a Perl script for signing the emails. Both of these files must be available at the paths mentioned in the "qmail-remote" wrapper file.

Go through the "" script and install the Perl modules mentioned in it using cpan. There is no official Debian package for libdomainkeys, so we need to compile it from source.

Setting up dktest

Download the latest source code from the sourceforge link.

$ tar -xzf libdomainkeys-0.69.tar.gz

$ cd libdomainkeys-0.69

Edit the Makefile, add "-lresolv" to the end of the "LIBS" line, and run make.
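If you prefer to script that edit instead of doing it by hand, a one-liner like the below works; this is just a sketch that appends the flag to the existing LIBS line:

```shell
# Run inside the extracted libdomainkeys-0.69 directory:
# append -lresolv to the end of the LIBS line, then build.
sed -i 's/^\(LIBS.*\)/\1 -lresolv/' Makefile
make
```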

$ install -m 644 libdomainkeys.a /usr/local/lib

$ install -m 644 domainkeys.h dktrace.h /usr/local/include

$ install -m 755 dknewkey /usr/bin

$ install -m 755 dktest /usr/local/bin

Generate Domain keys for the domains

Before we can sign an email, we must create at least one public/private key pair. I’m going to create a key pair for the domain “”.

$ mkdir -p /etc/domainkeys/

$ cd /etc/domainkeys/

$ dknewkey default 1024 >

$ chown -R root:root /etc/domainkeys

$ chmod 640 /etc/domainkeys/

$ chown root:qmail /etc/domainkeys/

It is very important that the default key file be readable only by root and the group that qmailr (the qmail-remote user) belongs to. Now add a TXT entry to the DNS for "" containing the quoted part of the /etc/domainkeys/
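For a hypothetical domain example.com with the selector "default", the record to publish looks roughly like this, where the p= value is the public key printed by dknewkey (truncated here):

```
default._domainkey.example.com.  IN  TXT  "k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GN..."
```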

Once everything is added, restart "qmail-send" and send a test mail to any non-local domain. If things go fine, we can see a line like the one below in the "qmail-send" log.

$ @40000000517f518b1e1eb75c delivery 1: success: ktest_---_/tmp/dk2.sign.Gajw948FX1A1L0hugfQ/in_dkimsignpl_---_/tmp/dk2.sign.Gajw948FX1A1L0hugfQ/r74.125.25.27_accepted_message./Remote_host_said:_250_2.0.0_OK_1367298812_ps11si19566038pab.170_-_gsmtp/

Once DKIM is working properly, add the SPF entries to our DNS, and we are ready to try out DMARC. DMARC is already in use by mail giants like Google, Yahoo, PayPal, LinkedIn, etc.

Debian, Monitoring, Sensu

Sensu Admin – a GUI for Sensu API

In my previous posts, I've explained how to set up a Sensu server and how to set up checks and handlers. The default dashboard is very simple with limited options, but for those who want a full-fledged dashboard, there is a Rails project on GitHub called Sensu-Admin. So let's try setting it up.

First clone the repository from Github.

$ git clone

Now go to the sensu-admin folder and run bundle install to install all the dependency gems. Then go inside the "config" folder, edit "database.yml", and fill in the database details. I'm going to use MySQL; below is my database config.

   adapter: mysql2
   database: sensudb
   username: sensu
   password: secreto
   host: localhost

Now run rake db:migrate and then rake db:seed. The seed file creates a user account named "" with password "secret".

We can start the Rails app by running "rails s"; this will start the app using the Thin webserver on port 3000. Access the dashboard at "http://server_ip:3000". Log in to the dashboard with the seeded account, go to the "Account" tab, and modify the default username and password. Now we can go through the tabs and check whether the checks, clients, events, etc. are displayed properly. This is a screenshot of the Sensu-Admin dashboard.

Debian, Monitoring, Sensu

Sensu – Adding Checks and Handlers

In my previous post, I explained how to set up the Sensu server and client. Now I'm going to explain how to set up checks and handlers in Sensu. There is a very good collection of sensu-community-plugins.

Setting up Checks

On the Sensu Client Node,

First, clone the plugins repository on the client node. Then install the "sensu-plugin" gem on the client node, and copy the required plugins to the /etc/sensu/plugins/ folder.

On the Sensu Server,

We need to define the check first. Create a JSON config file for the check in /etc/sensu/conf.d. The following is a sample check config:

    {
      "checks": {
        "snmp_check": {
          "handlers": ["default"],
          "command": "/etc/sensu/plugins/check-snmp.rb -w 10 -c 20",
          "interval": 30,
          "subscribers": [ "snmp" ]
        }
      }
    }

The above check will be applied to all clients subscribed to the "snmp" exchange. Based on the interval, the server will publish this check request, which will reach all the clients subscribed to the "snmp" exchange through an arbitrary queue. The client will run the command mentioned in the command part, and then it will publish the result back to the server through the result queue. check-snmp.rb is a small plugin written by me. If we check the sensu-server log, we can see the result coming from the client machine. Below is a similar log output from my sensu-server log.

{"timestamp":1366968018},"check":{"handlers":["default","mailer"],"command":"/etc/sensu/plugins/check-snmp.rb -w 1 -c 3","interval":100,"subscribers":["snmp"],"name":"snmp_check","issued":1366968407,"executed":1366968028,"output":"CheckSNMP WARNING: Warning state detected\n","status":1,"duration":0.526,"history":["0","0","1"]},"occurrences":1,"action":"create"},"handler":{"type":"pipe","command":"true","name":"default"}}

The above log line shows which handlers are enabled for this check, the executed command, the subscribers, the name of the check, the timestamp when the command was issued, the timestamp when the server received the result, the output of the check command, etc. If there is any error while executing the check command, we can see the errors popping up in the logs soon after this line in the server log.

Setting up Handlers

Sensu has a very good collection of handlers, available in the sensu-community-plugins repo on GitHub. For example, there is a handler called "show", available in the debug section of handlers, which displays a detailed debug report about the event as well as the Sensu server's settings. This is the output I got after applying the "show" handler in my server log. But it's not practical to keep checking the logs continuously, so there is another plugin called "mailer", which can send email alerts the way Nagios does.

So first get the “mailer” plugin files from the sensu-community-plugin repo in github.

wget -O /etc/sensu/handlers/mailer.rb
wget -O /etc/sensu/conf.d/mailer.json

Now edit mailer.json and change the settings to fit our environment. We also need to define a new pipe handler for this new handler, so create a file /etc/sensu/conf.d/handler_mailer.json and add the below lines to it.

    {
      "handlers": {
        "mailer": {
          "type": "pipe",
          "command": "/etc/sensu/handlers/mailer.rb"
        }
      }
    }

Now go to one of the check config files where we want to apply this new "mailer" handler.

    {
      "checks": {
        "snmp_check": {
          "handlers": ["default", "mailer"],
          "command": "/etc/sensu/plugins/check-snmp.rb -w 10 -c 20",
          "interval": 30,
          "subscribers": [ "snmp" ]
        }
      }
    }

Now restart the sensu-server for the new changes to take effect. If everything goes fine, when Sensu detects a state change it will execute this mailer handler, and we can see lines like the below in the server log.


Sensu executes the mailer script, and if there is any problem, we will see the corresponding error following the above line; otherwise we will receive the email alert at the email ID mentioned in the "mailer.json" file. But in my case, I was getting an error when Sensu invoked the "mailer" handler.

{"timestamp":"2013-04-25T15:03:32.002132+0530","level":"info","message":"/etc/sensu/handlers/mailer.rb:28:in `handle': undefined method `[]' for nil:NilClass (NoMethodError)"}
{"timestamp":"2013-04-25T15:03:32.002308+0530","level":"info","message":"\tfrom /var/lib/gems/1.9.1/gems/sensu-plugin-0.1.7/lib/sensu-handler.rb:41:in `block in <class:Handler>'"}

After playing around for some time, I found that it was not parsing the options from the mailer.json file, so I manually added the SMTP and email settings directly in the mailer.rb file. Then it started working fine. I'm writing a small script that uses the basic 'net/smtp' library to send out mails. There are many other cool handlers, like sending metrics to Graphite, Logstash, and Graylog, or sending notifications to IRC, XMPP, Campfire, etc. Compared to traditional monitoring tools, Sensu is an amazing tool: we can use any check script, whether it's Ruby or Perl or Bash, it doesn't matter. The one common complaint I heard from other people was the lack of a proper dashboard like the traditional monitoring tools have. Though the Sensu dashboard is a simple one, I'm sure it will improve a lot in the future.

Since I'm a CLI junkie, I don't care much about the dashboard; apart from that, I have many good and interesting things to play with in Sensu. Cheers to portertech and Sonian for open-sourcing such an amazing tool.

Debian, Monitoring, Sensu

Sensu – Cloud Monitoring Tool

Monitoring always plays an important role, especially for sysadmins. There are a lot of monitoring tools available, like Nagios, Zenoss, Icinga, etc. Sensu is a new-generation cloud monitoring tool designed by Sonian. Sensu is basically written in Ruby, uses a RabbitMQ server as the message broker for message transactions, and uses Redis for storing its data.

Sensu has three operation modes:

1) Request-Reply Mode, where the server will send a check request to the clients through the RabbitMQ and the clients will reply back the results.

2) Standalone Mode, where the server will not send any check request, instead the client itself will run the checks according to interval mentioned, and sends the results to the sensu master through the Result queue in RabbitMQ.

3) Push Mode, where the client will send out results to a specific handler.
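As an illustration of the standalone mode, a check definition placed on the client itself just sets "standalone": true and needs no subscribers entry (the check name and plugin path here are hypothetical):

```json
{
  "checks": {
    "disk_check": {
      "command": "/etc/sensu/plugins/check-disk.rb",
      "interval": 60,
      "standalone": true
    }
  }
}
```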

So now we can start installing the dependencies for Sensu, i.e., RabbitMQ and Redis.

Setting up RabbitMQ

Let’s add the RabbitMQ official APT repo.

$ echo "deb testing main" >/etc/apt/sources.list.d/rabbitmq.list

$ curl -L -o ~/rabbitmq-signing-key-public.asc

$ apt-key add ~/rabbitmq-signing-key-public.asc && apt-get update

Now we can install RabbitMQ

$ apt-get install rabbitmq-server erlang-nox

Now we need to generate SSL certificates for RabbitMQ and the Sensu clients. We can use RabbitMQ without SSL too, but it will be more secure with SSL. @joemiller has written a script to generate the SSL certificates; it's available in his GitHub repo. Clone the repo, modify the "openssl.cnf" according to our needs, and then we can go ahead with generating the certificates.

$ git clone git://

$ cd

$ ./ clean && ./ generate

Now copy the server key and cert files to the RabbitMQ folder in “/etc/rabbitmq/”

$ mkdir /etc/rabbitmq/ssl

$ cp server_key.pem /etc/rabbitmq/ssl/

$ cp server_cert.pem /etc/rabbitmq/ssl/

$ cp testca/cacert.pem /etc/rabbitmq/ssl/

Now create the RabbitMQ config file, “/etc/rabbitmq/rabbitmq.config”, and add the following lines in it.

  [
    {rabbit, [
      {ssl_listeners, [5671]},
      {ssl_options, [{cacertfile,"/etc/rabbitmq/ssl/cacert.pem"},
                     {certfile,"/etc/rabbitmq/ssl/server_cert.pem"},
                     {keyfile,"/etc/rabbitmq/ssl/server_key.pem"},
                     {verify,verify_peer},
                     {fail_if_no_peer_cert,true}]}
    ]}
  ].

Once the config file is created, restart the RabbitMQ server. RabbitMQ has a cool management console, which we can enable by running "rabbitmq-plugins enable rabbitmq_management" in the console. Once the management console is enabled, we can access the RabbitMQ web UI at http://SENSU-SERVER:55672 (default username "guest", password "guest"). The amqp protocol should be bound to port 5672 and amqp/ssl to port 5671.

Now let’s create a vhost and user for Sensu in RabbitMQ.

 $ rabbitmqctl add_vhost /sensu

 $ rabbitmqctl add_user sensu mypass

 $ rabbitmqctl set_permissions -p /sensu sensu ".*" ".*" ".*"

Setting up Redis Server

Now we can set up the Redis server. This will be used by Sensu for storing data. Ubuntu's APT repo ships with a recent Redis server, so we can install it directly.

$ apt-get install redis-server

Installing Sensu Server

Sensu has a public repository that can be used to install the necessary Sensu packages. First we need to add the repository's public key.

$ wget -q -O- | sudo apt-key add -

Now add the repo sources in APT

$ echo " deb sensu main" >> /etc/apt/sources.list && apt-get update

$ apt-get install sensu

Enable the sensu services to start automatically during system startup.

$ update-rc.d sensu-server defaults

$ update-rc.d sensu-api defaults

$ update-rc.d sensu-client defaults

$ update-rc.d sensu-dashboard defaults

Copy the client ssl cert and key to /etc/sensu folder, say to a subfolder ssl.

$ cp client_key.pem client_cert.pem  /etc/sensu/ssl/

Now we need to set up the Sensu master. Create a file "/etc/sensu/config.json" and add the below lines.

        {
          "rabbitmq": {
            "ssl": {
              "private_key_file": "/etc/sensu/ssl/client_key.pem",
              "cert_chain_file": "/etc/sensu/ssl/client_cert.pem"
            },
            "port": 5671,
            "host": "localhost",
            "user": "sensu",
            "password": "mypass",
            "vhost": "/sensu"
          },
          "redis": {
            "host": "localhost",
            "port": 6379
          },
          "api": {
            "host": "localhost",
            "port": 4567
          },
          "dashboard": {
            "host": "localhost",
            "port": 8080,
            "user": "admin",
            "password": "sensu@123"
          },
          "handlers": {
            "default": {
              "type": "pipe",
              "command": "true"
            }
          }
        }

By default, the sensu package ships with sensu-server, sensu-client, sensu-api, and sensu-dashboard all together. If we don't want to use the current machine as a client, we can stop sensu-client from running and skip creating the client config. But for testing purposes, I'm going to add the current machine as a client too. Create a file "/etc/sensu/conf.d/client.json" and add the client configuration in JSON format.

          {
            "client": {
              "name": "",
              "address": "",
              "subscriptions": [ "vmmaster" ]
            }
          }

Now restart sensu-client for the changes to take effect. The logs are recorded in the "/var/log/sensu/sensu-client.log" file. We can access the sensu-dashboard at "http://SENSU-SERVER:8080", with the username and password mentioned in the config.json file.

Setting up a Separate Sensu-Client Node

If we want to set up sensu-client on a separate node, just add the Sensu APT repo and install the sensu package. After that, enable only the sensu-client service and disable all the other Sensu services. Then create a config.json file and add only the RabbitMQ server details in it. Now generate a separate SSL certificate for the new client and use that in the config file.

      {
        "rabbitmq": {
          "ssl": {
            "private_key_file": "/etc/sensu/ssl/client1_key.pem",
            "cert_chain_file": "/etc/sensu/ssl/client1_cert.pem"
          },
          "port": 5671,
          "host": "",
          "user": "sensu",
          "password": "mypass",
          "vhost": "/sensu"
        }
      }

Now create the “client.json” in the “/etc/sensu/conf.d/” folder.

              {
                "client": {
                  "name": "",
                  "address": "",
                  "subscriptions": [ "vmmaster" ]
                }
              }

Restart sensu-client and check "/var/log/sensu/sensu-client.log"; if things go fine, we can see the client connecting to the RabbitMQ server, and we can see the config getting applied.

{"timestamp":"2013-04-23T22:53:27.870728+0530","level":"warn","message":"config file applied changes","config_file":"/etc/sensu/conf.d/client.json","changes":{"client":[null,{"name":"client1.test.com","address":"","subscriptions":["vmmaster"]}]}}
{"timestamp":"2013-04-23T22:53:27.879671+0530","level":"info","message":"loaded extension","type":"mutator","name":"only_check_output","description":"returns check output"}
{"timestamp":"2013-04-23T22:53:27.883504+0530","level":"info","message":"loaded extension","type":"handler","name":"debug","description":"outputs json event data"}

Once the Sensu server and client are configured successfully, we can go ahead and add the checks. One of the best things about Sensu is that all the configs are written in JSON format, which is very easy for us to create as well as to understand. In the next blog, I will explain how to create checks, how to add these checks to various clients, and how to add handlers such as email alerts and sending metrics to Graphite.

Debian, NodeJS, SMTP

HARAKA – a NodeJS Based SMTP Server

Today I came across a very interesting project on GitHub. HARAKA is an SMTP server written completely in NodeJS. Like qpsmtpd, apart from the core SMTP features, we can extend the functionality using small plugins. There are very good plugins for HARAKA, basically written in JavaScript. As with Postfix and Qmail, we can easily implement all sorts of checks and features with the help of these plugins.

Setting up HARAKA is very simple. In my setup, I will be using HARAKA as my primary SMTP server, where I will implement all my filtering, and then I will relay to a qmail server for local delivery. There is a plugin written by @madeingnecca on GitHub for directly delivering the mails to users' INBOXes (the mailbox should be in Maildir format). On the real servers we use an LDAP backend for storing all the user databases, so before putting HARAKA into production, I need to rebuild the auth plugin so that HARAKA can talk to LDAP for user authentication in SMTP.

So first we need to install NodeJS and NPM (Node Package Manager). There are several ways of installing NodeJS: we can compile it from source, use NVM (Node Version Manager), or install the packages from APT on Debian machines. I prefer building from source, because the official APT repo has older versions of NodeJS, which can create compatibility issues. The current version is "v0.10.4". Building NodeJS from source is pretty simple.

Just download the latest source code:

$ wget

$ tar xvzf node-v0.10.4.tar.gz && cd node-v0.10.4

$ ./configure

$ make && make install

Once NodeJS is installed, we can go ahead with HARAKA.

$ git clone

Now go inside to the Haraka folder and run the below command. All the dependency packages are mentioned in the package.json file.

$ npm install

The above command will install all the necessary modules mentioned in the package.json file and will setup HARAKA. Now we can setup a separate service folder for HARAKA.

$ haraka -i /etc/haraka     

The above command will create the haraka folder in /etc/, create the config and plugin directories in there, and automatically set the hostname used by Haraka to the output of the hostname command. Now we need to set up the port number and IP that the HARAKA SMTP service should listen on. Go to the config folder in the newly created Haraka service folder, open the "smtp.ini" file, and mention the port number and IP.
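For example, to listen on port 25 on all interfaces, smtp.ini only needs a listen line (the address shown is illustrative):

```ini
; /etc/haraka/config/smtp.ini
listen=0.0.0.0:25
```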

Before starting the SMTP service, let's first disable all the plugins, so that we can proceed in steps. In the config folder, open the "plugins" file and comment out all the plugins: by default Haraka does not create any plugin scripts in the service directory, so most of the plugins mentioned in that file will not work. We will start enabling plugins once we have copied the corresponding plugin's JS files to the plugins directory inside our service directory.

Let’s try running the HARAKA foreground and see if it starts and listens on the port we mentioned.

$ haraka -c /etc/haraka

Once the HARAKA SMTP service starts, we can see the line "[NOTICE] [-] [core] Listening on :::25" on STDOUT, which means HARAKA is listening on port 25. We can just telnet to port 25 and see if we get the SMTP banner.

Now we can try out a plugin. Haraka has a spamassassin plugin, so we will try it out. First install SpamAssassin and start the spam-filtering daemon.

$ apt-get install spamassassin spamc

Now copy spamassassin.js from the plugins folder inside the Haraka git source to the plugins folder of our service directory (by default the plugins folder is not created inside the service directory, so create it). Next we need to enable the plugin. Inside the config folder of our service directory, create a config file "spamassassin.ini" and fill in the necessary details like "reject_threshold", "subject_prefix", and "spamd_socket". Before the plugin will run, we also need to add it to the "plugins" file inside the config folder. Once the spamassassin plugin is added, we can start the HARAKA SMTP service. If the plugin is loaded properly, we can see the below lines on stdout:

[INFO] [-] [core] Loading plugin: spamassassin
[DEBUG] [-] [core] registered hook data_post to spamassassin.hook_data_post
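The spamassassin.ini mentioned above could look like the below; the thresholds, prefix, and socket are illustrative values, so adjust them for your spamd setup:

```ini
; /etc/haraka/config/spamassassin.ini
reject_threshold = 10
munge_subject_threshold = 5
subject_prefix = *** SPAM ***
spamd_socket = localhost:783
```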

Now, using swaks, we can send a test mail and see if SpamAssassin is scoring the emails. Like this, we can enable all the other plugins based on our needs.

Since I'm going to relay the mails, I need to make HARAKA accept mails for all my domains. For that, I need to define all my domains in HARAKA: in the config folder, open the file "host_list" and add all the domains for which HARAKA should accept mails. There is also a regular-expression option available, which can be configured in the "host_list_regex" file.

Now we need to add the SMTP relay. Edit the "smtp_forward.ini" file and mention the relay host IP, port number, and auth details (if required). Then we can restart the HARAKA service and check the SMTP relay by sending test mails using swaks.
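A minimal smtp_forward.ini pointing at a local qmail relay might look like this (the host, port, and credentials are illustrative):

```ini
; /etc/haraka/config/smtp_forward.ini
host = 127.0.0.1
port = 2525
; auth details, if the relay requires them:
; auth_type = plain
; auth_user = relayuser
; auth_pass = secret
```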

I haven't tried the auth plugin yet, but I will be trying it soon. If possible, I will try to use an LDAP backend for authentication, so that HARAKA can be used as a full-fledged SMTP service. More development is happening on the project; I hope it will become a good competitor …


Monitoring with ZENOSS

It's been a year since I really played with CentOS or any Red Hat based distros. I saw a few videos on YouTube relating to Zenoss, which is a new-generation monitoring tool. Later I attended two Zenoss webinars, which made me try it out on my own infrastructure. In this blog I will explain how to set up Zenoss on a CentOS 6.4 machine. Make sure that you have at least 2 GB of RAM. Initially I gave 1 GB of RAM and 2 GB of swap to my CentOS VM, but when I started the Zenoss services, the whole RAM and swap were consumed and finally I was not able to start the services.

Basically, Zenoss needs a RabbitMQ messaging server, Java 6, and MySQL as its dependencies. There is an automated script available from the Zenoss website, which will download and install all the necessary dependencies. It's a bash script; we can download it from the link below.

$  wget --no-check-certificate -O auto.tar.gz

Once we extract the above tarball, we can see a bunch of files. The zenpack_actions.txt file contains the list of ZenPacks which are going to be installed; we can modify it based on our needs.

Once done, we can start the installer script.

$ ./

This starts by downloading the Zenoss RPM file. Once the installation completed, it gave a "connection reset" error while installing the ZenPacks. I went through all the log files and finally found that the error was in RabbitMQ: the zenoss user authentication was failing. Below is the error I was getting in the RabbitMQ log.

=INFO REPORT==== 10-Apr-2013::09:37:00 ===
accepting AMQP connection <0.3533.0> ( ->

=ERROR REPORT==== 10-Apr-2013::09:37:03 ===
closing AMQP connection <0.3533.0> ( ->
                        "PLAIN login refused: user 'zenoss' - invalid credentials",

The error says that the zenoss user's credentials are wrong. So, using the "rabbitmqctl" command, I reset the zenoss user's password. Once the password is changed, we have to mention the new password in the Zenoss global.conf file, which is located in "/opt/zenoss/etc". Open global.conf and replace the amqppassword with the new password. By default, during installation, the script generates a base64-encoded random password using openssl. Once we have replaced the password, we can start the Zenoss service.
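The fix can be sketched as below; 'newsecret' is just a placeholder password, and the sed assumes the stock space-separated global.conf format:

```shell
# Reset the RabbitMQ credentials for the zenoss user
rabbitmqctl change_password zenoss newsecret

# Point Zenoss at the new password
sed -i 's/^amqppassword .*/amqppassword newsecret/' /opt/zenoss/etc/global.conf
```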

$ service zenoss start

While starting the service, Zenoss will continue installing the ZenPacks. Once the service is started, we can access the web GUI at the http://server_ip:8080 URL. Initially it will ask us to set the password for the admin user as well as to create a secondary user. It will also ask us to add hosts to monitor; we can skip this step and move to the dashboard, and later add hosts directly from the Infrastructure tab. I've added my VPS as well as a few of my local servers with SNMP, and so far it is working perfectly. There are many cool things inside Zenoss; I hope this will be a cool playground …

Debian, virtualization

Setting up Apache Cloudstack

Today I was completely playing around with virtualization. I was playing around with Foreman and KVM, then I got WebVirtMgr to play with, which is working perfectly with an LVM storage pool. It's almost a week since I saw a few videos related to Apache CloudStack, so today I decided to give it a try. In this blog I will explain how to set up Apache CloudStack on an Ubuntu 12.10 machine. Apache CloudStack is one of the coolest cloud platforms available; it supports hypervisors like KVM, Xen, and vSphere. The latest version is 4.0.1-incubating. The source can be downloaded from here. There is very good documentation available from CloudStack.

Building Debian Packages from the Source

First we need to install the below dependency packages.

1.  Apache Ant
2.  JDepend
3.  Apache Maven (version 3)
4.  Java (Java 6/OpenJDK 1.6)
5.  Apache Web Services Common Utilities (ws-commons-util)
6.  MySQL
7.  MySQLdb (provides Python database API)
8.  Tomcat 6 (not 6.0.35)
9.  genisoimage
10. dpkg-dev and their dependencies

Maven 3 is not currently available in 12.10, so we'll need to add a PPA repository that includes it.

$ add-apt-repository ppa:natecarlson/maven3

The current PPA supports only Ubuntu 12.04 aka Precise, so edit /etc/apt/sources.list.d/natecarlson-maven3-quantal.list and replace "quantal" with "precise". The content of the file should now look like the below:

deb precise main
deb-src precise main

Now we can start installing the dependencies,

$ apt-get install ant debhelper openjdk-6-jdk tomcat6 libws-commons-util-java genisoimage python-mysqldb libcommons-codec-java libcommons-httpclient-java liblog4j1.2-java python-software-properties maven3

Now we can resolve the build-time dependencies for CloudStack by running the below command.

$ mvn3 -P deps

Now, there is a small bug which adds a dependency on the "chkconfig" package to a few of the CloudStack packages. But "chkconfig" is required on Red Hat based machines, not Debian-based ones. So edit the "debian/control" file inside the Apache CloudStack source folder and remove "chkconfig" from the dependency list. After that we can start building the Debian packages.

$ dpkg-buildpackage -uc -us

The above command will build 16 debian packages.

Setting up a Local APT repo

Now we can set up a local APT repo so that we can install all these 16 packages along with their corresponding dependencies. First ensure that "dpkg-dev" is installed. After that, copy all the packages to a specific location in order to create the local repo.

$ mkdir -p /var/www/cloudstack/repo/binary
$ cp *.deb /var/www/cloudstack/repo/binary
$ cd /var/www/cloudstack/repo/binary
$ dpkg-scanpackages . /dev/null | tee Packages | gzip -9 > Packages.gz

We need to configure the local machine to use this local repo. Add the local repository to APT and update the package lists:

$ echo "deb http://server_url/cloudstack/repo/binary ./" > /etc/apt/sources.list.d/cloudstack.list

$ apt-get update

Now we can install the CloudStack packages.

$ apt-get install cloud-agent cloud-agent-deps cloud-agent-libs cloud-awsapi cloud-cli cloud-client cloud-client-ui cloud-core cloud-deps cloud-python cloud-scripts cloud-server cloud-setup cloud-system-iso cloud-usage cloud-utils

Now, from the web browser, go to "http://server_url:8080/client/". The default username is "admin" and the password is "password". For the admin user, we don't need to provide the domain option.

Debian, libvirt, virtualization

WebVirtManager and Libvirt-KVM

It's been almost two years since I started using KVM and libvirt for creating virtual machines. All our production servers are VMs, and so far KVM has never let us down. Virt Manager is a wonderful tool that can be installed on most Linux distributions, but when I switched to my new MacBook Air, I could not find a similar tool, so I decided to use a web-based one, which can be used from anywhere irrespective of device. I came across WebVirtMgr, a Python/Django based web app which is very active in development, so I decided to give it a try. In my libvirt setup, I'm using LVM as the storage pool for my VMs, so the main thing I wanted to check was whether WebVirtMgr is able to create LVs that can be used as the HDD images for my new VMs from the web interface.

First, we need to install the basic dependency packages.

$ apt-get install git python-django virtinst apache2 libapache2-mod-python libapache2-mod-wsgi

Now open libvirtd.conf and ensure that "listen_tcp" is enabled. Also go to "/etc/default/libvirt-bin" and add "-l" to "libvirtd_opts", so that libvirtd will listen on TCP. The default port is "16509".
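The relevant settings, for reference (Debian paths, illustrative values):

```
# /etc/libvirt/libvirtd.conf
listen_tcp = 1

# /etc/default/libvirt-bin
libvirtd_opts="-d -l"
```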

Now we can clone the repository from the Github.

$ git clone git://
$ cd webvirtmgr
$ ./ syncdb

While running the syncdb, it will ask to create a superuser; create it, as this user can be used to log in to the WebVirtMgr GUI. Now we can create a virtual host in Apache and start the server. The Apache configurations are available in the README. I've added the below WSGI settings in my default Apache site.

WSGIScriptAlias / /var/www/webvirtmgr/wsgi/django.wsgi
Alias /static /var/www/webvirtmgr/virtmgr/static/
<Directory /var/www/webvirtmgr/wsgi>
    Order allow,deny
    Allow from all
</Directory>

Ensure that the directory is writable by the Apache user. For testing, we can also start the server from the command line using the below command.

$ ./ runserver x.x.x.x:8000 (x.x.x.x – your server's IP address)

This command will start WebVirtMgr listening on port "8000", so we can access the URL from the browser. The default username and password are the ones we created during the syncdb. Before adding a connection, we need to create a user which can access libvirt. For that we are going to use "saslpasswd2"; ensure that the sasl2-bin package is installed on the machine.

$ saslpasswd2 -a libvirt testuser     # replace testuser with your user name

To list all the users, we can use the sasldblistusers2 command.

$ sasldblistusers2 -f /etc/libvirt/passwd.db
testuser@cloud: userPassword

Note that the actual user name is testuser@cloud, where cloud is the hostname of my server. This full user name has to be used when adding connections.

Now log in to WebVirtMgr and click on "Add Connection". Fill in the connection name, the IP of the server, the user we created using saslpasswd2 (i.e., testuser@cloud), and the password for that user. If everything goes fine, we can see the host connected. Now click on "Overview" to see the settings of the host.

Next I needed to check the storage-pool part. Since the storage pool is already active and running, it gets displayed under the storage pool option. If a new pool has to be created, click the "add pool" option, select the "lvm" type, and define the volume group name and the physical volumes.

I tried creating a new VM from the interface; while creating it, I selected my volume group as the storage, and it successfully created an LV of the specified size, and I was able to continue my installation using the VNC option available in WebVirtMgr.