Debian, debian-packaging, debuild

Automating Debian Package Management

With the rise of CI tools like Jenkins/GitLab and configuration management tools like Salt/Ansible, continuous integration has become very flexible. Most projects now use Git for version control and CI tools like Jenkins to build and test the packages automatically whenever a change is pushed to the repo. Once the build is successful, the packages are pushed to a repository so that config management systems like Salt/Puppet/Ansible can go ahead and perform the upgrade. In my previous blogs, I've explained how to build a Debian package and how to create and manage APT repos via aptly. In this blog I'll explain how to automate these two processes.

So the flow is like this: we have a GitHub repo, and once a change is pushed to it, GitHub sends a hook to our Jenkins server, which in turn triggers the Jenkins package build. Once the package has been built successfully, Jenkins automatically adds the new packages to our repo and publishes them to our APT repo via aptly.

Installing Jenkins

First, let’s setup a Jenkins build server.

$ wget -q -O - https://jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -

$ echo "deb http://pkg.jenkins-ci.org/debian binary/" > /etc/apt/sources.list.d/jenkins.list

$ apt-get update && apt-get install jenkins

$ /etc/init.d/jenkins restart

Once the Jenkins service is started, we can access the Jenkins UI at "http://jenkins-server-ip:8080". By default there is no authentication on this URL, so anyone who can reach it gets the full Jenkins UI.

Creating a Build Job in Jenkins

In order to use a Git repo, we have to install the Git plugin first. In the Jenkins UI, go to "Manage Jenkins" -> "Manage Plugins" -> "Available", search for "Git plugin" and install it. Once the Git plugin has been installed, we can create a new build job.

Click on "New Item" on the home page, select "Freestyle Project" and click "OK". On the next page we configure all the necessary steps for the build job. Fill in the details like project name, description etc. Under "Source Code Management", select Git and enter the repo URL. Make sure that the jenkins user has access to the repo. We could also use deploy keys, but I've generated a separate SSH key for the Jenkins user and added it to GitHub. Under "Build Triggers", select "Build when a change is pushed to GitHub" so that Jenkins starts the build job every time a change is pushed to the repo.

Under the Build section, click on "Add build step", select "Execute shell" and add our package build script, which is stage 1.

set -e
set -x
debuild -us -uc
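Optionally, if every CI build should produce a uniquely versioned package, a changelog bump can be added just before the debuild call above. A rough sketch using dch from devscripts; BUILD_NUMBER is an environment variable Jenkins injects, and the email and version suffix here are just placeholders:

export DEBEMAIL="jenkins@example.com"      # identity recorded in the changelog entry
export DEBFULLNAME="Jenkins Builder"
dch --local "+ci${BUILD_NUMBER}" "Automated Jenkins build ${BUILD_NUMBER}"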

In stage 2, I'm going to publish the newly built packages to my APT repo.

aptly repo add myapt ../openvpn*.deb
/usr/bin/env script -qfc "aptly publish -passphrase=<GPG passphrase> update myapt"

If you look at the command above, I've used the script command. This is because I was getting the error "aptly stderr: gpg: cannot open tty `/dev/tty': No such device or address" whenever I tried to update a repo via aptly from Jenkins. This is due to a bug in aptly; the fix has landed on the master branch but is not yet released. The script command is a temporary workaround for this bug.

Now we have a build job ready. We can trigger a build manually to test that the job works. If the build is successful, we are done with our build server. The final step is configuring GitHub to send a trigger whenever a change is pushed to GitHub.

Configuring Github Triggers

Go to the GitHub repo and click on the repo settings. Open "Webhooks and Services", select "Add Service" and choose the "Jenkins (GitHub plugin)" service. It will ask for the Jenkins hook URL, which is "http://<jenkins-server-ip>:8080/github-webhook/"; add the service. Once the service is set, we can click on "Test service" to check whether the webhook is working fine.

Once the test hook is sent, go to the Jenkins job page and select "GitHub Hook Log". The test hook should show up there; if not, something is wrong in the config.

Now we have a fully automated build and release pipeline. Config management tools like Salt/Ansible can take it from there and start the deployment process.
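For example, once the repo is updated, an ad-hoc Ansible run could roll the new package out; the host group vpn-servers and the package name are placeholders for whatever your inventory actually uses:

$ ansible vpn-servers -b -m apt -a "name=openvpn state=latest update_cache=yes"   # -b escalates privileges (older Ansible versions use --sudo)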

Debian, debian-packaging

Managing Debian APT Repository via Aptly

In my previous blog, I explained how to build a Debian package from source. In this blog I'm going to explain how to create and manage our own APT repository. Enter aptly, a Swiss army knife for Debian repository management: it allows us to mirror remote repositories, manage local package repositories, take snapshots, pull new versions of packages along with their dependencies, and publish the result as a Debian repository. Aptly can also upload the repo to Amazon S3, but we need to install APT S3 support in order to use the repo from S3.

First, let's install aptly on our build server. More detailed installation documentation is available on the aptly website.

$ echo "deb http://repo.aptly.info/ squeeze main" > /etc/apt/sources.list

$ gpg --keyserver keys.gnupg.net --recv-keys 2A194991

$ gpg -a --export 2A194991 | sudo apt-key add -

$ apt-get update && apt-get install aptly

Let’s create a repo,

$ aptly repo create -distribution=wheezy -component=main my-repo    # where my-repo is the name of the repository

Once the repo is created, we can start adding our newly created packages to our new repo.

$ aptly repo add <repo name> <your debian file>    # in my case: aptly repo add my-repo openvpn_2.3.6_amd64.deb

The above command adds the new package to the repo. To make this repo usable, we need to publish it, and a valid GPG key is required for publishing. So let's create the GPG key for aptly.

$ gpg --gen-key

$ gpg --export --armor <email-id-used-for-gpg-key-creation> > myrepo-pubkey.asc   # exports a public key that can be distributed

$ gpg --send-key KEYNAME     # use this if we want to push the key to a public keyserver; we can also pass --keyserver <server-url> to specify a particular keyserver

Once we have our GPG key, we can publish our repo. Aptly can publish the repo to S3, or it can publish it locally and we can use any webserver to serve it.

$ aptly publish repo -distribution="wheezy" my-repo

Once published, we can point the webserver to "~/.aptly/public", where the repo files are created. Aptly also comes with an embedded webserver, which can be invoked by running aptly serve. Aptly really makes repo management easy. We can integrate this into our Jenkins job so that each time we build a package, we add it to the repository and publish it straight away.
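On a client machine, consuming the published repo then looks roughly like this; repo.example.com stands for wherever the webserver or aptly serve is reachable, and the exact URL path depends on the publish prefix used:

$ wget -qO - http://repo.example.com/myrepo-pubkey.asc | apt-key add -
$ echo "deb http://repo.example.com/ wheezy main" > /etc/apt/sources.list.d/my-repo.list
$ apt-get update && apt-get install openvpn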

Debian, debian-packaging, debuild

Building a Debian Package

Installing applications via packages saves us a lot of time. Being an ops-oriented guy, compiling applications from source is sometimes a pain and time consuming, especially the dependencies. With the rise of config management systems, people started writing automated scripts that install the necessary dependencies and run the usual make && make install. But an application like FreeSWITCH takes 15+ minutes to compile, which is definitely a bad idea when you want to deploy a new patch on a cluster. In such cases packages are a real life saver: build the package once as per our requirements and deploy it throughout the infrastructure. And with tools like Jenkins, Travis CI etc. we can attain a good level of CI.

In this blog, I'm going to explain how to build a Debian package from scratch. First let's install the two main dependencies for a build machine.

$ apt-get install devscripts build-essential

For the past few days I've been playing with OpenVPN and SoftEther. I'm going to build a simple Debian package for OpenVPN from source. The current stable version of OpenVPN is 2.3.6. First, let's get the OpenVPN source code.

$ wget http://swupdate.openvpn.org/community/releases/openvpn-2.3.6.tar.gz

$ tar xvzf openvpn-2.3.6.tar.gz && cd openvpn-2.3.6

To build a package, we first need to create a debian folder; this is where we place all the files required for building the package.

$ mkdir debian

As per the Debian packaging documentation, the mandatory files are rules, control and changelog. The changelog content must follow the exact syntax, otherwise packaging will fail at the very first stage. There are some optional files we can use as well. Below are the files present in my debian folder; a minimal changelog and control are sketched just after the list.

changelog           => changelog for my package
control             => details about the package, including its dependencies
dirs                => any directories we need which are not created by the normal installation procedure; handled by 'dh_installdirs'
openvpn.default     => this file will be copied to /etc/default/openvpn
openvpn.init        => this file will be copied to /etc/init.d/openvpn; handled by 'dh_installinit'
postinst.debhelper  => any action that needs to be performed after the package installation completes, like creating a specific user or starting a service
postrm.debhelper    => any action that needs to be performed after the package removal completes, like deleting a specific user
prerm.debhelper     => any action that needs to be performed before the package removal starts, like stopping the service
rules               => rules for the build procedure
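For reference, a minimal changelog and control for this build might look like the sketch below; the maintainer details, date and Build-Depends are only illustrative and must match what your package actually needs:

$ cat > debian/changelog <<'EOF'
openvpn (2.3.6-1) unstable; urgency=low

  * Custom build of OpenVPN 2.3.6, installed under /opt/openvpn.

 -- Deepak <deepak@example.com>  Mon, 05 Jan 2015 10:00:00 +0530
EOF

$ cat > debian/control <<'EOF'
Source: openvpn
Section: net
Priority: optional
Maintainer: Deepak <deepak@example.com>
Build-Depends: debhelper (>= 9), libssl-dev, liblzo2-dev, libpam0g-dev
Standards-Version: 3.9.4

Package: openvpn
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: OpenVPN built from upstream source
 Custom OpenVPN 2.3.6 package, installed under /opt/openvpn.
EOF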

In my case I wanted to install OpenVPN at a custom location, say '/opt/openvpn'. If we were building manually from scratch, we would pass the prefix as './configure --prefix=/opt/openvpn'. In the packaged build, however, dh_auto_configure runs './configure' with the default options, i.e. no custom prefix, so we need to override that step if we want a custom prefix. Below is the content of my rules file.

# rules file

    #!/usr/bin/make -f
    # vim: tabstop=4 softtabstop=4 noexpandtab fileencoding=utf-8

    # Uncomment this to turn on verbose mode.
    export DH_VERBOSE=1

    DEB_DIR=$(CURDIR)/debian/openvpn

    %:
        dh $@
    override_dh_auto_configure:                      # override of configure
        ./configure --prefix=/opt/openvpn

Once we have all the necessary files in place, we can start the build process. Make sure that all the dependency packages mentioned in the control file are installed on the build server.
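A quick way to verify that is dpkg-checkbuilddeps (from dpkg-dev), or mk-build-deps from devscripts, which can also install whatever is missing (it needs the equivs package):

$ dpkg-checkbuilddeps                    # lists any missing Build-Depends
$ mk-build-deps -i -r debian/control     # optionally install them automatically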

    $ debuild -us -uc

If the build completes successfully, we will see the deb package as well as the source package just above our openvpn source folder, which is the default path where dh_builddeb places the files. We can override that too.

#!/usr/bin/make -f
# vim: tabstop=4 softtabstop=4 noexpandtab fileencoding=utf-8

# Uncomment this to turn on verbose mode.
export DH_VERBOSE=1

DEB_DIR=$(CURDIR)/debian/openvpn

%:
    dh $@
override_dh_auto_configure:
    ./configure --prefix=/opt/openvpn
override_dh_builddeb:
    dh_builddeb --destdir=./deb-pkg/
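With the override in place, dh_builddeb drops the .deb under ./deb-pkg/. Before shipping it anywhere, the package metadata and file list can be inspected like this (the exact filename depends on the version and architecture built):

$ dpkg -I deb-pkg/openvpn_2.3.6_amd64.deb    # control metadata: depends, description, etc.
$ dpkg -c deb-pkg/openvpn_2.3.6_amd64.deb    # file list, should show the /opt/openvpn paths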

So now we have the Debian package. We can test installing it manually via 'dpkg -i'. This was just a walkthrough of how to build a simple Debian package. In my next blog, I'll discuss how to create and manage a private APT repository using an awesome tool called aptly.

Debian, logstash, Monitoring

Lumberjack – a Light Weight Log Shipper for Logstash

Logstash is one of the coolest projects that I always wanted to play around with. Since I'm a sysadmin, I'm forced to handle multiple apps, each of which logs in a different format. The weirdest part is the timestamps, where most apps use their own time formats. Logstash helps us solve such situations: we can remodel the timestamps into a standard format, use the predefined filters for filtering the logs, and even create our own filters using regex. All the documentation is available on the Logstash website. Logstash has 3 main parts: 1) Input -> from which the logs are shipped to Logstash, 2) Filter -> for filtering the incoming logs to suit our needs, 3) Output -> for storing or relaying the filtered logs to various applications.

Lumberjack is one such input plugin designed for Logstash. Though the plugin is still in beta, I decided to give it a try. We could also use Logstash itself to ship logs to a centralized Logstash server, but the JVM makes that difficult on many of my resource-constrained machines. Lumberjack claims to be a lightweight log shipper which uses SSL, and we can add custom fields for each line of log that we ship.

Setting up Logstash Server

Download the latest Logstash jar file from the Logstash website and create a configuration file for the Logstash instance. In the config file, we have to enable the lumberjack input plugin. Lumberjack uses an SSL CA to verify the server, so we need to generate a certificate and key for the Logstash server. We can use the command below to generate them.

$ openssl req -x509 -newkey rsa:2048 -keyout /etc/ssl/logstash.key -out /etc/ssl/logstash.pub -nodes -days 3650

Below is the sample Logstash conf file which I used for stashing logs from socklog.

input {

  lumberjack {
    type => "qmail"
    port => 4545
    ssl_certificate => "/etc/ssl/logstash.pub"
    ssl_key => "/etc/ssl/logstash.key"
  }
}

filter {
  grok {
        type => "socklog"
        pattern => "%{DATA:logfacility}: %{SYSLOGTIMESTAMP:timestamp} %{DATA:program}: *"
  }
  mutate {
        replace => [ "@message", "%{mess}" ]
  }
  date {
        type => "socklog"
        match => [ "timestamp", "MMM dd HH:mm:ss" ]
  }
}

output {
  stdout {
    debug => true
      }
}

Now we can start Logstash using the above config.

$ java -jar logstash-1.1.13-flatjar.jar agent -f logstash.conf -v

Once Logstash has started successfully, we can use netstat to check that it is listening on port 4545. I'm currently running Logstash in the foreground; below is the log output from Logstash.

Starting lumberjack input listener {:address=>"0.0.0.0:4545", :level=>:info}
Input registered {:plugin=><LogStash::Inputs::Lumberjack type=>"socklog", ssl_certificate=>"/etc/ssl/logstash.pub", ssl_key=>"/etc/ssl/logstash.key", charset=>"UTF-8", host=>"0.0.0.0">, :level=>:info}
Match data {:match=>{"@message"=>["%{DATA:logfacility}: %{SYSLOGTIMESTAMP:timestamp} %{DATA:program}: *"]}, :level=>:info}
Grok compile {:field=>"@message", :patterns=>["%{DATA:logfacility}: %{SYSLOGTIMESTAMP:timestamp} %{DATA:program}: *"], :level=>:info}
Output registered {:plugin=><LogStash::Outputs::Stdout debug_format=>"ruby", message=>"%{@timestamp} %{@source}: %{@message}">, :level=>:info}
All plugins are started and registered. {:level=>:info}

Setting up Lumberjack agent

On the machine from which we are going to ship the logs, clone the Lumberjack GitHub repo.

$ git clone https://github.com/jordansissel/lumberjack.git

Install the fpm ruby gem, which is required to build the lumberjack package.

$ gem install fpm

$ cd lumberjack && make

$ make deb   => This will build a debian package of the lumberjack

$ dpkg -i lumberjack_0.0.30_amd64.deb  => The package will install all the files to the `/opt/lumberjack`

Now copy the SSL certificate which we generated on the Logstash server to the Lumberjack machine. Once the SSL certificate has been copied, we can start the lumberjack agent.

$ /opt/lumberjack/bin/lumberjack --ssl-ca-path ./ssl/logstash.pub --host logstash.test.com --port 4545 /var/log/socklog/main/current

Below is the log output from the lumberjack.

2013-06-25T15:04:32.798+0530 Watching 1 files, setting open file limit to 103
2013-06-25T15:04:32.798+0530 Watching 1 files, setting memory usage limit to 1048576 bytes
2013-06-25T15:04:32.878+0530 Connecting to logstash.test.com(192.168.19.19):4545
2013-06-25T15:04:33.186+0530 slow operation (0.307 seconds): connect to 192.168.19.19:4545
2013-06-25T15:04:33.186+0530 Connected successfully to logstash.test.com(192.168.19.19):4545
2013-06-25T15:04:34.653+0530 Declaring window size of 4096
2013-06-25T15:04:36.734+0530 flushing since nothing came in over zmq

Now we will start seeing the output from Logstash on our screen, since we are using the 'stdout' output plugin. A very good, detailed write-up about Lumberjack and Logstash, written by Brian Altenhofel, can be found here. He gave a talk on this at DrupalCon 2013, Portland; the video for the talk is available here.

Debian, mcollective

MCoMaster – an HTML5 Based GUI for Mcollective With Redis Discovery Method

MCollective is a framework to build server orchestration or parallel job execution systems. It's completely built in Ruby and, of course, open sourced. There are quite a few good plugins available for MCollective. In my previous blogs I've explained how to set up MCollective with ActiveMQ and how to use the MongoDB discovery method with MCollective. In this blog I will explain how to use MCollective with the RabbitMQ connector and a new project called mcomaster, an HTML5 web interface to MCollective which uses a Redis discovery method in the backend.

Setting up RabbitMQ

First add the RabbitMQ APT repo,

$ echo "deb http://www.rabbitmq.com/debian/ testing main" >> /etc/apt/sources.list

$ curl -L -o ~/rabbitmq-signing-key-public.asc http://www.rabbitmq.com/rabbitmq-signing-key-public.asc 

$ apt-key add ~/rabbitmq-signing-key-public.asc

$ apt-get update

And install the RabbitMQ server

$ apt-get install rabbitmq-server

Now, using rabbitmqctl, create a vhost "/mcollective". Also create a user called "mcollective" in RabbitMQ and give this user full permissions on the new mcollective vhost.

$ rabbitmqctl add_vhost /mcollective

$ rabbitmqctl add_user mcollective mcollective

$ rabbitmqctl set_permissions -p /mcollective mcollective ".*" ".*" ".*"

There is also a management GUI for RabbitMQ from which we can perform all the above tasks. For that we need to enable the management plugin in RabbitMQ.

$ rabbitmq-plugins enable rabbitmq_management

And restart the RabbitMQ server. After restarting, we can access the GUI at "http://ip:55672". The default login user is "guest" with password "guest". We need to create the two exchanges that the MCollective plugin needs. We can use the rabbitmqadmin command as mentioned in the MCollective documentation, or use the management GUI.

$ rabbitmqadmin declare exchange vhost=/mcollective name=mcollective_broadcast type=topic

$ rabbitmqadmin declare exchange vhost=/mcollective name=mcollective_directed type=direct 

Configuring Mcollective

Now install MCollective from the APT repository. We can use the Puppet Labs APT repo for the latest versions.

$ echo "deb http://apt.puppetlabs.com precise main" >> /etc/apt/sources.list

$ wget -O /tmp/pubkey.gpg http://apt.puppetlabs.com/pubkey.gpg && apt-key add /tmp/pubkey.gpg

$ apt-get install mcollective

Now go to the "/etc/mcollective/" folder, open the server.cfg file and add the lines below to enable the RabbitMQ connector.

direct_addressing = 1

connector = rabbitmq
plugin.rabbitmq.vhost = /mcollective
plugin.rabbitmq.pool.size = 1       # increase this value if you are going to use a RabbitMQ cluster
plugin.rabbitmq.pool.1.host = stomp.example.com
plugin.rabbitmq.pool.1.port = 61613
plugin.rabbitmq.pool.1.user = mcollective
plugin.rabbitmq.pool.1.password = mcollective

And restart the MCollective service for the changes to take effect.

Setting up McoMaster

Mcomaster is an HTML5 web interface to MCollective which uses a Redis discovery method. Thanks to @ajf8 for such a wonderful project. Clone the repository from GitHub.

$ git clone https://github.com/ajf8/mcomaster.git

Now copy the meta.rb registration file to the MCollective plugins directory on all of the MCollective servers.

$ cp mcollective/registration/meta.rb /usr/share/mcollective/plugins/mcollective/registration/meta.rb

We need to enable the registration agent on ALL nodes with the following server.cfg settings

registerinterval = 300
registration = Meta

Now enable direct addressing by adding the line below to server.cfg on all the servers where we copied the meta registration plugin.

direct_addressing=1

We need one MCollective node which receives the registrations and saves them in Redis, so ensure that a Redis server is installed on that node. We can use the same server on which we are going to install mcomaster.
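On Debian, installing Redis is a one-liner:

$ apt-get install redis-server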

$ cp mcollective/agent/registration.rb /usr/share/mcollective/plugins/mcollective/agent/

Add the Redis server details to server.cfg on the node which receives the registrations.

plugin.redis.host = localhost
plugin.redis.port = 6379
plugin.redis.db = 0

Install the MCollective client on the server where we are installing mcomaster. Then configure the discovery agent, which will query the Redis database for discovery data.

$ cp mcollective/discovery/redisdiscovery.* /usr/share/mcollective/plugins/mcollective/discovery/

And add the following settings to the client.cfg

plugin.redis.host = localhost
plugin.redis.port = 6379
plugin.redis.db = 0
default_discovery_method = redisdiscovery
direct_addressing = yes

Using the mco command we can check node discovery using the redisdiscovery method, which is now the default. The old mc method also still works.

root@testcloud:~# mco rpc rpcutil ping --dm redisdiscovery -v
info 2013/05/29 10:08:45: rabbitmq.rb:10:in `on_connecting' TCP Connection attempt 0 to stomp://mcollective@localhost:61613
info 2013/05/29 10:08:45: rabbitmq.rb:15:in `on_connected' Conncted to stomp://mcollective@localhost:61613
Discovering hosts using the redisdiscovery method .... 4

 * [ ============================================================> ] 4 / 4


deeptest                                : OK
    {:pong=>1369802330}

ubuntults                               : OK
    {:pong=>1369802330}

debwheez                                : OK
    {:pong=>1369802330}

testcloud                               : OK
    {:pong=>1369802326}



---- rpcutil#ping call stats ----
           Nodes: 4 / 4
     Pass / Fail: 4 / 0
      Start Time: 2013-05-29 10:08:46 +0530
  Discovery Time: 30.95ms
      Agent Time: 85.24ms
      Total Time: 116.19ms
info 2013/05/29 10:08:46: rabbitmq.rb:20:in `on_disconnect' Disconnected from stomp://mcollective@localhost:61613

Go to the mcomaster repo folder and run bundle install to install all the dependency gems. Copy the existing application.example.yml to a new application.yml, and edit it to fill in the username, password, Redis server details etc. Create a new database.yml and fill in the database details. I'm using MySQL as the database backend; below is my config.

production:
  adapter: mysql
  database: mcollective
  username: mcollective
  password: mcollective
  host: localhost
  socket: "/var/run/mysqld/mysqld.sock"

development:
  adapter: mysql
  database: mcollective
  username: mcollective
  password: mcollective
  host: localhost
  socket: "/var/run/mysqld/mysqld.sock"

We need to migrate the database,

$ RAILS_ENV=production rake db:reset

After that, we need to compile the assets,

$ rake assets:precompile

Let’s start the server,

$ rails server -e production

The above command will make the application listen on the default port 3000. Access the mcomaster GUI from the browser using the username and password added in the "application.yml" file. Once we log in, we will be able to see the nodes discovered through the redisdiscovery method. If we add plugins like service and process, they will show up in mcomaster and we can also run them from within mcomaster.

This is a screenshot of the MCoMaster dashboard.

Debian, mcollective

Running MCollective in a Multi-Ruby Environment

MCollective is one of the best open sourced orchestration tools. It's built on Ruby and depends on Ruby 1.8. But nowadays most Ruby apps are built on Ruby 1.9+, so the default Ruby version will be 1.9+. This usually causes problems for MCollective. By default, when we install MCollective from APT, it installs Ruby 1.8 as well. So the best hack to make MCollective run in such a multi-Ruby setup is to edit the /usr/sbin/mcollectived file:

Change the shebang from #!/usr/bin/env ruby to #!/usr/bin/env ruby1.8.
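If you'd rather script the edit (say, from a config management run), a one-liner along these lines should do it:

$ sed -i '1s|^#!/usr/bin/env ruby.*|#!/usr/bin/env ruby1.8|' /usr/sbin/mcollectived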

This will let mcollectived work normally. That's it: a small hack, but it helps us in a multi-Ruby environment.

Debian, Qmail, SMTP

Plugging QPSMTPD Service With QMAIL

QPSMTPD is a flexible SMTP daemon written in Perl. Apart from the core SMTP features, all functionality is implemented in small extension plugins using an easy to use, object oriented plugin API. I generally use Qmail based mail servers with a custom qmailqueue. I use tcpsvd (TCP/IP service daemon) for the SMTP service, and the mails are passed to a custom qmailqueue, which is basically a Perl script with custom filters for filtering the mails. QPSMTPD has a variety of plugins, including SPF checks, DKIM checks and even DMARC. If you are a Perl guy, you can also build custom plugins. The main reason I got attracted to QPSMTPD was its plugin-based nature.

In this blog I will explain how to set up QPSMTPD along with the Qmail MTA. As I mentioned earlier, I'm using a custom qmailqueue with custom filtering which varies from client to client, so I will be using QPSMTPD to do the initial filtering like DNSBL/RBL checks, SPF, DKIM, DMARC etc. A brief description of the various qpsmtpd plugins is available here.

The qpsmtpd source is available on GitHub. The source comes with default run scripts which can be used with runit/daemontools. Clone the qpsmtpd source and install the dependency Perl modules.

$ git clone https://github.com/smtpd/qpsmtpd/

$ cpan -i Net::DNS

$ cpan -i MIME::Base64

$ cpan -i Mail::Header

Now create a user for the qpsmtpd service, say "smtp", with its home folder set to the location of the qpsmtpd folder, and chown the qpsmtpd folder to the smtp user. Add a sticky bit to the qpsmtpd folder by running chmod o+t qpsmtpd, in order to make supervise start the log process. By default there is a sample config folder called "config.sample" inside the source folder; copy the entire folder to create a new config folder.

$ cp config.sample config
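In command form, the user and permission setup described above might look like this (assuming the source was cloned to /usr/local/src/qpsmtpd):

$ useradd -d /usr/local/src/qpsmtpd -s /bin/false smtp
$ chown -R smtp:smtp /usr/local/src/qpsmtpd
$ chmod o+t /usr/local/src/qpsmtpd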

In the config folder, edit "IP" to specify which IP the qpsmtpd daemon should bind to; putting "0" binds it to all interfaces. If we go through qpsmtpd's run script in the source folder, it depends on two binaries, softlimit and tcpserver. The softlimit binary comes with the daemontools Debian package and the tcpserver binary comes with the ucspi-tcp package, so let's install those two packages.

$ apt-get install ucspi-tcp daemontools runit

Now start the qpsmtpd service. I'm using runit for service supervision.

$ update-service --add /usr/local/src/qpsmtpd qpsmtpd

The above command adds the service. We can check its status with the sv s qpsmtpd command, which shows whether the service is running or not. Now go into the "config" folder and open the "plugins" file: this is where we enable plugins, by adding the plugin names with the corresponding options. By default the "rcpt_ok" plugin must be enabled; this plugin implements qmail's rcpthosts feature and accepts mail for the domains mentioned in the "rcpthosts" file in the config folder. If it is not enabled, no mail will be accepted at all. The best way to understand how each plugin works is to comment out all the plugins except "rcpt_ok" and then add them back one by one. The plugins themselves live in the "plugins" folder of the qpsmtpd source, and the basic information about each plugin is documented in the plugin file itself.

The most commonly used plugins are auth for SMTP AUTH, DNSBL/RBL, spamassassin, etc. We enable them by adding their names to the config/plugins file. For example, since I'm using a custom qmailqueue, once qpsmtpd has accepted a mail it has to be queued to my custom QMAILQUEUE, so I need to enable the queue plugin by adding the line below to the plugins file inside the config folder.

queue/qmail-queue /var/qmail/bin/qmail-scanner-queue

If you are using another MTA, you can use the corresponding queue plugin: for Postfix "postfix-queue", for Exim "exim-bsmtp", or, if you want to use QPSMTPD as a relaying server, the "smtp-forward" plugin relays mail to another SMTP server. So once a mail has been accepted by qpsmtpd, it is queued to my custom qmail queue, which then starts the mail delivery. Similarly, I use an LDAP backend for SMTP authentication, so I need to enable the "auth/auth_ldap_bind" plugin. We can add other plugins the same way. The DMARC plugin is not shipped by default, but we can get it from here.
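To illustrate, a trimmed-down config/plugins file for the setup described here might look like the lines below; the plugin arguments are only indicative, so check each plugin's own documentation for the exact options:

rcpt_ok
dnsbl
check_basicheaders
auth/auth_ldap_bind
queue/qmail-queue /var/qmail/bin/qmail-scanner-queue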

Use a tool like swaks for sending test mails, because plugins like check_basicheaders will not accept mails without proper headers, so sending mails via telnet sometimes won't work. Swaks is a good tool for sending test mail. We can increase the log level by editing the config/loglevel file; it's better to raise it to debug so that we get more details on errors. Some plugins need certain Perl modules; if one is missing, the error will show up in the qpsmtpd logs, so install those modules via cpan.

By default the run script uses tcpserver to start the service. There are other deployment models as well, like the forkserver, the pre-fork daemon, Apache::Qpsmtpd etc. To use the default TLS plugin, we need to use the forkserver model; the forkserver invocation is available in the same run script, but commented out by default. The default spool directory is a "tmp" folder inside the QPUSER's (i.e. the "smtp" user's) home folder. In my case I'm using a separate spool folder, /var/spool/qpsmtpd; for that, edit lib/Qpsmtpd.pm, go to the "spool_dir" subroutine and set '$Spool_dir = "/var/spool/qpsmtpd/";'. Then create the spool directory owned by the "smtp" user with permissions 0700, and restart the qpsmtpd service.
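Creating the spool directory as described is just:

$ mkdir -p /var/spool/qpsmtpd
$ chown smtp /var/spool/qpsmtpd
$ chmod 0700 /var/spool/qpsmtpd
$ sv restart qpsmtpd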

Now, to enable TLS, enable the tls plugin in the config/plugins file like this: "tls cert_path priv_key_path ca_path". If there is no TLS certificate available, we can generate one using the "tls_cert" Perl script available in the plugins folder. We also need to edit the config/tls_before_auth file and put the value "0", otherwise AUTH will not be offered unless TLS/SSL is in place. Now we can try sending a test mail using swaks with TLS enabled. Below is my swaks output.
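The invocation that produces a session like the one below is roughly the following; the server address, credentials, sender and recipient are of course specific to my setup:

$ swaks --server 192.168.42.189 --port 587 --tls \
        --auth PLAIN --auth-user deepak --auth-password 'secret' \
        --from deepak@beingasysadmin.com --to deepakmdass88@gmail.com \
        --header "Subject: testing TLS + Auth in qpsmtpd"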

=== Trying 192.168.42.189:587...
=== Connected to 192.168.42.189.
<-  220 beingasysadmin.com ESMTP  send us your mail, but not your spam.
 -> EHLO deeptest.beingasysadmin.com
<-  250-beingasysadmin.com Hi deeptest [192.168.42.184]
<-  250-PIPELINING
<-  250-8BITMIME
<-  250-SIZE 5242880
<-  250-STARTTLS
<-  250 AUTH PLAIN LOGIN CRAM-MD5
 -> STARTTLS
<-  220 Go ahead with TLS
=== TLS started w/ cipher xxxxxx-xxx-xxxxxx
=== TLS peer subject DN="/C=XY/ST=unknown/L=unknown/O=QSMTPD/OU=Server/CN=debwheez.beingasysadmin.com/emailAddress=postmaster@debwheez.beingasysadmin.com"
 ~> EHLO deeptest.beingasysadmin.com
<~  250-beingasysadmin.com Hi deeptest [192.168.42.184]
<~  250-PIPELINING
<~  250-8BITMIME
<~  250-SIZE 5242880
<~  250 AUTH PLAIN LOGIN CRAM-MD5
 ~> AUTH PLAIN AGRlZXBhawBteWRlZXByb290
<~  235 PLAIN authentication successful for deepak - authldap/plain
 ~> MAIL FROM:<deepak@beingasysadmin.com>
<~  250 <deepak@beingasysadmin.com>, sender OK - how exciting to get mail from you!
 ~> RCPT TO:<deepakmdass88@gmail.com>
<~  250 <deepakmdass88@gmail.com>, recipient ok
 ~> DATA
<~  354 go ahead
 ~> Date: Wed, 01 May 2013 23:19:54 +0530
 ~> To: deepakmdass88@gmail.com
 ~> From: deepak@beingasysadmin.com
 ~> Subject: testing TLS + Auth in qpsmtpd
 ~> X-Mailer: swaks v20120320.0 jetmore.org/john/code/swaks/
 ~>
 ~> This is a test mailing
 ~>
 ~> .
<~  250 Queued! 1367430597 qp 9222 <>
 ~> QUIT
<~  221 beingasysadmin.com closing connection. Have a wonderful day.
=== Connection closed with remote host. 
Debian, DKIM, Qmail

DKIM signing in Qmail

DKIM and SPF have become the most commonly adopted methods for email validation. Even if we only want to use DMARC (Domain-based Message Authentication, Reporting & Conformance), we need to configure SPF and DKIM first, since DMARC acts as a layer above SPF and DKIM. DMARC allows the receiver's mail server to check whether an email is aligned properly as per the DMARC policy, and it queries the sender's DNS server for the DMARC action, i.e. whether to reject or quarantine mail if alignment fails. The action is published in a TXT record on the sender's DNS server. There is a good collection of DMARC training videos available on the MAAWG site; those videos give a clear idea of how DMARC works.

In this post I will explain how to make Qmail DKIM-sign outgoing mail. There is a qmail patch for this, but since I'm using qmail 1.03 with a custom patch, I was not able to apply the DKIM patch on top of it. The alternative is to use a wrapper around "qmail-remote": since qmail-remote is responsible for delivering remote mail, a wrapper around it lets us sign the email and then start the remote delivery. A few wrappers are mentioned on this site; I'm going to use this qmail-remote wrapper.

Initial Settings

First move the current "qmail-remote" binary to "qmail-remote.orig". Then download the wrapper, place it at /var/qmail/bin/qmail-remote and make it executable.

$ mv /var/qmail/bin/qmail-remote /var/qmail/bin/qmail-remote.orig

$ wget -O /var/qmail/bin/qmail-remote "http://www.memoryhole.net/qmail/qmail-remote.sh"

$ chmod 755 /var/qmail/bin/qmail-remote

This wrapper depends on two programs: 1) dktest, which comes with libdomainkeys, and 2) dkimsign.pl, a Perl script for signing the emails. Both must be available at the paths mentioned in the "qmail-remote" wrapper file.

Go through the "dkimsign.pl" script and install the Perl modules it mentions using cpan. There is no official Debian package for libdomainkeys, so we need to compile it from source.

Setting up dktest

Download the latest source code from the sourceforge link.

$ tar -xzf libdomainkeys-0.69.tar.gz

$ cd libdomainkeys-0.69

Edit the Makefile, add "-lresolv" to the end of the "LIBS" line, and run make.
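If you prefer not to edit the Makefile by hand, a blind append to the LIBS line works too:

$ sed -i '/^LIBS/s/$/ -lresolv/' Makefile
$ make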

$ install -m 644 libdomainkeys.a /usr/local/lib

$ install -m 644 domainkeys.h dktrace.h /usr/local/include

$ install -m 755 dknewkey /usr/bin

$ install -m 755 dktest /usr/local/bin

Generate Domain keys for the domains

Before we can sign an email, we must create at least one public/private key pair. I’m going to create a key pair for the domain “example.com”.

$ mkdir -p /etc/domainkeys/example.com

$ cd /etc/domainkeys/example.com

$ dknewkey default 1024 > default.pub

$ chown -R root:root /etc/domainkeys

$ chmod 640 /etc/domainkeys/example.com/default

$ chown root:qmail /etc/domainkeys/example.com/default

It is very important that the default file be readable only by root and the group that qmailr (the qmail-remote user) belongs to. Now add a TXT record to DNS for "default._domainkey.example.com" containing the quoted part of /etc/domainkeys/example.com/default.pub.
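The record's value follows the usual DomainKeys form, roughly "k=rsa; p=<base64 public key>". Once the record is live, dig can confirm it has propagated:

$ dig +short TXT default._domainkey.example.com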

Once everything is added, restart "qmail-send" and send a test mail to any non-local domain. If things go fine, we will see a line like the one below in the "qmail-send" log.

$ @40000000517f518b1e1eb75c delivery 1: success: ktest_---_/tmp/dk2.sign.Gajw948FX1A1L0hugfQ/in_dkimsignpl_---_/tmp/dk2.sign.Gajw948FX1A1L0hugfQ/r74.125.25.27_accepted_message./Remote_host_said:_250_2.0.0_OK_1367298812_ps11si19566038pab.170_-_gsmtp/

Once DKIM is working properly, add the SPF entries to our DNS, and we are ready to try out DMARC. DMARC is already in use by mail giants like Google, Yahoo, PayPal, LinkedIn etc.

Debian, Monitoring, Sensu

Sensu Admin – a GUI for Sensu API

In my previous posts, I've explained how to set up a Sensu server and how to set up checks and handlers. The default dashboard is very simple with limited options, but for those who want a full-fledged dashboard, there is a Rails project on GitHub, Sensu-Admin. So let's try setting it up.

First clone the repository from Github.

$ git clone https://github.com/sensu/sensu-admin.git

Now go to the sensu-admin folder and run bundle install to install all the dependency gems. Then go into the "config" folder, edit "database.yml" and fill in the database details. I'm going to use MySQL; below is my database config.

development:
   adapter: mysql2
   database: sensudb
   username: sensu
   password: secreto
   host: localhost
production:
   adapter: mysql2
   database: sensudb
   username: sensu
   password: secreto
   host: localhost

Now run rake db:migrate and then rake db:seed. The seed creates a user account named "admin@example.com" with password "secret".
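In command form (run from the sensu-admin checkout, prefixing RAILS_ENV=production if you're using the production config):

$ bundle exec rake db:migrate
$ bundle exec rake db:seed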

We can start the Rails app by running "rails s"; this starts the app using the thin webserver on port 3000. Access the dashboard at "http://server_ip:3000", log in as admin@example.com, then go to the "Account" tab and change the default user name and password. Now go through the tabs and check that the checks, clients, events etc. are displayed properly. This is a screenshot of the SensuAdmin dashboard.

Debian, Monitoring, Sensu

Sensu – Adding Checks and Handlers

In my previous post, I explained how to set up the Sensu server and client. Now I'm going to explain how to set up checks and handlers in Sensu. There is a very good collection of sensu-community-plugins.

Setting up Checks

On the Sensu Client Node,

First clone the plugins repository on the client node, install the "sensu-plugin" gem, and then copy the required plugins into the /etc/sensu/plugins/ folder.
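Roughly, that boils down to the following; the plugin path at the end is just a placeholder for whichever checks you actually want to run:

$ git clone https://github.com/sensu/sensu-community-plugins.git
$ gem install sensu-plugin
$ cp sensu-community-plugins/plugins/<category>/<check-script>.rb /etc/sensu/plugins/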

On the Sensu Server,

We need to define the check first. Create a JSON config file for the check in /etc/sensu/conf.d. The following is a sample check config:

{
  "checks": {
    "snmp_check": {
      "handlers": ["default"],
      "command": "/etc/sensu/plugins/check-snmp.rb -w 10 -c 20",
      "interval": 30,
      "subscribers": [ "snmp" ]
    }
  }
}

The above check will be applied to all clients subscribed to the "snmp" exchange. Based on the interval, the server publishes this check request, which reaches all the clients subscribed to the "snmp" exchange via an arbitrary queue. The client runs the command mentioned in the command field and then publishes the result back to the server through the result queue. The check-snmp.rb script is a small plugin written by me. If we check the sensu-server log, we can see the result coming from the client machine. Below is a similar log line from my sensu-server log.

{"timestamp":1366968018},"check":{"handlers":["default","mailer"],"command":"/etc/sensu/plugins/check-snmp.rb -w 1 -c 3","interval":100,"subscribers":["snmp"],"name":"snmp_check","issued":1366968407,"executed":1366968028,"output":"CheckSNMP WARNING: Warning state detected\n","status":1,"duration":0.526,"history":["0","0","1"]},"occurrences":1,"action":"create"},"handler":{"type":"pipe","command":"true","name":"default"}}

The above log line shows us the handlers enabled for this check, the executed command, the subscribers, the name of the check, the timestamp when the command was issued, the timestamp when the server received the result, the output of the check command, etc. If there is any error while executing the check command, we can see it popping up in the logs soon after this line in the server log.

Setting up Handlers

Sensu has a very good collection of handlers, available in the sensu-community-plugins repo on GitHub. For example, there is a handler called "show", in the debug section of the handlers, which displays a detailed debug report about the event as well as the Sensu server's settings. This is the output I got after applying the "show" handler in my server log. But it's not practical to keep checking the logs continuously, so there is another plugin called "mailer", which can send email alerts the way Nagios does.

So first get the "mailer" plugin files from the sensu-community-plugins repo on GitHub.

wget -O /etc/sensu/handlers/mailer.rb https://raw.github.com/sensu/sensu-community-plugins/master/handlers/notification/mailer.rb
wget -O /etc/sensu/conf.d/mailer.json https://raw.github.com/sensu/sensu-community-plugins/master/handlers/notification/mailer.json

Now edit mailer.json and change the settings to fit our environment. We need to define a new pipe handler for this new handler, so create the file /etc/sensu/conf.d/handler_mailer.json and add the lines below to it.

{
  "handlers": {
    "mailer": {
      "type": "pipe",
      "command": "/etc/sensu/handlers/mailer.rb"
    }
  }
}

Now go to one of the check config files where we want to apply this new "mailer" handler.

{
  "checks": {
    "snmp_check": {
      "handlers": ["default", "mailer"],
      "command": "/etc/sensu/plugins/check-snmp.rb -w 10 -c 20",
      "interval": 30,
      "subscribers": [ "snmp" ]
    }
  }
}

Now restart the sensu-server for the new changes to take effect. If everything goes fine, when Sensu detects a state change it will execute this mailer handler, and we will also see the lines below in the server log.

"action":"create"},"handler":{"type":"pipe","command":"/etc/sensu/handlers/mailer.rb","name":"mailer"

Sensu executes the mailer script, and if there is any problem we will see the corresponding error following the above line; otherwise we will receive the email alert at the address mentioned in the "mailer.json" file. But in my case I was getting an error when Sensu invoked the "mailer" handler.

{"timestamp":"2013-04-25T15:03:32.002132+0530","level":"info","message":"/etc/sensu/handlers/mailer.rb:28:in `handle': undefined method `[]' for nil:NilClass (NoMethodError)"}
{"timestamp":"2013-04-25T15:03:32.002308+0530","level":"info","message":"\tfrom /var/lib/gems/1.9.1/gems/sensu-plugin-0.1.7/lib/sensu-handler.rb:41:in `block in <class:Handler>'"}

After playing around for some time, I found that it was not parsing the options from the mailer.json file, so I manually added the SMTP and email settings directly in the mailer.rb file, and then it started working fine. I'm also writing a small script which uses the basic 'net/smtp' library to send out mails. There are many other cool handlers, like sending metrics to Graphite, Logstash or Graylog, and sending notifications to IRC, XMPP, Campfire etc. Compared to traditional monitoring tools, Sensu is an amazing tool: we can use any check script, whether it's Ruby, Perl or Bash, it doesn't matter. The one common complaint I've heard from other people is the lack of a proper dashboard like the traditional monitoring tools have. Though the Sensu dashboard is a simple one, I'm sure it will improve a lot in the future.

Since I'm a CLI junkie, I don't care much about the dashboard; apart from that, I have many good and interesting things to play with in Sensu. Cheers to portertech and Sonian for open sourcing such an amazing tool.
