
Monitoring Redis Using CollectD and ELK

Redis is an open-source, networked, in-memory key-value data store. It is used heavily everywhere, from web stacks to monitoring to message queues. Monitoring tools like Sensu already have some good scripts to monitor Redis. Last month, during PyCon 2014, Plivo open-sourced a new rate-limited queue called SHARQ, which is built on Redis. So apart from plain monitoring checks, we decided to keep a time series of what's happening in our Redis cluster. Since we already use the ELK stack heavily to visualize our infrastructure, we decided to go ahead with the same.

CollectD Redis Plugin

There is a cool collectd plugin for Redis. It pulls a variety of data from Redis, including memory used, commands processed, number of connected clients and slaves, number of blocked clients, number of keys stored per DB, uptime and changes since the last save. The installation is pretty simple and straightforward.

$ apt-get update && apt-get install collectd

$ git clone https://github.com/powdahound/redis-collectd-plugin.git /tmp/redis-collectd-plugin

Now place the redis_info.py file in the collectd folder and enable the Python plugin so that collectd can use this Python file. Below is our collectd conf.

Hostname    "<redis-server-fqdn>"
Interval 10
Timeout 4
Include "/etc/collectd/filters.conf"
Include "/etc/collectd/thresholds.conf"
LoadPlugin network
ReportStats true
LogLevel info

Include "/etc/collectd/redis.conf"      # This is the configuration for the Redis plugin
<Plugin network>
    Server "<logstash-fqdn>" "<logstash-collectd-port>"
</Plugin>

Now copy the Redis Python plugin and its conf file to the collectd folder.

$ mkdir /etc/collectd/plugin            # This is where we are going to place our custom plugins

$ cp /tmp/redis-collectd-plugin/redis_info.py /etc/collectd/plugin/

$ cp /tmp/redis-collectd-plugin/redis.conf /etc/collectd/

By default, the plugin folder in redis.conf is defined as "/opt/collectd/lib/collectd/plugins/python". Make sure to replace this with the location where we are copying the plugin file, in our case "/etc/collectd/plugin". Now let's restart the collectd daemon to enable the Redis plugin.
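For reference, here is a minimal sketch of what /etc/collectd/redis.conf can look like after the path change. It follows the example config bundled with the plugin; the Redis host and port below are assumptions and should match your own setup.

<LoadPlugin python>
    Globals true
</LoadPlugin>

<Plugin python>
    ModulePath "/etc/collectd/plugin"      # where we copied redis_info.py
    Import "redis_info"

    <Module redis_info>
        Host "localhost"       # assumed Redis host
        Port 6379              # assumed Redis port
        Verbose false
    </Module>
</Plugin>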

$ /etc/init.d/collectd stop

$ /etc/init.d/collectd start

In my previous blog, I've mentioned how to enable and use the collectd input plugin in Logstash and how to use Kibana to plot the data coming from collectd (a minimal input block is also shown after the list below for reference). These are the data points we are receiving from collectd on Logstash:

  1) type_instance: blocked_clients
  2) type_instance: evicted_keys
  3) type_instance: connected_slaves
  4) type_instance: commands_processed
  5) type_instance: connected_clients
  6) type_instance: used_memory 
  7) type_instance: <dbname>-keys
  8) type_instance: changes_since_last_save
  9) type_instance: uptime_in_seconds
10) type_instance: connections_received
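
For reference, the collectd input block on the Logstash side is roughly as simple as this; the port is an assumption, and collectd's network plugin defaults to 25826 if nothing else is configured.

input {
  collectd {
    port => "25826"
  }
}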

Now we need to visualize these via Kibana. Let's create some Elasticsearch queries so that we can visualize them directly. Below are some sample queries created in the Kibana UI.

1) type_instance: "commands_processed" AND host: "<redis-host-fqdn>"
2) type_instance: "used_memory" AND host: "<redis-host-fqdn>"
3) type_instance: "connections_received" AND host: "<redis-host-fqdn>"
4) type_instance: "<dbname>-keys" AND host: "<redis-host-fqdn>"

Now that we have some sample queries, let's visualize them.

Now create histograms in the same way by changing the selected queries.


Extending ELK Stack to VOIP Infrastructure

Being a DevOps guy, I always love metrics. Visualized metrics give a good picture of what's happening in our live battle stations. There are now quite a lot of open-source tools for monitoring and visualization. It's been more than a year since I started using Logstash, and it has never let me down. Elasticsearch-Logstash-Kibana (ELK) is a killer combination. Though I started with Elasticsearch + Logstash as a log analyzer, StatsD and Graphite later took it to the next level. When we have a simple infrastructure, it's easy to monitor. But when the infra starts scaling, it becomes quite difficult to keep track of all the events happening inside each node. Service checks can help, but they have their limits. I have faced a lot of scenarios where things break while the service checks stay green. In such scenarios, logs are the only hope; they have all these events captured.

At Plivo, we manage a variety of servers: SIP, media, proxy, web servers, DBs, etc. Being a fully cloud-based platform, I really wanted a system that keeps track of all the live events and the status of what's really happening inside our infra. So my plan was to collect two kinds of stats: 1) server events, and 2) application events.

Collectd and Logstash

Collectd is a daemon that collects system performance statistics periodically. Since we have a lot of servers handling realtime media, this is a very critical component for us. We need to ensure that the servers are not getting overloaded and that there is no latency in the network. I've been using Logstash heavily for stashing all my logs, and there is a stable input plugin for collectd to send all the system metrics to Logstash.

First we need to enable the network plugin, and then mention our Logstash server IP and port so that collectd can start injecting metrics. Below is a sample collectd configuration.

Hostname    "test.plivo.com"
Interval 10
Timeout 4
Include "/etc/collectd/filters.conf"
Include "/etc/collectd/thresholds.conf"
ReportStats true
    LogLevel info
LoadPlugin interface
LoadPlugin load
LoadPlugin memory
LoadPlugin network
<Plugin interface>
    Interface "eth0"
    IgnoreSelected false
</Plugin>
<Plugin network>
    Server "{logstash_server_ip}" "logstash_server_port"    # if no port number is mentioned, it will take the default port number (25826)
</Plugin>

Now, on the Logstash server, we need to add the collectd input plugin to the input section of the Logstash config file.

input {
  collectd {
    port => "5555"    # default port is 25826
  }
}

Now we are set. Based on the plugins enabled in the collectd config file, collectd will start sending the metrics to Logstash at the interval mentioned in the config (the default is 10s). In my case, I wanted load, CPU usage, memory usage, bandwidth (TX and RX), etc. There are default plugins for all of these metrics, which we can simply enable in the config file. We also had some custom plugins to collect custom metrics; writing a custom plugin for collectd is pretty easy, as the sketch below shows.
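As an illustration only (this is not one of our actual plugins), a minimal collectd Python plugin looks something like the sketch below; the plugin name and the metric it reports are made up for the example.

import collectd

def read_callback():
    # Hypothetical metric: depth of an application queue file.
    try:
        with open('/var/run/myapp/queue') as f:
            queue_depth = sum(1 for _ in f)
    except IOError:
        queue_depth = 0

    # Dispatch the value so collectd can forward it via the network plugin.
    val = collectd.Values(plugin='myapp', type='gauge', type_instance='queue_depth')
    val.dispatch(values=[queue_depth])

collectd.register_read(read_callback)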

Now, using Logstash's Elasticsearch output plugin, we can store these metrics in Elasticsearch, and this is where Kibana comes in: we can start visualizing the metrics via Kibana. We need to create a custom Lucene query, and once we have the query, we can create a custom histogram for each of them. Below are some sample Lucene queries that we can use with Kibana, followed by a sketch of the output configuration.

For Load -> collectd_type:"load" AND host:"test.plivo.com"
For Network usage -> collectd_type:"if_octets" AND host:"test.plivo.com"
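
For completeness, the Elasticsearch output on the Logstash side can be as simple as the sketch below; option names vary between Logstash versions (newer releases use hosts instead of host), so treat this as an assumption to adapt.

output {
  elasticsearch {
    host => "localhost"    # Elasticsearch node (assumed)
  }
}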

Below is a screenshot of the histograms for load and network (TX and RX).

Log Events

Next up is collecting events from the application logs. We use the SIP protocol for all our VOIP sessions, so all our SIP servers are very critical for us. SIP is pretty similar to HTTP: the response codes closely mirror HTTP responses, i.e. 1xx, 2xx, 3xx, 4xx, 5xx, 6xx. So I wrote some custom grok patterns to keep track of all of these responses and store them in Elasticsearch.
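Our production patterns are tied to our exact log format, but a grok filter along these lines illustrates the idea; the field names and the pattern itself are assumptions, not the real config.

filter {
  grok {
    type => "sip"
    pattern => "SIP/2.0 %{NUMBER:sip_response_code} %{GREEDYDATA:sip_response_reason}"
  }
}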

The second stat I was interested in was our SIP registrar server. We provide SIP endpoints to our customers so that they can use them with SIP/soft phones, so I was more interested in stats like the number of registrations per second and auth error rates. Plus, using Elasticsearch's map facets, I can create a BetterMap. In my previous blog posts I've mentioned how to create these bettermaps using Kibana and Elasticsearch. The bettermap screenshot below shows the SIP endpoint registrations from various locations in the last 2 hours.

Now, using Kibana, we can start visualizing all this data. Below is a sample dashboard that I've created using Kibana.

The ELK stack proved to be an amazing combination. We are currently injecting 3 million events every day, and Elasticsearch is blazingly fast at indexing all of them.


Near RealTime Dashboard with Kibana and Elasticsearch

Being in DevOps always means multitasking: regular customer queries, monitoring, troubleshooting, and so on. And of course, when things break, it becomes hardcore multitasking. Especially with a rapidly scaling infrastructure, we really need to understand what's happening inside it. Yes, we have many new-generation cloud monitoring tools like Sensu, but what if we had a near real-time system that could tell us each and every event happening in our infrastructure? Logs are the best place to keep track of events, even ones the monitoring tools have missed. We have plenty of log aggregation tools, like Logstash, Splunk and Apache Kafka, and for log-based event collection the common choice is Logstash -> StatsD -> Graphite, with Elasticsearch for indexing.

My requirement was pretty straightforward: record the events, aggregate them, and keep track of them over time. Kibana uses Elasticsearch facets for aggregating the search query results; facets provide aggregated data based on a search query. So as a first task, I decided to visualize the locations of users who register their SIP endpoints on our SIP registrar server. Kibana gives us a good interface for a 2D heat map as well as a newer option called BetterMap. Bettermap uses geographic coordinates to create clusters of markers on a map and shades them orange, yellow and green depending on the density of the cluster. From the logs, I extracted the register events and used custom regex patterns in Logstash to pull out details like the source IP and usernames. Using Logstash's GeoIP filter, the geo location of each IP can be identified. For the BetterMap we need coordinates in GeoJSON format, which is a [longitude, latitude] array. From the geo locations identified by the GeoIP filter, we can create this GeoJSON for each event we receive. Below is a sample snippet that I've used in logstash.conf for creating the GeoJSON in Logstash.

if [source_ip]  {
    geoip {
      type => "kamailio-registers"
      source => "source_ip"
      target => "geoip"
      add_field => ["[geoip][coordinates]","%{[geoip][longitude]}"]
      add_field => ["[geoip][coordinates]","%{[geoip][latitude]}"]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
  }

The above filter will create a GeoJSON array in "geoip.coordinates". This array can be used for creating the BetterMap in Kibana. Below are the settings for creating a BetterMap panel in the Kibana dashboard: while adding a new panel, select "bettermap" as the panel type, and the coordinate field should be the one which contains the GeoJSON data. Make sure that the data is in the format [longitude, latitude], i.e. longitude first, followed by latitude.

Moving ahead, I decided to collect the events happening on our various other servers. We were one of the earliest companies to start using SIP (Session Initiation Protocol). SIP employs design elements similar to the HTTP request/response transaction model. So, similar to web traffic, I decided to collect events related to 4XX, 5XX and 6XX error responses, as they are very important to us. Once the logs are shipped to Logstash, another custom grok pattern extracts the error code and error response, including the server which returned it. This data can be used for future analysis as well, so I decided to store it in Elasticsearch. Now we have the real-time event data stored, but how do we visualize it? Since I don't need to perform much mathematical analysis on the data, I decided to drop Graphite. Kibana has a wonderful GUI for visualizing the data, so I decided to go ahead with Kibana. One option is the "histogram" panel type: a histogram can visualize the data as a regular bar graph or as an area graph. There is another panel type called "terms" which can be used to display the aggregated events as a pie chart, bar chart or table. And below is what I achieved with Kibana.

This is just an initial setup; I'm going to add more events to it. As of now, Kibana + Elasticsearch proves to be a promising combination for displaying all the near real-time events happening in my infrastructure.


Event Monitoring Using Logstash + StatsD + Riemann

Being an ops guy, I love logs a lot. Logs have a lot of important events recorded in them. Though a lot of people rely on monitoring tools, there are plenty of scenarios where we still can't rely on monitoring alone. In such scenarios, logs are the best source for identifying those events in near real time. A common scenario is web operations, where we need to count the various 4xx, 5xx and auth errors experienced by users. I had a similar requirement where I needed to identify the 4xx, 5xx and 6xx errors and other similar failures on various SIP servers. But apart from just visualizing these errors, I also wanted a notification system that can notify me when a value crosses a threshold.

Logstash and StatsD are a perfect combination for aggregating events from logs. StatsD has a Graphite backend, to which it sends the aggregated metric values for visualization. But when we have a large number of graphs, and of course being a multitasking ops guy, it's not possible to sit and watch all of them. So we need a notification system that alerts us when things start breaking. Here comes Riemann. Riemann aggregates events from your servers and applications with a powerful stream processing language, and it is a pretty lightweight, easy-to-configure monitoring framework. Logstash sends the filtered events from the logs to its StatsD output plugin; based on the flushInterval, StatsD iterates through the received events and sends the aggregated metric values to Graphite. There is also a Riemann output plugin for Logstash, but we need to pass a state/metric to that plugin. In my case, Logstash only filters the events out of the logs, so I would need to convert these events into time-based metric values myself. Since StatsD already converts these events into time-series metrics, I decided to write a small backend for StatsD that can send the aggregated metrics to Riemann.
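On the Logstash side, feeding the filtered events into StatsD looks roughly like the sketch below; the namespace, sender and counter name are assumptions for illustration rather than the exact production config.

output {
  statsd {
    host => "localhost"                            # StatsD host (assumed)
    port => 8125
    namespace => "sip"                             # metric namespace (assumed)
    sender => "%{host}"
    increment => "response.%{sip_response_code}"   # counter per SIP response code (assumed field)
  }
}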

The StatsD backend basically requires two main functions. One is "flush_stats", which gets invoked once the flush interval is reached; it iterates over the received metrics and passes the aggregated metrics to another function called "post_stats", which sends them to the corresponding application. In our case, we need to send the metrics to Riemann. There is a riemann-node client which we can utilize here for sending the metrics to the Riemann server. Below are the contents of the "flush_stats" and "post_stats" functions. Currently I've added support only for counters; I'll be adding support for the other metric types (gauges and timers) later.

flush_stats function
--------------------

var flush_stats = function riemann_flush(ts, metrics) {
  var key;

  var counters = metrics.counters;
  var gauges = metrics.gauges;
  var timers = metrics.timers;
  var pctThreshold = metrics.pctThreshold;

  for (key in counters) {
    var value = counters[key];
    var valuePerSecond = value / (flushInterval / 1000); // calculate "per second" rate (not sent for now)

    // Use the StatsD key as the Riemann service name and send the raw counter value.
    var statString = value;
    var service_name = key;
    var time_stamp = ts;
    post_stats(statString, service_name, time_stamp);
  }
};


post_stats function
-------------------

var post_stats = function riemann_post_metrics(statString, service_name, time_stamp) {
  riemannStats.last_flush = Math.round(new Date().getTime() / 1000);

  // Send a single event to the Riemann server via the riemann-node client.
  client.send(client.Event({
    service: service_name,
    metric:  statString,
    time: time_stamp
  }));
};

Here I'm not going to send the per-second metrics. I'm using the default 10-second flushInterval, so every 10 seconds StatsD will send the incremented metrics to Riemann. The namespace, sender, etc. are defined in the Logstash conf itself. The full plugin file is available here.

To use this Riemann backend, first copy the file into the backends folder of the StatsD repo, then enable the plugin in the StatsD config file. Below is a sample config file which uses both the Graphite and Riemann backends.

{
  riemannPort: 5555
, riemannHost: "localhost"
, graphitePort: 2003
, graphiteHost: "localhost"
, port: 8125
, backends: [ "./backends/riemann", "./backends/graphite" ]
}

So now StatsD will send the incremented and per-second metrics to Graphite, and the Riemann backend will send the incremented metric to the Riemann server. Now we can define the metric threshold and the notification method in the Riemann config file. Below are my Riemann metric threshold and notification settings.

(streams
  (where (>= metric 10)
    (where (service #"SIP")
      (email "deepakmdass88@gmail.com"))))

So whenever the received metric value goes beyond 10, Riemann will notify me by email. I've done some dry testing with this setup, and so far it has never let me down. There are still some tweaks to be done, but it really suits my requirement. Being an ops guy, my primary focus is to detect outages at a very early stage to minimize the impact, and I hope this will be an added layer of defence for the same.


Real Time Web-Monitoring Using Lumberjack-Logstash-Statsd-Graphite

For the last few days I have been playing around with two of my favourite tools, Logstash and StatsD. Logstash, StatsD and Graphite together make a killer combination, so I decided to test this combination along with Lumberjack for real-time monitoring. I'm going to use Lumberjack as the log shipper from the web server; Logstash will then stash the logs properly and, using the StatsD output plugin, ship the metrics to Graphite. In my previous blog, I've explained how to use Lumberjack with Logstash. Lumberjack will be watching my test web server's access logs.

By default I'm using the combined Apache log format, but it does not include the response time for each request or the total response time. So we need to modify the LogFormat in order to add these two fields. Below is the LogFormat which I'm using for my test setup.

LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\" %D %>D" combined

Once the LogFormat is modified, restart the Apache service for the change to take effect.

Setting up Logstash Server

First, download the latest Logstash jar file from the Logstash site. Now we need to create a Logstash conf file. There is a grok pattern available for Apache logs called "COMBINEDAPACHELOG", but since we have added the two new fields for the response time, we need to extend the grok pattern as well. Below is the pattern which is going to be used with Logstash.

pattern => "%{COMBINEDAPACHELOG} %{NUMBER:resptime} %{NUMBER:resptimefull}"

So the Logstash conf file will look like this,

input {
  lumberjack {
    type => "apache-access"
    port => 4444
    ssl_certificate => "/etc/ssl/logstash.pub"
    ssl_key => "/etc/ssl/logstash.key"
  }
}

filter {
  grok {
    type => "apache-access"
    pattern => "%{COMBINEDAPACHELOG} %{NUMBER:resptime} %{NUMBER:resptimefull}"
  }
}

output {
  stdout {
    debug => true
  }
  statsd {
    type => "apache-access"
    host => "localhost"
    port => 8125
    debug => true
    timing => [ "apache.servetime", "%{resptimefull}" ]
    increment => "apache.response.%{response}"
  }
}

Setting up StatsD

Now we can start setting up the StatsD daemon. Recent Ubuntu releases ship with reasonably new versions of NodeJS and NPM, so we can install them using APT/Aptitude.

$ apt-get install nodejs npm

Now clone the StatsD github repository to the local machine.

$ git clone git://github.com/etsy/statsd.git

Now create a local config file “localConfig.js” with the below contents.

{
graphitePort: 2003
, graphiteHost: "127.0.0.1"
, port: 8125
}

Now we can start the StatsD daemon.

$ node /opt/statsd/stats.js /opt/statsd/localConfig.js

The above command will start StatsD in the foreground. Now we can go ahead with setting up Graphite.

Setting up Graphite

First, let’s install the basic python dependencies.

$ apt-get install python-software-properties memcached python-dev python-pip sqlite3 libcairo2 libcairo2-dev python-cairo pkg-config

Then, we can start installing Carbon and Graphite dependencies.

cat >> /tmp/graphite_reqs.txt << EOF
django==1.3
python-memcached
django-tagging
twisted
whisper==0.9.9
carbon==0.9.9
graphite-web==0.9.9
EOF

$  pip install -r /tmp/graphite_reqs.txt

Now we can configure Carbon.

$ cd /opt/graphite/conf/

$ cp carbon.conf.example carbon.conf

Now we need to create a storage schema.

cat >> /tmp/storage-schemas.conf << EOF
# Schema definitions for Whisper files. Entries are scanned in order,
# and first match wins. This file is scanned for changes every 60 seconds.
# [name]
# pattern = regex
# retentions = timePerPoint:timeToStore, timePerPoint:timeToStore
[stats]
priority = 110
pattern = ^stats\..*
retentions = 10s:6h,1m:7d,10m:1y
EOF


$ cp /tmp/storage-schemas.conf /opt/graphite/conf/storage-schemas.conf

Also we need to create a log directory for graphite.

$ mkdir -p /opt/graphite/storage/log/webapp

Now we need to copy over the local settings file and initialize the database.

$ cd /opt/graphite/webapp/graphite/

$ cp local_settings.py.example local_settings.py

$ python manage.py syncdb

Fill in the necessary details including the super user details while initializing the database. Once the database is initialized we can start the carbon cache and graphite webgui.

$ /opt/graphite/bin/carbon-cache.py start

$ /opt/graphite/bin/run-graphite-devel-server.py /opt/graphite

Now we can access the dashboard using the URL "http://ip-address:8080". Once we have started the carbon cache, we can start the Logstash server.

$ java -jar logstash-1.1.13-flatjar.jar agent -f logstash.conf -v

Once Logstash has loaded all the plugins successfully, we can start shipping logs from the test web server using Lumberjack. Since I've enabled the stdout plugin, I can see the output coming from the Logstash server, and we can start accessing the real-time graphs from the Graphite GUI. There are several other alternatives to the Graphite GUI, like Graphene, Graphiti, Graphitus and GDash. Anyway, Logstash-StatsD-Graphite proves to be a wonderful combination. Sorry that I could not upload any screenshots for now; I will upload them soon.


Lumberjack – a Light Weight Log Shipper for Logstash

Logstash is one of the coolest projects that I always wanted to play around with. Since I'm a sysadmin, I'm forced to handle multiple apps, each of which logs in a different format. The weirdest part is the timestamps, where most apps use their own time formats. Logstash helps us solve such situations: we can remodel the timestamps into a standard format, we can use the predefined filters for filtering the logs, and we can even create our own filters using regex. All the documentation is available on the Logstash website. Logstash mainly has 3 parts: 1) input -> from which the logs are shipped to Logstash, 2) filter -> for filtering the incoming logs to suit our needs, 3) output -> for storing or relaying the filtered logs to various applications.

Lumberjack is one such input plugin designed for Logstash. Though the plugin is still in beta, I decided to give it a try. We can also use Logstash itself for shipping logs to a centralized Logstash server, but the JVM makes it difficult to run on many of my resource-constrained machines. Lumberjack claims to be a lightweight log shipper which uses SSL, and we can add custom fields for each line of log that we ship.

Setting up Logstash Server

Download the latest Logstash jar file from the Logstash website and create a configuration file for the Logstash instance. In the config file, we have to enable the lumberjack input plugin. Lumberjack uses an SSL CA to verify the server, so we need to generate a certificate for the Logstash server. We can use the command below to generate the SSL certificate and key.

$ openssl req -x509 -newkey rsa:2048 -keyout /etc/ssl/logstash.key -out /etc/ssl/logstash.pub -nodes -days 3650

Below is the sample Logstash conf file which I used for stashing logs from Socklog.

input {
  lumberjack {
    type => "socklog"
    port => 4545
    ssl_certificate => "/etc/ssl/logstash.pub"
    ssl_key => "/etc/ssl/logstash.key"
  }
}

filter {
  grok {
    type => "socklog"
    pattern => "%{DATA:logfacility}: %{SYSLOGTIMESTAMP:timestamp} %{DATA:program}: *"
  }
  mutate {
    replace => [ "@message", "%{mess}" ]
  }
  date {
    type => "socklog"
    match => [ "timestamp", "MMM dd HH:mm:ss" ]
  }
}

output {
  stdout {
    debug => true
  }
}

Now we can start Logstash using the above config.

$ java -jar logstash-1.1.13-flatjar.jar agent -f logstash.conf -v

Once Logstash has started successfully, we can use netstat to check whether it is listening on port 4545. I'm currently running Logstash in the foreground; below is the log output from Logstash.

Starting lumberjack input listener {:address=>"0.0.0.0:4545", :level=>:info}
Input registered {:plugin=><LogStash::Inputs::Lumberjack type=>"socklog", ssl_certificate=>"/etc/ssl/logstash.pub", ssl_key=>"/etc/ssl/logstash.key", charset=>"UTF-8", host=>"0.0.0.0">, :level=>:info}
Match data {:match=>{"@message"=>["%{DATA:logfacility}: %{SYSLOGTIMESTAMP:timestamp} %{DATA:program}: *"]}, :level=>:info}
Grok compile {:field=>"@message", :patterns=>["%{DATA:logfacility}: %{SYSLOGTIMESTAMP:timestamp} %{DATA:program}: *"], :level=>:info}
Output registered {:plugin=><LogStash::Outputs::Stdout debug_format=>"ruby", message=>"%{@timestamp} %{@source}: %{@message}">, :level=>:info}
All plugins are started and registered. {:level=>:info}

Setting up Lumberjack agent

On the machine from which we are going to ship the logs, clone the Lumberjack GitHub repo.

$ git clone https://github.com/jordansissel/lumberjack.git

Install the fpm ruby gem, which is required to build the lumberjack package.

$ gem install fpm

$ cd lumberjack && make

$ make deb          # builds a Debian package of Lumberjack

$ dpkg -i lumberjack_0.0.30_amd64.deb   # installs everything under /opt/lumberjack

Now copy the SSL certificate which we generated on the Logstash server to the Lumberjack machine. Once the SSL certificate has been copied, we can start the Lumberjack agent.

$ /opt/lumberjack/bin/lumberjack --ssl-ca-path ./ssl/logstash.pub --host logstash.test.com --port 4545 /var/log/socklog/main/current

Below is the log output from the lumberjack.

2013-06-25T15:04:32.798+0530 Watching 1 files, setting open file limit to 103
2013-06-25T15:04:32.798+0530 Watching 1 files, setting memory usage limit to 1048576 bytes
2013-06-25T15:04:32.878+0530 Connecting to logstash.test.com(192.168.19.19):4545
2013-06-25T15:04:33.186+0530 slow operation (0.307 seconds): connect to 192.168.19.19:4545
2013-06-25T15:04:33.186+0530 Connected successfully to logstash.test.com(192.168.19.19):4545
2013-06-25T15:04:34.653+0530 Declaring window size of 4096
2013-06-25T15:04:36.734+0530 flushing since nothing came in over zmq

Now we will start seeing output from Logstash on screen, since we are using the 'stdout' output plugin. A very good, detailed write-up about Lumberjack and Logstash by Brian Altenhofel can be found here. He gave a talk on this at DrupalCon 2013, Portland, and the video of the talk is available here.


Using Riak with Logstash

For the last few days I have been playing around with Riak, a distributed database. It's very simple to configure and use, and of course it supports MapReduce. I wanted to try out the MapReduce, and since Logstash has a plugin to write data into Riak, I decided to use it with Logstash on an Ubuntu 12.04 machine.

Configuring Riak

Installing Riak is very simple; it has only a few dependencies.

apt-get install libssl0.9.8 erlang

Once the dependencies are installed, we have to download and install the Riak deb package from its website.

wget http://downloads.basho.com.s3-website-us-east-1.amazonaws.com/riak/CURRENT/ubuntu/lucid/riak_1.2.1-1_i386.deb

dpkg -i riak_1.2.1-1_i386.deb

Once Riak is installed, go to /etc/riak, where the config files are available. We can change the name of the Riak node by editing the vm.args file. By default Riak will listen on 127.0.0.1, but we can change this by editing the app.config file. To enable HTTPS, we need to uncomment the https section in the Riak core config and mention the paths of the server key and certificate. Riak comes with a built-in admin console, which currently has very minimal functionality: it shows the status of the Riak nodes as well as the members in the Riak ring. To enable it, open app.config, go to the riak_control section and change {enabled, false} to {enabled, true}. The username and password can be mentioned in the userlist option.
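
As a rough sketch (the exact layout differs between Riak versions), the riak_control section in app.config ends up looking something like this; the username and password are placeholders.

{riak_control, [
    %% enable the admin console
    {enabled, true},
    %% simple userlist-based authentication
    {auth, userlist},
    {userlist, [{"admin", "password"}]}
]}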

If we have multiple machines, we can create a Riak cluster using the riak-admin tool. Currently I have only one machine with Riak installed.
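
For reference only (I have a single node here), joining additional nodes on Riak 1.2+ goes roughly like this; the node name is a made-up example.

riak-admin cluster join riak@node1.example.com
riak-admin cluster plan
riak-admin cluster commit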

In Riak, data is stored in "buckets". A bucket is a container and keyspace for data stored in Riak, with a set of common properties for its contents (the number of replicas, or n_val, for instance). Buckets are accessed at the top of the URL hierarchy under "riak", e.g. /riak/bucket.
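
For example, storing and fetching a value over the HTTP API can be done with curl; the bucket and key names below are placeholders.

curl -X PUT http://127.0.0.1:8098/riak/mybucket/mykey -H 'Content-Type: text/plain' -d 'hello riak'

curl http://127.0.0.1:8098/riak/mybucket/mykey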

Configuring Logstash

Now we have a Riak machine listening on port 8098. Next we need to configure Logstash to send data to Riak. This is very simple because Logstash has an output plugin which can write directly to Riak.

In the output section of the Logstash config file, add the riak output plugin. It should look like this:

riak {
  bucket => "bucketname"
  type => "typename"
  nodes => [ "riakserverip", "8098", "riakserverip", "8098" ]
}

In the nodes section we have to mention the Riak node IPs. Since I have only one Riak node, I'm mentioning the same IP twice.

That's it. Now when we start Logstash, it will begin writing data into the bucket we mentioned in the conf file.

There is one good GUI for Riak called "rekon". Just grab the source code from GitHub, edit install.sh to point at the IP which Riak listens on, and execute it. Now we can access the GUI using the URL below.

http://riakserverip:8098/riak/rekon/go

Using this we can see the buckets inside Riak as well as the corresponding key-value pairs.

Now let's test the MapReduce function.

This is one of the main features of Riak. For example, I'm going to write a MapReduce function that will display all the keys in my bucket whose values contain the keyword "mylinux", which is the hostname of my machine. The function returns each key along with the number of occurrences. Below is a simple MapReduce query.

{
 "inputs":"mybucket",
 "query":[{"map":{"language":"javascript",
 "source":"function(riakObject) {
 var m = riakObject.values[0].data.match(\"mylinux\");
 return [[riakObject.key, (m ? m.length : 0 )]];
 }"}}]
}

To execute the MapReduce function, run the following command:

curl -X POST http://192.168.1.173:8098/mapred -H 'Content-Type: application/json' -d '{
"inputs":"mybucket",
"query":[{"map":{"language":"javascript",
"source":"function(riakObject) {
var m = riakObject.values[0].data.match(\"mylinux\");
return [[riakObject.key, (m ? m.length : 0 )]];
}"}}]}'

The above command will return all the keys whose values contain the keyword "mylinux", along with the number of occurrences.
