Debian, virtualization

Virtualization with LXC

LXC, or Linux Containers, is an operating-system-level virtualization technology with which we can run multiple isolated Linux systems on a single host. LXC relies on the cgroups functionality of the Linux kernel. Cgroups (control groups) is a kernel feature to limit, account for, and isolate the resource usage (CPU, memory, disk I/O, etc.) of process groups. LXC does not provide a virtual machine; rather, it provides a virtual environment that has its own process and network space. It is similar to a chroot, but offers much more isolation.

First, let’s install the necessary packages.

$ apt-get install lxc bridge-utils

By default in Ubuntu, installing the lxc package creates a default bridge network called "lxcbr0". If we don't want to use this bridge, we can disable it by editing the /etc/default/lxc file. We can also create bridge networks using "brctl", or define them directly in the interfaces file. A few templates get shipped with the lxc package and can be found in /usr/share/lxc/templates. I'm going to use the default template to create the containers. We can also use OpenVZ templates to create containers.
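
For example, on Ubuntu the bridge can be turned off in /etc/default/lxc; a minimal sketch, assuming the variable name used by that release:

    USE_LXC_BRIDGE="false"    # do not auto-create lxcbr0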

I'm going to keep all my containers' files under a default path, say "/lxc".

$ mkdir /lxc

Now we can create the first debian container.

$ mkdir /lxc/vm0    # where vm0 is the name of my container.

$ /usr/share/lxc/templates/lxc-debian -p /lxc/vm0

This will install and build the necessary files for the container. If we go inside the vm0 folder, we can see two things: the config file and the rootfs folder of the container. This rootfs folder will be the virtual environment for our container. Now we can edit the config file to set the default network options.

lxc.network.ipv4 = 192.168.0.123/24 # IP address in CIDR notation
lxc.network.hwaddr = 4a:59:43:49:79:bf # MAC address
lxc.network.link = br0 # name of the bridge interface
lxc.network.type = veth 
lxc.network.veth.pair = veth_vm0

Now we need to add the IP to the container's interfaces file as well. For that, edit the /lxc/vm0/rootfs/etc/network/interfaces file and set the IP address in it for the interface eth0.
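
A minimal sketch of that file, reusing the 192.168.0.123/24 address from the config above and the 192.168.0.1 gateway used in the bridge setup below:

    auto eth0
    iface eth0 inet static
        address 192.168.0.123
        netmask 255.255.255.0
        gateway 192.168.0.1
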
We can create a bridge interface and bind it to the physical interface of the host, so that the container will be in the same network as the host. If a bridge interface already exists (for example, installing libvirt creates a bridge called "virbr0", and on Ubuntu the lxc package installation creates "lxcbr0"), we can also use those with LXC. Or we can define a bridge interface in the "interfaces" file. Below is a configuration for creating a bridge interface.

    auto eth0
    iface eth0 inet manual

    # Bridge setup
    auto br0
    iface br0 inet static
        bridge_ports eth0
        address 192.168.0.2
        broadcast 192.168.0.255
        netmask 255.255.255.0
        gateway 192.168.0.1

If a separate network has to be given to the containers, then we can go for NAT. The configuration below in the interfaces file enables NAT.

auto br0
iface br0 inet static
    address 172.16.0.1
    netmask 255.255.255.0
    bridge_stp off
    bridge_maxwait 5
    pre-up  /usr/sbin/brctl addbr br0
    post-up /usr/sbin/brctl setfd br0 0
    post-up /sbin/iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward

Once we are ready with the configurations, we can start our container using the below command.

$ lxc-start -n vm0 -f /lxc/vm0/config
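
To run the container in the background and get a shell on it later, the standard LXC tools can be used; a quick sketch (flags may vary slightly between LXC versions):

$ lxc-start -n vm0 -f /lxc/vm0/config -d    # start detached
$ lxc-ls                                    # list containers
$ lxc-console -n vm0                        # attach to the container's console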

In the NAT scenario, the LXC machines are in the "172.16" network, while the host lies in the "192.168.0" network. There are some good projects that work around LXC; Vagabond is an example. Vagabond is a tool integrated with Chef to build local nodes easily and, most importantly, quickly.

Debian, puppet, virtualization

Using Foreman With Puppet and Libvirt

The Foreman is one of the best provisioning tools available. It is fully open source, and it natively supports Puppet for provisioning nodes. Foreman can also talk to libvirt, which makes it easy to create a VM and provision it along the way. In this blog I will explain how to install Foreman from source, how to integrate it with Puppet to receive logs and facts, and how to make Foreman use libvirt for building VMs.

Setting up Foreman

First we will install the basic dependencies. Since I'm using the git repository of Foreman for installation, the git package has to be installed. We also need a database for Foreman; I'm going to use MySQL for that.

$ apt-get install git mysql-server ruby-mysql libmysql-ruby1.9.1 libmysqlclient-dev libvirt-dev 

Now clone the repository from GitHub. The newer builds work with Puppet 3.0.

$ git clone https://github.com/theforeman/foreman.git -b develop

Ensure that Ruby and Bundler are installed on the machine.
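
If they are not already present, something like the following should work on Debian/Ubuntu (the exact package names are an assumption and vary by release):

$ apt-get install ruby1.9.1 ruby1.9.1-dev
$ gem1.9.1 install bundler    # or simply "gem install bundler"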

$ bundle install --without postgresql sqlite

Now we can start configuring Foreman. Copy the sample config files.

$ cp config/settings.yaml.example config/settings.yaml
$ cp config/database.yml.example config/database.yml

Now create a database for Foreman and add the database details in database.yml. Also add the Puppet master details in settings.yaml. Since I'm going to run Foreman in production mode, I've commented out the development and test environment settings in database.yml. Once the config files are set, we can go ahead with the db migration.
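
As a rough sketch, the database and a matching production section in database.yml could look like this (the database name, user, and password are placeholders):

$ mysql -u root -p -e "CREATE DATABASE foreman CHARACTER SET utf8; GRANT ALL ON foreman.* TO 'foreman'@'localhost' IDENTIFIED BY 'secret';"

    production:
      adapter: mysql2        # or mysql, depending on the gem in the bundle
      database: foreman
      username: foreman
      password: secret
      host: localhost
      encoding: utf8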

$ RAILS_ENV=production bundle exec rake db:migrate

Now we can check whether the server is fine or not by using the following command. It will start Foreman with the builtin web server, and we can access the WebUI at http://foreman_ip:3000 in the browser. By default there is no authentication set for the WebUI, but LDAP authentication can be configured; details are available in Foreman's documentation.

$ RAILS_ENV=production rails server

Once the Foreman server is working fine, we can configure Puppet to send its logs and facts to Foreman. On the Puppet clients, add report = true in the puppet.conf file. Then, on the Puppet master, we need to do a few things.

Copy the Foreman report file to Puppet's report library.

In my case that is /usr/lib/ruby/vendor_ruby/puppet/reports/, and the file should be renamed to foreman.rb. Now add reports = log, foreman in the puppet.conf file. Also set the Foreman URL in the foreman.rb file.

foreman_url='http://foreman:3000'    # or use the IP instead of "foreman" if there is no DNS/hosts entry for Foreman
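
Putting the Puppet side together, the relevant puppet.conf entries look roughly like this (section placement is an assumption and may differ in your setup):

    # on each Puppet client
    [agent]
        report = true

    # on the Puppet master
    [master]
        reports = log, foreman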

Now, for sending facts to Foreman, we can set up a cron job to execute the below command.

$ rake puppet:import:hosts_and_facts RAILS_ENV=production
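
A sketch of such a cron entry, assuming Foreman was cloned to /usr/share/foreman and a 30-minute interval (both are assumptions):

    */30 * * * * cd /usr/share/foreman && RAILS_ENV=production bundle exec rake puppet:import:hosts_and_facts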

Now, once the Puppet clients start running, they will send their logs to Foreman, and these can be viewed in the WebUI.

Foreman and Libvirt

On the same machine I've installed libvirt and libvirt-ruby. Now create a user "foreman" and generate an SSH key for that user. Then copy the public key to the root user's "authorized_keys" file; this is mainly needed if your libvirt host is a different machine.
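
A quick sketch of the key setup (the hostname "libvirt-host" is a placeholder; with libvirt on the same box, root@localhost works as well):

$ su - foreman
$ ssh-keygen -t rsa
$ ssh-copy-id root@libvirt-host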

Now go to the Foreman WebUI: More -> Provisioning -> Compute Resources. Click on "New Compute Resource", add a name for the resource, select Libvirt as the provider, and set the URL to qemu:///system, since libvirt and Foreman reside on the same system. We can also test the connection to libvirt. If the parameters we entered are fine, Foreman can talk to libvirt directly.

Debian, Monitoring

Using SNMP With Icinga

In my previous [blog](https://beingasysadmin.wordpress.com/2013/02/27/monitoring-servers-with-icinga-2/) I explained how to set up the Icinga monitoring system on a Debian-based machine. In this post I will explain how to use SNMP with Icinga. Using SNMP we can get various information from a remote machine, which can then be used by Icinga.

On the remote server we need to install the SNMP daemon. In order to check whether SNMP is working fine, we can also install the SNMP client on the same machine.

apt-get install snmp snmpd

From Debian Wheezy onwards, the snmp-mibs-downloader package has been removed from the Debian repositories, but we can download the .deb from the Squeeze repo. Since it has only a few dependencies, and they can be installed from the Debian repositories, we can install the mibs package manually. In Ubuntu, the mibs package is still available in the repositories.

Once the package is installed, we can run download-mibs to fetch all the necessary MIBs. Now, with the snmpd package installed, we need to comment out the "mibs" line in /etc/snmp/snmp.conf and define the basic settings like location, contact, etc. in the snmpd.conf file. We can define the IP which snmpd should listen on either in the snmpd.conf file or in the /etc/default/snmpd file as an extra option. Once the settings are modified, we can start the snmpd service; a sketch follows below.
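
A minimal snmpd.conf sketch along those lines (the listen address, community string, and contact details are placeholders):

    agentAddress  udp:192.168.0.10:161       # IP and port snmpd listens on
    rocommunity   public 192.168.0.0/24      # read-only community for the monitoring network
    sysLocation   Server room 1
    sysContact    admin@example.com

Then restart the service:

$ service snmpd restart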

So now the service will be listening on port 161 on the IP which we defined, by default 127.0.0.1. We can also enable authentication for the SNMP service, so that only SNMP clients which pass authentication will be able to get results from the SNMP service.

Once the SNMP service has started on the remote server, we can test SNMP connectivity from our Icinga server using Nagios' check_snmp plugin. Once we are able to get results from the SNMP service, we can deploy these SNMP checks in our Icinga host configurations to get results from the remote servers using the check_snmp command.
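
A hedged example of such a test and a matching Icinga service definition (the plugin path, community string, host address, and check_command definition are assumptions for this setup):

$ /usr/lib/nagios/plugins/check_snmp -H 192.168.0.10 -C public -o sysUpTime.0

    define service {
        use                   generic-service
        host_name             remote-server
        service_description   SNMP Uptime
        check_command         check_snmp!-C public -o sysUpTime.0
    }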
