Monday, April 30, 2012

AHARS Kit update

So, last December when I started posting kits for the club, AHARS, I really did not know how many kits we would likely sell. Sure, we had a rough idea from the previous club that shipped the VK5JST Aerial Analyser, but as more kits shipped we expected the numbers to drop away.

Well I can only say, it isn't so. Today we hit a bit of a milestone. As I left the post office today it slowly dawned on me, "I just posted kit number 100!".




From the email response, the feedback has been pretty good. Most people have been pretty happy with the completeness and quality of the kit. Sure, we've had a few small hiccups, but we had them sorted pretty quickly. Our suppliers have been wonderful. Our kit packager, Wolf, has been wonderful. The AHARS committee has been nothing but supportive, helping out wherever they can in their busy lives.

So, we have a basic benchmark of twenty kits posted per month. In a year's time, hopefully we will continue to see the VK5JST Aerial Analyser ship in similar numbers. This time next year I would hope we have a few more kits to offer too. Time will tell.

So one thing on my to-do list is to finish my Aerial Analyser kit before Winter ;)

73, Kim VK5FJ

Thursday, April 19, 2012

VM on KVM on a VLAN on a bridge interface on a bond interface across multiple NICs on Debian/Stable - Squeeze

As noted in the last post, I have been building up a couple of VM servers for our new web server stack. We are running Debian/Squeeze, i.e. Debian/Stable. I have had to install a few packages for this, to support:

* vlan - 802.1q VLAN tagging and trunking
* ifenslave-2.6 - ethernet bonding or port-channels (including LACP)
* bridge-utils - Linux bridge management (brctl)
* tcpdump - your friendly packet dumping program

So of course you will need to make sure your repository is pointed somewhere useful, and then run as the root user:

apt-get install vlan ifenslave-2.6 bridge-utils tcpdump

You also need to make sure the kernel modules are loaded. I put them in /etc/modules so they are loaded at boot time:


# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
# Parameters can be specified after the module name.

loop
8021q
bonding
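
If you would rather not wait for a reboot, the modules can also be loaded by hand and checked straight away; something like this, run as root, should do it:

# load the modules now, rather than waiting for the next boot
modprobe 8021q
modprobe bonding
# confirm both modules are registered
lsmod | grep -E '8021q|bonding'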


Then we need to update the /etc/network/interfaces file:


## /etc/network/interfaces
auto eth0
iface eth0 inet manual

auto eth1
iface eth1 inet manual

auto eth2
iface eth2 inet manual

auto eth3
iface eth3 inet manual

auto bond0
iface bond0 inet manual
    bond-mode 2
    bond-miimon 100
    bond-xmit-hash-policy layer2+3
    bond-lacp-rate slow
    bond-slaves eth0 eth1 eth2 eth3

# 10.0.0.0/24 - VLAN 1000
auto br1000
iface br1000 inet static
    network 10.0.0.0
    address 10.0.0.1
    gateway 10.0.0.254
    broadcast 10.0.0.255
    netmask 255.255.255.0
    vlan-raw-device bond0
    bridge_ports bond0.1000
    bridge_stp off



That's it! No, really! You end up with the interfaces:
eth0, eth1, eth2, eth3, bond0, bond0.1000 and br1000.
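
To sanity check that everything is plumbed together, a few quick looks (run as root) along these lines are handy:

# bond mode, slaves and their link state
cat /proc/net/bonding/bond0
# the 802.1q interfaces and their VLAN IDs
cat /proc/net/vlan/config
# bond0.1000 should show up as a port of br1000
brctl show
# the 10.0.0.1/24 address should be on the bridge, not on bond0
ip addr show br1000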

The important difference to notice, compared to other Linux distros, is that Debian requires the following statement in the br1000 interface specification:
vlan-raw-device bond0

It quite simply doesn't plumb the bits through to the right interfaces without it. I also moved the enslave statements back into the bond0 interface specification. Otherwise the config is the same as described over at Networking in Ubuntu 12.04 LTS.

So you then use the bridge interface br1000 in virt-manager to plumb your VM's network onto the VLAN on the wire. Make sure you create the NIC device and select the right driver. I have been using virtio. Perhaps e1000 could be used instead, but folks have suggested sticking with virtio for latency reasons.
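
For the record, the NIC definition that ends up in the libvirt domain XML (viewable with virsh dumpxml) looks roughly like the fragment below; the bridge name is the br1000 from the config above:

<interface type='bridge'>
  <source bridge='br1000'/>
  <model type='virtio'/>
</interface>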

I had previously wrangled the bonding and set up the interfaces, with LACP, with our good network folks. Note we use bond-mode 2 (balance-xor). You may need to use a different mode depending on how your switches are configured; mode 4 (802.3ad) is the one that actually negotiates LACP.

Remember to add the DNS search and nameserver statements to your interfaces file too.
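
With the resolvconf package installed, those can sit in the br1000 stanza; the addresses below are just placeholders for your own resolvers and domain:

    # appended to the br1000 stanza in /etc/network/interfaces
    dns-nameservers 10.0.0.53
    dns-search example.org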

I used tcpdump a lot. Mostly like this:
tcpdump -i bond0 not port 1985
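
A couple of variations along these lines can also help: matching the 802.1q tag directly on the bond, and watching the untagged side on the bridge (interface and VLAN numbers as per the config above):

# tagged frames for VLAN 1000 as they cross the bond
tcpdump -e -n -i bond0 vlan 1000 and not port 1985
# the same traffic, untagged, as the bridge sees it
tcpdump -n -i br1000 not port 1985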

Removing port 1985 drops all the switch chatter (port 1985 is HSRP), so you can see what is actually coming in from other hosts. I also ran tcpdump on the bridge interface br1000 to see what traffic was going in and out. It took a while for me to realise that the VLAN wasn't being plumbed through to the bridge until I added the vlan-raw-device statement. I added it to see what would happen, because I had nothing to lose and had to get the server back online so folks could get work done on the DEV server =)

So there you have it. Hopefully this post will be useful to someone; I know it would have saved me a whole bunch of frustration ;)

cheers,

Kim VK5FJ

New VMs on my server farm...

Morning all,

This post is the beginning of a long journey, no doubt.

It started some time early last year, designing quite specifically what we wanted to do when we replaced our old servers.

In the design was a pair of machines that would run VMware, hosting a number of VMs to replace a bunch of cron jobs and web apps on the old servers, plus a mountain of little things running on a collection of our desktops. These things are all production services, or new development and UAT services, that we have been working on for the last couple of years in between the deluge of day-to-day work.

So, we have built a DEV and a UAT machine. DEV is where we create new code and do the initial testing and deployment for new webapps and other tools. UAT is where our users do testing on that new code. DEV, UAT and PRD is the model used across the whole organisation, so it's a good model as everyone is used to it.

So over the last couple of months, I've been building these services, moving things from the old servers to the new. A couple of things popped up that we wanted to run on VMs to limit the damage if something internet-facing is pwnd. Hey, it happens, unfortunately. Being careful is more responsible than cleaning up afterwards, IMHO ;)

So, VMware simply wasn't a viable option, even though cost isn't a factor because of our licensing arrangements. Latency is a huge problem in all the VMware server farms we have. High latency on web services is a massive turn-off for casual web browsers, and your customers/users hate to wait. So it was really the organisational limitations placed on VMware that led to its demise for our use. Its technical limitations also didn't help us; the storage and management interfaces are also limiting.

So we did our homework and spoke to lots of sysops and admins. KVM/QEMU is well supported on Debian/Stable and has all the tools packaged too. It has a Linux-native management interface, which is the biggest win really. We just don't do Windows servers.
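
For anyone following along on Squeeze, the pieces all come from the standard repositories; the package set I would expect to need is roughly:

apt-get install qemu-kvm libvirt-bin virtinst virt-manager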

When installing the VMs I ran into that fun problem of "now, which network interface do you want to use?" Arrggghh!

Time to re-learn all about bridges and plumbing VLANs around on top of the existing ethernet bonding setup. We bond four one-gigabit NICs into a single bond0 interface and then have VLAN interfaces on top of that. Bridging added a whole new world of complexity!

I spent many hours reading up about how to do this. It seems it is not a common approach to run a VM on KVM on a VLAN on a bridge interface on a bond interface across multiple NICs.

It seems there is some doco out there about how to do this on Ubuntu 12.04, but it doesn't work on Debian/Stable (Squeeze).

So finally, when I got this all working yesterday, I had to document it and post it here on the blog so others can do the same and give some feedback.