Thursday, April 19, 2012

New VMs on my server farm...

Morning all,

This post is no doubt the beginning of a long journey.

It started some time early last year, designing quite specifically what we wanted to do when we replaced our old servers.

In the design was a pair of machines that would run VMware, hosting a number of VMs to replace a bunch of cron jobs and web apps on the old servers, plus a mountain of little things running on a collection of our desktops. These are all production services, or new development and UAT services, that we have been working on for the last couple of years in between the deluge of day-to-day work.

So, we have built a DEV and a UAT machine. DEV is where we create new code and do the initial testing and deployment for new web apps and other tools. UAT is where our users do testing on that new code. DEV, UAT and PRD is the model used across the whole organisation, so it's a good model as everyone is used to it.

So over the last couple of months, I've been building these services, moving things from the old servers to the new. A couple of things popped up that we wanted to run on VMs to limit the damage if something internet-facing is pwnd. Hey, it happens, unfortunately. Being careful is more responsible than cleaning up afterwards, IMHO ;)

So, VMware simply wasn't a viable option, even though cost isn't a factor because of our licensing arrangements. Latency is a huge problem in all the VMware server farms we have, and high latency on web services is a massive turn-off for casual web browsers; your customers/users hate to wait. So it was really the organisational limitations placed on VMware that led to its demise for our use, though its technical limitations didn't help us either: the storage and management interfaces are also limiting.

So we did our homework and spoke to lots of sysops and admins. KVM/QEMU is well supported on Debian/Stable, and all the tools are packaged too. It has a Linux-native management interface, which is the biggest win really. We just don't do Windows servers.
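For what it's worth, everything we needed is a single apt-get away on Squeeze. A sketch (package names as I remember them, so check your release):

    # KVM/QEMU, libvirt management tools, plus VLAN, bridge and bonding support
    apt-get install qemu-kvm libvirt-bin virtinst bridge-utils vlan ifenslave-2.6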

When installing the VMs I ran into that fun problem of: now, which network interface do you want to use? Arrggghh!

Time to re-learn all about bridges and plumbing VLANs around on top of the existing Ethernet bonding setup. We bond four one-gigabit NICs into a single bond0 interface and then have VLAN interfaces on top of that. Bridging added a whole new world of complexity!
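The existing setup, before any bridges, looks roughly like this in /etc/network/interfaces (a sketch only; the interface names, bonding mode, VLAN ID and addresses here are made up for illustration, not our real ones):

    # Four GbE NICs bonded into bond0, with VLAN 10 on top of the bond
    auto bond0
    iface bond0 inet manual
        slaves eth0 eth1 eth2 eth3
        bond-mode 802.3ad
        bond-miimon 100

    auto bond0.10
    iface bond0.10 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        vlan-raw-device bond0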

I spent many hours reading up on how to do this. It seems it is not a common approach to run a VM on KVM, on a bridge, on a VLAN, on a bond interface across multiple NICs.
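The shape of the fix (a sketch only, continuing the made-up names above) is that the VLAN interface goes manual and a bridge takes over its address; the guests then plug into the bridge:

    # VLAN 10 carries no address of its own any more
    auto bond0.10
    iface bond0.10 inet manual
        vlan-raw-device bond0

    # The bridge owns the address; KVM guests attach here
    auto br10
    iface br10 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        bridge_ports bond0.10
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0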

There is some doco out there about how to do this on Ubuntu 12.04, but it doesn't work on Debian/Stable (Squeeze).

So when I finally got this all working yesterday, I had to document it and post it here on the blog so others can do the same and give some feedback.
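Once the bridge is up, libvirt just needs to be pointed at it when creating a guest. A hypothetical virt-install run (guest name, sizes and ISO path are invented for the example):

    virt-install --name dev-web01 --ram 1024 --vcpus 1 \
        --disk path=/var/lib/libvirt/images/dev-web01.img,size=10 \
        --network bridge=br10,model=virtio \
        --cdrom /srv/iso/debian-6.0.4-amd64-netinst.iso

The guest's eth0 then sits directly on VLAN 10 like any other machine on that network.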

1 comment:

  1. I found the documentation was frustratingly absent when I was trying to set up a few virtual machines (Xen guests, actually) with a few isolated virtual networks, so I could have my own lab on a laptop to go through some training exercises. Just about everyone documented only setting up one interface on the VM and bridging it to the host interface. There was very little on how to set up multiple isolated networks, and even less documentation if you wanted to use dnsmasq to provide DHCP for the interfaces on the guest machine, which I wanted to do because it made VM cloning so much easier: just clone the machine with the right MAC addresses set up and there you go. The saddest thing I saw in the whole exercise was someone posting to the dnsmasq mailing lists saying he was working on fixing having to pile all the DHCP settings onto the command line, only to be told by the developers that the feature already existed. Was that documented? Not that I could find, even when I knew what to look for. Seriously people, if you write some software then document it, please, otherwise you are wasting so many people's time.... /rant
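For the record, the dnsmasq feature being referred to is presumably that every command-line option has a config-file equivalent, so the DHCP settings can live in /etc/dnsmasq.conf instead of piling up on the command line. A sketch of the clone-friendly setup described above (interface name, MACs and addresses invented):

    # /etc/dnsmasq.conf
    interface=virbr1
    dhcp-range=10.0.1.50,10.0.1.99,12h
    # pin each cloned guest to a fixed address by its MAC
    dhcp-host=52:54:00:aa:bb:01,10.0.1.11
    dhcp-host=52:54:00:aa:bb:02,10.0.1.12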
