As noted in the last post, I have been building up a couple of VM servers for our new web server stack. We are running Debian/Squeeze, i.e. Debian/Stable. I have had to install a few packages to support the following;
* vlan - 802.1Q VLAN tagging and trunking
* ifenslave-2.6 - Ethernet bonding (port-channels), including LACP
* tcpdump - your friendly packet dumping program
* bridge-utils - Ethernet bridge administration, for the VM bridges
So of course you will need to make sure your repository is pointed somewhere useful, then run as the root user;
apt-get install vlan ifenslave-2.6 bridge-utils
Also you need to make sure the kernel modules are loaded. I put them in /etc/modules so they are loaded at boot time;
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
# Parameters can be specified after the module name.
8021q
bonding
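To pick the modules up without a reboot, something like this works. It is only a sketch: the modprobe lines (commented out here) are the part you actually run as root, and a scratch file stands in for /etc/modules so the sketch is self-contained;

```shell
# Load the modules immediately (requires root) so a reboot isn't needed:
#   modprobe 8021q
#   modprobe bonding
# A scratch file stands in for /etc/modules below; on the real host,
# append to /etc/modules itself.
modfile=$(mktemp)
for m in 8021q bonding; do
    grep -qx "$m" "$modfile" || echo "$m" >> "$modfile"   # idempotent append
done
persisted=$(cat "$modfile")
rm -f "$modfile"
echo "$persisted"
```

The grep -qx guard makes the append idempotent, so re-running the snippet won't duplicate lines.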
Then we need to update the /etc/network/interfaces file;
iface eth0 inet manual
iface eth1 inet manual
iface eth2 inet manual
iface eth3 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1 eth2 eth3
    bond-mode 2
    bond-miimon 100

# 10.0.0.0/24 - VLAN 1000
auto br1000
iface br1000 inet static
    address 10.0.0.10        # pick a real address on the VLAN
    netmask 255.255.255.0
    bridge_ports bond0.1000
    vlan-raw-device bond0
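Before running ifup it is worth a quick sanity check that the statements the bridge depends on are actually in the stanza. A sketch along these lines does the job; the heredoc copy of the stanza keeps the sketch self-contained, but on the real host you would point cfg at /etc/network/interfaces instead;

```shell
# Check a stanza for the statements Debian needs (sketch; on the real
# host set cfg=/etc/network/interfaces instead of the heredoc copy).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
iface br1000 inet static
    bridge_ports bond0.1000
    vlan-raw-device bond0
EOF
missing=0
for stmt in bridge_ports vlan-raw-device; do
    if grep -q "^[[:space:]]*$stmt" "$cfg"; then
        echo "$stmt: present"
    else
        echo "$stmt: MISSING"
        missing=1
    fi
done
rm -f "$cfg"
```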
That's it! No, really! You end up with the interfaces;
eth0, eth1, eth2, eth3, bond0, bond0.1000 and br1000.
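Once ifup has run, confirm the stack actually materialised. On the real host the interface list comes from ip -o link (and cat /proc/net/bonding/bond0 shows the bond state); in this sketch a hard-coded list stands in for that output;

```shell
# On the real host:
#   links=$(ip -o link | awk -F': ' '{print $2}')
# Hard-coded here so the sketch is self-contained.
links="lo eth0 eth1 eth2 eth3 bond0 bond0.1000 br1000"
ok=1
for ifc in bond0 bond0.1000 br1000; do
    case " $links " in
        *" $ifc "*) echo "$ifc: present" ;;
        *)          echo "$ifc: MISSING"; ok=0 ;;
    esac
done
```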
The important difference to notice from other Linux distros is that Debian requires the following statement in the br1000 interface specification;

vlan-raw-device bond0

It quite simply doesn't plumb the bits through to the right interfaces without it. I also moved the bond-slaves (enslave) statements back into the bond0 interface specification. Otherwise the config is the same as described over at Networking in Ubuntu 12.04 LTS.
So you then use the bridge interface br1000 in virt-manager to plumb your VM's network onto the VLAN on the wire. Make sure you create the NIC device and select the right driver. I have been using virtio. Perhaps e1000 could be used instead, but folks have suggested sticking with virtio for latency reasons.
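In the generated libvirt domain XML, that NIC ends up as a stanza along these lines (a sketch of the usual bridge-plus-virtio form; virt-manager fills in the MAC address and the rest of the device details);

```xml
<interface type='bridge'>
  <source bridge='br1000'/>
  <model type='virtio'/>
</interface>
```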
I had previously wrangled the bonding and set up the interfaces, with LACP, together with our good network folks. Note we use bond-mode 2 (balance-xor). You may need to use a different mode, depending on how your switches are configured.
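For reference, the numeric bond-mode values map onto the bonding driver's mode names like this (a quick sketch; on the live host, cat /proc/net/bonding/bond0 reports the active mode);

```shell
# Standard Linux bonding-driver modes, by number.
mode=2   # the value from our interfaces file
case "$mode" in
    0) name=balance-rr ;;
    1) name=active-backup ;;
    2) name=balance-xor ;;
    3) name=broadcast ;;
    4) name=802.3ad ;;      # LACP - needs matching switch config
    5) name=balance-tlb ;;
    6) name=balance-alb ;;
esac
echo "bond-mode $mode = $name"
```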
Remember to add the DNS search and nameserver statements to your interfaces file too.
I used tcpdump a lot. Mostly like this;
tcpdump -i bond0 not port 1985
Removing port 1985 drops all the switch chatter (HSRP hellos), so you can see what is actually coming in from other hosts. I also ran tcpdump on the bridge interface br1000 to see what traffic was going in and out. It took a while for me to realise that the VLAN wasn't being plumbed through to the bridge until I added the vlan-raw-device statement. I added it to see what would happen, because I had nothing to lose and had to get the server back online so folks could get work done on the DEV server =) So there you have it. Hopefully this post will be useful to someone; I know it would have saved me a whole bunch of frustration ;)