Ramon Acedo shows how to set up infrastructure requiring multiple networks on a typical home lab.


This post originally ran on Ramon Acedo's blog, Tricky Cloud. Ramon is a Cloud Architect who started in the open source world in the age of 33.6K modems. He works at Red Hat, helping businesses on their journey to an enterprise-class OpenStack experience. You should follow him on Twitter.

[Diagram: the setup described below, with Open vSwitch bridges, GRE tunnels and Libvirt VMs across two hosts]

GRE tunnels are extremely useful for many reasons. One use case is being able to design and test an infrastructure that requires multiple networks on a typical home lab with limited hardware, such as laptops and desktops with only one Ethernet card.

As an example, to design an OpenStack infrastructure for a production environment with RDO or Red Hat Enterprise Linux OpenStack Platform (RHEL OSP), three separate networks are recommended.

These networks will have services such as DHCP (even multiple DHCP servers, if needed) as they will be completely isolated from each other. Testing multiple VLANs or trunking is also possible with this setup.

The diagram above should be almost self-explanatory and describes this setup with Open vSwitch, GRE tunnels and Libvirt.

Step by Step on CentOS 6.5

  1. Install CentOS 6.5 choosing the Basic Server option

  2. Install the EPEL and RDO repos, which provide Open vSwitch and network namespace support for iproute:
# yum install http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
# yum install http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-3.noarch.rpm
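If you want to double-check that both repositories are active before moving on, yum can list them (the repo IDs shown here, epel and openstack-icehouse, may differ slightly depending on the release files):
# yum repolist enabled | grep -iE 'epel|openstack'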
  3. Install Libvirt, Open vSwitch and virt-install:
# yum install libvirt openvswitch python-virtinst
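Open vSwitch and Libvirt both need their daemons running before the ovs-vsctl and virsh commands below will work. Assuming the service names shipped by these packages on CentOS 6 (openvswitch and libvirtd), something like this should do it:
# service openvswitch start && chkconfig openvswitch on
# service libvirtd start && chkconfig libvirtd on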
  4. Create the bridge that will be associated with eth0:
# ovs-vsctl add-br br-eth0
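A quick sanity check: ovs-vsctl show should now list the new bridge:
# ovs-vsctl show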
  5. Set up your network on the br-eth0 bridge with the configuration you had on eth0 and change the eth0 network settings as follows (with your own network settings):
# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
MTU=1546
# cat /etc/sysconfig/network-scripts/ifcfg-br-eth0
DEVICE=br-eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
IPADDR0=192.168.2.1
PREFIX0=24
DNS1=192.168.2.254

Notice the MTU setting above. This is very important, as GRE adds encapsulation overhead. There are two options: increase the MTU on the hosts, as in this example, or decrease the MTU in the guests if your NIC doesn't support MTUs larger than 1500 bytes.
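If you take the guest-side option instead, lowering the MTU inside each VM along these lines should work. The value 1454 simply mirrors the 46 bytes of headroom implied by the 1546 host MTU above, so adjust it to your own setup, and add an MTU= line to the guest's ifcfg file to make it persistent:
# ip link set dev eth0 mtu 1454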

  6. Add eth0 to br-eth0 and restart the network to pick up the changes made in the previous step:
# ovs-vsctl add-port br-eth0 eth0 && service network restart
  7. Make sure your network still works as it did before the changes above.
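For example, check that br-eth0 has picked up the IP configuration and that you can still reach a host on your LAN (192.168.2.254 is just the DNS/gateway address used in the example configuration, substitute your own):
# ip addr show br-eth0
# ping -c 3 192.168.2.254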

  8. Assuming this host has the IP 192.168.2.1 and you have two other hosts where you will do this same (or a compatible) setup with the IPs 192.168.2.2 and 192.168.2.3, create the internal OVS bridge br-int0 and set the GRE tunnel endpoints gre0 and gre1 (note that the diagram above shows only two hosts, but you can add more hosts with an identical setup):
# ovs-vsctl add-br br-int0
# ovs-vsctl add-port br-int0 gre0 -- set interface gre0 type=gre options:remote_ip=192.168.2.2
# ovs-vsctl add-port br-int0 gre1 -- set interface gre1 type=gre options:remote_ip=192.168.2.3

Notice there is another way to set up GRE tunnels using /etc/sysconfig/network-scripts/ in CentOS/RHEL but the method explained here works in any Linux distro and is equally persistent. Choose whichever you find appropriate.
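If you want to verify the tunnels before involving any VMs, one option is to put a temporary IP on the br-int0 internal port of each host and ping across the GRE mesh. The 10.0.0.x addresses below are arbitrary test addresses (use 10.0.0.2 on the second host, and remove them again with ip addr del when you are done):
# ip link set br-int0 up
# ip addr add 10.0.0.1/24 dev br-int0
# ping -c 3 10.0.0.2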

  9. Enable STP (needed for more than 2 hosts):
# ovs-vsctl set bridge br-int0 stp_enable=true
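You can confirm the setting took effect with:
# ovs-vsctl get bridge br-int0 stp_enable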
  10. Create a file called libvirt-vlans.xml with the definition of the Libvirt network that will use the Open vSwitch bridge br-int0 (and the GRE tunnels) we just created. Check the diagram above for reference:
<network>
  <name>ovs-network</name>
  <forward mode='bridge'/>
  <bridge name='br-int0'/>
  <virtualport type='openvswitch'/>
  <portgroup name='no-vlan' default='yes'>
  </portgroup>
  <portgroup name='vlan-100'>
    <vlan>
      <tag id='100'/>
    </vlan>
  </portgroup>
  <portgroup name='vlan-200'>
    <vlan>
      <tag id='200'/>
    </vlan>
  </portgroup>
</network>
  11. Remove (optionally) the default network that Libvirt creates and add (mandatory) the network defined in the previous step:
# virsh net-destroy default
# virsh net-autostart --disable default
# virsh net-undefine default
# virsh net-define libvirt-vlans.xml
# virsh net-autostart ovs-network
# virsh net-start ovs-network
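At this point virsh should report ovs-network as active and set to autostart:
# virsh net-list --all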
  12. Create a Libvirt storage pool where your VMs will be created (needed to use the qcow2 disk format). I chose /home/VMs/pool but it can be anywhere you find appropriate:
# virsh pool-define-as --name VMs-pool --type dir --target /home/VMs/pool/
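Note that pool-define-as only defines the pool; before virt-install can place a disk image in it you will most likely also need to build it (which creates the target directory), start it and mark it to autostart:
# virsh pool-build VMs-pool
# virsh pool-start VMs-pool
# virsh pool-autostart VMs-pool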
  13. Assuming you are installing a CentOS VM and that the location of the ISO is /home/VMs/ISOs/CentOS-6.5-x86_64-bin-DVD1.iso, create a VM named foreman (or any name you like) with virt-install:
# virt-install \
--name foreman \
--ram 1024 \
--vcpus=1 \
--disk size=20,format=qcow2,pool=VMs-pool \
--nonetworks \
--cdrom /home/VMs/ISOs/CentOS-6.5-x86_64-bin-DVD1.iso \
--graphics vnc,listen=0.0.0.0,keymap=en_gb --noautoconsole --hvm \
--os-variant rhel6
  14. Use a VNC client to access the screen of the VM during the installation. Finish the installation and shut down the VM.
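If you are unsure which VNC display the VM was given, virsh can tell you (foreman being the name used in this example); add 5900 to the display number to get the TCP port to point your VNC client at:
# virsh vncdisplay foreman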

  15. Edit the VM with virsh edit foreman (following the name used in the example above) to add the three interfaces (one per portgroup) defined before. At the bottom of the VM definition, just before the closing </devices> tag, add the following:
<interface type='network'>
  <source network='ovs-network' portgroup='no-vlan'/>
  <model type='virtio'/>
</interface>
<interface type='network'>
  <source network='ovs-network' portgroup='vlan-100'/>
  <model type='virtio'/>
</interface>
<interface type='network'>
  <source network='ovs-network' portgroup='vlan-200'/>
  <model type='virtio'/>
</interface>

Now you can start your VM with virsh start foreman and set up the network on any or all of the three interfaces. Repeat the same process on another host and VM, and you are ready to install something like Foreman and OpenStack without needing more than one physical network interface per host.
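As a final check on the host, virsh can list the VM's interfaces and confirm that all three are attached to ovs-network:
# virsh domiflist foreman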