I recently needed to start “playing” with OpenStack (I work on an existing RDO setup), so I thought it would be a good idea to have a personal playground that I could deploy from scratch, break, and fix at will.
At first sight, OpenStack looks impressive and “over-engineered”: it’s complex and has zillions of modules to make it work. But once you dive into it, you understand that the choice is yours to make it complex or not. Yeah, that sentence can look strange, so let me explain why.
First, write down your requirements, and only then look at the OpenStack components you actually need. For my personal playground, I just wanted something basic that would let me deploy VMs on demand in the existing network, using a plain bridge, as I want the VMs to be directly integrated into the existing network/subnet.
So just by looking at the architecture diagram, we just need:
- keystone (needed for the identity service)
- nova (hypervisor part)
- neutron (handling the network part)
- glance (to store the OS images that will be used to create the VMs)
Now that I have my requirements and the list of needed components, let’s see how to set up my PoC. The RDO project has good documentation for this, including the Quickstart guide. You can follow that guide, and as everything is packaged/built/tested and delivered through the CentOS mirror network, you can have a working RDO/OpenStack all-in-one setup in minutes.
The only issue is that it doesn’t fit my needs: it sets up unneeded components, and the network layout isn’t the one I wanted either, as it’s based on Open vSwitch and other rules (multiple layers I wanted to get rid of). The good news is that Packstack is in fact a wrapper tool around Puppet modules, and it supports a lot of options to configure your PoC.
Let’s assume that I want a PoC based on openstack-newton, and that my machine has two NICs: eth0 for the mgmt network and eth1 for the VMs network. You don’t need to configure a bridge on the eth1 interface, as neutron will do that automatically. So let’s follow the Quickstart guide, but adapt the packstack command line:
yum install centos-release-openstack-newton -y
systemctl disable firewalld
systemctl stop firewalld
systemctl disable NetworkManager
systemctl stop NetworkManager
systemctl enable network
systemctl start network
yum install -y openstack-packstack
Let’s fix eth1 to ensure that it’s started, but without any IP on it:
sed -i 's/BOOTPROTO="dhcp"/BOOTPROTO="none"/' /etc/sysconfig/network-scripts/ifcfg-eth1
sed -i 's/ONBOOT="no"/ONBOOT="yes"/' /etc/sysconfig/network-scripts/ifcfg-eth1
ifup eth1
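After those two edits, the relevant part of ifcfg-eth1 should look roughly like this (the DEVICE/TYPE lines are assumptions on my side; your file may carry more options):

```
DEVICE="eth1"
TYPE="Ethernet"
BOOTPROTO="none"
ONBOOT="yes"
```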
And now let’s call packstack with the required options so that we use a basic Linux bridge (and so no Open vSwitch), instructing it to use eth1 for that physical network mapping:
packstack --allinone --provision-demo=n \
  --os-neutron-ml2-type-drivers=flat \
  --os-neutron-ml2-mechanism-drivers=linuxbridge \
  --os-neutron-ml2-flat-networks=physnet0 \
  --os-neutron-l2-agent=linuxbridge \
  --os-neutron-lb-interface-mappings=physnet0:eth1 \
  --os-neutron-ml2-tenant-network-types=' ' \
  --nagios-install=n
At this stage we have the OpenStack components installed, and a /root/keystonerc_admin file that we can source for openstack CLI operations. We have instructed neutron to use linuxbridge, but we haven’t (yet) created a network and a subnet tied to it, so let’s do that now:
source /root/keystonerc_admin
neutron net-create --shared --provider:network_type=flat --provider:physical_network=physnet0 othernet
neutron subnet-create --name other_subnet --enable_dhcp \
  --allocation-pool=start=192.168.123.1,end=192.168.123.4 \
  --gateway=192.168.123.254 --dns-nameserver=192.168.123.254 \
  othernet 192.168.123.0/24
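As a side note, that allocation pool is deliberately tiny. Here is a quick shell sanity check of the pool size; the ip_to_int helper is mine for illustration, not an OpenStack tool:

```shell
# Convert a dotted-quad IPv4 address to an integer
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

start=$(ip_to_int 192.168.123.1)
end=$(ip_to_int 192.168.123.4)
# Inclusive range: 192.168.123.1 .. 192.168.123.4
echo "allocation pool size: $(( end - start + 1 ))"
# → allocation pool size: 4
```

Enough for a playground with a handful of VMs; widen the pool if you need more.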
Before importing images and creating instances, there is one thing left to do: instruct the dhcp agent that the metadata for cloud-init inside the VMs will not be served from the traditional “router” inside OpenStack. And also don’t forget to let traffic (in/out) pass through the security groups (see the doc).
Just be sure to have

enable_isolated_metadata = True

in /etc/neutron/dhcp_agent.ini, and then restart the agent:

systemctl restart neutron-dhcp-agent

From that point on, the cloud metadata will be served through dhcp too.
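If you want to script that change instead of editing the file by hand, a sed one-liner does it. The sketch below works on a throwaway copy so it’s safe to try anywhere; on a real node you’d point it at /etc/neutron/dhcp_agent.ini:

```shell
# Work on a throwaway copy; the real file is /etc/neutron/dhcp_agent.ini
tmp=$(mktemp)
printf '[DEFAULT]\n# enable_isolated_metadata = False\n' > "$tmp"

# Uncomment the option (if commented out) and force it to True
sed -i 's/^#\{0,1\} *enable_isolated_metadata *=.*/enable_isolated_metadata = True/' "$tmp"

grep '^enable_isolated_metadata' "$tmp"
# → enable_isolated_metadata = True
```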
One last remark about linuxbridge in an existing network: as neutron has a dhcp agent listening on the bridge, the provisioned VMs will get an IP from the pool declared in the “neutron subnet-create” command. However (and I saw this when I added other compute nodes to the same setup), there is a potential conflict with an existing dhcpd instance on the same segment/network: your VMs can get their IP from that existing dhcpd instance, and not from neutron. As a workaround, you can just make your existing dhcpd ignore the MAC address range used by OpenStack, so that your VMs will always get their IP from the neutron dhcp agent. There are different options for this, depending on your local dhcpd instance:
- for dnsmasq: dhcp-host=fa:16:3e:*:*:*,ignore (see the doc)
- for ISC dhcpd: “ignore booting” (see the doc)
The default MAC address range for OpenStack VMs indeed starts from fa:16:3e:00:00:00 (see base_mac in /etc/neutron/neutron.conf, so that can be changed too).
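To make that concrete, here is a tiny shell sketch of the test your local dhcpd effectively performs with such an ignore rule; the is_neutron_mac function is just an illustration, not part of any tool:

```shell
# Does a MAC address fall in the default OpenStack/neutron range (fa:16:3e:xx:xx:xx)?
is_neutron_mac() {
  case "$1" in
    fa:16:3e:*) return 0 ;;
    *)          return 1 ;;
  esac
}

is_neutron_mac "fa:16:3e:aa:bb:cc" && echo "neutron VM: local dhcpd should ignore it"
is_neutron_mac "52:54:00:12:34:56" || echo "other host: local dhcpd can answer"
```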
Those were some of my findings for my OpenStack PoC/playground. Now that I understand all this a little bit better, I’m currently working on some Puppet integration for it, as there are official OpenStack Puppet modules available on git.openstack.org that one can import to deploy/configure OpenStack (better than using packstack). But there are a lot of “yaks to shave” to get to that point, so that will surely be a future blog post.