I’ve been playing with Kolla for about a week, so I thought it’d be good to share my notes with the OpenStack operator community. (Kolla provides production-ready containers and deployment tools, based on Docker and Ansible, for operating OpenStack clouds.)
Up to stable/newton, Kolla was a single project living in a single git repository, openstack/kolla.
In the current master (Ocata is not yet released), Kolla is split into two repositories: openstack/kolla, which holds the container image definitions, and openstack/kolla-ansible, which holds the Ansible deployment code. So in the current master of openstack/kolla you won’t find the directory with the Ansible roles anymore; it has moved to the new repository.
There is also a kolla-kubernetes repo, but I haven’t had the chance to look at that yet. I’ll work up a second part of this tutorial about it soon.
My first goal was to deploy OpenStack on top of OpenStack with Kolla. I will use SWITCHengines, which runs OpenStack Mitaka, and I’ll try to deploy OpenStack Newton on top of it.
To get started, you need an Operator Seed node: the machine where you actually install Kolla and from which you run the kolla-ansible command.
I used Ubuntu Xenial for all my testing. Ubuntu does not yet have packages for Kolla. Instead of pip-installing a pile of Python packages and ending up with a deployment that is hard to reproduce, I got a tip on #ubuntu-server to use https://snapcraft.io.
There are already some OpenStack tools packaged with snapcraft:
I looked at what was already done, then I tried to package a snap for Kolla myself:
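For context, the snap workflow itself is only a couple of commands. This is an illustrative sketch, not my exact packaging steps: it assumes a directory containing a snapcraft.yaml, and the --devmode and --dangerous flags are needed because a locally built snap is unsigned:

# Build the snap from the directory containing snapcraft.yaml
snapcraft
# Install the locally built, unsigned snap with relaxed confinement
sudo snap install --devmode --dangerous kolla_*_amd64.snap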
It worked quite fast, but I needed to write a couple of Kolla patches:
Also, because I had a lot of permission issues, I had to introduce this ugly patch to run all the Ansible tasks as sudo:
In the beginning, I tried to fix it in an elegant way, adding become: true only where necessary, but my work collided with someone who was already working on that:
I hope that all these dirty workarounds will be gone by stable/ocata. Apart from these small glitches, everything worked pretty well.
For Docker, I used this repo on Xenial: deb http://apt.dockerproject.org/repo ubuntu-xenial main
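For reference, setting that up looks roughly like this. A sketch only: the key ID is the one Docker published for apt.dockerproject.org at the time, and docker-engine was the package name before docker-ce existed:

# Add Docker's apt key and repository, then install the engine on Xenial
sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
echo "deb http://apt.dockerproject.org/repo ubuntu-xenial main" | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt-get update
sudo apt-get install -y docker-engine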
Understanding high availability
Kolla comes with HA built in. The key idea is to have two front-end servers sharing a public VIP via the VRRP protocol. These front-end nodes run HAProxy in active-backup mode, and HAProxy load-balances the requests for the API services, the database and RabbitMQ across two or more controller nodes in the back end.
In the standard setup the front-end nodes are called network because they also act as Neutron network nodes. The nodes in the back end are called controllers.
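A quick way to see which front-end node is currently the VRRP master is to look for the VIP on its interfaces (192.168.22.22 is the VIP address used later in this tutorial):

# Run on each front-end node: the VIP only appears on the current VRRP master
ip addr show | grep 192.168.22.22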
Run the playbook
To get started, source your OpenStack config, make sure you have a tenant with enough quota, and run this Ansible playbook:
cp vars.yaml.template vars.yaml
vim vars.yaml # add your custom config
export ANSIBLE_HOST_KEY_CHECKING=False
source ~/openstack-config
ansible-playbook main.yaml
The Ansible playbook will create the necessary VMs, hack the /etc/hosts file of every VM so that they can all reach each other by name, and install Kolla on the operator-seed node using my snap package.
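For illustration, the /etc/hosts hack could be done with a task along these lines. This is a sketch, not the actual playbook code; the group and fact names are assumptions:

# Illustrative Ansible play: make every VM resolvable by name from every other VM
- name: Spread hosts entries across the cluster
  hosts: all
  become: true
  tasks:
    - name: Add a hosts entry for each node in the inventory
      lineinfile:
        dest: /etc/hosts
        line: "{{ hostvars[item]['ansible_default_ipv4']['address'] }} {{ item }}"
      with_items: "{{ groups['all'] }}"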
To have the frontend VMs share a VIP, I used the approach I found on this blog:
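I won’t reproduce the linked post here, but the usual Neutron mechanism for letting two VM ports share an extra IP is allowed address pairs. A sketch with the Mitaka-era CLI, where the port IDs are placeholders:

# Tell Neutron the VIP may legitimately show up on each front-end VM's port
neutron port-update <frontend1-port-id> --allowed-address-pairs type=dict list=true ip_address=192.168.22.22
neutron port-update <frontend2-port-id> --allowed-address-pairs type=dict list=true ip_address=192.168.22.22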
The playbook also sets up all the OpenStack networking needed for our tests and configures Kolla on the operator node.
Now you can ssh to the operator node and start configuring Kolla. For this easy example, make sure that in /etc/kolla/passwords.yml you have at least something set for the following values:
database_password:
rabbitmq_password:
rabbitmq_cluster_cookie:
haproxy_password:
If you want, you can also just run kolla-genpwd, which fills in passwords for all the fields in the file.
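A sketch of that workflow; kolla-genpwd works on /etc/kolla/passwords.yml by default, and sudo may or may not be needed depending on who owns the file:

# Fill every empty field in /etc/kolla/passwords.yml with a random value
sudo kolla-genpwd
# Spot-check the four values this minimal deploy actually uses
grep -E '^(database_password|rabbitmq_password|rabbitmq_cluster_cookie|haproxy_password):' /etc/kolla/passwords.yml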
Now let’s get ready to run Ansible:
export ANSIBLE_HOST_KEY_CHECKING=False
kolla-ansible -i inventory/mariadb bootstrap-servers
kolla-ansible -i inventory/mariadb pull
kolla-ansible -i inventory/mariadb deploy
The example inventory that I have put at the path /home/ubuntu/inventory/mariadb is a very simplified inventory that will just deploy mariadb and rabbitmq; check what I disabled in /etc/kolla/globals.yml. Rough sketches of both files follow.
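For a rough idea, the two files could look like the sketches below; hostnames, interface names and values are assumptions, and the real files in the repo are authoritative. The inventory maps Kolla’s service groups onto hosts:

# Illustrative inventory in the style of /home/ubuntu/inventory/mariadb
[control]
controller1
controller2
controller3

[network]
frontend1
frontend2

[haproxy:children]
network

[mariadb:children]
control

[rabbitmq:children]
control

And in /etc/kolla/globals.yml the services we don’t need get switched off, roughly:

# Illustrative globals.yml snippet; key names follow the Newton-era defaults
kolla_base_distro: "ubuntu"
kolla_install_type: "binary"
network_interface: "eth0"
kolla_internal_vip_address: "192.168.22.22"
# Deploy only mariadb and rabbitmq; everything else stays off
enable_keystone: "no"
enable_glance: "no"
enable_nova: "no"
enable_neutron: "no"
enable_heat: "no"
enable_horizon: "no"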
Check what is working
With the command:
openstack floating ip list | grep 192.168.22.22
you can find the public floating IP attached to the VIP. Check the OpenStack security groups applied to the front-end VMs: if the necessary ports are open, you should be able to reach the MySQL service on port 3306 and the HAProxy admin panel on port 1984. The passwords are the ones in the passwords.yml file, and the HAProxy username is openstack.
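To verify both services from outside, something like this should work. A sketch: the floating IP is a placeholder, the mysql client is assumed to be installed, and root with database_password is the MariaDB credential Kolla sets up:

# MariaDB through HAProxy on the public floating IP
mysql -h <floating-ip> -P 3306 -u root -p
# HAProxy stats panel (user openstack, haproxy_password from passwords.yml)
curl -u openstack:<haproxy_password> http://<floating-ip>:1984/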
TODO
I will update this file with more steps 🙂 Pull requests are welcome!
Saverio Proto is a cloud engineer at SWITCH, a national research and education network in Switzerland, which runs a public cloud for national universities.
Superuser is always interested in community content, email: [email protected].