[Editor’s note: this tutorial should work on any OpenStack cloud that supports IPv4 addresses and LBaaSv2.]
We are seeing increasing demand for container orchestration tools from our users. Kubernetes currently gets a lot of hype, and we are often asked whether we provide a Kubernetes cluster at SWITCH, the foundation that operates the Swiss academic network.
At the moment, we suggest that our users deploy their own Kubernetes cluster on top of SWITCHengines. To make sure our OpenStack deployment works with this solution, we tried it ourselves.
After deploying manually with kubeadm to learn the tool, I found a well-written Ansible playbook by Francois Deppierraz. I extended the playbook to make Kubernetes aware that SWITCHengines implements LBaaSv2, and the patch is now merged into the original version.
The first problem I discovered deploying Kubernetes is its total lack of IPv6 support. Because instances in SWITCHengines get IPv6 addresses by default, I ran into problems running the playbook and nothing worked. The first thing you should do is create your own tenant network with a router, with IPv4-only connectivity. This is already explained in detail in our standard documentation.
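For reference, a minimal network setup with the OpenStack CLI could look like the following sketch. The names k8s, k8s-subnet and k8s-router, the address range and the DNS server are only examples; public is the external network in SWITCHengines:

# create an IPv4-only tenant network, subnet and router (example names and range)
openstack network create k8s
openstack subnet create k8s-subnet --network k8s --subnet-range 10.8.10.0/24 --dns-nameserver 8.8.8.8
openstack router create k8s-router
openstack router set k8s-router --external-gateway public
openstack router add subnet k8s-router k8s-subnet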
Now we are ready to clone the Ansible playbook:
git clone https://github.com/infraly/k8s-on-openstack
Because the Ansible playbook creates instances through the OpenStack API, you will have to source your OpenStack configuration file. We extend the usual configuration file a little with extra variables that are specific to this playbook. Let's look at a template:
export OS_USERNAME=username
export OS_PASSWORD=mypassword
export OS_PROJECT_NAME=myproject
export OS_PROJECT_ID=myproject_uuid
export OS_AUTH_URL=https://keystone.cloud.switch.ch:5000/v2.0
export OS_REGION_NAME=ZH
export KEY=keyname
export IMAGE="Ubuntu Xenial 16.04 (SWITCHengines)"
export NETWORK=k8s
export SUBNET_UUID=subnet_uuid
export FLOATING_IP_NETWORK_UUID=network_uuid
Let's review what changes. It is important to also set the variable OS_PROJECT_ID, because the Kubernetes code that creates load balancers requires this value and cannot derive it from the project name. To find the UUID, just use the OpenStack CLI:
openstack project show myprojectname -f value -c id
KEY is the name of an existing keypair that will be used to start the instances. IMAGE is self-explanatory; at the moment I have only tested Xenial. NETWORK is the name of the tenant network you created earlier. When you created that network you also created a subnet, and its UUID goes into SUBNET_UUID. The last variable, FLOATING_IP_NETWORK_UUID, tells Kubernetes which network to allocate floating IPs from. In SWITCHengines this network is always called public, so you can extract its UUID like this:
openstack network show public -f value -c id
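In the same way, assuming your tenant network is called k8s, you can find the value for SUBNET_UUID with:

openstack subnet list --network k8s -f value -c ID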
You can customize your configuration even further: in the README file in the git repository you will find more options, such as the flavors to use or the cluster size. When your configuration file is ready, you can run the playbook:
source /path/to/config_file
cd k8s-on-openstack
ansible-playbook site.yaml
It will take a few minutes to go through all the tasks. When everything is done, you can SSH into the Kubernetes master instance and check that everything is running as expected:
ubuntu@k8s-master:~$ kubectl get nodes
NAME         STATUS    AGE       VERSION
k8s-1        Ready     2d        v1.6.2
k8s-2        Ready     2d        v1.6.2
k8s-3        Ready     2d        v1.6.2
k8s-master   Ready     2d        v1.6.2
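Beyond the node list, you can also verify that the control plane and the system pods came up correctly. Two quick sanity checks (the exact output will vary with your cluster) are:

kubectl get componentstatuses
kubectl get pods --namespace=kube-system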
I found it very useful to add bash completion for kubectl:
source <(kubectl completion bash)
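To make the completion available in every new shell, you can append the same line to your ~/.bashrc:

echo "source <(kubectl completion bash)" >> ~/.bashrc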
Let's deploy an instance of nginx to test that everything works:
kubectl run my-nginx --image=nginx --replicas=2 --port=80
This will create two pods, each running an nginx container. You can monitor the progress with these commands:
kubectl get pods
kubectl get events
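If you prefer a declarative approach, the same deployment can be described in a manifest and applied from standard input. This is a minimal sketch, assuming the apps/v1beta1 Deployment API that ships with Kubernetes v1.6:

# equivalent of the kubectl run command above, written as a manifest
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx    # same label the expose command will select on
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
EOF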
At this stage your containers are running, but the service is not yet reachable from the outside. One option is to use the OpenStack LBaaS to expose it; you can do that with this command:
kubectl expose deployment my-nginx --port=80 --type=LoadBalancer
The expose command will create and configure the OpenStack load balancer. To find out the public floating IP address, use this command to describe the service:
ubuntu@k8s-master:~$ kubectl describe service my-nginx
Name:                   my-nginx
Namespace:              default
Labels:                 run=my-nginx
Annotations:
Selector:               run=my-nginx
Type:                   LoadBalancer
IP:                     10.109.12.171
LoadBalancer Ingress:   10.8.10.15, 86.119.34.151
Port:                   80/TCP
NodePort:               30620/TCP
Endpoints:              10.40.0.1:80,10.43.0.1:80
Session Affinity:       None
Events:
  FirstSeen  LastSeen  Count  From                SubObjectPath  Type    Reason                 Message
  ---------  --------  -----  ----                -------------  ----    ------                 -------
  1m         1m        1      service-controller                 Normal  CreatingLoadBalancer   Creating load balancer
  10s        10s       1      service-controller                 Normal  CreatedLoadBalancer    Created load balancer
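The second address listed under LoadBalancer Ingress is the floating IP (86.119.34.151 in this example). Assuming it is reachable from your machine, a plain HTTP request should return the nginx welcome page:

curl http://86.119.34.151/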
Conclusion
With this blog post you should be able to deploy Kubernetes on OpenStack and understand how things work. For a real deployment you might want to make some customizations; we encourage you to share any patches to the Ansible playbook via GitHub pull requests.
Please note that Kubernetes is not bug-free. When you delete your deployment, you might run into this bug, where Kubernetes fails to delete the load balancer correctly. Hopefully it will be fixed by the time you read this!
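If you do hit it, one possible workaround, assuming the LBaaSv2 commands of the neutron CLI are available, is to find the leftover load balancer and remove it by hand; note that child objects such as listeners and pools may need to be deleted first:

neutron lbaas-loadbalancer-list
neutron lbaas-loadbalancer-delete <loadbalancer-id>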
Saverio Proto is a cloud engineer at SWITCH, a national research and education network in Switzerland, which runs a public cloud for national universities.