Gaëtan Trellu, technical operations manager at Ormuco, offers this guide.

I recently started working on a Qinling implementation for our OpenStack platforms; this is the first post in a series about it. Before going further, here's a quick overview of what Qinling is and what it does.

Qinling is an OpenStack project that provides Function-as-a-Service. It aims to offer a platform for serverless functions (like AWS Lambda). Qinling supports different container orchestration platforms (Kubernetes, Docker Swarm, etc.) and different function package storage backends (local, Swift, S3) through a plugin mechanism.

Basically, it allows you to trigger a function only when you need it, so you consume only the CPU and memory time you actually use, without having to configure any servers. In the end this makes for a lighter bill, which makes everyone happy. (There's a lot more about Qinling online if you want to take a deeper dive.)
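
To make that concrete, here is roughly what the workflow looks like with the Qinling CLI once a runtime exists (the function file, names and IDs below are illustrative, not from our deployment):

# hello.py contains a main() method, e.g.:
#   def main(name='World', **kwargs):
#       return 'Hello %s' % name
$ openstack function create --name hello --runtime <runtime-id> --entry hello.main --file hello.py
# Pods are only spun up when an execution is created, which is where the savings come from
$ openstack function execution create <function-id> --input '{"name": "Qinling"}'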

Deploying Qinling in production

Our platforms are deployed and maintained with Kolla, an OpenStack project that runs OpenStack services inside Docker containers and configures them with Ansible. The first thing I checked was whether Qinling was integrated with Kolla. Alas… no.

When you have to manage production you don't want to deal with custom setups that are impossible to maintain or upgrade (that little voice in your head knows what I mean), so I started working on integrating Qinling into Kolla, namely the Docker and Ansible parts.
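
For reference, once the integration lands, enabling Qinling with Kolla should boil down to a flag in globals.yml and a scoped deploy; a minimal sketch, assuming the variable follows Kolla's usual enable_* convention:

# /etc/kolla/globals.yml
enable_qinling: "yes"

# Deploy (or reconfigure) only the Qinling containers
$ kolla-ansible -i <inventory> deploy --tags qinling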

The qinling_api and qinling_engine containers are now up and running, configured to communicate with RabbitMQ, MySQL/Galera, memcached, Keystone and etcd. The final important step is to authenticate qinling-engine to the Kubernetes cluster; I must admit this was the most complex part to set up, and the documentation is a bit confusing.
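
For context, the wiring follows the usual oslo.config patterns; a trimmed qinling.conf sketch with placeholder hosts and credentials (not our production values):

[DEFAULT]
transport_url = rabbit://openstack:<password>@192.168.1.10:5672/

[database]
connection = mysql+pymysql://qinling:<password>@192.168.1.10/qinling

[keystone_authtoken]
www_authenticate_uri = http://192.168.1.10:5000
auth_url = http://192.168.1.10:5000
memcached_servers = 192.168.1.10:11211
auth_type = password
username = qinling
password = <password>

[etcd]
host = 192.168.1.10
port = 2379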

Qinling and Magnum, for the win!

Our Kubernetes cluster was provisioned by OpenStack Magnum, an OpenStack project used to deploy container orchestration engines (COEs) such as Docker Swarm, Mesos and Kubernetes.
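
If you need such a cluster to test against, Magnum builds one from a template; a minimal sketch (the image, flavors and external network depend on your cloud):

$ openstack coe cluster template create k8s-template \
    --image fedora-atomic-latest \
    --external-network public \
    --master-flavor m1.medium \
    --flavor m1.medium \
    --coe kubernetes
$ openstack coe cluster create goldyfruit-k8s-qinling \
    --cluster-template k8s-template \
    --master-count 1 \
    --node-count 2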

Basically, the communication between Qinling and Kubernetes is secured by SSL certificates (the same ones used with kubectl): qinling-engine needs to know the CA, the certificate, the key and the Kubernetes API endpoint.

Magnum provides a CLI that makes it easy to retrieve the certificates; just make sure you have python-magnumclient installed.

# Get Magnum cluster UUID
$ openstack coe cluster list -f value -c uuid -c name
687f7476-5604-4b44-8b09-b7a4f3fdbd64 goldyfruit-k8s-qinling
# Retrieve Kubernetes certificates
$ mkdir -p ~/k8s_configs/goldyfruit-k8s-qinling
$ cd ~/k8s_configs/goldyfruit-k8s-qinling
$ openstack coe cluster config --dir . 687f7476-5604-4b44-8b09-b7a4f3fdbd64 --output-certs
# Get the Kubernetes API address
$ grep server config | awk -F"server:" '{ print $2 }'

Four files should have been generated in the ~/k8s_configs/goldyfruit-k8s-qinling directory:

ca.pem: the CA (Qinling option ssl_ca_cert)
cert.pem: the certificate (Qinling option cert_file)
key.pem: the key (Qinling option key_file)
config: the Kubernetes configuration

Only ca.pem, cert.pem and key.pem will be useful in our case (the config file is only used to get the Kubernetes API address). Based on the Qinling documentation, they map to these options:

[kubernetes]
kube_host = https://192.168.1.168:6443
ssl_ca_cert = /etc/qinling/pki/kubernetes/ca.crt
cert_file = /etc/qinling/pki/kubernetes/qinling.crt
key_file = /etc/qinling/pki/kubernetes/qinling.key

At this point, once qinling-engine has been restarted, you should see a network policy created on the Kubernetes cluster under the qinling namespace (yes, you should see that namespace too).
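
Both are easy to verify from the Kubernetes side:

# The namespace and the network policy should both exist
$ kubectl get namespace qinling
$ kubectl get networkpolicy -n qinling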

The network policy mentioned above can block incoming traffic to the pods inside the qinling namespace, which results in a timeout from qinling-engine. A bug has been opened about this issue and it should be solved soon; right now the “best” thing to do is to remove the policy (keep in mind that every time qinling-engine is restarted, the policy will be re-created).

$ kubectl delete netpol allow-qinling-engine-only -n qinling

Just a quick word about the network policy created by Qinling: its purpose is to restrict pod access to a trusted CIDR list (192.168.1.0/24, 10.0.0.53/32, etc.), preventing connections from unknown sources.
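
For the curious, the generated policy is conceptually equivalent to something like this; an illustrative sketch, not the exact manifest Qinling creates:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-qinling-engine-only
  namespace: qinling
spec:
  podSelector: {}   # applies to every pod in the qinling namespace
  ingress:
  - from:
    - ipBlock:
        cidr: 192.168.1.0/24   # trusted CIDRs only
    - ipBlock:
        cidr: 10.0.0.53/32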

One common issue is forgetting to open the Qinling API port (7070), which prevents the Kubernetes cluster from downloading the function code/package (it's time to be nice to your dear network friend ^^).
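
A quick sanity check from one of the Kubernetes nodes (192.168.1.168 matches the kube_host network used earlier; substitute your Qinling API address):

# Verify that the Qinling API port is reachable from the cluster
$ nc -zv 192.168.1.168 7070
# Most OpenStack APIs also answer with a version document on their root URL
$ curl -s http://192.168.1.168:7070/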

Runtime, it’s time to run!

One of Qinling's pitfalls is the “lack” of runtimes, which keeps it from being widely adopted. The reason there aren't many is security (completely understandable).

In production environments (especially public clouds), it's recommended that cloud providers supply their own runtime implementations for security reasons: knowing how the runtime is implemented gives a malicious user a chance to attack the cloud environment.

So far, “only” Python 2.7, Python 3 and Node.js runtimes are available. It's a good start, but it would be nice to have Golang and PHP too (just saying, not asking).
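
Registering a runtime is a single call once you have (or have built) a trusted image; the image below is the Python one published by the Qinling team, but treat the exact name and syntax as an example:

# Register a runtime from a container image (an operator/admin operation)
$ openstack runtime create openstackqinling/python-runtime --name python2.7
$ openstack runtime list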

Conclusion

My journey has just begun and I think Qinling has huge potential, which is why I was a bit surprised to see the project isn't as popular as it could be.

Having it in Kolla, improving the documentation for integration with Magnum, MicroK8s, etc., and providing more runtimes would help the project gain the popularity it deserves.

Thanks to Lingxian Kong and the community for making this project happen!

About the author

Gaëtan Trellu is a technical operations manager at Ormuco.

This post first appeared on Medium.

Superuser is always interested in open infra community topics; get in touch at editorATopenstack.org