This article offers a step-by-step guide to setting up a load-balanced service running in Docker containers on OpenStack VMs. The installation consists of an Nginx load balancer and multiple upstream nodes distributed across two deployments.
The issue
Let’s imagine that we plan to deploy an application that is expected to be heavily used, so that a single server can’t handle all incoming requests and we need multiple instances. Moreover, the application we launch runs complex calculations that fully utilize a server for a long time, so a single instance can’t meet the expected performance requirements. Or we may simply want to deploy the application across multiple instances to ensure that if one instance fails, another keeps operating.
Docker allows us to easily and efficiently run multiple instances of the same service. Docker containers are designed to be brought up very quickly on a VM regardless of the underlying layers.
However, such an installation runs the containers as separate, independent objects. When building this kind of infrastructure, it is desirable to keep all the instances available over a single URL. This requirement can be satisfied by adding a load balancer node.
Our goal
Our goal is to set up an installation that has an Nginx reverse proxy server at the front and a set of upstream servers handling the requests. The Nginx server is the one communicating directly with clients. Clients don’t receive any information about the particular upstream server handling their requests; the responses appear to come directly from the reverse proxy server.
In addition to this functionality, Nginx performs health checks. These checks ensure that the nodes behind the load balancer are still operating: if one of the servers stops responding, Nginx stops forwarding requests to the failed node.
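Open-source Nginx performs these checks passively: a node is marked as failed after a number of unsuccessful proxied requests. The thresholds can be tuned per upstream server with the max_fails and fail_timeout parameters; here is a minimal sketch, with placeholder addresses:
upstream servers {
    # take a node out of rotation for 30s after 3 failed requests
    server <upstream-1>:8080 max_fails=3 fail_timeout=30s;
    server <upstream-2>:8080 max_fails=3 fail_timeout=30s;
}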
How to build the infrastructure
First, we need to create Docker hosts on which we can run the containers. If you are not familiar with Docker, we recommend first reading our previous article, Docker containers on OpenStack VMs, which covers launching Docker hosts and containers. In this entry, we launch two OpenStack VMs running Ubuntu 14.04 on different deployments. The easiest way to launch the VMs is to use the Dashboard.
The VMs should be launched with a public IP address so that they are directly addressable over the internet. TCP port 2376 must be kept open so that Docker can communicate with the hosts. Additionally, TCP port 80 (HTTP) needs to be open in order to access the load balancer, as well as ports 8080 and 8081 so that the reverse proxy server can reach the upstream servers, which will be accessible on those ports.
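If you prefer the command line to the Dashboard, the rules could be added to the VMs’ security group roughly as follows (a sketch assuming the openstack CLI and a security group named default):
# openstack security group rule create --proto tcp --dst-port 2376 default
# openstack security group rule create --proto tcp --dst-port 80 default
# openstack security group rule create --proto tcp --dst-port 8080:8081 default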
Once the VMs are launched and operating, we make them Docker hosts with docker-machine using the generic driver. We can do so by executing this command from the local environment for each VM, substituting its public IP address, the SSH key used at launch, and a machine name:
# docker-machine create -d generic \
    --generic-ip-address <public-ip-of-the-vm> \
    --generic-ssh-key <path-to-the-private-key> \
    --generic-ssh-user ubuntu \
    <machine-name>
After the execution, we have two Docker hosts running. We can check them with the ls command:
# docker-machine ls
NAME   ACTIVE   DRIVER    STATE     URL                  SWARM   DOCKER    ERRORS
lb1    -        generic   Running   tcp://1.2.3.4:2376           v1.11.0
lb2    -        generic   Running   tcp://5.6.7.8:2376           v1.11.0
Now, we can run the containers on the hosts: the upstream nodes and the reverse proxy. In this article, tutum/hello-world was chosen as the image for the upstream nodes because it enables us to tell the particular containers apart. We launch two containers from the hello-world image on each VM. Moreover, we launch the load balancer on one of the hosts; there is an image called nginx in which Nginx is already set up. First, on instance lb1:
# eval $(docker-machine env lb1)
# docker run -d --name con1 -p 8080:80 tutum/hello-world
# docker run -d --name con2 -p 8081:80 tutum/hello-world
# docker run -d --name nginx1 -p 80:80 nginx
Let’s check the created containers on this host:
# docker ps
CONTAINER ID   IMAGE               COMMAND                  CREATED          STATUS          PORTS                         NAMES
4a9a75a82ecf   nginx               "nginx -g 'daemon off"   16 minutes ago   Up 16 minutes   0.0.0.0:80->80/tcp, 443/tcp   nginx1
b28434a2e5d4   tutum/hello-world   "/bin/sh -c 'php-fpm "   18 minutes ago   Up 18 minutes   0.0.0.0:8081->80/tcp          con2
3df4a2e5d86b   tutum/hello-world   "/bin/sh -c 'php-fpm "   19 minutes ago   Up 18 minutes   0.0.0.0:8080->80/tcp          con1
And then on the second instance, this time without launching the Nginx load balancer from the nginx image:
# eval $(docker-machine env lb2)
# docker run -d --name con3 -p 8080:80 tutum/hello-world
# docker run -d --name con4 -p 8081:80 tutum/hello-world
# docker ps
CONTAINER ID   IMAGE               COMMAND                  CREATED          STATUS          PORTS                  NAMES
0a35034ac307   tutum/hello-world   "/bin/sh -c 'php-fpm "   15 minutes ago   Up 15 minutes   0.0.0.0:8081->80/tcp   con4
f223f0eed6a0   tutum/hello-world   "/bin/sh -c 'php-fpm "   15 minutes ago   Up 15 minutes   0.0.0.0:8080->80/tcp   con3
How to configure the reverse proxy server
In the current state, we have an Nginx welcome page accessible at 1.2.3.4 and four instances of the hello-world web application, each with slightly different content, at 1.2.3.4:8080, 1.2.3.4:8081, 5.6.7.8:8080, and 5.6.7.8:8081. Now, we have to configure the Nginx node to become a load balancer and a reverse proxy server.
We show only the basic configuration, which provides load balancing where all nodes are equal. Many more configurations are possible, enabling, for example, priorities (primary and backup nodes), different load balancing methods (round-robin, ip-hash), or weights for servers. To find out more about load balancing configurations, we recommend reading the Nginx load balancing guide or the entry Understanding the Nginx Configuration File Structure and Configuration Contexts on the DigitalOcean blog.
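As an illustration, a weighted variant of the upstream block we create below might look like this (a sketch only; the weight and the backup flag are arbitrary choices, and the addresses match the servers used later in this guide):
upstream servers {
    server 1.2.3.4:8080 weight=2;   # receives roughly twice as many requests
    server 5.6.7.8:8080;
    server 1.2.3.4:8081 backup;     # only used when the regular servers fail
}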
The configuration file we want to create in the container is /etc/nginx/conf.d/default.conf. We can execute a command in a container using the docker exec command; the convenient way is to establish a new shell session in the container. The nginx image has Bash at /bin/bash:
# eval $(docker-machine env lb1)
# docker exec -it nginx1 /bin/bash
Executing this command creates a Bash session so that we can insert the desired content into the configuration file:
# echo "upstream servers {
server 1.2.3.4:8080;
server 5.6.7.8:8080;
server 1.2.3.4:8081;
server 5.6.7.8:8081;
}
# This server accepts all traffic to the port 80 and passes it to the upstream.
# Notice that the upstream name and the proxy_pass need to match.
server {
listen 80;
location / {
proxy_pass http://servers;
}
}” > /etc/nginx/conf.d/default.conf
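Alternatively, to avoid quoting pitfalls inside the container, the same default.conf could be written locally and copied into the container with docker cp (a sketch, run from the local environment with lb1 active):
# docker cp default.conf nginx1:/etc/nginx/conf.d/default.conf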
The upstream servers section specifies the four upstream servers that we created before; they will be accessed over the given URLs. Because no other method was specified, the default round-robin load balancing is used.
The server section turns the node into a load balancer by setting proxy_pass to the group of servers defined above. The server listens on port 80.
Be aware that the port in the listen directive is the port inside the container and does not necessarily correspond to the port the server is accessible at from outside (1.2.3.4:80 here). If a mapping other than -p 80:80 had been published when starting the container, this directive would change. For example, with -p 8080:8081, Nginx would have to listen on port 8081 (listen 8081; in the configuration file) and the load balancer would be accessed at 1.2.3.4:8080.
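To make this concrete, a hypothetical run with that alternative mapping would look like this (a sketch; we keep -p 80:80 in this guide):
# docker run -d --name nginx1 -p 8080:8081 nginx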
The configuration file is properly configured now, so we can terminate the Bash session:
# exit
Back in our local environment, we restart the container so that the changes to the configuration file are loaded and the node begins to operate as a load balancer.
# docker restart nginx1
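Alternatively, the syntax can be checked and the configuration reloaded without restarting the container, using Nginx’s own tooling:
# docker exec nginx1 nginx -t
# docker exec nginx1 nginx -s reload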
Review the infrastructure
The system is running now, so we can check its functionality. We can type http://1.2.3.4 into our browser and see the hello-world app content. When we reload the page, the displayed hostname changes, which means that another upstream server has responded to our request.
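The same check can be scripted with curl from the local environment; assuming the hello-world page includes the container hostname in its output, each request should report a different one:
# for i in 1 2 3 4; do curl -s http://1.2.3.4 | grep -i hostname; done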
Simulate a failover
Let’s test the health checking feature. First, we stop one of the containers:
# docker stop con1
Then access http://1.2.3.4 several times and check the hostnames being displayed: only three different hostnames alternate. This means that the stopped container no longer receives requests.
Now we start the container again:
# docker start con1
After a short delay, you should see four different hosts responding again.
Conclusion
Nginx is an efficient way to perform load balancing in order to provide failover, increase availability, extend the fleet of application servers, or unify the access point to the installation. Docker containers allow us to quickly spawn multiple instances of the same type on various nodes. Combined, they are an easy and powerful mechanism for solving such challenges.
This post first appeared on the Cloud&Heat blog. Superuser is always interested in community content, email: [email protected].