Traditionally, using containers in the cloud requires users to manage their own infrastructure, such as a cluster of virtual servers. Users have to provision, configure and scale the clusters of virtual servers that these containers run on. They also need to perform the initial capacity planning, such as choosing the number and types of servers, as well as handle dynamic reconfiguration, such as adding or removing servers from running clusters.
Emerging clusterless container solutions, such as Amazon Web Services (AWS) Fargate and Azure Container Instances (ACI), highlight the potential for OpenStack Zun, which also takes a clusterless approach to running containers. OpenStack Zun launched in June 2016, before Fargate and ACI hit the market, which made its use case harder to understand from a commercial perspective, especially compared to running platforms like Kubernetes on OpenStack or using OpenStack Magnum to provision Kubernetes.
The clusterless approach taken by Zun, Fargate and ACI allows users to run containers without having to pre-create the set of virtual servers or manage the hosting environment.
The major benefits are:
- Reduced administration overhead: Users see only their containers and the applications running in them; there is zero server management.
- On-demand pricing model: Users are charged for their containers, which can be metered by the second. In addition, the agility of containers allows users to quickly start and shut down their applications on demand, and users don’t pay when their applications aren’t running.
- Increased resource utilization: The smaller footprint of containers allows better packing of workloads, increasing the utilization of the cloud.
In this article, we’ll go over two clusterless container technologies, AWS Fargate and OpenStack Zun, and make recommendations for use cases.
AWS Fargate is a technology of Amazon Elastic Container Service (ECS). Since Fargate is embedded into ECS, all of the ECS features are immediately available. OpenStack Zun is an open-source solution; it’s integrated and released together with other OpenStack services. Both solutions are similar in terms of providing tight integration between containers and the underlying cloud infrastructure. With ECS Fargate, containers are integrated with AWS Identity and Access Management (IAM), Virtual Private Cloud (VPC), CloudWatch and many others. With OpenStack Zun, containers are integrated with OpenStack Neutron, Cinder, Keystone, Heat and Horizon.
ECS is more opinionated about the way applications are built: you have to define your application using its concepts, such as Task and Service. In comparison, OpenStack Zun adopts a less opinionated approach, offering more flexibility to build your applications as you prefer.
OpenStack Zun
Usability
Zun primarily uses the concepts of Container and Capsule. A container is a single container backed by Docker or another container engine. The basic Docker primitives, such as creating and deleting containers, are integrated into Zun. In addition, Zun adds several OpenStack-integrated capabilities to containers: for example, a Zun container is, by default, given Neutron IP addresses, can optionally attach Cinder volumes and has its access authenticated by Keystone, among other integrations. OpenStack users already familiar with these concepts will find it fairly easy to get started with Zun containers.
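To illustrate those Docker-style primitives, here’s a minimal lifecycle with the OpenStack CLI (the container name and image are placeholders):

# Launch a container from a public image
openstack appcontainer run --name web nginx:latest

# Inspect and manage it with Docker-like primitives
openstack appcontainer list
openstack appcontainer show web
openstack appcontainer stop web
openstack appcontainer delete web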
In addition to the Container concept, Zun provides a more advanced abstraction called Capsule. A Zun Capsule, like a Kubernetes Pod, represents a group of containers that are co-located, co-scheduled and share some context. A Capsule is used to group multiple containers that need to work closely with each other to achieve a business goal.
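For illustration, here’s a minimal capsule template, loosely modeled on the format in the Zun documentation; field names and the exact client invocation may vary between releases:

# Write a minimal capsule template (format based on the Zun docs;
# exact fields may differ by release)
cat > capsule.yaml <<'EOF'
capsuleVersion: beta
kind: capsule
metadata:
  name: web-capsule
spec:
  containers:
  - image: nginx
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
EOF

# Create the capsule (check your python-zunclient version for the exact syntax)
openstack capsule create capsule.yaml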
The operational requirements of Zun are similar to those of other OpenStack services: a Zun API server is installed on the controller node and a Zun compute agent on each compute node. Zun’s primary dependencies are the OpenStack base services: an Oslo-compatible database, a message queue, Etcd and Keystone.
Feature set
At its core, Zun provides a RESTful API service for managing container resources. Zun provides basic functionality, such as creating a container with arbitrary amounts of CPU and memory, as well as some advanced features, such as support for single root input/output virtualization (SRIOV). Zun is integrated with OpenStack Keystone, so it supports multi-tenancy and provides role-based access control (RBAC) at the level of individual containers.
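For instance, a container can be given a fractional vCPU, which is one way the smaller footprint of containers translates into better packing:

# Request half a vCPU and 512 MB of memory (units assumed; check your release)
openstack appcontainer run --cpu 0.5 --memory 512 nginx:latest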
In terms of networking, a Zun container typically has one or more network interfaces modeled as Neutron ports. By default, Zun creates the Neutron ports on a tenant network with default configurations. Users can customize these configurations, including choosing the Neutron network and/or the IP addresses of the ports, or even using a pre-created and pre-configured Neutron port. In addition, a Zun container can connect to more than one network; in this case, a Neutron port will be created on each network.
Here’s an example:
openstack appcontainer run \
    --net network=net1,v4-fixed-ip=10.0.0.31 \
    --net network=net2 \
    --net port=net3-port \
    nginx:latest
Using Zun together with Neutron, you can create containers in the same isolated environment where your Nova instances or other OpenStack resources reside. All of the Neutron features available to VMs (e.g. security groups, QoS) are also available to Zun containers.
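As a sketch, the usual Neutron workflow applies unchanged; note that the --security-group flag on the container is an assumption here, so check your python-zunclient version:

# Create a security group that only admits HTTP traffic
openstack security group create web-sg
openstack security group rule create --protocol tcp --dst-port 80 web-sg

# Launch a container attached to that security group (flag assumed)
openstack appcontainer run --security-group web-sg nginx:latest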
To containerize applications that need to persist data, the common approach is to leverage an external service to provide persistent volumes for the containers. Zun tackles this problem by integrating with OpenStack Cinder. When creating a container, users have the option to mount Cinder volumes into it. The Cinder volume can be an existing volume in the tenant or a newly created one. Each volume is bind-mounted at a path in the container’s file system, and data stored there is persisted.
The following command offers an example:
openstack appcontainer run \
    --mount source=my_cinder_volume,destination=/path/in/container \
    nginx:latest
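If the volume doesn’t exist yet, it can be pre-created with the standard Cinder workflow (the name matches the example above):

# Pre-create a 10 GiB Cinder volume for the container to mount
openstack volume create --size 10 my_cinder_volume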
In terms of orchestration, unlike other container platforms that provide built-in orchestration, Zun delegates this to an external orchestration system. Currently, Zun is integrated with OpenStack Heat; in the future, it will also integrate with Kubernetes. With an external orchestration tool in place, end users can define their containerized applications using the DSL provided by the tool. With Heat in particular, it’s also possible to define applications composed of containers and other OpenStack resources, such as Neutron load balancers, floating IPs and Nova instances. For example, the following Heat template deploys WordPress backed by MySQL and exposes it through a floating IP:
heat_template_version: 2017-09-01

resources:
  db:
    type: OS::Zun::Container
    properties:
      image: mysql
      environment:
        MYSQL_ROOT_PASSWORD: rootpass
        MYSQL_DATABASE: wordpress

  wordpress:
    type: OS::Zun::Container
    properties:
      image: "wordpress:latest"
      environment:
        WORDPRESS_DB_HOST: {get_attr: [db, addresses, private, 0, addr]}
        WORDPRESS_DB_USER: root
        WORDPRESS_DB_PASSWORD: rootpass

  floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: public

  association:
    type: OS::Neutron::FloatingIPAssociation
    properties:
      floatingip_id: {get_resource: floating_ip}
      port_id: {get_attr: [wordpress, addresses, private, 0, port]}

outputs:
  url:
    value: {get_attr: [floating_ip, floating_ip_address]}
    description: The web server url
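Saving the template to a file, the stack can then be launched and queried with the standard Heat workflow (file and stack names are placeholders):

# Create the stack from the template above, then read back the URL output
openstack stack create -t wordpress.yaml wordpress
openstack stack output show wordpress url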
Finally, SRIOV support is a unique feature found in Zun. Generally speaking, SRIOV is a hardware-based virtualization technology that can be used to enhance network performance. SRIOV is widely used for Network Function Virtualization (NFV) workloads because NFV has strict network-performance requirements. To leverage the SRIOV capability with Zun, users create an SRIOV port in Neutron and pass that port to Zun when creating a container. Under the hood, Zun will schedule the container to one of the hosts that has an SRIOV Network Interface Card (NIC) and pass an SRIOV NIC through to the container’s network namespace. As a result, the container gets accelerated network performance.
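A hedged sketch of that workflow follows; the network and port names are placeholders, and the Neutron network must be backed by SRIOV-capable hardware:

# Create an SRIOV port (vnic-type "direct") on an SRIOV-capable provider network
openstack port create --network sriov-net --vnic-type direct sriov-port

# Hand the pre-created port to Zun; the container lands on an SRIOV-capable host
openstack appcontainer run --net port=sriov-port nginx:latest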
ECS Fargate
Usability
ECS Fargate primarily uses the concepts of Task and Service. An ECS Task is similar to a Kubernetes Pod: it contains one or more containers that are co-located, co-scheduled and share some context. An ECS Service primarily maintains a specified number of Tasks, similar to a Deployment in Kubernetes. Each Task is given an Elastic Network Interface and a private IP address within a VPC. Each Task can optionally have a data volume, but the data volume is not backed by cloud storage.
In general, ECS appears to have a steep learning curve because it defines its own set of abstractions and concepts that aren’t used in other platforms, and users need to become familiar with them before creating their applications. In addition, the Task abstraction doesn’t expose finer control of individual containers. Users might find it impossible to perform some operations that they are used to doing in Docker, for example, starting/stopping/killing a container, attaching to the container’s terminal, running a command in a running container, committing a container, etc. Instead, ECS provides higher-level controls, such as specifying the number of Tasks to run or restarting a Task once it terminates.
Feature set
ECS Fargate uses the concept of Task as its smallest unit of deployable resources. Each Task instance can have one or more containers. Users can specify the size of the Task (i.e. how many vCPUs and how much memory it can use), the network configuration of the Task, the data volumes available to the Task and the IAM role assigned to the Task.
A sample Task Definition follows:
{
  "family": "sample-fargate",
  "networkMode": "awsvpc",
  "containerDefinitions": [
    {
      "name": "fargate-app",
      "image": "nginx:latest",
      "portMappings": [
        {
          "containerPort": 80,
          "protocol": "tcp"
        }
      ]
    }
  ],
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "cpu": "256",
  "memory": "512"
}
After you’ve defined a Task, there are several ways to run it. The basic option is to run the Task manually. A manually triggered Task won’t be rescheduled once it finishes, which makes it suitable for batch workloads. Another option is to run the Task via a Service. A Service is a scheduler that runs Tasks and ensures the specified number of Tasks is running at any point in time; a Task will be rescheduled if it stops. In addition, users have the option to put the Service behind an AWS load balancer, and the load balancer will route traffic to each Task instance.
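For example, with the AWS CLI, the Task Definition above can be registered and then run either way; the cluster, subnet and security-group IDs below are placeholders:

# Register the Task Definition above
aws ecs register-task-definition --cli-input-json file://sample-fargate.json

# Option 1: run the Task once (it won't be rescheduled when it exits)
aws ecs run-task \
    --cluster default \
    --launch-type FARGATE \
    --task-definition sample-fargate \
    --network-configuration "awsvpcConfiguration={subnets=[subnet-0123abcd],securityGroups=[sg-0123abcd],assignPublicIp=ENABLED}" \
    --count 1

# Option 2: keep two copies of the Task running via a Service
aws ecs create-service \
    --cluster default \
    --service-name sample-service \
    --launch-type FARGATE \
    --task-definition sample-fargate \
    --network-configuration "awsvpcConfiguration={subnets=[subnet-0123abcd],securityGroups=[sg-0123abcd]}" \
    --desired-count 2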
In terms of networking, the only network mode Fargate supports is called AWSVPC. In this mode, each Task instance gets an Elastic Network Interface and is given a private IP address and an internal DNS hostname. Therefore, similar to OpenStack Zun, containers can be placed in the same network/VPC in which other cloud resources (e.g. EC2 instances) reside. Firewall features such as security groups are also available to containers. However, slightly different from Zun, the Elastic Network Interface of a Task is fully managed by ECS. That means it’s impossible to change the network configuration once the Task is created; for example, users can’t add or remove an Elastic Network Interface on a running Task.
ECS Fargate also supports the concept of Volumes in a Task. However, volume support is somewhat limited. A volume in ECS is not backed by cloud storage; instead, it appears to be just a data volume managed by the local Docker daemon. Moreover, it appears impossible to share a data volume between Tasks in Fargate mode. Therefore, it’s relatively hard to persist or share data within the cloud. In the future, this limitation might be mitigated if Fargate integrates with a cloud storage service, such as AWS Elastic Block Store (EBS) or Elastic File System (EFS).
In terms of orchestration, the Service concept provides basic orchestration support. Moreover, like OpenStack Zun, ECS is integrated with its platform’s orchestration service, AWS CloudFormation, making it possible to orchestrate containers together with other AWS resources.
Lastly, it’s worth mentioning that ECS Fargate is integrated with AWS CloudWatch. Basic metrics, such as the CPU and memory utilization of each Service, are available in CloudWatch. These metrics can be used to set alarms, autoscale the Service or for any other purpose.
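For instance, autoscaling a Service on these metrics is wired up through Application Auto Scaling; the resource IDs below are placeholders:

# Register the Service's desired count as a scalable target
aws application-autoscaling register-scalable-target \
    --service-namespace ecs \
    --scalable-dimension ecs:service:DesiredCount \
    --resource-id service/default/sample-service \
    --min-capacity 1 \
    --max-capacity 4

# Scale on average CPU utilization with a target-tracking policy
aws application-autoscaling put-scaling-policy \
    --service-namespace ecs \
    --scalable-dimension ecs:service:DesiredCount \
    --resource-id service/default/sample-service \
    --policy-name cpu-target-tracking \
    --policy-type TargetTrackingScaling \
    --target-tracking-scaling-policy-configuration '{"TargetValue":75.0,"PredefinedMetricSpecification":{"PredefinedMetricType":"ECSServiceAverageCPUUtilization"}}'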
Conclusion
We’ve gone over two clusterless container solutions: OpenStack Zun and AWS ECS Fargate. Both allow users to run containers without creating container hosts, and both offer seamless integration with their respective cloud platforms. The main difference is that Fargate is a proprietary technology while Zun is open-source software. Aside from this, ECS Fargate is more opinionated than OpenStack Zun about how applications are deployed: OpenStack Zun gives users finer control of individual containers, while ECS Fargate offers higher-level constructs for application management.
Of course, the choice between Fargate and Zun depends on your use case. If you’re already a public-cloud user on AWS, ECS Fargate might be the obvious choice. If you opt for a private-cloud solution, you can use OpenStack Zun to build your clusterless container cloud. However, if you’re looking for a solution that’s independent of the underlying infrastructure, you might opt for platform-level tools such as Kubernetes; the trade-off is that you then need the skill set to manage your own set of servers as well as the control plane of the orchestration platform. In the future, it might be possible to bridge Kubernetes to clusterless technologies so that you can skip provisioning a set of Kubernetes nodes and launch Kubernetes Pods on demand.
About the author
Hongbin Lu is currently a senior software engineer at Huawei Technologies. In the OpenStack community, he serves as the Project Team Lead (PTL) for the OpenStack Zun project. Previously, he was PTL for the OpenStack Magnum project, and he is an active contributor to a few other projects, including Kuryr. His expertise includes container-related technologies and the deployment and management of containerized applications.