Intel’s Manjeet Singh Bhatia shares this tutorial.


DevOps principles like continuous integration (CI) and continuous delivery (CD) are attracting considerable attention given their propensity to increase software development efficiency and facilitate collaboration between developers and IT, resulting in speedier building, testing and release of bug-free software. Many tools can be used to automate CI/CD pipelines for software over Git and Gerrit; we will talk about one of these automation tools in this article. Some of these DevOps principles can also be applied to hardware testing, since significant automation is needed to test hardware against software projects. CI/CD automation can also be used to test hardware against frequently changing software.

Extending single-root input/output virtualization (SR-IOV) to support traffic mirroring

Some of you may be familiar with single-root input/output virtualization (SR-IOV) for Ethernet, which allows a PCI device to appear as multiple separate virtual functions, each of which can be directly attached to a virtual machine as a virtual network interface card (VNIC) to achieve performance on par with a network interface card (NIC) on a physical machine. SR-IOV is being extended to support many new features, such as traffic mirroring. On an Ethernet device that supports traffic mirroring, packets received on virtual function devices can be mirrored to a specific endpoint. Usually endpoints are remote devices, but the device can also support probing within the same machine on a dedicated virtual function of a physical device. Traffic mirroring can be set up based on filters, such as probing traffic for virtual LAN (VLAN) ranges. While enabling this feature in OpenStack-based clouds, we set up a third-party CI job, which deploys an environment with the required hardware, runs integration API tests and reports the status of new patches submitted for review.

Setting up a third-party CI pipeline over a project

The OpenStack community introduced the concept of third-party CI to enable hardware vendors to test hardware-specific features over OpenStack projects in cases where test jobs cannot be hosted on infrastructure provided by the community due to special hardware needs.

Of the breadth of tools available to deploy CI/CD, we'll share our experiences using Zuul along with Docker Compose to build a CI pipeline. OpenStack's own CI runs on Zuul v3, and since we wanted to set up a test pipeline over an OpenStack project to test hardware, Zuul was an obvious choice. We used Docker Compose to set up the Zuul services. Zuul provides a basic docker-compose template; we used it as a reference and built a custom docker-compose.yaml to set up the Zuul services on a virtual machine.

Let’s start with the service containers that were deployed:

  • Zuul-web, the container that runs the web service, which can be used to view the history of prior builds, watch current builds and even live-stream the logs of a running job.
  • Zuul-scheduler, a service that monitors the Gerrit event stream and then talks to nodepool to get an available machine for testing.
  • Nodepool-launcher, the nodepool service that manages the availability of the machines used for testing.
  • Zuul-executor, the container that receives a job and a machine from zuul-scheduler and runs the actual tests on the nodepool-provided machine via SSH.
  • Mariadb, the database service used by the web service to store all build-related data.
  • A logs container that records all the tasks executed by the pipeline.
  • A container node available for running tests on.

Since we were testing an open-source project against a remote Gerrit, we didn't set up a local Gerrit service; if you need one, it can be added by defining its configuration in the docker-compose template. You can use an example from the Zuul repo. Below are the main files you need to modify before bringing up the containers with docker-compose.

docker-compose.yaml: This file defines all the Zuul services you'd like to run; you can use the default or customize it to your CI requirements. It also contains the volume mappings for each service, defining which directory on the VM maps to which path in the container.
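A trimmed sketch of what such a compose file might look like follows; the image tags, volume paths, credentials and the reduced service list (the executor, nodepool-launcher and logs containers are omitted here) are assumptions based on the Zuul example compose file, not our exact configuration:

```yaml
# Illustrative docker-compose.yaml sketch; image names, paths and
# credentials are placeholders based on the Zuul example template.
version: "3"
services:
  mariadb:
    image: mariadb:10
    environment:
      MYSQL_ROOT_PASSWORD: rootpassword
      MYSQL_DATABASE: zuul
      MYSQL_USER: zuul
      MYSQL_PASSWORD: secret
  scheduler:
    image: zuul/zuul-scheduler
    volumes:
      - ./etc_zuul:/etc/zuul:ro    # zuul.conf, main.yaml from the VM
      - ./ssh:/var/ssh:ro          # private key used to talk to Gerrit
    depends_on:
      - mariadb
  web:
    image: zuul/zuul-web
    ports:
      - "9000:9000"                # build history UI and live log streaming
    volumes:
      - ./etc_zuul:/etc/zuul:ro
    depends_on:
      - scheduler
```

The volume mappings are the important part: they are how the zuul.conf, main.yaml and nodepool.yaml files discussed below get into the containers.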

zuul.conf: This config file defines connection information such as the Gerrit URL for the project being tested, the MySQL connection, and the repositories where the CI jobs are defined.

The sample file will look like this:





[connection "gerrit"]
driver=gerrit
server="gerrit server url"
sshkey="path to private key used"
user="gerrit user name"
# password=secret

# This section defines the connection for the repos that contain zuul-config
# and zuul-jobs. In our case both live under the same GitHub namespace, so
# baseurl points at that namespace.

[connection ""]
driver=git
baseurl="github namespace where project jobs are defined"

# Database connection used by the web service to store build results; the
# dburi below is a placeholder for the mariadb service credentials.
[connection "mysql"]
driver=sql
dburi="mysql+pymysql://zuul:secret@mariadb/zuul"



main.yaml

The main.yaml file lists the projects to be monitored in the untrusted-projects section of this config, along with the zuul-config and zuul-jobs repos explained above.

- tenant:
    name: tenant1
    source:
      gerrit:
        untrusted-projects:
          - openstack-dev/ci-sandbox
          - openstack/anyprojectb
      # this connection name must match the git connection in zuul.conf
      git:
        config-projects:
          - zuul-config
        untrusted-projects:
          - zuul-jobs:
              include:
                - job

You can define multiple tenants in this file if you want to monitor multiple projects under separate tenants. You can also use one tenant to define multiple projects, as with the two projects we are monitoring above, ci-sandbox and anyprojectb.
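A hypothetical two-tenant layout might look like the sketch below; the tenant names and the split of projects between them are illustrative, not taken from our actual config:

```yaml
# Illustrative multi-tenant main.yaml; tenant and project assignments
# here are made up for the example.
- tenant:
    name: tenant1
    source:
      gerrit:
        untrusted-projects:
          - openstack-dev/ci-sandbox

- tenant:
    name: tenant2
    source:
      gerrit:
        untrusted-projects:
          - openstack/anyprojectb
```

Each tenant gets its own isolated view of pipelines and jobs, which is useful when unrelated projects should not share job definitions.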

nodepool.yaml

Next is the nodepool config, configured to schedule pools of machines for actual testing. You have a couple of options: you can use a cloud in nodepool config, or a static driver to use bare metal machines directly. For the purposes of this article, we will use the sample nodepool config to add bare metal machines or static virtual machines. In our example, we will assume we have a pool of two machines on which we would like to run test jobs, which will look like this:

zookeeper-servers:
  - host: zk

labels:
  - name: ubuntu-xenial

providers:
  - name: static-vms
    driver: static
    pools:
      - name: main
        nodes:
          - name: "ip of machine"
            labels: ubuntu-xenial
            host-key: "Host Key"
            username: ubuntu
          - name: "ip of machine"
            labels: ubuntu-xenial
            host-key: "Host Key"
            username: ubuntu

Once you have configured nodepool, zuul and so on, you will need to define the zuul-config configuration. The zuul-config project on the open infrastructure Git hosting provides the basic template. The projects.yaml in zuul.d needs to be configured for each project. Because we are monitoring two projects, ci-sandbox and anyprojectb, this config will look like this:


- project:
    name: ^.*$
    check:
      jobs: []
    gate:
      jobs: []

- project:
    name: openstack-dev/ci-sandbox
    check:
      jobs:
        - run-test-command

- project:
    name: openstack/anyprojectb
    check:
      jobs:
        - noop

Note: The jobs shown here are jobs already defined in zuul-jobs. We used the basic template from zuul-jobs and defined the playbooks for these jobs; the sample job definitions can be found in that same repo. To include new jobs, define them in zuul-jobs.
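As a rough sketch, a job definition in zuul-jobs typically ties a job name to an Ansible playbook and a node label; the job name, playbook path and label below are illustrative, though run-test-command matches the job referenced in projects.yaml above:

```yaml
# Hypothetical job definition in zuul-jobs (zuul.d/jobs.yaml); the
# playbook path and nodeset label are illustrative placeholders.
- job:
    name: run-test-command
    parent: base
    description: Run the integration test suite on a nodepool node.
    run: playbooks/run-test-command/run.yaml
    nodeset:
      nodes:
        - name: test-node
          label: ubuntu-xenial
```

The label must match one defined in nodepool.yaml, which is how the job ends up on one of the static machines in the pool.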

The last piece of configuration is the pipeline definition, which defines what to post when a test job succeeds or fails. A sample pipeline config looks like this, where Verified +1 indicates that a test job has succeeded and Verified -1 that it has failed:

pipelines.yaml in zuul-config:

- pipeline:
    name: check
    description: |
      Newly uploaded patchsets enter this pipeline to receive an
      initial +/-1 Verified vote.
    manager: independent
    success-message: Build succeeded (check pipeline).
    require:
      gerrit:
        open: True
        current-patchset: True
    trigger:
      gerrit:
        - event: patchset-created
        - event: change-restored
        - event: comment-added
          comment: (?i)^(Patch Set [0-9]+:)?( [\w\\+-]*)*(\n\n)?\s*recheck
    success:
      gerrit:
        Verified: 1
    failure:
      gerrit:
        Verified: -1

Through our experiences, we have found that Zuul is a pretty slick tool to set up CI pipelines, as it offers considerable automation.

Get involved

Learn more about new SR-IOV features enabled in OpenStack and how to set up continuous integration test pipelines by joining Intel and AT&T at the upcoming Open Infrastructure Summit in Denver for a session titled “Unleashing the Power of Fabric: Orchestrating New Performance Features for SR-IOV Virtual Network Functions” on Monday, April 29, at 3:50 p.m.

Catch up with Manjeet Singh Bhatia of Intel, or Munish Mehan and Deepak Tiwari of AT&T at the Open Infrastructure Summit to learn more about hardware features for Virtual Network Functions (VNFs).