OpenStack’s big tent keeps expanding: two of the newest projects are Dragonflow and Kuryr.
We’ll give you a quick overview of them and find out how you can get involved. Superuser talked to Gal Sagie, open source software architect at Huawei and project team lead (PTL) for Kuryr.
Kuryr
Kuryr is a Docker network plugin that uses Neutron to provide networking services to Docker containers. It's available standalone and in containerized form, making it possible to use common Neutron plugins to plug containers into Neutron networking. Read more about the name and how it got started here on Superuser.
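To make the idea concrete, here is a minimal, hypothetical sketch of what a plugin like Kuryr does: translate a Docker libnetwork-style "create network" request into a Neutron API call. `NeutronStub` and `handle_docker_create_network` are illustrative names standing in for a real Neutron client and Kuryr's actual internals.

```python
# Hypothetical sketch: mapping a Docker network request onto Neutron.
# NeutronStub stands in for a real Neutron client (e.g. python-neutronclient).

class NeutronStub:
    """Records network-creation requests the way a Neutron client would."""
    def __init__(self):
        self.networks = []

    def create_network(self, body):
        network = dict(body["network"], id=len(self.networks) + 1)
        self.networks.append(network)
        return {"network": network}


def handle_docker_create_network(neutron, docker_request):
    """Map a libnetwork-style request onto a Neutron network."""
    name = docker_request["NetworkID"]
    result = neutron.create_network({"network": {"name": name}})
    # Containers plugged into this network now share Neutron's
    # infrastructure (security groups, routers, floating IPs) with VMs.
    return result["network"]


neutron = NeutronStub()
net = handle_docker_create_network(neutron, {"NetworkID": "demo-net"})
print(net["name"])  # -> demo-net
```

The point of the translation layer is that Docker keeps its own networking model while Neutron remains the single source of truth underneath.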
In the mailing list announcement, you wrote that "we are currently facing some interesting challenges and times." What are some of the specific challenges people are facing or trying to solve with Kuryr?
Running container networking workloads alongside OpenStack environments is currently not easy: it is inefficient, limited, and, orchestration-wise, not abstracted in what we believe is the OpenStack way.
Kuryr tries to solve all of these issues by leveraging the flexibility that OpenStack networking already provides and bridging container orchestration engines and their models to it. Users get a single networking infrastructure in their cloud for VMs, containers, nested containers and bare metal.
Some of the challenges in achieving this are bridging between the various networking models, like the Kubernetes model, and Neutron with OpenStack's advanced networking services, and finding the right approach users want for running their application networks.
This requires work in both communities (containers and OpenStack), where the end goal is a single networking infrastructure to manage and orchestrate.
Following your announcement, has anyone come forward with problems that the project is working to solve?
Kuryr's diverse community is something I am personally very proud of; we have many companies working together on this, including Midokura, Huawei, IBM, PLUMGrid and others. I assume that most are involved because they genuinely see Kuryr as a solution they would like to deploy.
I hope that, since we are getting close to being release-ready and will receive more visibility at the upcoming Summit, more and more users will see the benefits and solutions Kuryr brings and consider using it in their environments.
I hope this will lead to more use cases and requirements and together as a community we can simplify networking for containers and OpenStack.
What new features/enhancements/fixes are expected for the Mitaka release for Kuryr?
Kuryr already has full support for Docker and Docker Swarm, and we are working on full integration with Kubernetes, integration with Magnum, and support for nested-container use cases.
For all of these topics we have some interesting designs, and we are looking not only at supporting the current state of container networking capabilities but also at how to enhance them with Kuryr, OpenStack networking and advanced networking services.
Dragonflow
Dragonflow is a distributed control plane implementation of Neutron. Its mission is to implement advanced networking services driven by the Neutron API and running on a distributed control plane. It’s designed to support containers networking and large scale production loads.
Tell me a bit more about Dragonflow’s mission.
Dragonflow aims to support large-scale deployments and was initially started in response to scale issues found in production clouds. Dragonflow is a distributed software-defined networking (SDN) controller for OpenStack supporting distributed switching, routing, dynamic host configuration protocol (DHCP) and more.
In Dragonflow, we took a new approach to network virtualization that fully distributes the virtual services logic and relies on synchronizing policy and virtual topology to the compute nodes using a pluggable key/value database (Redis, ZooKeeper, etcd, RAMCloud…).
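The synchronization idea can be sketched as follows. This is not Dragonflow's actual code; `KeyValueStore` stands in for a pluggable backend like Redis or etcd, and `LocalController` for the per-compute-node controller that keeps a local copy of the virtual topology.

```python
# Minimal sketch of the distributed control-plane idea: topology lives in
# a key/value store, and each compute node caches it locally, updated
# via publish/subscribe notifications.

class KeyValueStore:
    """Stands in for Redis/ZooKeeper/etcd; notifies subscribers on writes."""
    def __init__(self):
        self.data = {}
        self.subscribers = []

    def put(self, key, value):
        self.data[key] = value
        for callback in self.subscribers:
            callback(key, value)

    def subscribe(self, callback):
        self.subscribers.append(callback)


class LocalController:
    """Per-compute-node controller holding a local topology cache."""
    def __init__(self, store):
        self.cache = dict(store.data)      # initial full sync
        store.subscribe(self.on_update)    # then incremental updates

    def on_update(self, key, value):
        self.cache[key] = value


store = KeyValueStore()
store.put("network/1", {"name": "blue"})
node = LocalController(store)               # node syncs existing topology
store.put("port/7", {"network": "network/1"})  # later changes are pushed
print(sorted(node.cache))  # -> ['network/1', 'port/7']
```

Because each node answers from its local cache, services like DHCP and routing can be handled at the edge without a round trip to a central controller.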
We tried to reuse production-ready components from other areas and focus mainly on distributed network services and networking scale and performance.
For Mitaka, we have a few users trying to deploy Dragonflow in production, and we are focused on production-ready features (database consistency, publish/subscribe and local controller reliability).
If you want to learn more about Dragonflow, see: https://wiki.openstack.org/wiki/Dragonflow
What contributions do you need most from the community?
We would like to grow the community and learn about other cloud deployments that are facing scale issues and solve those challenges together.
Get involved!
Use Ask OpenStack for general questions.
For roadmap or development issues, subscribe to the OpenStack development mailing list and use the relevant tag: [kuryr] or [dragonflow].
The Kuryr team holds [meetings](https://wiki.openstack.org/wiki/Meetings/Kuryr) in #openstack-meeting-4 (IRC) on alternating weeks: on odd weeks, Tuesdays at 03:00 UTC; on even weeks, Mondays at 15:00 UTC.
Dragonflow meets weekly on Mondays at 09:00 UTC in #openstack-meeting-4 (IRC).
Cover Photo // CC BY NC