Growing healthy container ops in a siloed company isn’t easy — but they are in fine form at athenahealth.
Superuser TV talked to Ryan Wallner and Kevin Scheunemann, both on the technical staff at athenahealth, at the recent OpenStack Summit Boston about how they got there.
How did you get started deploying OpenStack on containers?
Scheunemann: We initially deployed OpenStack using Mirantis Fuel and found some limitations in how the network was deployed. So we started looking at other options, and Kolla seemed interesting because it was basically network agnostic: you could deploy your own network any way you wanted, put the containers on top and run the applications on OpenStack.
Wallner: As a company we’re moving toward containerizing a lot of things, not just infrastructure but applications, too. We have this grand vision for everything being containerized and even running OpenStack on other container systems.
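To make the "network agnostic" point concrete: Kolla only needs to be told which interfaces and virtual IP to use, and how the underlay is built is left entirely to you. A minimal sketch of the relevant kolla-ansible globals.yml settings, with placeholder interface names, release and addresses rather than athenahealth's actual values:

```yaml
# /etc/kolla/globals.yml -- minimal sketch with placeholder values
kolla_base_distro: "centos"
openstack_release: "ocata"

# Kolla just binds to whatever interfaces you hand it; the underlay
# (VLANs, spine-leaf, routing on the host) is built separately.
network_interface: "bond0"            # API/management traffic
neutron_external_interface: "bond1"   # provider/external networks
kolla_internal_vip_address: "10.10.0.250"
```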
How did you do it? What’s your high-level architecture?
Wallner: We started off experimenting with Kolla; we wanted to get a sense of how it's used to deploy, what the Docker images look like, how we might want to modify it, and so on. We run OpenStack on a bunch of Dell servers on bare metal, so we have a lot of automation around deploying those servers. We were already using Ansible extensively to get those servers ready for OpenStack, so Ansible was a really good fit when we chose Kolla. After testing with Kolla for a while, we switched to deploying it on bare metal. Kolla gave us a lot of flexibility: it's pretty much bare metal and containers running OpenStack, with all of our developer workloads running on OpenStack.
Scheunemann: We started with plain old VLAN networking, and that was great at first. When it was time to scale up, we needed more of a spine-leaf architecture, and most of the stock deployers don't support that. That's where we started working with containers.
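Kolla-Ansible drives everything from a standard Ansible inventory, which is why it slotted into the automation the team already had for preparing bare-metal servers. A heavily simplified sketch of a multinode inventory (kolla-ansible ships an INI-format example with many more service-level groups; the hostnames and counts here are hypothetical):

```yaml
# multinode.yml -- simplified sketch; the real kolla-ansible inventory
# maps many per-service groups onto these top-level groups.
all:
  children:
    control:
      hosts:
        control[01:03]:
    network:
      hosts:
        network01:
    compute:
      hosts:
        compute[01:20]:
    storage:
      hosts:
        storage01:
    monitoring:
      hosts:
        control01:
```

With an inventory and globals.yml in place, the deployment itself comes down to kolla-ansible's bootstrap-servers, prechecks and deploy actions run against that inventory.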
What were some of the challenges?
Wallner: We moved from flat VLAN to a spine-leaf architecture in the last few months, and we wanted to move all the routing to the hosts… Kolla was flexible, but it was also "opinionated" in some ways: the way Ansible reads IP addresses didn't necessarily work with the way we put IP addresses on the loopbacks. There were a number of challenges around getting the fabric to work… Once we figured out how it worked, it was nice that Kolla was flexible.
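The friction here comes from kolla-ansible deriving bind addresses from an interface's Ansible facts, which doesn't line up with a routed fabric where each host's address sits on a loopback. One hedged workaround, assuming kolla-ansible's api_interface_address variable is available (names and defaults vary by release), is to pin the address per host rather than letting it be read from interface facts:

```yaml
# inventory/host_vars/control01.yml -- hypothetical per-host override.
# Pin the routed address that lives on the loopback instead of letting
# kolla-ansible derive it from the facts of api_interface.
api_interface_address: 10.10.0.11
```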
Scheunemann: We had the technical side and the process side. Coming from a very siloed company, we have a whole dedicated networking team, and bringing them on board to make sure that their vision and ours are both realized in the network for the infrastructure has been helpful. It's also a challenge, because they do things differently. We learn from each other, and hopefully we can grow and become one infrastructure.
Catch the entire eight-minute interview below for more on what they’re planning in the future.