Eleven years into the development and adoption of OpenStack, organizations with clouds of all sizes are continuing to grow their footprint and diversify the workloads they run. In this series of case studies, we talked to different users around the world and learned about the scale and growth of their OpenStack clouds and the future growth they are expecting.
Today, Nils Magnus, cloud architect at T-Systems, presents their OpenInfra use case and explains why their OpenStack deployment is growing so quickly.
What is the name of your organization?
Open Telekom Cloud operated by T-Systems International GmbH
What year did your organization launch its first OpenStack cloud?
2015
How has OpenStack transformed your organization?
OpenStack enabled us to become a leading public cloud provider that conforms to the special requirements of European legislation. Its open source approach made it possible to tailor our services to our clients, who require transparency and security while consuming reliable and scalable services.
T-Systems, which operates the Open Telekom Cloud for Deutsche Telekom, has undergone a substantial agile transformation of its processes, its offerings, and the way its staff and management work. The open way of OpenStack is very compatible with our way of working and thinking. Because we believe that #peoplemakeithappen, we clearly see a counterpart in the meritocratic and appreciative way the OpenStack community works.
What workloads are you running on OpenStack?
As a public cloud provider, we build our offering to clients on typical IaaS services. That includes OpenStack core services like Nova, Keystone, Neutron, Glance, Cinder, Swift, and many more. On top of that, we offer many convenience platform services like databases, Kubernetes frameworks, and AI services. In some cases we carefully adapt services to manage our vast scale. For example, we're able to host several thousand bare-metal servers in a single region, whereas classic setups seldom surpass a few hundred servers.
Public cloud providers have to be able to cater to the full range of their customers' needs. Our users host all types of applications in the Open Telekom Cloud, ranging from single web server instances to multi-petabyte scientific data setups for earth orbit missions, or telco-related services for hundreds of thousands of users.
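To make the building blocks above concrete, here is a minimal sketch of how a client could provision an instance against an OpenStack cloud such as the Open Telekom Cloud using the openstacksdk Python library. The cloud profile, image, flavor, and network names are placeholders for illustration, not actual Open Telekom Cloud catalog entries.

```python
# Minimal sketch: booting an instance through the OpenStack APIs (Keystone for
# auth, Glance for images, Nova for compute, Neutron for networking) using the
# openstacksdk client. All names below are placeholders.
import openstack

# Credentials, region, and endpoints come from a clouds.yaml entry;
# "otc" is a placeholder profile name.
conn = openstack.connect(cloud="otc")

image = conn.image.find_image("ubuntu-22.04")        # placeholder image name
flavor = conn.compute.find_flavor("s3.large.2")      # placeholder flavor name
network = conn.network.find_network("demo-network")  # placeholder network name

# Nova schedules the instance; a bare-metal flavor would be requested the same
# way through the compute API.
server = conn.compute.create_server(
    name="demo-instance",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.id, server.status)
```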
What is the scale of your OpenStack environment?
Distributed across two main regions with three availability zones each, our data centers currently host more than 6,000 servers and a similar number of other assets such as switches and security devices. Because we operate our own network infrastructure, single instances are seldom more than single-digit milliseconds of round-trip time apart from each other, regardless of their availability zone; the total uplink exceeds several hundred Gbit/s. We are particularly proud of our storage capacity: adding block storage and object storage together, we manage over 500 PB of data, and counting.
Why has your organization seen such significant growth over the last year?
There's clearly a demand in the market for ever-increasing compute, network, and storage capacity. The Open Telekom Cloud is large enough to satisfy even demanding scale requirements, yet flexible enough to map our experience and expertise to specific needs and markets. For some industries we host hybrid clouds that share the same OpenStack-based services and interfaces but are separated from the general public cloud.
What is next for your OpenStack deployment? Are you expecting similar growth over the upcoming years?
We expect continuous growth and demand. One major challenge is not only setting up infrastructure at scale, but maintaining it. These day-2 operations keep our teams busy, but we're very happy and proud that we haven't faced any substantial outages since the days of the notorious Spectre and Meltdown incidents in 2018. But we look even further: code-named "day-3 operations" is our research into improving ready-to-use platform services such as easy-to-manage containers with Kubernetes frameworks, AI platforms, and access to special-purpose hardware components like GPUs, FPGAs, and AI processors.
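As one illustration of the kind of ready-to-use platform service mentioned above, the sketch below requests a Kubernetes cluster through OpenStack's Magnum (container-infra) API with openstacksdk. This is a generic OpenStack example that assumes a Magnum endpoint and a pre-existing cluster template; the Open Telekom Cloud's own managed Kubernetes offering may be exposed through a different interface, and all names are placeholders.

```python
# Hypothetical sketch: requesting a managed Kubernetes cluster via OpenStack
# Magnum using openstacksdk. Assumes a Magnum endpoint and a pre-defined
# cluster template; the profile, template, and keypair names are placeholders.
import openstack

conn = openstack.connect(cloud="otc")  # placeholder cloud profile

template = conn.container_infrastructure_management.find_cluster_template(
    "k8s-template"  # placeholder cluster template name
)

cluster = conn.container_infrastructure_management.create_cluster(
    name="demo-k8s",
    cluster_template_id=template.id,
    master_count=1,
    node_count=3,
    keypair="demo-keypair",  # placeholder keypair name
)
print(cluster.id)
```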