The OpenInfra Edge Computing Group will be publishing a series of blog posts to address some of the thornier aspects of edge computing as it enters the mainstream adoption cycle.


Edge computing has been slowly gaining real adoption for a few years, yet it still seems to top every hot-technology debate list. So let us just agree to disagree. Not to add fuel to the fire, but this article will be no different: it will ask questions and share different viewpoints.

No matter which definition of Edge you choose, hard questions remain about the backbone hardware and software infrastructure that supports it. There is much more to edge computing than the competition over whose edge is smaller and whose latency requirements are tougher! We can’t help but wonder whether we talk so much about the Edge because it really is so exciting, or because getting from the core to the edge and back is a long-solved problem. After all, we have had distributed systems since at least the 1980s. But do they really do the trick?

Let us debate this for a bit by stating that distributed systems might serve as a foundation for edge infrastructures, but they will not hold up if we use them as they were originally conceived.

Distributed systems in the core mean that the components of the system are located on separate compute resources, connected over a WAN or LAN, and communicate with each other through messages, or a message bus, to use the more technical terminology. The key to the operation of these systems is interaction, and their critical characteristics include concurrency, the lack of a global clock, and components that can handle failures independently. This seems to sum up your standard end-to-end edge infrastructure. So what is different about edge computing that keeps us from just reusing all the good old, tried-and-true distributed systems concepts and tools?
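To make those characteristics concrete, here is a minimal sketch, not tied to any specific product, of components that share no global clock, communicate only through a message bus, and handle their own failures independently. All names here (`sensor`, `collector`, `edge-a`) are illustrative, and an in-process queue stands in for a real bus.

```python
import queue
import threading
import time

bus = queue.Queue()  # stands in for a real message bus (AMQP, Kafka, ...)

def sensor(component_id: str, readings: list) -> None:
    """Publish readings; contain local failures instead of crashing the system."""
    for value in readings:
        try:
            if value < 0:
                raise ValueError(f"bad reading: {value}")
            # Timestamps are local and monotonic: there is no global clock.
            bus.put({"from": component_id, "ts": time.monotonic(), "value": value})
        except ValueError as err:
            # The component handles its own failure; the rest of the system
            # only ever sees messages, never the exception.
            bus.put({"from": component_id, "error": str(err)})

def collector(expected: int) -> list:
    """Consume messages; arrival order depends on delivery, not on wall clocks."""
    received = []
    for _ in range(expected):
        received.append(bus.get())
    return received

t1 = threading.Thread(target=sensor, args=("edge-a", [1, -2, 3]))
t2 = threading.Thread(target=sensor, args=("edge-b", [4]))
t1.start(); t2.start()
messages = collector(expected=4)
t1.join(); t2.join()
print(sum(1 for m in messages if "error" in m))  # exactly one contained failure
```

The point of the sketch is the shape, not the code: interaction happens only through messages, and one component’s failure is isolated rather than propagated.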

To understand the conundrums better, before we take a deeper dive into the maze of Edge infrastructures, let’s look at distributed systems from the perspective of geographical distribution. We need to revisit well-known requirements such as scalability, reliability, adaptability and consistency of the information that is manipulated throughout the system. As the cards get reshuffled, we find ourselves focusing more and more on connectivity and on the data that needs to be stored, managed and transferred to make the system work holistically. How do you distribute the key services of a system when the servers are no longer in the same room, maybe not even in the same city or on the same continent? How do you keep all desired functionality when the connection breaks, without sacrificing too much of the limited resources available at a small site?

The answer lies in the details. Unlike traditional distributed systems, the Edge is geographically distributed on a massive scale. So how do you connect all the components in a way that makes the management task less than Herculean? How do you build, manage and orchestrate an Edge infrastructure? Is it loosely or tightly coupled, or maybe a combination of both? With that we have arrived at the Distributed Control Plane Paradox of Edge computing infrastructures: to fulfill edge use case requirements we need autonomy and centralization at the same time, along with reduced management and orchestration overhead, all under often significant resource and environmental constraints. As an architectural principle, the choice between tight and loose coupling has to be made early; it is not something that can be addressed later in the design of the Edge ecosystem.

In light of this, distributed infrastructure management for the Edge redefines where the tight and loose control loops are implemented. At the Edge site itself, tight control loops are required to provide local autonomy and ongoing control. Outside the individual sites, management operations need to be decoupled from immediate control functions. This is not as easy as it looks, because the split also defines the information exchange mechanism. That, in turn, makes the centralized system a proxy and synchronization layer for the distributed control plane: the centralized systems are more about influencing and synchronizing than about maintaining control. Ok, so maybe we just federate the entire ecosystem and we’re done? Probably not…
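The tight/loose split described above can be sketched in a few lines. This is a hedged illustration, not a real orchestrator: the class and method names (`CentralPlane`, `EdgeSite`, `loose_sync`, `tight_loop`) are hypothetical. The tight loop always runs locally against the last-known desired state, while the loose loop merely synchronizes with the central system when connectivity allows, so the site stays autonomous through a WAN outage.

```python
class CentralPlane:
    """Central system acting as a proxy/synchronizer, not a direct controller."""
    def __init__(self):
        self.desired_state = {"replicas": 2}
        self.reachable = True

    def fetch_desired(self) -> dict:
        if not self.reachable:
            raise ConnectionError("WAN link down")
        return dict(self.desired_state)

class EdgeSite:
    def __init__(self, central: CentralPlane):
        self.central = central
        self.cached_desired = {"replicas": 1}  # last-known desired state
        self.actual = {"replicas": 0}

    def loose_sync(self) -> None:
        """Loose control loop: best-effort sync with the center; failure is tolerated."""
        try:
            self.cached_desired = self.central.fetch_desired()
        except ConnectionError:
            pass  # stay autonomous on the cached desired state

    def tight_loop(self) -> None:
        """Tight control loop: always converges actual state toward cached desired."""
        if self.actual["replicas"] < self.cached_desired["replicas"]:
            self.actual["replicas"] += 1

central = CentralPlane()
site = EdgeSite(central)

site.loose_sync(); site.tight_loop()  # connected: picks up replicas=2, starts one
central.reachable = False             # WAN breaks
site.loose_sync(); site.tight_loop()  # still converging on the cached state
print(site.actual)                    # prints {'replicas': 2}
```

Note the design choice this encodes: the central plane never drives the site step by step; it only publishes desired state, and the site’s own tight loop does the ongoing control, connection or no connection.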

There is still a need, and in most cases a desire, to control and manage the edge locations in a consistent manner. It is this consistent management approach that turns an Edge architecture into a distributed infrastructure rather than a collection of disjointed sites, even when the hardware, locations and workloads differ. This raises the possibility of a system with a single-pane-of-glass management interface to make it all look easy. But there’s more… we haven’t yet defined what we mean by ‘autonomy’.

In the next article, we will dig deeper into the challenges and potential solutions. For example, does it make sense to just do a traditional hierarchical federation, or can we mix loosely and tightly coupled services within one geographically distributed edge system? We will also analyze mechanisms that distributed systems have relied on for a long time, such as NTP, LDAP and DNS. We invite you to join our discussions and further these ideas during our weekly calls or on the edge-computing mailing list. If you would like to know more about the OpenInfra Edge Computing Group, visit our wiki page.