This article is part of a series of blog posts published by the OpenInfra Edge Computing Group to address some of the thornier aspects of edge computing as it enters the mainstream adoption cycle.


This article is the continuation of the OpenInfra Edge Computing Group’s discussions at the PTG in April 2022 about the challenges and practices of bringing edge use cases into production.

If you haven’t yet, please read the segment about ‘Day 0’ first!

‘Day 1’ – Deployment and Configuration

The Day 1 discussions ranged widely: networking, security, hardware and the logistics of getting systems installed on site were all mentioned. Networking was the major theme as it relates to deploying edge systems in the field. This is not surprising, since edge sites need to be both set up and connected to be fully operational. Bootstrapping a new site can be challenging, and the difficulty is compounded by confusion around the terminology of transport and networking. Disclaimer: these two terms do not refer to the same thing! Wireless connectivity is becoming more ubiquitous, with fixed wireless (over 5G) now being the new last-mile solution. High availability (HA) is still a concern, particularly at mission-critical sites, so some routers now support two SIM cards in case there are two providers in the area. Of course there was a debate about using fixed IP addresses versus having a DHCP pool with a configured lease time. The core internet still mainly uses IPv4, however most large companies have their own DNS and IPAM setups. There is a clear need for a standardized solution across the industry.

Any time you bring up edge, discussions of security challenges are unavoidable, since edge sites have quite a few! For example, what happens if someone takes the whole box that is set up with a “call home” function from the edge location? Will it call home if you plug it in somewhere else? (For those of you worried about this potential vulnerability, the answer is ’No’!)
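
One way to picture that safeguard is a device that only phones home when its network surroundings match what was recorded at provisioning time. The sketch below is purely conceptual; the fingerprinting scheme and all names are illustrative assumptions, not any vendor’s actual implementation.

```python
# Conceptual sketch of a "call home" safeguard: the device only phones home
# when the network environment matches what was recorded at install time.
# The fingerprinting scheme and names are hypothetical illustrations.
import hashlib

def network_fingerprint(gateway_mac: str, uplink_subnet: str) -> str:
    """Hash the observable network identity the device was provisioned with."""
    return hashlib.sha256(f"{gateway_mac}|{uplink_subnet}".encode()).hexdigest()

def should_call_home(expected_fp: str, gateway_mac: str, uplink_subnet: str) -> bool:
    """Refuse to call home if the device appears to have been moved."""
    return network_fingerprint(gateway_mac, uplink_subnet) == expected_fp

provisioned = network_fingerprint("aa:bb:cc:dd:ee:ff", "10.0.42.0/24")
# Plugged in at the original site: OK to call home.
print(should_call_home(provisioned, "aa:bb:cc:dd:ee:ff", "10.0.42.0/24"))   # True
# Stolen box plugged in somewhere else: it stays silent.
print(should_call_home(provisioned, "11:22:33:44:55:66", "192.168.1.0/24"))  # False
```

Real devices would of course combine this with hardware attestation and signed credentials; the point is simply that “plug it in somewhere else” changes the observable identity, so the device does not call home.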

One ‘Day 2’-related thought to share here is the ‘direction of control’. The edge site is not supposed to connect back to the centralized system through the management interface; the edge should be treated as a Demilitarized Zone (DMZ), or in other words, apply the ‘Zero Trust’ concept at all times! We examined the best options for connecting an edge site to the central or regional cloud. On a related note, a service mesh was felt not to be the right approach for the edge, except for smaller sites. However, if you run a service mesh at a higher level, in a geographically distributed fashion, it becomes an interesting idea to experiment with.
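
The ‘direction of control’ idea can be sketched as a default-deny connection policy: the edge may dial out to the central cloud, but nothing initiates connections toward the edge, and the management interface is never used as a path in. The policy model and names below are illustrative assumptions, not a real firewall API.

```python
# Conceptual sketch of the 'direction of control' rule: connections are only
# accepted when the edge site initiates them toward the central cloud, never
# the other way around, and never over the management interface.
# This is an illustration of the zero-trust idea, not a real firewall API.

ALLOWED_FLOWS = {("edge", "central")}  # permitted (initiator, target) pairs

def allow_connection(initiator: str, target: str, interface: str) -> bool:
    """Zero-trust default deny: drop anything not explicitly allowed."""
    if interface == "management":
        return False  # the edge never connects back via management
    return (initiator, target) in ALLOWED_FLOWS

print(allow_connection("edge", "central", "wan"))         # True: edge dials out
print(allow_connection("central", "edge", "wan"))         # False: inbound denied
print(allow_connection("edge", "central", "management"))  # False: wrong interface
```

In practice this shape maps onto outbound-only VPN tunnels or firewall rules at the edge site; the key design choice is that everything not explicitly allowed is denied.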

The group then moved on to a conversation about the practicalities and the challenges that infrastructure software needs to solve. In a single geographically distributed OpenStack deployment with availability zones (AZs), anti-affinity rules do not work across AZs. Even with multiple OpenStack instances, which is not a popular model, you still need to address the scheduling challenge. OpenStack also needs to evolve to identify the state of an edge site and determine what needs to be done if it is not running correctly; without that capability it might rebuild a site too often, or not at all. These challenges also exist for Kubernetes, since it too was designed and built for large data centers. Interestingly, human interaction is still preferred for the last steps of preparation, after which the system connects to the central site through a VPN to go live.
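
To see why host-level anti-affinity falls short across AZs, consider a toy scheduler that only checks “not the same host”. The code below is an illustration of the gap, not OpenStack code; the host and AZ names are made up.

```python
# Illustrative sketch (not OpenStack code) of why host-level anti-affinity
# does not guarantee AZ-level spreading: the filter only compares host names,
# so two instances can satisfy it while sharing one availability zone.
from typing import NamedTuple, Optional

class Host(NamedTuple):
    name: str
    az: str

HOSTS = [Host("h1", "az-east"), Host("h2", "az-east"), Host("h3", "az-west")]

def schedule_anti_affinity(group_hosts: set, hosts=HOSTS) -> Optional[Host]:
    """Pick the first host not already used by the server group (host scope only)."""
    for h in hosts:
        if h.name not in group_hosts:
            return h
    return None

used, placements = set(), []
for _ in range(2):
    h = schedule_anti_affinity(used)
    used.add(h.name)
    placements.append(h)

# Different hosts, yet both instances end up in az-east.
print([f"{h.name}/{h.az}" for h in placements])
```

A geo-distributed edge deployment would need the filter to compare AZs (or sites) instead of hosts, which is exactly the scheduling evolution the group discussed.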

Finally, a discussion about hardware options and challenges, for instance with ARM devices, closed the second day of edge discussions at the PTG. Some of these devices have different bootstrapping mechanisms, while others follow more standard processes. Diversity in hardware options points back to similar software challenges: how do you ensure you have the right set of libraries and images, how can the variations of different operating systems accommodate different hardware architectures, and how can you include all of this in your automation?
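
The image-selection part of that automation problem can be sketched as a lookup keyed on operating system and architecture. The catalog, registry URL and tag names below are made up for illustration; a real pipeline might resolve this through multi-arch image manifests instead.

```python
# Hedged sketch of one way automation might pick the right image for a given
# OS / hardware architecture pair. The catalog and its names are illustrative
# assumptions, not a real registry layout.
IMAGE_CATALOG = {
    ("ubuntu", "x86_64"):  "registry.example.com/edge-base:ubuntu-amd64",
    ("ubuntu", "aarch64"): "registry.example.com/edge-base:ubuntu-arm64",
    ("centos", "x86_64"):  "registry.example.com/edge-base:centos-amd64",
}

def select_image(os_name: str, arch: str) -> str:
    """Resolve the image for a site's OS and CPU architecture, failing loudly
    when no image was built for that combination."""
    try:
        return IMAGE_CATALOG[(os_name, arch)]
    except KeyError:
        raise ValueError(f"no image built for {os_name} on {arch}") from None

print(select_image("ubuntu", "aarch64"))  # the arm64 build for an ARM device
```

Failing loudly on an unbuilt combination matters at the edge: shipping the wrong architecture to a remote site is far more expensive to fix than catching it in the pipeline.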

Stay tuned for the summary of the ‘Day 2‘ discussions!

If you missed the event and would like to listen to the sessions you can access the recordings on the OpenInfra Edge Computing Group wiki. You can also find notes on the etherpad that we used during the event. The group has a list of follow-up discussions and presentations scheduled already! Check out our lineup and join our weekly meetings on Mondays to get involved!

About OpenInfra Edge Computing Group:

The Edge Computing Group is a working group composed of architects and engineers across large enterprises, telecoms and technology vendors working to define and advance edge cloud computing. The focus is on open infrastructure technologies, not exclusively OpenStack.

Get Involved and Join Our Discussions: