It seems like just yesterday, but here we are: 2021 has come and gone. It was a hectic and unusual year for most of us, but one thing hasn’t changed: we haven’t stopped exploring the twists and turns of the edge computing maze. As telecom and other industries ramp up their production deployments of 5G and yet more exciting and creative use cases, it is obvious that we still haven’t figured out even the half of it!
The OpenInfra Edge Computing Group finished the year strong as we covered many subjects, from deployment and automation to gaining expertise in the arcane corners of networking (who knew that edge computing requires advanced knowledge of WAN networking?). All of this knowledge and tooling is being put to use untangling the challenges and highlighting the solutions needed to build flexible, robust and manageable open infrastructure for edge use cases. Here are just a few highlights of our working group’s activities at the end of this past year.
Automation – Digital Rebar
Automation was a major theme this year; after all, it is an essential component of edge infrastructures, or at least it is supposed to be. Since a single deployment can span many geographic locations, some of which might be hard or expensive to reach, it is important to have a reasonable way to perform fixes and maintenance actions. To reduce complexity in such environments, we need to look into how we can eliminate manual steps and treat infrastructure more as code.
As important as automation is, there is still little collaboration on it in the industry, which creates additional challenges in making solutions portable and shareable. During this session we not only discussed issues and requirements but also looked into possible solutions. Rob Hirschfeld, Founder/CEO of RackN, joined us to discuss some of the work he has done in this area. He shared details about his company’s solution, Digital Rebar, and how it can be used to automate infrastructure lifecycles. Rob talked about the pipeline concept at the heart of the tool, where you define steps and build them into a workflow that can manage both cloud infrastructure, through Terraform, and bare metal.
The key takeaways from the session:
* Reusability and repeatability are key to automating infrastructure management
* Heterogeneity needs to be isolated from automation
* More industry collaboration is needed to build out solutions that are standard and portable
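To make the pipeline idea concrete, here is a minimal, hypothetical sketch of steps composed into a reusable workflow. This is purely illustrative Python and not Digital Rebar’s actual API (its workflows are defined declaratively); all step names here are made up.

```python
# Hypothetical sketch of the "pipeline of steps" concept discussed above.
# Not Digital Rebar's real interface; names are invented for illustration.
from typing import Callable, Dict, List

Step = Callable[[Dict], Dict]

def make_pipeline(steps: List[Step]) -> Step:
    """Compose independent steps into one repeatable workflow."""
    def run(state: Dict) -> Dict:
        for step in steps:
            state = step(state)  # each step receives and returns shared state
        return state
    return run

# Reusable steps: the same workflow could target cloud or bare metal,
# isolating heterogeneity behind a common interface.
def discover(state): return {**state, "inventory": ["node-1", "node-2"]}
def provision(state): return {**state, "provisioned": state["inventory"]}
def configure(state): return {**state, "configured": True}

deploy = make_pipeline([discover, provision, configure])
result = deploy({"target": "bare-metal"})
print(result["configured"])  # True
```

Because the steps are self-contained, the same workflow can be rerun repeatedly with identical results, which is exactly the reusability and repeatability point above.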
Smart Edge – Neal Oliver
Next we looked into one of the platform solutions, called Smart Edge Open. The project originates in Intel, but some parts have been opened up under an open source license. Neal Oliver, Principal Engineer at Intel, joined the group in November to introduce the project, share some details and look into collaboration opportunities.
Smart Edge Open is a Kubernetes-based platform targeting edge use cases with a focus on public and private wireless. Don’t think that this means telecommunications only; use cases include smart factories and industrial settings, public safety, smart cities, and more. The platform builds on Kubernetes and other components from the CNCF ecosystem but is mostly concerned with containers. It is not just another container orchestration system: the platform adds building blocks on top, so you can build in telemetry, analytics, optimization and so forth. Beyond the enhancements these components provide, you can also pick experience kits with Smart Edge Open to spin up an opinionated stack that is fine-tuned for various use cases; examples include a Developer Experience Kit and a Private Wireless Experience Kit.
The platform uses a microservices architecture with modularity and reusability built in. The key challenges include maintaining platform consistency and scalability across diverse edge locations while meeting strict KPIs and addressing network complexity. The project is looking to collaborate with adjacent groups and communities as well as standards bodies; with the large number of groups focusing on edge computing in the open source and standards space, this last objective might be harder than it looks. We spent a few minutes on areas where we can best collaborate. We talked about the StarlingX project, a fully integrated cloud platform that is optimized for edge and IoT use cases, and mentioned specific components of Smart Edge Open, including the telemetry, topology manager and scheduler services, which seem to be common ground for the project and the OpenInfra communities.
Collaboration with Industry IoT Consortium (IIC)
Towards the end of 2021, Chuck Byers and Mitch Tseng from the Industry IoT Consortium (IIC), formerly known as the Industrial Internet Consortium, participated in one of the working group’s calls for a joint session. Chuck gave a presentation about IIC’s views, scope and activities in the edge computing space, and the group touched on plans and ideas for 2022 and onwards. IIC, as the name suggests, is mostly concerned with IoT and small devices and how they can be deployed in an edge computing environment. Their use cases include smart cities, drone-based cargo delivery and 5G, with the magnifying glass on indoor networks. Many of these use cases are shared across both groups, and as we looked a bit under the hood, the challenges and requirements in the infrastructure layer also matched. We touched on multi-layer distributed systems and also discussed the distributed-versus-centralized conundrum.
The topics we covered included:
* Security and Trust
In an ideal world you would run everything on the edge site that sits between the end device or user equipment and the central cloud. But that is just not feasible: there are space and power limitations, environmental considerations, reliability issues, and the list goes on. The group agreed that the key is modular solutions and finding the right balance in the placement of services, as both the central cloud and the edge sites need to play a role. The two groups are planning joint workshops and meetings during 2022 to identify next steps that we can take together.
Networking at the Edge
Edge computing does not work without the connection between the edge sites and the central cloud, and often not without connections between the edge sites themselves. It is expected that a connection might get flaky or unstable, but it is never expected to be absent entirely. This moves networking into the heart of the dependency chain, yet for most people it is more the unknown monster under the bed than a good friend. We had two sessions covering specific aspects of networking for edge infrastructures: DNS at the Edge and IPv6 for Edge. Below we summarize what we discussed in these two areas so you can better understand the challenges, as well as the solutions and advantages of some of the choices you can make when building your edge infrastructure.
DNS has been around for over 35 years, so it is tempting to say it should all be crystal clear by now. But once edge comes into the picture, DNS gets that much harder, with several gaps and challenges. For instance, the question of centralizing or distributing the service pops up here too; for DNS, centralizing is the most common model. Telecom operators are typically responsible for the central or core WAN DNS, while customers manage anything inside their LAN (that is, anything on the other side of the WAN connection).
However, the configuration choices depend highly on the control plane design, and things get more complicated if the end-to-end infrastructure runs from the core to the edge. The question becomes: what is considered the WAN and what is the LAN? Edge architectures can quickly blur that distinction. Autonomy at the edge is a big factor, along with zero trust, where DNS can play a key role in supporting a distributed architecture with many nodes; after all, DNS was designed to support the Internet itself! Decisions about how the infrastructure is deployed in the network affect your DNS setup. For instance, if you are all IP-based then you don’t need to worry about DNS, but when you are relying on certificates, it becomes essential. The challenges increase as you move out of the data center to the edge, and managing a decentralized system is different from managing a centralized one. Global management can be appealing, and even when it is not feasible for a given use case, most people still want a global view of their systems and architecture. Another problem is that container images often ship with hard-coded names baked into the image; how do you control the uncontrolled out-of-band calls and requests that result from this choice?
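To illustrate the WAN/LAN split discussed above, here is a toy split-horizon resolver in Python. It is purely illustrative (real deployments would use DNS server features such as BIND views or CoreDNS plugins), and all names and addresses below are made up.

```python
# Toy "split-horizon" resolution: the answer depends on where the query
# comes from, mirroring the central-WAN vs. edge-LAN split described above.
# The name and both addresses are invented for illustration.
EDGE_VIEW = {"app.example.internal": "10.0.0.5"}        # LAN-side answer
CENTRAL_VIEW = {"app.example.internal": "203.0.113.7"}  # WAN-side answer

def resolve(name: str, client_is_local: bool) -> str:
    """Return the address for a name, depending on the client's location."""
    view = EDGE_VIEW if client_is_local else CENTRAL_VIEW
    try:
        return view[name]
    except KeyError:
        raise LookupError(f"no record for {name}")

print(resolve("app.example.internal", client_is_local=True))   # 10.0.0.5
print(resolve("app.example.internal", client_is_local=False))  # 203.0.113.7
```

The same name resolves differently on each side of the WAN boundary, which is exactly why blurring the WAN/LAN distinction at the edge complicates DNS design.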
For further details about our conversation, you can find and listen back to the session recording here.
IPv6 for Edge
While IPv6 is finally gaining traction after many years, it is still a networking stepchild. Yes, it does provide answers to the many challenges with IPv4 (yes, we really did run out of IP addresses years ago), and yet the adoption of IPv6 is still slow. Most architectures still require support for both IPv4 and IPv6, which has created a roadblock to transitioning entirely to IPv6, since supporting the two together is harder than just pulling the plug, flipping the switch and saying ’starting from tomorrow, we are IPv6 only!’. This dilemma also affects the edge. A great example of the advantages of IPv6 is service discovery, which IPv6 makes much easier and more transparent. Not running out of addresses when there is a potential for millions of IoT devices is pretty darn appealing; it means that you don’t need to rely on NAT and other kinds of magic sauce to build out networking for your edge infrastructures. IPv6 can be leveraged in many other areas as well, such as centralized administration and policy changes.
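The address-space advantage is easy to see with Python’s standard ipaddress module. The prefix below is the IPv6 documentation range (2001:db8::/32), used here only as a stand-in for a provider allocation:

```python
import ipaddress

# IPv6 offers 2**128 addresses versus IPv4's 2**32: enough to give
# every edge site (and every IoT device on it) routable space without NAT.
print(2**128 // 2**32)  # how many entire IPv4 address spaces fit into IPv6

# Carve a dedicated /48 per edge site out of a provider /32.
# 2001:db8::/32 is the documentation prefix, standing in for a real one.
provider = ipaddress.ip_network("2001:db8::/32")
sites = provider.subnets(new_prefix=48)  # generator of per-site prefixes
first_site = next(sites)
print(first_site)                # 2001:db8::/48
print(first_site.num_addresses)  # 2**80 addresses for a single site
```

Every device in every site gets a unique, routable address from a plan like this, which is what makes IPv6 service discovery simpler than NAT-based IPv4 schemes.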
As reducing complexity is always at the top of the wishlist, why aren’t we just jumping on this one?! Unfortunately, there are still many holes to fill before we can jump over the smaller potholes on the road to Edge Nirvana. Just one example: Android does not support DHCPv6. IPv6 also opens up challenges with security and trust, since it is easier to get your network opened up unintentionally. This is a tooling problem which maps back to human factors, and which could be eliminated by more automation.
The general agreement during the meeting was that IPv6 adoption is poor, and, as surprising as it might sound, it is not a networking challenge! The expertise and experience are simply not there; you cannot find many experts in this area outside of the telecommunications space. You also need to realize that it is an all-or-nothing game: unless everybody is on board, it is not possible to switch over, as everything is connected. That includes getting storage, security and other teams on board. We need to encourage everyone to start thinking in terms of IPv6-only. We cannot delay this move much longer, as new trends such as edge computing demand the many benefits IPv6 can bring and are less forgiving of the many shortcomings of IPv4.
The session recording is available here.
About OpenInfra Edge Computing Group:
The Edge Computing Group is a working group of architects and engineers from large enterprises, telecoms and technology vendors working to define and advance edge cloud computing. The focus is open infrastructure technologies, not exclusive to OpenStack.
Get Involved and Join Our Discussions:
- Weekly Meetings
- Join the Mailing List
- Cloud Edge Computing: Beyond the Data Center White Paper
- Edge Computing: Next Steps in Architecture, Design and Testing