When new paradigms sweep the tech industry, evangelists often channel their inner Judy Garland from “The Wizard of Oz” and say, “We’re not in Kansas anymore.” With edge computing, you’re not in Kansas anymore, at least in symbolic terms, but you are very much in Kansas, perhaps in every town in the state. You’re in Schrödinger’s Kansas! 

Such is the paradox of the edge: It's all about location, yet most edge technologies are not adequately focused on location. For example, one of the industry's leading infrastructure management platforms currently has no good way to identify the precise geographic location of a server under its control. That's no knock on its makers. For cloud natives, there has been little reason to track the geographic locations of digital assets: If you're deploying servers to hyperscale data centers, you don't need to know the ZIP code of each machine.

The edge disrupts this model significantly, and one immediate issue is scale. Near-term visions of the edge involve standing up thousands of new edge points of presence, and you need to know exactly where your machines are running in order to place workloads correctly.
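To make that concrete, here is a minimal TypeScript sketch of location-aware placement. The `EdgeSite` shape, its field names and the placement logic are hypothetical illustrations, not features of any particular platform; the point is simply that the decision requires geographic coordinates an inventory system must actually carry.

```typescript
// Hypothetical inventory record for an edge site. The geo fields are the
// point: most cloud-era inventory systems never carry them.
interface EdgeSite {
  id: string;
  lat: number; // degrees
  lon: number; // degrees
  freeRackUnits: number;
}

// Great-circle distance in kilometers via the haversine formula.
function distanceKm(aLat: number, aLon: number, bLat: number, bLon: number): number {
  const rad = (d: number) => (d * Math.PI) / 180;
  const dLat = rad(bLat - aLat);
  const dLon = rad(bLon - aLon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(rad(aLat)) * Math.cos(rad(bLat)) * Math.sin(dLon / 2) ** 2;
  return 2 * 6371 * Math.asin(Math.sqrt(h));
}

// Pick the nearest site with spare capacity, the decision a
// location-blind inventory cannot make at all.
function placeWorkload(sites: EdgeSite[], lat: number, lon: number): EdgeSite | undefined {
  return sites
    .filter((s) => s.freeRackUnits > 0)
    .sort(
      (a, b) =>
        distanceKm(lat, lon, a.lat, a.lon) - distanceKm(lat, lon, b.lat, b.lon)
    )[0];
}
```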

Another issue with the edge is cross-entity interoperability. No single corporation or public-sector organization will control the entire edge, simply because none can achieve the geographic coverage it needs with its own infrastructure alone. In contrast to the cloud, successful edge computing deployments will require seamless interoperation among multiple independent edge networks owned by separate businesses.

For example, in a recent article, I discussed vehicle-to-everything (V2X) connectivity and pointed out that on a random 25-mile drive across Oklahoma (sorry, Kansas), a connected vehicle will ping six cell towers owned by five different tower operators. If the V2X software maker wants to deploy at micro data centers within each tower site, it will have to negotiate data center hosting agreements with five different vendors. Given the latencies V2X demands, only a hyperlocal approach will work, but that would take a lot of negotiation.

Indeed, at the scale anticipated for the mass deployment of thousands of micro edge data centers, system owners will need to handle the following tasks (a hedged code sketch of the first two follows the list):

  • Provisioning applications and data on a location-centric basis, e.g., identifying available edge rack space near specific locations (and, if a micro data center is absent, efficiently contacting property owners to discuss installing one on their land)
  • Executing the business transactions needed to deploy across multiple edge networks, e.g., sub-leasing rack space in thousands of micro edge data centers owned and operated by different entities, including requesting the sub-lease, procuring the capacity, negotiating service contracts, arranging for physical support providers on a local basis and handling payments
  • Enabling edge applications hosted at micro data centers owned by different corporate entities to interoperate across different equipment
  • Monitoring hardware and responding to performance problems on a location-by-location basis
  • Managing infrastructure on a location-by-location basis, inclusive of site-specific and network-wide parameters
  • Responding to physical issues on a location-by-location basis, e.g., rolling trucks
  • Provisioning location-based failover sites
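To illustrate the first two tasks above, here is a hedged TypeScript sketch of a cross-network capacity search and sub-lease request. No such common API exists today; every type, endpoint and field below is invented for illustration only.

```typescript
// Hypothetical, illustration-only types; no edge operator exposes this API today.
interface CapacityOffer {
  operatorId: string; // which independently owned edge network holds the site
  siteId: string;
  distanceKm: number; // distance from the requested location
  kilowatts: number;
  monthlyRateUsd: number;
}

// Query several independently owned edge networks for rack space near a point.
async function findCapacity(
  operatorApiUrls: string[],
  lat: number,
  lon: number,
  radiusKm: number
): Promise<CapacityOffer[]> {
  const perOperator = await Promise.all(
    operatorApiUrls.map(async (base) => {
      const res = await fetch(
        `${base}/capacity?lat=${lat}&lon=${lon}&radiusKm=${radiusKm}`
      );
      return (await res.json()) as CapacityOffer[];
    })
  );
  // Flatten and rank the offers from all networks by price.
  return perOperator.flat().sort((a, b) => a.monthlyRateUsd - b.monthlyRateUsd);
}

// Begin the business transaction: request a sub-lease on the chosen site.
async function requestSubLease(
  base: string,
  offer: CapacityOffer,
  termMonths: number
): Promise<void> {
  await fetch(`${base}/sub-leases`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ siteId: offer.siteId, termMonths }),
  });
}
```

The sketch assumes every operator speaks the same protocol, which is precisely what does not exist yet and what the standards discussed next would provide.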

Relying on manual processes to manage such massively distributed, multi-supplier infrastructure would be prohibitively expensive and slow. Automation and orchestration among the various participants are essential, given the scale of the coming edge computing deployment, the volume and complexity of the required tasks and the number of different corporations that must collaborate. This, in turn, will work best with standards, common APIs and agreed-upon schemas.

Proposed Solution: ELMOS 

While standards are developing for application management and other control-plane aspects of this anticipated infrastructure trend, few standards address the physical layer. My suggestion is to develop what I call Edge Location Management and Operations Standards, or Schemas (ELMOS). ELMOS would enable the interoperation and management of edge data center locations. They could handle the exchange of data required for location-centric edge infrastructure management and multi-edge computing, such as the following (a speculative record sketch appears after the list):

  • Edge data center location
  • Infrastructure elements (e.g., compute, storage) available at an edge data center
  • Capacity (e.g., kilowatts, rack space)
  • Commercial terms for procurement or sub-leasing of edge data center capacity
  • Information about an edge data center’s owner and operator
  • Contacts and contracts for local utility and physical support services
  • Real estate information, e.g., zoning, property owner contact information
  • Location of nearby fiber optic networks
  • Requirements for deploying software to an edge data center site, based on location
  • Data to support monitoring and management of edge infrastructure, based on location
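To make "agreed-upon schemas" tangible, below is one speculative shape for an ELMOS site record, written as a TypeScript interface. Every field name is my own guess at what the list above implies; nothing here comes from a published specification.

```typescript
// Speculative ELMOS site record; all field names are illustrative only.
interface ElmosSiteRecord {
  siteId: string;
  location: { lat: number; lon: number; address: string };
  owner: { name: string; contact: string };
  operator: { name: string; contact: string };
  infrastructure: {
    computeNodes: number;
    storageTb: number;
    rackUnitsFree: number;
    powerKw: number;
  };
  commercialTerms: {
    subLeaseAvailable: boolean;
    monthlyRateUsd?: number; // optional: some terms will be negotiated per deal
  };
  localServices: Array<{ service: string; provider: string; contact: string }>;
  realEstate: { zoning: string; propertyOwnerContact: string };
  nearbyFiber: Array<{ provider: string; distanceMeters: number }>;
  deploymentRequirements: string[]; // location-specific rules for deploying software
  monitoringEndpoint: string; // where location-based telemetry is exposed
}
```

A shared record like this is what would let the cross-network capacity search sketched earlier query many operators' inventories with one client.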

Can diverse players come together for ELMOS?

The many industry stakeholders who share the goal of building a commercially successful edge should consider coming together to develop ELMOS or something like them. These might include automotive companies, telcos, towercos, colocation providers, data center real estate investment trusts (REITs) and many others. Any business that wants to interoperate and transact around edge computing resources would be wise to investigate the idea of ELMOS.

A comment draft of an ELMOS white paper is available for feedback.

With special thanks to Rob Hirschfeld of RackN.