OpenStack Foundation ecosystem technical lead Ildiko Vancsa offers a recap of discussions around edge computing at the recent Project Teams Gathering.


The main goal for the Edge Computing Group at the PTG was to draw an overall architecture diagram to capture the basic setup and the requirements for building an environment suitable for edge use cases out of a set of OpenStack services. Our primary focus was on Keystone and Glance, but discussions with other project teams, such as Nova, Ironic and Cinder, also took place.

The edge architecture diagrams we drew are part of a minimum viable product (MVP) that defines a minimum set of services and requirements to create a functional system. This architecture will evolve as we collect more use cases and requirements.

To describe edge use cases at a higher level, taking mobile edge as an example, we identified three main building blocks:

  • Main or regional data center (DC)
  • Edge sites
  • Far edge sites or cloudlets

We examined these architecture diagrams with the following user stories in mind:

  • As an OpenStack deployer, I want to minimize the number of control planes necessary to manage across a large geographical region.
  • As an OpenStack deployer, I want disk images to be pulled to a cluster on demand, without needing to sync every disk image everywhere.
  • As an OpenStack user, I expect that instance autoscale continues to function in an edge site if connectivity is lost to the main data center.
  • As an OpenStack user, I want to manage all of my instances in a region (from a regional DC to far-edge cloudlets) via a single API endpoint.

The above list is neither final nor exhaustive; rather, it is a starting point that we captured during the discussions as a first step.

We concluded by talking about service requirements in two major categories:

  1. Edge sites that remain fully operational in case of a connection loss between the regional data center and the edge site, which requires control plane services running on the edge.
  2. Edge sites where full control during a connection loss between the regional data center and the edge site is not critical, which can be satisfied by running the control plane services only in the regional data center.

In the first case, the orchestration of the services becomes harder and is not necessarily solved yet, while in the second case users keep centralized control but lose functionality on the edge sites whenever access back to the regional DC is unavailable.
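
To make the trade-off concrete, here is a minimal sketch, with assumed endpoint URLs, of an edge agent that prefers the regional control plane and falls back to an edge-local one when the regional DC is unreachable; under the second model there is no local control plane to fall back to, so the edge site loses that functionality during the partition.

```python
import socket
from urllib.parse import urlparse

# Hypothetical endpoints for illustration; a real deployment would discover
# these from the Keystone service catalog.
REGIONAL_KEYSTONE = "https://keystone.regional-dc.example.com:5000"
LOCAL_KEYSTONE = "https://keystone.edge-site-1.example.com:5000"

def reachable(url: str, timeout: float = 2.0) -> bool:
    """Cheap TCP-level reachability probe; production code would use proper
    health checks with retries instead."""
    parsed = urlparse(url)
    try:
        with socket.create_connection((parsed.hostname, parsed.port or 443), timeout):
            return True
    except OSError:
        return False

def pick_auth_endpoint() -> str:
    # Second model: only REGIONAL_KEYSTONE exists, so a partition means no
    # control plane. First model: fall back to the edge-local control plane.
    return REGIONAL_KEYSTONE if reachable(REGIONAL_KEYSTONE) else LOCAL_KEYSTONE

print("Authenticating against:", pick_auth_endpoint())
```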

We did not discuss further items such as high availability (HA) at the PTG or go into details about networking during the architectural discussion.

We agreed to prefer Federation for Keystone and came up with two work items to cover missing functionality:

  • Keystone should trust a token from an Identity Provider (IdP) master and, when the auth method is called, perform an idempotent creation of the user, project and role assignments according to the assertions in the token.
  • Keystone should support the creation of users and projects with predictable UUIDs (e.g., a hash of the user or project name), which greatly simplifies image federation and telemetry gathering (see the sketch after this list).
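
To illustrate the second item, here is a minimal sketch of deriving predictable IDs from a name hash with Python's standard uuid module. The namespace constant and the domain/name input scheme are assumptions for illustration, not Keystone's actual algorithm; predictable IDs would also make the idempotent creation described in the first item deterministic across sites.

```python
import uuid

# Shared namespace so independent Keystone deployments derive identical IDs;
# reusing NAMESPACE_DNS here is purely an illustrative choice.
EDGE_NAMESPACE = uuid.NAMESPACE_DNS

def predictable_id(domain: str, name: str) -> str:
    """Derive a deterministic ID from a hash of the domain and resource name,
    so two sites agree on a user's or project's ID without synchronizing."""
    return uuid.uuid5(EDGE_NAMESPACE, f"{domain}/{name}").hex

# Two sites computing the ID independently get the same value:
assert predictable_id("Default", "edge-project") == predictable_id("Default", "edge-project")
print(predictable_id("Default", "edge-project"))
```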

For Glance, we explored image caching and spent some time discussing the option of also caching image metadata, so that a user can still boot new instances at the edge during a network connection loss that leaves the site disconnected from the registry:

  • As a Glance user, I want to upload an image in the main data center and boot that image in an edge data center; the image should be fetched to the edge data center together with its metadata (see the sketch below).
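
A minimal sketch of this flow against the Glance v2 REST API (GET /v2/images/{image_id} for the metadata, GET /v2/images/{image_id}/file for the image data); the endpoint URL, token handling and cache location are illustrative assumptions:

```python
import json
import requests

GLANCE = "https://glance.regional-dc.example.com:9292"  # assumed main-DC endpoint
HEADERS = {"X-Auth-Token": "<keystone-token>"}          # assumed pre-fetched token

def cache_image_at_edge(image_id: str, cache_dir: str = "/var/cache/edge-images") -> None:
    """Fetch an image and its metadata from the main DC so the edge site
    can still boot it if the connection to the registry is lost."""
    # Metadata: name, disk_format, container_format, custom properties, ...
    meta = requests.get(f"{GLANCE}/v2/images/{image_id}", headers=HEADERS)
    meta.raise_for_status()
    with open(f"{cache_dir}/{image_id}.json", "w") as f:
        json.dump(meta.json(), f)

    # Image payload, streamed in chunks to avoid buffering large images.
    with requests.get(f"{GLANCE}/v2/images/{image_id}/file",
                      headers=HEADERS, stream=True) as resp:
        resp.raise_for_status()
        with open(f"{cache_dir}/{image_id}.img", "wb") as f:
            for chunk in resp.iter_content(chunk_size=1 << 20):
                f.write(chunk)
```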

We’re still in the process of documenting these discussions and drawing architecture diagrams and flows for Keystone and Glance.

In addition to the above, we also went through the Dublin PTG Wiki that captures requirements:

  • We agreed to consider the list of requirements on the Wiki finalized for now.
  • We agreed to move the additional requirements listed on the Use Case Wiki page there.

For details on discussions about related OpenStack projects, check out the following Etherpads for notes:

In addition, here are notes from the StarlingX sessions.

We’re still working on cleaning up the MVP architecture and discussing comments and questions before moving it to a Wiki page.

You can find out more about how to get involved — meetings, IRC channel and mailing lists — on the Edge Computing Group Wiki.