Learn about how NVIDIA is using Kata Containers and Confidential Containers with GPUs.


OpenInfra Live is a weekly hour-long interactive show streaming to the OpenInfra YouTube channel on Thursdays at 15:00 UTC (9:00 AM CT). Episodes feature OpenInfra release updates, user stories, community meetings, and more open infrastructure stories.

In this episode of OpenInfra Live, Zvonko Kaiser, principal systems software engineer at NVIDIA, and Treva Williams, Kata Containers technical community manager, dived deep into the intricacies of Kata Containers. The Q&A session with Kaiser covered a variety of topics, from exploring the synergy between Kata and Confidential Containers (CoCo) to understanding NVIDIA’s utilization of Kata.

Kaiser has always been a fan of open source and has been actively contributing to numerous projects over the past 12 years. He expressed his enthusiasm for the transparency inherent in open source projects. He emphasized that the strength of open source, especially in the realm of confidential containers, lies not in security through obfuscation or hiding anything; instead, it offers complete openness, enabling anyone to examine the source code and understand the intricacies of Kata’s execution. This transparency, he noted, distinguishes it from proprietary software, where the inner workings may not be as visible.

The ability to examine source code is a pivotal factor in attracting individuals to open source, as it not only fosters transparency but also allows users to implement necessary changes collaboratively. Kaiser highlighted that the shared goal of maintaining transparency among contributors is a driving force behind the widespread adoption of open source solutions.

Currently leading the efforts related to Kata and CoCo at NVIDIA, Kaiser shed light on their ongoing initiative to integrate Kata into products requiring more isolation. This journey started three years ago when NVIDIA, in collaboration with Intel, embarked on the cloud-native enablement of GPUs. Despite possessing the components for cloud-native GPU execution in containers and Kubernetes, they sought additional isolation techniques. In their quest for enhanced security and stability for GPU-intensive workloads, they delved into virtualization technologies, ultimately selecting Kata due to its seamless integration with Kubernetes.

Another topic that came up during the episode was the difference between Docker and Kata. Kaiser clarified the distinction between the two, explaining that, while both serve as container runtimes at a high level, Kata introduces a layer of virtualization in the middle, offering heightened isolation compared to Docker’s bare-metal container execution. NVIDIA’s preference for Kata stems from its compatibility with the Container Network Interface (CNI), Container Runtime Interface (CRI), and Container Storage Interface (CSI), ensuring a seamless user experience regardless of the container runtime.
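That seamless experience is visible in how Kata is typically selected in Kubernetes: a RuntimeClass points at the Kata handler, and the pod spec is otherwise unchanged, so CNI networking and CSI volumes behave exactly as they would with any other runtime. The sketch below is illustrative; the handler name depends on how the cluster’s containerd or CRI-O is configured, and the image tag and resource names are assumptions, not taken from the episode.

```yaml
# RuntimeClass mapping a name to the Kata handler registered in the CRI runtime
# (handler name varies with the cluster's containerd/CRI-O configuration)
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
---
# An ordinary pod spec: only runtimeClassName changes to opt into
# VM-isolated execution; everything else stays standard Kubernetes
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  runtimeClassName: kata
  containers:
    - name: app
      image: nvidia/cuda:12.3.1-base-ubuntu22.04  # illustrative image tag
      resources:
        limits:
          nvidia.com/gpu: 1  # assumes the NVIDIA device plugin is deployed
```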

Highlighting the convergence of Kata and CoCo, Kaiser emphasized that the latter is “just an umbrella term because the backbone of Confidential Containers is really Kata.” He explained that CoCo requires Kata to complete the attestation process, a multi-step verification to ensure software and versions run as expected in secure environments.
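The multi-step verification Kaiser mentions can be pictured as a challenge–response exchange: the guest produces measurements of its software stack, and a verifier releases secrets only if every measurement matches a trusted reference value. The Python sketch below models just that shape with plain hashing and hypothetical component names; it is a conceptual illustration, not the Confidential Containers implementation, which relies on hardware-signed TEE evidence rather than simple digests.

```python
import hashlib

def measure(component: bytes) -> str:
    """Stand-in for a hardware-backed measurement (e.g. a TEE quote)."""
    return hashlib.sha256(component).hexdigest()

# Verifier side: expected reference values for each software layer
# (hypothetical component names and contents for illustration)
reference_values = {
    "guest_kernel": measure(b"kernel-6.5-kata"),
    "kata_agent": measure(b"kata-agent-3.2"),
}

def attest(evidence: dict) -> bool:
    """Approve only if every measured layer matches its trusted reference."""
    return all(evidence.get(name) == ref for name, ref in reference_values.items())

# Guest side: measure the running stack and present it as evidence
evidence = {
    "guest_kernel": measure(b"kernel-6.5-kata"),
    "kata_agent": measure(b"kata-agent-3.2"),
}
print(attest(evidence))  # matching stack -> True

# Any modified component changes its measurement, so attestation fails
tampered = dict(evidence, kata_agent=measure(b"kata-agent-backdoored"))
print(attest(tampered))  # -> False
```

Only after such a check succeeds would secrets (for example, keys to decrypt a container image) be released into the confidential guest.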

The episode concluded with Kaiser presenting a demo showcasing the lift-and-shift nature of GPU workloads, specifically how Kata and CoCo work together seamlessly to run any workload unmodified.

To watch the full episode, check out the link below. 

Got more questions? Reach out to Zvonko Kaiser on LinkedIn!

Kristin Barrientos