Only you can prevent the brush fires in the on-prem prairies that become towering infernos in the cloud, says Oracle’s Greg Marsden.



Greg Marsden, vice president of Linux development at Oracle, wants to put an end to security brush fires.

Marsden is a founding member of a team that has contributed more than 400,000 lines of code to the Linux kernel since he started out in 2000. In his talk, he went over various approaches to OS security from a design and process perspective, along with some of the lessons that have come up as more services start to rely on containers for isolation.

In the past year, wide-ranging security blazes have led to rethinking some classic measuring sticks.

Take uptime, for example: The Oracle team has gone over six years without a reboot. “I’m still not sure whether to be proud of that accomplishment or not,” Marsden says, speaking at the recent Open Source Summit North America. “I still feel a little glimmer of joy at a 200-day uptime or 300-day uptime, but it’s now tempered a bit by the knowledge of security exploits and vulnerabilities. As well as sympathy for the poor soul who’s going to have to upgrade that application someday.”

You don’t have to be an academic researcher to notice that security attacks are ticking up, he says. “Just try starting a server on any large cloud provider’s public IP block these days and watch your log files. Even without a hostname assigned, attacks are going to start rolling in: security probes, SSH login attempts, and so on.” One way to handle that legacy OS problem is through containers.

“Strict container usage provides process isolation and creates a natural sandbox environment,” Marsden notes. “Should an attacker be successful, they’re still stuck in that container, and this makes for pretty decent security.”

Snuffing out a spark

He does admit that, to an extent, containers share a lot with the host OS and with each other. “It’s one kernel running underneath all the containers, and a vulnerability in that kernel is going to affect every container running on that system.”

“We are really excited about the promise of the Kata Containers project.”

Meanwhile, each container has its own user space and runtime stack, with its own set of security updates. A container will still pick up an updated on-disk library, but only if the container is actually restarted; until it restarts, it keeps the old versions of all of those libraries. Applications written to live in containers using a microservice architecture would actually be updated, because they’re easy to update. But legacy-type applications in real-world container workloads are going to have the same uptime and availability requirements as a classic data center service, he notes.
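The restart caveat is easy to demonstrate. This minimal Python sketch (Linux/POSIX semantics; `libfoo.so` is a hypothetical file name standing in for a shared library) shows why a process that already holds a library open keeps the old version even after the on-disk copy is replaced:

```python
import os

LIB = "libfoo.so"  # hypothetical file standing in for a shared library

# "Install" version 1 and have a long-lived process hold it open,
# the way a running container keeps its libraries mapped.
with open(LIB, "w") as f:
    f.write("libfoo v1")
handle = open(LIB)  # stands in for the running service

# A security update replaces the file on disk atomically
# (rename-over, as package managers do). The already-open handle
# still refers to the old inode.
with open(LIB + ".new", "w") as f:
    f.write("libfoo v2")
os.replace(LIB + ".new", LIB)

old_view = handle.read()     # the running process: still sees v1
new_view = open(LIB).read()  # a fresh open, i.e. after a restart: v2

handle.close()
os.remove(LIB)
```

Only a fresh open (a restart) picks up the new version; the same logic applies to a container's entire user-space stack.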

“Despite the advantage of deployment and packaging, container security isn’t really that much better than the old model of application development,” Marsden says. Containers also provide a perfect jumping-off point for side-channel attacks. These attacks work against processes running on sibling hyperthreads, and cgroups and namespaces don’t necessarily give native protection against sharing processor resources with potentially antagonistic workloads, Marsden adds.

An ounce of prevention: Kata Containers

“We are really excited about the promise of the Kata Containers project,” he says, because rather than running as processes in namespaces, Kata Containers takes advantage of hardware virtualization for process isolation, using a stripped-down kernel that boots in seconds. “These special containers use those hardware virtualization features to defend against cross-contamination with other containers and will be able to take advantage of KVM’s defenses against side-channel attacks.

“That problem of one vulnerability showing up in every container is only magnified in the cloud world. Where containers share one kernel across the computer, clouds end up sharing a hypervisor across the entire fleet.” Clouds still have plenty of software diversity at the guest layer, he says, but they enforce much stricter controls on the service and hypervisor layers.
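Adopting Kata in practice is largely a configuration exercise. Here is a minimal sketch in Kubernetes terms, assuming a node where Kata Containers is installed and the container runtime (containerd, for instance) exposes a `kata` handler; the names are illustrative:

```yaml
# Hypothetical cluster setup: maps a RuntimeClass to the node's
# "kata" runtime handler, then opts a pod into VM-based isolation.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
---
apiVersion: v1
kind: Pod
metadata:
  name: isolated-app
spec:
  runtimeClassName: kata   # this pod's containers boot inside a lightweight VM
  containers:
  - name: app
    image: nginx           # example image
```

Workloads that omit `runtimeClassName` keep using the default (namespace-based) runtime, so the two isolation models can coexist on one cluster.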

Clouds have managed to standardize, to some extent, the hypervisor and virtualization environments to provide an enforced consistency across that whole environment, he says. An ideal cloud would have just one OS platform running all of its services and just one hypervisor version underneath them, always kept fully up to date: fully patched and capable of being patched for security at a moment’s notice. When security vulnerabilities are fixed, they need to be fixed everywhere, he notes.

His team relies on a put-your-eggs-in-one-basket (and keep your eyes on it) approach that translates into rebootless live patching. Here’s why you want to think about live patching solutions for your production workloads, he says.

Cloud service providers often get early disclosures of security vulnerabilities, but the end user really only finds out when those public disclosures come out. “Live patching is really the next best thing to having a system that can actually respond and pivot to attacks in real time, moving towards a self-healing OS,” Marsden says.

Catch his entire talk here.