A new track at the upcoming Austin Summit
will explore where the cutting edge of the cloud meets the cutting edge of science.
The high-performance computing (HPC) research track will focus on how universities, labs, research institutions and the like are increasingly using OpenStack for everything from supercomputing (HPC) and high-throughput computing (HTC) to research applications (like Matlab, RStudio and iPython) and much more. OpenStack users from around the world will discuss reference architectures, best practices and case studies for HPC.
The pioneering track has three co-chairs: Guido Aben, director of eResearch at AARNET;
Bernard Meade, HPC/cloud compute manager at the University of Melbourne; and Stig Telfer of Cambridge University.
Telfer hopes that the track will be “crammed with interesting talks.” Gentle reminder: you have until February 1 to get your talk proposals ready for Austin.
Superuser talked to Telfer about why HPC no longer plays a "supporting actor role" in the OpenStack community and how you can get involved.
**HPC has had ties to OpenStack more or less since the beginning. What’s the impetus for a Summit track on it now?**
There has indeed been a long-standing and successful use case of cloud compute for scientific applications, and HPC is a demanding subset of that. HPC facilities have tended to use OpenStack private clouds in a supporting-actor role to a tightly coupled supercomputer: as a high-throughput task farm performing post-processing analytics on the data generated by the supercomputer.
What has changed for me is the tantalizing promise of the software-defined supercomputer: a system with the performance levels of HPC but the easy flexibility of cloud. With every release, OpenStack is getting closer to delivering this.
Conversely, what has changed very little has been the pace of development in HPC system management. Administrators in the HPC market space have long looked upon the innovations in flexible infrastructure management coming from the cloud space and demanded equivalent capabilities of HPC vendors. Admins are also frustrated by the lack of consistent interfaces for management of HPC systems from different vendors – sometimes even different products from the same vendor. At the same time, HPC users do not want to sacrifice performance for these new capabilities.
That’s a demanding set of requirements but now we are at a point where OpenStack appears able to meet those requirements. How exciting is that?
**What’s the biggest challenge operators in this area are facing now?**
I think it has to be the skill set. Research institutions have deep expertise in HPC cluster management, but the OpenStack skill set is so different those institutions can find it a challenge to manage their new private cloud infrastructure. We would love to help with the development of those skills through creation of a self-reinforcing OpenStack/HPC community.
The second challenge I can see is how to get effective performance from OpenStack. Some of the HPC use cases are extremely demanding in areas where virtualization is typically weak: latency-sensitive patterns of communication and I/O in particular appear to be the hypervisor’s Achilles’ heel. Bare metal and containers are emerging OpenStack capabilities that address this overhead; how close to the cutting edge can we get while still maintaining production service-level agreements?
**Who should get involved?**
Anyone who provides infrastructure for HPC and has an interest in doing it through OpenStack!
Why shouldn’t we benefit from sharing? Unlike telcos, for example, HPC institutions are not typically commercial rivals. We all have the same motivation of providing effective infrastructure, but we are not competing against one another by doing so.
**How can they get involved?**
It’s early days yet, but there is an [hpc] topic tag already assigned for the openstack-operators mailing list, and my assumption is that when we have things to discuss, that will be the forum to use.
HPC doesn’t cover all the use cases of academia, and I expect there will be other ways people will find to interact. I would expect the whole thing to fit under the openstack-operators umbrella though.
Beyond that, I think we would benefit from regular social gatherings at the Summits, and I’m hoping the Foundation’s new Scientific Working Group will be a focus for organizing that. Discussions for the Working Group are held on the [email protected] mailing list with the subject-line tag [scientific-wg].
**What contributions or involvement do you need most right now from the community?**
We know there are problems to solve and we know there’s interest in solving them. What I would love to see now would be for OpenStack-using scientists and researchers from across the world to get together and compare notes in a way that enables useful information sharing. I think what we need most right now is outreach and then we need to prove the benefits of doing so.
[Cover Photo](https://www.flickr.com/photos/eager/7969600332/) // CC [BY NC](https://creativecommons.org/licenses/by-nc/2.0/)