“Very few markets of any type can absorb as much product as you can throw at them — HPC is one of them,” says researcher Christopher Willard.

High performance computing is a $35–36 billion market that most people think is dominated by academia and the public sector.

Not so, says Christopher Willard, chief research officer at Intersect 360 Inc., in a recent talk at the HPC Advisory Council Stanford Conference.

The HPC market’s current moderate growth rate — 3.5 percent, a little better than inflation — is driven by commercial markets, a trend he expects to continue for the next five years.

“Commercial industrial sites are picking up and becoming more and more enthusiastic about high performance computing and spending more money,” Willard says, noting that 55.5 percent of current HPC use is commercial, 18 percent is academic and about 26 percent is government.

First the computing, then the laundry

Two sectors to watch for commercial growth are chemistry and finance. Some of the growth in the chemistry sector “comes from things like consumer product manufacturing, like pods for laundry detergent,” which he calls a “chemical miracle.” The financial sector is also set to grow. “This not only includes Wall Street for pricing, risk analysis and trading, but also the insurance industry and a lot of the general business segments that mainly deal with moving money around.”

Growth prospects for the commercial side of HPC are unusually elastic. Willard and colleagues surveyed companies to find out how much computing power they thought they might use in the coming years: only 15 percent responded that they had all the computing power they needed right now, while some 65 percent said they could double their usage within five years. Respondents were also confident they could use five to ten times more power, and were on the fence only when it came to 100 times the computing power.

“Very few markets of any type can absorb as much product as you can throw at them — HPC is one of them.”

Shifting to hyperscale

“The hyperscale market consists of arbitrarily scalable, web-facing application infrastructure distinct from general IT investment,” Willard says by way of definition. The team at Intersect 360 had been eyeing the market for years until, “pretty much, it became a segment so big that it began to dwarf the other segments.”

It’s on the same growth-spurt trajectory as HPC, and then some. “With HPC you’re going to grow systems until you run out of science; with hyperscale you’re going to grow systems until you run out of people in the world who want to use the Internet.”

He clarifies that it’s not simply businesses with websites running web-facing applications on standard business computers. Hyperscale is more about scalability — how many different jobs you can run in a given amount of time, and how you balance a few large jobs against many smaller ones. And there is no “entry level” for single scientists or users who want to try it out.

His final comparison point between HPC and hyperscale — the one he calls “really scary” — pertains to budget: large supercomputer facilities cost about a hundred million dollars, while the largest hyperscale installations require budgets in the $1 billion range. “That says there is enough money here to restructure the overall computer market,” he concludes. “The people who are calling the shots in defining technology are the hyperscale people…they’re getting outspent on the HPC side.”

Deep learning

Roughly 75 percent of HPC centers are doing some deep learning and artificial intelligence, he says. “We’re placing the deep learning market at about 2 to 2.5 billion dollars.”

Unlike many, though, Willard has a been-there-seen-that take on this buzzword. Deep learning, he says, is somewhat like giving a room of eight- to 10-year-olds each a can of spray paint and then walking out of the room. When you come back 10 minutes later, everything’s covered with spray paint: whatever you want painted is covered, sure, but the walls are covered and the kids are covered, too.

“We’ve taken all the computer scientists in the world and given them a deep learning can of spray paint.  They’re out there covering the entire solution space with deep learning.” For the most part, he says that’s not a bad thing, even if the result is sometimes that a few things that shouldn’t be painted get a coat or two. He expects the space to double in the next couple of years, after which it will grow moderately and potentially contract within five.

“Learning takes a lot of work and a lot of computing power, but it’s not yet clear to me how many times you have to relearn something once you get a good learning set of a language completed. Do you really need to complete that process or do you just need to fine-tune it every year or so to keep up with changes?”

He also gave an overview of the big players in HPC and their current market share and outlook, including segment leader HPE, Dell, Lenovo, Huawei and Fujitsu, and offered a look into storage revenue models (Dell EMC, NetApp, IBM, etc.).

Catch the whole 42-minute talk on YouTube.

H/T Rich Report

For more upcoming talks on HPC, check out the dedicated track which features speakers from NTT, Intel and CERN at the upcoming OpenStack Summit.