Why the US Is Losing Ground on the Next Generation of Powerful Supercomputers

“I feel the need — the need for speed.”

The tagline from the 1980s movie Top Gun could be seen as the mantra of the high-performance computing world these days. The next milestone in the endless race to build ever-faster machines is standing up the first exascale supercomputer.

Exascale might sound like an alternative universe in a science fiction movie, and judging by all the hype, one could be forgiven for thinking that an exascale supercomputer might be capable of opening up wormholes in the multiverse (if you subscribe to that particular cosmological theory). In reality, exascale computing is at once more prosaic — a really, really fast computer — and packs the potential to change how we simulate, model and predict life, the universe and pretty much everything.

First, the basics: exascale refers to high-performance computing systems capable of at least a billion billion calculations per second, roughly ten times faster than the most powerful supercomputer in operation today. A system capable of at least one exaFLOPS (a quintillion, or 10^18, floating point operations per second) carries additional significance: by some estimates, that is roughly the processing power required to simulate the human brain.
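For a rough sense of scale, the minimal Python sketch below compares one exaFLOPS against the 93-petaflop Linpack score of the current top-ranked system cited later in this article. The figures are illustrative, not official benchmark results.

```python
# Back-of-the-envelope scale of an exaflop, using figures cited in this article.
EXAFLOPS = 1e18        # one quintillion floating point operations per second
TAIHULIGHT = 93e15     # Sunway TaihuLight's Linpack score, about 93 petaflops

speedup = EXAFLOPS / TAIHULIGHT
print(f"An exascale machine would be roughly {speedup:.0f}x faster than TaihuLight")
# prints: An exascale machine would be roughly 11x faster than TaihuLight
```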

Of course, as with any race, there is a healthy amount of competition, which Singularity Hub has covered over the last few years. The supercomputer version of NFL Power Rankings is the TOP500 List, a compilation of the most super of the supercomputers. The 48th edition of the list was released last week at the International Conference for High Performance Computing, Networking, Storage and Analysis, more succinctly known as SC16, in Salt Lake City.

In terms of pure computing power, China and the United States are pretty much neck and neck. Each nation claims 171 HPC systems in the latest rankings, together accounting for two-thirds of the list, according to TOP500.org. However, China holds the top two spots with its Sunway TaihuLight, at 93 petaflops, and Tianhe-2, at 34 petaflops.

Michael Feldman, managing editor of TOP500, wrote earlier this year about what he characterized as a four-way race to exascale supremacy between the United States, China, Japan and France. The United States, he wagers, is bringing up the rear of the pack, as most of the other contenders project delivery of an exascale machine by about 2020. He concedes that, with enough money and power, such a machine could be built today.

“But even with that, one would have to compromise quite a bit on computational efficiency, given the slowness of current interconnects relative to the large number of nodes that would be required for an exaflop of performance,” he writes. “Then there’s the inconvenient fact there are neither applications nor system software that are exascale-ready, relegating such a system to a gargantuan job-sharing cluster.”

Dimitri Kusnezov, chief scientist and senior advisor to the Secretary of the US Department of Energy, takes the long-term view when discussing exascale computing. What’s the use of all that speed, he argues, if you don’t know where you’re going?

“A factor of 10 or 100 in computing power does not give you a lot in terms of increasing the complexity of the problems you’re trying to solve,” he said during a phone interview with Singularity Hub.

“We’re entering a new world where the architecture, as we think of exascale, [is] not just faster and more of the same,” he explained. “We need things to not only do simulation, but we need [them] at the same time to reach deeply into the data and apply cognitive approaches — AI in some capacity — to distill from the data, together with analytical methods, what’s really in the data that can be integrated into the simulations to help with the class of problems we face.”

“There aren’t any architectures like that today, and there isn’t any functionality like that today,” he added.

In July 2015, the White House announced the National Strategic Computing Initiative, which established a coordinated federal effort in “high-performance computing research, development, and deployment.”

The DoE Office of Science and the DoE National Nuclear Security Administration are in charge of one cornerstone of that plan, the Exascale Computing Project (ECP), with involvement from the Argonne, Lawrence Berkeley, Oak Ridge, Los Alamos, Lawrence Livermore, and Sandia national laboratories.

Since September of this year, DoE has handed out nearly $90 million in awards as part of ECP.

More than half of the money will go toward what DoE calls four co-design centers. Co-design, it says, “requires an interdisciplinary engineering approach in which the developers of the software ecosystem, the hardware technology, and a new generation of computational science applications are collaboratively involved in a participatory design process.”

Another round of funds will support 15 application development proposals for full funding and seven proposals for seed funding, representing teams from 45 research and academic organizations. The modeling and simulation applications that were funded include projects ranging from “deep-learning and simulation-enabled precision medicine for cancer” to “modeling of advanced particle accelerators.”

The timeline — Feldman offers 2023 for a US exascale system — is somewhat secondary to functionality from Kusnezov’s perspective.

“The timeline is defined by the class of problems that we’re trying to solve and the demands they will have on the architecture, and the recognition that those technologies don’t yet exist,” he explains. “The timeline is paced by the functionality we’d like to include and not by the traditional benchmarks like LINPACK, which are likely not the right measures of the kinds of things we’re going to be doing in the future.

“We are trying to merge high-end simulation with big data analytics in a way that is also cognitive, that you can learn while you simulate,” he adds. “We’re trying to change not just the architecture but the paradigm itself.”
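For readers unfamiliar with the benchmark Kusnezov mentions, the sketch below shows in miniature how a LINPACK-style measurement works: time a dense linear solve and divide the standard operation count by the elapsed time. This is a single-node illustration using NumPy, not the distributed, heavily tuned HPL code actually used to rank TOP500 systems.

```python
# Minimal single-node sketch of a LINPACK-style FLOPS measurement:
# solve a dense system Ax = b and convert the timing into GFLOPS.
import time
import numpy as np

n = 4000                                   # problem size (tiny by HPC standards)
A = np.random.rand(n, n)
b = np.random.rand(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)                  # LU factorization plus triangular solves
elapsed = time.perf_counter() - start

flops = (2.0 / 3.0) * n**3 + 2.0 * n**2    # HPL's standard operation count
print(f"{flops / elapsed / 1e9:.1f} GFLOPS on this problem")
```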

Kusnezov says the US strategy is only one of many possible paths toward an exascale machine.

“There isn’t a single kind of architecture that will solve everything we want, so there isn’t a single unique answer that we’re all pushing toward. Each of the countries is driven by its own demands in some ways,” he says.

To illustrate his point about a paradigm shift, Kusnezov talks at length about President Barack Obama’s announcement during his State of the Union address earlier this year that the nation would pursue a cancer moonshot program. Supercomputers will play a key role in the search for a cure, according to Kusnezov, and the work has already forced DoE to step back and reassess how it approaches rich, complex data sets and computer simulations, particularly as it applies to exascale computing.

“A lot of the problems are societal, and getting an answer to them is [in] everyone’s best interest,” he notes. “If we could buy all of this stuff off the shelf, we would do it, but we can’t. So we’re always looking for good ideas, we’re always looking for partners. We always welcome the competition in solving these things. It always gets people to innovate — and we like innovation.”


Image Credit: Sam Churchill/Flickr

Peter Rejcek (https://www.peterrejcek.com/)
Formerly the world’s only full-time journalist covering research in Antarctica, Peter became a freelance writer and digital nomad in 2015. Peter’s focus for the last decade has been on science journalism, but his interests and expertise include travel, outdoors, cycling, and Epicureanism (food and beer). Follow him at @poliepete.