Twice a year, the world’s fastest supercomputers take a test to see which is top of class.

These hundred-million-dollar machines usually run on hundreds of thousands of processors, occupy warehouse floors, gobble up copious amounts of energy, and crunch numbers at an ungodly pace. All that computing is directed at some of humanity’s toughest challenges, like advanced climate modeling or protein simulations to help cure diseases.

For the last two years, the US’s Summit was the fastest supercomputer on the planet. But this week, a new system took the crown. Running 2.8 times faster than Summit, Japan’s Fugaku notched a blistering 415 petaflops as measured by Top500’s High Performance Linpack (HPL) benchmark.

That means Fugaku completes a simple mathematical operation 415 quadrillion times a second. You’d need every person on Earth to complete a calculation a second for 20 months—no bathroom breaks—to match what Fugaku does in a heartbeat.
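The back-of-the-envelope math behind that comparison is easy to check. A quick sketch, assuming a 2020 world population of roughly 7.8 billion (a figure not given in the article):

```python
# Sanity-check the claim: one second of Fugaku vs. all of
# humanity doing one calculation per second each.
FUGAKU_FLOPS = 415e15   # 415 petaflops (415 quadrillion ops/sec)
WORLD_POP = 7.8e9       # assumed world population, ~7.8 billion

# Seconds of whole-planet effort to match one Fugaku-second
seconds_needed = FUGAKU_FLOPS / WORLD_POP

# Convert to months (using an average month of 365/12 days)
months_needed = seconds_needed / (60 * 60 * 24 * 365 / 12)

print(f"{months_needed:.1f} months")  # on the order of 20 months
```

That works out to a bit over 600 days of round-the-clock, planet-wide arithmetic, which is where the 20-month figure comes from.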

Japan last claimed the top spot with its K computer in 2011. Developed by the Riken institute and Fujitsu, Fugaku took a billion dollars and the better part of the ensuing decade to build. It’s notable because it doesn’t use graphics processing units (GPUs), like many of its competitors, and it’s the first top supercomputer to use Arm processors—an efficient chip design commonly used in mobile devices.

In addition to all that, Fugaku is insanely fast at machine learning.

World’s Top AI Brainiac

While supercomputers have historically been mostly about military and scientific research—and Fugaku has already been crunching coronavirus data—they’re also increasingly being tailored to run machine learning algorithms. Indeed, Summit, the machine Fugaku just dethroned, was designed from the ground up with AI in mind.

Likewise, Fugaku will be an unparalleled AI brainiac.

By a new measure, HPL-AI, Fugaku was able to do the kind of calculations used in today’s machine learning algorithms at a speed of 1.4 exaflops. That mark is the fastest in the world.

Exascale computing by more traditional measures (that is, by HPL, not HPL-AI) is the next big computing milestone, anticipated for over a decade. The first such systems are expected next year or the year after. But for machine learning, Fugaku is already there.

That’s significant because AI researchers are scaling up machine learning algorithms at a quick pace. OpenAI, for instance, recently pulled back the curtains on a massive new machine learning algorithm for natural language processing called GPT-3. The algorithm is notable for its size, 175 billion parameters, as well as its ability to learn and perform a range of tasks.

OpenAI also partnered with Microsoft to fund and build a supercomputer dedicated to its machine learning efforts. Microsoft claimed (unofficially) that it would be the fifth fastest supercomputer in the world, though the system has surely slipped a spot courtesy of Fugaku.

Whether computing power alone is enough to fuel continued machine learning breakthroughs is a source of debate, but it seems clear we’ll be able to test the hypothesis.

Supercomputing Powers

No matter if it’s for the military, science, or AI, the world of supercomputers is still a competitive affair.

China and the US have traded the top supercomputer spot for almost a decade. The US has recently occupied the top two spots, but China was tops for nearly five years with its Sunway TaihuLight and Tianhe-2A systems. In terms of total Top500 machines, China eclipsed the US in 2016 and hasn’t looked back. The country has 226 supercomputers to the US’s 114.

It’s no match for the US and China, but Japan is no supercomputing slouch.

The country claims third for total supercomputers on the Top500 list with 30. And despite having fewer systems, Japan’s 530 petaflops total—thanks largely to Fugaku—is now just behind China’s 565 petaflops and the US’s 644 petaflops. Even so, Fugaku may not occupy the throne very long.

Two US exascale systems are due next year: Aurora, being built by Intel for Argonne National Laboratory, and Frontier, being built by Cray and AMD for Oak Ridge National Laboratory. A third, Lawrence Livermore National Laboratory’s El Capitan, is slated to arrive the year after. China also has three exascale systems in the works, one of which may be completed next year.

Pandemic Computing

As the world awaits this impending crop of exascale supercomputers, there’s plenty to be done today.

Motivated by the coronavirus pandemic, Riken put Fugaku into operation a year ahead of its original 2021 launch date. The system is being used for research into treatments as well as mapping how the virus is transmitted and can be slowed.

Satoshi Matsuoka, director of the Riken Center for Computational Science, said Fugaku’s speed is already on display. Referencing a study into Covid-19’s infamous spike protein, Matsuoka said what would have taken Fugaku’s predecessor, the K computer, days or weeks took just three hours on Fugaku. And it isn’t alone.

The Covid-19 High Performance Computing Consortium has assembled 41 supercomputers capable of 483 petaflops to work on 66 projects, including some studying the virus’s biology, potential treatments, and how to improve patient care.

With its early launch, Fugaku is technically still in testing mode, but the machine should be fully operational in 2021. It’ll be amazing to see what it can do at full strength.

Image credit: Fujitsu

Jason is managing editor of Singularity Hub. He did research and wrote about finance and economics before moving on to science, technology, and the future. He is curious about pretty much everything, and sad he'll only ever know a tiny fraction of it all.