Japanese Arm-Based Supercomputer Fugaku Is Now World's Most Powerful

(Image credit: Top500)

There's a new kid on the block: Fugaku, a Japanese Arm-based supercomputer that is now the world's most powerful. It is significantly faster than every other supercomputer operating today, and it is the first Arm-based system to claim the top spot.

The system is installed at the RIKEN Center for Computational Science in Kobe, Japan, and posted a High-Performance Linpack (HPL) score of 415.5 petaflops, with a peak performance of about 513 petaflops. In single-precision operations, the system surpasses the 1-exaflop mark.

Powering Fugaku are a staggering 152,064 of Fujitsu's 48-core A64FX SoCs (systems-on-chip), which tally up to a total of roughly 7.3 million CPU cores. The chips run at 2.0 GHz with a boost to 2.2 GHz, and each carries 32 GB of HBM2 memory.
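As a quick sanity check, the system-wide totals follow directly from the per-chip figures above (a back-of-the-envelope sketch using only the numbers quoted in this article):

```python
# Back-of-the-envelope totals for Fugaku from the per-chip figures above.
chips = 152_064          # Fujitsu A64FX SoCs
cores_per_chip = 48
hbm2_per_chip_gb = 32

total_cores = chips * cores_per_chip
total_hbm2_tb = chips * hbm2_per_chip_gb / 1024  # GB -> TB

print(f"Total CPU cores: {total_cores:,}")     # 7,299,072 (~7.3 million)
print(f"Total HBM2: {total_hbm2_tb:,.0f} TB")  # 4,752 TB (~4.75 PB)
```

The core count lands exactly on the 7,299,072 figure in the Top500 table below, and the aggregate HBM2 works out to roughly 4.75 petabytes.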

System             Cores        Linpack Performance
Fugaku             7,299,072    415.5 petaflops
Summit             2,414,592    148.6 petaflops
Sierra             1,572,480    94.6 petaflops
Sunway TaihuLight  10,649,600   93.0 petaflops
Tianhe-2A          4,981,760    61.4 petaflops

For comparison, IBM's Summit, which had topped the list since 2018, posted a Linpack score of 148.6 petaflops, making the Arm-based Fugaku about 2.8 times faster than its American competitor. But Fugaku also draws about 2.8 times as much power, at a total of roughly 28 megawatts.
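Those two ratios cancel out in an interesting way: by the article's own figures, the two machines end up at roughly the same energy efficiency. A rough sketch (note that Summit's ~10 MW here is inferred from the 2.8x ratio quoted above, not an exact Top500 measurement):

```python
# Rough efficiency comparison using the figures quoted in the article.
# Summit's power is inferred from the ~2.8x ratio (~10 MW), so treat
# these as ballpark numbers, not official Top500 measurements.
fugaku_pflops, fugaku_mw = 415.5, 28.0
summit_pflops, summit_mw = 148.6, 28.0 / 2.8

def gflops_per_watt(pflops, mw):
    # petaflops -> gigaflops is *1e6; megawatts -> watts is *1e6
    return (pflops * 1e6) / (mw * 1e6)

print(f"Fugaku: {gflops_per_watt(fugaku_pflops, fugaku_mw):.1f} GFLOPS/W")
print(f"Summit: {gflops_per_watt(summit_pflops, summit_mw):.1f} GFLOPS/W")
```

Both systems land at roughly 15 GFLOPS per watt, so Fugaku's 2.8x performance lead comes from scale rather than a dramatic efficiency advantage.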

Not long ago, Intel claimed that Aurora would be the first supercomputer to break the exaflop barrier, though that system isn't expected to enter operation until 2021 at the earliest.

Meanwhile, Folding@home briefly broke the exaflop barrier back in March, as many donors set up their home PCs to contribute their spare compute cycles to coronavirus research. But that distributed network isn't officially a supercomputer, so it never made it onto the Top500 list.

Niels Broekhuijsen

Niels Broekhuijsen is a Contributing Writer for Tom's Hardware US. He reviews cases, water cooling, and PC builds.

  • DZIrl
    Now I see guys telling how ARM is more powerful than x86 or P9.
    Summit has 202,752 cores at only 13 MW. Fugaku has 7,299,072 cores at 28 MW. Fugaku is 2.8 times more powerful but has 36 times more cores!
    Also Summit has 27648 NVidia V100 each at about 300W!
    Reply
  • JarredWaltonGPU
    DZIrl said:
    Now I see guys telling how ARM is more powerful than x86 or P9.
    Summit has 202,752 cores at only 13 MW. Fugaku has 7,299,072 cores at 28 MW. Fugaku is 2.8 times more powerful but has 36 times more cores!
    Also Summit has 27648 NVidia V100 each at about 300W!
    It's worth noting that Top500 counts Nvidia SMs (in GPUs) as one "core" each. Fugaku has no GPUs but lots of CPUs. Summit has far fewer CPUs, but it also has six V100 GPUs per 2 Power9 CPUs, and each GPU counts as 80 'cores' -- so it still has 2,414,592 cores total, by Top500 metrics where 1 Nvidia SM = 1 core, 1 AMD CU = 1 core, and 1 CPU = 1 core.
    Reply
  • gamenadez
    The Question...
    Can it run Crysis?
    Reply
  • nofanneeded
    more powerful yes , but at a cost .... how many cores again ? there should be performance/cores comparisons
    Reply
  • Adz_au
    nofanneeded said:
    more powerful yes , but at a cost .... how many cores again ? there should be performance/cores comparisons
    What do you mean by "at a cost"? Cost is not the primary reasoning here. The first real supercomputers required liquid cooling. Who puts that kind of money into a computer building's cooling requirements?

    It's No. 1 in compute. Is it extravagant? Yes, but who cares!
    No. 1

    Intel used to make CPUs you could cook an egg on once upon a time.

    Good effort I say.
    Reply
  • bit_user
    Not long ago, Intel also claimed that the Aurora would be the first supercomputer to break the exaflop barrier, though that system is only expected to enter operation in 2021 at the earliest.
    It still could be. HPC systems are rated in terms of double-precision, so Fugaku wouldn't really be considered to have broken the exaflops barrier.
    Reply
  • bit_user
    gamenadez said:
    The Question...
    Can it run Crysis?
    No. Or maybe in an emulator, and badly.

    It's not based on GPUs, so the graphics rendering backend would be running on CPU cores.

    As for the main game logic, that would have to run in an x86 emulator.

    Even so, I'd imagine it would be practically limited to running on just one 48-core chip. So, not even worth thinking about.
    Reply
  • bit_user
    nofanneeded said:
    more powerful yes , but at a cost ....
    Well, it uses a fully-custom CPU design, so that's going to skew costs by a lot.

    For Japan, having their own homegrown HPC is surely a matter of strategic importance. So, they probably don't mind subsidizing it.

    nofanneeded said:
    how many cores again ? there should be performance/cores comparisons
    Top500 has more details.
    Reply
  • CerianK
    gamenadez said:
    The Question...
    Can it run Crysis?
    That question is archaic. The new question, (you heard it here first), is: 'Can it be Crysis'?
    AI Learns to be PacMan
    Reply
  • bit_user
    CerianK said:
    That question is archaic. The new question, (you heard it here first), is: 'Can it be Crysis'?
    AI Learns to be PacMan
    My favorite part about that:

    the AI network that generated the 50,000 Pac-Man games for training is actually really good at Pac-Man, so it rarely died. That caused GameGAN to not fully comprehend that a normal ghost can catch Pac-Man and kill it. At one point, the network would 'cheat' and turn a ghost purple when it reached Pac-Man, or allow the ghost to pass through Pacman with no ill effect, or other anomalous behavior. Additional training is helping to eliminate this.

    ...and we're talking Pac Man, here. So, good luck with it learning to plausibly simulate anything much more complex.

    Also:
    The GameGAN version of Pac-Man also targets a low output resolution of only 128 x 128 pixels right now. That's an even lower resolution than the original arcade game (224 x 288).

    ...far from playing at 4k - I hope you don't mind squinting.

    Suffice to say, I don't expect "being Crysis" is going to be a thing, anytime soon.

    Maybe someone will create a far more sophisticated model that's specifically designed for 3D game simulation and is a lot easier to train, but that starts to feel more like programming and less like machine learning.
    Reply
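JarredWaltonGPU's core-counting explanation above can be checked arithmetically. A minimal sketch; the per-node figures (4,608 nodes, each with two 22-core Power9 CPUs and six V100 GPUs of 80 SMs apiece) come from public Summit specifications, not from this article:

```python
# Reproducing Summit's Top500 core count under the "1 Nvidia SM = 1 core"
# convention described in the comments. Node/CPU/GPU figures are taken
# from public Summit specs (assumed: 4,608 nodes, 2x 22-core Power9,
# 6x V100 with 80 SMs each per node).
nodes = 4_608
power9_cores = 2 * 22   # two Power9 CPUs, 22 cores each
v100_sms = 6 * 80       # six V100 GPUs, 80 SMs each

summit_cores = nodes * (power9_cores + v100_sms)
print(f"{summit_cores:,}")  # 2,414,592 -- matches the Top500 table
```

The result matches the 2,414,592 figure in the table above, which supports the point that most of Summit's "cores" in the Top500 tally are GPU SMs rather than CPU cores.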