Japanese Arm-Based Supercomputer Fugaku Is Now World's Most Powerful
Its first job: Fight the Coronavirus
There's a new kid on the block: Fugaku, a Japanese, Arm-based supercomputer that is now the world's most powerful. It is significantly faster than any other supercomputer in operation today, and it's the first Arm-based system to take home the world's-fastest prize.
The system is installed at the RIKEN Center for Computational Science in Kobe, Japan. It achieved a High-Performance Linpack (HPL) score of 415.5 petaflops, with a peak performance of about 513 petaflops. In single-precision operations, the system surpasses the 1-exaflop mark.
Powering Fugaku are a staggering 152,064 of Fujitsu's 48-core A64FX SoCs (system-on-chip), for a total of roughly 7.3 million CPU cores. The chips run at 2.0 GHz with a boost to 2.2 GHz, and each carries 32 GB of HBM2 memory.
System | Cores | Linpack Performance
---|---|---
Fugaku | 7,299,072 | 415.5 petaflops
Summit | 2,414,592 | 148.6 petaflops
Sierra | 1,572,480 | 94.6 petaflops
Sunway TaihuLight | 10,649,600 | 93.0 petaflops
Tianhe-2A | 4,981,760 | 61.4 petaflops
For comparison, IBM's Summit, which had topped the list since June 2018, records a Linpack score of 148.6 petaflops, making the Arm-based Fugaku roughly 2.8 times faster than its American competitor. But Fugaku also draws about 2.8 times as much power, at a total of roughly 28 megawatts.
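As a back-of-the-envelope check, the ratios above can be recomputed from the published figures. A minimal Python sketch follows; note that Summit's roughly 10 MW draw is an assumption based on public Top500 data, since the article only states Fugaku's ~28 MW.

```python
# Figures for Fugaku come from the article; Summit's power draw (~10 MW)
# is an assumption from public Top500 data, not stated in the article.
fugaku = {"linpack_pflops": 415.5, "cores": 7_299_072, "megawatts": 28.0}
summit = {"linpack_pflops": 148.6, "cores": 2_414_592, "megawatts": 10.0}

speedup = fugaku["linpack_pflops"] / summit["linpack_pflops"]
power_ratio = fugaku["megawatts"] / summit["megawatts"]

print(f"Fugaku vs. Summit speedup:    {speedup:.1f}x")      # ~2.8x
print(f"Fugaku vs. Summit power draw: {power_ratio:.1f}x")  # ~2.8x
```

Both ratios land at about 2.8, which is why the article can say Fugaku is 2.8 times faster while also using about 2.8 times the power.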
Not long ago, Intel also claimed that Aurora would be the first supercomputer to break the exaflop barrier, though that system is only expected to enter operation in 2021 at the earliest.
Meanwhile, Folding@Home briefly broke the exaflop barrier back in March, as many donors set up their home PCs to contribute their spare resources to fighting the coronavirus. But that isn't officially a supercomputer, so it never made it onto the Top500 list.
Niels Broekhuijsen is a Contributing Writer for Tom's Hardware US. He reviews cases, water cooling and PC builds.
DZIrl
Now I see guys telling how Arm is more powerful than x86 or P9.
Summit has 202,752 cores at only 13 MW. Fugaku has 7,299,072 cores at 28 MW. Fugaku is 2.8 times more powerful but has 36 times more cores!
Also, Summit has 27,648 Nvidia V100s at about 300 W each!
JarredWaltonGPU
DZIrl said:
Now I see guys telling how Arm is more powerful than x86 or P9. Summit has 202,752 cores at only 13 MW. Fugaku has 7,299,072 cores at 28 MW. Fugaku is 2.8 times more powerful but has 36 times more cores! Also, Summit has 27,648 Nvidia V100s at about 300 W each!

It's worth noting that Top500 counts Nvidia SMs (in GPUs) as one "core" each. Fugaku has no GPUs but lots of CPUs. Summit has far fewer CPUs, but it also has six V100 GPUs per two Power9 CPUs, and each GPU counts as 80 "cores" -- so it still has 2,414,592 cores total by Top500 metrics, where 1 Nvidia SM = 1 core, 1 AMD CU = 1 core, and 1 CPU core = 1 core.
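That core-counting convention can be sketched in a few lines of Python. The 4,608-node figure is an assumption from Summit's publicly listed configuration; the comment itself doesn't state it.

```python
# Top500's core-counting convention applied to Summit, per the comment
# above: each Nvidia SM counts as one "core". The node count (4,608) is
# assumed from Summit's public specs, not given in the thread.
nodes = 4608
cpu_cores_per_node = 2 * 22   # two 22-core Power9 CPUs
gpu_cores_per_node = 6 * 80   # six V100 GPUs, 80 SMs each, 1 SM = 1 "core"

total_cores = nodes * (cpu_cores_per_node + gpu_cores_per_node)
print(total_cores)  # 2414592 -- matches Summit's Top500 core count
```

Under those assumptions the arithmetic lands exactly on the 2,414,592 cores shown in the table above.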
nofanneeded
More powerful, yes, but at a cost... how many cores again? There should be performance/cores comparisons.
Adz_au
nofanneeded said:
more powerful yes , but at a cost .... how many cores again ? there should be performance/cores comparisons

What do you mean by "at a cost"? Cost is not the primary reasoning here. The first real supercomputers required liquid-nitrogen cooling. Who puts that kind of money into a computer building's cooling requirements?
It's No. 1 in compute. Is it extravagant? Yes, but who cares!
No. 1.
Intel used to make CPUs you could cook an egg on, once upon a time.
Good effort, I say.
bit_user
The article said:
Not long ago, Intel also claimed that Aurora would be the first supercomputer to break the exaflop barrier, though that system is only expected to enter operation in 2021 at the earliest.

It still could be. HPC systems are rated in terms of double-precision performance, so Fugaku wouldn't really be considered to have broken the exaflops barrier.
bit_user
gamenadez said:
The Question... Can it run Crysis?

No. Or maybe, in an emulator, and badly.
It's not based on GPUs, so the graphics rendering backend would be running on CPU cores.
As for the main game logic, that would have to run in an x86 emulator.
Even so, I'd imagine it would be practically limited to running on just one 48-core chip. So, not even worth thinking about.
bit_user
nofanneeded said:
more powerful yes , but at a cost ....

Well, it uses a fully custom CPU design, so that's going to skew costs by a lot.
For Japan, having its own homegrown HPC is surely a matter of strategic importance. So, they probably don't mind subsidizing it.

nofanneeded said:
how many cores again ? there should be performance/cores comparisons

Top500 has more details.
CerianK
gamenadez said:
The Question... Can it run Crysis?

That question is archaic. The new question (you heard it here first) is: "Can it be Crysis?"
AI Learns to be PacMan
bit_user
CerianK said:
That question is archaic. The new question (you heard it here first) is: "Can it be Crysis?" AI Learns to be PacMan

My favorite part about that:
the AI network that generated the 50,000 Pac-Man games for training is actually really good at Pac-Man, so it rarely died. That caused GameGAN to not fully comprehend that a normal ghost can catch Pac-Man and kill it. At one point, the network would 'cheat' and turn a ghost purple when it reached Pac-Man, or allow the ghost to pass through Pacman with no ill effect, or other anomalous behavior. Additional training is helping to eliminate this.
...and we're talking Pac-Man here. So, good luck with it learning to plausibly simulate anything much more complex.
Also:
The GameGAN version of Pac-Man also targets a low output resolution of only 128 x 128 pixels right now. That's an even lower resolution than the original arcade game (224 x 288).
...far from playing at 4K -- I hope you don't mind squinting.
Suffice it to say, I don't expect "being Crysis" to be a thing anytime soon.
Maybe someone will create a far more sophisticated model that's specifically designed for 3D game simulation and is a lot easier to train, but that starts to feel more like programming and less like machine learning.