Tuolumne is El Capitan's Little Brother with 200+ PetaFLOPS Performance


Earlier this week, the Lawrence Livermore National Laboratory announced that it has begun building its El Capitan supercomputer, which is designed to deliver more than 2 FP64 ExaFLOPS of compute performance for classified national security research. In addition to El Capitan, LLNL will get a smaller companion system called Tuolumne for unclassified research, delivering 10% to 15% of El Capitan's performance.

"We are planning to get an unclassified system that will be called Tuolumne," said Bronis R. de Supinski in an interview with ExaScaleProject.org. De Supinski is the chief technology officer for Livermore Computing at LLNL. "It will be roughly between 10% to 15% the size of El Capitan."

El Capitan, the first exascale supercomputer based on AMD's accelerated processing units, which combine Zen 4 general-purpose cores with CDNA 3-based compute GPUs, promises to deliver performance north of 2 ExaFLOPS, significantly higher than Frontier's Rpeak of 1.679 FP64 ExaFLOPS.

Even 10% of 2 ExaFLOPS, or 200 PetaFLOPS, would make Tuolumne one of the top 10 supercomputers on the current Top500 list. At 15% of 2 ExaFLOPS, the machine would challenge Leonardo, an Intel Xeon Platinum 8358 and Nvidia A100-based system with an Rpeak of 304.47 PetaFLOPS.
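As a quick sanity check on those figures, here is a minimal back-of-the-envelope sketch in Python. It assumes El Capitan lands at exactly 2 ExaFLOPS (2,000 PetaFLOPS) and compares against Leonardo's published Rpeak; the variable names are purely illustrative.

    # Rough estimate of Tuolumne's FP64 peak at 10% and 15% of El Capitan,
    # assuming El Capitan delivers exactly 2 ExaFLOPS (2,000 PetaFLOPS).
    EL_CAPITAN_PFLOPS = 2000.0
    LEONARDO_RPEAK_PFLOPS = 304.47  # Leonardo's published Rpeak

    for fraction in (0.10, 0.15):
        tuolumne_pflops = EL_CAPITAN_PFLOPS * fraction
        verdict = "above" if tuolumne_pflops > LEONARDO_RPEAK_PFLOPS else "below"
        print(f"{fraction:.0%} of El Capitan = {tuolumne_pflops:.0f} PetaFLOPS, "
              f"{verdict} Leonardo's {LEONARDO_RPEAK_PFLOPS} PetaFLOPS Rpeak")

    # Prints roughly:
    # 10% of El Capitan = 200 PetaFLOPS, below Leonardo's 304.47 PetaFLOPS Rpeak
    # 15% of El Capitan = 300 PetaFLOPS, below Leonardo's 304.47 PetaFLOPS Rpeak

In other words, the low end of the range would already place Tuolumne in the top 10, while the high end puts it just shy of Leonardo's peak.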

Classified supercomputers, like LLNL's current Sierra and forthcoming El Capitan, are used for national security work. El Capitan, for example, is expected to be used primarily for stockpile stewardship, the U.S. program for maintaining and certifying the reliability of nuclear weapons without live detonation tests. By contrast, unclassified supercomputers such as LLNL's Tuolumne handle computational workloads across multiple fields, including scientific research, engineering simulations, data analysis, and weather forecasting.

"Tuolumne will be contributing more to the wider range of scientific areas," de Supinski added. "There's a lot of materials modeling. We have typically had a wide range of molecular dynamics. Some QCD get run on the system, seismic modeling. What will probably happen is that, you know, those sorts of applications, climate, and that sort of thing will run on Tuolumne. And if there is a particular case to be made, we can occasionally provide for briefer runs on the big system."

Anton Shilov
Freelance News Writer

Anton Shilov is a Freelance News Writer at Tom’s Hardware US. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

  • NeoMorpheus
Another awesome win for AMD!


  • bit_user
    El Capitan is expected to be used primarily for stockpile stewardship, the U.S. program of nuclear weapon reliability testing and maintenance without the use of actual testing.
    I find it a little amusing that we've been hearing this same justification of the DoE's supercomputers for a few decades, at least. You'd think that if the first few generations of such machines were capable of answering these questions, a modern desktop with a decent dGPU should be all it takes in this day and age.

    That's probably a very simplistic and ignorant take. If anyone can enlighten us on why 2 ExaFLOPS of compute power is needed for nuclear stockpile stewardship, please be my guest.

    BTW, I have no doubt the big machine will get plenty of good use. I just wish they'd drop their former pretense, if that's all it is.
  • Allen_B
    bit_user said:
    I find it a little amusing that we've been hearing this same justification of the DoE's supercomputers for a few decades, at least. You'd think that if the first few generations of such machines were capable of answering these questions, a modern desktop with a decent dGPU should be all it takes in this day and age.

    That's probably a very simplistic and ignorant take. If anyone can enlighten us on why 2 ExaFLOPS of compute power is needed for nuclear stockpile stewardship, please be my guest.

    BTW, I have no doubt the big machine will get plenty of good use. I just wish they'd drop their former pretense, if that's all it is.
That seems like a fair question. By now we ought to be fairly confident that existing warheads will go bang! if ever called upon.

One guess for why we might need so much additional compute power is to design new weapons. There has always been a desire to reduce size and weight, as well as to reduce the amount of fissile material needed. Plus other optimizations such as reducing radioactive fallout, edging closer to a pure fusion design, etc. There's probably plenty of work to go around.