US requests proposals for next-gen Discovery supercomputer — will be up to five times faster than the world's fastest supercomputer, arrive in 2027

(Image credit: ORNL)

Last week, the Department of Energy (DOE) issued a request for proposals (RFP) to develop a new supercomputer named Discovery. It will replace Frontier, currently the fastest known supercomputer in the world, at Oak Ridge National Laboratory (ORNL). Discovery aims to surpass Frontier's performance, offering three to five times its computational throughput (up to roughly 8.5 ExaFLOPS) by 2027 or early 2028.
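For context on where the 8.5 ExaFLOPS figure comes from, here is the back-of-the-envelope arithmetic (a sketch assuming Frontier's theoretical peak of roughly 1.7 ExaFLOPS; the RFP itself states only the three-to-five-times multiplier):

$$3 \times 1.7\,\text{EFLOPS} \approx 5.1\,\text{EFLOPS} \quad\text{to}\quad 5 \times 1.7\,\text{EFLOPS} = 8.5\,\text{EFLOPS}$$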

ORNL mentions advanced AI and machine learning, along with comprehensive system modeling, among the workloads that will run on the Discovery supercomputer, and lists improved energy efficiency among its goals. Unlike previous RFPs, this one does not specify an exact performance target; it only requires the new supercomputer to be three to five times more powerful than its predecessor.

Proposals for Discovery are due by August 30, 2024.  

ORNL has a history of deploying the world's fastest supercomputers: Jaguar, Titan, and Summit each topped the Top500 list in different years, and Frontier is the world's No. 1 supercomputer today. In fact, over the past decade, the facility has increased its computational power 500-fold while only quadrupling energy consumption, a roughly 125-fold improvement in performance per watt.

Anton Shilov
Contributing Writer

Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

  • ttquantia
    "Faster" has not been the term used in comparing supercomputers in the last decades. It used to be, back then when Cray XMP or similar was much faster than mass-produced CPUs. Nowadays the most descriptive measure is simply the number of CPUs and GPUs (together with information of how capable those GPUs are.) Which of course everybody understands.
  • Taslios
    ttquantia said:
    "Faster" has not been the term used in comparing supercomputers in the last decades. It used to be, back then when Cray XMP or similar was much faster than mass-produced CPUs. Nowadays the most descriptive measure is simply the number of CPUs and GPUs (together with information of how capable those GPUs are.) Which of course everybody understands.
    Except speed is exactly how Top500.org classifies supercomputers... Exascale means the system is capable of a quintillion (10^18) calculations per second, and calculations per second is a measurement of speed.
  • Taslios
    With all the reported problems and delays on Aurora... I wonder if Intel will even apply.