Scientists used a now-retired supercomputer to prep for NASA’s Roman mission — Argonne’s Theta supercomputer created nearly four million simulated images
"Using Argonne’s now-retired Theta machine, we accomplished in about nine days what would have taken around 300 years on your laptop."
Ahead of NASA's upcoming launch of the Nancy Grace Roman Space Telescope, researchers used the Argonne Theta supercomputer to run OpenUniverse simulations of the cosmos. The simulations model observations from both the yet-to-launch Roman telescope and the ground-based Vera C. Rubin Observatory in Chile. According to Jim Chiang, who helped create the simulations, "OpenUniverse lets us calibrate our expectations of what we can discover with these telescopes [...by giving] us a chance to exercise our processing pipelines, better understand our analysis codes, and accurately interpret the results so we can prepare to use the real data right away once it starts coming in."
As cutting-edge and high-concept as this may sound, OpenUniverse is an open-source, OpenGL-based solar system simulator that has existed for about 24 years; the classic version also inspired other planetarium software.
Of course, NASA's 2024 implementation of OpenUniverse is far more ambitious than the standard version, since it's intended for hard science. A 10-terabyte subset of the data from NASA's OpenUniverse 2024 runs has been released, with the remaining 390 terabytes still to be processed at the time of writing.
Katrin Heitmann, a cosmologist and deputy director of Argonne's High Energy Physics division who managed the project's supercomputer time, stated, "Using Argonne's now-retired Theta machine, we accomplished in about nine days what would have taken 300 years on your laptop. The results will shape Roman and Rubin's future attempts to illuminate dark matter and energy while offering other scientists a preview of the types of things they'll be able to explore using the data from the telescopes."
NASA's official post on the matter refers to the project as "A Cosmic Dress Rehearsal" for the researchers involved, one year ahead of the Rubin Observatory's activation in 2025 and three years ahead of Roman's launch in May 2027. In particular, Roman and Rubin are both meant to give us a fuller understanding of the dark energy driving our universe's accelerating expansion and the dark matter that helps fill it.
bit_user said:
Considering the operating costs of running it for those 9 days, it probably would've been cheaper to just rent the equivalent amount of compute power on AWS.
derekullo said:
bit_user said: Considering the operating costs of running it for those 9 days, it probably would've been cheaper to just rent the equivalent amount of compute power on AWS.
Was literally doing the math of how many laptops I'd need when I read that, lol.
Hard to tell if the laptop is an i3 with 4 threads or an i7 with 20+ threads.
300 years = 109,500 days
109,500 days / 9 days = 12,166 laptops
At an easy math cost of $1000 a laptop that's $12.1M in laptops!
AWS is probably easier since you don't need to figure out how to cluster 12,166 laptops, and you don't have to find a way to get rid of 12,166 laptops when the computation is done!
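For anyone who wants to rerun the thread's arithmetic, here is a minimal Python sketch. The 300-year and nine-day figures come from the article; the $1,000-per-laptop price is the thread's own "easy math" assumption.

```python
# Back-of-the-envelope check of derekullo's laptop math.
YEARS = 300                # Heitmann's laptop-equivalent runtime
DAYS_PER_YEAR = 365
RUN_DAYS = 9               # actual time on Theta
LAPTOP_COST_USD = 1_000    # thread's "easy math" assumption

laptop_days = YEARS * DAYS_PER_YEAR        # 109,500 laptop-days of work
laptops_needed = laptop_days // RUN_DAYS   # 12,166 laptops to finish in 9 days
hardware_cost = laptops_needed * LAPTOP_COST_USD

print(f"{laptop_days:,} laptop-days / {RUN_DAYS} days = {laptops_needed:,} laptops")
print(f"At ${LAPTOP_COST_USD:,} per laptop: ${hardware_cost:,} in laptops")
```

Running it reproduces the thread's numbers: 109,500 laptop-days, 12,166 laptops, and roughly $12.1M in hardware.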
bit_user said:
derekullo said: Was literally doing the math of how many laptops I'd need when I read that, lol.
For an old supercomputer to be that much faster, I figured they must be using an old laptop as a reference point.
derekullo said: 109,500 days / 9 days = 12,166 laptops
Good thinking! The next step would be to go from laptops to cores. So, if we assume the figure dated back to the supercomputer's heyday, then they would've probably been talking about dual-core laptops, which means about 25k cores. If we assume each AWS core is now twice as fast, then maybe we're back to 12k cores. If they use 192-vCPU instances (c6a.48xlarge?), they'd need about 64 of them for the nine-day run. I have no idea about AWS pricing... what would that work out to?
So, I naturally wanted to check our estimates and pulled up Theta's specifications. It has mostly CPU nodes and a handful of GPU ones.
https://www.alcf.anl.gov/alcf-resources/theta
They claim the CPU nodes have a total of 281k cores, so I was only off by an order of magnitude! However, if I'm reading the entry correctly, those cores come from 64-core, 1.3-GHz Intel Xeon Phi 7230 processors. Those are quad-thread, dual-issue Atom-class CPUs. Aside from each having 2x AVX-512 pipelines, those cores are basically trash. That would probably put the right answer at some small multiple of what I said.
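To put a rough number on bit_user's open question, here is a hedged sketch of the rental cost. The $7.34/hour figure is an assumed ballpark on-demand rate for a 192-vCPU c6a.48xlarge, not a quoted AWS price, so substitute the current rate before trusting the total.

```python
import math

# Rough sketch of bit_user's AWS estimate.
CORES_NEEDED = 12_166      # laptop-equivalents, halved once for faster modern cores
VCPUS_PER_INSTANCE = 192   # c6a.48xlarge
RUN_DAYS = 9
PRICE_PER_HOUR = 7.34      # USD; assumed ballpark on-demand rate, not a quoted price

instances = math.ceil(CORES_NEEDED / VCPUS_PER_INSTANCE)  # 64 instances
cost = instances * RUN_DAYS * 24 * PRICE_PER_HOUR

print(f"{instances} instances for {RUN_DAYS} days ~ ${cost:,.0f}")
```

At that assumed rate, the nine-day rental lands near $100,000, comfortably below the thread's roughly $12.1M pile-of-laptops figure, which supports bit_user's original point.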