Supermassive Black Hole Consumed 100 Million CPU Hours
The equivalent of 11,415 years of compute.
Yesterday's image showcasing the supermassive black hole at the center of our galaxy, the Milky Way, owes its existence to human ingenuity - and to our old friend, the CPU. It was achieved thanks to a five-year research partnership between the Event Horizon Telescope (EHT) array, the Frontera supercomputer at the Texas Advanced Computing Center (TACC) and NSF's Open Science Grid. The image of Sagittarius A* (pronounced "Sagittarius A-star") reignites the dreams and wonder of our universe: the black hole sits a cool 27,000 light-years from Earth and is estimated to be four million times more massive than the sun.
The galactic-level task took around 100 million CPU hours and the concerted efforts of 300-plus researchers to coalesce into the released image. But how does one "see" a black hole so massive that its gravitational pull traps even light? Strictly speaking, one doesn't: no light escapes the event horizon. What can be seen is the black hole's silhouette, outlined by the glow of superheated matter orbiting just outside the event horizon. To capture it, the researchers made use of the radio wave-based interferometry of the EHT array, which combines eight radio telescopes deployed around the globe into a single virtual instrument. But scanning impossibly distant celestial bodies comes with a number of caveats - exposure time (in this case, the cosmic equivalent of photographing a tree with a one-second shutter speed on a windy day), data noise, particle interference and intervening celestial bodies - all of which has to be accounted for.
To that end, the researchers created a simulation library of black holes that leveraged the known physical properties of black holes, general relativity, and a number of other scientific areas. The idea was that this library could help parse the enormous amount of data captured by the EHT array into an actual, viewable image - but doing so demanded an enormous amount of computing power.
“We produced a multitude of simulations and compared them to the data. The upshot is that we have a set of models that explain almost all of the data,” said Charles Gammie, a researcher at the University of Illinois at Urbana-Champaign. “It’s remarkable because it explains not only the Event Horizon data, but data taken by other instruments. It’s a triumph of computational physics.”
The vast majority of the required computing hours - around 80 million - were run on TACC's Frontera system, a 23.5-petaflops, CentOS Linux 7-based Dell system currently ranking 13th on supercomputing's Top500 list. Frontera musters 448,448 CPU cores courtesy of 16,016 of Intel's Xeon Platinum 8280 chips, a Cascade Lake-class CPU with 28 cores running at 2.7 GHz. The remaining 20 million simulation hours were computed on NSF's Open Science Grid, which harnesses unused CPU cycles in a distributed computing fashion to unlock compute capabilities without the need to deploy costly supercomputers and related infrastructure.
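For a sense of scale, the headline figures can be sanity-checked with a few lines of arithmetic - a quick sketch of our own, assuming an idealized, perfectly parallel workload rather than anything from TACC or the EHT team:

```python
# Back-of-the-envelope check of the article's figures (our own arithmetic,
# not official TACC/EHT numbers; assumes a perfectly parallel workload).

CHIPS = 16_016            # Xeon Platinum 8280 sockets in Frontera
CORES_PER_CHIP = 28       # cores per socket
HOURS_PER_YEAR = 24 * 365

total_cores = CHIPS * CORES_PER_CHIP
print(total_cores)                     # 448448 -- matches the quoted core count

cpu_hours = 100_000_000                # total across Frontera + Open Science Grid
print(cpu_hours / HOURS_PER_YEAR)      # ~11415.5 -- the subhead's "11,415 years"

# If that work could be spread perfectly across every Frontera core:
wall_hours = cpu_hours / total_cores
print(wall_hours, wall_hours / 24)     # ~223 hours, i.e. roughly 9.3 days
```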
“We were stunned by how well the size of the ring agreed with predictions from Einstein’s Theory of General Relativity,” added Geoffrey Bower, an EHT project scientist with the Institute of Astronomy and Astrophysics, Academia Sinica, Taipei. “These unprecedented observations have greatly improved our understanding of what happens at the very center of our galaxy and offer new insights on how these giant black holes interact with their surroundings.”
The researchers' efforts are sure to redouble after the endeavor's success, and they're now planning something even more extraordinary: rather than a single still image, the next step is to film the black hole over time, capturing the swirl of matter around it to showcase the black hole's dynamics. One can only wonder how many millions of CPU hours that effort will take.
Francisco Pires is a freelance news writer for Tom's Hardware with a soft side for quantum computing.
-
jeremyj_83 There are typos in the article "All of this is a cool 55 million light-years from Earth." Sagittarius A* is only 25,640 light years from Earth. You have put in 2000x the distance in the article. The distance to the supermassive black hole in the middle of M87*, imaged in 2019, is ~55 million light years.
"Frontera leverages 448,448 CPU cores courtesy of 16,016 units of Intel's Xeon Platinum 8280 chips, a Broadwell-class CPU leveraging 28 Intel cores running at 2.7GHz." Frontera uses Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz Number of cores: 16 per socket, 32 per node
https://www.tacc.utexas.edu/systems/frontera -
Giroro What is a CPU hour? I have no context as to what amount of work that is.
Is that calculated per core, thread, or socket? How do you normalize between CPUs with wildly different performance?
How many CPU hours can, say, a Ryzen 9 5950x work in 1 hour? -
hotaru.hino
Giroro said:
What is a CPU hour? I have no context as to what amount of work that is.
Is that calculated per core, thread, or socket? How do you normalize between CPUs with wildly different performance?
How many CPU hours can, say, a Ryzen 9 5950x work in 1 hour?
It just means using one CPU for one hour. We could probably assume a CPU is a core, since when doing CPU-bound work, a core is basically another CPU. It's a similar metric to a man-hour, which is literally one person working for one hour. The actual time to completion doesn't matter and performance is ignored; you simply adjust the amount of time you think you'll need if the performance is actually better or worse than some baseline you're using. For instance, if you estimate 10 CPU hours of work to get something done, you can get it done in 2 hours by scheduling 5 CPUs to do the work. Or if you're running on higher-performing machines than when you made the estimate, you could reduce this to 8 CPU hours.
So basically, it's just saying they spent a cumulative total of 100 million hours crunching numbers.
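In code, that bookkeeping is just division - a toy sketch with made-up numbers, assuming the work splits perfectly:

```python
# Toy CPU-hour bookkeeping (made-up numbers, perfectly parallel work assumed).

def wall_clock_hours(cpu_hours: float, cpus: int) -> float:
    """Ideal wall-clock time if the work splits evenly across CPUs."""
    return cpu_hours / cpus

estimate = 10.0                       # estimated CPU hours of work
print(wall_clock_hours(estimate, 5))  # 2.0 -- five CPUs finish in two hours

# On machines ~25% faster than the baseline, the same job burns fewer
# CPU hours even though the work itself hasn't changed:
print(estimate / 1.25)                # 8.0 CPU hours
```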
EDIT: I realized this doesn't really explain why things are measured this way. The basic reason for all of this is simple: running stuff on an HPC, server, etc., is billed by the time you use it. HPCs and the like are typically time-shared; the researchers don't own any of the computers that do the heavy lifting, so to speak. So for the purposes of accounting, you have to estimate how many hours you think you'll need on a system to do the work. Then the bean counters and proposal people can go "okay, we need $XXXX for compute costs" -
Nabushika
jeremyj_83 said:
There are typos in the article "All of this is a cool 55 million light-years from Earth." Sagittarius A* is only 25,640 light years from Earth. You have put in 2000x the distance in the article. The distance to the supermassive black hole in the middle of M87*, imaged in 2019, is ~55 million light years.
"Frontera leverages 448,448 CPU cores courtesy of 16,016 units of Intel's Xeon Platinum 8280 chips, a Broadwell-class CPU leveraging 28 Intel cores running at 2.7GHz." Frontera uses Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz Number of cores: 16 per socket, 32 per node
https://www.tacc.utexas.edu/systems/frontera
It's not just typos...
"... comparably minute amount of light that actually manages to escape its event horizon..."
Apparently whoever wrote this article doesn't understand how black holes work either. It would have taken 10 minutes to educate themself about how the photo is taken and what's actually in it, but rather than that they just make up something wildly incorrect. -
thisisaname
Nabushika said:
It's not just typos...
"... comparably minute amount of light that actually manages to escape its event horizon..."
Apparently whoever wrote this article doesn't understand how black holes work either. It would have taken 10 minutes to educate themself about how the photo is taken and what's actually in it, but rather than that they just make up something wildly incorrect.
No light escapes the event horizon; that's why it's called a black hole.
The light we see - and it is quite bright, just very, very far away - is from matter that orbits the black hole (outside the event horizon) and has been accelerated to close to the speed of light.
So a black hole is both very dark and very bright! -
Colif i think it's easier to show
https://www.youtube.com/watch?v=Q1bSDnuIPbo
you would think we would have found their opposite form before now
https://www.youtube.com/watch?v=yhBVrX-Naug -
Giroro
hotaru.hino said:
It just means using one CPU for one hour. We could probably assume a CPU is a core, since when doing CPU-bound work, a core is basically another CPU. It's a similar metric to a man-hour, which is literally one person working for one hour. The actual time to completion doesn't matter and performance is ignored; you simply adjust the amount of time you think you'll need if the performance is actually better or worse than some baseline you're using. For instance, if you estimate 10 CPU hours of work to get something done, you can get it done in 2 hours by scheduling 5 CPUs to do the work. Or if you're running on higher-performing machines than when you made the estimate, you could reduce this to 8 CPU hours.
So basically, it's just saying they spent a cumulative total of 100 million hours crunching numbers.
EDIT: I realized this doesn't really explain why things are measured this way. The basic reason for all of this is simple: running stuff on an HPC, server, etc., is billed by the time you use it. HPCs and the like are typically time-shared; the researchers don't own any of the computers that do the heavy lifting, so to speak. So for the purposes of accounting, you have to estimate how many hours you think you'll need on a system to do the work. Then the bean counters and proposal people can go "okay, we need $XXXX for compute costs"
So, spread over ~450,000 cores, what this article is really saying is that all that math only actually took about 10 days? And that it probably would have gone a lot faster if they had been using newer CPUs? -
hotaru.hino
Giroro said:
So, spread over ~450,000 cores, what this article is really saying is that all that math only actually took about 10 days? And that it probably would have gone a lot faster if they had been using newer CPUs?
Yes. Assuming what I said in my post is accurate.
For all I know they consider a CPU an entire processor.