Intel, AMD, Nvidia Claim Greenest Supercomputer Technology
The world's most prestigious supercomputers are usually spotlighted in the Top500 list of the world's fastest systems.
But there is also a similar, though less well-known, listing that shows tremendous progress in supercomputer power efficiency. Intel, AMD and Nvidia are the main proponents of this group.
ORNL's Titan may be the world's fastest supercomputer, but it is only the third most efficient, according to the Green500. The honor of being the greenest supercomputer goes to the University of Tennessee and its Beacon system, which is based on Xeon E5-2670 and Xeon Phi 5110P processors. The computer delivers 2,499.44 Mflops per watt.
In second is King Abdulaziz City for Science and Technology's SANAM supercomputer, based on Xeon E5-2650 and 420 dual-GPU AMD FirePro S10000 server graphics cards, with 2,351.10 Mflops per watt.
The ORNL Titan, which integrates AMD Opteron 6274 processors and Nvidia Tesla K20x graphics cards, posted 2,142.77 Mflops per watt.
The three systems cannot be compared in their absolute performance. Titan holds position #1 on the Top500 list; SANAM can be found at #52 and Beacon at #253.
Titan (560,640 CPU cores, 46.6 million Nvidia CUDA processors) delivers a sustained performance of 17.6 Pflops, while SANAM (38,400 CPU cores, 1.5 million AMD stream processors) is rated at 421 Tflops, and Beacon (9,216 CPU cores plus an undisclosed number of 60-core Xeon Phi 5110P cards) at 110.5 Tflops.
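Dividing each system's sustained performance by its Green500 efficiency rating gives a rough idea of its power draw. A quick sketch using the article's own numbers (results are approximate):

```python
# Implied power draw = sustained performance / efficiency.
# All figures are taken from the article above.
systems = {
    "Titan":  (17.6e15,  2142.77e6),  # (flops, flops per watt)
    "SANAM":  (421e12,   2351.10e6),
    "Beacon": (110.5e12, 2499.44e6),
}

for name, (flops, flops_per_watt) in systems.items():
    watts = flops / flops_per_watt
    print(f"{name}: ~{watts / 1e6:.2f} MW")
```

By this estimate Titan draws roughly 8.2 MW, while SANAM and Beacon sit near 0.18 MW and 0.04 MW respectively, which is why the three cannot be compared on absolute performance alone.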
Titan has 50,233,344 CUDA cores, Gruener, not 46 million. You've already published this number before, and I've already pointed it out to be inaccurate. Where are you getting this number from? It's in neither of the articles you've sourced.
It makes me question the accuracy of other news articles I don't know as much about, because I might not know any better. I think everyone would appreciate it if you and the rest of the news team would make an attempt to correct some of these mistakes from time to time, or at least acknowledge that you've made them.
Exactly: performance per watt, which is what efficiency means here. And the most powerful system was only #3 on efficiency. Therefore you may start seeing the most efficient processor setups in the newer supercomputers being built.
I'm not sure you read the article, but they listed the three most efficient systems: not which claim to be the most efficient, but which actually are, and all three of those brands were present in the top 3.
There's a certain point where the annual electricity cost for powering and cooling the supercomputer exceeds the setup cost of the supercomputer.
And I'm fairly sure we're already past that point.
Actually, IBM's absence on this list is rather notable, since their Blue Gene/Q system was the best in efficiency until just recently.
Where do you think you are? Engadget? THIS IS THG, they are never wrong; the industry just isn't sticking with THG's numbers. If THG says it's 46.6 million, then it's NVIDIA's fault for putting 50,233,344 in them.
If you program it to, sure, why not. If you mean a game straight off the DVD then no. Greatly simplified, a supercomputer consists of a whole bunch of individual computers working together. A bog-standard game would only get to use one of them as that's all it's programmed to do.
You must be thinking of the Raspberry Super-Pi, which hasn't been built yet. ;-)
Yep. Furthermore, when a current petascale supercomputer can use as much energy as a small town, the goal of exascale supercomputing is impossible without drastic changes. It's becoming all about low power (energy) in order to achieve high power (computing).
Yes, but they can run an incredible simulation of running a game.
That's why the goal is so many years off. Lower-power DDR4 memory and much more efficient CPUs and graphics cards will be available by then, and if those still aren't efficient enough, they can do what was done with one of the Blue Gene systems: undervolt and underclock.