Berkeley (CA) - In supercomputing, the sky is the limit, literally. In an effort to enable more credible global climate change predictions, researchers from UC Berkeley believe the way to go is a new kind of cloud-modeling supercomputer with 20 million processors delivering a peak performance of 200 PFlops to simulate climate at a 1-km scale. At the same time, the proposed system would not require a power plant all to itself. How is that possible, you ask? The researchers are looking into ultra-efficient embedded RISC CPUs.
We are just about ready to transition from the Teraflop into the Petaflop era, and today we heard about a new proposal from UC Berkeley and Tensilica that, at least on paper, could put supercomputer development into warp speed. In a dramatic departure from current supercomputer architectures and upcoming hybrid systems, the proposed system would rely on embedded processors with minimal power consumption.
The researchers believe that 20 million Tensilica RISC processors would deliver at least 10 PFlops of sustained performance, while topping out at about 200 PFlops. The power consumption of such a system is estimated at about 4 megawatts and the construction and typical operation cost at about $75 million. A 200 PFlops system built on today's common architecture could cost up to $1 billion and consume 200 megawatts, roughly the power drawn by a city of 100,000 people.
In comparison, the currently fastest supercomputer tops out at 576 TFlops.
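For readers who want to sanity-check those headline figures, here is a minimal back-of-the-envelope sketch in Python. The per-core and per-watt numbers it prints are simple divisions of the values quoted above, not figures supplied by the researchers.

# Back-of-the-envelope figures derived from the numbers quoted in the article.
cores           = 20_000_000    # proposed Tensilica cores
peak_flops      = 200e15        # 200 PFlops peak
sustained_flops = 10e15         # 10 PFlops sustained
power_watts     = 4e6           # estimated ~4 megawatts for the embedded design
conv_power      = 200e6         # ~200 megawatts for a conventional 200 PFlops system

print(f"Peak per core:       {peak_flops / cores / 1e9:.0f} GFlops")
print(f"Sustained per core:  {sustained_flops / cores / 1e6:.0f} MFlops")
print(f"Embedded design:     {peak_flops / power_watts / 1e9:.0f} GFlops per watt (peak)")
print(f"Conventional design: {peak_flops / conv_power / 1e9:.0f} GFlops per watt (peak)")

Run as-is, this works out to about 10 GFlops peak and 500 MFlops sustained per core, and roughly 50 GFlops per watt for the embedded design versus about 1 GFlops per watt for a conventional 200 PFlops machine.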
There is little performance information about Tensilica's Xtensa LX extensible processors that would allow us to compare them to a typical server processor. What we do know is that Tensilica builds the chips in 90 nm and 130 nm processes and clocks them between 150 and 450 MHz. Power consumption is "less than 0.1 mW per MHz", which, according to the manufacturer, puts a single processor at about 45 mW in a worst-case scenario.
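Taking the manufacturer's worst-case figure at face value, a short sketch shows where the 45 mW number comes from and what 20 million such cores would draw on their own. The reading that the remaining headroom in the ~4 megawatt system estimate would go to memory, interconnect and cooling is our assumption, not a statement from Tensilica or Berkeley.

# Worst-case power of one Xtensa LX core and of 20 million of them,
# based on Tensilica's "less than 0.1 mW per MHz" figure at the top 450 MHz clock.
mw_per_mhz = 0.1
clock_mhz  = 450
cores      = 20_000_000

core_mw  = mw_per_mhz * clock_mhz      # 45 mW per core, worst case
total_kw = core_mw * cores / 1e6       # roughly 900 kW for the cores alone

print(f"Per core:  {core_mw:.0f} mW")
print(f"All cores: {total_kw:,.0f} kW")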
So, what would a 200 PFlop system be able to accomplish?
According to the researchers, such a computer would make global climate change predictions more understandable and more credible. Climate models today are created largely from historical data on rainfall, hurricanes, sea surface temperatures and carbon dioxide in the atmosphere. Accurate cloud simulations are much more complex, however, and well beyond the reach of current supercomputers. Past cloud models, the researchers claim, lack the details that could improve the accuracy of climate predictions: the required accuracy can only be provided by a system that can cope with 1 km-scale models offering rich details not available in existing models.
To develop such a 1-km cloud model, the researchers say they will need a supercomputer that is 1000 times more powerful than what is available today. The proposed 200 PFlops Tensilica system could put them into that range, at least in theory.
However, the UC Berkeley researchers claim that this "climate computer" is not just a concept: Michael Wehner, Lenny Oliker and John Shalf said they have been working with scientists from Colorado State University to build a prototype system that would run a new global atmospheric model developed there. "What we have demonstrated is that in the exascale computing regime, it makes more sense to target machine design for specific applications," Wehner said. "It will be impractical from a cost and power perspective to build general-purpose machines like today's supercomputers."