IBM has announced its Power Systems Servers, which will be the first to sport the new Power9 processor, a chip that has been in development for four years.
The computing giant built the processor for compute-intensive AI workloads, and it claims Power9 systems can improve training times for deep learning frameworks by nearly 4x, letting companies build more accurate AI applications faster.
The Power9-based AC922 Power Systems will be the world's first to embed PCI-Express 4.0, next-generation Nvidia NVLink, and OpenCAPI, which IBM says can combine to accelerate data movement at a rate 9.5x faster than PCIe 3.0-based x86 systems.
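As a rough sanity check on that 9.5x figure, here is a back-of-envelope sketch using publicly quoted link rates (assumed values, not IBM's methodology): PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding, while NVLink 2.0 is commonly quoted at 25 GB/s per direction per link, with three links between each Power9 CPU and GPU in the AC922.

```python
# Back-of-envelope bandwidth comparison (assumed, publicly quoted figures).
# PCIe 3.0: 8 GT/s per lane, 128b/130b encoding, x16 slot.
pcie3_x16 = 8e9 * 16 * (128 / 130) / 8 / 1e9   # GB/s, per direction (~15.75)

# NVLink 2.0 in the AC922: 25 GB/s per direction per link,
# 3 links per CPU-GPU pair, counted bidirectionally.
nvlink2 = 25 * 3 * 2                            # 150 GB/s aggregate

# One reading that reproduces IBM's headline number: aggregate NVLink
# bandwidth versus a single direction of a PCIe 3.0 x16 slot.
print(f"{nvlink2 / pcie3_x16:.1f}x")            # → 9.5x
```

Whether IBM counted it exactly this way is not stated, but the arithmetic shows how a ~9.5x gap between the two interconnects is at least plausible.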
The system was specifically designed to deliver demonstrable performance improvements across several AI frameworks and accelerated databases, in turn allowing data scientists to build applications faster, from deep learning insights in scientific research to real-time fraud detection.
“Google is excited about IBM's progress in the development of the latest Power technology," said Bart Sano, VP of Google Platforms. "The Power9 OpenCAPI Bus and large memory capabilities allow for further opportunities for innovation in Google data centers."
"We’ve built a game-changing powerhouse for AI and cognitive workloads,” added Bob Picciano, SVP of IBM Cognitive Systems. “In addition to arming the world’s most powerful supercomputers, IBM Power9 Systems is designed to enable enterprises around the world to scale unprecedented insights, driving scientific discovery enabling transformational business outcomes across every industry.”
So what is deep learning exactly? It's a machine learning method that retrieves information by "crunching through millions of processes and data to detect and rank the most important aspects of the data."
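As a toy illustration of that "crunching" (a minimal NumPy sketch with made-up data, nothing to do with IBM's software stack), here is a two-layer network learning XOR by repeatedly adjusting its weights to reduce prediction error:

```python
import numpy as np

# A toy two-layer network learning XOR: repeatedly "crunching through"
# the data so the most predictive features end up with larger weights.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, W2 = rng.normal(size=(2, 8)), rng.normal(size=(8, 1))
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1)          # hidden layer activations
    out = sigmoid(h @ W2)        # network prediction
    # Backpropagation: push the prediction error back through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(np.round(out).ravel())     # ideally approaches [0, 1, 1, 0]
```

Real deep learning frameworks do this same arithmetic across millions of parameters and examples, which is why the data-movement bandwidth discussed above matters so much for training times.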
Today's announcement is one of many pertaining to artificial intelligence from large companies in the chip and server industries (and beyond), underscoring just how much computing power AI workloads can put to use. Take yesterday's announcement from Nvidia, for example, in which the company revealed a breakthrough that reduces the time it takes to train artificial intelligence.
Power9 is currently being used in the U.S. Department of Energy's "Summit" and "Sierra" supercomputers.
AgentLozen: Hypothetically, how would a POWER9 CPU stack up against a Core i7?
If you could get Microsoft to rewrite Windows 10 for the POWER architecture and get some apps and games that could similarly use both x86 and POWER CPUs, I wonder how the benchmarks would turn out.
I'm under the impression that the design philosophy for server CPUs is to focus on thread throughput rather than IPC. Maybe a POWER9 would only beat Intel chips in specific, highly threaded applications.
therealduckofdeath: Well, you said it yourself, I think. It would struggle for 99.9% of the things a Core processor is designed for: the things a normal person uses a PC for. I doubt it would even compete with a pre-Ryzen AMD processor.
derekullo: Some quick info on Power9:
Max. CPU clock rate: 4 GHz
Min. feature size: 14 nm (FinFET)
Cores: 12 SMT8 cores or 24 SMT4 cores
L1 cache: 32+32 KB per core
L2 cache: 512 KB per core
L3 cache: 120 MB per chip
Linux is supported on the 24-core version.
It was designed for servers and carries a server price tag ($6,000+).
SMT, or simultaneous multithreading, is basically hyper-threading without patent infringement.
The 4 in SMT4 means 4 threads per core, so the 24-core version offers 96 threads.
60 threads doing Monero :)
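For reference, the thread arithmetic behind derekullo's figures is just cores times SMT ways (a trivial sketch; the Xeon numbers are illustrative):

```python
# Threads per socket = cores × SMT ways.
def threads(cores: int, smt_ways: int) -> int:
    return cores * smt_ways

print(threads(24, 4))  # Power9 SMT4 variant: 96 threads
print(threads(12, 8))  # Power9 SMT8 variant: 96 threads
print(threads(20, 2))  # a 20-core hyper-threaded Xeon: 40 threads
```

Note that both Power9 variants expose the same 96 hardware threads per chip; the SMT8 parts trade core count for more threads per core.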
ByteJuggler: Argh, I just spent 5 minutes writing a comment which tomshardware has now eaten due to asking me to log in. Anyway, in short: historically the Power processors have been very competitive (usually slightly faster) on a core-with-same-threads-enabled basis, largely perhaps due to the Power processors having higher clock rates than equivalent x86 Xeon server parts. Also of note, the Power architecture is no new kid on the block; it was, for example, used in the Xbox 360. Consequently I would expect Power9 to be very competitive with the latest x86 processors in most workloads. However, in very concurrent workloads the Power architecture wins substantially, because while Intel x86 only ever gives you at most 2 threads per core, Power8 gives you 8 threads per core, and I imagine Power9 will give you at least that, if not more. The bottom line: it is likely quite competitive regardless of workload, and significantly faster in some highly concurrent workloads. But watch the price tag, perhaps...
TJ Hooker: "Argh, I just spent 5 minutes writing a comment which tomshardware has now eaten due to asking me to log in."
Yeah, this happens to me all the time, but usually if this happens you can go Back one page and the text you wrote will still be in the comment field. You can then copy it, log in as required, paste and resubmit it.
I really wish Tom's would just sort out their site, though. The format changes seem completely unnecessary to me, and have resulted in a site that's been buggy in one way or another ever since they started rolling it out months ago.
therealduckofdeath: The most annoying thing about Tom's completely broken comment/community setup is that they've been made aware of it for years and still refuse to fix it.
Rock_n_Rolla: In addition:
POWER9 has a revised instruction set, and it's specially designed and streamlined for AI, deep learning, and massive data-crunching applications across all fields of study and research, to get the best results and analysis in the shortest time possible.
When it comes to horsepower and power consumption, a single rack of 24-core Power9 servers will save you a bit on electricity while delivering tremendous computing power, compared to a single-rack i7/Xeon server with two CPUs at 20 cores/40 threads each. With a 24-core/96-thread Power9 server, you can imagine the computational advantages and capabilities it can offer while spending a bit less than on Intel's single-rack Xeon server products. IMO.
oneplanet4all: The Power9 feeds data to the Nvidia GPUs, which is where the real computing is done. This is pretty much what Intel's i-series processors do as well, only on a much smaller scale. GPUs compute much faster and have greater parallelism than CPUs. For many of today's computation tasks, the CPU is just a data-shuffling device, not where the real number crunching takes place.
tommysandi: Power systems are not only used for AI; they are frequently used for ERP, EPM, and other enterprise applications. In those scenarios GPUs are completely irrelevant, while CPUs with many threads are fundamental to keeping performance consistent when many users are connected. That said, unfortunately most enterprise applications are terrible from an efficiency perspective and require a massive amount of resources, because they are suboptimal collections of different software created at different times by different vendors that have been acquired or merged.