IBM Announces 'Game-Changing' Power9 Servers For AI

IBM has announced its Power Systems Servers, which will be the first to sport the new Power9 processor, a chip that has been in development for four years.

The computing giant built the processor for compute-intensive AI workloads, and it claims the Power9 systems can improve the training times of deep learning frameworks by nearly 4x, letting companies build more accurate AI applications faster.

The Power9-based AC922 Power Systems will be the world's first to embed PCI-Express 4.0, next-generation Nvidia NVLink, and OpenCAPI, which IBM says together accelerate data movement to a rate 9.5x faster than that of PCIe 3.0-based x86 systems.

The system was specifically designed to deliver demonstrable performance improvements across several AI frameworks and accelerated databases, which would in turn allow data scientists to build applications faster, from deep learning insights in scientific research to real-time fraud detection.

“Google is excited about IBM's progress in the development of the latest Power technology," said Bart Sano, VP of Google Platforms. "The Power9 OpenCAPI Bus and large memory capabilities allow for further opportunities for innovation in Google data centers."

"We’ve built a game-changing powerhouse for AI and cognitive workloads,” added Bob Picciano, SVP of IBM Cognitive Systems. “In addition to arming the world’s most powerful supercomputers, IBM Power9 Systems is designed to enable enterprises around the world to scale unprecedented insights, driving scientific discovery enabling transformational business outcomes across every industry.”

So what is deep learning exactly? It's a machine learning method that retrieves information by "crunching through millions of processes and data to detect and rank the most important aspects of the data."

Today's announcement is one of many pertaining to artificial intelligence from large companies in the chip and server industries, among others, underscoring how much computing power AI workloads can consume. Take yesterday's announcement from Nvidia, for example, in which the company said it had made a breakthrough by reducing the time it takes to train artificial intelligence.

Power9 is currently being used in the U.S. Department of Energy's "Summit" and "Sierra" supercomputers.

  • AgentLozen
    Hypothetically, how would a POWER9 CPU stack up against a Core i7?
    If you could get Microsoft to rewrite Windows 10 for the POWER architecture and get some apps and games that could similarly use both x86 and POWER CPUs, I wonder how the benchmarks would turn out.

    I'm under the impression that the design philosophy for server CPUs is to focus on processing threads and not pay so much attention to IPC. Maybe a POWER9 would only beat Intel chips in specific, highly threaded applications.
  • therealduckofdeath
    Well, you said it yourself, I think. It would struggle for 99.9% of the things a Core processor is designed for. The things a normal person uses a PC for. I doubt it would even compete with a pre-Ryzen AMD processor.
  • derekullo
    Some quick info on Power9

    Max. CPU clock rate: 4 GHz
    Min. feature size: 14 nm (FinFET)
    Cores: 12 SMT8 cores or 24 SMT4 cores
    L1 cache: 32+32 KB per core
    L2 cache: 512 KB per core
    L3 cache: 120 MB per chip

    Linux is supported on the 24 core version.

    It was designed for servers and has a server price tag of $6,000+.

    SMT, or simultaneous multi-threading, is basically hyper-threading without patent infringement.

    The 4 in SMT4 means 4 threads per core, so the 24-core chip has 96 threads.
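    The thread math above can be sketched in a few lines (a hypothetical helper, not anything from IBM's tooling):

    ```python
    # Total hardware threads = cores * threads-per-core (the SMT level).
    def total_threads(cores: int, smt: int) -> int:
        return cores * smt

    # The two Power9 variants mentioned above land on the same total:
    print(total_threads(12, 8))  # 12 SMT8 cores -> 96 threads
    print(total_threads(24, 4))  # 24 SMT4 cores -> 96 threads
    ```

    Note that both configurations expose 96 hardware threads per chip; they differ in how many threads share each core's execution resources.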

    60 threads doing Monero :)