Google Has Developed Its Own Data Center Server Chips

Google server SoC development
(Image credit: Google)

Google has made significant progress in its endeavor to develop its own data center chips, according to a new report. The Information says that a key milestone has just been reached, which means that Google can plan to roll out server systems powered by the new chips starting from 2025.

This is not the first processor Google has put through R&D: the company has previously built an ASIC for servers and an SoC for mobile devices. The search giant started using its internally developed Tensor Processing Unit (TPU) as far back as 2015. The TPU is an ASIC designed to accelerate AI and neural network machine learning, and Google's custom silicon efforts have also extended to SSDs, network switches, and NICs. For AI processing the TPU slotted into the firm’s TensorFlow framework, but Google continued to rely on third-party CPUs and GPUs for a number of other key processing tasks. Google’s TPU has now reached its fourth generation, and it looks like Google wants to go further in using its own silicon in the server space.

In addition to the TPU ASIC, Google has a much more fully featured SoC for use in its own devices. The newest Tensor G2 chip for mobiles mixes recent Arm Cortex cores and Mali-G710 graphics with a custom TPU, an ISP, a security core, and caches – and is made by Samsung on its 5nm process. Progress with this chip, and with the TPU, over recent years may have helped crystallize plans for a server chip offering greater control, better efficiency, and a lower total cost of ownership (TCO).

The two sources speaking to The Information, one with direct knowledge of the project and another who had been briefed on it, indicate that Google is working hard to catch up with its cloud server rival Amazon. Amazon launched its Arm-based AWS Graviton processor in 2018 – and it is now in its third generation, boasting impressive performance and efficiency improvements over an already attractive server proposition.


Some other morsels of information shared by The Information are that Google’s server chip R&D team is working on two Arm-based 5nm chips. One SoC, dubbed ‘Cypress’, is an in-house design by Google's Israel team. Meanwhile, a design codenamed ‘Maple’, which is based on the foundations of a Marvell Technology SoC, is in trial production at TSMC. Overseeing both designs is Uri Frank, a 25-year Intel CPU design veteran, who became Google’s VP of Engineering for server chip design in March 2021. It is understood that Frank sees Cypress as Plan A, with Maple waiting in the wings as Plan B.

With mass production of these chips potentially beginning in 2024, the sources reckon Google data centers could be using them by 2025. Whether Google’s Plan A or Plan B makes the cut, this doesn’t look like good news for x86 CPU makers Intel and AMD.

Mark Tyson
Freelance News Writer

Mark Tyson is a Freelance News Writer at Tom's Hardware US. He enjoys covering the full breadth of PC tech; from business and semiconductor design to products approaching the edge of reason.

  • digitalgriffin
    And more of Intel's lunch gets eaten
    Reply
  • bolweval
    digitalgriffin said:
    And more of Intel's lunch gets eaten
    Competition is good for us consumers!
    Reply
  • bit_user
    digitalgriffin said:
    And more of Intel's lunch gets eaten
    I wonder how much of their x86 fleet is currently Intel vs. AMD. Google's Stadia used AMD CPUs and GPUs, for instance.

    The timeframe seems a little crazy, to me. I wonder what's taking them so long - are they actually designing custom cores?? If they're only doing a rollout of their TSMC N5 CPUs in 2025, AMD will already be on TSMC N4 or N3 by then. And Intel will be on 18A or whatever.

    I wonder if these leaks were intended to stave off investor calls for more job cuts. I've heard some investors want Google to lay off up to 3x as many people as it has so far, and the hardware division seems like it might be a juicy target.
    Reply
  • bit_user
    bolweval said:
    Competition is good for us consumers!
    I suppose, if you're using Google's cloud services or a downstream consumer of a service that does. But, don't expect them to sell these CPUs on the open market.

    The downside of this trend towards all cloud providers making their own CPUs is that if Intel's or AMD's volumes significantly drop, they might have to increase prices due to having fewer units over which to recoup their engineering costs.
    Reply
  • OriginalRealist
    Remember, Intel hopes to be fabricating the chips for all those who do their own thing...
    Reply
  • bit_user
    OriginalRealist said:
    Remember, Intel hopes to be fabricating the chips for all those who do their own thing...
    Sure, they're trying to stand up a foundry business and need more volume. I get that, but manufacturing is typically a lower-margin affair than IP. So, what Intel would really prefer is to sell you their chips. Obviously, if they can't do that, then they'd at least prefer you use their foundries instead of TSMC or Samsung.
    Reply
  • Vanderlindemedia
    digitalgriffin said:
    And more of Intel's lunch gets eaten

    Bit weird. x86 and x64 CPUs are good at "everything," but for specific workloads you're throwing away efficiency. When you design a chip specifically for your type of workload, you make huge steps in efficiency. Simple as that.
    Reply
  • jkflipflop98
    That's right. The workload determines the best use of hardware.

    If you're only ever going to do one thing and one thing only for the rest of time, then designing an ASIC that does that one thing is going to be your best path. You get max performance and efficiency in exchange for reduced flexibility.

    If you're in an environment where you have to pivot to different workloads depending on the situation then you need a general processor like x86. It burns more power but you can do literally anything with it. Anything from file servers to complex physics simulations.
    Reply
  • bit_user
    Vanderlindemedia said:
    Bit weird. x86 and x64 CPUs are good at "everything," but for specific workloads you're throwing away efficiency.
    There are some ARM server CPUs that are also quite versatile. Amazon's Graviton 3 processors have 256-bit SVE, which gives them floating point chops to match their integer performance. Compared to any other server CPU made on a comparable process node (TSMC N7), they're surely the most efficient. 64 cores, with 8-channel DDR5 and PCIe 5.0, in just 100 W.
    https://www.semianalysis.com/p/amazon-graviton-3-uses-chiplets-and
    And it's already been in service for more than a year!

    Vanderlindemedia said:
    When you design a chip specifically for your type of workload, you make huge steps in efficiency. Simple as that.
    Partly. I think because Google has their TPUs, they won't bother wasting a lot of die space with wide vector or matrix arithmetic.

    Other than that, I would expect them to simply use Arm Neoverse N2 cores. Then again, considering the timeframe, I really have to wonder if they're making a custom design. Google certainly has the ego to try and undertake such a task, whereas Amazon was very smart not to.

    To be honest, if Ampere's new custom cores are any good, I think it would have just been cheaper for Google to acquire that company. Amazon actually bought Annapurna Labs, which is how their Graviton program came into being.
    Reply