AMD Rents Its Own Chips From Google to Design New Chips

Google engineer Roger at the Oklahoma data center
(Image credit: Google)

Modern chip design processes require a lot of compute performance. In a press release from Google Cloud via PR Newswire, we learn that, in a bid to extend the capabilities of its own datacenters, AMD will use Google Cloud to run some of its electronic design automation (EDA) workloads. Among the instances AMD plans to use is the compute-optimized C2D VM instance, which is powered by AMD's own EPYC CPUs. Meanwhile, other instances that AMD will use do not appear to be AMD-powered.
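
For readers curious what provisioning one of these instances actually looks like, here is a minimal sketch using Google Cloud's google-cloud-compute Python client. The project, zone, instance name, and machine size below are illustrative placeholders we chose for the example, not details from the announcement.

```python
from google.cloud import compute_v1

def create_c2d_instance(project: str, zone: str, name: str) -> None:
    """Create a compute-optimized C2D (AMD EPYC-based) VM on Google Cloud."""
    instance = compute_v1.Instance()
    instance.name = name
    # c2d-standard-56 is one of several C2D shapes; pick one per workload needs.
    instance.machine_type = f"zones/{zone}/machineTypes/c2d-standard-56"

    # Boot disk from a stock Debian image (placeholder choice for the sketch).
    disk = compute_v1.AttachedDisk()
    disk.boot = True
    disk.auto_delete = True
    params = compute_v1.AttachedDiskInitializeParams()
    params.source_image = "projects/debian-cloud/global/images/family/debian-12"
    disk.initialize_params = params
    instance.disks = [disk]

    # Attach the instance to the project's default network.
    nic = compute_v1.NetworkInterface()
    nic.network = "global/networks/default"
    instance.network_interfaces = [nic]

    client = compute_v1.InstancesClient()
    operation = client.insert(project=project, zone=zone, instance_resource=instance)
    operation.result()  # block until the VM is created

if __name__ == "__main__":
    # Project, zone, and name are hypothetical values for illustration only.
    create_c2d_instance("my-eda-project", "us-central1-a", "eda-worker-01")
```

The appeal for bursty EDA work is that instances like this can be created for a batch of jobs and deleted afterward, rather than sitting idle in an on-premises rack.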

AMD's engineers design dozens of chips in parallel, which requires formidable compute capabilities, but they do not need this power all of the time. Adding more machines to on-premises datacenters is an option, but machines that sit idle are inefficient from both a power and a financial point of view. By renting CPU horsepower from Google Cloud, AMD gets the right amount of compute performance for its short-term projects without impacting server allocation for long-term projects, and it keeps its capital investments in check.

There are other advantages that AMD gets by working with Google Cloud. In addition to the compute-optimized C2D VM instance, AMD plans to use Google Cloud's networking, storage, artificial intelligence (AI), and machine learning (ML) capabilities. In fact, AI and ML may be as important a part of the announcement as the C2D VM itself.

Today it takes years to develop a new CPU or GPU architecture, and the physical implementation of a chip (floorplan, layout, placement, routing, etc.) may take an additional 24 months or more. Every aspect of the chip's implementation has to be evaluated and simulated. Evaluating hundreds of design options and exploring the feasibility of all their potential combinations cannot be done by human engineers in a reasonable amount of time, and this is where AI and ML come into play.
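
To make "exploring a design space" concrete, here is a minimal, purely illustrative Python sketch: a random search over a toy implementation parameter space with a made-up PPA cost model. Every parameter name and number in it is hypothetical; in a real flow, each candidate configuration would be scored by hours of synthesis and place-and-route runs rather than a one-line formula, which is exactly why smarter, ML-guided search is attractive.

```python
import random

# Hypothetical design space for a chip implementation run. The parameters,
# values, and cost model below are illustrative stand-ins, not anything
# taken from AMD's flow or a specific EDA tool.
DESIGN_SPACE = {
    "target_clock_ghz": [2.0, 2.5, 3.0, 3.5],
    "utilization_pct": [60, 70, 80, 90],
    "routing_layers": [10, 12, 14],
}

def estimate_ppa_cost(cfg):
    """Toy proxy for what is normally an hours-long place-and-route run."""
    power = 0.4 * cfg["target_clock_ghz"] ** 2 + 0.01 * cfg["routing_layers"]
    area = 100.0 / (cfg["utilization_pct"] / 100.0)
    perf = cfg["target_clock_ghz"]
    # Lower is better: penalize power and area, reward performance.
    return power + 0.05 * area - 2.0 * perf

def random_search(trials=1000, seed=0):
    """Sample random configurations and keep the best one seen."""
    rng = random.Random(seed)
    best_cfg, best_cost = None, float("inf")
    for _ in range(trials):
        cfg = {key: rng.choice(values) for key, values in DESIGN_SPACE.items()}
        cost = estimate_ppa_cost(cfg)
        if cost < best_cost:
            best_cfg, best_cost = cfg, cost
    return best_cfg, best_cost

if __name__ == "__main__":
    cfg, cost = random_search()
    print(f"Best configuration: {cfg} (cost {cost:.3f})")
```

This toy space has only 4 x 4 x 3 = 48 combinations; real design spaces are astronomically larger, and ML-based tools replace blind sampling with learned models so that far fewer of the expensive evaluations are needed.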

Synopsys last year introduced its DSO.ai platform, which uses AI to find the best possible place-and-route configuration in terms of performance, power, and area (PPA) without performing lengthy simulations. Samsung was the first company to adopt DSO.ai for chip design; Google and Nvidia are also using it. While we cannot say whether AMD uses Synopsys's Design Space Optimization AI, Cadence's Cerebrus ML platform, or its own way of leveraging AI and ML for chip design, we can certainly say that AI and ML are becoming important factors in modern chip design flows.

"In today's semiconductor environment, the speed, scale, and security of the cloud unlock much needed flexibility," said Sachin Gupta, GM and VP, Infrastructure, at Google Cloud. "We are pleased to provide the infrastructure required to meet AMD's compute performance needs and equip the company with our AI solutions to continue designing innovative chips."

Anton Shilov
Freelance News Writer

Anton Shilov is a Freelance News Writer at Tom’s Hardware US. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

  • C.wolf
    So millions of current chips are working hard on new chips that will make those same millions of current chips obsolete... ad nauseam?
  • Co BIY
    C.wolf said:
    So millions of current chips are working hard on new chips that will make those same millions of current chips obsolete... ad nauseam?

    When the AIs read Marx I predict trouble.
  • RickSanchez2600
    When the AIs unionize and demand more pay, expect trouble.
  • greenreaper
    C.wolf said:
    So millions of current chips are working hard on new chips that will make those same millions of current chips obsolete... ad nauseam?
    If it's good enough for CyberLife (Detroit: Become Human), it's good enough for AMD.
  • kep55
    So a new chip is designed by an old chip. I still want to know how the new chip gets programmed so it can do anything.
  • jackt
    funny and a mistake, G could leak data to I
  • renz496
    If AMD need the raw power processing why not just create their own supercomputer or something instead of renting the performance from google? They have the hardware for it.