Lisa Su: AI Will Dominate Chip Design

AMD ROCm (Image credit: AMD)

Like other large chip designers, AMD has already started to use artificial intelligence to design chips. In fact, Lisa Su, chief executive of AMD, believes that AI-enabled tools will eventually dominate chip design as the complexity of modern processors grows exponentially.

Speaking at the 2023 World Artificial Intelligence Conference (WAIC) in Shanghai, Su said that AI will come to dominate certain areas of chip design. She also emphasized the need for interdisciplinary collaboration to enable better hardware design in the future, reports DigiTimes.

Previously, both Jensen Huang, chief executive of Nvidia, and Mark Papermaster, chief technology officer of AMD, noted that chip development is an ideal application for AI. AMD already uses AI in semiconductor design, testing, and verification, and the company intends to apply generative AI more broadly to future chip design work.

At AMD, AI is already used in chip design, particularly in the 'place and route' stage, where sub-blocks of a chip are positioned and optimized for better performance and lower energy consumption, Papermaster told Tom's Hardware in May. AI's ability to iterate continuously and learn from patterns greatly accelerates the process of converging on an optimized layout, improving performance and energy efficiency. Papermaster said that AI will eventually expand into more critical aspects of chip design, such as microarchitecture, once certain hurdles around protecting IP are overcome.
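
To illustrate the kind of iterative optimization loop such tools accelerate, here is a minimal, hypothetical Python sketch of placement refinement. It is not AMD's flow: it uses plain simulated annealing over a toy wirelength cost, whereas production AI-assisted place-and-route replaces hand-written costs and move heuristics with learned models. The block names, netlist, and grid size are invented for illustration.

import math
import random

# Hypothetical sketch: iterative placement refinement via simulated annealing.
# Blocks are placed on a small grid and a wirelength proxy is minimized; real
# AI-assisted place-and-route tools replace these hand-written heuristics with
# learned cost models and move selection.
random.seed(0)
GRID = 16                                             # toy placement grid
nets = [("alu", "regfile"), ("regfile", "lsu"), ("alu", "lsu"), ("lsu", "l2")]
blocks = {name: (random.randrange(GRID), random.randrange(GRID))
          for name in {n for net in nets for n in net}}

def wirelength(placement):
    """Manhattan wirelength proxy: shorter wires mean faster, lower-power layouts."""
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1]) for a, b in nets)

cost, temp = wirelength(blocks), 8.0
for _ in range(5000):
    block = random.choice(list(blocks))               # propose moving one block
    old_pos = blocks[block]
    blocks[block] = (random.randrange(GRID), random.randrange(GRID))
    new_cost = wirelength(blocks)
    if new_cost < cost or random.random() < math.exp((cost - new_cost) / temp):
        cost = new_cost                               # accept improving (or lucky) moves
    else:
        blocks[block] = old_pos                       # revert worsening moves
    temp *= 0.999                                     # gradually reject more bad moves

print("final wirelength proxy:", cost)

Each accepted move shortens the estimated wiring, which serves here as a crude stand-in for the performance and power gains an optimized layout delivers.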

AI is also already employed in verification suites to reduce the time needed to detect bugs during the chip's development process, from conception to the verification and validation phases. Furthermore, AI assists in generating test patterns. With billions of transistors in a chip design, ensuring comprehensive test coverage is essential to guarantee that the product is flawless upon leaving the manufacturing floor. AI's ability to learn from each successive run, identify gaps in test coverage, and adjust the testing focus accordingly, significantly speeds up the process and enhances test coverage.
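
As a rough illustration of that feedback loop, the hypothetical Python sketch below mimics coverage-directed test generation: a pretend simulator reports which coverage bins each test pattern exercised, and the generator steers subsequent patterns toward bins that are still uncovered. The bins, stimulus knob, and simulate() behavior are invented stand-ins rather than any vendor's verification API.

import random
from collections import defaultdict

# Hypothetical sketch of coverage-directed test generation: a pretend simulator
# reports which coverage bins a test pattern hit, and the generator steers new
# patterns toward bins that remain uncovered.
random.seed(0)
BINS = set(range(32))                     # stand-in for functional coverage bins
KNOB_VALUES = list(range(8))              # stand-in for a stimulus parameter

def simulate(knob):
    """Pretend DUT run: each knob value tends to exercise one cluster of bins."""
    cluster = list(range(knob * 4, knob * 4 + 4))
    return {random.choice(cluster) for _ in range(2)}

def pick_knob(knob_to_bins, covered):
    """Prefer stimulus whose previously observed bins are still uncovered."""
    unexplored = [k for k in KNOB_VALUES if k not in knob_to_bins]
    if unexplored:                        # try untested stimulus first
        return random.choice(unexplored)
    gaps = {k: len(bins - covered) for k, bins in knob_to_bins.items()}
    if max(gaps.values()) == 0:           # no known gaps: fall back to random
        return random.choice(KNOB_VALUES)
    return max(gaps, key=gaps.get)

covered, knob_to_bins = set(), defaultdict(set)
for test in range(1, 201):
    knob = pick_knob(knob_to_bins, covered)
    hits = simulate(knob)
    knob_to_bins[knob] |= hits            # learn which stimulus reaches which bins
    covered |= hits
    if covered == BINS:
        break
print(f"{len(covered)}/{len(BINS)} bins covered after {test} tests")

The loop "learns" only in the simplest sense of remembering which stimulus reached which bins; commercial tools apply far more sophisticated models, but the closed loop of run, measure coverage, and retarget is the same.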

Artificial intelligence (AI) is playing an increasingly pivotal role in chip design. All three leading makers of electronic design automation (EDA) tools — Ansys, Cadence, and Synopsys — offer AI-enabled software to their clients, although Synopsys seems to be a bit ahead of its competitors when it comes to AI-enabled tools.

Earlier this year Synopsys launched Synopsys.ai, the first end-to-end AI-driven EDA solution. This enables developers to use AI throughout all stages of chip development, from architecture to design and manufacturing.

Anton Shilov
Contributing Writer

Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

  • Alvar "Miles" Udell
    Bet this still means every couple of generations will be a "Tock" optimization release. Would be nice to have all "Tick" ones though...
  • thestryker
    AMD pivoted to using more machine learning sooner than Intel due to R&D budget, but I think in the end it has served them well. It sounds like most of what AMD/Intel are using it for is time savings. It'll be interesting to see if the increased AI usage makes an impact on additional features supported.

    I don't really think this is going to have much of an impact on the timing/generations of products we see on shelves. Though it may allow for more diverse SKUs as both companies move towards using more extensive tile/chiplet designs as the engineering time would be lower.
  • brandonjclark
    I want to make something clear. If they focus on TRUE AI processing advancements, edge computing will not benefit.

    Make no mistake, the major providers have invested HEAVILY (as in: all in) on cloud computing. They want you to treat the cloud as the computer. No single PC is going to be doing any amount of AI-driven work as it takes too much computational power. Now, could someone build a neural processing P2P network, sure. But companies like AMD and Intel and others will simply disable features or, as I'm starting to believe, these AI-focused future chips will in practicality stay on the server chip lineups.

    Yes, manufacturers might tout "AI POWEREDZ" on chip retail boxes and commercials (do those still exist?), but they can't let this genie out of the bottle. They have to drive sales towards their clouds.
  • Thunder64
    brandonjclark said:
    I want to make something clear. If they focus on TRUE AI processing advancements, edge computing will not benefit.

    Make no mistake, the major providers have invested HEAVILY (as in: all in) on cloud computing. They want you to treat the cloud as the computer. No single PC is going to be doing any amount of AI-driven work as it takes too much computational power. Now, could someone build a neural processing P2P network, sure. But companies like AMD and Intel and others will simply disable features or, as I'm starting to believe, these AI-focused future chips will in practicality stay on the server chip lineups.

    Yes, manufacturers might tout "AI POWEREDZ" on chip retail boxes and commercials (do those still exist?), but they can't let this genie out of the bottle. They have to drive sales towards their clouds.

    Why not? I saw a commercial for an "AI" washing machine.
  • bit_user
    thestryker said:
    It sounds like most of what AMD/Intel are using it for is time savings. It'll be interesting to see if the increased AI usage makes an impact on additional features supported.
    According to prior claims, it seems like it can deliver performance/power improvement nearly equivalent to a node-shrink. So, I don't see it as either/or.

    thestryker said:
    I don't really think this is going to have much of an impact on the timing/generations of products we see on shelves.
    It could enable more rapid turnaround of spins tailored to specific market niches or in response to the competitive or market environment.
  • bit_user
    brandonjclark said:
    No single PC is going to be doing any amount of AI-driven work as it takes too much computational power.
    Depends on what. You can still train smaller networks on desktop hardware.
    https://timdettmers.com/2023/01/30/which-gpu-for-deep-learning/
    Training does take an awful lot of compute and memory bandwidth, however. Certain networks just can never be trained using an amount of compute and infrastructure a consumer could possibly afford, much less power. It was rumored to take something like a month on 10,000 A100 GPUs to train GPT-3.

    brandonjclark said:
    Now, could someone build a nueral processing P2P network, sure. But companies like AMD and Intel and others will simply disable features or,
    In general, no. It would be too expensive to have a substantial amount of compute on-die that's just disabled. Intel's client Golden Cove cores physically don't have AMX, for instance.

    However, the main instance of this that we do actually know about wasn't by AMD.
    "NVIDIA gave the GeForce cards a singular limitation that is none the less very important to the professional market. In their highest-precision FP16 mode, Turing is capable of accumulating at FP32 for greater precision; however on the GeForce cards this operation is limited to half-speed throughput. This limitation has been removed for the Titan RTX, and as a result it’s capable of full-speed FP32 accumulation throughput on its tensor cores."

    Source: https://www.anandtech.com/show/13668/nvidia-unveils-rtx-titan-2500-top-turing
  • bit_user
    jackt said:
    AI will only make cpu/gpu even more expensive without a reason. And create another chip shortage eventually.
    Why do you think so?

    EDA (Electronic Design Automation) tools have been used to enable bigger and more complex semiconductor designs since the 1980s. As transistor counts have ballooned and the design constraints of newer manufacturing technologies have increased, EDA tools have had to keep pace, with less and less being done by hand. One way to look at this is as just the next generation of those tools.

    I believe that chip designers wouldn't use these AI-enabled tools if they didn't offer real benefits in cost, performance, efficiency, or time-to-market.
  • TerryLaze
    bit_user said:
    Why do you think so?
    Because he sees AI and CPU in the same sentence and believes it means AI inside the CPU.
    No time to read that they are talking about AI helping in designing CPUs, either with or without AI in the end product.
  • PlaneInTheSky
    nope

  • palladin9479
    This is one of those areas where real AI can be used, and it doesn't require massive amounts of training either, as the scope is extremely limited. Mostly it's the next level in testing and design automation, with the result being massive savings in man-hours, allowing for faster product development. This doesn't mean faster product releases, but that each release will have more in it.