
Nvidia Uses GPU-Powered AI to Design Its Newest GPUs

Nvidia AI applied to GPU design (Image credit: Nvidia)

Nvidia's chief scientist recently talked about how his R&D teams are using GPUs to accelerate and improve the design of new GPUs. Four complex and traditionally slow processes have already been tuned up with machine learning (ML) and artificial intelligence (AI) techniques. In one example, AI-accelerated inference cuts a common iterative GPU design task from three hours down to three seconds.

Bill Dally is chief scientist and SVP of research at Nvidia. HPC Wire has put together a condensed version of a talk Dally shared at the recent GTC conference, in which he discusses the development and use of AI tools to improve and speed up GPU design. Dally oversees approximately 300 people, and these clever folk generally work in the research groups set out below.

RTX was the result of moonshot research (Image credit: HPC Wire / Nvidia)

In his talk, Dally outlined four significant areas of GPU design where AI/ML can be leveraged to great effect: mapping voltage drop, predicting parasitics, place-and-route challenges, and automating standard cell migration. Let's have a look at each process, and how AI tools are helping Nvidia R&D get on with the brain work instead of waiting around for computers to do their thing.

Mapping voltage drop shows designers where power is being used in new GPU designs. A conventional CAD tool takes about three hours to calculate these figures, says Dally. Once trained, however, Nvidia's AI tool can cut that process down to three seconds. Such a reduction in processing time matters a great deal here, because the task is iterative in nature: every design change means recomputing the map. The tradeoff for the huge speedup is accuracy, which currently stands at 94%.
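To make that tradeoff concrete, here is a minimal sketch, emphatically not Nvidia's actual tool, of how voltage-drop estimation can be framed as fast learned inference: a small convolutional network maps a power-density map of the die to a predicted per-tile voltage-drop map. The grid size, layer shapes, and random input below are illustrative assumptions; the point is only that a forward pass takes milliseconds where a physical solve takes hours.

import torch
import torch.nn as nn

# Toy image-to-image regressor: power-density map in, voltage-drop map out.
class IRDropNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),  # per-tile voltage drop
        )

    def forward(self, power_map):
        return self.net(power_map)

model = IRDropNet()  # in practice, trained on maps produced by the slow CAD flow
power_map = torch.rand(1, 1, 64, 64)  # stand-in for a 64x64-tile power map
with torch.no_grad():
    drop_map = model(power_map)  # near-instant, versus hours for a full solve
print(drop_map.shape)  # torch.Size([1, 1, 64, 64])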

Nvidia AI applied to GPU design, three slides (Image credit: HPC Wire / Nvidia)

Predicting parasitics using AI is particularly pleasing for Dally. Having spent quite some time as a circuit designer himself, he notes that the new AI model compresses what was a lengthy process involving multiple people with multiple specialized skills. The simulation error is again reasonably low, under 10% in this case. Cutting down these traditionally lengthy iterative processes frees a circuit designer to be more creative or adventurous.
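The talk doesn't spell out the model, so purely as a sketch of the framing, assume parasitic capacitance per net is regressed from a few hand-picked features such as wirelength, fanout, and metal layer. Everything below (the feature set, network size, and fake linear ground truth) is an assumption for illustration; the idea is just that a trained regressor replaces repeated extract-and-simulate loops.

import torch
import torch.nn as nn

# Toy regressor: [wirelength_um, fanout, layer_index] -> parasitic capacitance.
model = nn.Sequential(
    nn.Linear(3, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

feats = torch.rand(256, 3)         # synthetic stand-in for extracted net features
target = feats[:, :1] * 2.0 + 0.1  # fake ground truth a real flow would measure
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(200):               # quick fit to the synthetic data
    opt.zero_grad()
    loss = loss_fn(model(feats), target)
    loss.backward()
    opt.step()
print(f"final training loss: {loss.item():.4f}")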

Place-and-route challenges are important to chip design because the task is like planning roads through a busy conurbation. Getting it wrong results in traffic (data) jams, requiring layouts to be rerouted or replanned for efficiency. Nvidia uses Graph Neural Networks (GNNs) to analyze the problem, highlighting areas of concern so designers can act on issues intelligently.
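Since the talk names GNNs here, a tiny sketch of the mechanism may help: the netlist becomes a graph, each cell mixes its own features with a mean of its neighbors' features, and a learned head scores each node as a likely congestion hotspot. The feature sizes, random adjacency, and scoring head below are illustrative assumptions, not Nvidia's architecture.

import torch
import torch.nn as nn

# One round of message passing: each node aggregates its neighbors' features.
class TinyGNNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(2 * dim, dim)

    def forward(self, x, adj):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)  # avoid divide-by-zero
        neigh = (adj @ x) / deg                          # mean over neighbors
        return torch.relu(self.lin(torch.cat([x, neigh], dim=1)))

n_cells, dim = 6, 8
x = torch.rand(n_cells, dim)                 # per-cell features (size, pins, ...)
adj = (torch.rand(n_cells, n_cells) > 0.5).float()
adj = ((adj + adj.T) > 0).float()            # symmetric "wired together" relation

layer = TinyGNNLayer(dim)
score_head = nn.Linear(dim, 1)
congestion = torch.sigmoid(score_head(layer(x, adj)))  # 0..1 hotspot score per cell
print(congestion.squeeze())

Graphs suit this job because it is connectivity, not raw position, that determines where wires will end up fighting for space.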

Lastly, automating standard cell migration using AI is another very useful tool in Nvidia's chip design toolbox. Dally talks about the great effort previously required to migrate a chip design from, for example, seven nanometers to five. Using reinforcement learning AIs, "92% of the cell library was able to be done by this tool with no design rule or electrical rule errors," he says. This is welcome for its huge labor savings, "and in many cases, we wind up with a better design as well," Dally continues.
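To show the reinforcement learning framing in miniature, with every detail below an invented assumption rather than anything from Nvidia's tool: an agent places three devices into five slots, is rewarded when a slot is free, and is penalized for a collision, which stands in crudely for a design-rule error. Tabular Q-learning is enough at this toy scale.

import random

N_SLOTS, N_DEVICES, EPISODES = 5, 3, 2000
Q = {}  # maps (placement-so-far, chosen slot) -> learned value

def q(state, action):
    return Q.get((state, action), 0.0)

for _ in range(EPISODES):
    placed = ()
    for _device in range(N_DEVICES):
        if random.random() < 0.2:                     # epsilon-greedy exploration
            slot = random.randrange(N_SLOTS)
        else:
            slot = max(range(N_SLOTS), key=lambda a: q(placed, a))
        reward = 1.0 if slot not in placed else -1.0  # collision = "rule error"
        nxt = placed + (slot,)
        best_next = max(q(nxt, a) for a in range(N_SLOTS))
        Q[(placed, slot)] = q(placed, slot) + 0.5 * (
            reward + 0.9 * best_next - q(placed, slot))
        placed = nxt

state, layout = (), []
for _device in range(N_DEVICES):                      # greedy rollout of the policy
    slot = max(range(N_SLOTS), key=lambda a: q(state, a))
    layout.append(slot)
    state += (slot,)
print("learned placement:", layout)  # typically three distinct slots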

Last year at GTC, Dally's talk stressed the importance of prioritizing AI and described five separate Nvidia labs pursuing AI research projects. We're keen to hear whether Nvidia's homegrown AI tools have been important to the design of Ada Lovelace GPUs and to getting them ready for TSMC 5nm. Dally seems to hint that AI-automated standard cell migration has already been used in at least one recent 7nm-to-5nm transition.

Mark Tyson

Mark Tyson is a Freelance News Writer at Tom's Hardware US. He enjoys covering the full breadth of PC tech, from business and semiconductor design to products approaching the edge of reason.