Nvidia's ChipNeMo LLM Will Help Design Chips
Nvidia's specialized AI will help make chip designers more productive
Nvidia has unveiled ChipNeMo, a specialized large language model with 43 billion parameters aimed at bolstering chip design productivity. The tool promises to streamline various aspects of chip design by answering questions, condensing bug reports, and crafting scripts for electronic design automation (EDA) tools.
"The goal here is to make our designers more productive," said Bill Dally, Nvidia's chief scientist, in an interview with EE Times ahead of the International Conference on Computer-Aided Design. "If we even got a couple percent improvement in productivity, this would be worth it. And our goals are actually to do quite a bit better than that."
ChipNeMo was trained on Nvidia's own dataset, harvested from internal code and text repositories that include architecture and design documents. This pre-training approach ensures that ChipNeMo operates with a nuanced understanding of Nvidia's specific chip designs and architectures. ChipNeMo can answer general questions related to chip design, summarize detailed bug documentation into short paragraphs for easier understanding, and write short scripts to interface with CAD tools. The tool can also run logic simulations and test benchmarks early in the design process to verify performance and design viability.
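To give a sense of the kinds of requests described above, here is a minimal, hypothetical sketch of how a designer might ask such an assistant to summarize a bug report or draft an EDA-tool script. Nvidia has not published ChipNeMo's interface, so the `ask()` helper, the prompts, and the stand-in model are all illustrative assumptions, not Nvidia's actual tooling.

```python
# Hypothetical sketch of the workflows the article describes; the ask() helper,
# the prompts, and the echo "model" are illustrative assumptions only.

BUG_REPORT = """
Bug 5678: During gate-level simulation the arbiter grants two requestors in the
same cycle when reset deasserts mid-transaction. Waveforms attached.
"""

def ask(model, prompt: str) -> str:
    """Placeholder for a call to an internal LLM service."""
    return model(prompt)

def summarize_bug(model, report: str) -> str:
    return ask(model, f"Summarize this bug report in two sentences for triage:\n{report}")

def draft_eda_script(model, task: str) -> str:
    return ask(model, f"Write a short script for our EDA tool that does the following:\n{task}")

if __name__ == "__main__":
    # Trivial stand-in model so the sketch runs as-is; a real deployment would
    # call the fine-tuned LLM instead.
    echo_model = lambda prompt: f"[model output for a {len(prompt)}-character prompt]"
    print(summarize_bug(echo_model, BUG_REPORT))
    print(draft_eda_script(echo_model, "run a logic simulation on the arbiter block"))
```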
ChipNeMo is a huge reservoir of knowledge that a machine powered by a single Nvidia A100 GPU can parse through fairly quickly, and it is meant to speed up the chip design process. The tool will be particularly useful for novice designers, enabling them to find essential information quickly and saving senior designers time and effort.
One of the problems with generative AI tools is that they often produce inaccurate or outright fabricated responses. In chip design, such errors could be very expensive, so to avoid them Nvidia uses a retrieval augmented generation (RAG) technique, which grounds the model's outputs in a database of source documents. This approach reduces the likelihood of inaccurate or 'hallucinated' responses and ensures that answers are based on actual, pre-existing knowledge.
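The sketch below shows the general RAG idea: before the model answers, the most relevant snippets from a document store are retrieved and prepended to the prompt so the answer stays grounded in existing material. The sample documents and the word-overlap scoring are stand-ins for a real embedding-based retriever, and none of this reflects Nvidia's internal pipeline.

```python
# Minimal sketch of retrieval augmented generation (RAG): retrieve relevant
# snippets, then build a grounded prompt. Documents and scoring are illustrative.

def tokenize(text: str) -> set[str]:
    return set(text.lower().split())

# Toy "document store" of internal design notes (illustrative content only).
documents = [
    "The memory controller arbitrates requests from all GPU clients.",
    "Bug 1234: the L2 cache returns stale data after a power-gating event.",
    "The EDA flow uses a Tcl script to launch synthesis with default constraints.",
]

def retrieve(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (a stand-in for embedding search)."""
    q = tokenize(query)
    ranked = sorted(docs, key=lambda d: len(q & tokenize(d)), reverse=True)
    return ranked[:top_k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Place retrieved context ahead of the question to ground the model's answer."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return (
        "Use only the context below to answer.\n"
        f"Context:\n{context}\n"
        f"Question: {query}\nAnswer:"
    )

if __name__ == "__main__":
    print(build_prompt("Why does the L2 cache return stale data?", documents))
```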
ChipNeMo is tailored to Nvidia's GPUs and internal processes and is therefore not slated for broader commercial release. Nonetheless, the tool represents a pioneering approach to using an LLM to refine and speed up chip design methodologies and processes.
Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.