Chinese Researchers Used AI to Design RISC-V CPU in Under 5 Hours


A group of Chinese scientists has published (PDF) a paper titled "Pushing the Limits of Machine Design: Automated CPU Design with AI." The paper details the researchers' work in designing a new industrial-scale RISC-V CPU in under 5 hours. It is claimed this AI-automated feat was about 1,000 times faster than a human team could have completed a comparable CPU design. However, some may poke fun at the resulting AI-designed CPU performing approximately on par with an Intel i486.

The goal of the Chinese research team was to answer the question of whether machines can design chips like humans do. Earlier AI-crafted designs have been relatively small or limited in scope, the team reckons. Thus, to test the boundaries of AI design, the researchers set out to have an AI automatically design a RISC-V CPU.


Projects like this typically start with a period of machine learning. Training consisted of observing a series of CPU inputs and outputs. From this I/O the scientists generated a Binary Speculation Diagram (BSD) and leveraged principles of Monte Carlo-based expansion and Boolean functions to hone the accuracy and efficiency of the AI-based CPU design. Thus the CPU design was formed "from only external input-output observations instead of formal program code," the scientists explain. It also boasted an impressive 99.99999999999% accuracy.
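The paper's actual algorithm is far more involved, but the flavor of inferring a logic circuit purely from input-output observations can be sketched in miniature. The toy Python below (every name in it, such as build_tree and observe, is invented for illustration and is not from the paper) randomly samples I/O pairs from a hidden Boolean function, grows a small binary decision tree over the input bits, and falls back to a speculated default output for input regions it never observed. It is a loose analogy to the BSD idea, not the researchers' method.

```python
# Toy sketch only: infer a hidden Boolean function from sampled I/O pairs.
# This loosely mirrors the idea of building a speculation diagram from
# observations; it is NOT the paper's BSD algorithm.
import random

N_BITS = 3

def target(x):
    # The hidden "device" we can only observe externally:
    # the carry-out of a 1-bit full adder.
    a, b, cin = x
    return (a & b) | (cin & (a ^ b))

def observe(n_samples=6):
    # Monte Carlo-style observation: sample random inputs, record outputs.
    seen = {}
    for _ in range(n_samples):
        x = tuple(random.randint(0, 1) for _ in range(N_BITS))
        seen[x] = target(x)
    return seen

def build_tree(examples, bit=0):
    # Recursively split the observed examples on successive input bits.
    if not examples:
        return None                       # never observed: speculate later
    outputs = set(examples.values())
    if len(outputs) == 1:
        return outputs.pop()              # consistent output: make a leaf
    lo = {x: y for x, y in examples.items() if x[bit] == 0}
    hi = {x: y for x, y in examples.items() if x[bit] == 1}
    return (bit, build_tree(lo, bit + 1), build_tree(hi, bit + 1))

def evaluate(node, x, default=0):
    # Walk the tree; unobserved regions get a speculated default output.
    while isinstance(node, tuple):
        bit, lo, hi = node
        node = hi if x[bit] else lo
    return default if node is None else node

random.seed(0)
obs = observe()
tree = build_tree(obs)
all_inputs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
correct = sum(evaluate(tree, x) == target(x) for x in all_inputs)
print(f"recovered {correct}/8 truth-table rows from {len(obs)} observations")
```

In the actual work, Monte Carlo-based expansion and Boolean-function refinement scale this kind of inference up to millions of gates, which is how the team arrives at the accuracy figure quoted above.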

Using the above-outlined process, an automated AI design of a CPU was created. The taped-out RISC-V32IA instruction set CPU was fabricated at 65nm and could run at up to 300 MHz. Running the Linux (kernel 5.15) operating system and SPEC CINT 2000 on the AI-generated CPU validated its functionality. In Dhrystone benchmarks, the AI-generated CPU performed on par with an i486. Interestingly, it appears to be a little bit faster than an Acorn Archimedes A3010 in the same test.


Though some might be unimpressed by the performance of the AI-generated CPU, the scientists also seem quite proud that their generated BSD "discovered the von Neumann architecture from scratch."


Building a new RISC-V CPU from scratch using AI isn't just of academic interest, or of potential use only for making new CPUs from the ground up. According to the researchers, AI could be used to significantly reduce the design and optimization cycles in the existing semiconductor industry. Moreover, in their conclusion, the scientists even ponder whether this research might be taken further to form the foundation of a self-evolving machine.

This is by no means our first story on AI being used to advance computer processor designs. In March, we reported on Nvidia using AI to optimize chip designs, particularly the floor-planning work. Also, in May, we reported on Synopsys boasting its DSO.ai software had been used in over 200 customer chip designs.

Mark Tyson
News Editor

Mark Tyson is a news editor at Tom's Hardware. He enjoys covering the full breadth of PC tech, from business and semiconductor design to products approaching the edge of reason.

  • Metal Messiah.
    Interesting to know that they used a new AI approach to design a RISC-V CPU automatically, by generating the Boolean function/logic circuit represented by a BSD, which can also automatically generate optimal instruction-level parallelism. Much better than the conventional BDD generation approaches tackled in existing EDA tools.

    That's because "conventional" and traditional AI learning techniques usually fail when used to design CPUs purely from input-output observations.

    They can only generate correct circuit logic up to around 300 logic gates, which is no match for the latest industrial CPUs. Even the Intel 80486 comes in at roughly 300,000 logic gates.

    It appears the CPU has been codenamed "Qimeng 1" (in translation), and 4 million logic gates were generated in 5 hours, which is 4,000 times larger than the largest chip GPT-4 can design.

    Not directly related to this news:

    View: https://www.youtube.com/watch?v=yTMRGERZrQE&ab_channel=TechTechPotato%3AClips%27n%27Chips
  • Steve Nord_
    Yeah! It got fabbed! You know it's missing something basic, compared to the what, Athlon IIs and Core 2s, to not hit 3 GHz at 65 nm; maybe overall metal layers, maybe data from that first successful mask gen. But they ran the mini gauntlet and made the deadline for the run!

    Maybe even a readable outing on arXiv, thanks Chinese Research Team! (Will it have more than one conditional branch? Very very speculative execution? Represent everything in 4s complement? Discover content-addressable memory on its own for network to-do? Generate and throw trash, exceptions, flags or tightly scoped codepages at threads reaching out of scope?)

    Whoa, it's short! The separate methods section barely deals with the fab constraints. Notably their AI is reinforcement learning for an adder and a few Boolean theorems (couched in terms of oracles). So BSD here is their own invention, a Binary Speculation Diagram graph, rather than the Haight-Ashbury kind. Humble bragging rights to casting with that operation space. Can this be a 1034-bit adder?
  • JamesJones44
    I would be curious to know how much time it took to gather the data and train the model for the overall time calculation. I'm guessing those parts took longer than 5 hours.
  • TechyIT223
    What's the success rate of this method?
  • Metal Messiah.
    TechyIT223 said:
    What's the success rate of this method?

    Hard to confirm, since this is just an early proof of concept/experimental kind of thing.

    But they suggested that performance could still be improved with augmented algorithms, and in the paper's conclusion the team also speculates on a self-evolving machine that can design its own iterative upgrades and improvements.

    While that may be a bit far off in the future, the AI did independently discover the von Neumann architecture through its observation of inputs and outputs.

    This means there is room for improvement, and the algorithm can be tweaked to focus on fine-grained architecture optimization to work around traditional bottlenecks, a task which would be quite difficult for human engineers to accomplish.
  • Sippincider

    It is claimed this AI-automated feat was about 1,000 times faster than a human team could have completed a comparable CPU design. However, some may poke fun at the resulting AI-designed CPU performing approximately on par with an Intel i486.

    I wouldn't poke fun too quickly. What happens when we give AI that same 5,000 hours, and it produces a chip which blows everything out of the water and we humans have no idea how it works?
  • gg83
    Metal Messiah. said:
    Interesting to know that they used a new AI approach to design a RISC-V CPU automatically, by generating the Boolean function/logic circuit represented by a BSD, which can also automatically generate optimal instruction-level parallelism. Much better than the conventional BDD generation approaches tackled in existing EDA tools.

    That's because "conventional" and traditional AI learning techniques usually fail when used to design CPUs purely from input-output observations.

    They can only generate correct circuit logic up to around 300 logic gates, which is no match for the latest industrial CPUs. Even the Intel 80486 comes in at roughly 300,000 logic gates.

    It appears the CPU has been codenamed "Qimeng 1" (in translation), and 4 million logic gates were generated in 5 hours, which is 4,000 times larger than the largest chip GPT-4 can design.

    Not directly related to this news:

    View: https://www.youtube.com/watch?v=yTMRGERZrQE&ab_channel=TechTechPotato%3AClips%27n%27Chips
    I'd buy into whatever Jim Keller is selling. That guy is awesome.
  • shoddyyMic1
    gg83 said:
    I'd buy into whatever Jim Keller is selling. That guy is awesome.
    Yeah, too bad he's not much involved in chip design projects these days. Right now I suppose he's focusing more on AI and stuff like that. At Tenstorrent, right?
  • Yeah, Jim Keller joined AI chip startup Tenstorrent as CTO in December 2020, and became its CEO in January 2023.
  • TechyIT223
    Metal Messiah. said:
    Hard to confirm, since this is just an early proof of concept/experimental kind of thing.

    But they suggested that performance could still be improved with augmented algorithms, and in the paper's conclusion the team also speculates on a self-evolving machine that can design its own iterative upgrades and improvements.

    While that may be a bit far off in the future, the AI did independently discover the von Neumann architecture through its observation of inputs and outputs.

    This means there is room for improvement, and the algorithm can be tweaked to focus on fine-grained architecture optimization to work around traditional bottlenecks, a task which would be quite difficult for human engineers to accomplish.
    Thanks, that makes sense 🙂 I'd like to see more of this AI applied to chip design in the near future.