A team of researchers at New York University (NYU) has done the seemingly impossible: they've successfully designed a semiconductor chip without writing any hardware description language. Using nothing but plain English to define and describe the processor, the team showcased what human ingenuity, curiosity, and baseline knowledge can do when aided by the AI prowess of ChatGPT.
Surprisingly, it goes further: the chip wasn't only designed. It was manufactured, it was benchmarked, and it worked. The two hardware engineers' use of plain English shows just how valuable and powerful ChatGPT can be (as if we still had doubts, after the number of awe-inspiring things it's done already).
The chip designed by the research team and ChatGPT wasn't a full processor; nothing on the order of an Intel or AMD processor like the ones in our list of best CPUs. But it is an element of a whole CPU: the logic of a novel 8-bit accumulator-based microprocessor architecture. Accumulators are essentially registers (memory) where intermediate results are stored until a main calculation is completed, and they're integral to how CPUs work; perhaps other necessary bits can be designed the same way.
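To make the "accumulator" idea concrete, here is a minimal, purely illustrative Python sketch of how an 8-bit accumulator-based machine executes instructions. The instruction set (LOAD/ADD/STORE/HALT) is hypothetical and is not taken from the NYU design; it only shows the pattern where every arithmetic result passes through one central register.

```python
# Illustrative model of an 8-bit accumulator-based CPU.
# The instruction set below is hypothetical, NOT the NYU/ChatGPT chip's;
# it demonstrates the accumulator pattern: all intermediate results
# flow through a single register before being written back to memory.

MASK = 0xFF  # results wrap to 8 bits

def run(program, memory):
    acc = 0   # the accumulator: holds intermediate results
    pc = 0    # program counter
    while True:
        op, arg = program[pc]
        pc += 1
        if op == "LOAD":      # acc <- memory[arg]
            acc = memory[arg] & MASK
        elif op == "ADD":     # acc <- acc + memory[arg], mod 256
            acc = (acc + memory[arg]) & MASK
        elif op == "STORE":   # memory[arg] <- acc
            memory[arg] = acc
        elif op == "HALT":
            return acc

# Sum two bytes: memory[0] + memory[1] -> memory[2]
mem = {0: 200, 1: 100, 2: 0}
result = run([("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", 0)], mem)
print(result)  # 200 + 100 = 300, which wraps to 44 in 8 bits
```

Note how the 8-bit width forces the sum to wrap: that modular behavior is exactly what the fabricated silicon would exhibit in its registers.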
Usually, teams work in several stages to bring a chip from design to manufacturing; one of those stages translates the "plain English" that describes the chip and its capabilities into a chosen Hardware Description Language (HDL), such as Verilog. The HDL captures the chip's logic and structure, which downstream tools then synthesize into the actual geometry, density, and disposition of the elements required for the etching itself.
ChatGPT is a pattern-recognition machine (much like humans, although both of us are a bit more than that), which makes it an incredible help with languages of any kind: spoken, written, and, here specifically, hardware description. ChatGPT allowed the engineers to skip the manual HDL-writing stage, which, while impressive, must leave HDL specialists slightly nervous. Especially since the researchers said they expect the approach to reduce human-induced errors in the HDL translation process, contribute to productivity gains, shorten design time and time to market, and allow for more creative designs.
One thing that’s a bit more concerning (or debatable, at least) is the stated desire to eliminate the need for HDL fluency among chip designers. HDL design is an extremely specialized and complex field, and fluency in it is a relatively rare skill that’s very hard to master.
“The big challenge with hardware description languages is that not many people know how to write them,” Dr. Pearce said. “It’s quite hard to become an expert in them. That means we still have our best engineers doing menial things in these languages because there are just not that many engineers to do them.”
Of course, automating parts of this process will be a definite boon. It could alleviate the human bottleneck by speeding up existing specialists even as new ones are brought up and trained. But there’s a risk in making this skill entirely dependent on a software-based machine that needs electricity (and server connectivity, in ChatGPT’s case) to run.
There’s also the matter of trusting what’s essentially an inscrutable software black box and its outputs. We’ve seen what can happen with prompt injection, and LLMs aren’t immune to vulnerabilities. We could even consider them to have an expanded attack surface: besides being a piece of software, an LLM is a piece of software that results from training. And it isn’t science fiction to imagine a chip-designing LLM being compromised during its training phase so as to introduce a “demonically clever” hardware-based back door leading to... somewhere. This may sound hyperbolic, and yes, it’s at the absolute low end of the possibility scale; but with mutating malware and other nasty surprises springing from even today’s versions of Large Language Models, who’s to say what they’ll be producing tomorrow?
The researchers used commercially and publicly available Large Language Models (LLMs) to work on eight hardware design examples, working through the plain English text toward its Verilog (HDL) equivalent in a live back-and-forth between the engineers and the LLM.
“This study resulted in what we believe is the first fully AI-generated HDL sent for fabrication into a physical chip,” said NYU Tandon’s Dr. Hammond Pearce, research assistant professor, and a research team member. “Some AI models, like OpenAI’s ChatGPT and Google’s Bard, can generate software code in different programming languages, but their application in hardware design has not been extensively studied yet. This research shows AI can benefit hardware fabrication too, especially when it’s used conversationally, where you can have a kind of back-and-forth to perfect the designs.”
There are already several Electronic Design Automation (EDA) tools, with AIs showing impressive results in chip layout and other areas. But ChatGPT isn’t a piece of specialized software; apparently, it can write poetry and still manage an EDA cameo. The road toward becoming an EDA designer now has a much lower knowledge barrier to entry. Perhaps one day, enough bits and pieces of the CPU will be opened up that anyone with enough determination (and the invaluable help of ChatGPT) can design their own CPU architecture at home.
Yes, many questions can be asked about what that means. But doesn’t it have potential?