'Thermodynamic computing' could slash the energy use of AI image generation by a factor of ten billion, study claims — prototypes show promise, but building hardware that can rival current models remains a huge task
"It will still be necessary to work out how to build the hardware to do this"
A mind-bending new report claims that 'thermodynamic computing' could, in theory, drastically reduce the energy AI consumes to generate images, using just one ten-billionth of the energy of current popular tools. As reported by IEEE Spectrum, two recent studies hint at the potential of this burgeoning technology, though its proponents admit the approach is still rudimentary.
According to the report, Lawrence Berkeley National Laboratory staff scientist Stephen Whitelam claims thermodynamic computing could perform AI image generation "with a much lower energy cost than current digital hardware can." In a January 10 article, Whitelam and Corneel Casert, also of Berkeley, outlined how "it was possible to create a thermodynamic version of a neural network," laying the foundations for generating images using thermodynamic computing.
The world's first 'thermodynamic computing chip' reached tape-out last year. Thermodynamic computing is far more akin to quantum or probabilistic computing than to your traditional gaming PC, using noise and physical energy to solve problems.
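To give a flavor of what "using noise to solve problems" means, here is a deliberately toy sketch in plain NumPy (an illustrative analogy, not any real thermodynamic chip's interface): overdamped Langevin dynamics uses random kicks, the software stand-in for thermal noise, to drive a system toward samples from a target distribution.

```python
# Toy illustration of noise as a computational resource: Langevin dynamics.
# Random kicks push a population of walkers until they are distributed
# according to a target distribution (here the standard normal N(0, 1)).
# A physical thermodynamic device would get this noise "for free" from heat.
import numpy as np

rng = np.random.default_rng(1)
step = 0.01                       # integration step size
x = np.full(10_000, 5.0)          # walkers all start far from equilibrium

for _ in range(2_000):
    drift = -x                    # gradient of the log-density of N(0, 1)
    x += step * drift + np.sqrt(2 * step) * rng.normal(size=x.size)

# The walkers now approximate draws from N(0, 1).
print(f"mean={x.mean():.2f}, std={x.std():.2f}")   # ~0.00 and ~1.00
```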
According to the report, the thermodynamic computer is given a set of images and then left to let them degrade: natural random interactions run until the computer's components reach equilibrium. The computer is then tasked with working out the probability of reversing this decay process, adjusting its parameters to make that reversal as likely as possible.
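That decay-then-reverse loop is recognizably the logic of diffusion models, the technique behind most modern AI image generators. As a rough, hypothetical illustration only (a NumPy sketch of the idea, not Whitelam and Casert's actual method or hardware), here is the same loop on toy 1-D data: noise the data until it reaches equilibrium, learn to reverse each step, then generate new samples by running the reversal from pure noise.

```python
# Hypothetical sketch of the decay-then-reverse idea on toy 1-D data.
import numpy as np

rng = np.random.default_rng(0)
T, beta = 200, 0.05                                # decay steps, noise rate
data = rng.normal(loc=3.0, scale=0.5, size=5000)   # toy stand-in for images

# Forward ("decay") process: repeatedly mix the samples with noise until
# they are essentially at the equilibrium distribution N(0, 1).
xs = [data]
for _ in range(T):
    xs.append(np.sqrt(1 - beta) * xs[-1]
              + np.sqrt(beta) * rng.normal(size=data.shape))

# Learn the reversal: for each step, fit a linear map x_t -> x_{t-1} by
# least squares and record the residual spread. For Gaussian data the true
# reverse step is exactly linear plus Gaussian noise, so this recovers it.
steps = []
for t in range(T, 0, -1):
    A = np.stack([xs[t], np.ones_like(xs[t])], axis=1)
    w, *_ = np.linalg.lstsq(A, xs[t - 1], rcond=None)
    sigma = (xs[t - 1] - A @ w).std()
    steps.append((w[0], w[1], sigma))

# Generate: start from equilibrium noise and run the learned reversal.
x = rng.normal(size=2000)
for a, b, sigma in steps:
    x = a * x + b + sigma * rng.normal(size=x.size)

print(f"training data: mean={data.mean():.2f}, std={data.std():.2f}")
print(f"generated:     mean={x.mean():.2f}, std={x.std():.2f}")
```

The appeal, as the report describes it, is that a physical device whose components naturally relax toward equilibrium could run the forward "decay" step essentially for free, rather than simulating every noise step digitally as this sketch does.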
Whitelam followed this research up with an article in Physical Review Letters on January 20, in which he details how the process can be used to build a thermodynamic computer that generates images of handwritten digits.
Naturally, that's a long way from the intense image generation capabilities of Google Gemini's Nano Banana Pro, or any other AI image generator you can think of. However, it serves as a proof of concept that, one day, thermodynamic computing could be used for AI image generation.
"This research suggests that it’s possible to make hardware to do certain types of machine learning," Whitelam told IEEE. Specifically, "image generation — with considerably lower energy cost than we do at present." Given how rudimentary this proof of concept is, Whitelam warns that thermodynamic image generation to rival mainstream options is a long way off. "We don’t yet know how to design a thermodynamic computer that would be as good at image generation as, say, DALL-E," he reportedly said. "It will still be necessary to work out how to build the hardware to do this.”
That's quite the catch, but in a world where AI buildouts and data center growth are putting unprecedented strain on global energy supply, a future process that could reduce AI image generation energy usage by a factor of ten billion would certainly be a breakthrough.