MIT's Protonic Resistors Enable Deep Learning to Soar, In Analog

A team of researchers at the Massachusetts Institute of Technology (MIT) has been working on a new hardware resistor design for the next era of electronics scaling, particularly for AI processing tasks such as machine learning and neural networks.

Yet in what may seem like a throwback (if a throwback to the future can exist), their work focuses on a design that's more analog than digital in nature. Enter protonic programmable resistors: devices built to accelerate AI networks by mimicking our own neurons (and their interconnecting synapses) while speeding up their operation a million times -- and that's the actual figure, not just hyperbole.

All of this is done while cutting energy consumption down to a fraction of what's required by the transistor-based designs currently used for machine-learning workloads, such as Cerebras' record-breaking Wafer Scale Engine 2.

While our synapses and neurons are extremely impressive from a computational standpoint, they're limited by their "wetware" medium: water.

Water's electrical conduction is enough for our brains to operate, but those signals work through weak potentials: roughly 100 millivolts, propagating over milliseconds through trees of interconnected neurons (synapses are the junctions through which neurons communicate via electrical signals). One issue is that liquid water starts to decompose at voltages above 1.23 V, which is more or less the operating voltage of today's best CPUs. So there's a difficulty in simply "repurposing" biological designs for computing.
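For a back-of-the-envelope feel for those numbers, the short snippet below compares the voltages and timescales quoted above. The nanosecond switching time is a generic assumption for solid-state electronics, not a figure from the researchers.

```python
# Rough, illustrative comparison of biological signaling vs. solid-state switching,
# using the figures from the paragraph above plus one assumed value.
neuron_signal_v = 0.1        # ~100 millivolts, a typical neural signal
cpu_core_v = 1.2             # roughly the operating voltage of a modern CPU core
water_electrolysis_v = 1.23  # liquid water starts to decompose above this voltage

neuron_event_s = 1e-3        # biological signals propagate over milliseconds
solid_state_event_s = 1e-9   # assumption: nanosecond-scale solid-state switching

print(f"Voltage headroom over a neuron: {cpu_core_v / neuron_signal_v:.0f}x")
print(f"Speed gap, milliseconds vs. nanoseconds: {neuron_event_s / solid_state_event_s:.0e}x")
# -> about 12x more voltage and a factor of ~1e+06 in speed, which is
#    the order of magnitude behind the "million times" framing.
```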

Another issue is that biological neurons aren't built on the same scale as modern transistors. They're much bigger, ranging from 4 microns (0.004 mm) to 100 microns (0.1 mm) in diameter. With the latest available GPUs already carrying transistors at the 6 nm node (a nanometre being 1,000 times smaller than a micron), you can almost imagine the difference in scale, and how many more of these artificial neurons you can fit into the same space.
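As a rough sanity check on that scale gap, the snippet below divides a neuron's diameter by the 6 nm figure mentioned above. Keep in mind that "6 nm" is a process-node label rather than a literal transistor width, so treat this strictly as an order-of-magnitude illustration.

```python
# How many 6 nm features fit across one biological neuron? (order-of-magnitude only)
transistor_scale_nm = 6
neuron_diameters_um = (4, 100)   # 4 to 100 microns, per the article
nm_per_um = 1_000

for d in neuron_diameters_um:
    features_across = d * nm_per_um / transistor_scale_nm
    print(f"{d} um neuron ~= {features_across:,.0f} transistor-scale features across")
# -> roughly 667 across a 4-micron neuron and ~16,667 across a 100-micron one,
#    so in area the gap runs from hundreds of thousands to hundreds of millions.
```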

The research focused on creating solid-state resistors which, as the name implies, resist the passage of electricity; that is, they resist the ordered movement of electrons (negatively charged particles). If using a material that resists the flow of electricity (and should therefore generate heat) sounds counterintuitive, well, it is. But analog deep learning has two distinct advantages over its digital counterpart.

First, programmable resistors store the data required for training in the resistors themselves. When you program their resistance (in this case, by increasing or reducing the number of protons in certain areas of the chip), you're writing values directly into the chip's structures. That information is then already present in the analog chip: there's no need to ferry it in and out of external memory banks (RAM or VRAM), which is exactly what happens in most current chip designs. All of this saves latency and energy, as sketched below.
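To make the "information already lives in the chip" idea concrete, here's a minimal NumPy sketch of an analog crossbar performing a matrix-vector multiply in place. The array sizes and the crossbar_forward helper are illustrative assumptions; MIT's protonic devices are programmed electrochemically, not through a Python API, so this models the concept only.

```python
import numpy as np

# Minimal sketch of analog in-memory compute: the weights live in the array as
# conductances, so a layer's multiply-accumulate happens where the data already is,
# instead of shuttling weights between external memory and a processor.

rng = np.random.default_rng(0)

# "Program" the resistors once: each weight becomes a stored conductance.
weights = rng.normal(size=(4, 8))   # a small 4-output, 8-input layer (illustrative)
conductances = weights              # held in place inside the crossbar

def crossbar_forward(input_voltages):
    """Apply input voltages to the rows; each column's output current is the
    Kirchhoff sum of I = G * V contributions -- a dot product, done in analog."""
    return conductances @ input_voltages

x = rng.normal(size=8)              # input activations encoded as voltages
print(crossbar_forward(x))          # the layer's matrix-vector product, computed in place
```

The point of the sketch is the design choice, not the math: because the weights never leave the array, the memory traffic that dominates digital accelerators simply doesn't happen.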

The second advantage is speed: according to the researchers, their resistors are a million times faster (again, an actual figure) than previous-generation designs, because they're built with phosphosilicate glass (PSG), an inorganic material that is (surprise) compatible with silicon manufacturing techniques, since it's mainly silicon dioxide.

Perhaps materials research will save Moore's Law from its untimely death.

Francisco Pires
Freelance News Writer

Francisco Pires is a freelance news writer for Tom's Hardware with a soft side for quantum computing.