Facebook's chief artificial intelligence researcher told Bloomberg the company is working on a new semiconductor design built specifically to handle the massive amounts of data used by AI.
The researcher, Yann LeCun, is also a computer science professor at New York University who specializes in deep learning, so he's well placed to help tackle the hardware limitations AI researchers face. "We don't want to leave any stone unturned," LeCun told Bloomberg, "particularly if no one else is turning them over."
LeCun reportedly said that Facebook wants to develop semiconductors that can better train deep learning algorithms by manipulating data all at once instead of having to break it down into manageable tasks. (Essentially: Facebook wants its AI to eat the metaphorical cake in a single bite instead of slicing it up.)
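To make the "all at once" idea concrete, here is a minimal sketch (a hypothetical illustration, not Facebook's actual chip design or software) of the difference between feeding data through a toy network layer one sample at a time and processing the whole batch in a single wide operation, the kind of computation AI accelerators are built to speed up:

```python
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.standard_normal((1000, 64))   # 1000 samples, 64 features each
weights = rng.standard_normal((64, 10))    # one toy network layer

# Sliced-up approach: 1000 separate small matrix products, one per sample.
piecewise = np.array([x @ weights for x in inputs])

# Batched approach: the entire dataset in one matrix multiplication.
batched = inputs @ weights

# Both routes produce identical results; the batched form simply exposes
# far more parallelism for specialized hardware to exploit.
assert np.allclose(piecewise, batched)
```

The outputs match either way; what changes is how much parallel work the hardware sees at once.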
These chips would, ideally, let deep learning algorithms train smarter AI more quickly. They would also allow Facebook to rely on its own hardware instead of buying off-the-shelf products from other companies, letting it develop its AI hardware and software in tandem.
We already knew that Facebook planned to help design its own AI chips: Bloomberg reported last April that it was hiring a hardware team to design its own silicon. The company also announced at CES 2019 that it was working with Intel on the Nervana Neural Network Processor for Inference.
But there's a difference between working on a chip with a company like Intel and designing a new kind of semiconductor. It will be interesting to see where Facebook takes this project in the future. The company's known for taking on massive projects only to significantly roll back or outright cancel them a few years in.