OpenAI Exploring Building Custom AI Chips for ChatGPT: Report


OpenAI, the developer of the highly popular ChatGPT artificial intelligence chatbot, is contemplating the development of its own custom AI chips due to the high costs and scarce availability of Nvidia's compute GPUs and other processors for AI inference and training. The organization is even considering potential acquisitions to obtain the necessary IP and speed up the process, according to Reuters.

Designing a custom chip would be a major strategic shift for OpenAI and would entail significant investment. Furthermore, the effort could span several years, meaning continued reliance on major chip designers such as Nvidia, Intel, and AMD in the interim. Acquiring an existing chip company would be considerably easier and might accelerate the process, and there is no shortage of AI hardware startups to choose from, but even that route offers no assurance of success.

Entering the domain of chip development would position OpenAI alongside companies like Amazon Web Services (AWS), Google, and Tesla, all of which have undertaken initiatives to design critical chips for their services, including custom CPUs, custom AI processors for training and inference, and video transcoding silicon. It is worth noting that some tech giants have encountered obstacles in custom processor development. For instance, Meta had to abandon certain AI chip projects due to the challenges it faced.

The primary driver for OpenAI's exploration of a custom silicon route is the limited supply of AI GPUs, which the company uses to train and run its large language models. OpenAI's dependence on GPUs is evident from its use of a massive supercomputer provided by Microsoft, which employs 10,000 of Nvidia's A100 units. Meanwhile, Nvidia holds more than 80% of the global AI processor market share, according to Reuters.

OpenAI's CEO, Sam Altman, has voiced his concerns about the shortage of GPUs and the enormous expense of operating AI software on such platforms. The operating costs for ChatGPT alone are immense; if the service were to scale up to even a tenth of Google search's magnitude, it would require an estimated $48.1 billion in GPUs up front and around $16 billion in chips annually thereafter.
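To put those reported figures in perspective, a back-of-envelope sketch can translate the $48.1 billion initial outlay into an implied GPU count. The per-unit A100 price below is an illustrative assumption on our part, not a figure from OpenAI or the Reuters report.

```python
# Back-of-envelope sketch of the scaling estimate cited above.
# The unit price is an assumption for illustration only.

A100_UNIT_COST = 10_000       # assumed price per Nvidia A100, USD (hypothetical)
INITIAL_GPU_CAPEX = 48.1e9    # reported initial GPU outlay, USD
ANNUAL_CHIP_SPEND = 16e9      # reported ongoing annual chip spend, USD

# Implied number of GPUs at the assumed unit price
gpus_needed = INITIAL_GPU_CAPEX / A100_UNIT_COST
print(f"Implied GPU count: {gpus_needed:,.0f}")  # about 4.8 million units

# Ongoing spend relative to the initial outlay
annual_share = ANNUAL_CHIP_SPEND / INITIAL_GPU_CAPEX
print(f"Annual chip spend vs. initial outlay: {annual_share:.0%}")
```

At that assumed unit price, the estimate implies on the order of millions of accelerators, which illustrates why a cheaper in-house chip is attractive even after years of development cost.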

Interestingly, Microsoft, an OpenAI backer, is reportedly crafting its own custom AI processor. This move might indicate a potential divergence between the two companies. Then again, the two could end up sharing hardware: OpenAI could adopt Microsoft's custom silicon, or Microsoft could adopt OpenAI's.

Anton Shilov
Freelance News Writer

Anton Shilov is a Freelance News Writer at Tom’s Hardware US. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers and from modern process technologies and latest fab tools to high-tech industry trends.

  • Order 66
    Yay, AI-driven GPU shortage averted. /s seriously though, this might help things if OpenAI decides to go through with it.
    Reply
  • Elusive Ruse
    This just in: Jensen's jacket suffered a microtear.
    Reply
  • Order 66
    Elusive Ruse said:
    This just in: Jensen's jacket suffered a microtear.
    Oh no guess he will have to raise the prices of all 40 series cards by $500 to cover the cost of the new jacket. /s
    Reply
  • Elusive Ruse
    Order 66 said:
    Oh no guess he will have to raise the prices of all 40 series cards by $500 to cover the cost of the new jacket. /s
    Nah, he will have the video editors VFX fix the tear using next-gen neural network stitching methods.
    Reply
  • Crazyy8
    Elusive Ruse said:
    Nah, he will have the video editors VFX fix the tear using next-gen neural network stitching methods.
    DLSS 4.0 to increase the resolution of the jacket and use Thread Gen to generate "fake threads" onto the jacket with a 500% performance increase on certain jackets.
    Reply