OpenAI, the developer of the highly popular ChatGPT artificial intelligence chatbot, is contemplating developing its own custom AI chips due to the high costs and scarce availability of Nvidia's compute GPUs and other processors for AI inference and training. The organization is even considering potential acquisitions to obtain the necessary IP and speed up the process, according to Reuters.
Designing a custom chip would be a major strategic shift for OpenAI and would entail significant investment. Furthermore, an effort to create its own AI processors could span several years, which means continued reliance on major chip designers such as Nvidia, Intel, and AMD in the meantime. Acquiring an existing chip company would be considerably easier and might accelerate the process, given the number of AI hardware startups out there, but even that offers no assurance of success.
Entering the domain of chip development would position OpenAI alongside companies like Amazon Web Services (AWS), Google, and Tesla, all of which have undertaken initiatives to design critical chips for their services, including custom CPUs, custom AI processors for training and inference, and video transcoding hardware. It is worth noting that some tech giants have encountered obstacles in custom processor development. For instance, Meta had to abandon certain AI chip projects due to the challenges it faced.
The primary driver behind OpenAI's exploration of a custom silicon route is the limited supply of AI GPUs, which it uses to train and run its large language models. OpenAI's dependence on GPUs is evident from its use of a massive supercomputer provided by Microsoft, which employs 10,000 of Nvidia's A100 processors. Meanwhile, Nvidia holds more than 80% of the global AI processor market, according to Reuters.
OpenAI's CEO, Sam Altman, has voiced his concerns about the shortage of GPUs and the enormous expense of operating AI software on such platforms. The operating costs for ChatGPT alone are immense; if it were to scale up to even a tenth of Google Search's magnitude, the financial implications would be around $48.1 billion initially for GPUs and then an ongoing $16 billion annually.
Interestingly, Microsoft, a major OpenAI backer, is reportedly crafting its own custom AI processor. This move might indicate a potential drift in the relationship between the two companies. Alternatively, OpenAI could adopt Microsoft's hardware, and vice versa.
Anton Shilov is a Freelance News Writer at Tom's Hardware US. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.
Yay, AI-driven GPU shortage averted. /s Seriously though, this might help things if OpenAI decides to go through with it.
This just in: Jensen's jacket suffered a microtear.
Elusive Ruse said:
This just in: Jensen's jacket suffered a microtear.
Oh no, guess he will have to raise the prices of all 40 series cards by $500 to cover the cost of the new jacket. /s
Order 66 said:
Oh no, guess he will have to raise the prices of all 40 series cards by $500 to cover the cost of the new jacket. /s
Nah, he will have the video editors VFX fix the tear using next-gen neural network stitching methods.
Elusive Ruse said:
Nah, he will have the video editors VFX fix the tear using next-gen neural network stitching methods.
DLSS 4.0 to increase the resolution of the jacket and use Thread Gen to generate "fake threads" onto the jacket with a 500% performance increase on certain jackets.