What does it take for OpenAI to make its own AI chips?
- OpenAI has been discussing AI chip strategies since at least last year.
- Its options include working with other chip makers and diversifying its suppliers beyond Nvidia.
The modern AI we experience today would not have been possible without highly specialized chips. Neural networks, the fundamental algorithmic framework behind most of the significant AI advances of the last decade, depend heavily on this hardware; without it, none of today's breakthrough AI software would exist.
Most of the world’s AI chips are produced by California-based microchip maker Nvidia Corp. Over more than a decade, Nvidia has built a formidable lead in chips capable of executing intricate AI workloads, including image and facial recognition, speech recognition, and text generation for chatbots like ChatGPT.
Today, especially since OpenAI’s ChatGPT became a global phenomenon, Nvidia has become a one-stop shop for AI development, from chips to software and other services. The world’s insatiable hunger for processing power even pushed Nvidia past a US$1 trillion valuation this year.
While tech giants like Google, Amazon, Meta, IBM, and others have also produced AI chips, Nvidia accounts for more than 70% of AI chip sales today, and it holds an even more prominent position in training generative AI models, according to the research firm Omdia. For context, OpenAI’s ChatGPT runs on a Microsoft supercomputer that uses 10,000 Nvidia graphics processing units (GPUs).
But for OpenAI, while Nvidia is necessary, that dependency may need to be rethought in the long term. According to Bernstein analyst Stacy Rasgon, each ChatGPT query costs the company around 4 cents, and that bill will grow in line with ChatGPT usage.
Reports have also indicated that OpenAI spends a staggering US$700,000 daily to run ChatGPT. With that in mind, the company has been considering working on its own AI chips to avoid a costly reliance on Nvidia. CEO Sam Altman has indicated that the effort to get more chips is tied to two major concerns: a shortage of the advanced processors that power OpenAI’s software and the “eye-watering” costs associated with running the hardware necessary to power its efforts and products.
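To put those figures in perspective, here is a rough back-of-envelope sketch in Python. It only combines the numbers already cited above, the roughly 4-cent cost per query and the reported US$700,000 daily bill, so the implied query volume and annual total are illustrations, not reported statistics.

```python
# Back-of-envelope sketch combining the figures cited above.
# Assumptions (taken from the article, not measured data):
#   - each ChatGPT query costs roughly US$0.04 to serve (Bernstein estimate)
#   - running ChatGPT costs roughly US$700,000 per day (reported figure)

COST_PER_QUERY_USD = 0.04         # Bernstein's ~4-cent estimate
DAILY_RUNNING_COST_USD = 700_000  # reported daily spend

# Implied daily query volume if per-query serving costs dominate the daily bill
implied_queries_per_day = DAILY_RUNNING_COST_USD / COST_PER_QUERY_USD

# Annualised cost, assuming usage and per-query cost stay flat
implied_annual_cost_usd = DAILY_RUNNING_COST_USD * 365

print(f"Implied queries per day: {implied_queries_per_day:,.0f}")      # ~17.5 million
print(f"Implied annual running cost: ${implied_annual_cost_usd:,.0f}") # ~$255 million
```

Under those assumptions, ChatGPT would be serving on the order of 17 million queries a day and running up a nine-figure annual compute bill, which helps explain why reliance on a single chip supplier is a strategic concern.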
OpenAI’s plans for AI chips: To make or not to make?
According to recent internal discussions described to Reuters, the company has been actively weighing the question but has yet to decide on its next step. Those discussions have centered on easing the shortage of the expensive AI chips OpenAI relies on, people familiar with the matter told Reuters.
At the same time, there have been reports that Microsoft is also looking in-house and has accelerated work on a project codenamed Athena to build its own AI chips. While it is unclear whether OpenAI is involved in that project, Microsoft reportedly plans to make its AI chips available more broadly within the company, and to OpenAI, as early as this year.
A report by The Verge indicated that Microsoft may also have a roadmap for multiple future generations of chips. A separate report suggested that any chip reveal will likely occur at Microsoft’s Ignite conference in Seattle in November. If that happens, Athena is expected to compete with Nvidia’s flagship H100 GPU for AI workloads in data centers.
“The custom silicon has been secretly tested by small groups at Microsoft and partner OpenAI,” according to Maginative’s report. However, if OpenAI were to move ahead in building a custom chip independently of Microsoft, it would involve a heavy investment that could amount to hundreds of millions of dollars a year in costs, with no guarantee of success.
What about an acquisition?
While the company has been exploring making its own AI chips since late last year, sources claim that the ChatGPT owner has also evaluated a potential acquisition target. An acquisition of a chip company could undoubtedly speed the process of building OpenAI’s own chip, as it did for Amazon with its acquisition of Annapurna Labs in 2015.
“OpenAI had considered the path to the point where it performed due diligence on a potential acquisition target, according to one of the people familiar with its plans,” Reuters stated. Even if OpenAI’s plans include a custom chip, by acquisition or otherwise, the effort will likely take several years.
In short, whatever the path may be, OpenAI would still be highly dependent on Nvidia for a while.