CIOTech Outlook Team | Thursday, 10 July 2025, 03:08 IST
Nvidia is deepening its relationship with OpenAI, as the ChatGPT developer has confirmed that it has no current plans to adopt Google's in-house artificial intelligence chips at scale.
The confirmation follows the recently signed cloud services agreement between the two artificial intelligence providers. Nvidia underscored its place in OpenAI’s infrastructure by sharing the Reuters report on X (formerly Twitter) with the comment, "We’re proud to partner with OpenAI and continue powering the foundation of their work."
OpenAI stated that it has run early experiments with Google's tensor processing units (TPUs), but that it currently has no plans to deploy them at scale. OpenAI said it will continue to use and rely on GPUs, which is why it brought in AMD as an additional vendor to meet its growing computational demands.
The clarification comes shortly after OpenAI entered into a cloud services agreement with Google Cloud in May. While the agreement gives OpenAI access to Google’s infrastructure to train and deploy models such as ChatGPT, it does not signal a broader departure from the company's main hardware providers.
Google, in an effort to expand the reach of its TPUs, which until recently were used only internally, has begun offering the chips to outside customers. Apple and startups such as Anthropic and Safe Superintelligence (co-founded by OpenAI’s Ilya Sutskever) have reportedly adopted them.
While Google’s TPUs offer potential cost and performance advantages, OpenAI appears comfortable with its current Nvidia and AMD relationships for meeting the high-performance requirements of its AI systems, at least in the short term.