Nvidia (NASDAQ: NVDA) warned in its latest Form 10-K that three direct customers individually generated 12%, 11%, and 11% of its total revenue, roughly a third of the total between them. An additional indirect customer, widely believed to be OpenAI, also topped the 10% threshold through reseller channels. The disclosure highlights how heavily the chipmaker relies on a small group of hyperscale buyers.
Now OpenAI is hedging its bets. Reports show that the ChatGPT creator has begun renting Google Cloud’s fourth-generation TPU chips, diversifying away from Nvidia’s flagship H100s in search of lower costs and faster scaling.
"Nvidia sales are so dependent from OpenAI to the point the company include the risk as a specific disclosure in its financial report (article in post in quote) ⚠️ OpenAI is now starting to use other chips, like many others will do soon, to significantly cut costs. As I warned…"

— JustDario 🏊♂️ (@DarioCpx), June 27, 2025
Google’s TPUs are application-specific chips built for tensor math, and far thriftier on power: an A100 GPU draws roughly 400 W, while a Cloud TPU v3 typically sips 120-150 W on the same matrix-math workload. At hyperscale utilization, those watts translate into millions of dollars in annual opex.
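To see how those watts compound, here is a minimal back-of-envelope sketch. Only the per-chip wattages come from the figures above; the fleet size, electricity rate, and PUE overhead are illustrative assumptions, not reported numbers.

```python
# Back-of-envelope opex comparison. All constants below marked "assumed"
# are illustrative assumptions, not vendor-confirmed figures.

A100_WATTS = 400        # approximate A100 board power, per the article
TPU_V3_WATTS = 135      # midpoint of the 120-150 W range cited above
FLEET_SIZE = 10_000     # assumed hypothetical hyperscale deployment
HOURS_PER_YEAR = 24 * 365
USD_PER_KWH = 0.08      # assumed datacenter electricity rate
PUE = 1.2               # assumed power usage effectiveness (cooling overhead)

def annual_power_cost(watts_per_chip: float) -> float:
    """Annual electricity cost in USD for the whole fleet."""
    kwh = watts_per_chip / 1000 * HOURS_PER_YEAR * FLEET_SIZE * PUE
    return kwh * USD_PER_KWH

gpu_cost = annual_power_cost(A100_WATTS)
tpu_cost = annual_power_cost(TPU_V3_WATTS)
print(f"GPU fleet: ${gpu_cost:,.0f}/yr")   # ~ $3.4M/yr
print(f"TPU fleet: ${tpu_cost:,.0f}/yr")   # ~ $1.1M/yr
print(f"Savings:   ${gpu_cost - tpu_cost:,.0f}/yr")  # ~ $2.2M/yr
```

Even at these conservative assumptions, a 10,000-chip fleet saves on the order of $2 million per year in electricity alone, consistent with the claim that the wattage gap is worth millions in annual opex at hyperscale.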
Chinese start-up DeepSeek has already shown that judicious code-path pruning can chop inference costs by orders of magnitude, bluntly arguing that “hardware monopolies keep US models unprofitable.”
Against this backdrop, Nvidia executives have unloaded more than $1 billion in stock over the past year—$500 million in June alone as the share price touched fresh highs.
Information for this briefing was found via the sources mentioned. The author has no securities or affiliations related to this organization. Not a recommendation to buy or sell. Always do additional research and consult a professional before purchasing a security. The author holds no licenses.