OpenAI, the company behind ChatGPT, has initiated a partnership with Broadcom and Taiwan Semiconductor Manufacturing Company (TSMC) to develop its first custom-designed artificial intelligence chip, Reuters reports. The company is focusing on an inference chip, which runs trained AI models to process new information.
The artificial intelligence company has assembled a specialized team of approximately 20 chip engineers for this project, including Thomas Norrie and Richard Ho, who previously worked on Tensor Processing Units at Google. Through its collaboration with Broadcom, OpenAI has secured manufacturing capacity with TSMC, with plans to produce its first custom chip by 2026.
Supply Chain Maneuvering
In response to growing infrastructure demands, OpenAI is implementing a multi-faceted strategy to secure chip supply and manage costs. The company is expanding its processor sources by incorporating AMD chips alongside Nvidia’s graphics processing units (GPUs) through Microsoft’s Azure platform.
Initially, OpenAI considered establishing its own network of chip manufacturing facilities, known as foundries. However, the company abandoned these plans due to the substantial costs and time requirements involved, choosing instead to concentrate on chip design efforts.
The news had an immediate effect on the stock market, with Broadcom shares rising more than 4.5% and AMD shares gaining 3.7% following the announcement.
Market Impact
OpenAI’s decision to develop its own chip while maintaining relationships with multiple suppliers mirrors strategies employed by other major technology companies, including Amazon, Meta, Google, and Microsoft. As one of the largest purchasers of Nvidia’s GPUs, OpenAI’s moves could influence the broader technology sector.
Nvidia currently holds more than 80% of the AI chip market. However, supply constraints and rising costs have prompted major customers to explore alternatives. AMD has projected $4.5 billion in AI chip sales for 2024, following the launch of its MI300X chips in late 2023.
The development comes as OpenAI faces significant operational costs. The company is projected to post a $5 billion loss this year on $3.7 billion in revenue, with computing costs representing its largest expense category. These costs cover the hardware, electricity, and cloud services required to process large datasets and develop AI models.
Despite pursuing chip supply diversification, OpenAI maintains its commitment to working with Nvidia, particularly for access to Nvidia’s new generation of Blackwell chips. The company has been careful to preserve its relationship with the dominant chip manufacturer while pursuing its own development initiatives.