Tech giant Apple used Google’s tensor processing units (TPUs) to train two of its artificial intelligence (AI) models, the company revealed in a research paper.

The move contrasts with the industry’s heavy reliance on Nvidia’s graphics processing units (GPUs) for AI development.  

According to Reuters, Nvidia holds about 80% of the market, even when factoring in chips produced by Google, Amazon.com, and other cloud computing companies. 

At the Worldwide Developers Conference in June 2024, Apple unveiled Apple Intelligence, a personal intelligence system embedded within iOS 18, iPadOS 18, and macOS Sequoia.  

This system includes generative models tailored to everyday user tasks and capable of adapting to users’ current activities in real time.

These models are designed to enhance user experiences such as composing text, managing notifications, creating images, and simplifying interactions with apps.

Despite that market dominance, Nvidia’s hardware was notably absent from Apple’s research paper detailing its AI infrastructure.

Apple did not explicitly state that it avoided Nvidia chips; its description of the hardware and software infrastructure behind its AI tools and features simply made no reference to Nvidia hardware.

Instead, Apple highlighted its use of Google’s TPUs: 2,048 TPUv5p chips to train the on-device iPhone AI model and 8,192 TPUv4 processors to train the server AI model.

In contrast to Nvidia, which offers its chips and systems as separate products, Google provides access to TPUs via its Google Cloud Platform.  

Customers who wish to use these chips must develop their software within Google’s cloud environment. 

In the paper, Apple’s engineers noted that Google’s TPUs could enable them to build even larger and more sophisticated models than those described.

This development comes as Apple begins rolling out beta versions of Apple Intelligence and follows reports that the company delayed new AI features to address bugs.