Hyperscalers, the large cloud service providers that deliver computing and storage at enterprise scale, are looking to expand their partnerships beyond the traditional leaders in AI chips, most notably Nvidia.

This shift could be driven by the recognition that a single vendor may not be able to fully address the unique and evolving demands of these global operations. By collaborating with a broader range of semiconductor companies, hyperscalers aim to draw on each partner's expertise and innovative technologies to serve their specific needs.

One example of this is Oracle’s decision to partner with AMD and buy 30,000 AMD MI355X AI accelerators, which was revealed in its Q2 2025 earnings call. This was an unexpected move considering Oracle’s involvement in the Stargate Project, in which it aims to deploy 64,000 Nvidia GPUs at its Texas facility by the end of 2026. Oracle’s order with AMD suggests hyperscalers are looking to diversify their hardware vendors, which could begin to loosen Nvidia’s roughly 90% grip on the AI chip market.

Likewise, Google has decided to partner with MediaTek on the development of its next AI chip. Although Google already has a partnership with Broadcom, it is diversifying by collaborating with MediaTek because of MediaTek’s ties with TSMC and its ability to charge less per chip.

Focus on in-house chip development is a growing trend

A notable trend among hyperscalers is the increasing focus on the design and production of proprietary AI chips. This strategic move allows them to tailor their hardware specifically to the unique demands of their workloads, resulting in enhanced performance and efficiency. By developing their own chips, hyperscalers can fine-tune various aspects, such as processing power, memory architecture, and energy consumption, to align perfectly with their operational requirements.

In-house chip development also significantly reduces dependence on external suppliers, which can be a critical factor in maintaining competitive advantage. By controlling the design and production processes, hyperscalers can mitigate risks associated with supply chain disruptions, price volatility, and technological obsolescence.

For example, Meta has announced that it is testing its first in-house chip for training AI systems, as it looks to develop its own custom silicon and reduce reliance on external vendors. It is partnering with TSMC to build the chip on a five-nanometre process. Although Meta remains one of Nvidia’s biggest customers, rising infrastructure costs and DeepSeek’s recent breakthrough have raised concerns about the costs associated with chip development.

Hyperscalers will continue to focus on AI chip performance and efficiency

As the complexity and scale of AI workloads continue to escalate, hyperscalers are placing a heightened emphasis on the development or use of chips that not only deliver superior performance but also maximise energy efficiency. This dual focus is essential for effectively scaling AI applications, which often require substantial computational resources and can be energy-intensive. The demand for performance stems from the need to process vast amounts of data quickly and accurately. AI models, particularly those used in deep learning, require significant computational power to train and infer.

Hyperscalers are also devoting a growing share of their computing power to inference tasks, leaving less capacity for training new AI models. This imbalance has slowed the development of complex AI models, as insufficient compute is dedicated to training. By optimising their chips for high throughput and low latency, hyperscalers can keep up with the increasing demand for inference while retaining enough computing power to develop new and improved models.

Hyperscalers are keen to command more of their value chain

Hyperscalers are continuously evolving their AI chip strategies to keep up with increasingly fierce competition in the AI space, and with the technological progress that comes with it. Several of these companies have chosen to diversify their vendors rather than rely on just one, using collaboration to meet their specific needs.

By developing their own in-house chips, some companies have begun to decrease their reliance on external AI chip companies and shield themselves from possible price fluctuations. With demand for newer and better AI products increasing, hyperscalers will take advantage of economies of scale, aiming to gain more control over the value chain.