Alphabet is one of NVIDIA's largest purchasers of artificial intelligence chips, leasing these processors to Google Cloud customers including OpenAI and Meta Platforms. However, the company's ambitions to develop its own AI chips remain undiminished.
According to seven individuals involved in related negotiations, Alphabet has recently approached several small cloud service providers that primarily lease NVIDIA chips, proposing that their data centers also deploy the company's AI processors.
Company representatives involved in these deals have privately disclosed that Alphabet has reached an agreement with at least one such provider: London-based Fluidstack, which will deploy Alphabet's Tensor Processing Units (TPUs) in its New York data center.
Additionally, Alphabet has attempted to secure similar agreements with other cloud service providers focused on NVIDIA chips, including Crusoe, which is building a data center with extensive NVIDIA chip deployment for OpenAI, and CoreWeave, which leases NVIDIA chips to Microsoft and OpenAI.
Why Alphabet has decided, for the first time, to deploy TPUs in other cloud service providers' data centers remains unclear. Analysts suggest the company's internal data center construction may not be keeping pace with growing chip demand, or that it may be using other cloud providers to reach new TPU customers, such as AI application developers, mirroring the model under which cloud service providers lease out NVIDIA graphics cards.
If the latter scenario is accurate, Alphabet's strategy would represent more direct competition with NVIDIA, as the chip giant primarily sells processors to these cloud service providers. Regardless of the underlying motivation, deploying TPUs in other cloud service providers' data centers would inevitably reduce the number of NVIDIA GPUs utilized in these facilities.
A research team led by Gil Luria, equity research analyst at investment firm D.A. Davidson, indicated that an increasing number of cloud service providers and major AI developers are interested in TPUs as a means to reduce dependence on NVIDIA. After communicating with researchers and engineers from multiple leading artificial intelligence laboratories, they found positive industry sentiment toward Alphabet's machine learning and AI-customized accelerator chips.
Consequently, the analyst team believes that if Alphabet were to merge its TPU business with its AI research division DeepMind and spin it off as a separate public entity, market demand would be robust. According to Luria's team estimates, this business unit could be valued at approximately $900 billion, compared to their earlier estimate of $717 billion.
"No one wants to have just one source... to be completely dependent on a single supplier for critical components."
"If this business were actually spun off, investors would simultaneously acquire a leading AI accelerator chip supplier and a top-tier AI laboratory, which could become one of Alphabet's most valuable assets."
NVIDIA CEO Jensen Huang has expressed skepticism about competing chip projects, stating that AI application developers prefer GPUs due to their broader versatility and more robust software support.
Courting NVIDIA's "Friends"
The negotiations demonstrate Alphabet's efforts to approach emerging cloud service providers that NVIDIA has prioritized for support. Unlike Google Cloud and Amazon Web Services, these companies almost exclusively lease NVIDIA chips and show greater willingness to procure diverse NVIDIA products than traditional cloud service providers. NVIDIA has also invested capital in these companies and prioritized supply of its most sought-after chips.
Alphabet primarily uses TPUs to develop its proprietary AI models, such as the Gemini series, with internal TPU demand surging in recent years.
However, Alphabet has long leased TPUs to other companies. For example, Apple and Midjourney both rent TPUs through Google Cloud. In early summer, Alphabet even briefly sparked OpenAI's interest in leasing TPUs, though OpenAI subsequently changed course.
Internal discussions at Alphabet have explored expanding the TPU business to increase revenue and reduce the cloud computing division's dependence on expensive NVIDIA chips. Two former executives have disclosed that leadership has also considered selling TPUs directly to customers outside Google Cloud.
Analysts note that small cloud service providers like CoreWeave and Fluidstack (Fluidstack, for example, supplies NVIDIA GPUs to startups such as Mistral) have strong commercial incentives to prioritize NVIDIA-based servers, given AI developers' general preference for NVIDIA products.
However, Alphabet appears to have found ways to encourage Fluidstack to support its TPU expansion plans: if Fluidstack cannot afford the lease costs for its upcoming New York data center, Alphabet will provide up to $3.2 billion in backstop support. This commitment helped Fluidstack and its data center partners secure debt financing to construct facilities.
Rising TPU Demand
Alphabet's sixth-generation Trillium TPU has seen strong demand since opening to external customers in December. Analysts anticipate "significant increases" in demand for the seventh-generation Ironwood TPU, the company's first chip designed specifically for large-scale AI inference: deploying and running models after training is complete.
Analysts note that Alphabet's TPUs, deployed at full pod scale, can deliver up to 42.5 exaflops (quintillion floating-point operations per second) and offer substantially increased high-bandwidth memory capacity. The chips' "significantly improved cost efficiency," analysts say, is a primary factor attracting attention from leading laboratories.
Startup Anthropic previously used TPUs on a small scale, but analysts point out that the company is now recruiting TPU kernel engineers, potentially signaling a transition from Amazon Web Services' Trainium chips to TPUs. Trainium is Amazon's chip designed for AI training; Amazon has invested $8 billion in Anthropic.
Analysts also indicate that Elon Musk's xAI company has shown interest in purchasing TPUs, partly due to "significant improvements in JAX-TPU tool support" this year. JAX is a high-performance numerical computing library for Python, developed by Alphabet, that lets programs run efficiently on TPUs. Until recently, limitations in the JAX ecosystem constrained large-scale TPU deployment outside the company.
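The article describes JAX only in passing. As a rough illustration of what it does (this toy example is not drawn from the article, and the function names `mse`, `mse_fast`, and `grad_mse` are invented here), the core idea is that `jax.jit` compiles a Python function through XLA so the same code can target CPU, GPU, or TPU, and `jax.grad` derives gradient functions automatically:

```python
import jax
import jax.numpy as jnp

# A toy loss: mean squared error of a linear model x @ w against targets y.
def mse(w, x, y):
    pred = x @ w
    return jnp.mean((pred - y) ** 2)

# jax.jit traces the function once and compiles it with XLA; the same
# compiled program runs unchanged on CPU, GPU, or TPU backends.
mse_fast = jax.jit(mse)

# jax.grad builds a new function that computes d(mse)/d(w); it can be
# jit-compiled as well.
grad_mse = jax.jit(jax.grad(mse))

x = jnp.ones((4, 3))
w = jnp.zeros(3)   # predictions are all 0, targets are all 1
y = jnp.ones(4)

loss = mse_fast(w, x, y)   # mean of (0 - 1)^2 over 4 rows -> 1.0
g = grad_mse(w, x, y)      # gradient with respect to w, shape (3,)
```

The point of the design is that none of the code above mentions the hardware; the backend is chosen at runtime, which is why improved JAX-TPU tooling lowers the barrier to moving existing workloads onto TPUs.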
According to D.A. Davidson's DaVinci developer dataset, TPU-related developer activity on Google Cloud increased approximately 96% during the six-month period from February to August 2025.