Tech Giants Follow NVIDIA's Blueprint: Google and AMD Invest Heavily to Build AI Chip Ecosystems

Deep News
8 hours ago

NVIDIA's successful "computing-power-for-finance" loop, built by backing CoreWeave, has proven attractive enough that Google and AMD are now following suit, deploying substantial capital to build their own AI chip ecosystems.

On February 20, according to The Wall Street Journal citing informed sources, Alphabet is exploring ways to leverage its strong financial resources to build a broader AI ecosystem. The core strategy involves providing funding to data center partners on the condition that they use Alphabet's Tensor Processing Units (TPUs). Concurrently, reports indicate that Advanced Micro Devices is aggressively promoting its AI chips by guaranteeing loans for customers.

This strategy is viewed by the market as a direct imitation of NVIDIA's "CoreWeave model": by backing emerging cloud providers, or "Neoclouds," these companies can bypass the traditional cloud giants, which either rely on NVIDIA hardware or are developing their own chips, and establish dedicated distribution networks of their own.

**Emulating the NVIDIA Playbook: Investing in "Neoclouds" and Former Miners**

Sources reveal that Alphabet is in talks to invest approximately $100 million in cloud computing startup Fluidstack, a deal that would value Fluidstack at around $75 million. This is not merely a financial investment but a strategic move to create binding partnerships. Companies like Fluidstack are key players in the current AI computing market, specializing in providing computing power services for AI companies. NVIDIA previously used a similar tactic by supporting CoreWeave, enabling it to stockpile large quantities of GPUs and carve out a market niche.

Alphabet now aims to replicate this approach. Sources state that Alphabet hopes the investment will "help amplify Fluidstack's growth potential and encourage more computing providers to use its AI chips."

Beyond direct investment, Alphabet is also reaching out to former cryptocurrency mining companies undergoing transformation. Reports indicate that Alphabet has provided financing support for projects involving companies like Hut 8, Cipher Mining, and TeraWulf. These firms possess ready-made data center infrastructure and are eager to transition into AI computing factories; Alphabet's financial backing is expected to secure their adoption of TPUs.

**AMD's Aggressive Gambit: Guaranteeing Loans and Offering Leasebacks**

Compared to Alphabet's direct investments, Advanced Micro Devices' approach appears more aggressive and entails higher risk. Reports indicate that AMD will provide a substantial guarantee for a $300 million loan from Goldman Sachs to data center startup Crusoe, with the funds intended for purchasing AMD's AI chips.

A notable aspect is the "safety-net clause." Sources reveal that if Crusoe cannot find customers to use the chips, AMD has agreed to lease them back from Crusoe. This means AMD acts as the "lessee of last resort," eliminating demand-side concerns for the client.

While this tactic may boost short-term sales, it exposes the chipmaker to significant risk if AI demand slows. A market downturn would directly impact AMD's balance sheet.

**Why the Detour?**

The reason both Alphabet and AMD are choosing this indirect path is that the main route is effectively blocked. For Alphabet, although star startups like Anthropic are increasing their use of TPUs, traditional cloud competitors such as Amazon Web Services and Microsoft Azure show little interest in TPUs.

Industry insiders note, "Major cloud service providers seem lukewarm, partly because they view Alphabet as a competitor." Furthermore, AWS is itself heavily invested in developing its own AI chips. Supporting neutral third-party "Neoclouds" has therefore become the most viable way for Alphabet and AMD to break through the blockade.

**Internal Debates and Production Bottlenecks**

To accelerate TPU commercialization, there have even been internal discussions at Alphabet about organizational restructuring. Sources indicate that some managers within Google Cloud have revived a long-standing internal debate over whether to spin the TPU team out into a separate unit. Such a spin-off could allow Alphabet to attract external capital and expand investment opportunities.

However, this proposal has been officially denied by Alphabet. A representative stated clearly that "there are no plans to reorganize the TPU team" and emphasized "the advantages of keeping the chip team closely integrated with other parts of the company, such as the Gemini model development team."

Beyond organizational issues, a more practical obstacle is production capacity. Although Alphabet currently collaborates with Broadcom on TPU design, with manufacturing handled by Taiwan Semiconductor Manufacturing Company (TSMC), TSMC's advanced production capacity is stretched thin amid soaring global AI demand.

Sources suggest that "TSMC may prioritize its largest customer, NVIDIA, over Alphabet." Additionally, a global shortage of High Bandwidth Memory chips, essential for AI chips, presents another significant challenge that Alphabet must overcome to increase TPU shipments.

