Earnings Preview | Cloud Giants Launch "AI Cost Revolution," ASIC Era Dawns! Marvell Technology (MRVL.US) Poised for Strong Results

Stock News
9 hours ago

Marvell Technology (MRVL.US), a leader in customized AI chips (AI ASICs) for large-scale AI data centers and a key partner on Amazon Web Services' Trainium series of AI ASICs, is scheduled to report earnings after the U.S. market closes on March 5, Eastern Time. Wall Street analysts broadly anticipate that the rising wave of AI inference, together with the trend of integrating large AI models into enterprise operations (often termed "micro-training"), will drive significant demand for more cost-effective AI ASICs. That trend is expected to challenge Nvidia's dominance of the AI chip sector, where its market share is estimated at nearly 90%. Consequently, analysts project robust financial growth for ASIC leaders such as Marvell and the larger Broadcom (AVGO.US), with management likely to issue strong forward guidance.

In its most recent quarterly report for Q3 fiscal year 2026 (covering performance up to November 1, 2025), Marvell reported net revenue of approximately $2.075 billion, representing year-over-year growth of about 37%, slightly exceeding market expectations. Adjusted earnings per share also surpassed Wall Street forecasts. The company's strong performance in the third quarter reflects explosive expansion in demand for custom AI ASICs, fueled by cloud leaders' intensive construction and expansion of AI data centers.

According to consensus estimates from Wall Street analysts compiled by Zacks Investment Research, Marvell's adjusted EPS for the fourth fiscal quarter is projected to be around $0.79, implying a potential increase of 31.7% compared to the same period last year. Revenue for the quarter is anticipated to be approximately $2.21 billion, suggesting significant year-over-year growth of 21% on top of a strong prior-year base. For the full fiscal year, analysts generally expect EPS of $2.84, which would represent a surge of 80.9% compared to the previous year. Revenue expectations for Marvell's current and next fiscal years are $8.18 billion and $10 billion, respectively, implying growth rates of 41.8% and 22.3%.
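As a quick sanity check, the prior-year baselines implied by these consensus figures can be backed out of the stated growth rates. The sketch below is purely illustrative; the estimates and growth percentages are those cited above, and the back-calculation is the only thing added:

```python
# Back out the implied year-ago figure from a current consensus estimate
# and its stated year-over-year growth rate.
def prior_year(value, growth_pct):
    """Implied year-ago figure given a current estimate and YoY growth (%)."""
    return value / (1 + growth_pct / 100)

# Consensus figures cited in the article (EPS in dollars, revenue in $ billions).
print(round(prior_year(0.79, 31.7), 2))   # implied Q4 EPS base: ~$0.60
print(round(prior_year(2.21, 21.0), 2))   # implied Q4 revenue base: ~$1.83B
print(round(prior_year(2.84, 80.9), 2))   # implied full-year EPS base: ~$1.57
print(round(prior_year(8.18, 41.8), 2))   # implied full-year revenue base: ~$5.77B
```

The implied baselines are internally consistent with each other (four quarters near the ~$1.8B-$2.1B range summing to roughly $5.8B), which suggests the consensus figures hang together.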

Furthermore, Marvell's recent acquisition of a company specializing in optical interconnect technology is expected to enhance its capabilities in high-bandwidth, low-latency AI data center infrastructure. This acquisition is projected to gradually contribute to revenue growth in the coming years and help the company expand its footprint within the AI ecosystem. In its previous earnings report, alongside strong Q3 results and optimistic guidance, the chipmaker also announced a significant $3.25 billion deal to acquire Celestial AI, a startup focused on optical I/O interconnects, to strengthen its networking portfolio.

Marvell's CEO, Matt Murphy, stated on the earnings call that Celestial's technology will be integrated into Marvell's next-generation silicon photonics-based infrastructure hardware products. These products are expected to open up a new, potentially massive market opportunity estimated at $100 billion for the company. Murphy and other executives indicated that they anticipate meaningful revenue contributions from the Celestial AI business starting in the second half of fiscal year 2028, reaching an annualized revenue run rate of approximately $500 million by Q4 FY2028, and doubling to $1 billion by Q4 FY2029.

Market concerns about Nvidia's future prospects are not unfounded. The global generative AI boom has accelerated AI chip development among cloud and chip giants, who are racing to design the fastest, most power-efficient AI computing clusters for advanced data centers. Marvell and its primary competitor, Broadcom, leverage their expertise in high-speed interconnects and chip IP to collaborate with cloud hyperscalers like Amazon, Google, and Microsoft on custom AI ASIC clusters tailored to specific data center needs. This ASIC business has become critically important for both companies, as illustrated by Broadcom's long-running partnership with Google on TPU AI clusters.

The new head of AI infrastructure at Amazon, Peter DeSantis, emphasized in a recent media interview, "If we can build models on our own custom AI chips, we can do so at a fraction of the cost of pure-play AI model providers." He added, "Building hyperscale AI data centers does present cost challenges. For AI to truly transform everything, the cost structure must change."

While Nvidia (NVDA.US) currently holds a dominant share in the core AI chip market, and just reported exceptionally strong Q4 FY2026 results and guidance, its stock recently experienced a significant drop. This decline is largely attributed to growing market apprehension about announcements from hyperscalers regarding their plans to develop more cost-effective, in-house AI ASICs, which are seen as a potential long-term risk to Nvidia's hegemony.

Several developments collectively signal a major shift: plans by Anthropic, often called an "OpenAI rival," to spend hundreds of billions of dollars acquiring 1 million TPU chips; Meta's consideration of purchasing billions of dollars' worth of Google TPU infrastructure for its massive AI data centers; and Amazon's initiatives to develop AI models on Trainium and Inferentia. As cloud giants mount an "AI computing cost revolution" to increase the penetration of their custom ASICs, concerns about Nvidia's future market position are justified.

The wave of AI inference is set to significantly challenge Nvidia's market share. Major economic and power constraints are driving companies like Microsoft, Amazon, Google, and Meta to pursue in-house AI ASIC development. The primary goal is to achieve better cost and power efficiency for AI computing clusters. The enormous costs associated with building hyperscale AI data centers, akin to projects like "Stargate," are pushing tech giants to prioritize economic efficiency and extreme optimization of "cost per token" and "output per watt" under power constraints. This environment heralds a prosperous era for AI ASIC technology.

Furthermore, factors like persistent supply constraints, high costs, and delivery bottlenecks associated with advanced AI GPU clusters like Nvidia's Blackwell architecture make in-house ASICs an attractive "second source" of capacity. This provides cloud providers with greater leverage in procurement negotiations, product pricing, and ultimately, cloud service margins. Additionally, major cloud providers can opt for co-designing the entire stack—from chip and interconnect to system, compiler/runtime, scheduling, and observability—thereby improving infrastructure utilization and reducing Total Cost of Ownership (TCO).

While Nvidia's AI GPUs dominate the AI training segment, which demands high versatility and rapid iteration of computing systems, the AI inference segment, following the scaling of advanced AI technologies, prioritizes metrics like cost per token, latency, and energy efficiency. For instance, Google has positioned its Ironwood TPU generation as "built for the AI inference era," emphasizing performance, power efficiency, cost-effectiveness, and scalability. However, recent actions by Amazon demonstrate that AI ASICs also possess significant potential for training large models.

In the medium to long term, AI ASIC computing systems will undoubtedly erode Nvidia's monopoly premium and capture some market share, though not through a simple, linear replacement of GPU systems. The fundamental reason is that competition in the inference era shifts beyond pure "peak compute power" to encompass cost per token, power consumption, memory bandwidth utilization, interconnect efficiency, and overall TCO achieved through hardware-software co-design. For specific workloads, ASICs—with their custom dataflow, compilers, and interconnects—are inherently better positioned to achieve higher cost-effectiveness compared to general-purpose GPUs.

For Nvidia and AMD, this landscape likely implies real margin pressure, manifesting as reduced pricing power, market share erosion, and compression of valuation premiums, rather than an absolute collapse in demand. The AI inference super-cycle will undoubtedly challenge the GPU-dominated monopoly, but the impact is more likely to reshape industry profit pools and customer procurement structures than to invalidate the underlying growth narrative for GPUs. AWS explicitly positions Trainium and Inferentia as dedicated accelerators for generative AI training and inference, claiming 30-40% better price-performance for Trainium2 than its AI GPU cloud instances. Google has also stated that training and inference for Gemini 2.0 run entirely on TPUs. This indicates that hyperscalers' use of custom ASICs for core model training and inference has moved beyond proof-of-concept into a replicable, industrial-scale phase.

Disclaimer: Investing carries risk. This is not financial advice. The above content should not be regarded as an offer, recommendation, or solicitation to acquire or dispose of any financial products, and any associated discussions, comments, or posts by the author or other users should not be considered as such either. It is for general information purposes only and does not consider your own investment objectives, financial situation, or needs. TTM assumes no responsibility or warranty for the accuracy and completeness of the information; investors should do their own research and may seek professional advice before investing.
