NVIDIA CEO Jensen Huang Announces Major Revenue Forecast, Sparks Stock Surge

Deep News
7 hours ago

NVIDIA CEO Jensen Huang delivered a highly optimistic outlook at the company's annual GTC conference, often referred to as the "AI Spring Gala," which kicked off in the early hours of the morning. During his keynote address, Huang said he expects the company's flagship chips to generate at least $1 trillion in revenue by the end of 2027. The announcement triggered a sharp surge in NVIDIA's stock price, with intraday gains approaching 5%. The news also lifted the Nasdaq index, which rose nearly 2% at one point during the trading session.

Additionally, NVIDIA introduced the Space-1 Vera Rubin module, designed to deploy data-center-level AI computing capabilities onto satellites and Orbital Data Centers (ODCs). The company emphasized its focus on in-orbit inference, real-time geospatial intelligence, and autonomous space missions.

Overnight, the three major U.S. stock indices all closed significantly higher. The Dow Jones Industrial Average rose 0.83%, the Nasdaq Composite climbed 1.22%, and the S&P 500 gained 1.01%. Large-cap technology stocks collectively strengthened, and most chip stocks also finished in positive territory, with the Philadelphia Semiconductor Index advancing nearly 2%. Major European stock indices also closed broadly higher, with Germany's DAX 30 index finishing up 0.67%.

In contrast, international oil prices fell sharply. WTI crude futures plunged 5.28% to settle at $93.50 per barrel, while Brent crude futures declined 2.84% to $100.21 per barrel.

Jensen Huang announced the $1 trillion revenue forecast on March 16th, Eastern Time, at NVIDIA's annual GTC developer conference. He projected that the company's next-generation AI accelerator chip architecture, Blackwell, along with the subsequent Rubin product line, would achieve this substantial revenue milestone by the end of 2027. This figure far exceeds Huang's previous sales prediction of $500 billion made in October 2025, once again highlighting the rapid expansion of the AI infrastructure investment wave.

Stimulated by this news, NVIDIA's stock experienced a rapid intraday ascent, with gains nearing 5% before paring some of the increase; the stock ultimately closed 1.65% higher. Amid the global AI boom, NVIDIA has solidified its position as the core hardware supplier in the AI infrastructure wave. Market consensus suggests that global technology companies' capital expenditures on AI infrastructure could reach hundreds of billions of dollars over the next few years.

NVIDIA's Chief Financial Officer, Colette Kress, previously stated at an event hosted by JPMorgan that, due to strong demand, the company is more optimistic about its data center business. She indicated that by the end of 2026, the expected revenue from NVIDIA's data center chips would "definitely" surpass the $500 billion forecast given in October of last year.

Wall Street analysts believe that NVIDIA's $1 trillion revenue target not only reflects confidence in its own product demand but also indicates the rapidly expanding size of the entire AI infrastructure market. For capital markets, this target alleviates investor concerns about a potential slowdown in AI demand and reinforces a core assessment: the AI computing power cycle is still in its early stages, not nearing its end. If achieved, this goal implies that global tech companies will continue to maintain high-intensity investment in AI servers, GPUs, and related systems for years to come.

As AI models grow in scale, inference demand surges, and enterprise-level AI applications accelerate their deployment, AI computing power is increasingly becoming a long-term capital expenditure direction, similar to cloud computing infrastructure.

On the product front, NVIDIA officially launched DLSS 5 at its annual GTC developer conference, describing it as the company's most significant breakthrough in computer graphics since real-time ray tracing was introduced in 2018. Utilizing a real-time neural rendering model, DLSS 5 injects pixels with "cinematic-grade" lighting and material detail, aiming to achieve interactive visuals in games that are close to Hollywood-level visual effects. In his speech, Huang compared DLSS 5 to the "GPT moment for graphics," emphasizing the new balance generative AI strikes between visual expression and artistic controllability.

According to NVIDIA, numerous top-tier publishers and game developers will integrate DLSS 5, including Bethesda, CAPCOM, NetEase, Tencent, Ubisoft, and Warner Bros. Games.

Further significant announcements included two new rack-scale compute products. The Vera CPU rack integrates 256 Vera CPUs per rack, offering double the computational efficiency and a 50% increase in operating speed compared to traditional CPUs. The Groq 3 LPX rack is equipped with 256 LPU processors, providing 128GB of on-chip SRAM and 640 TB/s of expanded bandwidth. When combined with the Vera Rubin platform, the LPX is expected to deliver a 35x improvement in inference throughput per watt.

Huang announced that the LPU chips will be manufactured by Samsung, with rack shipments anticipated to begin in the second half of this year. These rack systems all utilize liquid-cooling architecture.

The highly anticipated Spectrum-6 SPX, as expected, incorporates co-packaged optics (CPO) technology, delivering 5x higher optical power efficiency and 10x greater network reliability.

NVIDIA also unveiled the Space-1 Vera Rubin module, aimed at deploying data-center-level AI computing power to satellites and Orbital Data Centers. The company highlighted that its product portfolio—including Jetson Orin, IGX Thor, the RTX PRO 6000 Blackwell GPU, and the future Space-1 module—forms a complete computing architecture spanning from orbital edge computing to ground-based AI data centers and cloud analysis.

By venturing into new territory, NVIDIA is positioning its AI agent infrastructure as a new growth vector. NemoClaw serves as the infrastructure layer for the OpenClaw agent platform, enabling the deployment of AI agents with a "single command" and integrating Nemotron models with the OpenShell runtime environment to enhance security, privacy, and sandboxing. The focus is not only on simplified deployment but also on secure operation. NVIDIA emphasized that NemoClaw can run on RTX PCs, RTX PRO workstations, and devices like the DGX Station and DGX Spark, underscoring that "always-on AI assistants" require dedicated computing hardware.

NVIDIA also announced a further expansion of its "open model ecosystem," covering three key AI domains: Agentic AI, Physical AI, and Medical AI.

Disclaimer: Investing carries risk. This is not financial advice. The above content should not be regarded as an offer, recommendation, or solicitation to acquire or dispose of any financial products; nor should any associated discussions, comments, or posts by the author or other users be considered as such. It is for general information purposes only and does not take into account your investment objectives, financial situation, or needs. TTM assumes no responsibility or warranty for the accuracy or completeness of the information; investors should do their own research and may seek professional advice before investing.
