Future computing power competition will no longer be about individual data centers, but about cross-regional computing networks. As AI computing demand increases, single data centers are gradually approaching limits in terms of power, cooling, and land resources. Recently, NVIDIA unveiled its new Spectrum-XGS Ethernet technology at the Hot Chips conference. Through the innovative concept of "scale-across," this new technology integrates multiple geographically distributed data centers into a unified gigawatt-scale AI superfactory to address the physical limitations of individual data centers in terms of power and space.
As a coordinated hardware-software solution, the technology uses dynamic distance-adaptive algorithms and advanced congestion control to tackle the high latency and performance jitter that traditional Ethernet suffers over long-distance data center interconnect (DCI), significantly improving distributed AI training efficiency.
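NVIDIA has not published the algorithm's details, but the core problem distance-adaptive congestion control addresses can be illustrated with the standard bandwidth-delay product: the farther apart two data centers sit, the more data must be kept in flight to keep a link busy, so window sizes and pacing must scale with distance. A toy sketch, with the function name and all figures purely illustrative:

```python
# Toy bandwidth-delay-product sizing (illustrative only; not NVIDIA's algorithm).
C_FIBER_KM_PER_US = 0.2  # light in solid-core fiber covers roughly 0.2 km per microsecond

def required_window_bytes(link_gbps: float, distance_km: float) -> int:
    """Bytes that must be in flight to keep a long link fully utilized."""
    rtt_us = 2 * distance_km / C_FIBER_KM_PER_US  # round-trip propagation delay only
    bytes_per_us = link_gbps * 125                # Gb/s -> bytes per microsecond
    return round(bytes_per_us * rtt_us)

# The same 400 Gb/s link, intra-building vs. cross-metro:
print(required_window_bytes(400, 0.5))  # 250000 bytes (~0.25 MB in flight)
print(required_window_bytes(400, 80))   # 40000000 bytes (~40 MB in flight)
```

A controller tuned for the in-building case starves a cross-metro link by two orders of magnitude, which is why long-haul DCI needs distance-aware buffering and pacing rather than data-center defaults.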
Zhuang Changlei, Vice President of AI/Intelligent Manufacturing at Cloudwise Capital, told 21st Century Business Herald that future computing power competition will no longer be about individual data centers, but about cross-regional computing networks. NVIDIA's move aims to build a global "AI factory" network providing ubiquitous computing services.
In capital markets, related plays such as hollow-core fiber and optical modules have drawn significant attention. As of press time this week, Yangtze Optical Fibre and Cable had hit its daily limit twice in three sessions, AAOI had surged more than 15%, Eoptolink had gained more than 10%, and T&S Communications had risen more than 20%.
**Rise of Scale-Across Model**
To build powerful AI computing clusters, there have generally been two paths: scale-up and scale-out, with scale-up currently delivering better results. Cathay Haitong Research points out that the bandwidth of scale-up networks is significantly higher than that of cross-server scale-out networks, making high-bandwidth, low-latency scale-up networks the mainstream technical approach in the industry.
Scale-across has now emerged as the "third pillar" of AI computing alongside scale-up and scale-out. Zhuang Changlei notes that the traditional scale-up (increasing GPU density per rack) and scale-out (adding racks within the same data center) models cannot keep pace with the ever-growing demand for AI computing power. Scale-across breaks geographical boundaries, allowing data centers in different cities, countries, or even continents to be integrated into a unified pool of computing resources.
CoreWeave, one of the first customers for NVIDIA Spectrum-XGS, reportedly improved its data center integration efficiency by 40% after adopting the technology.
NVIDIA's use of Spectrum-XGS Ethernet to combine multiple distributed data centers into a GW-level AI supercomputing center mainly addresses large model training needs. "Training large models with hundreds of billions or even trillions of parameters requires continuous operation for months or longer, placing extremely high demands on computing cluster scale and stability," Zhuang Changlei points out. By integrating global resources through Spectrum-XGS, training time can be significantly reduced and R&D efficiency improved.
**Reshaping Data Centers**
Western Securities research reports indicate that the data center industry mainly includes upstream equipment, facilities and software suppliers, midstream IDC builders and service providers, and downstream application customers across various industries. Meanwhile, driven by AI technology development and exponential growth in the AIGC industry, traditional Internet Data Centers (IDC) are accelerating their evolution toward Artificial Intelligence Data Centers (AIDC).
Zhuang Changlei believes the scale-across model effectively adds a "fourth network layer" (a cross-domain extension layer) above the traditional three-tier architecture (core, aggregation, access). The GW-level AI supercomputing centers NVIDIA envisions are expected to have profound impacts on the data center industry chain.
First, it drives upgrades to optical communication infrastructure. GW-level centers rely on high-speed, low-latency optical links. For example, a switch serving 32 GPU nodes requires roughly four times as much fiber as a traditional cloud network, and one serving 72 GPU nodes requires roughly 16 times as much. This will significantly boost demand for optical modules and fiber optic cable, particularly 1.6T/3.2T optical modules and hollow-core fiber.
Second, it drives demand for high-speed PCBs. High-end switches and optical modules require high-layer-count boards (22+ layers) or 5th-order-plus HDI boards to support high-speed signal transmission and high-density integration. For example, the compute trays in GB200 NVL72 cabinets use 22-layer, 5th-order HDI boards, while the switch trays use 24-layer, 6th-order HDI boards. This benefits high-end PCB manufacturers.
Finally, it accelerates liquid cooling technology adoption, promoting the "electrification of computing power" trend. Future computing power may be scheduled and delivered through "grids" like electricity. GW-level AI factories are the prototype of this trend, potentially leading to global computing power scheduling networks.
**Hollow-Core Fiber May Benefit First**
CITIC Securities Research points out that NVIDIA's move not only marks AI networks transitioning from within data centers to cross-regional interconnection, but also indicates DCI's increasingly prominent core position in future AI training, potentially driving rapid growth in demand for next-generation transmission technologies like hollow-core fiber with ultra-low latency and high-capacity advantages.
Specifically, CITIC Securities believes hollow-core fiber replaces traditional glass cores with gas or vacuum, offering characteristics of low transmission delay, ultra-low loss, and weak nonlinearity compared to traditional fiber, plus an ultra-wide working frequency band exceeding 1000nm and higher transmission capacity, efficiently meeting multi-scenario demands.
According to Corning's March 2025 investor conference, the number of North American data center nodes is expected to expand from 6 today to 12 by 2030. As DCI demand grows and Ethernet extends into long-haul transmission, demand for hollow-core fiber is expected to grow rapidly.
Zhuang Changlei adds that light travels about 45% faster in air than in silica glass, cutting transmission latency by roughly 30%, while attenuation can be more than 50% lower (hollow-core fiber attenuation can be as low as 0.05 dB/km), making it "very suitable for long-distance, low-latency data transmission."
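These figures follow directly from refractive indices: light in a solid silica core travels at c divided by roughly 1.46, while in an air-filled hollow core it travels at nearly c. A back-of-envelope check (the index values are typical textbook numbers, not vendor specifications):

```python
C_VACUUM = 299_792.458  # speed of light in vacuum, km/s
N_SILICA = 1.46         # typical effective index of a solid silica core
N_HOLLOW = 1.0003       # air-filled hollow core, close to vacuum

lat_solid = N_SILICA / C_VACUUM * 1e6   # one-way latency, microseconds per km
lat_hollow = N_HOLLOW / C_VACUUM * 1e6

print(f"solid core:  {lat_solid:.2f} us/km")   # ~4.87 us/km
print(f"hollow core: {lat_hollow:.2f} us/km")  # ~3.34 us/km
print(f"latency reduction: {1 - lat_hollow / lat_solid:.0%}")  # ~31%
```

Over a 1,000 km DCI link, the roughly 1.5 us/km difference amounts to about 1.5 ms saved each way, which is material for synchronized cross-region training traffic.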
Notably, Yangtze Optical Fibre and Cable, a leading manufacturer in hollow-core fiber, revealed in June investor communications this year that hollow-core fiber has disruptive advantages including ultra-low latency, ultra-low loss, and ultra-low nonlinearity. It currently has conditions for further pilot testing and promotion in areas like computing data centers and high-frequency financial trading, potentially becoming a core technology option for next-generation optical networks.
However, Yangtze Optical Fibre and Cable executives also acknowledge that hollow-core fiber applications are still in early stages. Large-scale commercialization depends on many factors including improved mass production capabilities, cost optimization, mature application scenarios, operational maintenance testing, and technical standard improvements. Stable pricing and profit levels have not yet formed, and the transition from early trial verification to large-scale commercialization still requires further product and industry chain maturation. Currently, hollow-core fiber related business has not yet significantly impacted the company's financial data.