On August 18, Lenovo showcased its comprehensive AI infrastructure solutions at the 2025 CCF National High-Performance Computing Academic Conference (CCF HPC China 2025), featuring the Lenovo ThinkSystem heterogeneous intelligent computing platform, Lenovo AI solutions, and Lenovo's integrated scientific computing solutions.
The 2025 CCF National High-Performance Computing Academic Conference continues its hallmark of "academic leadership, industrial integration," serving as a high-performance computing exchange platform that combines technical discussion, achievement showcases, and ecosystem networking.
During the conference, Huang Shan, Director of Strategic Management at Lenovo China Infrastructure Solutions Group, delivered a keynote titled "Lenovo's Integrated Large Model Training and Inference Solutions Accelerating the New Process of Super-Intelligence Convergence." The talk examined development trends across computing power, models, and applications in the current AI wave, and presented Lenovo's forward-looking vision and innovations in AI computing infrastructure for enterprise AI deployment and industry-wide intelligent transformation.
The artificial intelligence industry is developing rapidly, and global demand for computing power keeps rising; computing power has become a foundational resource underpinning the digital economy. Although China's computing power scale is now world-leading, a unified market for standardized, inclusive computing power services has yet to form, leaving a tension between tight compute supply and underutilized compute resources. Efficiency improvement has therefore become the main line of development for the computing power layer, which must integrate ever more closely with the model and application layers to carry the AI wave through its cycles and accelerate enterprise AI deployment.
Huang Shan believes enterprise AI deployment follows a clear evolution path with a distinct hierarchy of demands. On the application side, enterprises move from intelligent agent and model development, through software-level optimization such as model fine-tuning, into the phase of widespread application. Correspondingly, computing power demand passes through three stages: an exploratory stage of trial use; a growth stage that pursues solution integration and long-term upkeep, including model iteration and hardware expansion; and a mature stage of large-scale production, in which model fine-tuning and application deployment must be deeply coordinated to achieve the best return on investment.
This progression of demand drives continuous iteration of enterprise AI solutions that cover every stage of enterprise AI deployment, helping customers overcome technical barriers and accelerate their intelligent transformation.
Huang Shan stated, "The computing infrastructure behind AI applications is steadily converging on heterogeneous architectures. Enterprises need a robust, reliable, and highly efficient computing foundation, along with cluster management software that pools, schedules, manages, and optimizes diverse heterogeneous computing power. Only then can computing capability be fully unleashed, enabling full lifecycle management of computing power needs and accelerating the inclusive availability of AI compute."
As enterprise AI applications scale rapidly, many organizations find themselves running high-performance computing (HPC) clusters alongside intelligent (AI) computing clusters. Managing two separate cluster systems fragments resources, leaving capacity idle and wasted while adding management complexity and overhead. Under the trend of super-intelligence convergence, unified management and scheduling of HPC and AI computing clusters has become a pressing challenge.
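To make the idea of unified scheduling concrete, the sketch below shows one simplified way a single scheduler might place jobs across CPU-centric HPC nodes and GPU-centric AI nodes from one queue, instead of maintaining two silos. The class names, resource model, and placement policy are illustrative assumptions for this article, not part of Lenovo's platform.

```python
from dataclasses import dataclass, field

# Toy unified scheduler over two heterogeneous partitions (HPC and AI nodes).
# Names and policies are assumptions for explanation only.

@dataclass
class Node:
    name: str
    cpus: int
    gpus: int
    free_cpus: int = field(init=False)
    free_gpus: int = field(init=False)

    def __post_init__(self):
        self.free_cpus, self.free_gpus = self.cpus, self.gpus

@dataclass
class Job:
    name: str
    cpus: int
    gpus: int = 0   # 0 GPUs => classic HPC workload

class UnifiedScheduler:
    """Single queue spanning both cluster types instead of two silos."""
    def __init__(self, nodes):
        self.nodes = nodes

    def place(self, job):
        # Among nodes that fit the job, prefer those with the fewest free
        # GPUs, so CPU-only jobs do not occupy GPU nodes needlessly.
        candidates = [n for n in self.nodes
                      if n.free_cpus >= job.cpus and n.free_gpus >= job.gpus]
        if not candidates:
            return None
        best = min(candidates, key=lambda n: (n.free_gpus, n.free_cpus))
        best.free_cpus -= job.cpus
        best.free_gpus -= job.gpus
        return best.name

pool = [Node("hpc-01", cpus=128, gpus=0), Node("ai-01", cpus=64, gpus=8)]
sched = UnifiedScheduler(pool)
print(sched.place(Job("cfd-solver", cpus=96)))            # -> hpc-01
print(sched.place(Job("llm-finetune", cpus=16, gpus=4)))  # -> ai-01
```

The point of the sketch is simply that a shared view of both partitions lets one placement policy keep GPU nodes free for AI jobs while still absorbing HPC work, which is the resource-fragmentation problem described above.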
As the core of Lenovo's "one horizontal, five vertical" AI infrastructure strategy, the Lenovo ThinkSystem heterogeneous intelligent computing platform provides unified management and scheduling of heterogeneous computing clusters, allowing customers to easily obtain integrated, stable general-purpose, AI, and scientific computing power. The latest version, 3.0, adds four breakthrough technologies: an AI inference acceleration algorithm suite, an AI compilation optimizer, a slow-node failure prediction and self-healing system for AI training and inference, and expert parallel communication algorithms, directly addressing key pain points in large model deployment and continuing to push the limits of computing power efficiency.
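Of these four, slow-node prediction is the easiest to illustrate: in synchronous large-model training, a single straggling node throttles every step, so platforms typically watch per-node step times and flag outliers for draining and replacement before they stall or fail the whole job. The following Python sketch shows a generic straggler-detection heuristic on hypothetical telemetry; the threshold, data, and function name are assumptions for illustration, not the algorithm shipped in the Lenovo platform.

```python
import statistics

# Illustrative sketch: flag training "slow nodes" (stragglers) whose step
# times drift well above the cluster median, so they can be drained and
# replaced before they stall the synchronous job. Thresholds and telemetry
# below are hypothetical, not Lenovo's actual algorithm.

def find_slow_nodes(step_times, threshold=1.3):
    """step_times: {node_name: [recent step durations in seconds]}"""
    medians = {node: statistics.median(ts) for node, ts in step_times.items()}
    cluster_median = statistics.median(medians.values())
    return [node for node, m in medians.items()
            if m > threshold * cluster_median]

# Hypothetical telemetry: node-03 is drifting slower than its peers.
telemetry = {
    "node-01": [1.02, 1.01, 1.03],
    "node-02": [0.99, 1.00, 1.02],
    "node-03": [1.45, 1.52, 1.60],
    "node-04": [1.01, 1.00, 0.98],
}
print(find_slow_nodes(telemetry))  # -> ['node-03']
```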