At the 2025 Baidu Cloud Intelligence Conference held today, Shen Dou, Executive Vice President of Baidu Group and President of Baidu Intelligent Cloud Business Group, announced a comprehensive upgrade to Baidu Intelligent Cloud's AI computing infrastructure. The new Baizhou AI Computing Platform 5.0 delivers significant capability improvements across four key areas: networking, computing power, inference systems, and integrated training-inference systems, aimed at breaking through AI computing efficiency bottlenecks.
In networking, the platform delivers faster communication and lower latency, improving model training and inference efficiency. In computing power, Kunlun chip super nodes are now online, officially bringing super-scale computing to users. The inference system adopts three core strategies, "decoupling," "adaptive," and "intelligent scheduling," to boost throughput while reducing latency. For integrated training and inference, Baidu has released the Baizhou reinforcement learning framework to maximize computing resource utilization and improve training and inference efficiency.
The computing power upgrade is particularly noteworthy: following the debut of Kunlun chip super nodes at the Create 2025 Baidu AI Developer Conference in April, the upgraded Baizhou AI Computing Platform 5.0 has now officially launched on Baidu Intelligent Cloud's public cloud.
Currently, the industry's largest open-source models have reached 1 trillion parameters. With Kunlun chip super nodes, users can deploy and run these models on a single cloud instance in just a few minutes.