According to Korean media reports, NVIDIA has discontinued development of first-generation SOCAMM memory and shifted its R&D focus to SOCAMM2. SOCAMM was originally positioned as modular LPDDR memory for AI servers, intended to deliver high bandwidth and low power consumption advantages similar to HBM. Technical obstacles during development, however, have forced a reset of the plan.
NVIDIA had previously listed SOCAMM1 support in its GB300 NVL72 specifications, with up to 18 TB of LPDDR5X capacity and 14.3 TB/s of bandwidth, positioning it as an alternative memory solution for data-center computing.
Reported SOCAMM2 upgrades include raising data rates from 8,533 MT/s to 9,600 MT/s and potentially adding LPDDR6 support.
On the supplier side, Micron Technology announced mass production of SOCAMM1 in March this year, becoming the first memory maker to bring the product into AI servers. Samsung and SK Hynix, by contrast, have disclosed only in earnings calls that they plan to begin mass production in the third quarter of 2025. If SOCAMM1 is indeed discontinued, the reset would let these competitors narrow the gap with Micron Technology.
The market had previously estimated SOCAMM shipments of 600,000 to 800,000 units in 2025, underscoring NVIDIA's original push for rapid adoption. The direct jump from SOCAMM1 to SOCAMM2 may indicate that NVIDIA is accelerating upgrades to maintain its lead in the AI race.