Demand for DRAM and NAND from artificial intelligence is expanding faster than the industry can supply. On current trends, AI-driven demand is projected to exceed 50% of the memory industry's total addressable market (TAM) this year, and analysts see little chance of the supply bottleneck easing substantially before 2028.

Micron Technology CEO Sanjay Mehrotra recently said in an interview that the AI industry is still in its "first innings." As inference workloads scale, he noted, demand for tokens will keep rising, and the speed of token generation depends on faster, higher-capacity memory; memory, he emphasized, has become a "strategic asset" for customers. The core challenge today is not demand or pricing but the extreme tightness of supply itself and the inability to ramp production quickly. Micron's recently announced results for the second quarter of fiscal 2026 support this view: the company posted record revenue, gross margin, earnings per share, and free cash flow, and expects the third quarter to set new records again.

The supply gap is not expected to reach a turning point before 2028. Micron noted that demand for both traditional servers and AI servers remains robust but is constrained by tight supplies of both DRAM and NAND. Given the construction lead times required for new wafer fabs, analysts expect the shortage to persist; more conservative quantitative forecasts suggest that, as 12-high HBM4 for next-generation AI GPUs enters volume production, the industry may meet only about 60% of DRAM demand by 2027. This supply-demand dynamic should entrench memory manufacturers' pricing power, although manufacturers whose mass-production ramps fall short of expectations could see their market positions come under pressure.

Momentum on the demand side continues to accelerate.
Agentic AI workloads are pushing CPU memory-support requirements toward 400GB, roughly four times current levels. Next-generation AI GPUs will adopt HBM4, delivering significant bandwidth gains alongside substantially higher capacity ceilings. Meanwhile, LPDDR is becoming the preferred choice for large-scale AI deployments thanks to its power-efficiency advantages.

In response to the tight supply environment, Micron is advancing its next-generation portfolio across multiple product lines. The company is currently supplying 36GB 12-high HBM4 DRAM for customer platforms, with its existing HBM3 processes expected to reach mature yields, and next-generation HBM4E memory is slated to begin its production ramp next year. In the LPDDR segment, Micron recently introduced a 256GB solution based on LPDDR5X, supporting capacities of up to 2TB. In DDR5, the company is supplying components for systems with capacities of up to 12TB.

The effects of supply tightness are also being felt in the consumer segment. Micron anticipates low-double-digit percentage declines in PC and smartphone shipments, driven primarily by supply constraints and rising prices. A notable trend signal: 32GB is becoming the standard memory configuration for PCs running Agentic AI workflows locally.