Key Takeaways
- The current DRAM shortage stems from constrained supply, the result of conservative capital expenditure during the prior COVID-era down cycle, colliding with accelerating demand fueled by AI, particularly wafer-intensive HBM.
- HBM (High Bandwidth Memory) is a specialized, complex, stacked form of DRAM crucial to AI accelerators. It consumes significantly more wafer capacity per bit than commodity DRAM, cannibalizing supply for traditional uses such as PCs and mobile devices.
- Although soaring spot prices have temporarily pushed commodity DRAM margins above HBM margins, memory makers are prioritizing investment in HBM as the long-term growth driver, so shortages are likely to persist across the board, potentially until 2027 or beyond.
Segments
AI Crowding Out Commodities
(00:02:19)
- Key Takeaway: AI is creating a crowding-out effect across commodities, driving up energy and memory chip prices and forcing consumer-facing companies to raise prices or curtail supply.
- Summary: The hosts express concern that resources across industrial commodities are being diverted to feed the AI beast. This is already manifesting in rising energy prices and, more recently, memory chip price surges. Companies like Apple and Nintendo are facing consequences, potentially having to raise prices or cut supply due to exploding DRAM costs.
DRAM Price Surge Timing
(00:04:48)
- Key Takeaway: Although the AI narrative has been prevalent for years, spot DRAM prices only began to surge in late 2023, pointing to a sudden, sharp supply-demand imbalance.
- Summary: While AI-related chip stocks surged earlier, spot DRAM prices remained relatively stable until late last year before escalating sharply. This indicates that the immediate, massive demand shock for memory only recently hit the spot market. The imbalance is severe enough that many companies are already losing out while others profit.
Guest Introduction and Core Imbalance
(00:05:38)
- Key Takeaway: Ray Wang identifies the core imbalance as constrained incremental wafer capacity for 2024/2025 due to conservative capital expenditure during the prior COVID-era down cycle, clashing with rapidly accelerating AI demand.
- Summary: Guest Ray Wang explains that memory producers avoided overinvesting in capacity expansion during the post-COVID downturn, leading to limited incremental wafer capacity entering 2024 and 2025. Simultaneously, demand is accelerating rapidly, primarily driven by AI workloads. This mismatch between constrained supply and accelerating demand is the fundamental driver of the current situation.
Commodity DRAM Characteristics
(00:09:34)
- Key Takeaway: DRAM has historically behaved like a commodity due to continuously falling cost-per-bit and standardized product specifications set by an industry committee (JEDEC), making price the primary competitive factor.
- Summary: DRAM historically exhibits commodity characteristics because the cost per bit consistently declines annually, making cost competitiveness crucial. Furthermore, standardized industry specifications make significant product differentiation difficult between suppliers. This dynamic forces competition primarily on market price.
HBM vs. Commodity DRAM
(00:10:53)
- Key Takeaway: HBM (High Bandwidth Memory) is a specialized DRAM built by stacking multiple dies to deliver the bandwidth AI accelerators require, making its manufacturing far more complex and less commoditized than standard DRAM.
- Summary: HBM emerged because scaling AI models demands memory bandwidth that outpaces what traditional DRAM can deliver. HBM stacks 8, 12, or more DRAM dies, requiring complex front-end processing and back-end packaging technology. This complexity lets HBM suppliers differentiate on technology, earning better margins than commodity DRAM.
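The wafer-intensity point above can be made concrete with a rough calculation. All the numbers below (die capacity, dies per wafer, yields) are illustrative assumptions, not figures from the episode; the structure of the comparison is what matters.

```python
def bits_per_wafer(die_bits_gb, dies_per_wafer, yield_rate):
    """Good bits (Gb) produced per wafer."""
    return die_bits_gb * dies_per_wafer * yield_rate

# Commodity DRAM die (illustrative): 16 Gb, 1600 dies/wafer, 90% yield
commodity = bits_per_wafer(16, 1600, 0.90)

# HBM core die (illustrative): same 16 Gb capacity, but TSV/test-logic
# area overhead leaves fewer dies per wafer, and compound die-plus-stack
# yield is lower because one bad die can scrap a whole stack
hbm = bits_per_wafer(16, 1200, 0.60)

print(round(commodity / hbm, 2))  # → 2.0 (HBM needs ~2x wafer area per bit here)
```

Under these assumed inputs, every bit shifted to HBM removes roughly twice that bit's worth of wafer capacity from the commodity pool, which is the cannibalization mechanism the episode describes.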
AI Memory Consumption Breakdown
(00:16:47)
- Key Takeaway: AI memory demand is voracious across training and inference, with HBM being critical, but server CPU DRAM (LPDDR/DDR) and memory for agentic AI workloads also contribute significantly.
- Summary: Memory demand from AI is pervasive, requiring HBM for training and inference acceleration. Furthermore, server CPU DRAM is needed for both training workloads and inference processing, especially the memory-intensive decode phase. Agentic AI specifically requires substantial CPU-based servers, which contain large amounts of standard DRAM.
Demand Destruction Evidence
(00:21:06)
- Key Takeaway: Demand destruction is already visible in the market through price hikes for PCs and significant downward revisions in smartphone outlooks, particularly in the Chinese market.
- Summary: Consumer electronics companies are already implementing price hikes for products like PCs from Dell and Lenovo due to memory costs. Smartphone outlooks, especially in China, are being cut by 10-15% by component suppliers like MediaTek. Meaningful market impact from these price increases is expected to show up in the second half of 2024.
Supply-Side Fixes and Constraints
(00:24:13)
- Key Takeaway: The primary short-term fix for the supply crunch relies on memory makers rapidly migrating production to more advanced process nodes (like 1B and 1C) to increase bit density per wafer, despite cleanroom capacity constraints.
- Summary: Demand-side fixes like downgrading products are difficult as they hurt competitiveness, so the focus is on supply. The main constraint this year is limited cleanroom space, which restricts new wafer capacity coming online. Manufacturers must accelerate process node migration (e.g., to 1C) to increase bit output per existing wafer, though this is challenging and competes with HBM allocation.
Producer Investment Impulse
(00:28:03)
- Key Takeaway: Despite short-term commodity DRAM margins being temporarily higher, the major memory makers (Micron, Hynix, Samsung) are strategically increasing CapEx to expand capacity and invest heavily in HBM technology for long-term growth.
- Summary: Memory management recognizes that sustainable demand requires capacity expansion, though new fabs take years to come online (e.g., 2028). CapEx for DRAM is significantly increasing across the top three players to fund both new fabs and advanced process node migration. HBM remains a high priority because it represents a new, differentiated growth driver essential for future market share.
Chinese Producer Competition
(00:31:38)
- Key Takeaway: A technology gap of approximately three to four years exists between leading Korean producers and Chinese memory makers like CXMT, whose revenue is overwhelmingly derived from the domestic Chinese market.
- Summary: Chinese memory suppliers like CXMT primarily compete within China, which accounts for about 25% of global DRAM demand. While Korean giants dominate high-end and HBM production, CXMT is gaining traction in low-to-medium-end commodity DRAM. Chinese government policy is also pushing domestic HBM development to support local AI hardware.
Allocation Priority for Suppliers
(00:33:01)
- Key Takeaway: Memory suppliers prioritize allocating scarce supply to the server DRAM and HBM sectors, as these segments together constitute over half of the total DRAM market and are experiencing the fastest growth.
- Summary: The highest-tier customers receive priority allocation, but the allocation strategy is heavily sector-based. Server DRAM and HBM are the top focus areas for the next few years because they represent the fastest-growing and most profitable segments. Mobile demand growth, driven only by increased content per device, is relatively flat compared to AI server needs.
Structural Shift vs. Super Cycle
(00:38:35)
- Key Takeaway: This AI-driven cycle is structurally different from past memory super cycles because HBM demand growth simultaneously constrains the wafer capacity available for commodity DRAM, potentially producing a rare four-year cycle lasting into late 2027.
- Summary: The current situation deviates from historical cycles, which typically peaked within 15-18 months. The key difference is that the new demand driver, HBM, directly competes for wafer capacity needed for commodity DRAM. This dynamic suggests the current period of high demand and tight supply could extend significantly, potentially through the second half of 2027.
Hyperscaler Purchasing Power
(00:44:08)
- Key Takeaway: While increased DRAM prices impact the cost basis for hyperscalers’ massive CapEx, they struggle to secure better long-term pricing because memory suppliers prefer the flexibility of selling at volatile spot rates for maximum profit.
- Summary: The increased cost of memory does affect the overall CapEx budget for hyperscalers, forcing them to potentially reduce the volume of memory purchased for a fixed budget. Hyperscalers attempt to secure long-term agreements for better pricing, but memory suppliers are reluctant to commit, preferring to capitalize on the current high-margin spot market.