SK Hynix ramps up HBM4 assembly to prepare for next-gen graphics cards

Next-Generation Datacenter GPUs to Leverage Advanced HBM4 Memory

Leading GPU manufacturers such as AMD and Nvidia have confirmed that their upcoming datacenter graphics cards will incorporate the latest High Bandwidth Memory 4 (HBM4) technology. South Korean semiconductor leader SK Hynix is poised to dominate the supply chain for these cutting-edge memory chips, following its recent announcement of successful HBM4 development and imminent mass production.

SK Hynix Accelerates HBM4 Production, Boosting Market Confidence

On Friday, SK Hynix revealed it had finalized the development of HBM4 and is gearing up for large-scale manufacturing. This milestone triggered a 7% surge in the company’s stock value, reflecting investor optimism about the growing demand for high-performance memory solutions in AI and datacenter applications.

HBM4: The Backbone of Future AI and HPC Accelerators

High Bandwidth Memory has become indispensable for advanced AI accelerators and high-performance computing (HPC) GPUs from industry giants like Nvidia and AMD. The upcoming Nvidia Rubin and AMD Instinct MI400 series, both slated for release in 2026, rely heavily on the availability of HBM4 to meet their performance targets.

Overcoming the Limitations of Current HBM Technologies

Existing HBM3e modules are capped at 36 GB of capacity and roughly 1 TB/s of bandwidth per stack, which limits total memory bandwidth to around 8 TB/s for GPUs such as Nvidia's B300 and AMD's MI355X. This bottleneck has driven the push toward HBM4, which promises substantial improvements in both capacity and speed.
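The per-stack figures above can be sanity-checked with simple arithmetic. A minimal sketch, assuming the typical eight-stack configuration of current flagship accelerators (the stack count is an inference from the quoted totals, not an official spec sheet):

```python
# Back-of-the-envelope check: aggregate memory figures from per-stack specs.
# Assumption: 8 HBM3e stacks per GPU, 36 GB and ~1 TB/s per 12-high stack.

STACKS = 8
CAPACITY_PER_STACK_GB = 36   # 12-high HBM3e stack
BW_PER_STACK_TBS = 1.0       # ~1 TB/s per stack

total_capacity_gb = STACKS * CAPACITY_PER_STACK_GB
total_bw_tbs = STACKS * BW_PER_STACK_TBS

print(f"Total capacity: {total_capacity_gb} GB")        # 288 GB
print(f"Aggregate bandwidth: ~{total_bw_tbs:.0f} TB/s") # ~8 TB/s
```

Eight stacks of ~1 TB/s each lands exactly on the ~8 TB/s ceiling cited for the B300 and MI355X.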

Significant Performance Gains with HBM4 Integration

At Nvidia’s GTC event in March, the company announced that its Rubin GPUs will feature 288 GB of HBM4 memory, delivering an aggregate bandwidth of 13 TB/s. AMD, meanwhile, revealed at its Advancing AI conference in June that its MI400 GPUs will push the envelope further, offering up to 432 GB of HBM4 with bandwidth nearing 20 TB/s. These advancements are critical for powering next-generation AI workloads and large-scale data processing.
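Working backward from the quoted totals gives a feel for the per-stack targets. A rough sketch, assuming 36 GB per HBM4 stack; the stack counts are inferred from the capacities (288 / 36 = 8, 432 / 36 = 12), not confirmed configurations:

```python
# Infer stack count and per-stack bandwidth from the announced aggregates.
# Assumption: 36 GB per HBM4 stack; figures are the vendors' quoted totals.

def per_stack(total_gb: int, total_tbs: float, stack_gb: int = 36):
    stacks = total_gb // stack_gb
    return stacks, total_tbs / stacks

rubin_stacks, rubin_bw = per_stack(288, 13.0)   # 8 stacks, ~1.6 TB/s each
mi400_stacks, mi400_bw = per_stack(432, 20.0)   # 12 stacks, ~1.67 TB/s each

print(f"Rubin: {rubin_stacks} stacks, {rubin_bw:.2f} TB/s per stack")
print(f"MI400: {mi400_stacks} stacks, {mi400_bw:.2f} TB/s per stack")
```

Both designs imply roughly 1.6 TB/s per stack, a ~60% jump over the ~1 TB/s of current HBM3e stacks.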

Technical Innovations Driving HBM4 Efficiency and Speed

SK Hynix has doubled the number of input/output terminals from 1,024 in HBM3e to 2,048 in HBM4, effectively doubling the bandwidth. This architectural enhancement also contributes to a more than 40% improvement in energy efficiency, a crucial factor given that HBM typically consumes more power than the conventional DRAM used in servers. For context, the power draw of AMD's Instinct accelerators has escalated from around 250 W in early generations to nearly 1,000 W for the MI325X, underscoring the need for more efficient memory solutions.

Moreover, SK Hynix’s HBM4 modules exceed JEDEC’s official specification by achieving operational speeds of 10 Gb/s per pin, setting a new benchmark for memory performance in the industry.
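The relationship between I/O width, pin speed, and stack bandwidth is straightforward to model. A minimal sketch, assuming an 8 Gb/s baseline pin rate for comparison purposes (the 1,024/2,048 pin counts and 10 Gb/s figure are from the text above):

```python
# Per-stack bandwidth = I/O pins x per-pin data rate, converted bits -> bytes.
# Assumption: the 8 Gb/s rows are an illustrative baseline, not vendor specs.

def stack_bandwidth_tbs(io_pins: int, pin_speed_gbps: float) -> float:
    # pins * Gb/s gives total Gb/s; /8 -> GB/s; /1000 -> TB/s (decimal units)
    return io_pins * pin_speed_gbps / 8 / 1000

hbm3e_base = stack_bandwidth_tbs(1024, 8.0)   # ~1.02 TB/s
hbm4_base = stack_bandwidth_tbs(2048, 8.0)    # ~2.05 TB/s: double the pins,
                                              # same rate -> double the bandwidth
hbm4_hynix = stack_bandwidth_tbs(2048, 10.0)  # ~2.56 TB/s at 10 Gb/s per pin

print(f"HBM3e-width @ 8 Gb/s: {hbm3e_base:.2f} TB/s")
print(f"HBM4-width  @ 8 Gb/s: {hbm4_base:.2f} TB/s")
print(f"HBM4-width @ 10 Gb/s: {hbm4_hynix:.2f} TB/s")
```

Doubling the interface from 1,024 to 2,048 pins doubles bandwidth at any given pin speed; the 10 Gb/s rate adds a further 25% on top.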

Competitive Landscape: Samsung and Micron’s HBM4 Developments

While SK Hynix currently leads the market, Samsung and Micron are also advancing their HBM4 offerings. Micron began sampling 36 GB, 12-high HBM4 stacks in June, utilizing a 2,048-bit interface similar to SK Hynix’s design to double bandwidth compared to HBM3e. The American company plans to scale production in 2026.

Samsung, on the other hand, views HBM4 as an opportunity to reclaim market share, particularly with Nvidia. However, Samsung has faced challenges in validating its HBM3e stacks for Nvidia’s Blackwell GPU accelerators, which has delayed its penetration into this lucrative segment.

Industry Implications and Geopolitical Factors

The memory market is also influenced by geopolitical dynamics, with the U.S. government restricting Samsung and SK Hynix from manufacturing memory chips in China. This policy could reshape supply chains and impact global memory availability for GPU manufacturers.

Looking Ahead: The Future of High-Performance Memory in AI and HPC

As AI workloads grow increasingly complex, the demand for faster, larger, and more energy-efficient memory solutions intensifies. HBM4’s advancements in capacity, bandwidth, and power efficiency position it as a cornerstone technology for next-generation GPUs and AI accelerators. With SK Hynix leading production and competitors like Samsung and Micron ramping up efforts, the memory landscape is set for significant evolution in the coming years.
