As data growth driven by artificial intelligence (AI) continues to increase exponentially, so does the uptake of high-bandwidth memory (HBM).
www.eetimes.com, Aug. 26, 2025 –
But it’s still a premium memory, and not the most straightforward technology to implement. And with Nvidia setting a rapid pace for GPU development, standards are struggling to keep up, which means customization is essential if HBM is going to continue riding the coattails of GPU and accelerator adoption.
According to a recently published report by Dell’Oro Group, ongoing AI expansion drove the server and storage component market to 62% year-over-year growth in 1Q 2025, which included surging demand for HBM, as well as accelerators and NICs.
In an interview with EE Times, Baron Fung, senior research director at Dell’Oro, said Nvidia’s Blackwell GPU, combined with the rollout of custom accelerators by major cloud service providers, is driving the AI accelerator market.
Dell’Oro’s Data Center IT Semiconductors and Components Quarterly Report noted that SK Hynix led the HBM market with a 64% revenue share, followed by Samsung and Micron Technology.
Fung said HBM uptake began to take off in 2022, driven by the growth of training-focused GPUs. “In the meantime, we’re also seeing this increasing adoption rate for AI servers or even accelerators.”
Revenues for AI servers have gone from 20% of the market to about 60% in the last several years, Fung said, with HBM and GPU capacity and performance also growing by leaps and bounds. This growth has put pressure on HBM supply, with vendors getting booked at least a year in advance, he said.
Samsung has struggled to meet the requirements for supplying HBM chips, which has allowed SK Hynix and Micron to gain ground. Micron’s next-generation HBM4, featuring a 2048-bit interface, is set to go into production in 2026.
Micron’s HBM4E is to follow in subsequent years, introducing an option to customize its base die. The company reported a near-50% sequential increase in HBM revenue in Q3 FY’25, pushing its annualized run rate to $6 billion.
Fung said the uncertainties fueled by tariffs are adding a layer of complexity to supply chains, which will impact HBM pricing.
There are more readily available alternatives to HBM for high-performance computing needs. “Some of the lower-tier GPUs are using GDDR, but you won’t have that fast interconnect speed that HBM and the tight integration of the GPU offer,” Fung said. Other options include low-latency DRAM or flash SSDs for training model storage, but for top-tier performance and latency, HBM is essential.
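For a rough sense of the gap Fung describes, peak bandwidth can be approximated as interface width times per-pin data rate. The sketch below uses commonly published per-pin rates for GDDR6 (16 Gb/s) and HBM3E (9.6 Gb/s); these figures are illustrative assumptions, not numbers from the article:

```python
# Back-of-envelope peak-bandwidth comparison (illustrative figures, not from the article).
# Peak bandwidth ~= interface width (bits) * per-pin data rate (Gb/s) / 8 -> GB/s.

def peak_bandwidth_gb_s(width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s for one memory interface."""
    return width_bits * pin_rate_gbps / 8

gddr6_device = peak_bandwidth_gb_s(32, 16.0)   # one GDDR6 chip: ~64 GB/s
hbm3e_stack = peak_bandwidth_gb_s(1024, 9.6)   # one HBM3E stack: ~1.2 TB/s

print(f"GDDR6 device: {gddr6_device:.0f} GB/s")
print(f"HBM3E stack:  {hbm3e_stack:.0f} GB/s")
```

Under these assumptions, a single HBM stack delivers roughly 20 times the bandwidth of one GDDR device, which is the fast interconnect speed and tight GPU integration Fung refers to.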
Fung said that GPU makers are creating the roadmap for HBM vendors, and with constrained supply, there will be a strong market for all three major players, with SK Hynix in the lead, followed by Samsung and Micron.
The challenge for HBM makers is that GPU vendors have ramped up the cadence to release new technology once a year, which is much faster than the typical update cycle of memory standards.
Unlike previous memory technology transitions, which took four to five years, HBM generations now evolve every two to two-and-a-half years, accelerating the pace of innovation, Jin Yokoyama, senior director and memory product marketing manager at Advantest, told EE Times.
HBM in data centers and accelerators is experiencing rapid growth, with wafer production rising sharply and outpacing traditional DRAM advancements such as DDR5, Yokoyama said. This rapid evolution presents significant challenges for test equipment manufacturers like Advantest, which must keep pace with faster product cycles and increasingly complex design requirements.
Testing requirements also vary by manufacturer, Yokoyama said. Key challenges include increased data bandwidth and device capacity, which necessitate the development of higher-speed testing solutions and enhanced thermal management for heat dissipation.
Further compounding the complexity of HBM testing is the increasing adoption of custom implementations for advanced AI and SoC use cases, Yokoyama said.
Traditionally, HBM standards were defined by JEDEC, with vendors manufacturing both the memory core and base logic wafers, he said. With HBM4, however, SoC vendors and hyperscalers increasingly require customized HBM functionality, optimizing features to match their specific AI ASICs or custom SoCs for maximum performance.
Yokoyama said this trend is leading to more logic and controller functionality being integrated directly into the HBM base logic die. The manufacturing of base logic dies is shifting to foundries like TSMC, which use advanced processes such as 3nm or 5nm. These advanced processes, in turn, require more advanced and flexible testing processes.
Khurram Malik, senior director of product marketing at Marvell Technology, said the exponential growth of data due to robots, sensors, and other Internet of Things (IoT) edge devices has disrupted the traditional linear development of HBM, with HBM4E already coming to market within three years of the HBM3 specification being published by JEDEC.
Marvell collaborates with all the major HBM vendors, including Micron, Samsung, and SK Hynix, on its custom HBM compute architecture. Malik said the architecture, announced in late 2024, enables the design of HBM systems explicitly tailored for AI accelerators (XPUs) by integrating advanced 2.5D packaging technology and custom interfaces.
HBM memory bandwidth and IO counts are doubling with each new generation, Malik said, with packages growing denser and more complex. Reaching consensus in the industry is challenging, as advances like HBM4 and HBM5 increase IO counts from 2,000 to 4,000, necessitating innovative packaging to accommodate these high-bandwidth connections within limited space.
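To make the doubling concrete, here is a minimal sketch of how per-stack peak bandwidth scales with IO count at a fixed per-pin rate. The 8 Gb/s rate is an assumption in line with published HBM4 targets, and the 4,096-IO figure for a future generation follows Malik's rough numbers rather than any published specification:

```python
# Illustrative scaling of per-stack bandwidth with IO count (assumed figures).
PIN_RATE_GBPS = 8.0  # assumed per-pin data rate, in line with published HBM4 targets

for name, io_count in [("HBM4-class (2,048 IOs)", 2048),
                       ("next-gen (4,096 IOs)", 4096)]:
    tb_per_s = io_count * PIN_RATE_GBPS / 8 / 1000  # Gb/s -> GB/s -> TB/s
    print(f"{name}: ~{tb_per_s:.1f} TB/s per stack")

# Prints roughly 2.0 and 4.1 TB/s: each doubling of IOs adds ~2,000 more signals
# that must be routed through the package, hence the packaging pressure.
```

The arithmetic shows why packaging, not the memory cells themselves, becomes the bottleneck: every doubling of bandwidth at a fixed per-pin rate doubles the number of connections that must fit within the same interposer footprint.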
Malik said Nvidia has not only shortened its product development cycle, allowing it to release a new GPU every year, but also expects memory bandwidth and capacity to double with each generation. JEDEC standards, however, take time. That means Nvidia is opting for a custom solution, he said.