AI expands HBM footprint
By Gary Hilson, EETimes (January 20, 2022)
High bandwidth memory (HBM) is becoming more mainstream. With the latest iteration's specifications approved, vendors in the ecosystem are gearing up to make sure it can be implemented so customers can begin to design, test and deploy systems.
The massive growth and diversity in artificial intelligence (AI) means HBM is becoming less of a niche technology. It's even become less expensive, but it's still a premium memory and requires expertise to implement. As a memory interface for 3D-stacked DRAM, HBM achieves higher bandwidth while using less power in a form factor significantly smaller than DDR4 or GDDR5, stacking as many as eight DRAM dies on an optional base die that can include buffer circuitry and test logic.
Like all memory, HBM advances in performance and power consumption with every iteration. A key change when moving from HBM2 to HBM3 will be a 100% improvement in the data transfer rate, from 3.2/3.6Gbps to a maximum of 6.4Gbps per pin, said Jinhyun Kim, principal engineer with Samsung Electronics' memory product planning team.
A second fundamental change is a 50% increase in the maximum capacity from 16GB (8H) to 24GB (12H). Finally, HBM3 implements on-die error correction code as an industry-wide standard, which improves system reliability, Kim said. “This will be critical for the next generation of artificial intelligence and machine learning systems.”
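For context, those per-pin rates translate into per-stack bandwidth once multiplied by the interface width. The short Python sketch below works through the arithmetic, assuming the 1024-bit bus width defined in the published HBM specifications (an assumption for illustration; the article itself quotes only per-pin rates).

```python
# Back-of-the-envelope sketch: peak per-stack bandwidth from per-pin data rate.
# Assumes the standard 1024-bit HBM interface width (not stated in the article).

HBM_BUS_WIDTH_BITS = 1024  # interface width per HBM stack (assumption)

def stack_bandwidth_gbs(pin_rate_gbps: float, bus_width_bits: int = HBM_BUS_WIDTH_BITS) -> float:
    """Peak bandwidth per stack in GB/s: pin rate (Gb/s) x bus width (bits) / 8."""
    return pin_rate_gbps * bus_width_bits / 8

for gen, pin_rate in [("HBM2", 3.2), ("HBM2E", 3.6), ("HBM3", 6.4)]:
    print(f"{gen}: {pin_rate} Gb/s/pin -> ~{stack_bandwidth_gbs(pin_rate):.0f} GB/s per stack")

# Output:
# HBM2: 3.2 Gb/s/pin -> ~410 GB/s per stack
# HBM2E: 3.6 Gb/s/pin -> ~461 GB/s per stack
# HBM3: 6.4 Gb/s/pin -> ~819 GB/s per stack
```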