LPDDR SDRAM, initially developed for low-power mobile devices such as smartphones, tablets, and personal computers, is now gaining traction as a preferred memory solution for artificial intelligence (AI) applications. As large language models (LLMs) grow in size and are deployed across diverse environments, the demand for sophisticated and efficient AI hardware to support them continues to increase. LPDDR5X is already being deployed in data centers for AI training and inference, chosen for its balance of power efficiency, performance, capacity, and cost.
The evolution of LPDDR memory has reached a new milestone with JEDEC's recent publication of the LPDDR6 standard. LPDDR6 raises the per-pin data rate to 14.4Gbps and widens the data bus to 48 bits (compared with 32 bits for LPDDR5X), for a total bandwidth of 691Gbps, twice that of the previous generation. The 48-bit interface is organized as two 24-bit channels, each comprising two 12-bit sub-channels, enabling channel usage to be optimized for different data types.
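The bandwidth figures above follow directly from the per-pin rate and bus width. A quick back-of-the-envelope check (a sketch; the `total_bandwidth_gbps` helper is illustrative, not part of any standard or tool):

```python
# Aggregate bandwidth = per-pin data rate x number of data pins.
def total_bandwidth_gbps(per_pin_gbps: float, bus_width_bits: int) -> float:
    """Total raw bandwidth across all data pins, in Gb/s."""
    return per_pin_gbps * bus_width_bits

# LPDDR6: 14.4Gbps per pin, 48-bit bus (two 24-bit channels,
# each made of two 12-bit sub-channels).
lpddr6 = total_bandwidth_gbps(14.4, 48)

# LPDDR5X (previous generation): 10.7Gbps per pin, 32-bit bus.
lpddr5x = total_bandwidth_gbps(10.7, 32)

print(f"LPDDR6:  {lpddr6:.1f} Gb/s ({lpddr6 / 8:.1f} GB/s)")
print(f"LPDDR5X: {lpddr5x:.1f} Gb/s")
print(f"Ratio:   {lpddr6 / lpddr5x:.2f}x")
```

This reproduces the roughly 691Gbps total and the approximately 2x generational uplift quoted above.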
Security and reliability remain paramount in data center environments. The new standard incorporates several features tailored to these requirements, including per-row activation counting (PRAC) for data integrity, programmable link protection, on-die ECC, error scrubbing, command/address (CA) parity, and memory built-in self-test (MBIST) for improved error detection and system dependability.
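The principle behind on-die ECC can be illustrated with a minimal Hamming(7,4) single-error-correcting code. This is purely conceptual: the actual ECC scheme inside LPDDR6 devices is defined by the standard and vendors, operates on much wider words, and is not the toy code shown here.

```python
# Illustrative Hamming(7,4) SEC code: 4 data bits protected by 3 parity bits.
# NOT the LPDDR6 on-die ECC scheme; a minimal sketch of the single-bit-
# correction idea that on-die ECC and error scrubbing rely on.

def encode(d):
    """Encode 4 data bits into a 7-bit codeword (positions: p1 p2 d1 p3 d2 d3 d4)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):
    """Correct any single flipped bit, then return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # parity check over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # parity check over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # parity check over positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 = clean; otherwise 1-based error position
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1         # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]  # recover d1..d4

word = [1, 0, 1, 1]
cw = encode(word)
cw[4] ^= 1                           # inject a single-bit error
assert decode(cw) == word            # the error is detected and corrected
```

A scrubbing engine applies the same idea continuously: it reads memory in the background, runs the decode/correct step, and writes corrected data back before single-bit upsets can accumulate into uncorrectable multi-bit errors.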
LPDDR6 also continues to serve as a leading memory solution for mobile devices, offering enhancements such as DVFSL, partial self- and active-refresh support, a dynamic efficiency mode for low-bandwidth operation, and a lower operating voltage for increased power savings.
Coinciding with the release of the JEDEC specification, Cadence announced the availability of its LPDDR6/LPDDR5X 14.4Gbps dual-mode PHY and memory controller. Fabricated using advanced process nodes and taped out earlier this year, this solution delivers peak performance at 14.4Gbps in LPDDR6 mode while operating at core voltage. To maximize design flexibility, the PHY also supports LPDDR5X at speeds of up to 10.7Gbps. Both the PHY and controller are now available, with numerous customers actively engaged in chiplet and monolithic designs.
Memory capacity is a critical consideration for AI inference, as increasing model complexity drives higher memory requirements. Cadence collaborates closely with clients to develop floor plans and topologies that are tailored to meet specific performance and capacity goals. The company's system design experts offer reference designs for various PHY/controller configurations and packaging options, such as CAMM2 for LPDDR5X solutions.
As AI infrastructure advances, system designers require flexible memory options to optimize performance, capacity, and cost. The Cadence LPDDR6/5X solution provides a versatile new memory IP option, complementing the company's comprehensive portfolio of memory products for AI applications, including the 10.7Gbps LPDDR5X, 12.8Gbps HBM4, and 12.8Gbps DDR5 MRDIMM, along with Cadence's chiplet framework for heterogeneous chiplet integration.
To learn more about these products, please visit A Family of High-Speed On-Chip Memory Interface IP at cadence.com.