
DEEPX, Rambus, and Samsung Foundry Collaborate to Enable Efficient Edge Inferencing Applications

- Rambus, Inc.
February 11, 2026

As artificial intelligence (AI) continues to proliferate across industries – from smart cities and autonomous vehicles to industrial automation, robotics, edge servers, and consumer electronics – edge inferencing has become a cornerstone of next-generation computing. Delivering real-time, low-power AI processing at the edge requires close coordination across AI compute architectures, memory subsystems, and silicon platforms. To meet these demands, DEEPX is collaborating with Rambus and Samsung Foundry to deliver a highly optimized solution that combines efficient AI compute, high-bandwidth memory interfaces, and advanced logic process technology.

A Proven Foundation Scaling Forward

As the foundation of this collaboration, DEEPX worked with Rambus and Samsung Foundry on the DX-M1 AI processor, fabricated using Samsung Foundry’s 5nm technology and integrating silicon-proven LPDDR5 controller IP from Rambus. DX-M1 has been deployed across a range of edge applications, including robotics, edge servers, AI-enabled IT services, smart cameras, and factory automation. Looking to the next generation of edge AI, DEEPX is developing the DX-M2 processor for ultra-low-power generative AI inference on edge devices using Samsung Foundry’s 2nm process technology. Samsung Foundry’s GAA-based 2nm platform is designed to deliver further improvements in power efficiency and performance scaling as edge AI workloads grow in complexity.

Through the Samsung Advanced Foundry Ecosystem (SAFE™) IP Alliance, Rambus works closely with Samsung Foundry to optimize its memory controller IP for advanced Samsung process technologies, enabling DEEPX to integrate proven IP more efficiently, lower design risk, and accelerate time to production for next-generation designs.

A Unified Solution for Edge AI

The collaboration between DEEPX, Rambus, and Samsung Foundry brings together three core pillars of edge inferencing:

  • AI Inference Technology: DEEPX contributes its ultra-efficient AI inference processors, designed to deliver high performance with minimal power consumption, ideal for endpoint devices such as AI PCs, AIoT devices, automotive systems, edge servers, robots, and industrial sensors.
  • High-Performance Memory: Rambus enhances memory performance with its LPDDR5/5X memory controller IP, which supports data rates up to 9.6 Gbps and features advanced bank management, command queuing, and look-ahead logic to maximize throughput and minimize latency.
  • Advanced Process Technology: Samsung Foundry provides the silicon platform and ecosystem enablement that support DEEPX’s edge AI development, helping reduce integration complexity and improve design predictability through advanced logic processes and the SAFE™ Alliance. Samsung Foundry’s 2nm GAA process technology represents a key next step for DEEPX’s DX-M2 processor, supporting further gains in power efficiency and performance scaling.

Together, these technologies empower edge devices to run complex AI workloads locally with low power and high efficiency, setting the stage for the next generation of edge inferencing.

Optimized Memory for AI Inference

The Rambus LPDDR5/5X memory controller IP is purpose-built for applications requiring high memory throughput at low power. It supports features such as:

  • Queue-based user interface with reordering scheduler
  • Look-ahead activate, precharge, and auto-precharge logic
  • Support for burst lengths BL16 and BL32
  • Parity protection and in-line ECC
  • Compatibility with LPDDR5T, LPDDR5, and LPDDR5X devices
  • Interoperability with Samsung LPDDR5/5X PHY

These capabilities are essential for AI inference, where memory bandwidth and latency directly impact model responsiveness and accuracy.

The Value of Samsung Foundry’s “One-Stop-Shop” Model

Samsung Foundry brings together advanced logic process technology and a tightly aligned SAFE™ IP ecosystem through a vertically integrated technology stack that simplifies complex programs. By coordinating cutting-edge logic processes, IP readiness, and manufacturing considerations earlier in the design cycle, Samsung Foundry helps reduce multi-vendor friction, improve integration efficiency, and accelerate time-to-market.

For edge AI applications such as DEEPX’s DX-M roadmap, Samsung Foundry’s scalable process portfolio – from FinFET to leading-edge 2nm GAA – supports aggressive power-performance targets while maintaining manufacturability. Through collaboration with the SAFE™ ecosystem, memory controller IP from partners like Rambus can be efficiently integrated, helping reduce risk and accelerate time to silicon.

This ecosystem-driven model allows customers to focus on AI architecture and application differentiation, while relying on a stable and scalable silicon platform to support current and future edge AI designs.

Empowering the AI Revolution at the Edge

This collaboration exemplifies the power of ecosystem synergy. By combining DEEPX’s AI compute innovation, Samsung Foundry’s manufacturing excellence and ecosystem enablement, and Rambus’ memory interface leadership, the trio is enabling a new generation of edge devices that are smarter, faster, and more secure.

Whether it’s enabling real-time object detection in smart cameras, predictive maintenance in industrial systems, or intelligent navigation in autonomous drones, the joint solution is poised to transform how AI is deployed at the edge.

Looking Ahead: Pushing the Boundaries with LPDDR6

Looking ahead, DEEPX and Rambus are extending their collaboration to the next frontier: LPDDR6 & LPDDR6-PIM (Processing In Memory). As AI models grow in complexity and demand even greater memory bandwidth, LPDDR6 is poised to deliver speeds exceeding 9.6 Gbps, while reducing operational power by up to 30% compared to LPDDR5X.
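The figures above can be put in rough perspective with a back-of-the-envelope calculation. The 32-bit interface width below is an illustrative assumption (actual DEEPX configurations are not specified here), and the power figure simply applies the quoted "up to 30%" reduction to a normalized baseline.

```python
# Illustrative bandwidth and power arithmetic, not product specs.
per_pin_gbps = 9.6               # per-pin data rate quoted above (Gb/s)
bus_width_bits = 32              # ASSUMED interface width for this sketch
bandwidth_gbps = per_pin_gbps * bus_width_bits  # aggregate Gb/s
bandwidth_gBps = bandwidth_gbps / 8             # aggregate GB/s

lpddr5x_power = 1.0                              # normalized baseline
lpddr6_power = lpddr5x_power * (1 - 0.30)        # "up to 30%" lower

print(f"Peak bandwidth: {bandwidth_gBps:.1f} GB/s")
print(f"Relative LPDDR6 power: {lpddr6_power:.2f}x")
```

At 9.6 Gbps per pin, even a modest 32-bit channel yields 38.4 GB/s of peak bandwidth, and LPDDR6 devices exceeding that per-pin rate at up to 30% lower operational power would widen the memory headroom available to bandwidth-hungry generative AI models at the edge.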

DEEPX, with its roadmap for next-generation AI chips like the DX-M2, is aligning its architecture to take full advantage of LPDDR6’s capabilities.

This forward-looking collaboration underscores the trio’s commitment to redefining what’s possible in edge AI—delivering smarter, faster, and more efficient solutions that scale with the future of computing.
