UPMEM Puts CPUs Inside Memory to Allow Applications to Run 20 Times Faster
Company CTO to Discuss Processing-in-Memory Approaches at HOT CHIPS Conference
STANFORD, CALIFORNIA – August 19, 2019 – UPMEM announced today a Processing-in-Memory (PIM) acceleration solution that allows big data and AI applications to run 20 times faster and with 10 times less energy. Instead of moving massive amounts of data to CPUs, the silicon-based technology from UPMEM puts CPUs right in the middle of data, saving time and improving efficiency. By allowing compute to take place directly in the memory chips where data already resides, data-intensive applications can be substantially accelerated. UPMEM reduces data movement while leveraging existing server architecture and memory technologies.
UPMEM CTO and Co-Founder Fabrice Devaux will discuss this new approach along with user case studies in a session titled “True Processing in Memory with DRAM Accelerator” at the HOT CHIPS Conference in Stanford, Calif., on August 19, 2019.
“Today, applications in the data center and at the edge are becoming increasingly data-intensive and processing them becomes constrained by the energy cost of the data movement between the memory and the processing cores, as well as the limited bandwidth between them,” said Devaux. “In my session, I will explain how PIM technology can address those challenges and bring unprecedented benefits to organizations of all sizes. Here at UPMEM, we think that making in-situ processing a practical reality is a major advance in computing.”
“Offloading most of the processing to the memory chips while leveraging existing computing technologies directly benefits our target customers running critical software applications in data centers,” said Gilles Hamou, CEO and co-founder of UPMEM. “The level of interest we have been experiencing clearly demonstrates the market need, and we look forward to sharing more details about customer adoption in the coming months.”
The PIM chip, which embeds UPMEM’s proprietary processors (DRAM Processing Units, DPUs) alongside main memory (DRAM) on a single memory chip, is the low-cost, ultra-efficient building block of this technology. Delivered on standard DIMM modules and paired with a Software Development Kit (SDK), the UPMEM PIM solution accelerates data-intensive applications with seamless integration into standard servers.
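The release does not detail UPMEM's SDK, but the offload pattern it describes can be sketched in miniature: data is scattered once across per-DPU memory banks, each unit computes over its local shard, and only small aggregates travel back to the host. All class and function names below are illustrative assumptions, not UPMEM's actual API.

```python
# Minimal sketch of the processing-in-memory offload pattern.
# SimulatedDPU and pim_count are hypothetical names for illustration;
# they do not reflect UPMEM's real SDK.

class SimulatedDPU:
    """One processing unit co-located with its own memory bank."""
    def __init__(self):
        self.bank = []          # local DRAM shard; data stays here

    def load(self, shard):
        self.bank = shard       # one-time scatter of the input

    def run_kernel(self, needle):
        # Compute happens where the data lives; only a small
        # aggregate (a count) returns to the host.
        return sum(1 for item in self.bank if item == needle)

def pim_count(data, needle, nr_dpus=4):
    dpus = [SimulatedDPU() for _ in range(nr_dpus)]
    # Scatter: partition the dataset across the DPU memory banks.
    for i, dpu in enumerate(dpus):
        dpu.load(data[i::nr_dpus])
    # Launch all kernels, then gather only the tiny per-DPU results.
    return sum(dpu.run_kernel(needle) for dpu in dpus)

print(pim_count([1, 2, 3, 2, 2, 4], 2))  # → 3
```

The point of the pattern is the traffic asymmetry: the large input crosses the memory bus once at load time, while each query moves only one small scalar per unit back to the host.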
“Today’s AI- and ML-driven applications are rapidly increasing the volume, velocity and variety of data, while simultaneously increasing the need to process data in real-time,” said Steffen Hellmold, vice president of corporate business development at Western Digital, an investor in UPMEM through the company’s strategic investment fund, Western Digital Capital. “UPMEM’s innovative PIM acceleration solution intelligently integrates processing with DRAM memory, providing the flexibility to create the purpose-built, data-centric compute architectures that will be essential to meet the demands of the zettabyte age.”
Current use cases include genomics, where mapping or comparing DNA fragments against a reference genome involves tens of gigabytes of data. The UPMEM PIM modules replace the regular DRAM memory modules in existing servers, and the UPMEM PIM accelerator then reduces operations from hours to minutes, delivering an unprecedented level of efficiency and performance.
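As a toy illustration of that genomics workload (a sketch only; production mappers use indexed, approximate alignment over far larger data), the reference can be partitioned across PIM units with a small overlap so matches spanning a shard boundary are not lost. Each unit scans only its local shard, and the host collects just the short list of match offsets:

```python
# Toy exact-match "read mapping" against a reference partitioned
# across PIM units. Illustrative only: real pipelines align
# approximately against references of tens of gigabytes.

def map_fragment(reference, fragment, nr_units=2):
    size = (len(reference) + nr_units - 1) // nr_units
    overlap = len(fragment) - 1      # covers boundary-spanning matches
    hits = []
    for i in range(nr_units):
        base = i * size
        # Each unit holds one contiguous shard plus the overlap.
        shard = reference[base:base + size + overlap]
        start = 0
        while (pos := shard.find(fragment, start)) != -1:
            if pos < size:           # match is "owned" by this unit
                hits.append(base + pos)
            start = pos + 1
    return sorted(hits)

print(map_fragment("ACGTACGTTTACGT", "ACGT", nr_units=2))  # → [0, 4, 10]
```

The `pos < size` check assigns each match to exactly one unit, so the overlap never produces duplicate hits.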
About UPMEM
UPMEM is bringing to market an ultra-efficient, scalable, and programmable PIM technology that drastically reduces data movement in the computing node for data-intensive applications in the data center and at the edge. UPMEM was founded in 2015, with headquarters in Grenoble, France, and a network of partners from Asia to the U.S. The team combines entrepreneurial and technical expertise spanning processor architecture, software design, and low-level application workloads. UPMEM investors include Western Digital, Partech, C4 Ventures, Supernova Invest, and the French tech innovation agency.