Wave Computing and Sonics to Present on Deep Learning Technology at ML DevCon
“Overcoming the Memory System Challenge in Dataflow Processing”
San Jose, Calif. – April 25, 2017 – Sonics, Inc., the world’s foremost supplier of on-chip network (NoC) and power management technologies and services, and Wave Computing (http://wavecomp.ai/) today announced that the companies will jointly deliver a technical presentation on Deep Learning technology at the Machine Learning Developers Conference (ML DevCon) on Thursday, April 27, 2017. The presentation, titled “Overcoming the Memory System Challenge in Dataflow Processing,” will be given by Darren Jones, VP of Hardware Engineering at Wave, and Drew Wingard, CTO of Sonics. ML DevCon is being held at the Santa Clara Convention Center in Santa Clara, Calif., and the presentation is scheduled for 11:55 am on April 27. For more information, read the presentation abstract.
“Wave is at the forefront of the group of fabless semiconductor companies targeting Deep Learning applications with their chips,” said Wingard. “Our presentation will provide an architecture overview of Wave’s recently completed Dataflow Processing Unit (DPU) chip and its unique implementation of advanced NoC technology in the memory subsystem. Specifically, it will highlight Wave’s use of Sonics’ Interleaved Multichannel Technology (IMT), which enables both Hybrid Memory Cube (HMC) and Double Data Rate (DDR) memories to be connected through a single NoC fabric to support the very high bandwidth access requirements of the DPU architecture.”
“Our DPU computing fabric leverages huge memory bandwidth to support the faster training of Deep Learning algorithms,” said Jones. “With its IMT capability, Sonics’ NoC helped us abstract the complexities of the heterogeneous memory subsystem to simplify the software model and enable dynamic bandwidth balancing of the DPU traffic while optimizing the physical chip design.”
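The general idea behind multichannel interleaving, as described in the quotes above, can be illustrated with a minimal sketch: a flat system address space is striped across several memory channels at a fixed granularity, so consecutive blocks of traffic land on different channels and aggregate bandwidth scales with the channel count. The channel names, granularity, and `route` helper below are illustrative assumptions, not Sonics’ actual IMT implementation.

```python
# Sketch of round-robin address interleaving across heterogeneous memory
# channels. All constants and names are assumptions for illustration only.

INTERLEAVE_BYTES = 256  # interleave granularity (assumed)
CHANNELS = ["HMC0", "HMC1", "DDR0", "DDR1"]  # mixed HMC/DDR channels (assumed)

def route(address: int) -> tuple[str, int]:
    """Map a flat system address to (channel, channel-local offset)."""
    block = address // INTERLEAVE_BYTES           # which interleave block
    channel = CHANNELS[block % len(CHANNELS)]     # round-robin channel select
    local = (block // len(CHANNELS)) * INTERLEAVE_BYTES + address % INTERLEAVE_BYTES
    return channel, local

# Consecutive blocks land on successive channels, spreading bandwidth:
for addr in (0, 256, 512, 768, 1024):
    print(addr, route(addr))
```

Because the interleaving is handled in the fabric, software sees a single contiguous address space, which is the “simplified software model” the quote refers to.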
About Wave
Wave Computing is a Silicon Valley startup that is revolutionizing the Machine Learning industry. The company’s world-class team is developing the Wave Dataflow Processing Unit (DPU), which employs a disruptive, massively parallel dataflow architecture. When introduced, Wave’s DPU-based solution will be the world’s fastest and most energy-efficient deep learning computer family.
Wave’s solution will enable enterprise and public sector customers to build and deploy smarter, faster Deep Learning applications that improve their business processes through insights gained from the growing volume of data available to them. With funding from Tier 1 VCs, an IP portfolio including over 70 patents, and a track record of execution, Wave is dedicated to accelerating the application of Machine Intelligence in the datacenter and beyond. Wave is based in Campbell, California, USA.
About Sonics, Inc.
Sonics, Inc. (San Jose, Calif.) is the trusted leader in on-chip network (NoC) and power-management technologies used by the world’s top semiconductor and electronics product companies, including Broadcom®, Intel®, Marvell®, MediaTek, and Microchip®. Sonics was the first company to develop and commercialize NoCs, accelerating volume production of complex systems-on-chip (SoC) that contain multiple processor cores. Based on the ICE-Grain™ Power Architecture, Sonics’ ICE-G1™ is the industry’s first complete Energy Processing Unit (EPU), which enables rapid development of SoC power management subsystems. Sonics is also a catalyst for ongoing discussions about design methodology change via the Agile IC Methodology LinkedIn group. Sonics holds approximately 150 patent properties supporting customer products that have shipped more than four billion SoCs. For more information, visit sonicsinc.com.