Cadence Accelerates Intelligent SoC Development with Comprehensive On-Device Tensilica AI Platform
New Tensilica AI engine boosts performance, and AI accelerators provide a turnkey solution for consumer, mobile, automotive and industrial AI SoC designs
SAN JOSE, Calif., Sep 13, 2021 -- Cadence Design Systems, Inc. (Nasdaq: CDNS) today unveiled its Tensilica® AI Platform for accelerating AI SoC development, including three supporting product families optimized for varying data and on-device AI requirements. Spanning the low, mid and high end, the comprehensive Cadence® Tensilica AI Platform delivers scalable and energy-efficient on-device to edge AI processing, which is key to today’s increasingly ubiquitous AI SoCs. A new companion AI neural network engine (NNE) consumes 80% less energy per inference and delivers more than 4X TOPS/W compared to industry-leading standalone Tensilica DSPs, while neural network accelerators (NNAs) deliver flagship AI performance and energy efficiency in a turnkey solution.
Targeting intelligent sensor, internet of things (IoT) audio, mobile vision/voice AI, IoT vision and advanced driver assistance systems (ADAS) applications, the Tensilica AI Platform delivers optimal power, performance and area (PPA) and scalability with a common software platform. Built upon the highly successful application-specific Tensilica DSPs already shipping in volume production in leading AI SoCs for the consumer, mobile, automotive and industrial markets, the Tensilica AI Platform product families include:
- AI Base: Includes the popular Tensilica HiFi DSPs for audio/voice, Vision DSPs, and ConnX DSPs for radar/lidar and communications, combined with AI instruction-set architecture (ISA) extensions.
- AI Boost: Adds a companion NNE, initially the Tensilica NNE 110 AI engine, which scales from 64 to 256 GOPS and provides concurrent signal processing and efficient inferencing.
- AI Max: Encompasses the Tensilica NNA 1xx AI accelerator family—currently including the Tensilica NNA 110 accelerator and the NNA 120, NNA 140 and NNA 180 multi-core accelerator options—which integrates the AI Base and AI Boost technology. The multi-core NNA accelerators can scale up to 32 TOPS, while future NNA products are targeted to scale to 100s of TOPS.
All of the NNE and NNA products include random sparse compute to improve performance, run-time tensor compression to decrease memory bandwidth, and pruning plus clustering to reduce model size.
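To make the model-size techniques concrete, here is a minimal NumPy sketch of magnitude pruning (zeroing the smallest weights to create sparsity) and weight clustering (quantizing the surviving weights to a few shared centroids). This is an illustration of the general ideas only, not Cadence's implementation; the function names and the evenly spaced centroid choice are assumptions for the example.

```python
import numpy as np

def prune(weights, sparsity=0.5):
    """Magnitude pruning: zero out the smallest-magnitude weights."""
    k = int(weights.size * sparsity)
    threshold = np.sort(np.abs(weights), axis=None)[k]
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def cluster(weights, n_clusters=4):
    """Weight clustering: snap nonzero weights to a few shared centroids.
    Real flows typically fit centroids with k-means; evenly spaced
    centroids are used here only to keep the sketch self-contained."""
    nonzero = weights[weights != 0]
    centroids = np.linspace(nonzero.min(), nonzero.max(), n_clusters)
    idx = np.abs(weights[..., None] - centroids).argmin(axis=-1)
    return np.where(weights == 0, 0.0, centroids[idx])

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))
w_small = cluster(prune(w, sparsity=0.5), n_clusters=4)
# About half the weights are now zero (sparsity), and the remaining
# weights take at most n_clusters distinct values, so the model can be
# stored as a short codebook plus small per-weight indices.
```

A pruned-and-clustered tensor like `w_small` compresses well: the zeros need no storage beyond a sparsity bitmap, and each nonzero weight needs only a 2-bit index into the 4-entry codebook instead of a full floating-point value.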
Comprehensive common AI software addresses all target applications, streamlining product development and enabling easy migration as design requirements evolve. This software includes the Tensilica Neural Network Compiler, which supports these industry-standard frameworks: TensorFlow, ONNX, PyTorch, Caffe2, TensorFlow Lite and MXNet for automated end-to-end code generation; Android Neural Network Compiler; TFLite Delegates for real-time execution; and TensorFlow Lite Micro for microcontroller-class devices.
“AI SoC developers are challenged to get to market faster with cost-effective, differentiated products offering longer battery life and scalable performance,” said Sanjive Agarwala, corporate vice president and general manager of the IP Group at Cadence. “With our mature, extensible and configurable platform based on our best-in-class Tensilica DSPs and featuring common AI software, Cadence allows AI SoC developers to minimize development costs and meet tight market windows. By enabling AI across all performance and price points, Cadence is driving the rapid deployment of AI-enabled systems everywhere.”
Supporting Quotes
“Scaling low power on-device AI capabilities requires extremely efficient multi-sensory compute. Cadence and the TensorFlow Lite for Microcontrollers (TFLM) team have been working together for many years to co-develop solutions that enable the most cutting-edge, low-footprint use cases in the AI space. The trend for real-time audio networks to use LSTM-based neural nets for the best performance and efficiency is a key example. Working closely with Cadence, we are integrating a highly optimized LSTM operator on Tensilica HiFi DSPs that enables the next level of performance improvements for key use cases like voice-call noise suppression. We are excited to continue this collaboration and provide industry-level innovation in the low-energy AI space.”
- Pete Warden, Technical Lead of TensorFlow Lite Micro at Google
“On-device AI deployment on our KL720—a 1.4 TOPS AI SoC targeted for vehicles, smart home, smart security, industrial control applications, healthcare and AI of things (AIoT)—is key to both our customers’ success and our mission to enable AI everywhere, for everyone. Cadence’s high-performance, low-power Tensilica Vision DSPs pack a lot of compute capacity with AI ISA extensions plus the necessary AI software to tackle the latest AI challenges.”
- Albert Liu, Founder and CEO of Kneron
“Integrating a Cadence Tensilica HiFi 4 DSP into the NXP i.MX RT600 crossover MCU not only provides high-performance DSP capabilities for a broad range of audio and voice processing applications, but also increases inference performance, enabling AI even in very low-power, battery-operated products. The HiFi neural network library allows NXP to take full advantage of the AI capabilities of the HiFi 4 DSP and integrate it into NXP’s eIQ Machine Learning Software Development Environment supporting the TensorFlow Lite Micro and Glow ML inference engines.”
- Cristiano Castello, Sr. Director of Microcontrollers Product Innovation at NXP Semiconductors
“As AI applications are rapidly propagating from the cloud to the edge, integrating an AI accelerator on-device has become a must to meet low-latency requirements in ADAS, mobile, intelligent sensors and the IoT. AI SoCs require mature accelerator IP that addresses the varying needs of each market and includes a comprehensive software solution. With its Tensilica AI Base, AI Boost and AI Max technology that provides a clear migration path as performance and energy requirements evolve, Cadence is a compelling IP provider for comprehensive on-device AI IP solutions.”
- Mike Demler, Senior Analyst at The Linley Group
Availability
The NNE 110 AI engine and the NNA 1xx AI accelerator family support Cadence’s Intelligent System Design™ strategy, which enables pervasive intelligence for SoC design excellence, and are expected to be generally available in the fourth quarter of 2021. For more information on the Tensilica AI Platform and new AI IP, please visit www.cadence.com/go/TensilicaAI.
About Cadence
Cadence is a pivotal leader in electronic design, building upon more than 30 years of computational software expertise. The company applies its underlying Intelligent System Design strategy to deliver software, hardware and IP that turn design concepts into reality. Cadence customers are the world’s most innovative companies, delivering extraordinary electronic products from chips to boards to systems for the most dynamic market applications, including consumer, hyperscale computing, 5G communications, automotive, mobile, aerospace, industrial and healthcare. For seven years in a row, Fortune magazine has named Cadence one of the 100 Best Companies to Work For. Learn more at cadence.com.