Comprehensive IP platform for edge to on-device AI
* Domain-specific, extensible, and configurable AI platform built upon mature, volume production-proven Tensilica architecture
* Industry-leading performance and energy efficiency for on-device AI applications
* Comprehensive, common AI software addresses all target markets
* Low-, mid-, and high-end AI product families for the full spectrum of PPA and cost points
* Scalable from 8 GOPS to 32 TOPS today, with 100s of TOPS targeted for future AI requirements
As demand for AI-based tasks has grown across a wide range of applications and vertical segments, on-device and edge AI processing has become increasingly prevalent. Because these solutions are deployed in SoCs with widely varying computational and power requirements, meeting the needs of automotive, consumer, industrial, and mobile markets is challenging for both silicon IP providers and SoC companies.
The comprehensive Cadence® Tensilica™ AI platform helps SoC developers accelerate the design and delivery of solutions that meet their applications' KPIs and requirements. The platform includes three AI product families, optimized for varying data and on-device AI requirements to provide optimal power, performance, and area (PPA), coupled with a common software platform. Together they deliver the scalable, energy-efficient on-device and edge AI processing that is key to today’s increasingly ubiquitous AI SoCs. The platform spans the low-, mid-, and high-end classes and is built on the highly successful, application-specific Tensilica DSP architecture.
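To put the quoted 8 GOPS to 32 TOPS scaling range in context, the short Python sketch below estimates the sustained compute a workload needs from its MAC count and target inference rate. The example workloads, their MAC counts and frame rates, and the 2-ops-per-MAC convention are illustrative sizing assumptions, not Cadence figures.

```python
# Rough sizing sketch: estimate the sustained compute a workload needs and
# compare it with the 8 GOPS - 32 TOPS range quoted in the feature list above.
# The workload figures below are illustrative assumptions, not Cadence data.

def required_ops_per_second(macs_per_inference: float, inferences_per_second: float) -> float:
    """One multiply-accumulate (MAC) is conventionally counted as 2 ops."""
    return 2.0 * macs_per_inference * inferences_per_second

workloads = {
    # name: (MACs per inference, inferences per second) -- hypothetical examples
    "keyword spotting":       (2e6,   10),
    "MobileNet-class vision": (300e6, 30),
    "multi-camera ADAS":      (20e9,  60),
}

PLATFORM_RANGE = (8e9, 32e12)  # 8 GOPS to 32 TOPS

for name, (macs, fps) in workloads.items():
    ops = required_ops_per_second(macs, fps)
    in_range = PLATFORM_RANGE[0] <= ops <= PLATFORM_RANGE[1]
    print(f"{name:24s} needs ~{ops / 1e9:10.1f} GOPS (within platform range: {in_range})")
```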
These three AI product families are AI Base, AI Boost, and AI Max:
* AI Base includes Tensilica HiFi DSPs for audio/voice, Vision DSPs, and ConnX DSPs for radar and communications, combined with AI instruction-set architecture (ISA) extensions.
* AI Boost adds a companion neural network engine (NNE), initially the Tensilica AI NNE 110 engine, which scales from 64 to 256 GOPS and provides concurrent signal processing and efficient inferencing.
* AI Max includes the turnkey Tensilica AI NNA 1xx accelerator family (currently the Tensilica AI NNA 110 accelerator plus the NNA 120, NNA 140, and NNA 180 multi-core options), which integrates the AI Base and AI Boost products. The multi-core NNA accelerators scale up to 32 TOPS, while future NNA products are targeted to scale to 100s of TOPS.
* In addition, a common AI software stack designed to accommodate all workloads and markets streamlines product development and enables easy migration as design requirements evolve. Tensilica AI products can run all neural network layers, including but not limited to convolution, fully connected, LSTM, LRN, and pooling layers (a generic sketch of these layer types follows below).
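As a concrete illustration of the layer types listed above, the following is a minimal PyTorch sketch combining convolution, LRN, pooling, LSTM, and fully connected layers in one small model. It is a generic example of the kind of network such a platform targets, not Cadence code; the architecture, tensor shapes, and class count are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class TinyExampleModel(nn.Module):
    """Generic example built only from the layer types listed above:
    convolution, LRN, pooling, LSTM, and fully connected (Linear)."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=3, padding=1)
        self.lrn = nn.LocalResponseNorm(size=5)      # local response normalization
        self.pool = nn.MaxPool2d(kernel_size=2)      # pooling layer
        self.lstm = nn.LSTM(input_size=16 * 16, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, num_classes)         # fully connected classifier

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, 32, 32), e.g. a spectrogram or small image patch
        x = torch.relu(self.conv(x))                 # convolution -> (batch, 16, 32, 32)
        x = self.lrn(x)
        x = self.pool(x)                             # -> (batch, 16, 16, 16)
        # Treat the 16 pooled rows as a sequence of 16 feature vectors of size 16*16.
        x = x.permute(0, 2, 1, 3).reshape(x.size(0), 16, 16 * 16)
        _, (h_n, _) = self.lstm(x)                   # LSTM over the sequence
        return self.fc(h_n[-1])                      # fully connected output

model = TinyExampleModel()
logits = model(torch.randn(2, 1, 32, 32))            # dummy batch of 2 inputs
print(logits.shape)                                  # torch.Size([2, 10])
```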
Block Diagram of the Comprehensive IP platform for edge to on-device AI

AI platform IP
- 64-Bit AI Platform for AI-enabled Edge SoCs
- IP platform for intelligence gathering chips at the Edge
- Quad core IP platform with integrated Arm security subsystem
- IP Platform for BLE 5.2 IoT Sensors
- IP Platform for AI-Enabled IEEE 802.15.4 Smart Sensors
- IP Platform for BLE 5.2 and 802.15.4 IoT Sensors