64-Bit AI Platform for AI-enabled Edge SoCs
The TritonAI 64 platform helps organizations future-proof their designs against continual change by delivering a flexible architecture that uses 8- to 32-bit integer-based computation for high-performance AI inferencing at the edge today. The platform includes three scalable technologies that developers can easily configure to address a broad range of AI use cases and computational requirements:
MIPS® 64-bit SIMD engine
WaveFlow™ dataflow engine
WaveTensor™ processing engine
The TritonAI 64 platform delivers varying performance levels by incorporating additional compute elements from each of the three technologies in a modular and linear fashion. Designers can configure each of the three modules as needed to address the performance needs for varying AI use cases.
Ongoing software updates keep the platform current, so customers' environments keep pace with rapidly evolving AI requirements.
Features
- Mature IDE & tools
- Integrated driver layer for technology mapping
- Linux support
- Abstract AI framework via the Wave Runtime (WaveRT) API
- Optimized AI libraries for:
- CPU/SIMD/WaveFlow/WaveTensor
- TensorFlow Lite build support
- MIPS 64-BIT SIMD ENGINE:
- MIPS64r6 instruction set architecture
- 128-bit SIMD/FPU for INT/SP/DP ops
- Virtualization extensions
- Superscalar 9-stage pipeline w/SMT
- Caches (32KB-64KB), DSPRAM (0-64KB)
- Advanced branch prediction and MMU
- Multi-Processor Cluster:
- 1-6 cores
- Integrated L2 cache (0-8MB, opt ECC)
- Power management (F/V gating/CPU)
- Interrupt control with virtualization
- 256b native AXI4 or ACE interface
- WAVEFLOW DATAFLOW ENGINE:
- Configurable IMEM and DMEM Sizes
- Overlap of communication & computation
- Compatible datatypes with WaveTensor
- Integer (Int8, Int16, Int32)
- Wide range of scalable solutions (2-1K tiles)
- Future-proof for all AI algorithms
- Flexible multi-dimensional tiling
- Supports signal and vision processing
- WAVETENSOR PROCESSING ENGINE:
- Configurable MACs, accumulation & array size
- 4x4x4 (64 MAC) and 8x8x8 (512 MAC) base tiles
- Up to 32 tiles per slice & up to 32 slices per array
- Slices support overlap of communication & computation
- Supports int8 for inferencing
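The tile and slice limits above imply a peak per-cycle MAC count that can be computed directly. A quick sketch of that arithmetic (a maximal configuration is assumed; no clock rate is given, since turning MACs per cycle into throughput would require assuming one):

```python
# Peak per-cycle MAC count for a maximal WaveTensor configuration,
# using the limits from the bullets above.
MACS_PER_TILE = 512         # largest base tile (512 MACs)
TILES_PER_SLICE = 32        # "up to 32 tiles per slice"
SLICES_PER_ARRAY = 32       # "up to 32 slices per array"

peak_macs_per_cycle = MACS_PER_TILE * TILES_PER_SLICE * SLICES_PER_ARRAY
print(peak_macs_per_cycle)  # 524288 MACs per cycle per array
```

Smaller configurations scale down linearly from this maximum by reducing tiles per slice, slices per array, or the base tile size.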
- Application Programming Kit (APK):
- Allows developers to parse and execute AI tasks on the WaveFlow and WaveTensor acceleration engines. These engines are controlled in a heterogeneous programming environment managed by the unifying WaveRT API. This software-centric approach abstracts the AI use case, framework, and algorithms from the dedicated silicon executing the code, allowing developers to exploit algorithmic parallelism for higher performance.
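The heterogeneous dispatch described above can be sketched in miniature. The actual WaveRT API is not publicly documented, so every name below (`select_engine`, the engine labels, the op/dtype strings) is a hypothetical illustration of mapping graph operations onto the CPU/SIMD, WaveFlow, and WaveTensor engines, not real WaveRT code:

```python
# Hypothetical sketch of engine selection in a WaveRT-style runtime.
# None of these names come from the real WaveRT API; they only
# illustrate mapping graph ops onto the three compute technologies.

def select_engine(op_type: str, dtype: str) -> str:
    """Pick an acceleration engine for one graph operation."""
    if op_type in ("conv2d", "matmul") and dtype == "int8":
        return "WaveTensor"   # dense int8 MAC workloads
    if dtype in ("int8", "int16", "int32"):
        return "WaveFlow"     # flexible integer dataflow ops
    return "CPU/SIMD"         # fallback: MIPS64 SIMD engine

# A toy graph: (op, dtype) pairs mapped to an execution plan.
graph = [("conv2d", "int8"), ("softmax", "fp32"), ("add", "int16")]
plan = [(op, select_engine(op, dt)) for op, dt in graph]
print(plan)
```

The point of such a layer is that the model description never names an engine; the runtime's driver layer makes that choice, which is what lets the same compiled workload run across differently sized TritonAI configurations.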
Benefits
- Flexible support for a broad range of AI use cases
- Efficient execution of current AI CNN algorithms
- Easily configurable, allowing customers to scale performance to meet changing AI algorithmic needs and use cases
- Comprehensive, future-proof, licensable IP platform
- Support for 8- to 32-bit integer-based inferencing
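Integer-only inferencing of the kind listed above relies on quantizing floating-point model values to integers. A minimal symmetric int8 quantization sketch, shown as the standard textbook technique rather than anything platform-specific:

```python
# Minimal symmetric int8 quantization: a standard technique for
# integer-only inference, not TritonAI-specific code.

def quantize_int8(values, scale):
    """Map floats to int8 with round-to-nearest and clamping."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def dequantize_int8(q_values, scale):
    """Recover approximate float values from int8 codes."""
    return [q * scale for q in q_values]

weights = [0.5, -1.25, 0.03, 2.0]
scale = 2.0 / 127            # largest magnitude maps to int8 max
q = quantize_int8(weights, scale)
print(q)                     # -> [32, -79, 2, 127]
```

Wider integer formats (int16, int32) follow the same scheme with larger clamp ranges, trading memory and MAC throughput for lower quantization error.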
Block Diagram of the 64-Bit AI Platform for AI-enabled Edge SoCs
