Wave Computing’s customizable TritonAI™ 64 platform combines a triad of powerful technologies in a single, licensable solution that enables efficient artificial intelligence (AI) at the edge. The scalable platform offers a flexible and efficient way for system on chip (SoC) developers to incorporate AI inferencing capabilities into their edge computing designs.
The TritonAI 64 platform helps organizations future-proof their designs against continual change by delivering a flexible architecture that uses 8-to-32-bit integer-based computation for high-performance AI inferencing at the edge today. The platform includes three scalable technologies that developers can configure to address a broad range of AI use cases and computational requirements:
MIPS® 64-bit SIMD engine
WaveFlow™ dataflow engine
WaveTensor™ processing engine
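The "8-to-32-bit integer-based" inferencing mentioned above relies on quantization: mapping floating-point weights and activations onto small integers so that inference can run on efficient integer hardware. The sketch below shows generic symmetric int8 quantization as an illustration of the idea; it is not Wave Computing's toolchain or API.

```python
# Generic symmetric int8 quantization sketch — an illustration of
# integer-based inferencing, not Wave Computing's implementation.

def quantize_int8(values):
    """Map floats to int8 range [-127, 127] with one shared scale."""
    scale = max(abs(v) for v in values) / 127.0
    if scale == 0.0:
        scale = 1.0  # avoid divide-by-zero for an all-zero tensor
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from the quantized integers."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.02, 1.0]
q, s = quantize_int8(weights)
restored = dequantize(q, s)  # close to the original floats
```

Wider integer formats (16- or 32-bit) trade some of the memory and compute savings for lower quantization error, which is the flexibility an 8-to-32-bit range offers.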
The TritonAI 64 platform scales performance by adding compute elements from each of the three technologies in a modular, linear fashion. Designers can configure each module independently to meet the performance needs of a given AI use case.
The platform's software stack is updated with each release to keep customers' environments in step with rapidly evolving AI requirements.