EdgeCortix Announces SAKURA AI Co-processor Delivering Industry-Leading Low Latency and Energy Efficiency
TOKYO, April 25, 2022 -- EdgeCortix® Inc., the innovative fabless semiconductor design company with a software-first approach, focused on delivering class-leading compute efficiency and latency for edge artificial intelligence (AI) inference, today unveiled the architecture, performance metrics and delivery timing for its new industry-leading, energy-efficient AI inference co-processor.
Today, at TechInsights' Linley Spring Processor Conference, EdgeCortix officially introduced its flagship energy-efficient AI co-processor (accelerator), branded SAKURA™. Designed entirely at the company's Tokyo-based development center, SAKURA is produced by TSMC on the popular 12nm FinFET process and will be available as low-power PCIe-based development boards to participating companies of the EdgeCortix Early Access Program (EAP) in July 2022.
"SAKURA is revolutionary from both a technical and competitive perspective, delivering well over 10X performance/watt advantage compared to current AI inference solutions based on traditional graphics processing units (GPUs), especially for real-time edge applications.", said Sakyasingha Dasgupta, CEO and Founder of EdgeCortix, "After validating our AI processor architecture design with multiple field-programmable gate array (FPGA) customers in production, we designed SAKURA as a co-processor that can be plugged in alongside a host processor in nearly all existing systems to significantly accelerate AI inference. Using our patented runtime-reconfigurable interconnect technology, SAKURA is inherently more flexible than traditional processors and can achieve near optimal compute utilization in contrast to most AI processors that have been developed over the last 40+ years."
SAKURA is powered by a 40 trillion operations per second (TOPS), single-core Dynamic Neural Accelerator® (DNA) Intellectual Property (IP), EdgeCortix's proprietary neural processing engine with a built-in reconfigurable data path connecting all compute engines. DNA enables the new SAKURA AI co-processor to run multiple deep neural network models together with ultra-low latency while preserving exceptional TOPS utilization. This unique attribute is key to the processing speed, energy efficiency, and longevity of the system-on-chip, providing exceptional total-cost-of-ownership benefits. The DNA IP is specifically optimized for inference on streaming and high-resolution data.
Key industrial segments where the SAKURA performance profile is ideally suited include transportation and autonomous vehicles, defense, security, 5G communications, augmented and virtual reality, smart manufacturing, smart cities, smart retail, and robotics, all markets that require low-power, low-latency AI inference.
The company announced that SAKURA will also be available to customers for purchase in multiple hardware form factors, and the underlying IP can be licensed in conjunction with EdgeCortix's software stack for customers designing their own proprietary semiconductors.
Key SAKURA features include:
- Up to 40 TOPS (single-chip version) and 200 TOPS (multi-chip version).
- PCIe device TDP: 10-15 W.
- Typical model power consumption: ~5 W.
- 2x64 LPDDR4x, 16 GB.
- PCIe Gen 3, up to 16 GB/s bandwidth (see the bandwidth sketch after this list).
- Two form factors: dual M.2 and low-profile PCIe.
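The quoted 16 GB/s figure is consistent with a full-width PCIe Gen 3 link. The short Python sketch below works through that arithmetic; the x16 lane count is an assumption for illustration, not a figure stated in the release.

```python
# Rough PCIe Gen 3 bandwidth check (illustrative only; the x16 link width is assumed).
GT_PER_S_PER_LANE = 8.0          # PCIe Gen 3 raw signaling rate per lane (GT/s)
ENCODING_EFFICIENCY = 128 / 130  # 128b/130b line-encoding overhead
LANES = 16                       # assumed link width

gb_per_s_per_lane = GT_PER_S_PER_LANE * ENCODING_EFFICIENCY / 8  # GB/s per lane, per direction
link_bandwidth = gb_per_s_per_lane * LANES                       # GB/s per direction

print(f"~{link_bandwidth:.1f} GB/s per direction")  # ~15.8 GB/s, i.e. roughly 16 GB/s
```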
The DNA processing engine within SAKURA delivers:
- Over 24K MACs in a single core at 800 MHz (see the peak-throughput arithmetic after this list).
- Optimized for INT8 and batch size 1.
- Runtime-reconfigurable data-path.
- Relatively large on-chip memory: 20 MB.
- Maximizes compute utilization by exploiting multiple degrees of parallelism defined by software.
- Extremely low latency (< 4 ms) on demanding workloads, e.g., object detection models (YOLOv3, YOLOv5), multi-stream video analytics, and neural-network-based point-cloud data processing.
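As a sanity check on the headline 40 TOPS figure, the sketch below multiplies the quoted MAC count and clock rate, counting each multiply-accumulate as two operations (a common convention). The exact MAC count beyond "over 24K" is not disclosed, so this is illustrative arithmetic, not an official specification.

```python
# Illustrative peak-throughput arithmetic for the DNA core (not an official figure).
MAC_UNITS = 24_000   # "over 24K MACs"; exact count is not disclosed
CLOCK_HZ = 800e6     # 800 MHz core clock
OPS_PER_MAC = 2      # one multiply + one accumulate per cycle

peak_tops = MAC_UNITS * CLOCK_HZ * OPS_PER_MAC / 1e12
print(f"~{peak_tops:.1f} TOPS peak")  # ~38.4 TOPS, in line with the ~40 TOPS headline
```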
Fig: EdgeCortix SAKURA chip die photo alongside an AI inference efficiency comparison with the leading GPU for edge use cases on the YOLOv3 object detection benchmark. The current leading GPU SoC at 32 TOPS and 30 W TDP vs. EdgeCortix SAKURA at 40 TOPS and 10 W TDP; SAKURA delivers over a 10X power-efficiency advantage. All data normalized to a baseline of YOLOv3 608x608 with batch size 1.
The company also officially announced the open-source release of its MERA™ compiler software framework, effective immediately. MERA enables seamless acceleration of today's increasingly complex and compute-intensive AI workloads, allowing software engineers to use SAKURA, as well as leading FPGAs powered by the DNA IP and other third-party silicon, as drop-in replacements for CPUs or GPUs. Without leaving standard frameworks like PyTorch, TensorFlow, and ONNX, software engineers can use MERA to port models currently running on GPUs to the SAKURA accelerator without any re-training.
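The flow described above can be pictured as trace, compile, run. The sketch below uses standard PyTorch tracing; the commented-out `mera.compile` call and the `"sakura"` target name are placeholders added for illustration only and are not taken from the release, so the actual MERA API should be checked against the open-source project.

```python
# Illustrative port of a GPU model toward SAKURA via a MERA-style flow.
# The torch/torchvision calls below are standard; the `mera` lines are
# hypothetical placeholders -- the real MERA API may differ.
import torch
import torchvision

# 1. Start from an existing PyTorch model, exactly as it would run on a GPU today.
model = torchvision.models.resnet50(weights=None).eval()
example = torch.randn(1, 3, 224, 224)        # batch size 1, which SAKURA is optimized for
traced = torch.jit.trace(model, example)     # standard TorchScript trace, no re-training

# 2. Hand the traced graph to the compiler and select the accelerator target (hypothetical API):
# import mera
# compiled = mera.compile(traced, input_shape=(1, 3, 224, 224), target="sakura")
# outputs = compiled.run(example.numpy())    # inference now runs on the co-processor
```

The point the release emphasizes is that the first half of this flow is unchanged from a GPU workflow; only the compile target differs.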
About EdgeCortix Inc.
EdgeCortix is a fabless semiconductor design company focused on enabling energy-efficient edge intelligence. It was founded in 2019 with the radical idea of taking a software-first approach while designing an artificial-intelligence-specific, runtime-reconfigurable processor from the ground up using a technique called "hardware & software co-exploration". Targeting advanced computer vision applications first, using proprietary hardware and software IP on existing processors such as FPGAs and custom-designed ASICs, the company aims to positively disrupt the rapidly growing AI hardware space across defense, aerospace, smart cities, Industry 4.0, autonomous vehicles and robotics.