Omnitek achieves world-leading CNN performance per watt in a midrange programmable device.
BASINGSTOKE, UK – April 16th, 2019 – Omnitek today announced immediate availability of a new Convolutional Neural Network (CNN) inference engine, delivering world-leading performance per watt at full FP32 accuracy in a midrange SoC FPGA.
Optimised for the Intel Arria 10 GX architecture, the Omnitek Deep Learning Processing Unit (DPU) achieves 135 GOPS/W at full 32-bit floating-point accuracy when running the VGG-16 CNN on an Arria 10 GX 1150. The innovative design employs a novel mathematical framework that combines low-precision fixed-point maths with floating-point maths to achieve this very high compute density with zero loss of accuracy.
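The release does not disclose the mathematical framework itself, but the general idea of pairing low-precision fixed-point multiply-accumulate with floating-point rescaling can be illustrated with a minimal sketch. The bit width, symmetric quantisation scheme and function names below are illustrative assumptions, not Omnitek's actual DPU arithmetic.

```python
# Illustrative sketch only: one way to combine low-precision fixed-point
# MACs with an FP32 rescale. Bit widths, scaling scheme and names are
# assumptions, not Omnitek's implementation.
import numpy as np

def quantise(x, bits=8):
    """Map an FP32 tensor onto symmetric fixed-point integers plus a scale."""
    scale = np.max(np.abs(x)) / (2 ** (bits - 1) - 1)
    q = np.round(x / scale).astype(np.int32)   # values fit in the int8 range
    return q, scale

def fixed_point_dot(a_fp32, w_fp32, bits=8):
    """Dot product computed with integer MACs, result rescaled back to FP32."""
    a_q, a_scale = quantise(a_fp32, bits)
    w_q, w_scale = quantise(w_fp32, bits)
    acc = np.dot(a_q, w_q)                      # integer multiply-accumulate
    return acc.astype(np.float32) * a_scale * w_scale

activations = np.random.randn(1024).astype(np.float32)
weights = np.random.randn(1024).astype(np.float32)
print(fixed_point_dot(activations, weights), np.dot(activations, weights))
```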
Scalable across a wide range of Arria 10 GX and Stratix 10 GX devices, the DPU can be tuned for low cost or high performance in either embedded or data centre applications. The DPU is fully software-programmable in C/C++ or Python using standard frameworks such as TensorFlow, enabling it to be configured for a wide range of standard CNN models, including GoogLeNet, ResNet-50 and VGG-16, as well as custom models. No FPGA design expertise is required to do this.
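The release names TensorFlow as one supported framework; the sketch below shows what that framework-level starting point typically looks like for a standard model such as VGG-16. Only the TensorFlow/Keras calls are standard API; the final compile-for-DPU step is a hypothetical placeholder, since the release gives no details of Omnitek's tooling.

```python
# Sketch of a framework-level workflow, assuming a standard TensorFlow model
# as the input. The DPU compilation step at the end is hypothetical.
import tensorflow as tf

# Build a standard pretrained CNN with ordinary TensorFlow/Keras calls.
model = tf.keras.applications.VGG16(weights="imagenet")

# Export it in TensorFlow's standard SavedModel format ...
tf.saved_model.save(model, "vgg16_savedmodel")

# ... which a vendor toolchain could then compile for the target device, e.g.
# (hypothetical command, not a documented Omnitek tool):
#   dpu-compile --input vgg16_savedmodel --target arria10-gx-1150
```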
Roger Fawcett, CEO at Omnitek, commented “We are very excited to apply this unique innovation, resulting from our joint research program with Oxford University, to reducing the cost of a whole slew of AI-enabled applications, particularly in video and imaging where we have a rich library of highly optimised IP to complement the DPU and create complete systems on a chip.”
FPGAs are being adopted as the platform of choice for many intelligent video and vision systems. They are ideally suited to machine learning applications due to their massively parallel DSP architecture, distributed memory and ability to reconfigure the logic and connectivity for different algorithms. To this latter point, Omnitek's DPU can be configured to provide optimal compute performance for CNNs, RNNs, MLPs and other neural network topologies that exist today and, more importantly, for the as-yet-unknown algorithms and optimisation techniques that will emerge from the significant ongoing research in this field.
More information is available at www.Omnitek.tv/DPU.
About Omnitek
Omnitek is a world leader in the design of intelligent video and vision systems based on programmable FPGAs and SoCs. Through the supply of expert design services together with highly optimised FPGA intellectual property cores covering high-performance video/vision and AI/machine learning, Omnitek provides cost-optimised solutions to a broad range of markets. To complement this business, Omnitek also designs and manufactures a comprehensive suite of video test & measurement equipment.