AI Startup Deep Vision Powers AI Innovation at the Edge
LOS ALTOS, Calif., November 19, 2020 – Deep Vision exits stealth mode and launches its ARA-1 inference processor, enabling a new generation of AI vision applications at the edge. The processor provides an optimal balance of compute, memory, energy efficiency (2 W typical), and ultra-low latency in a compact form factor, making it the definitive choice for endpoints such as cameras and sensors, as well as edge servers, where high compute requirements, model flexibility, and energy efficiency are paramount.
“Today’s complex AI workloads require not only low power but also low latency to deliver real-time intelligence at the edge,” said Ravi Annavajjhala, CEO of Deep Vision. “No more making tradeoffs between performance and efficiency. Developers now have access to higher accuracy outcomes and rich data insights, all on one processor.”
Groundbreaking High-Efficiency Architecture
Deep learning models are growing in complexity, driving increased compute demand for AI at the edge. The Deep Vision ARA-1 processor is based on a patented Polymorphic Dataflow Architecture, capable of handling varied dataflows to minimize on-chip data movement. The architecture embeds instructions within each neural network model, allowing any dataflow pattern within a deep learning model to be mapped optimally. Keeping data close to the compute engines minimizes data movement, ensuring high inference throughput, low latency, and greater power efficiency. The compiler automatically evaluates multiple dataflow patterns for each layer in a neural network and chooses the pattern with the highest performance and lowest power.
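The per-layer selection described above can be sketched as a simple cost-driven search. Everything here is an illustrative assumption: the pattern names and the toy cost model are hypothetical, not Deep Vision's actual compiler internals.

```python
# Hypothetical sketch of per-layer dataflow selection: evaluate candidate
# patterns for each layer and keep the cheapest one. Pattern names and the
# cost model are illustrative assumptions, not Deep Vision's compiler.

PATTERNS = ["weight_stationary", "output_stationary", "no_local_reuse"]

def cost_model(layer, pattern):
    # Toy proxy for latency/power: bytes streamed from off-chip memory.
    if pattern == "weight_stationary":   # weights pinned on-chip, activations stream
        return layer["acts"]
    if pattern == "output_stationary":   # partial sums pinned on-chip, weights stream
        return layer["weights"]
    return layer["weights"] + layer["acts"]  # no on-chip reuse at all

def select_dataflow(layer):
    # Choose the pattern with the lowest modeled data movement for this layer.
    return min(PATTERNS, key=lambda p: cost_model(layer, p))

layers = [
    {"name": "conv1", "weights": 9_408,     "acts": 802_816},  # small filter, big maps
    {"name": "fc",    "weights": 2_048_000, "acts": 1_000},    # big weights, tiny output
]
schedule = {layer["name"]: select_dataflow(layer) for layer in layers}
print(schedule)
```

Note how the two layers end up with different patterns: the convolution keeps its small weight set streaming strategy cheap, while the fully connected layer benefits from pinning activations instead, which is the flexibility the architecture claims per layer.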
With simultaneous multi-model processing, the Deep Vision ARA-1 processor can also run multiple models effectively without a performance penalty, generating results faster and more accurately. With lower system power consumption than the Edge TPU and Movidius Myriad X, the ARA-1 runs deep learning models such as ResNet-50 with 6x lower latency than the Edge TPU and 4x lower latency than the Myriad X.
Software-Centric Approach Breaks Down Complexity Barriers
Deep Vision’s software development kit (SDK) and hardware are designed to work seamlessly together, ensuring optimal model accuracy at the lowest power consumption. With a built-in quantizer, simulator, and profiler, developers have all the tools needed to design and execute computationally complex inference applications. Migrating models to production without extensive code development has historically been challenging. Deep Vision’s SDK provides a frictionless workflow: a low-code, automated, seamless migration from trained model to production application. The SDK cuts expensive development time, dramatically increasing productivity and reducing overall time to market.
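The quantize/simulate/profile flow described above can be illustrated with a minimal sketch. All class and method names here are hypothetical stand-ins invented for illustration; they are not the actual Deep Vision SDK API.

```python
# Hypothetical low-code workflow sketch (quantize -> simulate -> profile).
# Every name below is an illustrative assumption, not Deep Vision's real API.

class Quantizer:
    def run(self, model):
        # Toy int8 post-training quantization with a single per-tensor scale.
        scale = max(abs(w) for w in model["weights"]) / 127 or 1.0
        return {"weights": [round(w / scale) for w in model["weights"]],
                "scale": scale}

class Simulator:
    def run(self, qmodel, inputs):
        # Stand-in for bit-accurate simulation: dequantize and accumulate.
        return sum(w * qmodel["scale"] for w in qmodel["weights"]) * inputs

class Profiler:
    def run(self, qmodel):
        # Toy per-model cost report, so bottlenecks can be inspected pre-deployment.
        n = len(qmodel["weights"])
        return {"macs": n, "est_latency_ms": n * 0.01}

model = {"weights": [0.5, -1.27, 0.02]}      # tiny stand-in for a trained model
qmodel = Quantizer().run(model)              # quantize
output = Simulator().run(qmodel, inputs=1.0) # simulate
report = Profiler().run(qmodel)              # profile
print(qmodel["weights"], report)
```

The point of the sketch is the shape of the workflow: each stage consumes the previous stage's artifact with no hand-written glue code, which is what a low-code migration path from trained model to production application implies.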
Paving the Path for New Markets
The Deep Vision ARA-1 processors are designed to accelerate neural network performance for smart retail, robotics, industrial automation, smart cities, autonomous vehicles, and more. Deep Vision is currently running proofs of concept (POCs) with customers across these industries.
Pricing and Availability
The processor offers developers great flexibility in hardware integration, with three form factors including high-speed USB and PCIe interface options. The Deep Vision ARA-1 processors are now shipping. For pricing and availability, please contact sales@deepvision.io.
About Deep Vision:
Founded by Dr. Rehan Hameed and Dr. Wajahat Qadeer in 2015, Deep Vision enables rich data insights to better optimize real-time actions at the edge. Our AI inference solutions deliver the optimal balance of compute, memory, low latency, and energy efficiency for the demands of today’s latency-sensitive AI-based applications. Deep Vision has raised $19 million and is backed by multiple investors, including Silicon Motion, Western Digital, Stanford, Exfinity Ventures, and Sinovation Ventures. www.deepvision.io