Google and Movidius to Enhance Deep Learning Capabilities in Next-Gen Devices
SAN MATEO, Calif. – January 27, 2016 – Movidius, the leader in low-power machine vision for connected devices, today announced that it is working with Google to accelerate the adoption of deep learning within mobile devices. As part of the agreement, Google will source Movidius processors alongside the entire Movidius software development environment. In turn, Google will contribute to Movidius' neural network technology roadmap.
This agreement enables Google to deploy its advanced neural computation engine on Movidius' ultra-low-power platform, introducing a new way for machine intelligence to run locally on devices. Local computation allows data to stay on the device and applications to function without an internet connection and with lower latency. This means future products will be able to understand images and audio with incredible speed and accuracy, offering a more personal and contextualized computing experience.
“What Google has been able to achieve with neural networks is providing us with the building blocks for machine intelligence, laying the groundwork for the next decade of how technology will enhance the way people interact with the world,” said Blaise Agüera y Arcas, head of Google’s machine intelligence group in Seattle. “By working with Movidius, we’re able to expand this technology beyond the data center and out into the real world, giving people the benefits of machine intelligence on their personal devices.”
Google will utilize Movidius’ latest flagship chip, the MA2450. The MA2450 is the only commercial solution on the market today with the performance and power efficiency to perform complex neural network computations in ultra-compact form factors. It is the most powerful iteration of the Myriad 2 family of vision processors, providing a series of improvements over the first-generation Myriad 2 VPU announced last year, the MA2100.
“The technological advances Google has made in machine intelligence and neural networks are astounding. The challenge in embedding this technology into consumer devices boils down to the need for extreme power efficiency, and this is where a deep synthesis between the underlying hardware architecture and the neural compute comes in,” said Remi El-Ouazzane, CEO, Movidius. “Movidius’ mission is to bring visual intelligence to devices so that they can understand the world in a more natural way. This partnership with Google will allow us to accelerate that vision in a tangible way.”
As the companies continue their collaboration, more details will become available.