How Will Deep Learning Change SoCs?
Junko Yoshida, EETimes
3/30/2015 00:00 AM EDT
MADISON, Wis. – Deep Learning is already changing the way computers see, hear and identify objects in the real world.
However, the bigger -- and perhaps more pertinent -- questions for the semiconductor industry are: Will deep learning ever migrate into smartphones, wearable devices, or the tiny computer-vision SoCs used in highly automated cars? Has anybody come up with an SoC architecture optimized for neural networks? If so, what does it look like?