Machines can see, hear and analyze thanks to embedded neural networks
Youval Nachum, CEVA
embedded.com (February 21, 2018)
The potential applications of artificial intelligence (AI) continue to grow on a daily basis. As different neural network (NN) architectures are tested, tuned and refined to tackle different problems, diverse methods of optimally analyzing data with AI emerge. Many of today's AI applications, such as Google Translate and the speech and vision recognition systems behind Amazon Alexa, leverage the power of the cloud. By relying on always-on Internet connections, high-bandwidth links and web services, AI can be integrated into Internet of Things (IoT) products and smartphone apps. To date, most attention has focused on vision-based AI, partly because it is easy to visualize in news reports and videos, and partly because it is such a human-like activity.
For image recognition, a 2D image is analyzed a square group of pixels at a time, with successive layers of the NN recognizing ever larger features. At the beginning, edges with sharp contrast differences are detected. In a face, these occur around features such as the eyes, nose, and mouth. As the detection process progresses deeper into the network, whole facial features are detected. In the final stage, the combination of features and their positions is matched against the available dataset to identify the most likely face.
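The sketch below illustrates this layered structure in PyTorch (the framework, the TinyFaceNet name, and the layer sizes are illustrative assumptions, not from the article): early convolutions scan small pixel windows for edges, deeper layers cover larger regions of the image, and a final classifier maps the detected features and their positions to the most likely identity in the dataset.

```python
# A minimal sketch, assuming PyTorch; layer sizes and names are illustrative only.
import torch
import torch.nn as nn

class TinyFaceNet(nn.Module):
    def __init__(self, num_identities=10):
        super().__init__()
        self.features = nn.Sequential(
            # Early layers: small 3x3 windows respond to edges and contrast changes
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            # Middle layers: combine edges into larger parts (eyes, nose, mouth)
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            # Deeper layers: whole facial features over an even larger receptive field
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Final stage: combine the detected features and their positions into one
        # score per identity in the dataset; the highest score is the likely match.
        self.classifier = nn.Linear(64 * 8 * 8, num_identities)

    def forward(self, x):                 # x: (batch, 1, 64, 64) grayscale image
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)

model = TinyFaceNet()
logits = model(torch.randn(1, 1, 64, 64))  # one 64x64 test image
print(logits.argmax(dim=1))                # index of the most likely identity
```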