Industry Expert Blogs
Running LSTM neural networks on an Imagination NNA (Imagination Blog) - Jesus Garza, Imagination, Jan. 08, 2021
Speech recognition, which enables computers to translate spoken language into text, has become increasingly relevant in recent years. It powers applications such as translation and closed captioning. One example of this technology is Mozilla's DeepSpeech, an open-source speech-to-text engine that uses a model trained with machine learning techniques based on Baidu's Deep Speech research paper. This post provides an overview of how we run version 0.5.1 of this model by accelerating a static LSTM network on the Imagination neural network accelerator (NNA), with the goal of prototyping a voice assistant for an automotive use case.
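To illustrate what "static" means here: a statically unrolled LSTM processes a fixed number of time steps with no data-dependent control flow, which is what makes it amenable to offload onto fixed-function accelerator hardware. The sketch below is purely illustrative NumPy, not the DeepSpeech or Imagination API; the function names (`lstm_step`, `run_static_lstm`) and all dimensions are assumptions for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step: gate pre-activations from input x and previous state h."""
    z = W @ x + U @ h + b          # stacked pre-activations for gates i, f, g, o
    n = h.shape[0]
    i = sigmoid(z[0:n])            # input gate
    f = sigmoid(z[n:2 * n])        # forget gate
    g = np.tanh(z[2 * n:3 * n])    # candidate cell update
    o = sigmoid(z[3 * n:4 * n])    # output gate
    c_new = f * c + i * g          # new cell state
    h_new = o * np.tanh(c_new)     # new hidden state
    return h_new, c_new

def run_static_lstm(xs, W, U, b, hidden):
    """Statically unrolled LSTM: fixed sequence length, no dynamic control flow."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in xs:                   # fixed trip count known at compile time
        h, c = lstm_step(x, h, c, W, U, b)
    return h

# Toy dimensions, illustrative only
rng = np.random.default_rng(0)
hidden, features, timesteps = 4, 3, 5
W = rng.standard_normal((4 * hidden, features)) * 0.1
U = rng.standard_normal((4 * hidden, hidden)) * 0.1
b = np.zeros(4 * hidden)
xs = rng.standard_normal((timesteps, features))
h_final = run_static_lstm(xs, W, U, b, hidden)
print(h_final.shape)
```

Because every loop bound and tensor shape is fixed ahead of time, a compiler for an NNA can map the whole unrolled sequence to the accelerator without runtime branching.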
Related Blogs
- Digitizing Data Using Optical Character Recognition (OCR)
- The design of the NoC is key to the success of large, high-performance compute SoCs
- Self-Compressing Neural Networks
- Mitigating Side-Channel Attacks In Post Quantum Cryptography (PQC) With Secure-IC Solutions
- Reduced Operation Set Computing (ROSC) for Flexible, Future-Proof, High-Performance Inference