ALISO VIEJO, CA -- February 23, 2016 -- BrainChip Holdings Limited (ASX: BRN), developer of a revolutionary new Spiking Neuron Adaptive Processor (SNAP) technology that has the ability to learn autonomously, evolve and associate information just like the human brain, is pleased to report that it has achieved a further significant advancement of its artificial intelligence technology.
The R&D team in Southern California has completed the development of an Autonomous Visual Feature Extraction (AVFE) system, an advancement of the recently announced Autonomous Feature Extraction (AFE) system. The AVFE system was developed and interfaced with the DAVIS artificial retina purchased from its developer, Inilabs of Switzerland. DAVIS has been developed to represent data streams in the same way as BrainChip's neural processor, SNAP.
- Capable of processing 100 million visual events per second
- Learns and identifies patterns in the image stream within seconds -- (Unsupervised Feature Learning)
- Potential applications include security cameras, collision avoidance systems in road vehicles and Unmanned Aerial Vehicles (UAVs), anomaly detection, and medical imaging
- AVFE is now commercially available
- Discussions with potential licensees for AVFE are progressing
AVFE is the process of extracting informative characteristics from an image. The system initially has no knowledge of the contents of an input stream. It learns autonomously through repetition and intensity, and begins to find patterns in the image stream. BrainChip's SNAP learns to recognize features within a few seconds, just as a human would when looking at a scene. The image stream can originate from any source, such as an image sensor like the DAVIS artificial retina, or from sources outside of human perception, such as radar or ultrasound images.
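The learning-by-repetition principle described above can be sketched in a few lines. The following is a generic Hebbian-style illustration, not BrainChip's proprietary SNAP implementation: a synthetic input pattern that recurs in the stream accumulates synaptic weight until a neuron's firing threshold is crossed, at which point the feature counts as "learned". The function names, learning rate and threshold are all illustrative assumptions.

```python
def learn(patterns, n_inputs, lr=0.1):
    """Strengthen the weights of inputs that fire repeatedly (Hebbian-style
    sketch of unsupervised feature learning; not the actual SNAP algorithm)."""
    weights = [0.0] * n_inputs
    for spikes in patterns:          # each pattern: set of active input indices
        for i in spikes:
            weights[i] += lr         # repetition strengthens the synapse
    return weights

def recognizes(weights, spikes, threshold=2.0):
    # The neuron fires when the summed weight of the active inputs crosses
    # its threshold -- i.e. the feature has been seen often enough to learn.
    return sum(weights[i] for i in spikes) >= threshold

# The same feature (inputs 1, 3, 4) recurs in a stream with one-off noise.
stream = [{1, 3, 4}] * 5 + [{0, 2}] + [{1, 3, 4}] * 5
w = learn(stream, n_inputs=5)
seen = recognizes(w, {1, 3, 4})      # repeated pattern: learned
unseen = recognizes(w, {0, 2})       # one-off pattern: not learned
```

No labels or predefined templates are involved; the repeated pattern is discovered purely from its frequency in the stream, which is the sense in which the learning is unsupervised.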
In traditional systems, a computer program loads a single frame from a video camera and searches that frame for identifying features, predefined by a programmer. Each section of the image is compared to a template until a match is found and a percentage of the match is returned, along with its location. This is a cumbersome operation.
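The frame-based approach described above can be sketched as a brute-force template search. This is a minimal illustration of the traditional method being contrasted, not any particular product's code; the frame size, template and similarity measure are assumptions for the example.

```python
def match_template(frame, template):
    """Slide a programmer-defined template over every position in the frame
    and return the best similarity (1.0 = identical) and its location --
    the cumbersome per-frame search described above."""
    fh, fw = len(frame), len(frame[0])
    th, tw = len(template), len(template[0])
    best_score, best_loc = -1.0, (0, 0)
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            # Mean absolute pixel difference, converted to a similarity in [0, 1].
            diff = sum(abs(frame[y + i][x + j] - template[i][j])
                       for i in range(th) for j in range(tw))
            score = 1.0 - diff / (th * tw * 255.0)
            if score > best_score:
                best_score, best_loc = score, (y, x)
    return best_score, best_loc

# 32x32 black frame with a bright 4x4 "object" at row 10, column 20.
frame = [[0] * 32 for _ in range(32)]
for i in range(10, 14):
    for j in range(20, 24):
        frame[i][j] = 255
template = [[255] * 4 for _ in range(4)]   # feature predefined by a programmer
score, loc = match_template(frame, template)
```

Every candidate position must be compared against the template on every frame, which is why the approach scales poorly next to an event-driven system that only processes pixels where something changed.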
An AVFE test sequence was conducted on a highway in Pasadena, California for 78.5 seconds. An average event rate of 66,100 events per second was recorded. The SNAP spiking neural network learned the features of vehicles passing by the sensor within seconds (see Figure 1). It detected and started counting cars in real time. The results of this hardware demonstration show that SNAP can process events emitted by the DAVIS camera in real time and perform unsupervised learning of temporally correlated features.
AVFE can be configured for a large number of uses including surveillance and security cameras, collision avoidance systems in road vehicles and Unmanned Aerial Vehicles (UAVs), anomaly detection, medical imaging, audio processing and many other applications.
Peter van der Made, CEO and Inventor of the SNAP neural processor said, "We are very excited about this significant advancement. It shows that BrainChip's neural processor SNAP acquires information and learns without human supervision from visual input. The AVFE is remarkable, capable of high-speed visual perception and learning that has widespread commercial applicability."
About BrainChip's Autonomous Visual Feature Extraction System
In the present AVFE method, the AFE network is connected to an image sensor. The image sensor in use is a DAVIS artificial retina from Inilabs in Switzerland, developed by Dr. Tobi Delbruck. Inilabs makes an assortment of sensors that are a perfect match for the BrainChip SNAP technology; these sensors output precisely timed spikes rather than conventional frame data.
The DAVIS Dynamic Vision Sensor is an artificial retina that has an AER (Address Event Representation) interface, the same interface that is used in the SNAP Technology. The AER bus has become an industry standard. Rather than outputting frames of video, each pixel outputs one or more spikes whenever the contrast changes. A contrast change can be caused by movement. Spike events are transmitted over the AER bus at a maximum rate of 50 million events per second. The BrainChip SNAP technology can process 100 million events per second.
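The event-based output described above can be illustrated with a small sketch. The exact AER packet layout varies by device and is not specified here, so the field set below (a microsecond timestamp, pixel address and ON/OFF polarity, which is typical of dynamic vision sensors) is an assumption for illustration, as is the helper function name.

```python
from collections import namedtuple

# Illustrative AER-style event record: real device packet layouts differ;
# a timestamp, pixel address and contrast polarity are the typical fields.
Event = namedtuple("Event", "t_us x y polarity")

def events_per_second(events):
    """Average event rate over the span of a (time-ordered) event stream."""
    span_us = events[-1].t_us - events[0].t_us
    return len(events) / (span_us / 1e6) if span_us else float("inf")

# A pixel emits a spike only when its local contrast changes (e.g. an edge
# moving across it); a static scene produces no events at all.
stream = [Event(0, 5, 7, +1), Event(120, 6, 7, +1),
          Event(250, 7, 7, +1), Event(1_000_000, 5, 7, -1)]
rate = events_per_second(stream)   # 4 events spread over 1 second
```

Because only changing pixels emit events, the bandwidth scales with scene activity rather than with frame rate and resolution, which is what lets the sensor's 50 million events per second stay comfortably within SNAP's 100 million events per second processing capacity.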
The AVFE network autonomously learns to identify objects moving through its field of vision. A second, labelling neural network is trained to count these objects. A video of the Visual Feature Extraction will accompany the Milestone 3 release, which the company anticipates making during the 1st quarter of 2016.
Peter van der Made continued: "The Autonomous Visual Feature Extraction system enables BrainChip to expand its commercial efforts, including the Fortune 500 companies we have been in communications with as well as a number of small and medium size enterprises. For that purpose, BrainChip is broadening its marketing base by forming commercial ties with a number of companies, such as Applied Brain Research. These will assist BrainChip in making the transition from ground-breaking technological invention to commercial enterprise."
About BrainChip Holdings Limited
BrainChip Holdings Limited (ASX: BRN), located in Aliso Viejo, CA, has developed a revolutionary new Spiking Neuron Adaptive Processor (SNAP) technology that has the ability to learn autonomously, evolve and associate information just like the human brain. The technology is fast, completely digital, and consumes very little power, making it feasible to integrate large networks into smartphones and devices, something that has never been possible before. Additional information is available by visiting www.brainchipinc.com.