Jan. 22, 2026 –
Anyone exploring technological advances in artificial intelligence (AI) will inevitably encounter spiking neural networks (SNNs) — the next step toward energy‑efficient real‑time AI. The difference from conventional neural networks is striking: while standard artificial neurons continuously output values, SNN neurons fire only when critical thresholds are exceeded, sending electrical impulses (spikes) through the network. This event‑driven mode of operation saves both energy and computation time, making it a compelling option for certain use cases. Yet few companies are willing to discard their painstakingly trained deep neural networks (DNNs) and start from scratch. The pressing question, therefore, is: How can the proven knowledge embedded in DNNs be transferred into SNNs? This is precisely where current research comes in.
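The firing behavior described here can be illustrated with a toy leaky integrate-and-fire neuron, the classic SNN building block. This is a generic, illustrative sketch — all names and constants below are made up and have nothing to do with the SENNA implementation:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: an illustrative sketch of
# the event-driven principle. All names and constants are hypothetical.

def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Integrate weighted input over time; emit a spike (1) only when the
    membrane potential crosses the threshold, then reset."""
    potential = 0.0
    spikes = []
    for x in inputs:
        potential = leak * potential + x   # leaky integration
        if potential >= threshold:
            spikes.append(1)               # fire: one discrete spike
            potential = 0.0                # reset after firing
        else:
            spikes.append(0)               # stay silent -> no work done
    return spikes

# Output stays silent until the threshold is actually crossed:
print(lif_neuron([0.3, 0.3, 0.6, 0.1, 0.9]))  # → [0, 0, 1, 0, 0]
```

The point of the sketch is the `else` branch: as long as the threshold is not reached, the neuron produces no output at all, which is exactly where the energy savings of event-driven computation come from.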
Deep neural networks (DNNs) are found in almost every AI application. They can be tailored to hardware and solve tasks with precision and simplicity. Yet wherever greater energy efficiency or speed is required, spiking neural networks (SNNs) clearly have the edge. “The advantages are obvious. At the same time, we see that a chicken‑and‑egg problem is still slowing down the practical use of SNNs. Without suitable, brain‑inspired hardware they cannot run efficiently. But conversely, ready‑to‑use SNNs are needed first to unlock the potential of such hardware. That’s why we are tackling this challenge from both sides,” explains Michael Rothe, Group Manager Embedded AI at Fraunhofer IIS.
On the hardware side, there has already been a major breakthrough. Fraunhofer IIS and Fraunhofer EMFT have developed the SNN accelerator SENNA — a programmable neuromorphic chip designed to process low‑dimensional time‑series data. With 1,024 artificial neurons, it operates directly with spike‑based input and output signals and can analyze data streams within nanoseconds. The complementary software development kit allows SNN models to be deployed seamlessly onto the chip.
However, developing such SNN models calls for entirely new approaches. “From a theoretical perspective, SNNs are already quite advanced, but established methods for building them are still not available in practice. Many companies have only recently, and often painstakingly, built up AI expertise and developed robust DNN models. Our guiding question was: How can we build a reliable bridge from DNNs to SNNs, so that the speed and energy efficiency of SNNs can be harnessed without costly new investments?” explains Rothe.
The EU‑funded project MANOLO provided the ideal framework for addressing such ambitious questions. Eighteen European partners are working together to develop algorithms and tools that make AI more energy‑efficient and powerful. With respect to SNNs, this is fundamental research, since such networks are notoriously difficult to train. Traditional training approaches, in which a neural network back‑propagates its errors layer by layer and adjusts the weights, cannot be applied here without modification. This is mainly because, in SNNs, information is transmitted as individual, precisely timed spikes whose discrete nature cannot be continuously adjusted. If a DNN already exists, the most efficient way to obtain a precisely functioning SNN today is therefore to transform that DNN into an SNN.
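The intuition behind such DNN-to-SNN transformations can be sketched in a few lines using rate coding: if a DNN neuron's ReLU activation is read as a target firing rate, a simple integrate-and-fire neuron driven by that activation reproduces it. This is a generic illustration of the principle, not the project's method; all names and constants are made up.

```python
# Sketch of rate-based DNN-to-SNN conversion: the firing rate of a
# non-leaky integrate-and-fire neuron approximates the ReLU activation
# of the corresponding DNN neuron. Illustrative only.

def if_firing_rate(activation, timesteps=1000, threshold=1.0):
    """Drive an integrate-and-fire neuron with a constant input equal to
    the DNN activation; return its spike rate over the simulation."""
    potential, spikes = 0.0, 0
    for _ in range(timesteps):
        potential += activation
        if potential >= threshold:
            spikes += 1
            potential -= threshold  # reset by subtraction keeps rates accurate
    return spikes / timesteps

relu = lambda x: max(0.0, x)
for a in (-0.5, 0.25, 0.7):
    print(relu(a), round(if_firing_rate(a), 2))
```

Negative activations never fire (matching ReLU's zero output), while positive activations are recovered as spike rates — which is why a trained DNN's weights can, in principle, be carried over to a spiking network.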
There are several approaches to generating SNNs from DNNs. One method — DNN‑to‑SNN knowledge distillation — stands out for its accuracy and efficiency. The core idea is that a traditionally trained DNN serves as the teacher model, with its knowledge distilled into a student model, the SNN, via a specially developed training algorithm. This allows the performance and knowledge of a DNN to be transferred directly to the SNN. The approach requires comparatively little training effort: because the student learns from the teacher’s outputs, training a distilled network needs far less data and fewer training runs than developing a new SNN from scratch. As a result, the rollout of SNN‑based products is accelerated and development costs are reduced.
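The shape of a distillation objective can be sketched as follows. Shown is a minimal, generic version of the classic soft-target distillation loss; the project's actual training algorithm is not described in the article, and all names and hyperparameters here are illustrative.

```python
import math

# Generic knowledge-distillation loss: the student (here, the SNN) is
# trained to match the teacher DNN's softened outputs in addition to the
# ground-truth label. Temperature and weighting are illustrative.

def softmax(logits, temperature=1.0):
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, true_label,
                      temperature=2.0, alpha=0.5):
    """alpha * cross-entropy(student, hard label)
       + (1 - alpha) * KL(teacher_soft || student_soft) at temperature T."""
    s_soft = softmax(student_logits, temperature)
    t_soft = softmax(teacher_logits, temperature)
    hard = -math.log(softmax(student_logits)[true_label])            # task loss
    soft = sum(t * math.log(t / s) for t, s in zip(t_soft, s_soft))  # KL term
    return alpha * hard + (1 - alpha) * soft * temperature ** 2

# A student that already matches the teacher incurs no soft-target loss,
# leaving only the ordinary task loss:
print(distillation_loss([2.0, 0.5], [2.0, 0.5], true_label=0))
```

The softened teacher outputs carry more information per example than hard labels alone (relative class similarities, not just the winner), which is the reason distilled training gets by with less data.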
That’s not all: Hardware and algorithms must be closely aligned in spiking neural networks to ensure they work together efficiently. That’s why the MANOLO project team is focusing on implementing knowledge distillation in a hardware‑aware way. “Previous approaches were not designed to account for the parameters of the chip on which an SNN will eventually run. As a result, models may fail to fully leverage the chip’s computing power or, worse, may be incompatible with the hardware. With our hardware‑aware DNN‑to‑SNN knowledge distillation approach, we address this gap and offer a straightforward way to generate precisely tailored SNNs from existing DNNs,” explains Sebastian Karl, project manager at Fraunhofer IIS. The developers consider not only the number of neurons and synapses available on a chip, but also the wiring — that is, the physical connections between spiking neurons. The goal is to design the SNN architecture so that densely interconnected neurons are arranged as close together as possible on the chip. This enables spikes to move between neurons faster, with minimal loss and improved energy efficiency.
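The placement objective described here can be illustrated with a toy cost function: spike traffic between neuron pairs is weighted by their distance on the chip, and a hardware-aware mapping tries to minimize that sum. The sketch below is a generic illustration under assumed names — SENNA's actual mapping algorithm is not described in the article.

```python
# Toy version of the hardware-aware placement objective: total spike
# travel distance, weighted by how often each neuron pair communicates.
# Names and numbers are illustrative, not the SENNA tooling.

def wire_cost(placement, synapses):
    """placement: neuron -> chip coordinate (1-D for simplicity).
    synapses: {(i, j): traffic}. Cost = sum of traffic * distance."""
    return sum(traffic * abs(placement[i] - placement[j])
               for (i, j), traffic in synapses.items())

# Neurons 0 and 1 exchange many spikes; neuron 2 only a few.
synapses = {(0, 1): 10, (1, 2): 1}

clustered = {0: 0, 1: 1, 2: 2}   # heavily coupled pair adjacent
scattered = {0: 0, 1: 2, 2: 1}   # heavily coupled pair far apart

print(wire_cost(clustered, synapses))  # → 11
print(wire_cost(scattered, synapses))  # → 21
```

Keeping the busiest pair adjacent nearly halves the cost in this toy case — the same effect, at chip scale, that lets spikes travel faster with less energy.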
In the MANOLO project, the team is applying this approach to develop an SNN that detects anomalies in audio signals. A conventional DNN is first trained to reliably recognize unusual audio patterns. The researchers then distill this knowledge into an SNN optimized for deployment on the SENNA chip. The industrial use case behind this: A neural network that can identify faults and their causes from machine sounds — or warn of impending component failures — helps prevent unplanned downtime, cut maintenance costs, and boost system availability. The team is also working to extend this method to data streams in communication systems. There, for example, anomalies and interference can be detected and automatically compensated for, ensuring better signal quality and higher throughput.
Whatever the application, the experts at Fraunhofer IIS aim to significantly lower the barriers to adopting SNNs. To get started, companies need the right development tools. The next step is to integrate knowledge distillation as an additional functional module in the SENNA software development kit (SDK). Until now, the SDK has covered the entire process from defining an SNN to generating the final bitstream for SENNA. With hardware‑aware knowledge distillation, companies will soon be able to turn an existing DNN into a ready‑to‑deploy SNN model. This means they won’t need deep expertise in SNN architectures or chip design.