There's no doubt that AI is driving every agenda, whether it's hardware, software, automation, or anything else. That was clear at last week's 'AI Chrysalis Chronicles' panel discussion, held at the end of the Silicon Catalyst Spring 2025 Portfolio Company Update at the Computer History Museum in Mountain View, CA, where panelists debated the energy demands posed by AI yet remained optimistic that silicon innovation would come to the rescue.
www.eetimes.com, May 19, 2025 –
But in what form will that rescue come? Well, here in Antwerp, Belgium, this week, the CEO of imec, Luc Van den hove, will tell the ITF World 2025 conference that AI’s future hinges on hardware innovation, and that innovation could very likely come in the form of programmable AI silicon. He will explain that as we head towards agentic and physical AI, hardware is going to struggle to handle the diverse workloads in a ‘performant and sustainable way’. The problem is that developing dedicated AI hardware takes significantly longer than writing algorithms, so by the time the silicon comes out, the software side may already have moved on.
Quoting from his blog, Van den hove said, “To prevent bottlenecks from slowing down next-gen AI, we must reinvent the way we do hardware innovation.”
The challenge is that while simply adding brute compute power and data has worked well for the first generation of large language models (LLMs), as generative AI transforms toward reasoning models, workloads will become increasingly heterogeneous, and a one-size-fits-all approach relying on brute-force compute may not be the right way to handle a chain of varied workloads.
“This is because agentic AI, which focuses on decision-making and is highly relevant for medical applications, and physical AI, which emphasizes embodiment and interaction with the physical world for robotics and autonomous cars, require a myriad of different models. Each model serves a specific purpose and interacts with the others, forming an AI system that can combine large language models, perception models, and action models. Some models require CPUs, some GPUs, and others currently lack the right processors.”
Van den hove highlights what everyone is saying – that AI has already become a significant energy consumer, and the demand for resources is likely to continue growing with the arrival of next-gen AI.
He said, “The root of the problem is that AI is often running on a suboptimal compute architecture, consisting of suboptimal hardware components for the specific workload the algorithms need. Adding new, challenging workloads to the mix will cause AI-related energy use to rise exponentially. Making it even more challenging is the fact that AI workloads could change overnight, instigated by a new algorithm.”
Citing the example of DeepSeek, he said algorithms move quickly, but hardware development is time-intensive, taking several years to achieve even minor improvements, all while production becomes increasingly complex and therefore more expensive.
This means that developing a specific computing chip for each model can’t keep up with the unprecedented pace of innovation in models. He also warns of stranded hardware assets, since software development clearly moves faster than hardware development. He said, “Furthermore, the laws of economics are not playing in hardware’s favor either: there is a huge inherent risk of stranded assets because by the time the AI hardware is finally ready, the fast-moving AI software community may have taken a different turn.”
And while some companies may have deep pockets to develop their own custom AI chips, those are few and far between. Hence the answer lies in innovative silicon hardware that is programmable, or as Van den hove puts it, “Silicon hardware should become almost as ‘codable’ as software is.” This means the same set of hardware components should be reconfigurable, so you would need only one computer instead of, say, three to run a sequence of algorithms. “In fact, software should define silicon, which is a very different approach to current hardware innovation, which is a fairly stiff process.”
In the blog, Van den hove paints a picture of what that could look like. He said, “Picture it: rather than one monolithic ‘state-of-the-art’ and super-expensive processor, you would get different coworking supercells consisting of stacked layers of semiconductors, each optimized for specific functionalities, and integrated in 3D so memory can be placed close to the logic processing unit, thereby limiting the energy losses of data traffic. A network-on-chip will steer and reconfigure these supercells so they can be quickly adapted to the latest algorithm requirements, smartly combining the different versatile building blocks. By splitting up the requirements over different chiplets instead of designing a monolithic chip, you can combine hardware from different suppliers.”
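To make the reconfiguration idea concrete, here is a minimal, purely illustrative Python sketch of a sequence of heterogeneous AI workloads being steered across a small pool of reconfigurable 'supercells'. The class names, workload profiles, and round-robin steering are invented for illustration only and do not correspond to any imec design.

```python
# Toy model of the "software defines silicon" idea: a pool of reconfigurable
# supercells is steered by a scheduler (standing in for the network-on-chip)
# and retargeted per workload, instead of requiring a dedicated chip per model.
# All names and profiles are invented for illustration.
from dataclasses import dataclass


@dataclass
class Workload:
    name: str
    profile: str  # e.g. "matmul-heavy", "memory-bound", "control-heavy"


class Supercell:
    """A stack of chiplets whose function can be redefined at run time."""

    def __init__(self, cell_id: int):
        self.cell_id = cell_id
        self.configuration = None

    def reconfigure(self, profile: str) -> None:
        # In real hardware this would reprogram the 3D-stacked layers;
        # here we simply record the requested configuration.
        self.configuration = profile

    def run(self, workload: Workload) -> str:
        return f"cell {self.cell_id} ({self.configuration}) ran {workload.name}"


class NetworkOnChip:
    """Steers workloads to supercells, reconfiguring them as algorithms change."""

    def __init__(self, cells: list[Supercell]):
        self.cells = cells

    def dispatch(self, workloads: list[Workload]) -> list[str]:
        results = []
        for i, wl in enumerate(workloads):
            cell = self.cells[i % len(self.cells)]   # simple round-robin steering
            if cell.configuration != wl.profile:     # only reconfigure when needed
                cell.reconfigure(wl.profile)
            results.append(cell.run(wl))
        return results


if __name__ == "__main__":
    noc = NetworkOnChip([Supercell(0), Supercell(1)])
    pipeline = [
        Workload("perception model", "matmul-heavy"),
        Workload("language model", "memory-bound"),
        Workload("action model", "control-heavy"),
    ]
    for line in noc.dispatch(pipeline):
        print(line)
```

The point of the toy is simply that one pool of cells serves all three models once it can be retargeted by software, which is the 'one computer instead of three' argument above.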
He concludes, “With this reconfigurable approach, many more companies will have the ability to design their own hardware for specific AI workloads. It will boost creativity in the market, open possibilities for differentiation, and make hardware innovation affordable again. By agreeing on a universally accepted standard, like RISC-V, software and hardware companies are getting in sync and guaranteeing both compatibility and performance.”
He adds, “AI’s future hinges on hardware innovations. And given the vast impact of AI on all societal domains, ranging from in-silico drug design to sensor fusion for robotics and autonomous driving, it is probably not exaggerated to state that our very future hinges on it.”