Inside A Car's Digital Brain: MCUs, the Engine Powering SDV Innovation

Wael Fakhreldin, End-Market Director of Automotive Processing - GlobalFoundries
May 22, 2025

Software-defined vehicles (SDVs) have taken center stage as part of the automotive industry's digital revolution. Think of SDVs as smartphones on wheels. At the center of it all, though not so visible, are microcontroller units (MCUs), the "digital brains" of today's cars. These tiny but powerful chips, ranging from the size of a fingernail to a grain of rice, control everything from essential systems like brakes to ambient lighting.

So, to break it all down, we turned to Wael Fakhreldin, GF's Director of Automotive Processing, to discuss the trends shaping the next generation of vehicle architectures, the evolving demands on MCUs, and how GlobalFoundries is co-innovating with its automotive customers to push the boundaries of what's possible.

Let’s start big picture — what are some of the biggest automotive challenges you’re seeing that are pushing the need for new architecture?

Consumers want their in-vehicle experience to be as seamless as using the smartphone in their hands, and automakers are on a mission to make driving as safe as possible. But what does that mean? In traditional vehicle architectures, each new feature required its own hardware, creating significant complexity and cost for OEMs, who had to integrate many separate control units. The push for smarter, more connected vehicles demands a new architecture, one that lets drivers access new features instantly, without costly hardware upgrades.

These pressures have accelerated the shift toward SDVs, which can rapidly deliver new features to keep pace with consumer expectations and market demand without requiring physical updates. This approach relies on versatile, scalable, high-performance compute platforms that enable seamless over-the-air updates, and those platforms are key to overcoming these challenges.

What do you see as the most promising vehicle architectures today, and how do you think they’ll evolve over the next few years?

SDV zonal architecture is raising the standard for efficiency and performance, and it's doing so in multiple ways. For instance, it is: 1) consolidating control and processing features based on their physical location in the vehicle, rather than their function, and 2) significantly reducing wiring complexity and overall vehicle harness weight. In recent years, cross-domain zonal architectures have gained traction because they align well with the physical layout of modern vehicles. Each zonal controller manages functions from different vehicle domains, from body and comfort to chassis control or gateways, and the zonal controllers are connected over a high-speed Ethernet backbone of up to 10 Gbps, which acts as the vehicle's spine.
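
To make the "consolidate by location, not by function" idea concrete, here is a minimal, purely illustrative C sketch of a zonal routing table. The zone names, domains and functions are assumptions invented for this example, not any GF or OEM interface; a production zonal controller would use standardized service-oriented middleware rather than a hard-coded table.

/*
 * Hypothetical sketch of a zonal routing table. The zones, domains and
 * function names below are invented for illustration only.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef enum { ZONE_FRONT_LEFT, ZONE_FRONT_RIGHT, ZONE_REAR } zone_id_t;
typedef enum { DOMAIN_BODY, DOMAIN_COMFORT, DOMAIN_CHASSIS, DOMAIN_GATEWAY } domain_t;

/* One entry maps a vehicle function to the zone that physically hosts it,
 * regardless of which logical domain the function belongs to. */
typedef struct {
    const char *function;
    domain_t    domain;
    zone_id_t   zone;
} zonal_map_entry_t;

static const zonal_map_entry_t zonal_map[] = {
    { "left_headlamp",   DOMAIN_BODY,    ZONE_FRONT_LEFT  },
    { "seat_heater_fl",  DOMAIN_COMFORT, ZONE_FRONT_LEFT  },
    { "ride_height_rr",  DOMAIN_CHASSIS, ZONE_REAR        },
    { "telematics_link", DOMAIN_GATEWAY, ZONE_FRONT_RIGHT },
};

/* Route a command to whichever zone hosts the target function; in a real
 * vehicle this hop would travel over the Ethernet backbone. */
static void route_command(const char *function, uint8_t value) {
    for (size_t i = 0; i < sizeof zonal_map / sizeof zonal_map[0]; i++) {
        if (strcmp(zonal_map[i].function, function) == 0) {
            printf("backbone: send value %u to zone %d for %s\n",
                   value, zonal_map[i].zone, function);
            return;
        }
    }
    printf("backbone: no zone hosts %s\n", function);
}

int main(void) {
    route_command("seat_heater_fl", 3);  /* comfort request handled by the front-left zone */
    return 0;
}

Note that the consumer's "feature" (a seat heater) lands in whichever zone physically wires it, which is exactly what cuts harness length and weight.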

Central compute architecture is not expected to be broadly adopted until 2030, but early adoption is already underway. This approach introduces a high-performance compute cluster that orchestrates almost all vehicle functions, routing data from various sensors and components through aggregators to a central processing unit. Semiconductor innovations, such as automotive chiplets, will be critical in advancing these architectures, providing the high-performance capabilities required to support the on-demand software updates that consumers now expect.

There’s lots of talk about updating SDV architectures to drive more efficiency. What demands are zonal architectures placing on microcontroller units (MCUs) to deliver from a computing standpoint?

SDVs are only as smart as the architecture behind them. Simply put, they cannot reach their full potential without zonal controller units (ZCUs). These controllers consolidate vehicle functions by physical cluster rather than by feature. This shift is putting a big spotlight on MCUs and pushing the boundaries of what they need to deliver: faster processing, more connected devices, and support for emerging features like AI at the edge.

  • More Compute: MCUs are being pushed to handle far more real-time computing. To keep up, chipmakers are integrating arrays of heterogeneous logic cores running at higher operating frequencies, moving to more advanced process nodes, and adopting core virtualization concepts that allow MCUs to manage multiple tasks flexibly and efficiently.
  • Input/Output Overload: Modern vehicles contain over 90 smart sensors and 800 sensors and loads, requiring zonal MCUs to offer denser digital and analog I/O to handle this level of data processing.
  • Memory That Can Keep Up: Embedded non-volatile memory (eNVM) has to get both bigger and faster: bigger, up to 32MB and beyond, and faster, to feed fast-switching digital cores without adding latency or risking bottlenecks.
  • Faster Communication: The volume of data in cars continues to surge, and MCUs need to move that data fast. Technologies like high-speed Ethernet and Serializer-Deserializer (SerDes) interfaces are becoming standard to guarantee reliable communication between zones, and between zones and complex vehicle sensors (a simple sketch of this sensor-to-backbone data flow follows this list).
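
As a rough illustration of the I/O and communication points above, the following C sketch shows a zonal MCU polling a bank of sensor channels and packing the samples into a single frame destined for the backbone. The channel count, frame layout and read_adc() stub are assumptions made up for this example; a real zonal controller would read memory-mapped peripheral registers and hand the frame to an Ethernet or SerDes driver.

/*
 * Illustrative-only sketch: channel count, frame layout and the read_adc()
 * stub are invented for this example, not any real MCU's memory map.
 */
#include <stdint.h>
#include <string.h>

#define NUM_ANALOG_CH  16                               /* assumed analog inputs on this zone */
#define FRAME_PAYLOAD  (NUM_ANALOG_CH * sizeof(uint16_t))

typedef struct {
    uint8_t  zone_id;                 /* which zonal controller produced the frame  */
    uint8_t  seq;                     /* rolling sequence number for loss detection */
    uint16_t payload_len;
    uint8_t  payload[FRAME_PAYLOAD];  /* packed sensor readings                     */
} backbone_frame_t;

/* Stub for a hardware ADC read; on a real zonal MCU this would access a
 * memory-mapped conversion-result register. */
static uint16_t read_adc(int channel) {
    return (uint16_t)(channel * 100);  /* placeholder value for the sketch */
}

/* Poll every analog channel once and pack the samples into one frame that
 * would then be handed to the Ethernet/SerDes driver for transmission. */
static void build_sensor_frame(backbone_frame_t *frame, uint8_t zone_id, uint8_t seq) {
    uint16_t samples[NUM_ANALOG_CH];
    for (int ch = 0; ch < NUM_ANALOG_CH; ch++) {
        samples[ch] = read_adc(ch);
    }
    frame->zone_id     = zone_id;
    frame->seq         = seq;
    frame->payload_len = (uint16_t)FRAME_PAYLOAD;
    memcpy(frame->payload, samples, FRAME_PAYLOAD);
}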

GF’s advanced chip technologies like 12LP+ MRAM and 22FDX MRAM are helping chipmakers support the complex processing of today’s cars with fast, power-efficient vehicle controllers.

As zonal compute rises, analog tasks shift to the vehicle's edges. Now, auto chipmakers are bundling these functions – motor controllers, audio amps, even communication interfaces – into single MCUs on GF's 130BCD or 55BCD technology platforms.

Speaking of the edge, AI acceleration in vehicles is skyrocketing. How is AI at the edge reshaping the requirements of automotive MCUs in SDVs?

AI at the edge is powering many newer features – battery lifetime extension, in-cabin sensing, voice recognition, intelligent motor control. This approach reduces complexity, cost and power consumption, and it also enhances user privacy by processing information locally rather than in the cloud. In cases like these, where AI acceleration does not happen in central computers or ZCUs, the end nodes themselves require embedded AI acceleration to execute these functions.

With this evolution, some MCUs are adapting by offering specialized IPs primed for AI acceleration: graphics processing units (GPUs) to run complex mathematical models, language processing units (LPUs) for language and communication models, and digital signal processors (DSPs) to transform real-time signals. For more intensive workloads, MCUs are also integrating faster, more reliable interfaces to connect with external memory, ensuring they can process the high-volume data required for advanced AI applications.
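
To show why this dedicated acceleration matters, here is a minimal, self-contained C sketch of quantized edge inference: a single int8 fully connected layer whose inner loop is pure multiply-accumulate, exactly the kind of arithmetic the specialized IP blocks mentioned above exist to offload. The layer sizes, weights and requantization shift are invented for illustration; a real deployment would use a trained, quantized model and the vendor's inference runtime.

/*
 * Minimal sketch of quantized edge inference on an MCU: one int8 fully
 * connected layer with a crude requantization step. All values are made up.
 */
#include <stdint.h>
#include <stdio.h>

#define IN_FEATURES  4    /* e.g. a few pre-processed in-cabin sensor features */
#define OUT_CLASSES  2    /* e.g. "occupied" vs "empty" seat                   */

static const int8_t weights[OUT_CLASSES][IN_FEATURES] = {
    {  12, -7,  3,  25 },
    { -11,  9, -2, -20 },
};
static const int32_t bias[OUT_CLASSES] = { 40, -15 };

/* int8 x int8 multiply-accumulate in int32, then shift back into int8 range.
 * The >> 7 requantization shift is an arbitrary choice for this sketch. */
static void fc_int8(const int8_t *in, int8_t *out) {
    for (int o = 0; o < OUT_CLASSES; o++) {
        int32_t acc = bias[o];
        for (int i = 0; i < IN_FEATURES; i++) {
            acc += (int32_t)weights[o][i] * (int32_t)in[i];
        }
        acc >>= 7;
        if (acc > 127)  acc = 127;
        if (acc < -128) acc = -128;
        out[o] = (int8_t)acc;
    }
}

int main(void) {
    const int8_t features[IN_FEATURES] = { 30, -5, 12, 50 };  /* made-up sensor features */
    int8_t scores[OUT_CLASSES];
    fc_int8(features, scores);
    printf("occupied score: %d, empty score: %d\n", scores[0], scores[1]);
    return 0;
}

Everything here stays on the end node: no frame of sensor data ever leaves the MCU, which is the privacy and latency benefit described above.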
