The convergence of open RISC-V architectures, real-time hypervisors, and efficient embedded AI frameworks marks a transformative moment in critical space system design.
Aug. 05, 2025 –
As modern space missions evolve in complexity, the role of software onboard spacecraft is undergoing a dramatic transformation. Spacecraft are no longer limited to basic telemetry and remote control. Today, onboard computing must support autonomous decision-making, intelligent data reduction, and rapid responses to unforeseen conditions—all while operating under strict constraints on power, size, and reliability. Artificial intelligence and machine learning (AI/ML) are the technologies driving this shift, but implementing them in the harsh, resource-constrained environment of space demands a new breed of embedded computing.
At the center of this evolution lies a fully European initiative combining open-source hardware, certified real-time software, and efficient AI frameworks. It’s a future built on RISC-V processors, safety-critical hypervisors, and edge-optimized AI engines—all integrated into a secure and flexible technology stack designed for space. This article explores how this fully European ecosystem is shaping the future of space autonomy.
The importance of AI and ML in space cannot be overstated. Whether it’s analyzing high-resolution satellite imagery in real time, enabling autonomous orbital maneuvers, or detecting faults before they escalate, intelligent onboard software is a game-changer. Traditional approaches depend heavily on ground control for data interpretation and command decisions, which introduces latency and inefficiency, especially for deep-space missions.
AI/ML shifts the paradigm. Onboard inference engines can analyze data locally, extract actionable insights, and respond instantly without waiting for instructions from Earth. This local autonomy not only improves mission agility but also significantly reduces the volume of data that needs to be transmitted, saving precious bandwidth and power.
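As a rough illustration of this triage pattern, the C sketch below shows how an onboard loop might score each sensor frame locally and downlink only the frames that clear a mission-defined threshold, logging a compact summary for the rest. The helper names (acquire_frame, run_inference, queue_for_downlink) and the scoring stubs are hypothetical and stand in for real payload I/O and a real model; they are not part of any vendor API described here.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define SCORE_THRESHOLD 0.85f          /* mission-specific relevance cutoff */
    #define N_TEST_FRAMES   5

    typedef struct { uint8_t pixels[64]; } frame_t;

    /* Stand-ins for payload I/O and the onboard model (hypothetical). */
    static bool acquire_frame(frame_t *out, int id)
    {
        if (id >= N_TEST_FRAMES) return false;
        out->pixels[0] = (uint8_t)id;      /* synthetic content for the demo */
        return true;
    }

    static float run_inference(const frame_t *f)
    {
        return f->pixels[0] / (float)(N_TEST_FRAMES - 1);  /* fake relevance score */
    }

    static void queue_for_downlink(const frame_t *f)
    {
        printf("downlink frame with marker %u\n", f->pixels[0]);
    }

    static void log_summary(float score)
    {
        printf("discard frame, score %.2f kept as compact telemetry\n", score);
    }

    int main(void)
    {
        frame_t frame;
        for (int id = 0; acquire_frame(&frame, id); id++) {
            float score = run_inference(&frame);
            if (score >= SCORE_THRESHOLD)
                queue_for_downlink(&frame);   /* only high-value data uses bandwidth */
            else
                log_summary(score);           /* everything else becomes a few bytes */
        }
        return 0;
    }

The point of the pattern is that the expensive decision happens next to the sensor, so the downlink carries insight rather than raw data.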
However, these benefits come with significant challenges. AI applications are often compute-intensive, require consistent performance, and must be robust to faults. In space, these systems must operate under radiation exposure and tight power budgets, with little or no opportunity for hands-on maintenance or updates. This requires not just advanced algorithms but also a new generation of embedded platforms.
The initiative explored here centers on a collaboration between Sysgo and Klepsydra Technologies, two European companies dedicated to delivering high-assurance embedded solutions. The core elements of their platform are Sysgo’s PikeOS real-time operating system and hypervisor, Klepsydra AI’s edge inference engine, and a family of RISC-V-based processors, including the REBECCA chip under active development.
PikeOS brings to the table a MILS (multiple independent levels of security and safety) architecture. This enables the separation of software components with different criticality levels, ensuring, for instance, that an experimental AI module cannot interfere with life-critical navigation software. Each component runs in its own secure partition, which can host a guest operating system such as embedded Linux (ELinOS) or a bare-metal real-time application. This separation provides ‘freedom from interference’, simplifies certification, and increases resilience against both cyberthreats and runtime faults.
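To make the partitioning idea concrete, the following C sketch describes a hypothetical layout. It is not the PikeOS configuration format; the structure, partition names, and budgets are illustrative assumptions meant only to show how workloads of different criticality can each be given their own memory region and share of processor time.

    #include <stdint.h>
    #include <stdio.h>

    typedef enum { CRIT_SAFETY, CRIT_MISSION, CRIT_EXPERIMENTAL } criticality_t;

    typedef struct {
        const char   *name;           /* partition identifier                   */
        criticality_t level;          /* assurance level of the hosted software */
        uint32_t      cpu_budget_pct; /* guaranteed share of processor time     */
        uint32_t      ram_kib;        /* statically assigned memory region      */
    } partition_t;

    /* Navigation stays isolated from the experimental AI workload: each entry
     * owns its memory region and time budget, so a fault in one partition
     * cannot starve or corrupt another. */
    static const partition_t layout[] = {
        { "navigation",   CRIT_SAFETY,       40,  64 * 1024 },  /* bare-metal RT app */
        { "telemetry",    CRIT_MISSION,      20,  32 * 1024 },
        { "ai_payload",   CRIT_EXPERIMENTAL, 30, 256 * 1024 },  /* ELinOS guest      */
        { "housekeeping", CRIT_MISSION,      10,  16 * 1024 },
    };

    int main(void)
    {
        for (size_t i = 0; i < sizeof layout / sizeof layout[0]; i++)
            printf("%-12s  level=%d  cpu=%u%%  ram=%u KiB\n",
                   layout[i].name, (int)layout[i].level,
                   (unsigned)layout[i].cpu_budget_pct,
                   (unsigned)layout[i].ram_kib);
        return 0;
    }

In a real separation kernel this kind of allocation is fixed at integration time, which is precisely what makes each partition analyzable and certifiable on its own.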
Klepsydra AI complements this foundation with a lean, pipeline-optimized framework for running deep learning models on constrained embedded hardware. By using a combination of data pipelining and parallel processing, Klepsydra delivers inference performance without the computational overhead typical of general-purpose libraries like TensorFlow Lite, making it viable for space-bound processors.
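The minimal C sketch below, built on standard POSIX threads and semaphores rather than Klepsydra’s actual API, illustrates the underlying pipelining idea: one stage preprocesses the next frame while another runs inference on the current one, so the processor stays busy without buffering large batches. The stage functions are trivial stand-ins for real preprocessing and model execution.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N_FRAMES 8

    static float staged[N_FRAMES];     /* hand-off slots between the two stages */
    static sem_t slot_free, slot_full;

    /* Hypothetical stage implementations. */
    static float preprocess(int frame_id) { return (float)frame_id * 0.5f; }
    static float infer(float input)       { return input * 2.0f; }

    static void *producer(void *arg)
    {
        (void)arg;
        for (int i = 0; i < N_FRAMES; i++) {
            float x = preprocess(i);
            sem_wait(&slot_free);
            staged[i] = x;              /* publish the preprocessed frame */
            sem_post(&slot_full);
        }
        return NULL;
    }

    static void *consumer(void *arg)
    {
        (void)arg;
        for (int i = 0; i < N_FRAMES; i++) {
            sem_wait(&slot_full);
            float y = infer(staged[i]); /* model runs while the next frame is prepared */
            sem_post(&slot_free);
            printf("frame %d -> %.1f\n", i, y);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t_pre, t_inf;
        sem_init(&slot_free, 0, N_FRAMES);  /* all slots initially free */
        sem_init(&slot_full, 0, 0);
        pthread_create(&t_pre, NULL, producer, NULL);
        pthread_create(&t_inf, NULL, consumer, NULL);
        pthread_join(t_pre, NULL);
        pthread_join(t_inf, NULL);
        return 0;
    }

Keeping the stages overlapped in this way raises throughput on a fixed power budget, which is the property that matters most on a radiation-tolerant, space-bound processor.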