
From Theory to the Field: Why Side-Channel Protection Defines Post-Quantum Security

By PQSecure Technologies
September 15, 2025

Introduction

Around the world, governments, industries, and technology providers are preparing for the migration to post-quantum cryptography (PQC). Standards bodies such as NIST in the United States and ENISA in the European Union are defining the algorithms and guidance that will underpin future digital security. These standards cover both classical algorithms (AES-256, SHA-2/3, ECDH, ECDSA, RSA-3072/4096) and newly approved PQC algorithms such as ML-KEM (Kyber), ML-DSA (Dilithium), LMS, and XMSS. Together, they form the foundation for secure communications, authentication, and hardware protection in the coming decades.

The migration is already on a timeline: new systems are expected to begin adopting CNSA 2.0 algorithms by 2027, legacy systems must phase out older cryptography by 2030–2031, and the transition to full quantum-resistant cryptography should be complete by 2035. Similar planning is taking place across Europe under ENISA’s recommendations for quantum-safe migration, reflecting a shared global recognition that preparation cannot wait. This schedule highlights both urgency and inevitability—yet it also reveals a critical gap. Mathematical security is only one piece of the puzzle. Without strong side-channel protections, PQC systems may be compromised long before quantum computers arrive.

When Software Defenses Aren’t Enough

At first, many developers attempted to secure PQC algorithms entirely in software—writing constant-time code, applying masking at critical points, adding noise, shuffling operations—hoping theory and best practices would suffice. But the defenses began to crack under real-world scrutiny.

Constant-time coding, while conceptually appealing, can be undermined by compiler optimizations that reorder or replace code with faster but leaky variants. Masking, a long-standing countermeasure, falters in PQC systems when values switch between Boolean and arithmetic domains. Polynomial arithmetic in ML-KEM and rejection sampling in ML-DSA, when masked, still leak during domain conversions—sometimes allowing adversaries to recover secrets with only thousands of traces.
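
To make the domain-crossing hazard concrete, here is a minimal Python sketch, assuming ML-KEM's modulus q = 3329 and first-order (two-share) masking, with everything else simplified: a coefficient can be shared arithmetically or Boolean-wise, but a naive conversion between the two recombines the shares, materializing exactly the unmasked intermediate a power probe can see.

```python
import secrets

Q = 3329  # ML-KEM (Kyber) modulus

def arith_mask(x):
    """Split x into two arithmetic shares: x = (s0 + s1) mod Q."""
    s0 = secrets.randbelow(Q)
    return s0, (x - s0) % Q

def bool_mask(x):
    """Split x into two Boolean shares: x = s0 ^ s1."""
    s0 = secrets.randbelow(1 << 12)  # 12 bits cover 0..Q-1
    return s0, x ^ s0

def naive_a2b(s0, s1):
    """Naive arithmetic-to-Boolean conversion (illustrative, NOT secure).

    Recombining the shares exposes the secret coefficient as a single
    intermediate value -- the domain-conversion leakage point described
    above. Secure conversions never materialize x in one register.
    """
    x = (s0 + s1) % Q  # <-- unmasked intermediate: this is what leaks
    return bool_mask(x)

coeff = 1234
a0, a1 = arith_mask(coeff)
b0, b1 = naive_a2b(a0, a1)
assert (a0 + a1) % Q == coeff and b0 ^ b1 == coeff
```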

Noise injection and shuffling attempt to obscure the signal, but advanced correlation power analysis (CPA), bolstered by machine-learning classifiers, can often isolate the underlying leakage anyway. Attacks once thought to require hundreds of thousands of traces now succeed under these protections with far fewer. Microarchitectural leakage compounds the problem: software runs on real processors with caches, branch predictors, and speculative execution—none of which were designed with side-channel security in mind. Even “constant-time” code can reveal secrets through its timing or cache footprint.
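
The mechanics of CPA itself are compact enough to sketch. The simulated toy below (synthetic traces, a random byte permutation standing in for a real S-box, illustrative noise levels) correlates a Hamming-weight prediction against measured leakage for every key hypothesis; the correct hypothesis produces the strongest correlation, and an ML classifier plays the same role with far fewer traces.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a nonlinear S-box (a fixed random byte permutation);
# real attacks target e.g. the AES S-box output in round 1.
SBOX = rng.permutation(256)
HW = np.array([bin(v).count("1") for v in range(256)])  # Hamming weights

def simulate_traces(key_byte, n=2000, noise=2.0):
    """Each 'trace' is one sample: HW of the S-box output plus noise."""
    pts = rng.integers(0, 256, n)
    leakage = HW[SBOX[pts ^ key_byte]] + rng.normal(0, noise, n)
    return pts, leakage

pts, traces = simulate_traces(key_byte=0x3C)

# CPA: correlate the measured leakage against the Hamming-weight
# prediction under every possible key byte; the true key wins.
scores = [np.corrcoef(HW[SBOX[pts ^ k]], traces)[0, 1] for k in range(256)]
print(f"recovered key byte: {int(np.argmax(scores)):#04x}")  # expect 0x3c
```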

And the performance costs of these defenses are steep. A heavily masked ML-DSA signing operation may run 10× to 50× slower. In latency-sensitive systems—satellites, radios, command links—this is untenable. Worse, longer execution times give attackers more data to capture for side-channel analysis. The real lesson: software defenses are fragile and expensive, and they cannot be the final line of defense.

Lessons from Classical Cryptography That Do Not Directly Transfer to PQC

Side-channel protection is not a new challenge. In classical cryptography, attacks on RSA, ECC, and AES forced designers to adopt practical countermeasures: blinding for RSA, unified formulas for ECC, masking and shuffling for AES. These methods, while not perfect, were often sufficient to raise the bar for attackers.

But migrating to PQC requires more than simply reusing these techniques. The algorithms are structurally different, and the leakage landscape has changed.

  • Algorithmic complexity: PQC schemes such as ML-KEM and ML-DSA rely on polynomial arithmetic, number theoretic transforms, and rejection sampling. These operations introduce new and less understood leakage points compared to modular exponentiation or elliptic curve multiplication.
  • Masking limitations: While Boolean or arithmetic masking worked well for AES and ECC, PQC often requires both simultaneously. Switching between domains is itself a source of leakage not present in classical cryptosystems.
  • FO transform vulnerabilities: KEMs that use the Fujisaki–Okamoto transform allow adversaries to submit chosen ciphertexts, probing side-channel responses in ways not possible against classical schemes.
  • Randomness sensitivity: Classical algorithms like ECDSA were broken when nonces were weak. PQC signatures such as ML-DSA are even more fragile: leakage of partial randomness during signing may enable key recovery with surprisingly few traces.
  • Hardware scaling: In AES or ECC, countermeasures scaled across process nodes. In PQC, leakage patterns shift more dramatically from 65 nm to 7 nm, demanding revalidation at every node.

The key point: the migration to PQC cannot rely on the classical cryptography playbook. Protecting PQC requires rethinking leakage models, countermeasure design, and validation practices from the ground up.
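
The Fujisaki–Okamoto point above is worth making concrete. In FO-based decapsulation, the decrypted message is re-encrypted and compared against the submitted ciphertext; an early-exit comparison at that step hands a chosen-ciphertext attacker a timing or power oracle (real attacks also target the decryption of the crafted ciphertexts themselves). The Python sketch below isolates just the comparison, contrasting a leaky compare with the constant-time stdlib idiom.

```python
import hmac

def leaky_equal(a: bytes, b: bytes) -> bool:
    """Byte-wise early-exit compare: execution time reveals the index
    of the first mismatch, which chosen ciphertexts can probe."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False  # early exit -> timing leak
    return True

def ct_equal(a: bytes, b: bytes) -> bool:
    """Constant-time compare; hmac.compare_digest is the stdlib idiom."""
    return hmac.compare_digest(a, b)

ct_resubmitted = b"attacker-chosen ciphertext.."
ct_recomputed  = b"attacker-chosen ciphertext!!"
assert not leaky_equal(ct_resubmitted, ct_recomputed)
assert not ct_equal(ct_resubmitted, ct_recomputed)
```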

AI-Assisted Side-Channel Attacks: A New Frontier

Researchers have recently demonstrated that the fragility of software defenses is worsened by AI-assisted side-channel attacks. A notable case is the attack described in ePrint 2023/1587, "A Single-Trace Message Recovery Attack on a Masked and Shuffled Implementation of CRYSTALS-Kyber" (ML-KEM). In this work, even a masked and shuffled implementation of ML-KEM running on an embedded device leaked enough information in a single power trace to allow recovery of the message (plaintext).

The attack worked by profiling hardware leakage with deep neural networks trained on known inputs and keys, and then applying this trained model to recover secrets from one observed trace. This is a striking example of how machine learning can collapse what used to require thousands of traces into one, rendering traditional protections ineffective.
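
The flow is easy to sketch end to end. The toy below uses synthetic traces and a small scikit-learn MLP standing in for the paper's deep networks; all names, sizes, and noise levels are illustrative, and classifying a byte's Hamming weight stands in for the paper's per-bit message recovery. Profiling happens on a device the attacker controls; the attack phase then needs only one victim trace.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
HW = np.array([bin(v).count("1") for v in range(256)])

def traces_for(byte_vals, samples=50, noise=1.0):
    """Simulated power traces: one sample point leaks HW(secret byte)."""
    t = rng.normal(0, noise, (len(byte_vals), samples))
    t[:, 20] += HW[byte_vals]  # leakage at sample index 20
    return t

# Profiling phase: the attacker knows the secrets on a clone device.
y_prof = rng.integers(0, 256, 20000)
model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=50)
model.fit(traces_for(y_prof), HW[y_prof])  # learn the leakage model

# Attack phase: a single trace from the victim device.
secret = 0xA7
victim_trace = traces_for(np.array([secret]))
print("predicted HW:", model.predict(victim_trace)[0], "true HW:", HW[secret])
```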

The success of such attacks underscores that even carefully designed software countermeasures—masking and shuffling included—can be bypassed by adversaries equipped with modern AI classifiers.

Why AI-Assisted Side-Channel Attacks Are Not Straightforward in Hardware

While AI attacks are devastating against fragile software implementations, they are not as straightforward in hardware. Hardware leakage is inherently noisier and less consistent, originating from parallel datapaths, deep pipelines, and simultaneous switching of signals. The same operation can produce highly variable traces due to routing, jitter, or process variation, making it far more difficult for neural networks to generalize across datasets.
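
A toy numpy simulation (illustrative numbers only) shows the effect: as more independent datapath lanes switch in the same clock cycle, the correlation between any single target value and the total power collapses, which is exactly what starves a profiling model of usable signal.

```python
import numpy as np

rng = np.random.default_rng(2)
HW = np.array([bin(v).count("1") for v in range(256)])

def target_correlation(lanes, n=50000):
    """Correlation between the target's Hamming weight and total power
    when `lanes` other independent 8-bit values switch in parallel."""
    target = rng.integers(0, 256, n)
    power = HW[target].astype(float)
    for _ in range(lanes):
        power += HW[rng.integers(0, 256, n)]  # algorithmic noise
    power += rng.normal(0, 1.0, n)            # measurement noise
    return np.corrcoef(HW[target], power)[0, 1]

for lanes in (0, 4, 16, 64):
    print(f"{lanes:3d} parallel lanes -> correlation {target_correlation(lanes):.3f}")
```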

Hardware countermeasures are also structurally different. Masked multipliers, domain-oblivious NTT units, randomized coefficient shuffling, and balanced logic change the leakage model itself, rather than just adding noise. AI excels at extracting correlations, but when the underlying signal is fundamentally randomized or flattened by design, the advantage diminishes.

Data availability is another constraint. In software environments, attackers can often run operations repeatedly to collect thousands of traces. In deployed hardware systems—HSMs, secure enclaves, or satellites—opportunities may be limited to a handful of traces, often under adversarial conditions. Training effective AI models in this environment is significantly harder.

Finally, hardware designers now test against AI as part of validation. Companies like PQSecure integrate machine-learning-based distinguishers alongside classical CPA and TVLA tests, ensuring that their IP resists not only statistical attacks but also neural-network-assisted ones.
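
The statistical core of TVLA is compact: collect traces for a fixed input and for random inputs, then run a Welch t-test at every sample point, flagging anything with |t| > 4.5. The sketch below shows that shape on simulated traces with an injected residual leak (the leak size and indices are illustrative; production campaigns run this on millions of measured traces).

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)

# Fixed-vs-random TVLA: one trace set with a fixed input, one with
# random inputs. Simulate a device with a small residual leak.
n, samples, leak = 10000, 100, 0.08
fixed = rng.normal(0, 1, (n, samples))
fixed[:, 37] += leak                       # residual leak at sample 37
random_set = rng.normal(0, 1, (n, samples))

t, _ = ttest_ind(fixed, random_set, axis=0, equal_var=False)  # Welch
# TVLA convention: |t| > 4.5 at any sample flags first-order leakage.
print("flagged sample indices:", np.nonzero(np.abs(t) > 4.5)[0])
```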

For these reasons, while AI-assisted side-channel attacks are a breakthrough in attacking software implementations, replicating the same success against hardened hardware IP is significantly more difficult.

Large Language Models and PQC Hardware IP

Large Language Models (LLMs) are also beginning to play a role in cryptographic hardware engineering, but their capabilities are limited. At present, LLMs are not capable of independently developing fully secure IP cores for PQC algorithms or classical cryptography. Designing secure hardware requires expertise in mathematics, side-channel resistance, fault-injection resilience, and formal verification—areas where LLMs cannot yet guarantee correctness or compliance.

That said, LLMs can be helpful for smaller and more controlled tasks. They can assist in generating small cryptographic building blocks such as modular arithmetic functions, writing testbenches, or producing documentation and compliance templates. In these cases, LLMs act as productivity tools rather than autonomous designers, and their outputs must always be validated by domain experts.
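
A modular-reduction helper illustrates the scale of task that fits. The sketch below is the kind of block an LLM might plausibly draft: a Barrett reduction for ML-KEM's q = 3329, written here in Python in the style of the reference implementation's constants, paired with the exhaustive check an expert reviewer should still demand before trusting any generated building block.

```python
Q = 3329                          # ML-KEM modulus
V = ((1 << 26) + Q // 2) // Q     # Barrett constant (20159)

def barrett_reduce(a: int) -> int:
    """Reduce a 16-bit signed integer a to its representative mod Q
    in [0, Q), using a Barrett-style multiply-and-shift."""
    t = (V * a + (1 << 25)) >> 26  # t ~ round(a / Q)
    r = a - t * Q                  # centered remainder
    return r + Q if r < 0 else r

# Exhaustive testbench over the full input range: cheap insurance, and
# exactly the validation step that must stay with a human reviewer.
assert all(barrett_reduce(a) == a % Q for a in range(-(1 << 15), 1 << 15))
```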

In the future, as LLMs become more specialized and integrated with formal verification frameworks, they may play a larger role in PQC hardware design. For now, however, they should be treated as assistants rather than engineers, especially when building PQC IP cores where side-channel security and correctness are mission-critical.

Why Hardware Protections Are Still Challenging

Securing implementations in hardware (ASIC or FPGA IP) amplifies resistance, but it comes with its own engineering hurdles:

  • Masking ML-KEM polynomial arithmetic and ML-DSA rejection sampling requires hybrid Boolean/arithmetic masking, and converting between domains remains leakage-vulnerable.
  • Silicon leaks through physical phenomena: glitches, EM coupling, and routing imbalance can circumvent theoretical protections.
  • Formal verification may confirm constant-time behavior, but empirical leakage still demands extensive lab testing (TVLA, CPA/EMA).
  • Advanced process nodes change leakage patterns, so defenses must be re-tuned per node (65 nm → 5 nm).
  • AI-augmented adversaries remain a looming threat, requiring designers to stay ahead of evolving attack models.

How Many Traces Are Enough? Why Systems Must Be Conservative

Practical attacks teach a stark lesson. On FPGA implementations of ML-DSA, secret key recovery succeeded with ~55,000 traces, even with shuffling. On unprotected AES, successful key extraction needed only a few hundred traces. The AI-driven ML-KEM attack succeeded with one trace.

For long-term security, the bar must be far higher. The expectation should be resistance against millions of traces—even AI-augmented ones. Security evaluations must include TVLA, higher-order leakage testing, AI-based analysis, and DPA/EMA under realistic adversary models. A system evaluated against only 50,000 traces, or one that falls to a single AI-augmented trace, has not demonstrated security.

Side-Channel Requirements in Security Certifications

Side-channel protection is not just a best practice—it is increasingly mandated by security certifications and assurance frameworks used worldwide. For high-assurance cryptographic hardware, compliance with standards such as FIPS 140-3, ISO/IEC 19790, Common Criteria (CC), and NIAP requires demonstrable resistance to leakage attacks at higher security levels.

FIPS 140-3 defines four security levels for cryptographic modules. At Level 3, requirements include strong physical and logical protections against key extraction, including resistance to side-channel leakage. At Level 4, the highest level, modules must demonstrate resilience not only to power and EM side-channel analysis but also to environmental and fault-injection attacks. For PQC hardware, this means formal validation through Test Vector Leakage Assessment (TVLA), Differential Power Analysis (DPA) tests, and potentially AI-assisted evaluation.

ISO/IEC 19790 mirrors FIPS 140-3 at the international level, while ISO/IEC 17825 specifically defines testing methods for mitigating non-invasive attacks, including power, EM, and timing-based attacks. PQC IP targeting global markets must align with both U.S. and ISO standards to ensure interoperability and exportability.

Under Common Criteria, the AVA_VAN (Vulnerability Analysis) family defines evaluation levels that include assessment of side-channel threats. Higher Evaluation Assurance Levels (EALs) require rigorous testing against advanced attackers, including fault induction and power analysis. Products targeting AVA_VAN.5 must withstand attacks by highly skilled adversaries with significant resources—precisely the conditions expected in nation-state contexts.

NIAP, which oversees the U.S. adoption of Common Criteria, has increasingly emphasized side-channel testing in Protection Profiles (PPs) for smartcards, secure enclaves, and hardware roots of trust. PQC implementations that seek NIAP approval for government or defense deployment will be expected to meet these benchmarks, including proof of masking, hiding, and shuffling countermeasures.

In practice, this means that for systems requiring FIPS 140-3 Level 3 or 4 or CC AVA_VAN.4/5, side-channel protection is not optional—it is mandatory. The tolerance for leakage is extremely low: even a detectable first-order leak can disqualify a product. For PQC hardware IP vendors, this translates into a need for continuous validation, certified lab testing, and design practices that anticipate certification audits.

PQSecure’s Capabilities for CNSA and NIST Algorithms

PQSecure Technologies delivers U.S.-based, side-channel-hardened IP cores that cover the full range of CNSA 2.0 and NIST-approved algorithms, including ML-KEM, ML-DSA, LMS, XMSS, SLH-DSA, AES-256, SHA-2/3, and legacy RSA/ECDSA/ECDH. Each implementation is designed with integrated countermeasures such as hybrid Boolean and arithmetic masking, shuffling, and hiding techniques, validated through TVLA testing and real-world DPA/EMA attack campaigns. To ensure resilience against physical manipulation, PQSecure incorporates fault-injection protections that guard against clock, voltage, and EM glitching, while formal verification frameworks like Cryptol, SAW, and Jasmin guarantee constant-time behavior at both the code and microarchitectural level.

The IP portfolio is available in multiple performance tiers—Tiny and Compact variants for IoT and tactical edge nodes, as well as Balanced and Performance versions optimized for datacenter systems, HSMs, satellites, and command infrastructure—allowing integrators to match side-channel security with size, weight, power, and throughput requirements. Critically, all PQSecure solutions are developed and maintained within the United States, aligning with the CHIPS Act and NIST CBOM guidelines to provide trusted supply-chain assurance.

Conclusion

As the world migrates to PQC standards under CNSA 2.0, NIST, and ENISA guidance, side-channel protection has become the dividing line between theoretical and operational security.

  • Software-only protections are fragile, especially in the face of AI-assisted attacks.
  • Classical countermeasures cannot be directly transplanted into PQC without redesign.
  • Hardware protections are mandatory but technically challenging.
  • Certification frameworks such as FIPS 140-3, ISO/IEC 17825, Common Criteria AVA_VAN, and NIAP make side-channel protection a requirement, not an option.
  • Trace resilience targets must be conservative—measured in millions, not thousands.

PQSecure delivers CNSA- and NIST-approved cryptographic IP with integrated side-channel and fault-injection protections, formally verified, TVLA-tested, and AI-aware. For PQC deployments to truly deliver long-term assurance, side-channel security cannot be a feature—it must be the foundation.