News Highlights:
www.arm.com, Sept. 10, 2025 –
AI is no longer a feature; it’s the foundation of next-generation mobile and consumer technology. Users now expect real-time assistance, seamless communication, and personalized content that is instant, private, and available on device, without compromise. Meeting these expectations requires more than incremental upgrades; it demands a step change that brings performance, privacy, and efficiency together in a scalable way.
That’s why we’re introducing Arm Lumex, our most advanced compute subsystem (CSS) platform, purpose-built to accelerate AI experiences on flagship smartphones and next-gen PCs.
Lumex unites our highest-performing CPUs with Scalable Matrix Extension version 2 (SME2), GPUs, and system IP, enabling the ecosystem to bring AI devices to market faster and deliver experiences from desktop-class mobile gaming to real-time translation, smarter assistants, and personalized applications.
We are enabling SME2 across every CPU platform, and by 2030, SME and SME2 will add over 10 billion TOPS of compute across more than 3 billion devices, delivering an exponential leap in on-device AI capability.
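As a back-of-the-envelope reading of that projection, dividing the aggregate figure by the device count gives the implied average per device. This is a sanity check on the article's own numbers, not a figure Arm states:

```python
# Quick sanity check on the stated 2030 projection:
# over 10 billion TOPS spread across more than 3 billion devices.
total_tops = 10e9   # aggregate compute in TOPS (from the article)
devices = 3e9       # device count (from the article)

avg_tops_per_device = total_tops / devices
print(f"~{avg_tops_per_device:.1f} TOPS per device on average")
```

Roughly 3.3 TOPS per device on average, with flagship parts well above and entry-level parts below that line.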
Partners can choose exactly how they build Lumex into their SoC – they can take the platform as delivered and leverage cutting-edge physical implementations tailored to their needs, reaping time-to-market and time-to-performance benefits. Alternatively, partners can configure the platform RTL for their targeted tiers and harden the cores themselves.
Lumex and our simplified naming conventions across the Arm portfolio were announced earlier this year.
The platform combines:
The SME2-enabled Arm C1 CPU cluster provides dramatic AI performance gains for real-world, AI-driven tasks:
This leap in CPU AI compute enables real-time, on-device AI inference capabilities, providing users with smoother, faster experiences across interactions like audio generation, computer vision, and contextual assistants.
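The matrix workloads behind these experiences are what SME2 is built for: accumulating outer products into a tile register rather than looping over scalar multiplies. A minimal NumPy sketch of that computation pattern follows; it illustrates the outer-product-accumulate idea only, and is not actual SME2 intrinsics code (those are reached through compilers and optimized libraries):

```python
import numpy as np

# Sketch of the outer-product-accumulate pattern used by SME-style
# matrix engines: C = A @ B built up as a sum of rank-1 updates into
# an accumulator "tile". Shapes here are illustrative, not hardware
# tile sizes.
def matmul_outer_product(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    M, K = A.shape
    K2, N = B.shape
    assert K == K2, "inner dimensions must match"
    ZA = np.zeros((M, N), dtype=A.dtype)    # accumulator tile
    for k in range(K):
        ZA += np.outer(A[:, k], B[k, :])    # one outer-product update
    return ZA

A = np.arange(6, dtype=np.float64).reshape(2, 3)
B = np.arange(12, dtype=np.float64).reshape(3, 4)
assert np.allclose(matmul_outer_product(A, B), A @ B)
```

Each loop iteration is one rank-1 update; hardware performs these updates across a whole tile per instruction, which is where the throughput gain over conventional SIMD comes from.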
So what does this mean in real-world use cases? SME2 can deliver a whole new level of responsiveness and efficiency. For example, our Smart Yoga Tutor demo app saw a 2.4x boost in text-to-speech, meaning users get instant feedback on their poses, all without draining battery life. Together with Alipay and vivo, we achieved a 40% reduction in LLM response time for user interactions, proving SME2 is delivering faster real-time generative AI on-device.
SME2 isn’t just about speed; it also unlocks AI-powered capabilities that traditional CPUs can’t match. For example, neural camera denoising now runs at over 120fps at 1080p or 30fps at 4K, all on a single core. That enables smartphone users to capture sharper, crystal-clear images even in the darkest scenes, allowing for smoother interactions and richer experiences on everyday devices.
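The two quoted denoising rates are consistent with each other: 4K has exactly four times the pixels of 1080p, so 120fps at 1080p and 30fps at 4K describe the same pixel throughput. A short check of that arithmetic (the pixel dimensions are the standard ones for these resolutions, not stated in the article):

```python
# The article quotes denoising at 120fps in 1080p or 30fps in 4K on a
# single core. Both figures imply the same per-second pixel throughput,
# since 4K (3840x2160) has 4x the pixels of 1080p (1920x1080).
px_1080p = 1920 * 1080
px_4k = 3840 * 2160

throughput_1080p = px_1080p * 120   # pixels processed per second
throughput_4k = px_4k * 30

assert throughput_1080p == throughput_4k
print(f"{throughput_1080p / 1e6:.1f} Mpx/s at either setting")
```

Either way, that works out to roughly 249 megapixels per second of denoising on one core.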
Unlike cloud-first AI, which is constrained by latency, cost, and privacy concerns, Lumex brings intelligence directly to the device, where it’s faster, safer, and always available. SME2 is being embraced by leading ecosystem players including Alibaba, Alipay, Samsung LSI, Tencent, and vivo.
Lumex offers partners the freedom to balance peak performance, sustained efficiency, and silicon area in products ranging from high-end smartphones and PCs to emerging AI-first form factors: