
Dnotitia Revolutionizes AI Storage at SC25: New VDPU Accelerator Delivers Up to 9x Performance Boost

Nov. 20, 2025

FPGA-based solution outperforms traditional CPUs in semantic search, offering drop-in compatibility with Milvus and FAISS.

St. Louis, MO – Dnotitia Inc. today unveiled the FPGA prototype of its Vector Data Processing Unit (VDPU) at Supercomputing 2025 (SC25), demonstrating a paradigm shift in AI storage performance. The company announced that a single server equipped with four VDPUs can deliver the throughput of nine dual-socket CPU servers, marking a significant leap in efficiency for semantic search and Retrieval-Augmented Generation (RAG) workflows.

Breaking Hardware Bottlenecks

As AI models increasingly rely on RAG pipelines (exemplified by Google’s recent Gemini API updates such as the File Search Tool for fully managed RAG), the bottleneck has shifted to data retrieval. Dnotitia’s VDPU addresses this by offloading intensive Vector Database (VDB) workloads from CPUs or GPUs to dedicated hardware. Internal tests show that a single VDPU card matches the vector processing power of six high-performance server CPUs, dramatically reducing operational costs.
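For context, the workload being offloaded is large-scale vector similarity search. The following minimal sketch, written against the open-source FAISS library, shows the kind of CPU-bound indexing and top-k retrieval step involved; the dimensionality, corpus size, and random data are illustrative placeholders, not Dnotitia benchmark parameters.

```python
# Illustrative sketch of a CPU-bound vector similarity search with FAISS.
# Dimensions, corpus size, and k are placeholder values, not benchmark settings.
import numpy as np
import faiss

d = 768                                                   # typical text-embedding dimensionality
corpus = np.random.rand(100_000, d).astype("float32")     # stand-in document embeddings
queries = np.random.rand(32, d).astype("float32")         # stand-in query embeddings

index = faiss.IndexFlatIP(d)   # exact inner-product search, executed entirely on the CPU
index.add(corpus)              # ingesting embeddings into the index

scores, ids = index.search(queries, 10)   # top-10 nearest neighbors per query
print(ids.shape)               # (32, 10)
```

Steps like the search call above are the compute-intensive operations that dedicated hardware such as the VDPU is designed to take over from general-purpose processors.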

Seamless Integration & Strategic Collaboration

Following the roadmap announced at SC24, Dnotitia has proven the VDPU’s capability to significantly increase throughput across AI storage architectures. A key advantage revealed at SC25 is compatibility: the VDPU supports not only Dnotitia’s own ‘Seahorse’ database but also major open-source platforms like Milvus and FAISS.
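As a rough illustration of what drop-in compatibility implies, application code already written against an open-source engine such as Milvus would not need to change if the search itself were served by VDPU-equipped infrastructure behind the database. The snippet below uses the pymilvus MilvusClient; the endpoint, collection name, and query vector are hypothetical placeholders.

```python
# Hypothetical Milvus client code; the URI, collection name, and vector are placeholders.
# Drop-in compatibility means this application-side code stays the same regardless of
# whether the similarity search runs on CPUs or on accelerated storage behind the server.
from pymilvus import MilvusClient

client = MilvusClient(uri="http://localhost:19530")   # standard Milvus endpoint

query_vector = [0.1] * 768     # stand-in query embedding

results = client.search(
    collection_name="documents",   # hypothetical collection
    data=[query_vector],
    limit=5,                       # top-5 semantic matches
)
print(results)
```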

Building on this compatibility, Dnotitia is actively expanding its ecosystem through partnerships with global AI storage vendors and Cloud Service Providers (CSPs). The company aims to integrate VDPU technology directly into large-scale storage infrastructures, ensuring that these systems can deliver the highly accurate and lightning-fast semantic search capabilities required for next-generation AI workloads.

Roadmap to ASIC

The VDPU specializes in parallelizing graph traversal and vector similarity calculations, optimizing data delivery to GPUs and NPUs during LLM inference. Dnotitia confirmed that the ASIC version of the VDPU is scheduled for release in the second half of 2026, promising enhanced memory capacity and power efficiency for hyperscale environments.
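For readers unfamiliar with the term, graph-based vector indexes (HNSW-style structures) answer a query by greedily walking a proximity graph toward the stored vectors closest to the query. The sketch below is a simplified Python rendering of that inner loop, assuming a prebuilt neighbor graph and a single entry point; the repeated distance evaluations it performs are the operations a hardware accelerator would parallelize.

```python
# Minimal sketch of greedy best-first traversal over a proximity graph, the per-query
# inner loop of graph-based vector indexes. Data structures and the single-entry-point
# strategy are simplifications for illustration only.
import heapq
import numpy as np

def greedy_search(query, vectors, neighbors, entry_point, k=10):
    """vectors: id -> np.ndarray; neighbors: id -> list of adjacent ids."""
    dist = lambda i: float(np.linalg.norm(vectors[i] - query))
    visited = {entry_point}
    candidates = [(dist(entry_point), entry_point)]   # min-heap ordered by distance
    best = []                                         # max-heap (negated) of the top-k found

    while candidates:
        d, node = heapq.heappop(candidates)
        # Stop when the closest unexplored candidate is worse than the k-th best so far.
        if len(best) == k and d > -best[0][0]:
            break
        heapq.heappush(best, (-d, node))
        if len(best) > k:
            heapq.heappop(best)
        for nxt in neighbors[node]:          # distance evaluations dominate the cost here;
            if nxt not in visited:           # this is the work dedicated hardware parallelizes
                visited.add(nxt)
                heapq.heappush(candidates, (dist(nxt), nxt))

    return sorted((-nd, i) for nd, i in best)
```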

CEO Perspective

“The era of manual directory search is ending,” said MK Chung, CEO of Dnotitia. “We are moving to a time where AI retrieves precise data to generate new value. This requires an AI-native infrastructure, and VDPU will serve as the core engine empowering our partners—from storage vendors to CSPs—to build the world's most efficient AI storage systems.”

Software Ecosystem Expansion

Alongside hardware innovation, Dnotitia continues to strengthen its software ecosystem. The company recently updated Seahorse Cloud, its SaaS platform. Following the launch of the all-in-one Version 1.0 in April, Version 2.0 was released in August, adding advanced RAGOps and AgentOps capabilities based on the Model Context Protocol (MCP) to streamline AI agent development.