Design & Reuse

CoMira Announces One of the Industry’s First Commercially Available UALink IP Solutions for Next-Generation AI Scale-Up Networks

Dec. 11, 2025 – CoMira, a leading provider of high-speed Ethernet and AI interconnect IP, today announced one of the industry’s first market-ready UALink IP solutions. The new scalable IP family delivers a full Transaction Layer, Data Link Layer, and Physical Layer implementation of the UALink 200G 1.0 specification, enabling next-generation AI networks with the high bandwidth, low latency, and deterministic performance required for large-scale training and inference workloads.

Product Overview

The CoMira UALink IP is a configurable, multi-SKU portfolio covering the Transaction, Data Link, and Physical layers of the UALink 200G 1.0 specification. Designed for next-generation GPU and accelerator clusters and AI Scale-Up fabrics, the architecture scales from 200G to 800G per instance through flexible channel, lane, and datapath configurations, and reaches multi-Terabit aggregate performance through multi-instance deployments. Built on industry standards including IEEE 802.3ck/df/dj and LLRSFEC, the solution provides the high bandwidth, low latency, and deterministic behavior required for large-scale training and inference systems. Multiple customers have already selected CoMira’s UALink IP for their next-generation accelerator and interconnect silicon programs, marking one of the earliest commercial deployments of the UALink 200G 1.0 specification and confirming the readiness, interoperability, and commercial maturity of CoMira’s full-stack solution as the industry moves toward open, multi-vendor scale-up interconnects.

Key features

  • Full TL/DL/PL implementation of the UALink 200G 1.0 specification.
  • Transaction Layer with UPLI transaction packing, address caching, credit-based flow control, and TL FLIT data poisoning for traffic optimization and error containment.
  • Deterministic Data Link operation enabled by pacing, link-level replay, and FLIT buffering for high-load reliability.
  • Integrated PCS and SerDes control with near-end SerDes loopback and flexible lane remapping for robust bring-up and interoperability.
  • Compliance with IEEE 802.3ck/df/dj standards and support for RS-FEC and LLRSFEC to ensure high link integrity.
  • Scalable bandwidth architecture supporting up to 800G per instance, with flexible lane and channel configurations (R4/R2/R1 modes) to align with diverse PHY and system design constraints.
  • Programmable error injection for comprehensive FEC and PCS validation across lab and silicon bring-up.
  • Flexible FCS handling, including generation, checking, and stomping, to accelerate debug and verification workflows.
  • AXI-based programming interface integrated with a unified interrupt controller for streamlined control and monitoring.
  • UART buffering for diagnostics, low-level management, and system-level debug processes.
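To make the credit-based flow control feature above concrete, the sketch below simulates the general mechanism: a sender may transmit a FLIT only while it holds credits advertised by the receiver, and stalls otherwise. All names and sizes are hypothetical teaching values, not CoMira’s implementation or the UALink specification’s exact scheme.

```python
class CreditedLink:
    """Toy model of credit-based flow control: the sender may only
    transmit while it holds credits granted by the receiver."""

    def __init__(self, initial_credits: int):
        self.credits = initial_credits   # receiver buffer slots advertised at link-up
        self.delivered = []              # FLITs accepted by the receiver

    def send(self, flit) -> bool:
        """Transmit one FLIT if a credit is available; otherwise back-pressure."""
        if self.credits == 0:
            return False                 # no buffer space at receiver: sender stalls
        self.credits -= 1
        self.delivered.append(flit)
        return True

    def credit_return(self, n: int = 1):
        """Receiver frees buffer slots and returns credits to the sender."""
        self.credits += n

link = CreditedLink(initial_credits=2)
sent = [link.send(f"flit{i}") for i in range(3)]  # third send stalls
link.credit_return()                              # receiver frees one slot
sent.append(link.send("flit2-retry"))             # retry now succeeds
```

The stall-then-retry pattern is the point: because the sender can never exceed the receiver’s advertised buffering, the link never drops FLITs under load, which is what makes the behavior deterministic.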

The CoMira UALink IP architecture is one of the first fully realized, parameterized platforms aligned to the open UALink standard. By tightly integrating the Transaction, Data Link, and Physical layers, it delivers deterministic latency and predictable throughput under massive concurrency, offering a low-risk implementation path for system-level designers.

The rapid growth of AI training and inference workloads is driving an industry-wide shift toward larger accelerator pods and tightly coupled compute fabrics. As models scale to trillions of parameters and cluster sizes expand into the thousands of accelerators, traditional interconnect solutions face increasing pressure to deliver predictable latency, high concurrency, and robust flow control under demanding traffic patterns. UALink emerges in this context as an open, multi-vendor scale-up standard designed to enable efficient, high-bandwidth communication between heterogeneous accelerators within a single pod.

CoMira’s UALink IP directly addresses these requirements by providing a full-stack, standard-aligned interconnect platform that integrates seamlessly into AI servers, accelerator cards, switch ASICs, chiplet-based compute modules, and custom AI SoCs. The architecture supports a wide range of deployment scenarios—including AI training clusters, inference engines, HPC workloads, and next-generation accelerator baseboards—where deterministic performance and interoperability are critical to system-level efficiency. Multiple customers are already adopting CoMira’s UALink IP for their next-generation designs, validating its applicability across diverse system topologies and confirming the growing demand for open, scalable UALink-based solutions in data center and hyperscale environments.

Contact us

The CoMira UALink IP portfolio is available now for licensing, supporting immediate integration into next-generation AI accelerators, switch ASICs, XPUs, and high-performance compute designs. For more information, please contact us at sales@comira-inc.com.