The vDMA-AXI IP Core implements a highly efficient, configurable DMA engine engineered for AI-optimized SoCs and FPGAs powering virtualized data centers. It is intended as a centralized DMA that supports concurrent data movement in any direction, making it particularly well suited to many-core SoCs such as AI and ML processors. Its architecture allows hundreds of independent, concurrent DMA channels to be distributed among multiple Virtual Machines (VMs) or host domains without sacrificing performance or resource utilization. The core is further optimized to deliver high throughput on small data packet transfers, a common weakness of traditional DMA engines. Optionally, it can be attached externally to Rambus' PCIe controller IP for PCIe 5.0 with AXI interconnect, providing a scalable, enterprise-class PCIe interface solution for compute, network, and storage SoCs.