Flex Logix Announces InferX™ High Performance IP for DSP and AI Inference
MOUNTAIN VIEW, Calif. – April 24, 2023 – Flex Logix® Technologies, Inc., a leading innovator in DSP & AI inference IP and the leading supplier of eFPGA IP, today announced the availability of InferX™ IP & software for DSP and AI inference. InferX joins EFLX® eFPGA as Flex Logix’s second IP offering. It can be used by device manufacturers and systems companies that want the performance of a DSP-FPGA or an AI-GPU in their SoC, but at a fraction of the cost and power. The company’s EFLX eFPGA product line has already been proven in dozens of chips, with many more in design, across nodes from 180nm to 7nm, with 5nm in development.
“By integrating InferX into an SoC, customers not only maintain the performance and programmability of an expensive and power-hungry FPGA or GPU, but they also benefit from much lower power consumption and cost,” said Geoff Tate, Founder and CEO of Flex Logix. “This is a significant advantage to systems customers that are designing their own ASICs, as well as chip companies that have traditionally had the DSP-FPGA or AI-GPU sitting next to their chip and can now integrate it to get more revenue and save their customer power and cost. InferX is 80% hard-wired, but 100% reconfigurable.”
The end-user benefit is more powerful DSP and AI in smaller systems, at lower power and lower cost. With InferX AI, users can process megapixel images with much more accurate models such as Yolov5s6 and Yolov5L6, detecting objects at smaller sizes and greater distances than is affordable today.
The InferX Advantage
InferX DSP is InferX hardware combined with Softlogic for DSP operations, which Flex Logix provides for operations such as FFTs that are switchable on the fly between sizes (e.g. 1K to 4K to 2K); FIR filters with any number of taps; complex matrix inversions (16×16, 32×32 or other sizes); and many more. InferX DSP streams at Gigasamples/second, can run multiple DSP operations, and DSP operations can be chained. DSP is done on real/complex INT16 data with 32-bit accumulation for very high accuracy. With InferX DSP, customers can integrate DSP performance that is as fast as or faster than the leading FPGA at 1/10th of the cost and power, while keeping all of the flexibility to reconfigure almost instantly. For example, InferX DSP in less than 50 square millimeters of silicon in N5 can perform complex INT16 FFTs at 68 Gigasamples/second and switch instantly between FFT sizes from 256 to 8K points. This is faster than the best FPGA available today, at a fraction of the cost, power and size.
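The on-the-fly size switching described above can be illustrated with a minimal software model: the same complex INT16 sample stream is processed frame by frame with a different FFT size each time, with no reconfiguration step in between. This NumPy sketch is illustrative only; the function name and quantization details are assumptions, not the InferX datapath.

```python
import numpy as np

def int16_fft(samples, n):
    """Software model of a complex INT16 FFT of size n.
    (Illustrative only -- InferX computes this in silicon with
    32-bit accumulation; this function is a hypothetical stand-in.)"""
    # Clamp the input to the INT16 range the datapath would see.
    re = np.clip(np.real(samples[:n]), -32768, 32767).astype(np.int16)
    im = np.clip(np.imag(samples[:n]), -32768, 32767).astype(np.int16)
    # Widen before the transform, mimicking wide internal accumulation.
    x = re.astype(np.int64) + 1j * im.astype(np.int64)
    return np.fft.fft(x)

# "On-the-fly" size switching: process the same stream with
# different FFT sizes frame by frame, e.g. 1K -> 4K -> 2K.
stream = (np.random.randn(4096) + 1j * np.random.randn(4096)) * 1000
for size in (1024, 4096, 2048):
    spectrum = int16_fft(stream, size)
    print(size, spectrum.shape)
```

In hardware the switch is a control-word change rather than a re-synthesis, which is what distinguishes this from an FPGA reconfiguration cycle.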
InferX AI is InferX hardware combined with the Inference Compiler for AI inference. The Inference Compiler takes in a customer’s neural network model in PyTorch, ONNX or TFLite format, quantizes the model with high accuracy, compiles the graph for high utilization, and generates the runtime code that executes on the InferX hardware. A simple, easy-to-use API is provided to control the InferX IP. With InferX AI, customers can integrate AI inference performance that is as fast as or faster than the leading edge-AI modules at 1/10th of the cost and power, while keeping all of the flexibility and the ability to run multiple models or change models on the fly. InferX AI is optimized for megapixel batch=1 operation, and the Inference Compiler is available for evaluation. As an example, with about 15 square millimeters of silicon in N7, InferX AI can run Yolov5s at 175 inferences/second: 40% faster than the fastest edge AI module, Orin AGX at 60W.
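The quantization step mentioned above can be sketched in general terms. Per-channel symmetric INT8 quantization is a common technique for preserving accuracy when lowering a float model for inference; the sketch below illustrates that general technique only, and is not the proprietary InferX Inference Compiler implementation.

```python
import numpy as np

def quantize_per_channel(weights):
    """Per-channel symmetric INT8 quantization of a weight tensor.
    (Generic illustration of model quantization; names and details
    are assumptions, not the InferX Inference Compiler's method.)"""
    # One scale per output channel keeps large and small filters accurate.
    scales = np.abs(weights).max(axis=tuple(range(1, weights.ndim))) / 127.0
    scales = np.where(scales == 0, 1.0, scales)  # avoid divide-by-zero
    shape = (-1,) + (1,) * (weights.ndim - 1)
    q = np.round(weights / scales.reshape(shape)).astype(np.int8)
    return q, scales

w = np.random.randn(16, 3, 3, 3).astype(np.float32)  # conv-style weights
q, s = quantize_per_channel(w)
# Dequantize to check the reconstruction error is small.
reconstructed = q.astype(np.float32) * s.reshape(-1, 1, 1, 1)
print(np.abs(w - reconstructed).max())  # bounded by half a scale step
```

Per-channel (rather than per-tensor) scales are one reason quantized models can stay close to float accuracy, which matters for the batch=1 megapixel workloads InferX AI targets.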
InferX technology is proven and production-qualified in 16nm and will be available in the most popular FinFET nodes.
InferX hardware is also scalable. Its building block is a compute tile that can be arrayed for more throughput: a 4-tile array delivers 4x the performance of a single tile. The InferX array with the performance the customer wants is delivered with an AXI bus interface for easy integration into their SoC.
For general information on InferX and EFLX product lines, visit our website at this link. For more information under NDA, qualified customers can contact us at this link.
About Flex Logix
Flex Logix is a reconfigurable computing company providing leading-edge eFPGA and AI inference technologies for semiconductor and systems companies. Flex Logix eFPGA enables volume FPGA users to integrate the FPGA into their companion SoC, resulting in a 5-10x reduction in the cost and power of the FPGA and increasing compute density, which is critical for communications, networking, data centers, microcontrollers and more. Its scalable AI inference is the most efficient, providing much higher inference throughput per square millimeter and per watt. Flex Logix supports process nodes from 180nm to 7nm, with 5nm in development, and can support other nodes on short notice. Flex Logix is headquartered in Mountain View, California and has an office in Austin, Texas. For more information, visit https://flex-logix.com.
Related News
- Flex Logix Pairs its InferX X1 AI Inference Accelerator with the High-Bandwidth Winbond 4Gb LPDDR4X Chip to Set a New Benchmark in Edge AI Performance
- Flex Logix to speak at the 2021 AI Hardware Summit on Optimizing AI Inference Performance
- Flex Logix Announces EFLX eFPGA And nnMAX AI Inference IP Model Support For The Veloce Strato Emulation Platform From Mentor
- Flex Logix Discloses Real-World Edge AI Inference Benchmarks Showing Superior Price/Performance For All Models
- Flex Logix Boosts AI Accelerator Performance and Long-Term Efficiency