FPGA comes back into its own as edge computing and AI catch fire
By Efinix, Inc. (May 20, 2021)
The saturation of mobile devices and ubiquitous connectivity has immersed the world in wireless technology, from the growing terrestrial and non-terrestrial cellular infrastructure, with its supporting fiber and wireless backhaul networks, to the massive IoT ecosystem, whose newly developed protocols and SoCs support the billions of sensor nodes that send their data to the cloud.
By 2025, the global datasphere is expected to approach 175 zettabytes per year. What’s more, the number of connected devices is anticipated to reach 50 billion by 2030. However, the traditional scheme of distributed sensing with centralized, cloud-based data processing has severe limitations in security, power management, and latency; even the end-to-end (E2E) latencies for the ultra-reliable low-latency communications defined in the 5G standards are on the order of tens of milliseconds. This has led to a demand to drive data processing to the edge, disaggregating computational (and storage) resources to reduce the massive overhead that comes with involving the entire signal chain in uplink and downlink transmissions. This, in turn, increases the agility and scalability of the network.
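As a rough illustration of why distance alone constrains cloud latency, the sketch below compares one-way fiber propagation delay to a distant data center against a nearby edge node. The 1,500 km and 1 km distances and the speed-of-light-in-fiber figure are illustrative assumptions, not numbers from this article.

```python
# Back-of-the-envelope latency comparison (illustrative assumptions only).
SPEED_OF_LIGHT_IN_FIBER_KM_PER_MS = 200.0  # roughly 2/3 of c, a common rule of thumb

def one_way_propagation_delay_ms(distance_km: float) -> float:
    """Propagation delay over fiber, ignoring queuing, switching, and processing time."""
    return distance_km / SPEED_OF_LIGHT_IN_FIBER_KM_PER_MS

# Hypothetical distances: a regional cloud data center vs. an on-premises edge node.
cloud_km = 1_500.0
edge_km = 1.0

print(f"Cloud one-way propagation: {one_way_propagation_delay_ms(cloud_km):.1f} ms")   # 7.5 ms
print(f"Edge one-way propagation:  {one_way_propagation_delay_ms(edge_km):.3f} ms")    # 0.005 ms
```

Under these assumptions, propagation alone consumes several milliseconds of a tens-of-milliseconds E2E budget before any queuing, switching, or server-side processing is counted, which is the overhead that edge processing avoids.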
New advances in artificial intelligence, machine learning (ML), and deep neural networks (DNNs) promise to deliver this insight at the edge, but these techniques carry a computational burden that conventional software and embedded processor approaches cannot satisfy. Additionally, hyper-specialized, application-specific ICs (ASICs) are failing to fill the gap, as shrinking process geometries drive development and production costs out of the realm of edge devices. Moreover, the lack of reconfigurability of ASICs severely limits any potential system upgrades. Traditional FPGA approaches, meanwhile, are typically too expensive and power-hungry for the densities demanded by new-generation edge applications.
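To give a sense of scale for that computational burden, here is a minimal sketch that estimates the multiply-accumulate (MAC) count of a single convolutional layer. The layer dimensions are assumed for illustration and are not taken from this article.

```python
# Rough MAC-count estimate for one convolutional layer (illustrative assumptions only).
def conv_layer_macs(out_h: int, out_w: int, out_ch: int,
                    in_ch: int, k_h: int, k_w: int) -> int:
    """MAC operations for a standard (non-depthwise, non-strided-output) conv layer."""
    return out_h * out_w * out_ch * in_ch * k_h * k_w

# Hypothetical layer: 112x112 output, 64 output channels, 3x3 kernel over 32 input channels.
macs = conv_layer_macs(out_h=112, out_w=112, out_ch=64, in_ch=32, k_h=3, k_w=3)
print(f"{macs / 1e6:.0f} million MACs for a single layer")  # ~231 million MACs
```

At 30 inferences per second, this one assumed layer alone would demand on the order of 7 billion MACs per second, before counting the dozens of other layers in a typical DNN, which is why general-purpose embedded processors struggle and why dedicated acceleration is needed at the edge.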