A highly efficient form of adaptive computing is emerging that delivers lower power consumption, reduced silicon area, higher performance and lower cost than FPGA-based reconfigurable-computing methods: precisely the attributes needed for next-generation system designs.
Reconfigurable computing (RC) instantiates a static algorithm of homogeneous elements in an FPGA and runs it for an extended length of time: seconds, minutes, hours or days. RC chips are based on FPGA architectures, and FPGAs were designed and optimized as gate array replacements. By comparison, in adaptive computing, dynamic algorithms are directly mapped onto dynamic hardware resources, resulting in the specific hardware engine needed for a particular task or problem, and thus the most efficient use of silicon.
Where power consumption isn't a major consideration, RC is ideal for wall-socket applications, such as basestations and supercomputing, that can afford arrays of these expensive ($500 to $5,000) devices. The cost can be considerable because several FPGAs, and often thousands, go into an RC board.
Adaptive computing is not FPGA based, and requires a new approach to how CMOS silicon is used. It instantiates any number of heterogeneous algorithmic elements in hardware and runs them for as little as one clock cycle, rather than the thousands to millions of clock cycles common in RC.
FPGAs have an area penalty of 125 to 1, meaning 125 actual transistors are needed to deliver one usable transistor. The capacitance penalty is about 1,000 to 1, because an FPGA has many more wires and transistors than an ASIC. Many interconnects extend across the die, with multiple tap points, creating a large amount of capacitance. As a result, the FPGA consumes far greater power.
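The power implication of that capacitance penalty can be sketched with the standard CMOS dynamic-power relation, P = α·C·V²·f. The figures below (switched capacitance, supply voltage, clock rate, activity factor) are illustrative assumptions chosen only to show how the article's roughly 1,000-to-1 capacitance ratio carries straight through to power:

```python
# Illustrative sketch, not vendor data: how a capacitance penalty
# translates into dynamic power via P = alpha * C * V^2 * f.

def dynamic_power(switched_capacitance_f, vdd_v, freq_hz, activity=0.15):
    """Dynamic CMOS power in watts: P = alpha * C * V^2 * f."""
    return activity * switched_capacitance_f * vdd_v**2 * freq_hz

# Suppose an ASIC datapath switches 1 nF per cycle; a ~1,000:1
# capacitance penalty would put the FPGA equivalent near 1 uF.
asic_power = dynamic_power(1e-9, vdd_v=1.2, freq_hz=200e6)
fpga_power = dynamic_power(1e-6, vdd_v=1.2, freq_hz=200e6)
print(f"ASIC: {asic_power:.3f} W, FPGA: {fpga_power:.1f} W")
```

With everything else held equal, the power ratio tracks the capacitance ratio exactly, which is why the capacitance penalty matters more than raw transistor count alone.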
Reconfiguration time is proportional to the number of FPGA configuration bits. The more configuration bits, the longer it takes for a device to reconfigure. FPGAs on the drawing board are expected to consume upward of 10 Mbits/second of configuration data. Additionally, because FPGAs are fine-grained and predominantly designed to replace gate arrays, they are not able to change very rapidly.
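The proportionality stated above is simple to make concrete. In the sketch below, both the bitstream size and the configuration-port bandwidth are hypothetical round numbers, not figures for any real device:

```python
# Rough sketch of the relationship above: reconfiguration time grows
# linearly with the number of configuration bits. All figures are
# illustrative assumptions, not measured values.

def reconfig_time_s(config_bits, port_bits_per_s):
    """Seconds to load a full bitstream through the configuration port."""
    return config_bits / port_bits_per_s

# A hypothetical 10-Mbit bitstream loaded through a 50 Mbit/s port:
t = reconfig_time_s(10e6, 50e6)
print(f"{t * 1000:.0f} ms per full reconfiguration")  # 200 ms
```

At hundreds of milliseconds per full reload, swapping algorithms every few clock cycles, as adaptive computing proposes, is out of reach for a conventional full-bitstream FPGA flow.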
The majority of designers using RC must rely on conventional ASIC design tools based on hardware-description languages such as Verilog, VHDL or SystemC. This proves to be a poor methodology, frequently forcing the designer to resort to troublesome manual layout because of the inordinate number of FPGA wiring interconnections. The productivity of an engineer using an HDL is 10 times worse than with a software design methodology based on a high-level language such as C. Few engineers are sufficiently proficient at programming these RC devices.
RC is based on FPGA structures with a plethora of interconnects for the hundreds of thousands of tiny, one-gate elements, or blocks. This causes a huge interconnection overhead, resulting in large silicon area, high capacitance, low speed, high power consumption and high cost.
In adaptive computing, interconnects are better matched for the requirements of the problem being solved. This means there are substantially fewer interconnects (hundreds), so the interconnection network is small and scalable. The result is low interconnection overhead, which means smaller silicon area, substantially lower capacitance, higher speed, low power consumption and lower cost.
Paul Master, vice president and chief technology officer, and Fred Furtek, principal engineer, QuickSilver Technology Inc., San Jose, Calif.