Using programmable processor array chips for algorithmic-intensive tasks
By Patrick Fuller and Dr. Sam Jenkins, picoChip. April 5, 2005
Although it has found many uses in signal processing, the Fast Fourier Transform (FFT) has taken on even more importance as a fundamental part of the algorithms used for communications protocols based on Orthogonal Frequency Division Multiplexing (OFDM). In the wireless space, OFDM is used in the newer forms of IEEE 802.11 wireless local area networking and in the IEEE 802.16-2004 (WiMAX) specification for metropolitan area networking, and it has been proposed as the basis for successors to 3G cellular systems (or even as an option for increased data rates in a future version of UMTS, which would create a strange terminological situation: "the OFDM mode of WCDMA"). In the wired domain, OFDM is referred to as Discrete MultiTone (DMT) and is the basis of the ADSL standard.
The common strand in all these systems is a demand for high-speed FFT processing. Time-to-market pressure often drives vendors to release products that comply with early versions of a standard but can be upgraded to the final version with a simple software update. For example, 802.16e has recently shifted from a fixed FFT size (256 for OFDM, 2048 for OFDMA) to a scalable PHY, in which the FFT size varies with channel bandwidth; whether the maximum should be 1024 or 2048 is still being debated. This demands a solution that can be implemented on programmable silicon. Many algorithms and architectures exist for implementing the FFT. Very often, however, a software-programmable platform may mandate an algorithm optimised for software processing, even though it may be slower or less power-efficient than a hardware-oriented design. Changing the FFT size in a traditional, block-based implementation also poses significant problems for overall system performance because of scheduling considerations.
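To make the algorithmic workload concrete, the kind of transform these systems require can be sketched as a minimal radix-2 decimation-in-time FFT in Python. This is an illustrative textbook implementation, not picoChip's design; the function name is our own, and it handles any power-of-two size (256, 1024, 2048, and so on), which is the property a scalable PHY relies on.

```python
import cmath

def fft(x):
    """Iterative radix-2 decimation-in-time FFT.

    The input length must be a power of two (e.g. 256, 1024, 2048),
    matching the sizes used by OFDM/OFDMA PHYs.
    """
    n = len(x)
    assert n and (n & (n - 1)) == 0, "size must be a power of two"
    a = list(x)

    # Bit-reversal permutation: reorder inputs so the in-place
    # butterfly stages below produce the output in natural order.
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]

    # log2(n) butterfly stages, doubling the sub-transform size each pass.
    size = 2
    while size <= n:
        w_step = cmath.exp(-2j * cmath.pi / size)  # principal twiddle factor
        for start in range(0, n, size):
            w = 1.0
            for k in range(size // 2):
                even = a[start + k]
                odd = a[start + k + size // 2] * w
                a[start + k] = even + odd
                a[start + k + size // 2] = even - odd
                w *= w_step
        size *= 2
    return a
```

Because the transform length is just the input length, the same code serves a 256-point OFDM symbol or a 2048-point OFDMA symbol; the cost of that flexibility in software is the O(n log n) inner loops, which is precisely where the algorithm-versus-architecture trade-offs discussed above arise.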