By Nat Seshan, Courtesy of EE Times
Aug 29 2005 (9:00 AM)
Power consumption in many digital signal processor applications is just as significant as performance, and just as important for the designer to understand. DSP applications fall into two large groups with respect to power: line-powered systems, such as those used for communications infrastructure, and portable systems powered by batteries. Although performance requirements tend to dominate in the first group, there are power constraints that cannot be ignored. In battery-operated applications, performance must always be evaluated relative to power. And because portable applications have multiple operating modes, their power requirements tend to be more varied.
DSPs and other processors consume power in three ways: through active operation, by clocking circuitry, and through transistor leakage. Each offers potential for reducing power consumption. Possible measures include optimization of manufacturing processes for either speed or power reduction; use of transistors with different switching threshold voltages, to conserve power in noncritical paths; static and dynamic scaling of voltage and frequency; voltage and clock domains that allow parts of the device to be disabled or unclocked when not in use; peripheral integration and two-level cache memory architectures that reduce off-chip accesses; and processing functionality supplied by DSP and RISC cores, as well as specialized coprocessors.
- Know the power and thermal budgets of your infrastructure application. Since these budgets represent worst-case scenarios, maximum performance can only be achieved within the given power supply and cooling constraints of the system. High-end DSPs designed for infrastructure equipment are built in ever-smaller semiconductor geometries, enabling processing innovations and higher operating frequencies that support more channels in the same space. Higher performance consumes more power in the same die area, however, so it becomes increasingly important to dissipate heat efficiently.
- Understand the power requirements of the different operating modes of your portable application. Standby modes need to conserve power during waiting periods, while active modes may benefit from the use of different types of processing elements. An application like a cell phone, with long wait times, is better served by a processor that minimizes leakage current. An MP3 player, on the other hand, requires a device that reduces active power consumption. A multimode system such as a digital still camera justifies the use of a multicore processor that combines DSP signal processing and RISC control.
- Understand how your application transitions between power modes so that you can plan for current transients in your power supply design. Knowing the power required in each operating mode, as advised above, gives you the data needed for this analysis.
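As a rough sketch of this kind of mode analysis (the mode names, currents, and duty cycles below are invented for illustration, not taken from any device datasheet), the average draw sets battery life while the largest step between modes sizes the supply transient:

```python
# Sketch: average current across operating modes, and the worst-case
# step load between modes. All figures are hypothetical examples.
modes = {                     # mode: (current in mA, fraction of time)
    "standby": (2.0, 0.90),
    "active":  (150.0, 0.10),
}

# Time-weighted average current (drives battery-life estimates).
i_avg = sum(i * t for i, t in modes.values())

# Largest current step between modes (drives transient planning).
i_step = max(i for i, _ in modes.values()) - min(i for i, _ in modes.values())

print(f"average: {i_avg:.1f} mA, worst-case step load: {i_step:.1f} mA")
```

The point of the exercise: a system that idles at a couple of milliamps but bursts to 150 mA still averages well under 20 mA, yet its supply must handle the full 148 mA step.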
- Consider whether your system needs all of its DSP functionality all the time. Gated clock and voltage domains save power by disabling unused chip functions.
- Consider a device with more on-board memory, if your application can benefit and your price budget permits, to reduce the need for costly off-chip accesses. Off-chip access occurs at a higher voltage and drives a larger capacitive load, and the access latency reduces performance.
- Don't assume that a few published numbers tell the whole story of a DSP's power consumption. A vendor-supplied spreadsheet can help you analyze the power needs of your application much more accurately. Spreadsheets let you enter operational profiles for individual devices that detail which peripherals are in use, which standard algorithms are running, anticipated system I/O loading and other factors that affect power.
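A minimal sketch of the per-module accounting such a spreadsheet performs. The module names and milliwatt figures below are hypothetical placeholders, not values from any vendor tool:

```python
# Sketch: summing an operational power profile module by module,
# as a vendor power-estimation spreadsheet does. Figures are invented.
profile_mw = {
    "CPU core at 75% loading":        180.0,
    "serial port, active":             12.0,
    "external memory interface, 50%":  45.0,
    "DMA controller":                  20.0,
}

total_mw = sum(profile_mw.values())
print(f"estimated total: {total_mw:.0f} mW")
```

Even this toy version shows why a single headline number misleads: the estimate changes substantially as peripherals are enabled or bus utilization shifts.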
- Don't assume that "typical" numbers help determine a device's system fit. You have to plan for the worst-case device, and the semiconductor vendor should quote numbers that reflect this. Further, don't assume in an infrastructure system that your devices will span a mix of lower-power and higher-power parts: all of them may have been processed at the strong end of the process spectrum, putting every device in the higher-power range.
- Don't assume that a device built in the latest process is lower-power. Smaller-geometry processes often incur additional leakage power, though they also run at a lower voltage, which lowers active power.
- Don't overlook frequency as a means of saving power. If a DSP is less than fully loaded, lowering the core clock saves active power in proportion to the frequency reduction. Lowering the voltage is even more effective, since dynamic power scales with the square of the voltage. DSPs that offer dynamic frequency-voltage reduction can cut back on unnecessary performance overhead during slack periods.
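The arithmetic behind this bullet follows from the standard CMOS dynamic-power relation P ≈ C·V²·f. The capacitance, voltage, and frequency values below are invented for illustration only:

```python
# Sketch: dynamic CMOS power scales as P ~ C * V^2 * f.
# All values here are illustrative, not from any datasheet.
def dynamic_power(c_eff, v, f):
    """c_eff: effective switched capacitance (F); v: supply (V); f: clock (Hz)."""
    return c_eff * v**2 * f

C = 1e-9                                   # 1 nF switched capacitance (assumed)
p_full   = dynamic_power(C, 1.2, 600e6)    # full speed, full voltage
p_half_f = dynamic_power(C, 1.2, 300e6)    # frequency scaling alone
p_dvfs   = dynamic_power(C, 0.9, 300e6)    # frequency plus voltage scaling

print(f"full: {p_full*1e3:.0f} mW, half-f: {p_half_f*1e3:.0f} mW, "
      f"DVFS: {p_dvfs*1e3:.0f} mW")
```

Halving the clock alone halves dynamic power; dropping the voltage as well, as DVFS does, cuts it by a further factor of (0.9/1.2)², which is why voltage scaling is the bigger lever.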
- Don't neglect the on-chip memory architecture. One of the biggest power drains, as well as one of the biggest sources of delay, lies in off-chip accesses. Two-level caches not only reduce off-chip access but also cut power consumption on-chip. Most core accesses hit the level-one data and program memories, which are smaller than the L2 memory and so have less capacitance, making them fast while reducing power consumed per access. An L2 that can serve as either cache or direct-mapped memory adds programming flexibility.
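The effect of hit rates on memory power can be sketched with a simple expected-energy calculation. The per-access energies and hit rates below are hypothetical, chosen only to show the shape of the math:

```python
# Sketch: average energy per access in a two-level cache hierarchy.
# Per-access energies are illustrative placeholders, not measured values.
E_L1  = 0.05e-9   # joules per L1 access (assumed)
E_L2  = 0.30e-9   # joules per L2 access (assumed)
E_EXT = 5.00e-9   # joules per off-chip access (assumed, dominated by I/O)

def avg_access_energy(h1, h2):
    """h1: L1 hit rate; h2: L2 hit rate among accesses that miss L1."""
    miss1 = 1.0 - h1
    return h1 * E_L1 + miss1 * (E_L2 + (1.0 - h2) * E_EXT)

e_cached = avg_access_energy(0.95, 0.90)
print(f"with caches: {e_cached * 1e9:.4f} nJ/access vs "
      f"{E_EXT * 1e9:.1f} nJ/access going off-chip every time")
```

With these assumed numbers, high hit rates in the small, low-capacitance L1 bring the average cost per access down by well over an order of magnitude compared with going off-chip.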
Nat Seshan (email@example.com), distinguished member of the technical staff, Advanced Architecture and Chip Technology Team, Texas Instruments Inc. (Dallas)
Copyright © 2005 CMP Media LLC