Reducing power dissipation is a dominant concern in battery-powered applications such as cellular phones and PDAs. With a rising share of system-on-chip die area, sometimes as much as 80 percent, being devoted to memory elements, design techniques for lowering active and standby power are becoming more critical to overall system power reduction.
Power dissipation in embedded SRAMs falls into two main categories: active power and standby power. Active power is the power consumed during normal operation of the memory and is typically measured in milliwatts per megahertz (mW/MHz).
Two factors contribute to active power: capacitive switching power and feedthrough current. For SRAM designs specifically, a key contributor to capacitive switching is the repeated discharge and precharge of the bit lines during memory accesses. Partitioning the memory array into multiple banks, or subarrays, helps reduce the switching of these large bit line capacitances: with a banked architecture, the banks can be decoded so that no capacitance is switched in any unaccessed bank.
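As a back-of-the-envelope sketch of this effect (the capacitance, voltage and array dimensions below are hypothetical, chosen only to show the trend, and the C·V² model is a first-order simplification):

```python
# Bit-line switching energy per access, modeled as E = C_bl * Vdd^2
# per discharged/precharged bit line. All numbers are illustrative.

def switched_energy_pj(columns, rows, c_per_cell_ff=2.0, vdd=1.8, banks=1):
    """Bit-line energy per access, in picojoules.

    With a banked architecture only the accessed bank's bit lines
    (rows/banks cells tall) are discharged and precharged; the bit
    lines of unaccessed banks are never switched.
    """
    c_bitline_ff = (rows / banks) * c_per_cell_ff           # one bit line's capacitance
    e_per_line_pj = c_bitline_ff * 1e-15 * vdd**2 * 1e12    # C * V^2, converted to pJ
    return columns * e_per_line_pj                          # every column in the word switches

flat = switched_energy_pj(columns=128, rows=1024, banks=1)
banked = switched_energy_pj(columns=128, rows=1024, banks=4)
print(f"monolithic: {flat:.1f} pJ/access, 4 banks: {banked:.1f} pJ/access")
```

In this simplified model the switched bit-line energy scales inversely with the number of banks, which is the motivation for the banked decode described above.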
Divide and conquer
Memories also have a unique feedthrough current path from power to ground that is not common in typical CMOS logic. When a word line is enabled within the memory array, all bit cells attached to that word line conduct current through the access transistors. To reduce this feedthrough current, SRAM designers can divide the word line into multiple partitions.
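A first-order sketch of the benefit of word line partitioning (the per-cell current and row width below are hypothetical):

```python
# Each bit cell on an enabled word line conducts roughly i_cell through
# its access transistors while the word line is high. Illustrative values.

def wordline_feedthrough_ua(cells_per_row, segments=1, i_cell_ua=5.0):
    """Total feedthrough current (uA) while the word line is high.

    Dividing the word line into segments means only the accessed
    segment's cells conduct; cells in other segments stay deselected.
    """
    return (cells_per_row / segments) * i_cell_ua

print(wordline_feedthrough_ua(256, segments=1))  # 1280.0
print(wordline_feedthrough_ua(256, segments=4))  # 320.0
```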
Furthermore, they can control the timing of the word line signal so that it will be active for a minimum time. For example, a read cycle in a typical memory consists of decoding the address and enabling the correct word line. After an internal self-timed delay, a desired signal differential has accumulated on the bit lines. This differential is sensed and then buffered to the outputs.
By adjusting the self-timing circuitry, the SRAM designer can tune the memory to operate as fast as possible by sensing a minimum differential. The same circuit can be used to tune the trailing edge of the word line pulse so that feedthrough current in the bit cells of unaccessed columns is reduced. The word line timing is slightly different for a write cycle: the starting edge of the pulse is tuned so that it rises only when the bit lines have been discharged and are ready for a write. Tuning the word line pulse timing differently for read and write cycles helps minimize feedthrough current on the active row.
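The charge saved by trimming the word line pulse can be sketched as Q = I·t (the pulse widths and row current below are hypothetical, not measured values):

```python
# Feedthrough charge drawn through the accessed row per cycle: Q = I * t.
# Illustrative numbers only.

def feedthrough_charge_pc(i_row_ua, pulse_ns):
    """Charge (pC) conducted by the active row during one word line pulse."""
    return i_row_ua * 1e-6 * pulse_ns * 1e-9 * 1e12

untuned = feedthrough_charge_pc(320.0, pulse_ns=2.0)  # word line high for most of the cycle
tuned = feedthrough_charge_pc(320.0, pulse_ns=0.8)    # trailing edge pulled in by self-timing
print(f"untuned: {untuned:.2f} pC, tuned: {tuned:.2f} pC per access")
```

The self-timed trailing edge effectively shortens t, cutting the per-access feedthrough charge proportionally.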
Stopping the leaks
Standby power dissipation in most CMOS circuits is due to leakage currents. With the increased leakage inherent in the short-channel, low-threshold and thin-oxide devices of 0.13-micron, 90-nm and 65-nm processes, the standby-power techniques previously developed for battery-powered applications are becoming relevant to other applications as well. In large SRAM designs, a major contributor to total leakage is the sum of all bit cell leakages. Assuming the bit lines are precharged high, three devices in each cell will have subthreshold leakage from drain to source. For 0.18-micron processes, subthreshold current is the major source of leakage.
SRAM designers can combat subthreshold leakage in many ways. The first step is to reduce bit cell leakage by back-biasing the NMOS devices of the memory array. With this standby technique applied to the array, a positive potential exists between source and bulk, raising the transistor threshold and thereby reducing subthreshold leakage in the two leaky NMOS devices. A key advantage of the array standby technique is that the memory can still be accessed, with only a small penalty to overall performance.
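The mechanism can be sketched with textbook first-order device equations; the body-effect coefficient, slope factor and threshold below are hypothetical 0.18-micron-class values, not figures from the article:

```python
import math

# First-order MOSFET models with illustrative parameters.
VT_THERMAL = 0.026   # kT/q at room temperature, volts
N_FACTOR = 1.5       # subthreshold slope factor
GAMMA = 0.4          # body-effect coefficient, V^0.5
PHI_2F = 0.7         # 2*phi_F surface potential, volts

def threshold_v(vt0, vsb):
    """Body effect: Vt rises with source-to-bulk potential Vsb."""
    return vt0 + GAMMA * (math.sqrt(PHI_2F + vsb) - math.sqrt(PHI_2F))

def subthreshold_leak(i0, vt):
    """Subthreshold current falls exponentially with Vt (Vgs = 0)."""
    return i0 * math.exp(-vt / (N_FACTOR * VT_THERMAL))

# Back-biasing the array creates a positive Vsb, raising Vt:
active = subthreshold_leak(1.0, threshold_v(0.45, vsb=0.0))
standby = subthreshold_leak(1.0, threshold_v(0.45, vsb=0.3))
print(f"array standby cuts subthreshold leakage roughly {active / standby:.1f}x")
```

Because the dependence on Vt is exponential, even a threshold shift of a few tens of millivolts yields a several-fold leakage reduction per cell, multiplied across every cell in the array.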
The accompanying figure illustrates standby power reduction using the array standby technique vs. applying no leakage-reduction technique. A similar leakage-reduction technique can be applied to all devices in the memory, including the periphery logic. This complete-standby technique provides even greater subthreshold leakage reduction, but it slows all devices in the memory. Therefore, additional control logic must actively back-bias the memory during idle periods and remove the bias during activity.
These techniques, however, do not address gate leakage, which is becoming a significant leakage source under typical conditions. To reduce gate leakage, the SRAM designer can lower the supply voltage when the memory is idle. But this method has drawbacks. One is the power consumed in switching the power-supply node capacitance; the designer should therefore enable this power-down mode only if the memory will be idle long enough for a net power saving. Another is that the lower voltage limit is set by the stability of the bit cell: the chip designer cannot lower the voltage past the point at which the bit cell contents become unstable.
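The break-even condition for entering power-down can be sketched as follows (the supply-node capacitance and leakage figures below are hypothetical):

```python
# Entering and leaving power-down costs roughly C * Vdd^2 to cycle the
# power-supply node; power-down only pays off once the leakage saved
# during the idle period exceeds that cost. Illustrative numbers.

def break_even_idle_s(c_supply_f, vdd, p_leak_saved_w):
    """Idle time beyond which power-down saves net energy."""
    return c_supply_f * vdd**2 / p_leak_saved_w

t = break_even_idle_s(c_supply_f=500e-12, vdd=1.8, p_leak_saved_w=100e-6)
print(f"enable power-down only if idle > {t * 1e6:.1f} us")
```

A practical controller would compare the expected idle interval against this threshold before committing to power-down.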
SRAM designers can also isolate the power mesh of the memory array from that of the periphery logic. When the SRAM is put into power-down mode, the periphery voltage can then be dropped to a minimum while the memory elements stay at a voltage high enough to retain their contents. This yields maximum power reduction, with the array in standby mode and the periphery in power-down mode.
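A rough sketch of the saving from split power meshes (the linearized leakage coefficients and voltages below are hypothetical):

```python
# Linearized leakage model: each mesh draws I = k * V, so P = k * V^2,
# with k in uA per volt. Illustrative coefficients only.

def standby_power_uw(array_k_ua_per_v, v_array, periph_k_ua_per_v, v_periph):
    """Total standby power (uW) with independently supplied meshes."""
    return array_k_ua_per_v * v_array**2 + periph_k_ua_per_v * v_periph**2

shared = standby_power_uw(50.0, 1.8, 200.0, 1.8)  # one mesh: both held at Vdd
split = standby_power_uw(50.0, 1.0, 200.0, 0.0)   # array at retention voltage, periphery off
print(f"shared mesh: {shared:.0f} uW, split mesh: {split:.0f} uW")
```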
A variety of design techniques can lower active and standby power and thus reduce overall power dissipation. The best mix of techniques depends on which aspects of power dissipation you are trying to reduce and on the technology you are designing in.
Jeremy Brumitt is senior design engineer and Cameron Fisher is director of design engineering at Virage Logic Corp. (Fremont, Calif.).
See related chart