As the industry moves towards 16nm technology adoption, the demand for greater functionality AND extended battery life has created a perfect storm. Customers want more for less on two fronts: more battery life (less power consumption) for less cost. Add the demand for environmentally friendly technology, and this challenge reaches crisis proportions, affecting both low-power and high-performance market segments, including the trending Internet of Things (IoT), wearables, sensors, and other hot markets. One creative approach to subduing these conflicting forces recognises this power crisis as it applies to memory-intensive devices, whether for high-performance or low-power applications, and particularly those with high-duty-cycle SRAM.
As the percentage of memory incorporated into System-on-Chip (SoC) devices exceeds 50% of active die area [source], the power management challenges become more acute. By 2017, the die area occupied by memory will exceed 70%, placing even greater constraints on product designers to manage power effectively.
While the impact of power management will be felt throughout the industry, the two pathfinder applications identified by sureCore Limited, a leading-edge, low-power SRAM IP company, occupy the two extremes of power management.
The first use case sensitive to power consumption covers new devices addressing IoT remote-sensing applications. This case is characterised by small physical size, low cost, and (ideally) the ability to operate without batteries; energy is instead supplied from renewable sources such as light, heat, or mechanical energy, with the circuit's power needs 'scavenged' or 'harvested' from these sources. As many IoT applications are designed around small, wirelessly connected, low-cost sensor nodes, the power challenge becomes a system-level problem: trading the ability to store and process information locally against the power demanded by wireless communication to a more computationally intelligent, centralised node.
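That system-level trade-off can be made concrete with a back-of-the-envelope energy budget. The sketch below compares streaming raw samples over the radio against processing locally and transmitting only a small summary; every number in it is a hypothetical placeholder chosen for illustration, not a measured figure for any real sensor node or radio.

```python
# Illustrative sense-node energy budget: transmit raw data vs. process locally.
# All constants are assumed, not measured.

SAMPLES_PER_HOUR = 3600          # one reading per second (assumed)
BYTES_PER_SAMPLE = 2             # 16-bit sensor reading (assumed)
E_TX_PER_BYTE_UJ = 1.0           # radio energy per byte, in microjoules (assumed)
E_CPU_PER_SAMPLE_UJ = 0.05       # local processing energy per sample (assumed)
SUMMARY_BYTES = 16               # e.g. an hourly min/max/mean digest (assumed)

def raw_streaming_uj():
    """Energy per hour if every raw sample is shipped over the radio."""
    return SAMPLES_PER_HOUR * BYTES_PER_SAMPLE * E_TX_PER_BYTE_UJ

def local_processing_uj():
    """Energy per hour if data is reduced locally and only a digest is sent."""
    return (SAMPLES_PER_HOUR * E_CPU_PER_SAMPLE_UJ
            + SUMMARY_BYTES * E_TX_PER_BYTE_UJ)

print(f"Stream raw:      {raw_streaming_uj():.0f} uJ/hour")
print(f"Process locally: {local_processing_uj():.0f} uJ/hour")
```

Under these assumed figures, local processing wins by well over an order of magnitude, which is exactly why low-power on-chip memory for local storage and computation matters so much in these nodes; with a different radio or workload the balance could shift, which is why it is a system-level decision.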
The second use case lies in high-performance, computationally intense applications such as graphics processors, network processors, or network routing engines. In these applications, the challenge stems from on-chip power dissipation generating excess heat that must be transferred away from the silicon to maintain reliability and operational efficiency. The impact on reliability is well documented, but the commercial impact is less well understood and can be split into three separate benefits: operational savings, capital savings, and environmental savings.
The operational savings are realised through the direct reduction in equipment energy plus the energy saved in heat management. Although this second term varies greatly depending upon equipment scale and complexity, a reasonable heat management solution for, say, a rack-mount server or network processor consumes about 50-60% of the heat energy generated. That is, for every 10 W of heat generated, it takes around 5-6 W to manage the heat away.
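The arithmetic above compounds in the chip's favour: every watt saved on-chip also saves the fraction of a watt spent removing its heat. A minimal sketch of that multiplier, using the 50-60% overhead figure from the text (the 55% midpoint and the example chip powers are illustrative assumptions):

```python
# Operational-savings arithmetic: power at the wall is device power plus the
# energy spent managing its heat away. The 0.55 overhead is the midpoint of
# the 50-60% range quoted in the text; chip powers are hypothetical.

def total_power(chip_power_w, cooling_overhead=0.55):
    """Total power drawn: device power plus cooling power for its heat."""
    return chip_power_w * (1 + cooling_overhead)

baseline = total_power(10.0)   # a 10 W device costs ~15.5 W at the wall
low_power = total_power(5.0)   # halving chip power also halves cooling power
saving = baseline - low_power

print(f"Baseline:  {baseline:.2f} W at the wall")
print(f"Low power: {low_power:.2f} W at the wall")
print(f"Saving:    {saving:.2f} W per device")
```

So a 5 W on-chip saving delivers roughly 7.75 W of operational saving per device under these assumptions, before the capital and environmental effects discussed next are even counted.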
Then there's the matter of capital savings. Effective heat management requires equipment space, material, and real estate. Reducing power consumption means smaller heatsinks, fans, power supplies, and chassis components. When scaled to a typical server farm, this translates into denser (or smaller) racks, culminating in a reduction of floor area for a given performance metric. The drive towards cloud computing and storage has led major compute-farm operators (Google, Apple, etc.) to locate their new facilities where low-cost electrical power and environmental conditions are favourable. Iceland, for example, has been identified as an attractive location given its cheap electrical power and low ambient temperatures.
The third benefit is environmental. According to Jeff Monroe, head of Verne Global, a data centre company in Iceland, "The data centre industry now is on par with the airline industry as far as the carbon footprint." As environmental concerns and the increasing cost of energy take hold, organisations will seek to offset their energy consumption by adopting energy-reduction technologies such as those invented by sureCore: the so-called Green Computing initiatives.
So the driving question becomes: how do we accrue these cost savings? sureCore has conducted a root-and-branch analysis of power consumption in next-generation SoCs. Looking at the SRAM as a source of power savings, the company has designed low-power SRAM IP that can achieve up to 50% power savings at comparable performance when compared with the best existing solutions. sureCore's development team attacks the challenges at the system, block, and schematic levels to deliver the best available power savings. We want the industry to continue along the power-savings path, and this blog is the first in a series that will look at low-power challenges in large SoCs. What challenges are you seeing? Share your power experiences, comments, and challenges in any and all applications. It is through this dialogue that we will ultimately subdue the power management storm.