Ludovic Larzul - Vice President of Engineering, EVE
EETimes (6/4/2012 10:00 AM EDT)
Power has pushed performance out of the spotlight. Yes, speed still matters, and, especially in mobile applications, we haven’t yet reached that desktop level of “good enough.” So there’s more work to do to make mobile devices faster, even as more duties are piled on.
But people won’t buy a phone that they have to recharge every couple of hours, no matter how fast it is. So it’s fair to say that power has to be consulted before performance gets to have its way. And the search for power savings has now reached every level of the design hierarchy. It used to be pushed down to the circuit designers, but low-level techniques, like the use of multi-VT transistors and clock gating, bring only moderate gains in the power struggle.
The real game is now at the system level, and this involves software as well as hardware. In particular, the ability to shut off parts of the circuit when not in use has become an important consideration. In fact, there’s been something of an attitude shift in some quarters: instead of starting with an “everything on” default, and turning things off when you can, start with an “everything off” assumption and turn on only what you need.
Regardless of which of these angles you take, you end up with power “islands” on the chip that can be on or off, and control of each island’s power state can come from hardware or from software (low-level firmware or even an application). This reflects an endpoint in the ongoing trajectory away from monolithic power. And it adds new verification challenges, especially at the emulation stage, when an SoC is being validated before the actual chip circuitry is complete.