During the last two decades, CMOS technologists have used scaling to deal with ever-increasing system complexity and performance requirements. Digital abstraction allowed process technology to evolve virtually independently of intellectual-property (IP)-based system design. Cross-fertilization between the two worlds was minimal, and high-cost computing and low-cost consumer products could use not only the same process but, to a great extent, the same design technology. Below 130 nanometers, however, new problems arise with far-reaching consequences.
First, Moore's Law for power goes the wrong way for the conventional monoprocessor; particularly in view of wearable computing, innovative architectures are required to reduce power consumption by several orders of magnitude.
Second, the "era of happy scaling" meets its physical boundaries: Static power consumption increases with each scaling node, affecting both high-performance and low-power app lication domains; interconnect delay increasingly impacts the transistor's electrical performance and kills the global synchronous concept; scaling requires a drop in supply voltage and makes analog circuits lose their voltage headroom; process tolerances increase and lithography demands much more regular layout structures.
This has resulted in a so-called physical gap, pulling apart CMOS scalers and IP creators. The physical gap comes on top of another, the architectural gap, which separates system designers, who deal almost exclusively with services, standards and scenarios, from platform designers, who must conceive and then manage ever more complex hardware and software platforms.
At the same time, products are moving from expensive performance-driven processor components to cheap domain-specific system components (wireless, games, automotive, healthcare) that become systems-on-a-chip or microsystems-in-a-package. These systems must deliver high computational performance at low power consumption.
In this changing microelectronics landscape, continuing to scale "as usual" becomes inadequate. Below 130 nm, materials alone can no longer compensate for the drawbacks of scaling, especially with respect to static power consumption and wiring the transistors. The solutions proposed so far for these new challenges severely affect IP-block modeling and chip-architecture design. For example, interconnect delay can be partially compensated by abandoning the globally synchronous circuit concept and moving toward globally asynchronous, locally synchronous architectures.
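The globally asynchronous, locally synchronous (GALS) idea can be illustrated with a toy simulation. The sketch below is illustrative only (the names `Island` and `run_gals` are mine, not from any real design flow): two islands tick at unrelated local rates and exchange data solely through a FIFO with a validity handshake, so no global clock has to span the chip.

```python
# Toy GALS model: two locally synchronous "islands" with unrelated clock
# periods, coupled only by an asynchronous FIFO. Names are illustrative.
from collections import deque

class Island:
    def __init__(self, period, step):
        self.period = period        # local clock period, in global time units
        self.next_tick = period
        self.step = step            # work done on each local clock edge

def run_gals(producer_period, consumer_period, n_items, horizon):
    fifo = deque()                  # asynchronous FIFO between the islands
    produced, consumed = [], []

    def produce():
        if len(produced) < n_items:
            item = len(produced)
            produced.append(item)
            fifo.append(item)

    def consume():
        if fifo:                    # handshake: read only when data is valid
            consumed.append(fifo.popleft())

    islands = [Island(producer_period, produce),
               Island(consumer_period, consume)]
    for t in range(1, horizon + 1):  # global time exists only in the simulator
        for isl in islands:
            if t >= isl.next_tick:
                isl.step()
                isl.next_tick += isl.period
    return consumed

# Data crosses the clock-domain boundary intact despite unrelated rates:
# run_gals(3, 5, 4, 60) -> [0, 1, 2, 3]
```

The point of the sketch is that correctness rests on the FIFO handshake, not on any timing relation between the two clocks, which is what frees the floorplan from global clock distribution.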
The power crisis (the inability to run demanding applications at low power consumption on single-instruction-set processors) can be relieved by using parallel architectures, which in turn deeply affects the programming paradigm. Tackling the leakage problem will largely impact architectures, compilers and middleware. The increasingly regular structures required by advanced lithography will influence memory density and performance and will impact library design and processor assignment.
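A small sketch can make the paradigm shift concrete. The names here (`process_chunk`, `N_WORKERS`) are hypothetical, and the thread pool merely stands in for a set of slower, lower-voltage cores: the same workload must now be partitioned and its partial results merged by the programmer, which is exactly the change in programming paradigm that parallel architectures impose.

```python
# Illustrative only: re-expressing a single-processor workload as
# concurrent tasks over several (hypothetical) low-power cores.
from concurrent.futures import ThreadPoolExecutor

N_WORKERS = 4  # e.g. four slower, lower-voltage cores instead of one fast one

def process_chunk(chunk):
    # Stand-in for an embedded signal-processing kernel.
    return sum(x * x for x in chunk)

def energy_sequential(samples):
    # Single-instruction-stream formulation.
    return process_chunk(samples)

def energy_parallel(samples):
    # Parallel formulation: the programmer must partition the data and
    # merge partial results -- a different programming paradigm.
    size = (len(samples) + N_WORKERS - 1) // N_WORKERS
    chunks = [samples[i:i + size] for i in range(0, len(samples), size)]
    with ThreadPoolExecutor(max_workers=N_WORKERS) as pool:
        return sum(pool.map(process_chunk, chunks))
```

Both formulations compute the same result; the parallel one trades single-core speed for the ability to run each core at a lower frequency and supply voltage, which is where the power relief comes from.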
Therefore, it seems increasingly mandatory to rethink the complete chain, from system design down to process technology.
To obtain workable solutions in a timely manner, the physical and architectural gaps must be closed, and the four worlds involved (system design, platform design, IP creation and CMOS scaling) must pull together and talk to one another. More than ever, there is an urgent need to reconnect design technology at the system level to deep-submicron physics. One of the main endeavors will be to find the right people who can deliver the best of the four worlds: to find, or train, platform architects who understand the requirements of system designers, who have the skill to design platforms and who, at the same time, are aware of scaling's deep-submicron effects.
Being aware of the problem is one thing; solving it is another. While the ultimate limits of CMOS technology for reducing dynamic energy are being explored, much is expected from software transformations and novel architectures. An example is IMEC's software-washing-machine concept: By improving the locality of data transfer and computing, improving task concurrency and exploiting the dynamic nature of embedded software, an order-of-magnitude gain in power efficiency can already be obtained. A small step in the right direction, but many more are needed.

Hugo De Man is a senior fellow at IMEC, the Interuniversity Microelectronics Center (Leuven, Belgium).