Industry Expert Blogs
Designing for the Future - Managing the Impact of Moore's Law (Cadence IP Blog) - Tom Wong, Cadence - May 24, 2019
With Moore’s Law, the industry assumes that moving from one geometry to the next finer node automatically yields performance gains. Chip designers have leveraged improvements in process technology for performance for many years now. Let’s examine whether this assumption is still valid.
At more mature technologies, such as 90nm to 65nm, there were observable and immediate benefits. But at 28nm and below, SoC performance is dictated more by the interconnect (metal system) than by transistor performance. You have probably noticed that mainstream CPUs for PCs and laptops have hovered between 2GHz and 3GHz, because Moore’s Law scaling can no longer deliver performance gains in terms of clock speed. Something other than migrating to the next finer node has to be done to get more performance; this is when CPU designs went from single core to dual core to quad core. Running devices at a high clock rate also gets you into trouble with heat (thermal issues) and drives up packaging and cooling costs.

Unless you are designing for servers in the datacenter, low power is the most important spec. Even chips used in modern-day datacenters are not designed for performance at all costs. After all, one of the major costs of running a datacenter is electricity (power). This is why you see mega-scale datacenters located near hydroelectric power plants, where the price of electricity is lower.
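The thermal trouble at high clock rates follows from the standard dynamic CMOS power relation, P ≈ α·C·V²·f: power scales linearly with frequency, but pushing frequency higher typically also requires raising the supply voltage, whose contribution is quadratic. A minimal sketch of that arithmetic (the chip parameters below are illustrative assumptions, not figures from this post):

```python
# Dynamic CMOS switching power: P = alpha * C * V^2 * f
# alpha: switching activity factor, C: effective switched capacitance (F),
# V: supply voltage (V), f: clock frequency (Hz)

def dynamic_power(alpha: float, c_farads: float, v_volts: float, f_hz: float) -> float:
    """Return dynamic switching power in watts."""
    return alpha * c_farads * v_volts ** 2 * f_hz

# Illustrative (made-up) parameters: activity factor 0.2, 1 nF switched capacitance
alpha, c = 0.2, 1e-9

base = dynamic_power(alpha, c, 1.0, 2e9)    # 2 GHz at 1.0 V
faster = dynamic_power(alpha, c, 1.2, 3e9)  # 3 GHz, assuming it needs ~1.2 V

print(f"2 GHz @ 1.0 V: {base:.2f} W")
print(f"3 GHz @ 1.2 V: {faster:.2f} W ({faster / base:.2f}x)")
```

Under these assumed numbers, a 1.5x clock increase with the accompanying voltage bump costs roughly 2.2x the dynamic power, which is why designers moved to more cores at moderate clocks rather than ever-higher frequencies.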