No More Cores, No Higher Frequencies, says Intel

No wonder Intel’s getting all fired up about its process technology – announcing both an acceleration in its process geometry roadmap and a switch to Finfets at 22nm. The reason: it has nowhere else to go now that both more speed and more cores are ruled out of the company’s technology roadmap.

Last week, Dylan Larson, director of the Xeon platform at Intel, said that the days of designing chips with higher frequencies and greater numbers of cores might be coming to an end.

It was leakage which put paid to higher speeds, and it’s the impossibility of spreading general purpose programming efficiently across more than four cores which has put the kibosh on more cores.
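
To put rough numbers on that claim: Amdahl’s Law caps the speedup from extra cores by the fraction of a program that has to run serially. A minimal sketch in Python, assuming an illustrative workload that is 90 per cent parallelisable (the figure is an assumption, not one of Larson’s):

    # Amdahl's Law: speedup on n cores when a fraction p of the work
    # can be parallelised and the remaining (1 - p) stays serial.
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    # Illustrative 90%-parallel workload: each doubling of cores
    # buys less once the serial 10% starts to dominate.
    for cores in (1, 2, 4, 8, 16, 64):
        print(f"{cores:>2} cores -> {amdahl_speedup(0.9, cores):.2f}x")

On those assumed numbers, four cores give roughly a 3x speedup, eight give under 5x, and even 64 stall below 9x, which is the arithmetic behind the claim that general purpose code runs out of road at a handful of cores.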

So all that’s left is increasing density. Larson said that Intel’s customers were looking at ‘extreme density’ – which, while Intel’s process lead continues, is the only way Intel can differentiate its processors from everyone else’s.

This is an exciting time for the semiconductor industry: the tectonic plates are shifting, the old order is passing – giving way to the new.


Comments

  1. Thanks, Mike

  2. Amdahl’s Law still applies and so, as you say, adding cores to random tasks gives a diminishing return. What we have got better at is scheduling tasks. Modern OSs run multiple threads even on a single processor, so we have gradually created parallelism in our working. However there is always the non-reducible single task which won’t ‘parallelise’. AMD were addressing this by allowing uneven speeds between their cores, but since sacking their CEO I have no idea what they are up to. But I do think a single ‘dynamically adjustable up to lots of GHz’ modified 64-bit x86 core with restricted functionality (no floating point, vector operations, etc.) and a mix of a dozen or so normal x86 and Atom cores would keep most people happy.

  3. My understanding, very deficient as it may be, Mike, is that no one has come up with a way of efficiently partitioning multiple tasks across multiple cores in a general computing environment – i.e. one where the tasks come in randomly and have to be spread across the cores so that they are all loaded fairly equally. That’s in contrast to special-purpose computing environments where the cores are all programmed to do a task and don’t need to have the tasks spread between them in real time. In this latter case the more cores the merrier; in the former case, six is about max. Am I wrong? Simplistic? Or getting there? (See the scheduling sketch after these comments.)

  4. I think he was only talking about servers, David.
    I’m happily using over 100 cores at the moment for a background simulation task that’s been running for several days already. Parallel computing has come a long way recently, mainly thanks to NVidia. However Intel have their Knights Corner 50-core parallel processor on the way, so Intel also definitely haven’t given up on this route.
    However pro-gamers (a surprisingly large market) have been screaming for a >5 GHz processor at whatever power it takes for years, and Intel do need to up their game here with something a lot better than the Extreme Edition processors. Hopefully there’s a little skunkworks team (as Atom originally was) cooking up something using their 22nm technology.
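
On the scheduling question raised in comments 2 and 3: below is a minimal sketch, using only Python’s standard library, of how randomly sized, independent tasks can be spread dynamically across a small pool of cores without manual partitioning. The workload and the pool size of four are made-up stand-ins, not anything from the article or the comments.

    # Dynamic scheduling sketch: independent tasks of random size are
    # handed to whichever worker process is free, so the cores stay
    # roughly evenly loaded without any up-front partitioning.
    import random
    from concurrent.futures import ProcessPoolExecutor, as_completed

    def task(n):
        # Stand-in for one independent unit of work.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        jobs = [random.randint(10_000, 1_000_000) for _ in range(100)]
        with ProcessPoolExecutor(max_workers=4) as pool:
            futures = [pool.submit(task, n) for n in jobs]
            results = [f.result() for f in as_completed(futures)]
        print(len(results), "tasks completed")

The catch, as comment 2 points out, is the single non-reducible task: no scheduler can split it, so it sets a floor on runtime however many cores are available.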
