Ron Wilson, IntelFPGA
The annual Hot Chips conference in Silicon Valley offers a reliable window into the architectural thinking of both CPU giants and exciting start-ups. This year proved to be no exception, as architects squared off against the limitations of physics and the demands of workloads, with special attention going to the trending task of the year, deep learning. Taken together, the papers could almost be read as a celebration of heterogeneous computing with hardware accelerators.
Naturally, much attention focused on the headline chips: server-class CPUs. AMD, IBM, and Intel each presented their current offering. Interestingly, AMD’s Epyc and Intel® Xeon® Scalable processors were based on cores—Zen and Skylake, respectively—presented at last year’s conference.
On the surface, the two cores are similar. Both offer four integer and two floating-point pipelines, along with address calculation and load-store units. Both fetch a word from the instruction cache wide enough to take in several of the shorter x86 instructions per cycle. The cores crack these complex, variable-length instructions into micro-operations, which they mark to indicate data dependencies and load into huge buffers. Out-of-order dispatch units then select micro-ops that are ready to execute and stuff them into available execution pipes.
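The dispatch logic described above can be sketched in a few lines: a micro-op becomes ready once every register it reads has been produced, and each cycle the scheduler fills as many pipes as it can from the ready pool. This is a toy illustration only, assuming a simplified scoreboard model; the real schedulers in Zen and Skylake are far more elaborate (register renaming, multiple scheduler queues, variable latencies), and the op names and register labels here are hypothetical.

```python
# Toy sketch of out-of-order dispatch (illustrative, not how either
# vendor actually implements it). Each micro-op names the registers it
# reads and the one register it writes; an op is ready once everything
# it reads has already been produced.
from collections import namedtuple

MicroOp = namedtuple("MicroOp", "name reads writes")

def dispatch(buffer, num_pipes=4):
    """Issue micro-ops out of order, up to num_pipes per cycle."""
    produced = set()        # registers whose values are now available
    pending = list(buffer)  # the big out-of-order buffer
    schedule = []           # names of the micro-ops issued each cycle
    while pending:
        # Select micro-ops whose input registers have all been produced.
        ready = [op for op in pending if all(r in produced for r in op.reads)]
        issued = ready[:num_pipes]  # stuff them into the available pipes
        if not issued:
            raise RuntimeError("deadlock: unmet dependency")
        for op in issued:
            pending.remove(op)
            produced.add(op.writes)
        schedule.append([op.name for op in issued])
    return schedule

# A tiny dependency chain: c reads the results of a and b; d is independent.
ops = [
    MicroOp("a", reads=[], writes="r1"),
    MicroOp("b", reads=[], writes="r2"),
    MicroOp("c", reads=["r1", "r2"], writes="r3"),
    MicroOp("d", reads=[], writes="r4"),
]
print(dispatch(ops, num_pipes=2))  # c must wait a cycle for r1 and r2
```

With two pipes, independent ops a and b go out in the first cycle, and c and d fill the second; widen the machine to four pipes and d jumps ahead of c, which is exactly the reordering the huge buffers exist to enable.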
The point of all this trouble and hardware is to extract every last drop of instruction-level parallelism from single-thread code. As we move to new process generations, transistors may get somewhat faster, but interconnect takes back much of the improvement, causing block-level maximum clock frequencies to level off. But each new node does provide a significant increase in transistor density. So processor core designers are lavishing transistors on circuits that can increase the effective number of instructions per clock on benchmarks. AMD, for instance, claims it is getting 50 percent more IPC on some codes with Zen than with its previous-generation core.