When a conventional processor (core) cannot meet the needs of a target application, it becomes necessary to evaluate alternative solutions such as multiple cores and/or configurable cores.
The first commercial microprocessor was the Intel 4004, introduced in 1971. This device had a 4-bit CPU with a 4-bit data bus and a 12-bit address bus (the data and address buses were multiplexed through the same set of four pins because the package was pin-limited). Comprising only 2,300 transistors and running with a system clock of only 108 kHz, the 4004 could execute only 60,000 operations per second.
For the majority of the three and a half decades since the 4004's introduction, increases in computational performance and throughput have been achieved largely through a handful of relatively obvious techniques:
- Increasing the width of the data bus from 4 to 8 to 16 to 32 to the current 64 bits used in high-end processors.
- Adding (and then increasing the size of) local high-speed cache memory.
- Shrinking the size – and increasing the number – of transistors; today's high-end processors can contain hundreds of millions of transistors.
- Increasing the sophistication of processor architectures, including pipelining and adding specialized execution blocks, such as dedicated floating-point units.
- Increasing the sophistication of such things as branch prediction and speculative execution.
- Increasing the frequency of the system clock; today's high-end processors have core clock frequencies of 3 GHz and higher.
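The pipelining mentioned above is easy to quantify with a toy model (a sketch for illustration only, not a model of any real processor; the stage and instruction counts are arbitrary). Overlapping the stages of successive instructions means that, once the pipeline is full, one instruction completes per clock cycle instead of one per *N* cycles:

```python
def cycles_unpipelined(n_instructions, n_stages):
    # Each instruction occupies the entire datapath for n_stages cycles
    # before the next one can begin.
    return n_instructions * n_stages

def cycles_pipelined(n_instructions, n_stages):
    # The first instruction takes n_stages cycles to fill the pipeline;
    # after that, one instruction completes every cycle (ignoring stalls
    # from hazards and branch mispredictions).
    return n_stages + (n_instructions - 1)

if __name__ == "__main__":
    n, stages = 1000, 5
    print(cycles_unpipelined(n, stages))  # 5000 cycles
    print(cycles_pipelined(n, stages))    # 1004 cycles
```

For 1,000 instructions on a 5-stage pipeline, the ideal speedup approaches 5x, which is exactly why branch prediction and speculative execution (the next item in the list) matter so much: every misprediction drains the pipeline and forfeits that overlap.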
The problem is that these approaches can only go so far, with the result that traditional techniques for increasing computational performance and throughput are starting to run out of steam. In this article, we take a "50,000-foot" view of the hardware portion of the computing universe and introduce a wide variety of existing and emerging solutions, including the use of multiple processors and the concept of configurable (and reconfigurable) processors.