By Bob Zeidman, Zeidman Technologies Jan 6 2006 (9:00 AM), Embedded.com
The next great revolution in computer architecture is certain to be multiprocessing, just as it has always been – always right around the corner.
It seems that multiprocessing is the pigeon and the computer scientist is the child chasing it, only to have it take flight right before capture. But maybe it really is within grasp this time, because of a number of enabling technologies.
One technology is networking, which allows programs and threads to be distributed over large networks. Large chunks of data can be transferred between processors quickly, relative to the total processing time.
Another technology that is enabling multiprocessing is the incredible shrinking transistor. This means that the functionality that previously required a printed circuit board or several boards in an entire system can now be placed on a single chip. Thus is born the system on a chip or “SOC.”
Throughout the history of computer design, there has always been a tradeoff between processors and fixed hardware. Even the first computers were really fixed program machines that we would recognize today as finite state machines. Processors, which use modifiable stored programs to control functionality, are much slower and more expensive, in terms of hardware costs, size, and power consumption, compared to state machines. The advantage of processors is their flexibility and ease of use.
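The tradeoff is easiest to see in miniature. A minimal sketch, assuming a made-up serial framing protocol (a 0x7E start byte, a length byte, then that many payload bytes): the table of states and transitions below is exactly the kind of fixed-function logic that could be burned into gates as a hardware state machine, or, as here, expressed as a few lines of C running on a processor.

```c
#include <assert.h>
#include <stdint.h>

/* States of a hypothetical frame receiver. In fixed hardware these would
   be flip-flops and next-state logic; here they are a stored program. */
typedef enum { WAIT_START, READ_LEN, READ_DATA, FRAME_DONE } state_t;

typedef struct {
    state_t state;
    uint8_t len;       /* declared payload length      */
    uint8_t received;  /* payload bytes seen so far    */
} fsm_t;

/* Advance the machine by one input byte; returns 1 once a frame is complete. */
static int fsm_step(fsm_t *f, uint8_t byte) {
    switch (f->state) {
    case WAIT_START:
        if (byte == 0x7E) f->state = READ_LEN;   /* wait for start marker */
        break;
    case READ_LEN:
        f->len = byte;
        f->received = 0;
        f->state = (byte == 0) ? FRAME_DONE : READ_DATA;
        break;
    case READ_DATA:
        if (++f->received == f->len) f->state = FRAME_DONE;
        break;
    case FRAME_DONE:
        break;                                    /* stay done until reset */
    }
    return f->state == FRAME_DONE;
}
```

The hardware version of this machine would react in a single clock cycle; the software version takes many instructions per byte but can be changed with a recompile rather than a respin, which is the flexibility advantage the article describes.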
The line engineers draw between processors and fixed hardware has moved steadily in the direction of more processors. Mainframes had one programmable central processing unit surrounded by fixed hardware to control peripherals. The microprocessor has changed hardware design such that small, inexpensive processors now control just about every computer peripheral and most complex electronic devices.
Nowadays transistors are very cheap and very fast. The speed and cost disadvantage of a processor over a finite state machine is usually overwhelmed by the flexibility advantage. And SOCs perform very complex, high-level functions. As John Hennessy, founder of MIPS Technologies and president of Stanford University, has pointed out, writing a program on a chip to run a word processor is a lot easier than creating a state machine to do it.
For this reason, many chip vendors are encouraging the use of many small processors to replace many small state machines. One processor may control a serial port while another controls a USB interface and yet another performs error detection and correction on an incoming Ethernet packet.