LOS ANGELES - A paradox in system-on-chip design is emerging, according to a panel held Wednesday (June 7) at the Design Automation Conference. While more designs are expected to differentiate by relying on software rather than hardware, pressure on power consumption is forcing designers to reconsider the hardware option.
Speaking at the "Future Systems-on-Chip" panel, Bob Brodersen, professor of electrical engineering at the University of California, Berkeley, said: "Software architectures are at least 100 times less efficient in power and area than hardware. That gap will increase."
Brodersen said that a programmable DSP with one multiplier occupies 25 mm2, but the multiplier on its own takes up just 0.05 mm2.
"We are trying to time-multiplex the processor," Brodersen said. "You have to add more and more memories to time-multiplex some more. Why do we do that if the hardware is so small?
"In a chip measuring 50 mm2, you could fit 2,000 adders or 200 multipliers," he said. "If we could use all of them, with a 25-MHz clock, which makes the design problem real easy, you would get 50,000 million operations per second at 100 mW."
Brodersen continued: "That is a huge amount of parallelism. The question is how do we use that? The problem is trying to make C fit hardware design. That has really missed the point.
"You start with a parallel description, code it into C and you lose the parallelism," he said. "Something is wrong with this picture. You need to start with a parallel description of the algorithm and then map it directly."
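The arithmetic behind the panel's throughput claim is easy to verify. A minimal back-of-envelope sketch (the per-unit areas are derived from the quoted figures, not stated directly in the talk):

```python
# Back-of-envelope check of the numbers quoted at the panel.
chip_area_mm2 = 50.0     # "a chip measuring 50 mm2"
adders = 2000            # "you could fit 2,000 adders"
multipliers = 200        # "... or 200 multipliers"
clock_hz = 25e6          # "a 25-MHz clock"

# Implied per-unit areas (derived, not stated in the talk):
adder_area_mm2 = chip_area_mm2 / adders        # 0.025 mm2 per adder
mult_area_mm2 = chip_area_mm2 / multipliers    # 0.25 mm2 per multiplier

# If every adder fires on every clock cycle:
ops_per_sec = adders * clock_hz                # 5e10 operations per second
print(f"{ops_per_sec / 1e6:,.0f} million operations per second")
# prints "50,000 million operations per second"
```

The same figures also illustrate the area gap cited earlier: a 25-mm2 programmable DSP built around a single 0.05-mm2 multiplier is a 500-to-1 overhead for programmability.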
While hardware is often the preferred choice, said Bob Colwell, chief technical officer for Intel's connected products division (Hillsboro, Ore.), "there is a limit to how closely tied you can make hardware to the application.
"Most differentiation will be seen in software," Colwell said. "That's not because it is a great solution but because it's hard to change hardware and stay on schedule."
Brodersen said: "Everyone believes that software is much faster to implement than hardware. Why do people believe it? Because it doesn't have to work."
Danesh Tavana, vice president of engineering for Triscend (Mountain View, Calif.), said that software-oriented design massively reduced risk for many designers.
"You can get an incorrect product out there and then retrofit with the right software," said Tavana.
One way of reducing the massive performance overhead of moving functions to software lies in the use of reconfigurable logic. The wireless design group at UC Berkeley has designed a wideband CDMA architecture based on computing elements that can be rewired in different configurations to perform channel decoding and other DSP-oriented functions.
Mark Edwards, hardware manager for Cisco Systems' EWAN business unit (Research Triangle Park, N.C.), said that the reconfigurable approach was now being designed into some of his company's communications products.
"In communications applications, a lot of customization has to happen," Edwards said. "For some time, we have been saying how nice it would be to have embedded FPGA in ASICs. Some of the larger ASIC manufacturers have only been toying around with the idea. And the FPGA vendors only want to meet for coffee with the ASIC manufacturers."
Cisco has started to design chips that incorporate reconfigurable logic using a coarser level of granularity than FPGA implementations.
"We are trying to parallelize the packet-processing problem. There have been some impressive computational benchmarks with this technology," Edwards said.
"But we have a lot of algorithms in C," he continued. "There are eight million lines of code in IOS [Cisco's operating system]. Certain feature sets are important to different customers, so we want to deploy certain C algorithms in parallel hardware. That is a process that will need a lot of automation. A lot of it is being done manually now and that's not a pleasant experience," Edwards said.
Chris Rowen, president and chief executive officer of Tensilica (Santa Clara, Calif.), said that configurable processors would lead to a change in the way that we view chip designs.
"SoC [system-on-chip designs] will become a sea of processors," Rowen said. "You will have 10 to maybe a thousand processors on a chip. The individual processors themselves are tiny, so you can do that. You end up with something that is almost the best of both worlds, between the flexibility of FPGAs and processors and hard-wired logic."
The problem then becomes one of designing architectures that minimize the on-chip memory resources and delays inherent in conventional processor designs. However, completely efficient techniques may not be needed.
Brodersen said, "When you look at how far ahead of software this kind of architecture is, even if you screw up 10 times, you are still ahead. The thing is to decide how you can screw it up the least."
Brodersen's group at UC Berkeley is working on an architecture that uses processing 'strips' to reduce the complexity of on-chip routing resources between processing elements.
"There will be feedback paths, but most of the data will be flowing in one direction," said Brodersen.
Partitioning decisions will have to be steered by considerations other than just power consumption or flexibility.
"Hardware engineers at least have a record of designing things that are running in safety-critical environments," Colwell said. "Software has had some spectacular failures there. Software interlocks don't work the same way as hardware interlocks."
Colwell warned that there are other, hidden dangers of devolving large parts of the application to software control.
"Any visible processor becomes a target for various virus writers and other wackos. Anytime you can reprogram something, someone will and they won't be doing anything good," said Colwell.
Chris Edwards is a writer with Electronics Times, a sister publication of EE Times in the United Kingdom.