EINDHOVEN, Netherlands - Philips Research is spinning out a reconfigurable computing architecture it has kept under wraps since the early 1990s to an incubator company called Silicon Hive, which will develop and license synthesizable intellectual-property (IP) cores.
The technology will shoulder its way into a substantial crowd of reconfigurable architectures that are already scrimmaging to achieve commercial success.
Aiming at a range of applications, including automotive, communications, consumer equipment and data processing, Silicon Hive will license IP blocks based on reconfigurable accelerators. The blocks will effectively "replace multiple ASIC blocks" in system-on-chip (SoC) designs, Atul Sinha, chief executive officer of Silicon Hive, told EE Times.
The reconfigurable cores can keep an SoC "fully programmable, even after fabrication, throughout the product cycle," Sinha said. "It means system OEMs can keep their system applications knowledge in-house."
The goal of replacing hard-wired ASIC functional blocks with reconfigurable blocks is also being pursued by such reconfigurable-computing startups as Elixent Ltd., picoChip Designs, Pact XPP Technologies, Morphics Technology and QuickSilver Technology. Indeed, one market watcher counted more than 40 companies, mostly in the incubation stage, pursuing some form of reconfigurable-computing IP.
Not only is the market crowded, but there are formidable barriers to success, according to analyst Nick Tredennick, editor of the Gilder Technology Report. "This is probably not a near-term opportunity for anyone," Tredennick warned. "The industry is still too oriented toward instruction-set-based computing."
Silicon Hive considers its experience and roots within Philips Research an ace up its sleeve. "Lots of embodiments of previous versions of these technology building blocks have been built and used internally," Sinha said. "This is not experimental stuff."
Acceptance by customers means not only having a computing fabric that delivers higher performance at lower power but also presenting the IP so that it appears conventional to the software developers, Tredennick said. Ideally, the development team could simply treat the reconfigurable fabric as a magic box that executes C at very high speed, without having to understand the internal architecture.
That puts a premium on software tools, and that's another place where Silicon Hive claims to shine. Sinha said the company has "solved the paradoxical problem of high performance vs. low energy consumption" by applying its advanced compiler technology and its parallel-processor architecture.
The key to hiding the architecture, ironically, was "moving the complexity to the compiler," said Silicon Hive cofounder Bernardo Kastrup, who oversees product and technology strategy. The new architecture dramatically reduces control overhead and exposes all pipeline management to the instruction set. "We give a raw-reality view to the compiler, and the compiler explicitly schedules all pipeline stages," Kastrup said.
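In rough terms, an exposed pipeline of this kind means the hardware has no interlocks; the compiler must know each operation's latency and pad dependence chains with no-ops so no instruction reads a result still in flight. The sketch below is purely illustrative of that idea; the operation names, latencies and instruction format are assumptions, not Silicon Hive's actual design.

```python
# Toy model of compiler-managed (exposed) pipelining: the "hardware"
# has no interlocks, so the scheduler statically inserts NOPs until
# every source operand of an instruction is ready. All details here
# (ops, latencies, encoding) are hypothetical.

LATENCY = {"add": 2, "mul": 3}  # cycles until a result becomes readable

def schedule(program):
    """Statically pad a single dependence chain with NOPs."""
    ready = {}            # register -> cycle its value becomes readable
    out, cycle = [], 0
    for op, dst, *srcs in program:
        # stall (emit NOPs) until every source operand is ready
        start = max([cycle] + [ready.get(s, 0) for s in srcs])
        out += [("nop",)] * (start - cycle)
        out.append((op, dst, *srcs))
        cycle = start + 1
        ready[dst] = start + LATENCY[op]
    return out

prog = [("mul", "r1", "r0", "r0"),   # r1 ready 3 cycles after issue
        ("add", "r2", "r1", "r0")]   # depends on r1 -> scheduler pads
print(schedule(prog))
# -> [('mul','r1','r0','r0'), ('nop',), ('nop',), ('add','r2','r1','r0')]
```

Because the schedule is fixed at compile time, the silicon needs no hazard-detection logic, which is one way such an architecture can cut control overhead.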
To achieve scalability, the architecture removes full connectivity and employs point-to-point, partially connected networks. It also uses many local data-path memories to store data and local variables, thereby allowing scalability up to massive parallelism.
Beneath the compiler, the architecture retains the generality of an array of processing elements. This permits it to support "all styles of parallelism, including single instruction multiple data, multiple instruction multiple data, systolic and pure data flow," Kastrup said.
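The distinction between those styles can be illustrated with a toy array of processing elements: drive every element with the same instruction and the array behaves as SIMD; give each element its own instruction and it behaves as MIMD. This is only a conceptual sketch, with made-up operations; it is not a model of Silicon Hive's fabric.

```python
# Illustrative only: a toy array of processing elements (PEs) showing
# how one general fabric can be driven in SIMD or MIMD style.
OPS = {"inc": lambda x: x + 1,   # hypothetical per-PE operations
       "dbl": lambda x: x * 2,
       "neg": lambda x: -x}

def run_array(instructions, data):
    """Each PE i applies instructions[i] to its local datum data[i]."""
    return [OPS[op](x) for op, x in zip(instructions, data)]

data = [1, 2, 3, 4]
simd = run_array(["dbl"] * 4, data)                   # one op, all PEs
mimd = run_array(["inc", "dbl", "neg", "inc"], data)  # per-PE ops
print(simd, mimd)  # -> [2, 4, 6, 8] [2, 4, -3, 5]
```

Systolic and pure dataflow styles would instead chain such elements so each PE's output feeds a neighbor's input, but the underlying array of general elements is the same.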
The IP is both design-time and run-time-configurable. Silicon Hive uses internal tools to compose an array of processing elements for a set of applications. The customer then configures the array for particular accelerator needs on the fly. The approach lets Silicon Hive generate "much more domain-specific products, upscale and downscale versions, which can be tuned by our customers," said Kastrup.
Physically, the architecture is a hierarchical array. At the lowest level are complex processing/storage elements (PSEs) containing a number of register files, a number of execution units and local memory, in combinations determined at design time. Configurable interconnect within the element allows the resources to be set up to resemble a very-long-instruction-word (VLIW) processing engine or a flow-through computational data path.
At the next level, PSEs are grouped with a configuration memory and a controller to form cells that can switch seamlessly between instruction-driven and flow-through modes of operation. At the top level, a processor block comprises one or more cells in an array.
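That hierarchy can be captured as a simple data model: PSEs at the bottom, cells grouping PSEs with configuration memory and a controller, and a processor block as an array of cells. The field names and the two example configurations below are hypothetical, chosen only to make the structure concrete.

```python
# Hypothetical data model of the hierarchy described above; the names,
# fields and sizes are illustrative, not Silicon Hive's actual IP.
from dataclasses import dataclass
from typing import List

@dataclass
class PSE:                      # processing/storage element
    register_files: int
    execution_units: int
    local_mem_kb: int

@dataclass
class Cell:                     # PSEs + config memory + controller
    pses: List[PSE]
    config_mem_kb: int
    mode: str = "instruction"   # "instruction" or "flow-through"

@dataclass
class Processor:                # top level: one or more cells in an array
    cells: List[Cell]

# Two design-time shapes: one big cell with several PSEs, versus an
# array of small single-PSE cells (both configurations invented here).
frame_acc = Processor(cells=[Cell(pses=[PSE(4, 8, 16) for _ in range(4)],
                                  config_mem_kb=32)])
stream_acc = Processor(cells=[Cell(pses=[PSE(2, 4, 8)], config_mem_kb=8)
                              for _ in range(8)])
print(len(frame_acc.cells[0].pses), len(stream_acc.cells))  # -> 4 8
```

Design-time configuration then amounts to choosing these counts and combinations, while run-time configuration changes a cell's mode and the contents of its configuration memory.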
Silicon Hive is introducing two reconfigurable accelerator IP blocks next month. The first, comprising a single cell with a number of PSEs, is targeted at processing frame data such as channel and source codecs. The second, a stream accelerator, includes an array of several cells, each containing a single PSE. The streaming accelerator is best used for high-sample-rate computations in such applications as the baseband of wireless devices, including terrestrial and satellite radio, 3G handsets and basestations, and wireless LANs.
-Additional reporting by Ron Wilson.