During the past few years, various technologies have vied for dominance as the preferred next-generation computer interconnect. The resultant claims and counterclaims from various camps have confused more than enlightened the market, with the sheer amount of rhetoric overwhelming logical, point-by-point comparisons.
In order to make sense of the competing claims of the various contenders, one must first partition the field. Historically, it was easy to divide connectivity into three areas: on-board, in-chassis and between-chassis interconnects.
Today's contenders, though, have blurred these boundaries by claiming they can be used in all three areas. This versatility is largely due to the use of differential signaling and embedded clocking that perform well in both on-board copper and twisted-pair wires.
It is clear that our next-generation interconnects will make extensive use of this technology, which brings with it unprecedented increases in performance. This improved performance makes it possible to reduce the number of signals, and therefore wires, required for device interconnection on or off board.
With that in mind, it is important not to lose sight of some basic principles of physics. All of the new technologies must abide by the same rules when sending data across copper traces or wires. As such, performance is really dictated by the number of connections, or wires, used rather than some secret sauce a particular standard claims to possess.
The same holds true for latency. Latency is usually dictated by how many layers of software are required for a simple message rather than some factor inherent in a particular technology.
One thing is clear, however: Companies have abandoned conventional parallel-bus architectures in favor of high-speed, reduced-signal connection strategies. Reducing the number of signals makes way for high-speed switching rather than legacy bus architectures.
Switched fabrics also improve system performance by allowing multiple concurrent transactions between noncompeting devices.
Legacy parallel-bus connectivity is neither cost-effective nor flexible enough for future system designs. The VME and PCI buses cannot cost-effectively be coaxed to meet next-generation design requirements. The shared bandwidth of a bus is at odds with the need to add multiples of processors, causing a degradation of bandwidth to each function as boards are added. High-performance systems require scalable bandwidth; that is, the ability to increase bandwidth as nodes are added to the system.
When building these high-end, embedded distributed-processing applications, system designers must juggle requirements of increased performance, scalability and availability while maintaining a close eye on life cycle, maintainability, cost and power budgets. Evaluation of interconnect platforms should be based on criteria such as functionality, flexibility, vendor and product availability, and the ability to leverage open, standards-based hardware and software.
One further distinction should be made when evaluating connection strategies: There are two common architectures that I will refer to as direct memory access (DMA) and mapped memory. Most systems today make use of both.
DMA-based architectures have embedded controllers that must be set up with source and destination addresses and byte counts and must be commanded to initiate a transfer. Often, an interrupt is used to signal completion of the transfer. As such, DMA devices have much larger overhead for short transfers.
Memory-mapped architectures present the system with an address range that can be written to or read from in order to transfer data between nodes. There may well be a DMA engine internal to a mapped-memory controller, but this engine is initialized in hardware and will initiate a transaction much faster than a CPU can load registers on an external device.
As an aside, it's worth noting that the various connectivity camps generate a massive amount of confusion when they make claims of low latency without associating their claim with a particular bridge or controller chip architecture.
Given that the physics of pushing data across copper is much the same for all of the new technologies, the success of a new technology will be based not on its performance but on its acceptance and use in the marketplace. When I look into my crystal ball, I see two clear winners: Ethernet and PCI Express.
Since its conception at the Xerox Palo Alto Research Center in 1973, Ethernet has extended its networking reach to tens of millions of nodes worldwide. Ethernet is everywhere, its popularity fueled by its true vendor neutrality, reliability, ease of use and low implementation cost.
The combination of Ethernet with microprocessors has enhanced its ubiquity. Speed increases from 10 Mbits/second to 100 Mbits/s, and more recently to 1 Gbit/s (with 10 Gbits/s on the near horizon), have attracted increasingly diverse and complex applications. The attraction for system designers is easy to understand. Ethernet offers the ultimate peace of mind: protection against obsolescence (it will always be around), forward and backward compatibility and economies of scale.
In a clustered computing environment, Ethernet combined with advanced high-performance switches allows sophisticated DMA-based switched-fabric networks to be configured.
Meanwhile, as PCI transitions from its golden years toward a well-earned retirement, Intel is preparing to hand off the bus baton to a stronger, faster and more agile successor: PCI Express. This ambitious spec promises to keep pace with increasingly powerful processors and I/O devices.
The evolution from a parallel to a point-to-point (switched) serial PCI architecture will deliver high bandwidth with the fewest signals. This allows higher-frequency scaling while maintaining cost-effectiveness. Moreover, the PCI Express architecture appears to cater directly to performance- and bandwidth-hungry, real-time applications.
PCI Express offers maximum bandwidth per pin, lowering cost and enabling small form factors. Unlike platforms designed to solve specific application requirements, PCI Express was conceived to provide universal connectivity for use as a chip-to-chip interconnect, an I/O interconnect for adapter cards, an I/O attach point to other interconnects and as a graphics attach point for increased graphics bandwidth.
Intel has kept an eye on the special needs of the communications industries, creating the Advanced Switching specification for PCI Express. The Advanced Switching spec adds a transaction layer on top of the core PCI Express physical-layer specification to meet the needs of communication applications requiring mesh, star, multicast, multifaceted address space and other topologies and schemes. Advanced Switching for PCI Express offers everything necessary to build large memory-mapped multiprocessor systems with switched-fabric connectivity.
AS is not a revolution but an evolution from the currently available StarFabric high-speed, memory-mapped interconnect fabric. Single-board computer makers like Synergy Microsystems are currently delivering products with both StarFabric and 1-Gbit/s Ethernet.
In summary, I believe Ethernet and PCI Express are two interconnect platforms sure to find long-term success.
Frank Phelan is principal design engineer at Synergy Microsystems Inc. (San Diego, Calif.).
See related chart