Backplanes: tough interconnect choices
By Melissa Heckman, Electrical Engineer, Justin Moll, Marketing Manager, Bustronic, an Elma company, Fremont, Calif., EE Times
October 28, 2002 (10:12 a.m. EST)
A manufacturer in the embedded-computer industry must choose from more than 65 fabrics on a list that just keeps growing. Many of the switched-interconnect technologies use new, specialized backplanes. As a designer and manufacturer of high-performance backplanes, our choices at Bustronic are based on the technologies that are best for our markets, most compatible with our skills and best meet our customers' price, performance and availability requirements.
Bustronic is working with a number of these technologies, including AdvancedTCA (PICMG 3.x), StarFabric (PICMG 2.17 and 3.3), Ethernet (PICMG 2.16), GigaBridge, VXS (VITA 41) and Embedded Modular (VITA 34).
More than 100 participating companies in the PCI Industrial Computer Manufacturers Group support the AdvancedTCA specification and are building systems using it, geared for central-office and high-end communications. The core PICMG 3.x specification is not expected to be ratified until late this year. However, many expect to see products in the next couple of months.
Although ATCA uses an 8U form factor and 280-mm depth, requiring some hardware modification, it is based on Eurocard mechanicals. The larger cards allow more space for more components, while the wider spacing between slots allows for taller components.
The speed depends quite a bit on the fabric that runs over the architecture. The chosen ZD connector, capable of handling speeds of 5 Gbits/second over standard FR-4 pc-board material, is relatively new, but many in the embedded-systems community are already familiar with it.
For 19-inch rack-mount systems, we believed that using ATCA in a 14-slot backplane with a dual-dual-star configuration would be best for carrier-class systems, in part because it allows the maximum number of slots in a 19-inch frame. In this configuration two sets of two fabric slots provide redundancy and offer multisegment and multifabric options.
For smaller systems, we settled on a mesh configuration in which each node acts as a fabric slot, interconnected to the others with point-to-point links. With a backplane of six slots or fewer, a mesh ATCA system can offer significant performance in a reasonably small space, without getting into large and expensive pc-board layer counts. In a mesh topology, the data rates and protocols in one slot do not depend on data transfers in other slots. So a mesh is highly scalable and avoids latency and determinism problems.
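The layer-count pressure on a mesh backplane comes from its link count, which grows quadratically with the number of slots. As a back-of-the-envelope sketch (the n(n-1)/2 formula is standard full-mesh arithmetic, not a figure from the ATCA spec):

```python
def mesh_links(slots: int) -> int:
    """Point-to-point links needed to fully mesh `slots` node slots."""
    return slots * (slots - 1) // 2

for n in (6, 14):
    print(f"{n}-slot full mesh: {mesh_links(n)} backplane links")
# A 6-slot mesh needs 15 links; a 14-slot mesh would need 91,
# which is why full meshes stay practical only at small slot counts.
```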
As a developer of CompactPCI backplanes rooted in the Eurocard community, we found the decision to adopt StarFabric easy. The technology is fully compatible with PCI and CompactPCI hardware and software and will offer a migration path to PCI Express Advanced Switching. StarFabric offers 2.5-Gbit/s bandwidth today, with 10 Gbits/s on the road map. It handles asynchronous and isochronous traffic simultaneously, with low processor overhead and low cost.
The PICMG 2.17 specification for StarFabric in a CompactPCI form factor was ratified this summer. There has been a great deal of product development recently in StarFabric, which has been used in various communications, video, medical and surveillance applications.
Today, a 17-slot PICMG 2.17 development backplane is available. It allows testing of PICMG 2.17 system designs in various configurations. The backplane has 10 basic node slots, dual fabric slots, two node slots with H.110, a StarFabric system slot with cPCI and H.110, and two standard cPCI slots with H.110. This topology allows the system designer to test the implementation of StarFabric, along with standard cards using the CompactPCI bus or the H.110 bus.
We also liked StarFabric's ability to integrate adapter cards that act as PCI-to-StarFabric bridges. Standard CompactPCI cards can be plugged into the StarFabric adapter card, which in turn is plugged into the backplane. The adapter card takes the cPCI bus traffic, serializes it and sends it across the backplane in two 2.5-Gbit/s StarFabric links. Both 32-bit/33-MHz and 64-bit/66-MHz traffic can be converted via StarGen's SG2010 chip, which resides on the adapter card. Those cards will be useful in prototyping a StarFabric system.
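As a rough sanity check on why two links suffice for the fastest converted bus, the pair comes close to matching peak 64-bit/66-MHz PCI bandwidth. The 8b/10b line encoding assumed below is our assumption for the 2.5-Gbit/s serial links, not a figure stated here:

```python
# Peak parallel-bus bandwidth: bus width (bytes) x clock rate
pci_peak_mbps = 8 * 66e6 / 1e6                    # 64-bit/66-MHz PCI: ~528 MB/s

# Two 2.5-Gbit/s serial links; 8b/10b coding leaves 2.0 Gbit/s payload each
links_payload_mbps = 2 * (2.5e9 * 8 / 10) / 8 / 1e6   # ~500 MB/s combined

print(f"PCI 64-bit/66-MHz peak:     {pci_peak_mbps:.0f} MB/s")
print(f"Two StarFabric links (8b/10b): {links_payload_mbps:.0f} MB/s")
```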
Clearly, PICMG 2.16, with 10/100-Mbit Ethernet links, is a strong choice for those who want to use Ethernet as a traffic vehicle or a low-end control plane. The protocol is widely accepted and many PICMG 2.16 products have hit the market.
Many in the embedded-computer industry, however, claim that Gigabit Ethernet carries higher processor overhead than switched fabrics such as StarFabric. In fact, using Gigabit Ethernet as an interconnect requires an estimated 1 GHz of processing power just to run the protocol stack.
StarGen Inc., the developer of StarFabric, has stated that it believes Gigabit Ethernet generates 21,000 interrupts per second where StarFabric generates one. Also, the company said, one network message on Gigabit Ethernet can take 10,000 instructions (10 instructions/byte), while StarFabric needs only a total of 20 instructions per message.
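Taking StarGen's figures at face value, the per-message gap is easy to quantify. The 1,000-byte message size below is not stated in the comparison; it is simply what the paired figures of 10,000 instructions and 10 instructions/byte imply:

```python
# StarGen's claimed per-message costs
gige_instr_per_byte = 10
gige_instr_per_msg = 10_000
starfabric_instr_per_msg = 20

implied_msg_bytes = gige_instr_per_msg // gige_instr_per_byte   # 1,000 bytes
overhead_ratio = gige_instr_per_msg // starfabric_instr_per_msg # 500x

print(f"Implied message size: {implied_msg_bytes} bytes")
print(f"Instruction overhead ratio: {overhead_ratio}x")
```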
GigaBridge is a PCI-switching technology developed by PLX Technology Inc. It runs on a scalable, self-healing ring topology supporting OC-12 to OC-48 trunk speeds.
GigaBridge uses a 6.4-Gbit/s low-voltage differential signaling (LVDS) link interface. The GigaBridge ring has built-in high availability, with redundant counter-rotating rings running the fabric. While PICMG 2.16 increases the performance of the data plane only, GigaBridge enhances the performance of both the control plane and the data plane.
The GigaBridge development system consists of the chassis, backplane, bridge-enabled card and a switch module. The bridge-enabled card converts the PCI bus to the switched-PCI bus via a GigaBridge device, a cell-based fabric with independent PCI bus segments connected to each port.
Each GigaBridge device can drive up to four PCI slots and interoperate with other controllers as ports on the ring. Each device is linked via two 16-bit-wide, point-to-point, low-voltage-differential links clocked at 400 MHz. The developer's CompactPCI board plugs into the bridge-enabled card. In turn, the bridge-enabled card plugs into the backplane. The switched-PCI network is contained on the backplane.
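The 6.4-Gbit/s link figure cited earlier follows directly from this link geometry, as a quick check shows:

```python
# A GigaBridge link: 16-bit-wide LVDS, clocked at 400 MHz
link_width_bits = 16
clock_hz = 400e6

bandwidth_gbps = link_width_bits * clock_hz / 1e9
print(f"Per-link raw bandwidth: {bandwidth_gbps:.1f} Gbit/s")  # 6.4 Gbit/s
```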
Although the GigaBridge HA backplane is a new product, GigaBridge has been around for a few years and is in the second generation of silicon. The higher level of performance and PCI compatibility helped give us the incentive to support GigaBridge.
VITA 34 is similar to AdvancedTCA and also adopts an 8U x 280-mm form factor. It includes aggressive concepts such as metal shielding around each board and options for liquid-spray cooling. Developed by the VMEbus International Trade Association (VITA), the VITA 34 specification is one way the VME world is fighting to remain the architecture of choice in embedded computing. The specification has a way to go, but it is one to watch.
VITA 41, the so-called VXS backplane, takes the standard VME64 extensions backplane and adds a high-speed serial connector to the primary-out section. Designers will have the flexibility of plugging in standard VME64x parallel bus cards or integrating new payload and switch cards for parallel-bus and switch-fabric transports or for switch-fabric transports only.
The VXS spec allows four differential serial pairs per direction on each link over the primary-out section, and supports up to two such ports on each VMEbus card. This standard has attracted a lot of interest and is backed by Motorola and many others in the VME community.
InfiniBand is not backplane-focused; it is now expected to be used mostly in server-cluster applications. Therefore, it did not apply as much to us as a backplane manufacturer.
RapidIO is an interesting technology. Parallel RapidIO does not integrate a backplane, but Serial RapidIO does. Many industry experts believe that RapidIO may see traction in military, DSP and PowerPC applications. From this backplane manufacturer's point of view, we had to make some tough choices. For us, a migration to PCI Express Advanced Switching through StarFabric was more compelling. However, we will keep our options open.
Copyright © 2003 CMP Media, LLC