Programmable network processing blade needed in switching platform
By Bruce Hunter, Network Processing Marketing Manager, Motorola Computer Group, Tempe, Ariz., EE Times
February 5, 2003 (5:09 p.m. EST)
Over the last few years, equipment manufacturers have increasingly demanded complete hardware platforms, such as VME or high-availability CompactPCI systems. The multi-service switching platform, or MSP, is one example: an off-the-shelf platform that lets systems developers build multiprotocol applications such as media gateways and radio network controllers by combining an integrated Ethernet network (PICMG 2.16) for control traffic with a high-speed serial fabric (PICMG 2.20) for deterministic data traffic.
To be both flexible and cost effective, though, the multi-service switching platform needs to be designed with off-the-shelf data-plane boards that provide a wide range of functions: ATM, Gigabit Ethernet, Frame Relay, and SONET/SDH I/O interfaces; interworking between multiple standard protocols, and even proprietary protocols; and classification and switching of the different types of data traffic.
The MSP should also be able to drive data across the PICMG 2.20 serial fabric interface in a deterministic way, and bridge traffic between the I/O interfaces, serial fabric interface, and PICMG 2.16 Ethernet network. Without adding their own proprietary blades, developers of such a platform should be able to add new features and protocols over the lifetime of the system, for example, by adding multi-protocol label switching (MPLS) capabilities to an ATM switch.
Offering all of these features looks like a daunting task, to say the least, especially since the majority of data-plane blades that perform such functions have historically been based on application-specific integrated circuits (ASICs) and have thus, as the name implies, been limited to specific uses. For example, an ASIC-based blade might interwork between ATM and SONET, but a separate blade would be required for ATM-to-Ethernet or Ethernet-to-Packet-over-SONET (PoS) interworking.
As a result, building a multi-service switching platform using ASICs would have required a whole host of application-specific blades. Further, the long development time for ASICs would have presented a very significant time-to-market challenge. Finally, the ASIC approach would have resulted in a platform with very specific capabilities, and without the flexibility for customers to add their own value or without the extensibility for them to add features over time.
NPUs have become a popular alternative to hardware-only ASICs because their programmability offers a number of advantages, including the ability to get new designs to market more quickly and to extend a design's time in market by adding new features. Also, the flexibility of network processors makes it possible to create a common hardware platform that can be used in multiple applications. For example, a single NPU-based board might serve as an ATM-to-Internet Protocol (IP) blade in one application, handle IP-to-PoS translation in a second, and perform ATM classification and switching in a third.
While several devices were evaluated, the C-5 NP from C-Port was the final choice to become the core element of the Packet Processing Resource Board (PPRB) in our implementation of the multi-service switching platform. It serves as a multiprotocol I/O blade, a switching engine, and the primary driver of data traffic across the PICMG 2.20 interface. Aside from the fact that the C-5's 5-Gbit/s data rate fit well with the capabilities of the MXP architecture, we liked it because it allows a number of different protocols to run on a single physical blade, and one blade to be used for multiple purposes. Also, we found that the C-5's configurable Switch Fabric Processor (SFP) provided an excellent way to tie into the MXP's PICMG 2.20 high-speed serial fabric interface.
The software impact
Like ASICs, NPUs are primarily forwarding-plane devices. As a result, they are often paired with general-purpose processors that handle the control-plane and management-plane stacks. These processors can be on board, in the system, or even in a different chassis. This is true of the C-5 NP as well, which uses a co-processor to load application code and configure the network processing elements, run control- and management-plane stacks, handle exception packets, and manage the routing and switching tables within the NP's table lookup unit (TLU).
In developing the PPRB, we found it necessary to port the C-Port host tools to our architecture. This included porting the DCP shell, which allows the host processor to examine the registers within the individual channel processors; the package loader, which allows developers to load application code into the various portions of the C-5 NP; the table services, which allow the control stack running on the host processor to update the routing tables within the C-5's TLU; and the packet services, which allow the host processor to accept control and exception packets from the data stream within the C-5.
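The table services, for instance, maintain a host-side view of the routes that the TLU matches against in hardware. The sketch below is purely illustrative: it models a shadow routing table with longest-prefix-match lookup in plain C. The names (`route_add`, `route_lookup`) and structures are assumptions, not the actual C-Port table services API, and a real port would also push each update down to the TLU.

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_ROUTES 64

/* Hypothetical host-side shadow of a TLU routing table. */
typedef struct {
    uint32_t prefix;    /* network-order IPv4 prefix */
    uint8_t  len;       /* prefix length in bits */
    uint16_t out_port;  /* egress port the TLU would select */
} route_t;

typedef struct {
    route_t r[MAX_ROUTES];
    size_t  n;
} route_table_t;

static uint32_t mask_of(uint8_t len)
{
    return len ? 0xFFFFFFFFu << (32 - len) : 0;
}

/* Insert a route; a real implementation would also issue a TLU update. */
int route_add(route_table_t *t, uint32_t prefix, uint8_t len, uint16_t port)
{
    if (t->n >= MAX_ROUTES)
        return -1;
    t->r[t->n++] = (route_t){ prefix & mask_of(len), len, port };
    return 0;
}

/* Longest-prefix match, mimicking what the TLU performs in hardware. */
int route_lookup(const route_table_t *t, uint32_t dst, uint16_t *port)
{
    int best = -1;
    for (size_t i = 0; i < t->n; i++)
        if ((dst & mask_of(t->r[i].len)) == t->r[i].prefix &&
            (best < 0 || t->r[i].len > t->r[best].len))
            best = (int)i;
    if (best < 0)
        return -1;
    *port = t->r[best].out_port;
    return 0;
}
```

In the real system, the lookup runs in the TLU at line rate; only the add/delete path runs on the host, which is why a non-real-time OS suffices there.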
To provide sufficient OS support, we used not only Wind River's VxWorks but also a Linux version of the host application services. Even though the entire system has to run deterministically at line rate, it is possible to use Linux on the host processor because the host is primarily concerned with initialization, connection management, table updates, and exception handling. Offering Linux as well as VxWorks was intended to take advantage of the wide range of available protocol stacks and to offer customers the flexibility of the open-source model.
On the C-5 NP side, we started with reference application code provided by C-Port. From that, code was developed to segment the serial data stream into 64-byte packets. Code was also written to encapsulate those packets within a CSIX header, chosen because the Network Processing Forum (NPF) has promoted CSIX as a standard NPU interface. Further, CSIX is part of the PICMG 2.20 specification, allowing the same encapsulation scheme for the interface as for the serial fabric.
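The segmentation step can be sketched as follows. This is a minimal illustration in C, not the C-5 reference code: the header fields shown (`sop`, `eop`, `payload_len`) are a simplified stand-in for a real CSIX CFrame header, which carries additional fields such as type, class, and destination address.

```c
#include <stdint.h>
#include <string.h>

#define CELL_PAYLOAD 64

/* Simplified stand-in for a CSIX CFrame header (illustrative only). */
typedef struct {
    uint8_t  sop;          /* start-of-packet flag */
    uint8_t  eop;          /* end-of-packet flag */
    uint16_t payload_len;  /* valid bytes in this cell */
} cell_hdr_t;

typedef struct {
    cell_hdr_t hdr;
    uint8_t    payload[CELL_PAYLOAD];
} cell_t;

/* Segment an arbitrary byte stream into fixed 64-byte cells.
 * Returns the number of cells written to out[]. */
size_t segment(const uint8_t *data, size_t len, cell_t *out, size_t max_cells)
{
    size_t n = 0, off = 0;
    while (off < len && n < max_cells) {
        size_t chunk = (len - off < CELL_PAYLOAD) ? len - off : CELL_PAYLOAD;
        out[n].hdr.sop = (off == 0);
        out[n].hdr.eop = (off + chunk == len);
        out[n].hdr.payload_len = (uint16_t)chunk;
        memset(out[n].payload, 0, CELL_PAYLOAD);   /* pad the final short cell */
        memcpy(out[n].payload, data + off, chunk);
        off += chunk;
        n++;
    }
    return n;
}
```

The fixed cell size is what lets the fabric schedule traffic deterministically; variable-length packets only reappear after reassembly at the far end.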
Once segmented and encapsulated, the packets leave the C-5 NP over a Utopia-3 bus. They are then switched inside an FPGA onto the HT bus and into the co-processor. In the case of the serial fabric, packets leave the C-5 NP as if they were headed to the HT interface, but the FPGA handles them differently, identifying them solely by their FPGA port address.
On the host side, we developed a driver that allows the co-processor to perform segmentation and reassembly (SAR) on packets at its end. The driver is capable of handling any bit stream, but two higher-level interfaces were also created, optimized for ATM and IP traffic respectively.
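The host driver's reassembly path can be illustrated the same way. The sketch below reverses a 64-byte segmentation under the same simplified cell header assumed above; `reasm_feed` and the 9,216-byte buffer limit are hypothetical choices for illustration, not the actual driver interface.

```c
#include <stdint.h>
#include <string.h>

#define CELL_PAYLOAD 64
#define MAX_PKT      9216   /* jumbo-frame-sized reassembly buffer (assumption) */

/* Simplified cell format (illustrative, not a real CSIX CFrame). */
typedef struct {
    uint8_t  sop, eop;
    uint16_t payload_len;
    uint8_t  payload[CELL_PAYLOAD];
} cell_t;

typedef struct {
    uint8_t buf[MAX_PKT];
    size_t  len;
    int     active;   /* set while a packet is mid-reassembly */
} reasm_t;

/* Feed one cell; returns the packet length when an eop cell completes a
 * packet, 0 while reassembly is in progress, -1 on a protocol error. */
long reasm_feed(reasm_t *r, const cell_t *c, uint8_t *out)
{
    if (c->sop) {               /* start of a new packet */
        r->len = 0;
        r->active = 1;
    }
    if (!r->active || r->len + c->payload_len > MAX_PKT) {
        r->active = 0;          /* mid-stream cell with no sop, or overflow */
        return -1;
    }
    memcpy(r->buf + r->len, c->payload, c->payload_len);
    r->len += c->payload_len;
    if (c->eop) {               /* packet complete: hand it up */
        memcpy(out, r->buf, r->len);
        r->active = 0;
        return (long)r->len;
    }
    return 0;
}
```

A raw-stream interface like this is protocol-agnostic, which is why the ATM- and IP-optimized interfaces could be layered on top of it rather than built separately.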