Partha Srinivasan and Rajashree Mungi, Parama Networks Inc.
Feb 08, 2005
Telecommunication networks have evolved dramatically over the last few years, but the architectures used to build the underlying network devices haven't changed much. VLSI technology and the associated packaging have come a long way. Five years ago, 0.18-micron, 4-million-gate chips in a 35x35mm package with 1.27mm pitch were considered state-of-the-art. Today, commercially available technology has reached deep sub-micron (90nm) geometries, with gate counts exceeding 10 million. Package sizes have grown from 40x40mm to 50x50mm, and the pitch has shrunk from 1.27mm to 0.5mm. In short, the technology needed to enable dense new devices exists today.
As technology advances, the traditional integration strategy of packing neighboring components onto a single die yields denser devices, but still a non-optimal level of integration, because the basic system partitioning has not changed. A complete re-architecture of the system, on the other hand, would achieve maximal integration and leverage the new VLSI technology to its fullest extent.
Advantages of a Centralized Approach
One such re-architecture is the "centralized architecture": consolidating traditionally distributed functions into a single piece of silicon results in fewer chips and a substantial reduction in system complexity. The benefits of this approach include reduced cost, faster time-to-market (TTM), enhanced reliability, power savings, and higher density. Let's see how this concept can be applied to an ADM-on-a-chip device.
SONET/SDH equipment prices suffered a precipitous decline in the 2000-2002 time frame, caused by a combination of eroding margins and a rush of highly integrated components brought to market during that period. Those price declines have now leveled off, and most of the savings in equipment cost due to higher integration have already been realized. The initial cost of an OC-48/STM-16 ADM (add-drop multiplexer) appears to remain at the $10,000 level or above, even with further component price reductions. To reduce the cost of this equipment to the $5000 target sought by international OEMs, while enabling new multi-protocol services within the systems, we have to revisit the entire architecture and partitioning of the traditional ADM (Figure 1).
Figure 1: Diagram showing a traditional ADM architecture.
The requirements of equipment redundancy and survivable network topologies have been the primary drivers of ADM architectures. Traditional ADM designs use a centralized redundant switch matrix, with traffic ports on individual interface cards. The interface cards perform section/line termination and pass the SPE data to the central redundant switch cards for grooming. This scheme is shown in Figure 1, with each traffic card containing a framing device, some overhead handling capability, a backplane interface device, and clock recovery circuitry (PHY).
The traditional ADM architecture distributes the data path between the interface cards and the central switch card. The distributed data path forces a distributed control path, with software required on each interface card as well as on the switch cards. This drives up part count and makes the control path more complex, and the resulting software is difficult to implement because transport overhead termination is spread across cards. All APS (automatic protection switching) software and DCC (data communications channel) software must keep multiple distributed processors in sync. VLSI integration has traditionally been applied to reduce the part count on individual interface cards. That approach has cut part count and cost without addressing architectural complexity and duplication of effort: the engineering development cost of the resulting system has not been substantially altered by an integrated framer, PHY, and backplane device. Thus, mere application of VLSI integration, without appropriate partitioning, does not truly reduce the cost structure of the ADM.
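To make the part-count and processor-count argument concrete, consider a toy model; every per-card device count below is an illustrative assumption, not a measurement of any real system:

```python
# Toy model of device and software-image counts for an ADM shelf.
# All per-card counts are illustrative assumptions.

def distributed_parts(interface_cards):
    """Traditional partitioning: each interface card carries a framer,
    PHY, backplane interface and its own control processor."""
    per_card = {"framer": 1, "phy": 1, "backplane_if": 1, "cpu": 1}
    switch_cards = 2  # redundant switch matrix
    devices = interface_cards * sum(per_card.values()) + switch_cards * 2
    processors = interface_cards + switch_cards  # each runs its own software
    return devices, processors

def centralized_parts(interface_cards):
    """Centralized partitioning: interface cards become simple optics
    carriers; framing, overhead processing and switching live on two
    redundant central cards, each with one ADM-on-a-chip and one CPU."""
    devices = interface_cards * 1 + 2 * 1
    processors = 2
    return devices, processors

for name, fn in [("distributed", distributed_parts),
                 ("centralized", centralized_parts)]:
    d, p = fn(8)
    print(f"{name}: {d} devices, {p} processors running software")
```

Even with these rough numbers, the centralized partitioning removes most of the devices and, more importantly, most of the independent software images that must be kept in sync.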
New Partitioning Needed
A new partitioning is clearly required to fully leverage the advantages of VLSI integration. One approach is a centralized architecture.
Centralization involves pooling on one card all of the elements of the Sonet/SDH data path, including framing, overhead processing, and switching. This partitioning is shown in Figure 2.
Figure 2: Diagram showing a centralized ADM architecture.
An ADM built on this centralized architecture offers the following advantages:
- Cost reduction — This architecture removes the need for components such as backplane drivers and the multiple processors of a multi-slot chassis-based system, replacing a complex distributed control path with a single control-path processor. With the centralized architecture, the entire SONET data path is implemented on a single piece of silicon. This reduction in the number of devices required to build the ADM, along with the related reduction in control-path devices, lowers the ADM's cost considerably.
- Reduced TTM — Engineering development effort is considerably reduced: all framing and overhead processing functionality resides on the central card, and the interface cards are reduced to extremely simple optics carrier cards. The substantial reduction in the system's software complexity shortens TTM through significant savings in the software design, implementation, and test phases.
- Increased reliability — With fewer chips and fewer connections (I/O between devices), system reliability is substantially improved. The redundant central switch card concept can still be applied to the central card; the backplane needs a dual-star topology running traces from each interface card to the two central cards.
- Enhanced performance — It is normally very difficult to meet the 50-ms switch-time requirement when scaling ring architectures like stacked bi-directional line-switched rings (BLSRs). The centralized architecture eliminates inter-card communication delays and enables a dramatic reduction in protection switching times for traditional Sonet protection mechanisms. For a system using an ADM-on-a-chip (AoC), pass-through node transit time has been measured at 1 ms, leading to a 28-ms switch time for a unidirectional failure and 18 ms for a bi-directional failure on a 16-node, 1200-km ring. This architecture also supports multiple stacked rings with sub-50ms recovery even under simultaneous failures. It likewise helps with unidirectional path-switched ring/sub-network connection protection (UPSR/SNCP): UPSR/SNCP protection times in reference systems built on this architecture have been measured at 4.8 ms for 48 simultaneous path failure events, with approximately 1.5 ms of this attributable to the cross-connect algorithm. Along with traditional Sonet protection schemes, more efficient mesh architectures can be supported with equal ease using the AoC.
- Power savings — The reduced chip count and fewer connections between devices result in considerable power savings (50-70% compared to traditional distributed ADM architectures). Central office footprint is typically constrained by equipment physical volume as well as power density, so the higher integration coupled with lower power allows a higher-capacity system in the same footprint.
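The protection-switching arithmetic in the performance point above can be sketched as a simple time budget. The 1-ms node transit and the 16-node, 1200-km ring come from the text; the fiber delay is the usual ~5 us/km, and the detection and switch-execution times are assumptions chosen for illustration:

```python
# Illustrative worst-case BLSR protection-switching budget.
# detect_ms and execute_ms are assumed values, not vendor data.

FIBER_DELAY_US_PER_KM = 5.0  # ~5 us/km in single-mode fiber

def switch_time_ms(nodes, ring_km, node_transit_ms, detect_ms, execute_ms):
    """Worst-case APS completion: the K-byte request must traverse
    every intermediate node the long way around the ring."""
    intermediate_hops = nodes - 2  # nodes between the two switching nodes
    propagation_ms = ring_km * FIBER_DELAY_US_PER_KM / 1000.0
    signalling_ms = intermediate_hops * node_transit_ms + propagation_ms
    return detect_ms + signalling_ms + execute_ms

# 16-node, 1200-km ring with the 1-ms pass-through transit cited above
t = switch_time_ms(nodes=16, ring_km=1200, node_transit_ms=1.0,
                   detect_ms=3.0, execute_ms=5.0)
print(f"estimated switch time: {t:.0f} ms")  # comfortably under 50 ms
```

The budget makes the sensitivity clear: with 14 intermediate hops, every extra millisecond of per-node transit adds 14 ms to the total, which is why a 1-ms pass-through time is what keeps stacked rings inside the 50-ms requirement.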
Extending the Centralized Architecture to MSPPs
The explosion of new customer services brings new interfaces and protocols to the mix. Incumbents will gravitate toward deploying these services over their legacy networks, pushing the need for mappings such as Ethernet over Sonet. In this case, ADMs evolve into multi-service provisioning platforms (MSPPs), and the same centralized architecture can be applied to these platforms. Based on where multi-service access is required, MSPPs can be categorized as micro-MSPPs, mini-MSPPs, or MSSP/MSA (multi-service access: larger MSPPs connected to the converged core).
MSPPs are enabled by:
- Standards-driven changes such as Virtual Concatenation (VCAT), the dynamic bandwidth adjustment protocol Link Capacity Adjustment Scheme (LCAS), and the Generic Framing Procedure (GFP), a technique for mapping multiple protocols (e.g. Ethernet, Fibre Channel, ESCON, FICON, DVB-ASI) with a common methodology into legacy Sonet/SDH and the evolving G.709-based OTN.
- Technology advances like multi-rate multi-protocol pluggable (SFP) optics and multi-rate PHY devices enable universal I/O card designs for these MSPPs. Universal I/O cards can be configured to support either traditional client interfaces or data interfaces of choice.
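The bandwidth argument for VCAT in the list above can be made concrete. The STS-1 and STS-3c payload rates are standard SONET figures; the helper functions are our own illustrative sketch:

```python
import math

STS1_PAYLOAD_MBPS = 48.384    # STS-1 SPE payload capacity
STS3C_PAYLOAD_MBPS = 149.76   # contiguous STS-3c payload capacity

def vcat_members(service_mbps, member_mbps=STS1_PAYLOAD_MBPS):
    """Number of VCAT members (e.g. STS-1-Nv) needed for a service."""
    return math.ceil(service_mbps / member_mbps)

def efficiency(service_mbps, allocated_mbps):
    """Fraction of the allocated transport bandwidth actually used, in %."""
    return 100.0 * service_mbps / allocated_mbps

gbe = 1000.0                               # Gigabit Ethernet
n = vcat_members(gbe)                      # STS-1-21v
vcat_eff = efficiency(gbe, n * STS1_PAYLOAD_MBPS)

# Contiguous concatenation must round up to the next standard STS-Nc;
# the smallest that fits a GbE stream is STS-48c (~2396 Mbit/s).
contig_eff = efficiency(gbe, 16 * STS3C_PAYLOAD_MBPS)

print(f"STS-1-{n}v efficiency: {vcat_eff:.1f}%")   # ~98%
print(f"STS-48c efficiency:  {contig_eff:.1f}%")   # ~42%
```

Right-sizing the transport pipe in STS-1 increments is precisely what makes Ethernet-over-Sonet economical on an MSPP, and LCAS then lets the member count grow or shrink without a service hit.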
To explore the applications of combining an SoC with the centralized architecture, we look at two specific emerging MSPP applications: customer-located equipment (CLE), and the packet ADM, a variation of the MSPP. CLE is gaining favor with carriers, who are looking for low-cost equipment they can install at a customer site to enable new services, connect legacy services, and serve as a demarcation point in place of the traditional CSU/DSU.
The version of CLE that we will look at has been recently dubbed the micro-MSPP (Figure 3). It is typically a one-rack-unit (1U) box with a variety of service interfaces such as Ethernet, private line, and frame relay. The uplinks for the micro-MSPP are typically OC-3/12, but in certain cases will be Ethernet. In any case, the key functions of combining new and old services and providing a convenient demarcation point for carriers will drive the feature set.
Figure 3: Next-gen centralized architecture for a micro-MSPP.
The micro-MSPP is a natural place for the centralized architecture. The cost pressures at this end of the network are extreme, with target selling prices in the $1000 range, necessitating high levels of integration and a reduced level of complexity. The single-card implementation and the moderate sophistication of the protocol processing required further argue for such an approach, where a large amount of the system functionality can be furnished by a single piece of silicon.
As the system grows in complexity, the centralized SoC architecture brings more leverage to the tasks of reducing cost, power consumption, and overall system complexity. Our second example of the packet ADM shown in Figure 4 will illustrate this. The packet ADM is a multiple card system with I/O cards and centralized function cards.
Figure 4: Diagram showing a next-gen packet ADM.
The data manipulation functions have all been centralized into a redundant set of processing cards. The SoC implementation is paired with a network processor via a SPI-4.2 interface. Notably, the system has two backplane connections: one for TDM traffic and the other for data traffic. This construct allows a high degree of flexibility in the service-handling capability of the system.
The alternative design, with a TDM-only backplane and Ethernet mapping cards in I/O slots, limits the ability of existing ADM elements to scale up Ethernet and other data services. This is what leads MSPP designers to implement separate packet and TDM backplanes.
I/O interfaces are routed to the central packet/TDM switch card over two independent traces on the backplane and are directed to two separate devices based on the type of the I/O interface (packet or TDM). The MSPP-on-a-chip (MoC) provides a SPI interface so that some of the packet traffic can be mapped into Sonet. With a high-capacity channel between the MoC and the packet processor, the system gains a large degree of flexibility: the incoming traffic can be any mix, from mostly TDM to mostly data, and the service mix is controlled by the mix of low-cost I/O cards. In our example, the packet ADM uses an OC-n uplink to the carrier, with data traffic encapsulated via GFP/VCAT.
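The GFP encapsulation mentioned above can be sketched in a few lines. This is a deliberately simplified GFP-F framer: the CRC-16 polynomial and the core/payload header layout follow ITU-T G.7041, but scrambling, extension headers, the optional FCS, and the structured type field are omitted, and the default payload type value here is purely illustrative:

```python
import struct

def crc16_ccitt(data: bytes) -> int:
    """CRC-16 with polynomial x^16 + x^12 + x^5 + 1, as used by
    the GFP HEC fields (MSB-first, zero initial value)."""
    crc = 0
    for b in data:
        crc ^= b << 8
        for _ in range(8):
            crc = (crc << 1) ^ 0x1021 if crc & 0x8000 else crc << 1
            crc &= 0xFFFF
    return crc

def gfp_frame(client_pdu: bytes, payload_type: int = 0x0001) -> bytes:
    """Build a simplified GFP-F frame: core header (PLI + cHEC),
    payload header (type + tHEC), then the client PDU."""
    type_field = struct.pack(">H", payload_type)  # illustrative type value
    payload_area = (type_field
                    + struct.pack(">H", crc16_ccitt(type_field))
                    + client_pdu)
    pli = struct.pack(">H", len(payload_area))    # PLI covers payload area only
    core_header = pli + struct.pack(">H", crc16_ccitt(pli))
    return core_header + payload_area

frame = gfp_frame(b"Ethernet frame bytes here")
print(len(frame))  # 4-byte core header + 4-byte payload header + PDU
```

The single-error-correcting HEC on the core header is what lets a GFP receiver delineate frames directly from the byte stream, without the flag bytes and byte-stuffing overhead of HDLC-style mappings.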
The new centralized architectural model allows us to harness the rapid pace of innovation in VLSI integration to reduce cost while simultaneously increasing functionality in a new generation of Sonet/SDH equipment. Without the new architecture, the impact of VLSI integration cannot be leveraged to the fullest extent. The centralized model applies not only to traditional ADM equipment, but also to the evolving MSPP market, ranging from micro-MSPPs to high-capacity core network MSTPs. SoC technology enables the emerging market for converged data/TDM transport equipment to fully benefit from the cost savings of a centralized architecture. These savings translate into lower transport infrastructure costs for carriers, improved margins for equipment manufacturers, and lower costs for next-generation high-bandwidth services.
About the Authors
Partha Srinivasan is the chief technical officer of Parama Networks. He holds a Ph.D. degree in Electrical Engineering from University of California, Santa Barbara and can be reached at firstname.lastname@example.org.
Rajashree Mungi is a principal architect at Parama Networks. She holds an M.S. degree in Electrical Engineering from Arizona State University and can be reached at email@example.com.
Copyright © 2003 CMP Media, LLC