Sidebar: Applications drive trend to serial I/O
By Khanh Le, EEdesign
October 31, 2002 (8:54 p.m. EST)
URL: http://www.eetimes.com/story/OEG20021031S0036
For years, parallel communication schemes offered clear advantages for moving data quickly from chip to chip, board to board or system to system. But when I/O clock rates passed 66 MHz, crosstalk and other signal-integrity issues made bus loading and skew increasingly difficult to manage for parallel I/O schemes. Parallel I/O could still work, but only by applying significant engineering resources. The stringent specifications in the PCI-X standard for rise and fall times, drive strengths, path delays and skews, for example, have proven so expensive to meet that the standard is used today only in high-end applications such as computer servers.

In addition to cost, other aspects of today's high-speed data transfers fuel the trend to serial I/O. The number of bus bits, board layout complexity, pin count and power consumption all rise to impractical levels with parallel interconnect. At the chip level, shrinking feature sizes are making system-on-chip pins both scarce and expensive as more and more on-chip functions compete for the limited number of I/O pins.

The high-speed serial I/O standards currently in use almost always span more than one data rate and communication scenario. Gigabit and 10-Gigabit Ethernet, Fibre Channel, InfiniBand and Serial ATA compete in the board-to-board and box-to-box space (see figure below). PCI Express, HyperTransport, RapidIO and XFI are the choices for board-to-board and chip-to-chip communication. HSBI is an emerging backplane and board-to-board connection standard.

Figure 1 - High-speed serial I/O standards are well established

At transfer rates above 1 Gb/s, the advantages of serial I/O connections become even more obvious. When the signal-integrity issues created by the larger voltage swings of parallel connections limited cable and PCB trace lengths, the industry responded with Fibre Channel, which has proven to be an elegant solution.
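The timing pressure on parallel buses described above can be made concrete with some back-of-the-envelope arithmetic. The sketch below (not from the article; the 25 percent skew-budget figure is an illustrative assumption, not a standard's requirement) shows how the window available for lane-to-lane skew shrinks as the bus clock rises.

```python
# Illustrative skew-budget arithmetic: as a parallel bus clock rises,
# the bit period shrinks, and the fraction of it that board-level skew
# between bit lanes may consume shrinks with it.

def bit_period_ns(clock_mhz):
    """Bit period in nanoseconds for a single-data-rate parallel bus."""
    return 1000.0 / clock_mhz

SKEW_FRACTION = 0.25  # hypothetical budget: skew held under ~25% of the period

for f_mhz in (66, 133, 266):
    period = bit_period_ns(f_mhz)
    print(f"{f_mhz} MHz bus: {period:.2f} ns bit period, "
          f"skew budget ~{SKEW_FRACTION * period:.2f} ns")
```

At 66 MHz the (assumed) budget is a few nanoseconds of allowable mismatch across every trace of the bus; at 266 MHz it falls under a nanosecond, which is where the engineering cost the article describes comes from.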
Combining clock and data in a single stream reduced the problem of bit-to-bit skew and saved on pin count.

From a designer's perspective, high-speed serial interconnects introduce a new set of challenges that involve converting the serial data into parallel data for on-chip processing. The serializer-deserializer block, known commonly as SerDes, is the generic solution. Each serial I/O standard introduces variations to the block, of course, but it always performs high-speed data transmission over a serial link between two nodes. A SerDes is generally defined to include the serialization, transmit, receive and deserialization functions, along with clock and data recovery.

Although the trend to serial data links in high-performance systems is already clear, the pace of change is quickening. The 10-Gb/s-per-channel bit rate for OC-192 was achieved last year. The 6.375-Gb/s HSBI and 10-Gb/s XFI standards are on their way. As a result, we can expect more and more SerDes applications, and the power consumed by the SerDes block will become an increasingly important design consideration.
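The core parallel-to-serial conversion a SerDes performs can be sketched behaviorally. The model below is an illustration only: a real SerDes also applies line coding (such as 8b/10b) to keep the stream DC-balanced and recoverable, and performs clock and data recovery in analog circuitry, all of which is omitted here.

```python
# Minimal behavioral model of the SerDes idea: a serializer flattens
# parallel words into a single bit stream, and a deserializer reassembles
# them on the far end. Line coding and clock recovery are omitted.

def serialize(words, width=8):
    """Flatten parallel words into a serial bit stream, LSB first."""
    bits = []
    for word in words:
        for i in range(width):
            bits.append((word >> i) & 1)
    return bits

def deserialize(bits, width=8):
    """Reassemble the serial bit stream into parallel words."""
    words = []
    for start in range(0, len(bits), width):
        word = 0
        for i, bit in enumerate(bits[start:start + width]):
            word |= bit << i
        words.append(word)
    return words

# Round trip: what goes in one end comes out the other.
data = [0xA5, 0x3C, 0xFF]
assert deserialize(serialize(data)) == data
```

The pin-count saving the article cites is visible even in this toy: an 8-bit-wide parallel path becomes a single serial lane, at the cost of running that lane eight times faster.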