High-speed serial interfaces are proliferating in chips used in the metro communications application space. Various standards are being developed around the evolving common methodology of implementing high-speed I/O and millions of logic gates on the same monolithic IC. However, different standards impose different requirements from a silicon design perspective, so creating a single high-speed I/O cell that meets the requirements of multiple standards becomes an attractive design proposition.
The "single-I/O-meets-multiple-standards" approach is fraught with pitfalls for those who neglect the details. For example, major hurdles must be overcome when creating a single I/O for WAN/Metro line-card interfaces, including OC-48/STM-16 CML optical modules, SFI-4.2, SPI-5, SFI-5, GbE, VSR-4.3, Infiniband, and XAUI.
Only by understanding the differences among emerging high-speed interface standards, and the tradeoffs involved in a common I/O implementation, will the system designer be better able to choose the right device for his application.
The advantages of using a single I/O architecture for multiple standards are those normally expected from IP re-use: shorter development and debug time, shorter verification time, and faster time-to-market for products that use the I/O architecture.
The advantages are not all "free," however: one of the first requirements to be addressed with any common-I/O strategy is the wide range of data rates that must be supported by a given I/O.
Basically, a given serial link can be modeled with three elements: the transmitter, a channel that propagates the signal, and a receiver:
The channel may be as simple as a pc board trace used to interconnect two chips (SFI-4.2, which is chip-to-chip, will have simple channels), or it may be much more complicated. For a WAN backplane application, for example, the "channel" may have multiple lengths of pc board trace joined by connectors. For long-reach standards the channel may also include optical components.
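The three-element model can be sketched in a few lines of code. The following is illustrative only: the channel is approximated by a first-order low-pass filter (a crude stand-in for a bandwidth-limited pc board trace), and the `alpha` bandwidth parameter and bit pattern are assumptions, not values from any of the standards discussed.

```python
# Minimal sketch of the three-element link model: transmitter -> channel -> receiver.

def transmit(bits, samples_per_bit=8):
    """NRZ transmitter: map each bit to +1/-1, held for samples_per_bit samples."""
    wave = []
    for b in bits:
        wave.extend([1.0 if b else -1.0] * samples_per_bit)
    return wave

def channel(wave, alpha=0.3):
    """First-order low-pass channel model: y[n] = y[n-1] + alpha*(x[n] - y[n-1])."""
    y, out = 0.0, []
    for x in wave:
        y += alpha * (x - y)
        out.append(y)
    return out

def receive(wave, samples_per_bit=8):
    """Slicer: sample each bit near mid-UI and compare against 0 V."""
    mid = samples_per_bit // 2
    return [1 if wave[i + mid] > 0 else 0
            for i in range(0, len(wave) - mid, samples_per_bit)]

bits = [0, 1, 0, 1, 1, 1, 0, 0, 1, 0]
print(receive(channel(transmit(bits))))  # [0, 1, 0, 1, 1, 1, 0, 0, 1, 0]
```

With this generous channel bandwidth the data recovers cleanly; shrinking `alpha` (narrowing the channel bandwidth relative to the data rate) is what produces the inter-symbol interference discussed below.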
In an ideal system, the edges of a digital signal will always occur at integer multiples of the signal period. In a real system, the edges of a digital signal will occur in a distribution around the center point, which is the average period of the digital signal.
Jitter is defined as the variation in the edge placement of a digital signal. Three jitter components are usually specified: jitter generation, jitter tolerance, and jitter transfer. Jitter generation is the amount of jitter created by a device assuming the device's reference clock to be jitter-free. Jitter tolerance is the maximum amount of jitter a device can withstand and still reliably receive data. Jitter transfer is a measure of the amount of jitter transferred from the receive side of a device to the transmit side of a device.
Jitter requirements for high-speed I/O standards vary widely. Deterministic jitter (DJ) is jitter generated by either insufficient channel bandwidth, leading to inter-symbol-interference, or by duty-cycle distortion, which leads to timing errors in data clocking. Random jitter (RJ) is usually assumed to have a Gaussian distribution and is generated by physical noise such as thermal noise. Sinusoidal jitter (SJ) is used to test the jitter tolerance of a receiver across a range of jitter frequencies and is not a jitter type that would be encountered in a deployed system.
Sinusoidal jitter is artificially injected into the receive side of a circuit to measure the performance of the receiver in the presence of the user-defined sinusoidal noise source. With this SJ technique, the receiver's jitter tolerance versus frequency can be measured. RJ is usually calculated as TJ-DJ -- so the amount of RJ is not usually explicitly defined; it is calculated from the amount of total jitter and the amount of deterministic jitter present.
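The TJ-DJ bookkeeping can be shown numerically. In the sketch below, jitter values are in unit intervals (UI); the DJ figure is purely illustrative, and the TJ figure borrows the 0.749 UI GbE total-jitter number cited later in this article. The conversion from peak-to-peak to RMS random jitter uses the conventional dual-Dirac scale factor of about 14.07 for a 1e-12 bit-error rate, which is an assumption of the Gaussian RJ model rather than anything mandated by the standards themselves.

```python
# RJ inferred from total and deterministic jitter, per the TJ - DJ relationship.

Q_BER_1E12 = 14.069  # peak-to-peak = 2*Q*sigma at BER = 1e-12 (dual-Dirac model)

def rj_pk_pk(tj_ui, dj_ui):
    """Random jitter (peak-to-peak, UI) calculated as TJ - DJ."""
    return tj_ui - dj_ui

def rj_rms(tj_ui, dj_ui, q=Q_BER_1E12):
    """RMS random jitter under the Gaussian (dual-Dirac) assumption."""
    return rj_pk_pk(tj_ui, dj_ui) / q

tj, dj = 0.749, 0.40                # TJ from the GbE spec; DJ split assumed
print(round(rj_pk_pk(tj, dj), 3))   # 0.349 UI peak-to-peak
print(round(rj_rms(tj, dj), 4))     # 0.0248 UI RMS
```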
Multiple approaches to meet the jitter requirements can be taken. Since many of these high-bandwidth interfaces use source-synchronous clocks, the jitter in the generated clock is of concern. Such systems benefit from using a high-quality crystal and PLL to generate the board clock used to clock most of the system logic, since clocks recovered from the received data usually have high jitter relative to a quality crystal oscillator.
Pre-emphasis may be applied to the output signals to ensure the received signal has a well-defined shape after the frequency-dependent deleterious effects of the channel are taken into consideration. PLLs required by the clock-and-data-recovery circuits in the receivers must be able to accurately track the input data. The receivers may also use equalization to reshape the received pulse and "open the eye" of the received signal.
The pre-emphasis and equalization techniques described above are methods of pulse-shaping where the shape of the waveform is modified to "open-up" the eye diagram. Pre-emphasis is done by emphasizing the high frequency content of the output waveform and is done by the transmitter. Equalization is done by emphasizing the high frequency content of the input waveform and is done by the receiver. The emphasis on the high-frequency content is required since the channel frequency response is a low-pass response.
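In discrete time, emphasizing the high-frequency content amounts to a short high-pass FIR filter. The two-tap form below, y[n] = x[n] - c*x[n-1], is a common textbook sketch, not the filter of any particular standard, and the tap weight `c` is an assumed value: on a symbol transition the two terms reinforce (large output edge), while on a repeated symbol they partly cancel (de-emphasized level).

```python
# 2-tap FIR emphasis: boost transitions, de-emphasize runs of identical symbols.

def emphasize(symbols, c=0.5):
    """Apply y[n] = x[n] - c*x[n-1] to a +/-1 NRZ symbol stream."""
    out, prev = [], 0.0
    for x in symbols:
        out.append(x - c * prev)
        prev = x
    return out

nrz = [1, 1, 1, -1, -1, 1]
print(emphasize(nrz))  # [1.0, 0.5, 0.5, -1.5, -0.5, 1.5]
```

The same structure serves as transmitter pre-emphasis or, applied at the input, as a simple linear receiver equalizer; in both cases it approximates the inverse of the channel's low-pass response.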
Pre-emphasis/equalization for different standards may not be compatible, however. For example, since the TJ spec for GbE is so large (0.749 UI), pre-emphasis or equalization may be employed. But the pre-emphasis or equalization curve for GbE will not have any beneficial effect for the I/O when it is used in a XAUI application, since the XAUI data rate is almost 3x that of GbE. Now consider the minimum rise time and minimum fall time requirement of GbE: An I/O that adheres to the minimum rise and fall times of GbE will have an edge rate too slow for a faster standard, such as XAUI.
One simple, common pre-emphasis technique is to temporarily increase the rail voltage of the transmitter for 0-1 or 1-0 transitions. With this technique, the rise and fall times of the circuit are accelerated, since after the transition the output is allowed to "settle" to a voltage closer to the common-mode voltage for a continuous run of common symbols. This technique has the advantage of requiring minimal circuit area to implement, since it can be done using digital logic; complex analog filters are not required.
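The digital transition-detect logic behind this technique can be sketched as follows. The boosted and settled swing voltages are made-up example values, not figures from any of the standards above; the point is only the per-bit decision: full swing on a transition, reduced swing on a repeated symbol.

```python
# Transition-detect pre-emphasis: boosted rail for one bit after each 0-1 or
# 1-0 transition, settled (reduced) swing for continued runs. Voltages assumed.

V_BOOST, V_SETTLE = 1.2, 0.8   # illustrative single-ended output swings, volts

def drive(bits):
    """Per-bit output amplitude chosen by comparing each bit with its predecessor."""
    out, prev = [], None
    for b in bits:
        swing = V_BOOST if prev is None or b != prev else V_SETTLE
        out.append(swing if b else -swing)
        prev = b
    return out

print(drive([0, 1, 1, 1, 0, 0, 1]))
# [-1.2, 1.2, 0.8, 0.8, -1.2, -0.8, 1.2]
```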
Consider an example differential I/O architecture used by many CMOS differential circuits. The transmitter may be AC- or DC-coupled to the receiver. For DC-coupling, the transmitter output lines are directly connected to the receiver input lines, so any DC voltage on the transmitter output line is presented to the receiver input line. The common-mode voltage of a DC-coupled receiver will therefore vary as the common-mode voltage of the transmitter varies.
For an AC-coupled link, the transmitter output lines are connected to the receiver input lines through series capacitors, which serve as DC-blockers. An AC-coupled receiver can control its common-mode voltage, since the AC-coupling capacitor serves as a DC block - the transmitter cannot vary the common-mode voltage of the receiver. AC-coupling is possible because the maximum run-length (number of consecutive 1s or 0s) of the subject protocol is limited (the pattern must be DC-balanced). When the maximum run-length of a protocol is too large, AC-coupling is not possible.
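The two pattern properties that make AC-coupling viable, a bounded maximum run length and DC balance, are easy to check programmatically. In the sketch below, the run-length limit of 5 is the figure for 8b/10b coding (as used by GbE and XAUI); the example bit pattern itself is made up for illustration.

```python
# Checks on a bit pattern relevant to AC-coupling: run length and DC balance.

def max_run_length(bits):
    """Longest run of consecutive identical bits."""
    longest = run = 1
    for prev, cur in zip(bits, bits[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

def dc_imbalance(bits):
    """Running disparity, (#ones - #zeros); 0 means perfectly DC-balanced."""
    return sum(1 if b else -1 for b in bits)

pattern = [0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
print(max_run_length(pattern))  # 2 -- well under the 8b/10b limit of 5
print(dc_imbalance(pattern))    # 0 -- DC-balanced, so AC-coupling is viable
```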
The differential transmitter is paired with a differential receiver. However, while the differential transmitter architecture is relatively standardized, there are many different differential receiver architectures in use. Consider a DC-coupled example receiver architecture paired with the differential transmitter.
The receiver architecture (based on OIF SxI-5) must be able to tolerate a range of common-mode voltages. For example, consider SFI-5, which allows VTXDD and VRXDD to vary by up to 10%. In this case, VTXDD may be 1.32V and VRXDD may be 1.08V. Also, the variance in ground potentials may be up to 50mV, so VRXSS may be 50mV less than VTXSS. In this case, the common-mode voltage of the inputs to the receiver will be very close to the rail voltage of the receiver (VRXDD), making the design of the DC-coupled receiver difficult. However, the transmitter design is simplified: since the common-mode voltage of the data lines is pulled to the supply rail by RRXDD, VTXS will also be higher than in a design without this characteristic. The design of ISOURCE will therefore be easier, since the higher the potential difference across the current source, the easier it is to keep the current source transistors operating in the saturated region.
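The supply-variation arithmetic behind those numbers can be worked through directly. The 1.2 V nominal supply below is inferred from the 1.32 V / 1.08 V extremes given above; the combined worst-case disparity between the transmitter and receiver supply references is a straightforward sum.

```python
# Worst-case supply disparity for the SFI-5 example: +/-10% supply tolerance
# plus up to 50 mV of ground-potential offset between transmitter and receiver.

V_NOM, TOL, V_GND_OFFSET = 1.2, 0.10, 0.050   # 1.2 V nominal inferred from text

vtxdd = V_NOM * (1 + TOL)                      # transmitter supply, high extreme
vrxdd = V_NOM * (1 - TOL)                      # receiver supply, low extreme
rail_mismatch = vtxdd - vrxdd + V_GND_OFFSET   # add the ground-potential offset

print(round(vtxdd, 2))          # 1.32 V
print(round(vrxdd, 2))          # 1.08 V
print(round(rail_mismatch, 2))  # 0.29 V worst-case supply disparity
```

It is this 0.29 V of possible rail mismatch, with the input common mode pulled toward the receiver's own (possibly lower) rail, that makes the DC-coupled receiver design difficult.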
To understand the benefits of the receiver architecture, consider the following:
- The output driver source voltage VTXS will be pulled high by RRXDD