Designing 5-Gbit/s Serdes: Steering Through a Road Filled with Potholes
By Ronald Nikel, TriCN, CommsDesign.com
March 27, 2003 (9:04 a.m. EST)
New product designs, almost of necessity, are incessantly pushing the performance envelope. Interfaces play a key role in helping designers knit together function blocks to meet the overall system goals. As the performance of the individual component function blocks rises, the interfaces are challenged to keep pace.
Figure 1: Block diagram of high-performance link architecture.
Systems are currently being designed incorporating PCI Express, XAUI, and other interfaces capable of operating in the multi-gigabit/second range, but these interfaces typically offer data rates topping out at slightly over 3 Gbit/s.
Some designers are now pushing the envelope, with goals of getting to the 5-Gbit/s range. However, reliably achieving these data rates is no simple development task.
In this article, we'll look at the key design issues developers will face when pushing serializer/deserializer (serdes) designs into the 5-Gbit/s range. The discussion will focus on circuit design and implementation as well as system packaging and signal integrity. Let's start by looking at circuit design issues.
It's the Circuit Design that Matters
To break the 5-Gbit/s barrier in CMOS, a 0.13-micron process is effectively required to implement the design with reasonable ease. A 0.18-micron process could be made to work, but building a robust design that delivers commercially acceptable yields from the fab would be very challenging.
Outside of CMOS technology, silicon germanium (SiGe), indium phosphide (InP), or gallium arsenide (GaAs) are good alternatives, although not cost effective for a large integrated device. Because of the popularity and cost advantages of CMOS, this article will focus on the use of that technology in 5-Gbit/s serdes designs.
There are many design challenges at the circuit level, and they can be broken into the following categories (Figure 1):
- Link protocol block
- Data serializer
- Driver and receiver
- Data recovery, deserializer and lane de-skew
- Clock recovery and synthesis
Let's take a look at each of these categories in more detail, starting with the link protocol block.
1. Link Protocol Block
At 5 Gbit/s, it can be assumed that data will have to be encoded, to ensure clean eye patterns for data recovery (more on this later, in the signal integrity section). The typical encoding method will be 8B/10B encoded data packets. Assuming a simple 10:1 serialization ratio (where the bit rate on the serial portion of the link is ten times the parallel word rate), the link protocol engine and encoder technology need to operate at 500 MHz in order to keep up with a 5-Gbit/s line speed.
Alternatively, the data path width would need to increase in order to accommodate a lower core clock frequency. The challenge of using a wider data path is processing larger quantities of data per clock cycle and properly maintaining the data flow. Often, the tradeoff is increasing buffer sizes and potentially adding small amounts of RAM to the controller to maintain the protocol stack. While this was a challenge at 2.5 and 3.125 Gbit/s, the problem only gets worse at 5 Gbit/s or greater channel speeds.
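The tradeoff between serialization ratio and core clock frequency described above reduces to simple arithmetic. The sketch below is illustrative only (the function name is our own, not from the article); it shows why a 10:1 serializer at 5 Gbit/s demands a 500 MHz parallel clock, and how widening the data path relaxes that requirement:

```python
def core_clock_mhz(line_rate_gbps, serialization_ratio):
    """Parallel-side word clock (MHz) needed to sustain a given serial line rate."""
    return line_rate_gbps * 1000.0 / serialization_ratio

# 10:1 serialization at 5 Gbit/s requires a 500 MHz core clock
print(core_clock_mhz(5.0, 10))   # 500.0

# Doubling the data path width to 20 bits halves the core clock requirement,
# at the cost of processing more data per cycle and larger buffers
print(core_clock_mhz(5.0, 20))   # 250.0
```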
2. Data Serializer
Moving from 2.5- or 3.125-Gbit/s interfaces to those operating at 5 Gbit/s involves, in some cases, doubling the data transfer rate. Although, as mentioned earlier, migrating to finer line widths offers speed improvements, moving from a 0.18-micron process to a 0.13-micron process will not by itself come close to the 2X improvement in speed desired.
Therefore, either innovations in the digital serializer are needed to implement logic operating at a cycle time of less than 200 ps, or novel implementations are needed that greatly reduce the serializer's internal frequencies by using multiple phases of a lower-frequency clock.
The less attractive strategy is to use a purely analog approach. While this will improve the upper frequency of operation, it certainly does not come without a cost or risk. The cost is in power consumption and circuit area. The risk is that the analog circuitry is difficult to debug and test, and is more susceptible to power supply noise.
3. Driver and Receiver Technology
At present, most serdes are using one of two types of signaling at data rates greater than 3.125 Gbit/s: current mode logic (CML) or a type of multi-bit signal encoding. Like many things in the design world, each signaling scheme has its pluses and minuses.
CML signaling is rather well understood, and features can be added to improve the effects on eye pattern jitter through changes to both the driver and receiver. Driver-side pre-emphasis and receiver passive or active equalization circuits are common techniques implemented to improve (raise) the upper frequency of operation. However, there is a point at which these enhancements may not be enough, given the complexity of the transmission media above 1 GHz (for more on this, see the section on signal integrity).
Where CML signaling sends one bit per symbol, multi-bit signal encoding packs multiple bits into each symbol by encoding the multiple bits into unique voltage/current values on the lines. At the receiver, signal-processing techniques are used to convert these complex symbols back into a standard binary bit-stream.
The advantage of multi-bit signaling is that, for a given data transfer rate, a lower symbol rate can be used compared to what would be needed for CML. The main challenge of multi-bit signaling is the loss of noise margin, due to the fact that this signaling method does not provide as much of a voltage step between the possible symbol values. (CML, being binary, can make use of the full voltage swing available for the channel. Multi-bit signaling breaks the overall voltage range into several smaller steps in order to encode the more complex symbols.) An additional challenge for multi-bit signaling is the high frequencies at which the signal processor on the receiver side must operate in order to resolve the input signal into the correct digital values.
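The symbol-rate and noise-margin tradeoff described above can be quantified with two one-line formulas. The numbers below are illustrative, assuming a hypothetical 800 mV channel swing (not a figure from the article):

```python
import math

def symbol_rate_gbaud(bit_rate_gbps, levels):
    """Symbol rate when each symbol carries log2(levels) bits."""
    return bit_rate_gbps / math.log2(levels)

def voltage_step_mv(swing_mv, levels):
    """Voltage separation between adjacent symbol levels."""
    return swing_mv / (levels - 1)

# Binary (CML-style) vs. 4-level signaling at 5 Gbit/s:
print(symbol_rate_gbaud(5.0, 2))   # 5.0 GBd
print(symbol_rate_gbaud(5.0, 4))   # 2.5 GBd -- half the rate on the wire

# ...but the noise margin shrinks accordingly (800 mV assumed swing):
print(voltage_step_mv(800, 2))     # 800.0 mV between levels
print(voltage_step_mv(800, 4))     # ~266.7 mV -- one third of the binary step
```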
Regardless of which signaling method is used, the transfer jitter and power supply noise sensitivity of both types of circuits are major design problems that must be solved. Transfer jitter of the driver and receiver can be a major portion of the overall system-timing budget, and a direct driver of eye closure in the system.
4. Data Recovery, Deserializer, and Lane De-Skew
At data rates in excess of 5 Gbit/s, cycle time is less than 200 ps. This makes the task of accurately placing the recovered clock edge in the middle of the data eye extremely challenging. In most serial standards, eye closure at the input of the data recovery block can be as high as 140 ps, leaving less than 60 ps of open eye. Analog and digital implementations are challenged with maintaining good tracking of the line data clock and keeping the data signal locked to that clock, in order to recover the data.
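The timing budget in the paragraph above works out as follows (a back-of-envelope sketch; the 140 ps closure figure is the article's, the helper function is ours):

```python
def open_eye_ps(bit_rate_gbps, eye_closure_ps):
    """Eye opening left over after subtracting total eye closure from the unit interval."""
    unit_interval_ps = 1000.0 / bit_rate_gbps
    return unit_interval_ps - eye_closure_ps

# At 5 Gbit/s the unit interval is 200 ps; 140 ps of eye closure leaves
# less than 60 ps in which to place the recovered clock edge
print(open_eye_ps(5.0, 140))   # 60.0
```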
Much like the serializer, the data path, once deserialized in an 8B/10B-encoded system, will be operating at very high frequencies. For example, 500 MHz clock frequencies are required for a 1:10 system.
Another challenge for a well-designed serdes is the framing of the 10B serial stream. The framing can be done in the link controller block, but this usually increases the amount of buffering required inside the controller.
The alternative to putting the framing function in the link controller block is to implement it in the data recovery deserializer block. However, the challenge of implementing serial line framing logic that operates at 5 Gbit/s is a logic designer's nightmare. But, while it may be difficult, implementing it in the deserializer block will save a lot of die area and power consumption in the link controller, and improve its design by reducing the data processing burden on the overall system.
For multi-ported interfaces, lane de-skew challenges are similar to the problems of deserializing data. This de-skewing is done in the high frequency domain (i.e., at the 5-Gbit/s rate), and large amounts of skew will increase the amount of buffering required to properly de-skew individual channels in a multi-ported design.
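The relationship between lane skew and buffering can be sketched directly: the elastic buffer on each lane must be at least deep enough to absorb the worst-case skew, measured in unit intervals. The function and the 2 ns skew figure below are illustrative assumptions, not values from the article:

```python
import math

def deskew_buffer_bits(max_skew_ps, bit_rate_gbps):
    """Minimum per-lane buffer depth (in bits) to absorb worst-case lane skew."""
    unit_interval_ps = 1000.0 / bit_rate_gbps
    return math.ceil(max_skew_ps / unit_interval_ps)

# A hypothetical 2 ns of lane-to-lane skew at 5 Gbit/s (200 ps unit interval)
# requires at least 10 bits of elastic buffering per lane
print(deskew_buffer_bits(2000, 5.0))   # 10
```

The same skew at a higher line rate costs proportionally more buffering, which is why large skew is more expensive in a multi-ported 5-Gbit/s design.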
5. Clock Recovery
Clock recovery at high frequencies puts a large burden on the system's ability to lock the clock to the incoming data stream quickly and to reduce overall cycle-to-cycle jitter, in order to improve the jitter tolerance of the circuit. At 5-Gbit/s data rates, minimizing cycle-to-cycle jitter will be the dominant issue in any type of design.
Noise on the power supply creates an additional issue for designers. This noise increases the likelihood that the recovered clock will have significant amounts of jitter, making jitter tolerance on the data stream even more difficult to implement. Designers will therefore want to make every effort to ensure that power supply noise is kept to an absolute minimum.
System Packaging and Signal Integrity
Now that we've laid out the circuit design issues engineers will face when building 5-Gbit/s backplanes, let's turn to packaging and signal integrity.
System packaging is one of the major stumbling blocks for designers. Lossy effects from packaging, printed circuit board (PCB), cables, and connectors reduce the overall signal quality and amplitudes. This problem can be mitigated by migrating to more exotic materials and packaging techniques, though all of these solutions are costly and not well suited to volume applications. Therefore, constraining the system design to reduce the transmission of these extraneous high-speed signals is even more important, as is working with interconnect and packaging manufacturers to qualify and bring to market lower-loss materials.
The much-overlooked problem of chip packaging can no longer be ignored. At multi-gigabit data rates, more effort must be put into packaging design to solve the problems of power distribution to the chip with low parasitic resistance and inductance, in order to reduce voltage drop and AC ripple effects on the supply. These would otherwise translate into jitter within the system (Figure 2).
Figure 2: Comparison of eye patterns for a 3.125- and 5-Gbit/s interface over a 20-in. trace. Note increased closure of eye pattern in 5 Gbit/s interface due to lossy effects.
All of these system-packaging problems directly impact signal integrity. The most difficult system signal integrity problem is lossy interconnect effects. But a secondary issue is the need for impedance-controlled, well-coupled signaling environments, which yield better energy transfer curves throughout the system and also lead to a reduction in electromagnetic interference (EMI) and common-mode noise in these high-frequency systems.
The challenge to reduce common-mode noise has always been present in high-speed differential systems, but with the frequency content of signals pushing into the 10 GHz range, the effect of common-mode noise is even more damaging on eye opening than with prior systems. By tackling the common-mode noise issue, the designer will directly address the issue of radiated EMI.
Breaking New Ground
Pushing high-speed serial interfaces to the 5-Gbit/s range is not a quixotic quest. It can be done, but it is certainly a non-trivial endeavor. One of the major decisions designers must make is an architectural one: whether to implement the framing operation in the link controller block or, alternatively, inside the data recovery deserializer block. It is this author's contention that taking the latter approach, while seemingly the tougher challenge due to the 5-Gbit/s serial data rate, can result in reduced overall circuit size and power consumption, for reasons discussed earlier.
Most designers will employ 0.13-micron CMOS process technology to take full advantage of the performance gains it offers compared to 0.18-micron processes. But remember: the process "shrink" is not a "magic bullet" that alone can get a 2.5-Gbit/s interface to 5 Gbit/s.
Perhaps, more than anything else, pushing serial interfaces into the 5 Gbit/s range will bring the problems of lossy effects from IC packaging, boards, cables and connectors right to the fore. Designers need to take pains to ensure that (as one example) differential circuits adhere as closely as possible to the differential ideal. In other words, paths for such circuits should be routed together (i.e., the two signal lines should not take different routes, be of different lengths, or pass through differing numbers of gates).
Every effort should also be made to minimize impedance discontinuities as the signal flows through the circuit(s). Maintaining a uniform impedance will improve signal integrity by minimizing reflections. It will also result in lower radiated EMI, thus helping to meet another design challenge.
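The cost of an impedance discontinuity can be estimated with the standard voltage reflection coefficient from transmission-line theory. The 50-ohm/60-ohm example below is a hypothetical mismatch, chosen only to illustrate the formula:

```python
def reflection_coefficient(z_load, z_line):
    """Voltage reflection coefficient at a step from z_line to z_load (ohms)."""
    return (z_load - z_line) / (z_load + z_line)

# A 50-ohm trace hitting a hypothetical 60-ohm via region reflects
# about 9% of the incident wave back toward the driver
print(reflection_coefficient(60.0, 50.0))   # ~0.091

# A perfectly matched transition reflects nothing
print(reflection_coefficient(50.0, 50.0))   # 0.0
```

Each such reflection shows up as eye closure at the receiver, which is why uniform impedance along the entire path matters so much at these rates.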
In the rarified realm of 5-Gbit/s serial interfaces, we find that working examples of such interfaces that exist today are proprietary. Designers will use these to connect function blocks within their own circuits. For these applications, it usually doesn't matter if the interface is not standards-based. Over time, we can expect to see interface standards emerge that can meet these data rates. Such a development will allow designers to more easily combine function blocks from a variety of sources.
About the Author
Ronald Nikel is the CTO of TriCN Inc. He received his M. Eng. in Electrical Engineering from Cornell University and can be reached at firstname.lastname@example.org.
Copyright © 2003 CMP Media, LLC