# Tutorial on PLLs: Part 2


James A. Crawford, Silicon RF Systems

May 06, 2004 (4:00 AM)

URL: http://www.commsdesign.com/showArticle.jhtml?articleID=19502356

In Part 1 of this article, we looked at a number of phase-locked loop (PLL) concepts ranging from continuous and sampled control systems to estimation-theory-based perspectives. Now, in Part 2, we will continue our examination of the PLL in the estimation-theory sense by looking at maximum a posteriori (MAP)-based PLLs and the fundamental performance limits described by the Cramer-Rao bound. Our theoretical treatment will culminate with a Kalman filtering perspective on the PLL. The balance of this article applies the PLL concept to several real-world applications.

**MAP-based Estimators**

The MAP estimator form is used for the estimation of random parameters whereas the maximum-likelihood (ML) form is generally associated with the estimation of deterministic parameters. From Bayes' rule, we know that, given an observation z, *Equation 1* holds.

Equation 1 can be rewritten in logarithmic form as shown in *Equation 2* below.

This log probability may be maximized by setting the derivative with respect to θ to zero thereby creating the necessary condition that is shown in *Equation 3* below.

If the density p(θ) is not known, we are forced to ignore the second term in Equation 3, which leads naturally to the maximum-likelihood form as shown in *Equation 4* below.
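Equations 1 through 4 were published as images and did not survive this transcription. Standard forms consistent with the surrounding discussion are sketched below (the notation is this transcription's, not necessarily the author's exact rendering):

```latex
% Equation 1 (Bayes' rule):
p(\theta \mid z) = \frac{p(z \mid \theta)\, p(\theta)}{p(z)}
% Equation 2 (logarithmic form):
\ln p(\theta \mid z) = \ln p(z \mid \theta) + \ln p(\theta) - \ln p(z)
% Equation 3 (MAP necessary condition):
\left. \frac{\partial}{\partial \theta}\Big[ \ln p(z \mid \theta) + \ln p(\theta) \Big] \right|_{\theta = \hat{\theta}_{MAP}} = 0
% Equation 4 (ML condition, prior term dropped):
\left. \frac{\partial}{\partial \theta} \ln p(z \mid \theta) \right|_{\theta = \hat{\theta}_{ML}} = 0
```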

Although the MAP and ML estimators are not the same in the strict sense, the MAP estimator takes on the form of the maximum-likelihood (ML) estimator in the absence of sufficient prior knowledge of θ.

The similarities between the minimum mean-square error (MMSE), ML, and MAP estimators should not go unnoticed. In the Gaussian noise case, the observed signal is given by:

in which v(t) is the noise and θ is the nonrandom parameter of interest, it can be shown that the ML estimate for θ is given by:

In the jointly Gaussian case where θ is assumed to be a random parameter having a variance of σ_{θ}^{2}, use of Equation 3 leads to the result that:
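The signal model and the two estimates referenced above were published as images. For the scalar constant-in-noise model that the surrounding text implies, the textbook results are (σ_{n}^{2} here denotes the effective noise variance of the averaged observation, an assumption of this sketch):

```latex
% Assumed observation model:
z(t) = \theta + v(t), \qquad 0 \le t \le T
% ML estimate (simple time average):
\hat{\theta}_{ML} = \frac{1}{T}\int_{0}^{T} z(t)\, dt
% MAP estimate for a zero-mean Gaussian prior with variance \sigma_{\theta}^{2}:
\hat{\theta}_{MAP} = \frac{\sigma_{\theta}^{2}}{\sigma_{\theta}^{2} + \sigma_{n}^{2}}\, \hat{\theta}_{ML}
```

As σ_{θ}^{2} → ∞ (no useful prior knowledge), the MAP estimate collapses to the ML estimate, consistent with the discussion above.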

These two results illustrate how similar the ML and MAP estimates can appear. The Fundamental Theorem of Estimation Theory^{22} states that the estimator that minimizes the mean-square error is given by:
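The Fundamental Theorem of Estimation Theory result referenced above was published as an image; its standard statement is the conditional-mean estimator:

```latex
\hat{\theta}_{MMSE} = E[\theta \mid z] = \int \theta\, p(\theta \mid z)\, d\theta
```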

It can also be shown that the best linear unbiased estimator (BLUE) form takes on the form of the weighted least-squares estimator given that the proper sample weighting is applied.

In Part 1 of this article, we found that the ML estimator for the signal phase utilized a gradient error metric that sought to drive any quadrature (or orthogonal) signal components to zero. This is not unlike the Orthogonality Principle in estimation theory, which stipulates that any residual estimation error should be orthogonal (i.e., uncorrelated) with the observations as:
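The orthogonality condition referenced above was published as an image; the standard statement is that the estimation error is uncorrelated with every observation:

```latex
E\!\left[(\theta - \hat{\theta})\, z_{k}\right] = 0 \qquad \text{for each observation } z_{k}
```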

More information on the fundamentals of estimation theory can be found in References 20, 21, and 22.

**Performance Limits From the Cramer-Rao Bound**

In the case of unbiased estimators for non-random parameters, the Cramer-Rao Bound (CRB) provides a lower bound for the estimation error variance achievable. One of the most important aspects of the CRB and bounds like it is that the difficulty of a given design objective can be very quickly judged by comparing a given requirement with the appropriate bound. In other words, it can be embarrassing to find out after already expending a great deal of time and effort on a problem that the requirement runs contrary to the laws of physics.

In the single-parameter case, the CRB is usually presented in two different equivalent forms as:^{20,21,22}
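The two equivalent single-parameter CRB forms referenced above were published as images; the standard expressions are:

```latex
\operatorname{var}(\hat{\theta}) \;\ge\;
\left\{ E\!\left[\left(\frac{\partial \ln p(z \mid \theta)}{\partial \theta}\right)^{2}\right] \right\}^{-1}
\;=\;
\left\{ -\,E\!\left[\frac{\partial^{2} \ln p(z \mid \theta)}{\partial \theta^{2}}\right] \right\}^{-1}
```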

When multiple parameters are being estimated (e.g., amplitude, phase, frequency), the CRB is expressed in terms of the Fisher information matrix, whose inverse lower-bounds the error covariance. The interested reader should consult References 19, 20, 21, and 22 for additional information on this topic.

It is of interest to compare the MAP and PLL phase estimators in terms of some performance measures. In order to do this, we first obtain the variance of each estimator. As developed in Part 1, the steady-state first-order (and second-order) PLL phase estimator probability density is taken to be the Tikhonov probability density function that is given by:

where I_{0}() is the zeroth-order modified Bessel function of the first kind and α is the SNR within the PLL. The variance for the PLL estimator is given by:^{13}
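The density and variance expressions referenced above were published as images. The Tikhonov density and its variance series are standard results (the series form is a reconstruction consistent with Reference 13, not necessarily the author's exact rendering):

```latex
% Tikhonov phase-error density:
p(\theta) = \frac{\exp(\alpha \cos\theta)}{2\pi\, I_{0}(\alpha)}, \qquad |\theta| \le \pi
% Corresponding phase-error variance:
\sigma_{\theta}^{2} = \frac{\pi^{2}}{3} + 4 \sum_{k=1}^{\infty} \frac{(-1)^{k}\, I_{k}(\alpha)}{k^{2}\, I_{0}(\alpha)}
```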

These results and the linear result in which σ_{θ2} =α^{-1} are plotted for comparison purposes in **Figure 1**.

**Kalman Filtering**

The PLL can be cast as an extended Kalman filter and hence it is an approximate solution for the optimal nonlinear filtering problem.^{9} The Kalman filter is a set of mathematical equations that provides an efficient recursive means to estimate the state x of a process in a way that minimizes the mean of the squared error. The filter is very powerful in several respects: it supports estimation of past, present, and even future states.^{10}

The Kalman filter addresses the general problem of trying to estimate the state x of a discrete-time controlled process that is governed by the linear stochastic difference equation of the form:

with measurements given by:

The A, B, and H matrices can be time-varying but are shown here as constant. The random vectors w_{k} and v_{k} represent the process and measurement noise respectively, and they are assumed to be statistically independent with covariance matrices Q and R respectively. Vector u_{k} is the input to the system.

The mean-squared filtered estimate of the system state at time k+1 is represented by x̂_{k+1}, and it can be written in predictor-corrector form as shown in **Figure 2**.
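The predictor-corrector recursion of Figure 2 can be illustrated concretely with a scalar sketch, assuming the state model x_{k+1} = A·x_{k} + B·u_{k} + w_{k} and measurement z_{k} = H·x_{k} + v_{k} described above. All numeric values below (Q, R, the tracked constant, the gains) are illustrative choices, not taken from the article:

```python
import random

def kalman_step(x_est, P, z, u, A=1.0, B=0.0, H=1.0, Q=1e-5, R=0.04):
    """One predictor-corrector iteration of a scalar Kalman filter."""
    # Time update (prediction)
    x_pred = A * x_est + B * u            # project the state ahead
    P_pred = A * P * A + Q                # project the error covariance ahead
    # Measurement update (correction)
    K = P_pred * H / (H * P_pred * H + R)     # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)     # correct with the innovation
    P_new = (1.0 - K * H) * P_pred            # updated error covariance
    return x_new, P_new

# Track a constant state x = 1.25 observed in Gaussian noise
random.seed(1)
truth = 1.25
x_est, P = 0.0, 1.0
for _ in range(500):
    z = truth + random.gauss(0.0, 0.2)
    x_est, P = kalman_step(x_est, P, z, u=0.0)
```

A second-order DPLL maps naturally onto a two-state (phase, frequency) version of this same recursion.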


The similarities of the Kalman filter with other time-stationary results are striking. In the case of the best linear unbiased estimate (BLUE), its recursive structure may be written in an almost identical form, except that there are no time-prediction steps since no system state model is available to the BLUE. Its formulation is shown in **Figure 3**.

The recursive Kalman equations lend themselves very easily to implementation within a second-order digital phase-locked loop (DPLL). This has been done before in References 11 and 12.

Although the Kalman filter requires current information about the noise covariance in its execution through the Q and R matrices, it is particularly adept at improving the tracking performance in situations which are not time-stationary. As the structure content of the signal being tracked increases, the Kalman filter can deliver substantial performance gains over other methods that do not exploit state estimation.

**PLL Applications**

*1. RF Frequency Synthesis Using Charge-Pump Based PLLs*

The National Semiconductor line of PLLatinum frequency synthesizer devices revolutionized the world of frequency synthesis in the early 1990s by delivering unprecedented, predictably reliable phase noise performance in highly integrated, low-cost devices. These devices offered a host of attractive features, most notable being the low phase noise performance of the charge-pump phase detector along with the very low reference spur levels achievable. The excellent balance and leakage characteristics of the phase detector made it possible to implement a complete PLL with a very modest loop filter.

*1. Classic Type-2 Charge-Pump Implementation*

The circuit schematic for the classic type-2 fourth-order PLL is shown here in **Figure 4**. The current noise sources associated with each resistor are shown shunted across each respective resistor, and the reference-related and voltage-controlled oscillator (VCO) self-noise make up the remainder of the major noise contributors.

The normal approach taken to analyze this kind of system is to solve the nodal equations algebraically for the appropriate transfer functions.^{4} A streamlined approach is taken here: the same nodal equations are used, but the customary algebraic manipulations are avoided by using matrix methods. The matrix equation that describes the circuit in Figure 4 can be quickly written down in Laplace-transform form as:

where G_{i} = (R_{i})^{-1} and the I_{j} represent the Johnson current noise sources associated with each resistor. Analysis tools like Matlab and Mathcad can be used to numerically solve this equation for the transfer functions of interest and for closed-loop noise performance quantities. The noise current for the j^{th} resistor is given as:

and all of the noise sources are assumed to be statistically independent.
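The noise-current expression itself did not survive this transcription; it is the familiar Johnson-noise result, expressed per hertz of bandwidth:

```latex
\overline{I_{j}^{2}} = \frac{4kT}{R_{j}} = 4kT\,G_{j} \qquad \left[\text{A}^{2}/\text{Hz}\right]
```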

The phase detector referenced phase noise floor for the National Semiconductor Platinum series devices is given by:

where L_{Floor} = -205/-210/-211/-218 dBc/Hz for the LMX2315/LMX2306/LMX2330/LMX2346 devices respectively. This model or another can be used for the reference noise level represented in Equation 15 by θ_{rn}. Leeson's model can similarly be used for the VCO self-noise term represented by θ_{vn} in Equation 15, recognizing that this noise contribution is frequency-dependent as given by:

in which F is the noise factor, k is Boltzmann's constant, T_{o} = 290 K, P_{0} is the power extracted from the resonator in watts, F_{c} is the VCO center frequency in Hz, and Q_{L} is the loaded resonator Q-factor. Additional terms are often added within the parentheses to account for 1/f noise, etc., but these rarely survive the closed-loop action of the PLL and have consequently not been included here.
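Neither the phase detector noise-floor formula nor the Leeson expression survived this transcription. Reconstructions consistent with the definitions above are sketched below; the noise-floor form follows Banerjee-style models (F_{comp}, the phase-comparison frequency, is an assumed symbol), and the author's exact published forms may differ:

```latex
% Phase-detector-referenced noise floor (add 20*log10(N) when referred to the VCO output):
L_{PD}(f) \approx L_{Floor} + 10\log_{10}(F_{comp}) \qquad [\text{dBc/Hz}]
% Leeson's model at offset frequency f_m from the carrier:
\mathcal{L}(f_m) = 10\log_{10}\!\left[ \frac{F\,k\,T_{o}}{2 P_{0}}
\left( 1 + \left( \frac{F_{c}}{2 Q_{L} f_{m}} \right)^{2} \right) \right]
```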

The transient response of the PLL to a step change in phase or frequency can be similarly computed using numerical techniques. The approach taken here is to substitute the Laplace transform of a step-frequency error given by 2πΔF/s^{2} in for θ_{vn} in Equation 15, and then compute the Fourier transform for θ_{VCO} at an equally spaced grid of frequencies, from which the inverse FFT provides the time-domain response. An example time-domain response is shown in **Figure 5**. The detailed derivations for both of these example results have purposely been omitted for brevity, but they can be found online.^{6}
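The frequency-domain route to the transient response can be sketched numerically. The loop below is an illustrative stand-in, not the Figure 5 design: a critically damped type-2 second-order loop (ω_n = 1 rad/s) driven by a frequency step ΔF has phase-error spectrum Θ_e(s) = 2πΔF/(s + ω_n)², and a direct numerical inverse-Fourier sum stands in for the article's inverse FFT:

```python
import cmath
import math

def phase_error_spectrum(w, wn=1.0, dF=1.0 / (2 * math.pi)):
    # Theta_e(s) = 2*pi*dF / (s + wn)^2 for a critically damped type-2 loop
    s = 1j * w
    return 2 * math.pi * dF / (s + wn) ** 2

def phase_error_time(t, w_max=500.0, dw=0.01):
    # e(t) = (1/pi) * Integral_0^inf Re[Theta_e(jw) * e^{jwt}] dw  (e(t) is real)
    n = int(w_max / dw)
    total = 0.0
    for k in range(n + 1):
        w = k * dw
        weight = 0.5 if k in (0, n) else 1.0   # trapezoid rule endpoints
        total += weight * (phase_error_spectrum(w) * cmath.exp(1j * w * t)).real
    return total * dw / math.pi

# For this loop the analytic answer is e(t) = 2*pi*dF * t * exp(-wn*t);
# the numerical inversion should agree closely with it.
print(phase_error_time(1.0), 1.0 * math.exp(-1.0))
```

In practice an FFT over the full frequency grid replaces the slow direct sum, exactly as the article describes.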

Several design procedures are available for designing "optimal" PLL loop filters. Whenever the word optimal is used however, designers should ask the question, "Optimal with respect to what criteria?"

Some communication systems are primarily concerned with frequency error whereas others are concerned with phase error. If the wrong criterion is adopted, the design can easily become more difficult than necessary. It is therefore very attractive to have an interactive tool that permits simultaneous examination of both the time domain and the output spectrum.

**Phase Noise Impact on Communication Systems**

Phase noise manifests itself primarily as two undesired phenomena in wireless communications and frequency synthesis. Close-in^{1} phase noise interferes with coherence in the receiver and, in the case of QAM signal constellations, makes adjacent constellation points more likely to be received in error when channel noise is present. (Amplitude noise at the output of a properly designed synthesizer should be 20 dB or more below the phase noise level, and is therefore rarely a consideration.)

In frequency-modulated systems, phase noise can be equivalently expressed as residual FM noise, and it similarly creates uncertainty in the receiver as to which frequency was actually sent by the transmitter. Far-out phase noise degrades channel selectivity, adjacent channel occupancy, and receiver third-order intercept point due to reciprocal mixing. Only a few of the most common digital communication waveforms will be considered here due to space limitations.

The close-in phase noise impairment to (uncoded) bit error rate performance is most often computed using the Tikhonov probability density function for the noise. For large signal-to-noise ratio (SNR) arguments, numerical evaluation of the zeroth-order Bessel function can become problematic, and it is more convenient to closely approximate this density function as:

in which σ_{θ}^{2} is the variance of the phase noise process involved. This variance is normally calculated as:

where L(f) is the phase noise power spectral density of the local oscillators involved in rad^{2}/Hz, F_{S} is the symbol rate, and F_{L} is a lower frequency limit normally given as 0.01 F_{S} or thereabouts, depending upon the carrier-recovery baseband processing that may be present in the complete system. These definitions apply to a single-carrier system but require some extension in the case of multi-carrier systems like orthogonal frequency division multiplexing (OFDM). In the case of QAM-style digital modulations, the (uncoded) symbol error rate can then be computed as:
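The three expressions referenced above were published as images. Reconstructions consistent with the definitions just given are sketched below; the Gaussian approximation to the Tikhonov density is standard, while the integration limits and single-sideband factor of 2 in the variance integral are this transcription's assumptions:

```latex
% Large-SNR Gaussian approximation to the Tikhonov density:
p(\theta) \approx \frac{1}{\sqrt{2\pi\,\sigma_{\theta}^{2}}}
\exp\!\left( -\frac{\theta^{2}}{2\sigma_{\theta}^{2}} \right), \qquad |\theta| \le \pi
% Phase-noise variance:
\sigma_{\theta}^{2} = 2 \int_{F_{L}}^{F_{S}} L(f)\, df
% Averaged symbol error rate (evidently Equation 21):
P_{s} = \int_{-\pi}^{\pi} P_{s}(e \mid \theta)\, p(\theta)\, d\theta
```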

Some results computed in this manner may be found in Reference 8. The conditional symbol-error-rate formulas for coherent binary phase-shift keying (BPSK) and quadrature PSK (QPSK) performance are respectively given as:
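The conditional error-rate formulas referenced above were published as images; the textbook forms for coherent BPSK and QPSK with a static phase error θ are:

```latex
P_{BPSK}(e \mid \theta) = Q\!\left( \sqrt{\tfrac{2E_{b}}{N_{0}}}\, \cos\theta \right)
\qquad
P_{QPSK}(e \mid \theta) =
\tfrac{1}{2} Q\!\left( \sqrt{\tfrac{2E_{b}}{N_{0}}} \left( \cos\theta - \sin\theta \right) \right)
+ \tfrac{1}{2} Q\!\left( \sqrt{\tfrac{2E_{b}}{N_{0}}} \left( \cos\theta + \sin\theta \right) \right)
```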

The conditional symbol error rate relationships for other square-QAM signal constellations like 64-QAM can be found in Reference 7.

It should come as no surprise that BPSK shows little susceptibility to phase-noise-related performance loss, as shown in **Table 1**, since it is essentially an antipodal amplitude modulation: a small phase error scales the decision statistic only by cos θ. Performance degrades significantly as the signal constellation size increases, culminating in sub-one-degree rms phase noise being desirable for 64-QAM in order to avoid appreciable E_{b}/N_{o} loss.

**Table 1: QAM Uncoded Symbol Error Rate with Phase Noise**


In the case of carrier recovery, in which coherent demodulation is to be performed on QAM-type signals, the Costas loop has found widespread use as an unbiased, low-variance, practical solution. It can be shown that the Costas loops for BPSK and QPSK are equivalent to second- and fourth-power nonlinearities followed by a PLL. Block diagrams for the BPSK and QPSK Costas loops are shown in **Figure 6** and **Figure 7** respectively.^{14,15}

^{14}

^{15}

**Symbol Timing Recovery**

Symbol timing recovery is required in both wired and wireless systems and quite often employs PLL-like circuitry or algorithms. Timing errors in this process lead to (i) a loss associated with missing the correlation peak from the receive matched filter and (ii) additional inter-symbol interference (ISI) from time-adjacent data symbols. In order to properly design the time-tracking loop, we must first know the conditional error probability associated with a static timing error so that it can be used in a manner very similar to that of Equation 21.

In the example we will consider, the transmit pulse shape is assumed to be a square-root raised-cosine pulse having an excess-bandwidth parameter α of 0.50. For the other filters used in this article, the product of filter bandwidth and symbol duration is referred to as BT. The Fourier transform for such a pulse is given by:
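The transform itself (Equation 23 in the original) did not survive this transcription; the standard square-root raised-cosine spectrum, with T the symbol period, is:

```latex
P(f) =
\begin{cases}
\sqrt{T}, & |f| \le \dfrac{1-\alpha}{2T} \\[1ex]
\sqrt{ \dfrac{T}{2}\left[ 1 + \cos\!\left( \dfrac{\pi T}{\alpha}
  \left( |f| - \dfrac{1-\alpha}{2T} \right) \right) \right] },
  & \dfrac{1-\alpha}{2T} < |f| \le \dfrac{1+\alpha}{2T} \\[1ex]
0, & \text{otherwise}
\end{cases}
```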

The receiver is assumed to use a continuous-time N=3 Butterworth filter as a close approximation for the ideal matched filter and its Fourier transform may be written as:

where ω_{c} is the -3 dB corner frequency in rad/sec. In the absence of noise, the individual pulse shape observed at the output of H_{rx}( ) may be directly computed by multiplying Equations 23 and 24 together in the frequency domain and performing an inverse FFT.
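The Butterworth response referenced as Equation 24 can be reconstructed from its standard N = 3 form:

```latex
H_{rx}(s) = \frac{\omega_{c}^{3}}{(s + \omega_{c})(s^{2} + \omega_{c} s + \omega_{c}^{2})},
\qquad
|H_{rx}(j\omega)|^{2} = \frac{1}{1 + (\omega/\omega_{c})^{6}}
```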

In the case where H_{rx} has BT = 0.50, this pulse shape is shown in **Figure 8**. In the absence of any timing error, the desired signal sample occurs coincident with the peak of the pulse as shown in Figure 8. ISI is clearly present, however, because sample values at time instants offset by ±kT_{sym} are not all zero (k a nonzero integer). For random data, these nonzero adjacent-symbol samples create data-dependent noise at the receiver's decision-making hardware, reducing the signal eye-opening and thereby degrading system performance.

Insight into the ISI matter can be had by considering all possible four-symbol sequences and the eye diagram observed at the receive filter output. Eye diagrams for the α = 0.50 and 0.40 cases are shown in **Figures 9** and **10** respectively. The ISI and eye-closure are substantially worse for the α = 0.40 case as shown.

The symbol error rate for random +/-1 data can be mathematically computed by recognizing that the decision statistic consists of three components: (1) the desired signal, (2) additive Gaussian noise, and (3) data pattern-dependent ISI. Characteristic function methods may be used to combine the effects of the ISI and Gaussian noise as described in Reference 16 leading to the symbol error rate expression with static timing error τ_{e} being given as:

where r(t) is the noise-free single-pulse shape at the receive filter output, σ^{2} is the variance of the Gaussian noise at the receive filter output, and C(ω) is the characteristic function of the ISI noise. This can be shown to be given by:
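Equations 26 and 27 were published as images. Forms consistent with the characteristic-function method of Reference 16 are sketched below; these are reconstructions under the stated ±1-data assumption, not necessarily the author's exact expressions:

```latex
% Equation 26 (error probability with static timing error \tau_e):
P(e \mid \tau_{e}) = \frac{1}{2} - \frac{1}{\pi}
\int_{0}^{\infty} \frac{\sin[\omega\, r(\tau_{e})]}{\omega}\,
e^{-\omega^{2}\sigma^{2}/2}\, C(\omega)\, d\omega
% Equation 27 (ISI characteristic function for equiprobable \pm 1 data):
C(\omega) = \prod_{k \neq 0} \cos\!\left[ \omega\, r(\tau_{e} + k T_{sym}) \right]
```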

The variance quantity specified in Equation 26 can be found from the equivalent noise bandwidth of the receive filter as:

where N_{o} is the one-sided Gaussian white noise power spectral density.
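The variance expression itself was published as an image; in terms of the equivalent (one-sided) noise bandwidth B_{N} of the receive filter, the standard result is:

```latex
\sigma^{2} = N_{o}\, B_{N}, \qquad
B_{N} = \frac{1}{|H_{rx}(0)|^{2}} \int_{0}^{\infty} |H_{rx}(f)|^{2}\, df
```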

The foregoing results were used to compute the effect of timing error on symbol error rate performance as a function of the receive filter BT product, as shown in **Figure 11**. The optimal value for best performance is BT = 0.50, which leads to a performance loss of only about 0.25 dB at an input E_s/N_o of 9.6 dB, which is quite remarkable.

The curves shown in Figure 11 can indirectly provide the needed conditional error probability like that used in Equation 21 thereby allowing the complete impact of imperfect PLL time-tracking behavior to be assessed.

In the context of hardware-based symbol timing recovery, many different types of timing error metrics are available, but one stands out in particular for very high-speed data applications in which most other detectors fall prey to metastability problems. This detector type was first patented by Hogge^{18} and is shown here in **Figure 12**. It is uniquely well suited to extremely high-speed data applications and, in this figure, is shown being used within a type-2 third-order PLL structure.

**Wrap Up**

This two-part article has covered a wide range of PLL-related topics, but space limitations nonetheless precluded a deeper treatment of other PLL workhorses like the Costas loop, joint timing-and-phase recovery PLLs, and others. If these articles have succeeded in illustrating how versatile and pervasive the PLL concept is, make a point of studying PLLs further. You will be glad that you did.

**Editor's note:** Part 1 of this article is available online.

**References**

1. J.A. Crawford, *Frequency Synthesizer Design Handbook*, Artech House, 1994.
2. J.A. Crawford, "Thoughts on Charge-Pump Phase Noise", 1999, http://www.siliconrfsystems.com/Papers/ChargePump.pdf.
3. D. Banerjee, "PLL Performance", National Semiconductor.
4. D. Banerjee, "PLL Performance, Simulation & Design", 3rd Ed., National Semiconductor, 2003, http://www.national.com/appinfo/wireless/files/Deansbook3.pdf.
5. W.P. Robins, *Phase Noise in Signal Sources*, Peter Peregrinus Ltd., 1982.
6. J.A. Crawford, "U10650 Type-2 4th-Order PLL Worksheet", 24 March 2004.
7. J.A. Crawford, "Phase Noise Effects on Square-QAM Symbol Error Rate Performance", 2004, http://www.siliconrfsystems.com/Papers/Phase%20Noise%20Effects%20on%20Square%20QAM%20v1.pdf.
8. R. Gilmore, "Specifying Local Oscillator Phase Noise Performance: How Good is Good Enough?", http://www.siliconrfsystems.com/Papers/U10236%20Phase%20Noise-%20How%20Good-%20Gilmore.pdf.
9. P.O. Amblard et al., "Phase Tracking: What Do We Gain from Optimality? Particle Filtering Versus Phase-Locked Loops", March 2001.
10. G. Welch and G. Bishop, "An Introduction to the Kalman Filter", March 1, 2004.
11. P.F. Driessen, "DPLL Bit Synchronizer with Rapid Acquisition Using Adaptive Kalman Filtering Techniques", IEEE Trans. Communications, September 1994.
12. A. Patapoutian, "On Phase-Locked Loops and Kalman Filters", IEEE Trans. Communications, May 1999.
13. J.S. Lee and J.H. Hughen, "An Optimum Phase Synchronizer in a Partially Coherent Receiver", IEEE Trans. Aerospace and Electronic Systems, July 1971.
14. R.E. Ziemer and R.L. Peterson, *Digital Communications and Spread Spectrum Systems*, Macmillan, 1985.
15. J.A. Bingham, *The Theory and Practice of Modem Design*, John Wiley & Sons, 1988.
16. Ho, "Evaluation of Error Probability Including Intersymbol Interference", Bell System Technical Journal, November 1970.
17. J.A. Crawford, "U10651 SER with Timing Error", March 2004.
18. C.R. Hogge, "A Self-Correcting Clock Recovery Circuit", IEEE Trans. Electron Devices, December 1985.
19. H. Meyr, M. Moeneclaey, and S.A. Fechtel, *Digital Communication Receivers: Synchronization, Channel Estimation, and Signal Processing*, John Wiley & Sons, 1998.
20. M.D. Srinath and P.K. Rajasekaran, *An Introduction to Statistical Signal Processing with Applications*, John Wiley & Sons, 1979.
21. H.L. Van Trees, *Detection, Estimation, and Modulation Theory*, John Wiley & Sons, 1968.
22. J.M. Mendel, *Lessons in Estimation Theory for Signal Processing, Communications, and Control*, Prentice-Hall, 1995.
23. A. Gelb, *Applied Optimal Estimation*, The Analytical Sciences Corporation, 1996.

**About the Author**

**James Crawford** is the president/CEO and director of communication systems at Silicon RF Systems, Inc. Prior to this position, James served as the CTO of Magis Networks. He can be reached at jcrawford@siliconrfsystems.com.

