# How to evaluate test compression methods


Rohit Kapur, T.W. Williams, Jennifer Dworak, and M. Ray Mercer (10/07/2004 6:02 PM EDT)
URL: http://www.eetimes.com/showArticle.jhtml?articleID=49900252

## Introduction

For the past five years, the cost of test has been the hottest topic in test. During this period, automated test equipment (ATE) has made a dramatic move toward low-cost design-for-test (DFT) testers, and EDA solutions have implemented DFT methods that significantly reduce test data volume and test application time. Numerous papers have been published on the cost of test and test compression, each demonstrating a marked improvement in test data volume and/or test application time through the introduction of DFT structures on the chip [Bar01], [Jas99], [Mitra04], [Oh02], [Rajski02], [Sit02], [Sit04], [Wei97].

Now, with numerous test compression technologies at our disposal, we are left with the arduous task of evaluating them and choosing the best one. In this paper, we consider the general DFT cost model proposed in [Wei97] and offer a calculated approach to simplify the evaluation of these different test compression technologies. In this way, we can obtain good data on the relative merits of different compression techniques even if some of the parameters of the detailed model are unknown.

All test compression methods start with a baseline of scan technology. A typical scan test involves a scan operation in which the flip-flops of the design are controlled and observed, along with some stimulus to inputs, some measures on outputs, and some capture events. Compression technologies focus on optimizing the scan operation to reduce the amount of data stored on the ATE and thereby decrease the time it takes to load or observe the scan chains. This compression and scan optimization is achieved by adding logic before and after the scan chains so that more scan chains can be controlled and observed through a small interface. The amount of DFT logic added for this purpose generally follows the trend shown in Figure 1: decreases in test application time and test data volume are achieved at the price of higher DFT area.
Figure 1 — Three dimensions, or aspects, of test compression methods.
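As a back-of-the-envelope illustration of why the added compression logic reduces scan load time (the formula and all numbers below are our own illustrative assumptions, not figures from the article): scan-shift time is roughly the pattern count times the longest chain length, so driving many short internal chains through a small pin interface shrinks both the data per pin and the load time.

```python
import math

# Rough sketch: scan-shift cycles with and without compression DFT.
# Assumes load time ~ patterns * ceil(flops / chains); hypothetical numbers.

def scan_shift_cycles(patterns, flops, chains):
    """Cycles to shift all patterns through equally balanced scan chains."""
    longest_chain = math.ceil(flops / chains)
    return patterns * longest_chain

flops, patterns = 100_000, 2_000
baseline = scan_shift_cycles(patterns, flops, chains=8)      # 8 external chains, one per pin pair
compressed = scan_shift_cycles(patterns, flops, chains=256)  # 256 internal chains behind on-chip logic
# Shift cycles drop by roughly the ratio of chain counts:
print(baseline / compressed)
```

The speedup is bought with the decompressor/compactor logic around the chains, which is exactly the area-versus-time tradeoff of Figure 1.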
For example, one would expect that as more specialized DFT logic is added, additional decreases in test data volume and/or test application time should be realized. Evaluating the technologies in each separate dimension makes it easy to form a metric for comparison. However, evaluating the technologies in light of all dimensions taken together makes it difficult to decide which method is best.

We start with the general test economics model developed by Wei and coauthors [Wei97]. Since our goal is to compare test compression DFT methods against each other, the entities in the model that are uniform across all test methods can be factored out and ignored. In [Wei97], the cost of test is modeled in terms of the device-specific hardware cost, the amortized cost of the tester, and the cost of the silicon, as shown in Figure 2.
Figure 2 — Calculating the cost of test — the basis of comparison.
We can assume that the device-specific hardware cost is the same for all test compression methods if the number of terminals that must be probed is kept the same. Since this is the only fair way to compare the performance of different test compression methods, this aspect of the cost has no impact on relative comparisons. The cost of the tester itself is amortized over the life of the equipment, and the share attributed to a device depends on the time that device occupies the tester. We can analyze the equation given in [Wei97] to characterize the relationship of the test time to the cost of the tester, as shown in Figure 3.
Figure 3 — Calculating the amortized cost of a tester over its lifetime.
In the cost equation above, the tester cost depends on the tester's utilization. While the utilization of the tester is affected by the test time of a device, the change in utilization caused by the time it takes to test a single IC is a small modulation of that number. For all practical purposes, the comparison of different compression methods is built around similar tester requirements, and the cost of a tester can be treated as proportional to the test time T of the device.
Figure 4 — Calculating the cost of testing increased silicon area.
Without going into the details of every parameter in these equations, it can generally be seen that the additional cost of silicon is quadratic in the silicon area added for the DFT (α). However, if we assume that the defect density is very small, and thus the change in yield from the extra DFT area is negligible, we can ignore the quadratic term and treat the additional cost of silicon as linear in the additional silicon area (α). Thus, we obtain the equation shown in Figure 5 for the additional cost of the silicon added for the compression:
Figure 5 — Added cost from extra silicon for DFT is proportional to the extra area.
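A quick numeric check of this linearization, using a simple Poisson yield model Y(A) = exp(-A·D) that we assume purely for illustration (the detailed model in [Wei97] differs): for a small defect density D, the added silicon cost is essentially proportional to the added area.

```python
import math

# Illustrative check of the linearization: with a Poisson yield model
# Y(A) = exp(-A * D), the cost per good die scales as A / Y(A).  The
# added cost of DFT area fraction alpha is then
#   delta(alpha) = A*(1 + alpha) / Y(A*(1 + alpha)) - A / Y(A)
# For small D this is nearly linear in alpha.

def cost_per_good_die(area, defect_density):
    yield_ = math.exp(-area * defect_density)
    return area / yield_

A, D = 1.0, 0.01  # assumed normalized base area and a small defect density
base = cost_per_good_die(A, D)
added = [cost_per_good_die(A * (1 + a), D) - base for a in (0.01, 0.02, 0.04)]
# Doubling alpha roughly doubles the added cost when D is small:
print(added[1] / added[0], added[2] / added[1])
```

With a large defect density the quadratic (yield-loss) term reappears, which is why the assumption of small D matters.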
As shown above, the DFT area for compression and the test application time contribute additively to the overall cost of test. Furthermore, under our assumptions, the cost of test is linear in both the DFT area and the test application time. Thus, we can estimate the total cost of test for a chip with added compression DFT as:
Figure 6 — Total cost of test for design with DFT added for compression.
In this equation, k1 and k2 are constants that weight the added DFT area and the test application time, respectively. The general trend is that more DFT area yields better (reduced) test application time. Thus, we can describe the effect of the compression DFT (versus no compression) on the overall cost as:
Figure 7 — Calculating the change in the cost of test as a sum of the change in the DFT area (versus no compression) and the change in the test application time.
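Putting the pieces together, the sketch below (all data fabricated for illustration) fits the two constants of the linear model C = k1·α + k2·T to historical (area, time, cost) triples by least squares, then uses the fitted model to compare two hypothetical compression methods via their change in cost.

```python
# Fit k1, k2 of the linear cost model C = k1*alpha + k2*T from
# historical design data, then compare compression options.
# All numbers below are fabricated for illustration.

def fit_cost_model(samples):
    """samples: (alpha, test_time, observed_cost) triples.
    Solves the 2x2 least-squares normal equations for k1, k2 (no intercept)."""
    saa = sat = stt = sac = stc = 0.0
    for a, t, c in samples:
        saa += a * a
        sat += a * t
        stt += t * t
        sac += a * c
        stc += t * c
    det = saa * stt - sat * sat
    k1 = (sac * stt - stc * sat) / det
    k2 = (saa * stc - sat * sac) / det
    return k1, k2

# Fabricated history; the costs were generated from k1 = 120, k2 = 8:
history = [(0.00, 5.0, 40.0), (0.01, 3.0, 25.2), (0.03, 1.5, 15.6)]
k1, k2 = fit_cost_model(history)

def delta_cost(d_alpha, d_time):
    """Change in cost of test versus no compression (the sum in Figure 7)."""
    return k1 * d_alpha + k2 * d_time

# Method A: +1% area, -3.5 s test time.  Method B: +4% area, -4.0 s.
print(delta_cost(0.01, -3.5), delta_cost(0.04, -4.0))
```

Under these fabricated constants, method B's larger area overhead is more than paid back by its extra test-time reduction; with different k1 and k2 the ranking could flip, which is exactly the tradeoff the model quantifies.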
The cost equation does not explicitly include the test data volume reduction. However, significant differences in test data volume reduction will show up in the term for increased test time.

All that remains at this point is to find appropriate estimates for the two constants in the model. We can accomplish this by using data from existing designs and solving a system of linear equations, or by using linear regression analysis. Ideally, we would have historical test cost data from prior designs without additional compression DFT and from the same designs with various amounts and/or types of compression DFT. If knowledge of the overall cost, the test time, and the additional DFT area is available for each design instance, we can use this information to find the best estimates for the constants k1 and k2. Even if data from different iterations of the same design is not available, it is still possible to obtain information about the constants if different designs of similar character (such that our assumptions still hold) are used.

## Conclusion

We have presented a method for making practical and useful comparisons of different methods of compression DFT. Starting with the cost model described in [Wei97], we have made assumptions appropriate to the case of DFT for test compression. The resulting model is simpler and contains only a few constants, which can be estimated from historical cost data on previous designs. The final results allow us to make reasonable and informed tradeoffs between the test time reduction and the silicon area overhead inherent in different compression techniques.

## References

[Bar01] C. Barnhart, V. Brunkhost, F. Disler, O. Farnsworth, B. Koenemann, and B. Keller, "OPMISR: The Foundation for Compressed ATPG Vectors," Proceedings of the International Test Conference, 2001, pp. 748-757.
[Jas99] A. Jas, K. Mohanram, and N. A. Touba, "An Embedded Core DFT Scheme to Obtain Highly Compressed Test Sets," Proceedings of the Asian Test Symposium, 1999, pp. 275-280.
[Oh02] N. Oh, R. Kapur, T.W. Williams, "Fast Seed Computation for Reseeding Shift Register in Test Pattern Compression," Proceedings of the International Conference on Computer Aided Design, 2002.
[Mitra04] S. Mitra and K. S. Kim, "X-Compact: An Efficient Response Compaction Technique," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 23, no. 3, March 2004, pp. 421-432.
[Rajski02] J. Rajski et al., "Embedded Deterministic Test for Low Cost Manufacturing Test," Proceedings of the International Test Conference, 2002, pp. 301-310.
[Sit02] N. Sitchinava, S. Samaranayake, R. Kapur, M.B. Amin, and T.W. Williams, "Dynamic Scan Chains," IEEE Computer, September 2002.
[Sit04] N. Sitchinava, S. Samaranayake, R. Kapur, F. Neuveux, E. Gidarski, and T.W. Williams, "Changing the Scan Enable during Shift," Proceedings of the VLSI Test Symposium, 2004.
[Wei97] S. Wei, P.K. Nag, R.D. Blanton, A. Gattiker and W. Maly, "To DFT or Not to DFT?" Proceedings of the International Test Conference, 1997, pp. 557-566.
## About the authors

Thomas W. Williams is a Synopsys Fellow at Synopsys in Boulder, Colorado. Formerly, Dr. Williams was with the IBM Microelectronics Division, where he served as manager of its VLSI Design for Testability group. Dr. Williams has received numerous best paper awards from the IEEE and ACM. He is the founder or co-founder of a number of workshops and conferences dealing with testing and was twice a Distinguished Visitor lecturer for the IEEE Computer Society.

Jennifer Dworak will be joining Brown University as an Assistant Professor in January 2005. She graduated in May 2004 with a Ph.D. in electrical engineering from Texas A&M University. Her research interests include digital circuit testing, automatic test pattern generation, defective part level modeling, and logic minimization. Dworak received a National Science Foundation Graduate Fellowship and co-authored a paper that won the Best Paper Award at the 1999 VLSI Test Symposium.

M. Ray Mercer is a Professor of Electrical Engineering at Texas A&M University, where he holds the Computer Engineering Chair. His research interests center on computer engineering and include the computer-aided design of digital systems and design verification.