# Technique probes deep-submicron test

By Peter Maxwell and Pete O'Neil

October 16, 2000 (3:40 p.m. EST)

URL: http://www.eetimes.com/story/OEG20001016S0048

*Peter Maxwell, Research and Development Specialist, Semiconductor Products Group, Agilent Technologies Inc., Santa Clara, Calif.; Pete O'Neil, Senior Technology Assessment Engineer, Agilent Technologies Inc., Fort Collins, Colo.*

The IDDQ testing method is one in which the steady-state, or quiescent (Q), value of the power supply current (IDD) is measured, and deviations from expected values indicate the presence of defects. IDDQ measurements are usually made by stepping the circuit through many functional or scan-based input test vectors to test various states.

Traditionally, if a circuit conducts more than a pre-set IDDQ threshold value, it fails. One of the difficulties is setting the threshold value. If the threshold value is set too high, then circuits that contain defects may be considered non-defective. If the threshold value is too low, then circuits that are free of defects may fail the IDDQ test.

Any given IDDQ measurement consists of two components: defect current, which is the current drawn by the circuit due to defects within it; and background current, which is IDDQ minus the defect current. As CMOS circuits are scaled down to increase speed and density and to decrease cost, the background leakage current they draw increases. Scaling has reached the point where the magnitude of the background current is comparable to, or even exceeds, many defect currents. It has therefore become more difficult to determine whether a variation in IDDQ is due to a variation in background current or to a defect.

Process variations in the fabrication of electrical circuits further complicate the determination of the IDDQ threshold value. Two integrated circuits of the same design can draw IDDQ values that are orders of magnitude apart for the same set of input test vectors, due to process variations between the two circuits.

To extend the useful life of the technique, Agilent Technologies has developed a new method of production IDDQ testing that does more than compare measured values of IDDQ against a single pass/fail threshold. The method is called "current ratios." It is based on the fact that when a large number of vectors is applied to a defect-free part, there is measurable variation in IDDQ values over the vector set.

In the technique, this variation is characterized by the ratio of the maximum current to the minimum current. Although absolute values of those currents vary by orders of magnitude due to process variations, the ratio is constant and applies to all defect-free die of any given design. A test procedure that uses this ratio is therefore self-scaling with respect to process variations. Calibration is achieved by measuring, for each die, the current drawn for one specific vector, chosen to be the minimum current vector for the particular design. The expected maximum current is then calculated using the predetermined ratio. The complete vector set is then run, and if the current on any vector exceeds the expected maximum, plus a guardband added to account for random fluctuations, then the die is rejected.
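The pass/fail idea above can be sketched in a few lines. The ratio and guardband values here are invented for illustration; in practice they would come from characterization of the particular design:

```python
# Hypothetical sketch of the current-ratios check described above.
# RATIO (max/min IDDQ over the vector set) and GUARDBAND are assumed
# values standing in for real characterization data.

RATIO = 8.0          # characterized max/min IDDQ ratio for the design (assumed)
GUARDBAND = 0.5e-3   # margin for random fluctuations, in amps (assumed)

def passes_current_ratio(min_vector_iddq, iddq_per_vector):
    """Reject the die if any vector's IDDQ exceeds the predicted maximum."""
    expected_max = min_vector_iddq * RATIO + GUARDBAND
    return all(i <= expected_max for i in iddq_per_vector)

# Example: a die whose minimum-vector current is 1 mA
passes_current_ratio(1e-3, [2e-3, 5e-3, 7e-3])  # all vectors within predicted max
passes_current_ratio(1e-3, [2e-3, 20e-3])       # 20 mA vector exceeds it: reject
```

Because the check is a ratio of two currents measured on the same die, it scales automatically with the die's own process corner, which is the self-scaling property described above.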

The key assumption made in finding defects is that the test vector (state) set is constructed so that it causes large variations in the current produced by most defects. This occurs either because defects are activated and deactivated over the course of the test set, or because they are activated in a variety of ways. Defects then show up as large deviations from the maximum IDDQ predicted by the model.

This approach requires absolute measurement of IDDQ for only one vector. The rest of the vector set is run with the tester in comparison mode, rather than measurement. Testers are much more efficient at comparing signals than determining the value of one of the signals, so this represents a considerable time savings.

The maximum-to-minimum relationship is obtained using a statistical linear regression carried out on many die having as wide a spread of currents as possible: each die's maximum IDDQ (Max) is fitted against its minimum IDDQ (Min) as a straight line, Max = Slope x Min + Intercept. Some of these die may be defective, and would show up as outliers in the regression operation. Consequently, the regression is an iterative process, where outliers are removed from the data until only defect-free devices remain. Once a design has been characterized using the regression technique, pass/fail acceptance limits must be set. Only one explicit IDDQ measurement is taken, and that is on the vector that has been determined, from characterization or from simulation, to be the one with the minimum IDDQ. The actual measured value for that vector is used to predict the expected maximum IDDQ for the die.
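The iterative characterization regression might be sketched as follows. The 3-sigma residual cutoff mirrors the outlier margin defined below, and the die data is invented: twelve die lying on a ratio-8 line plus one defective die far off it.

```python
# A minimal sketch of iterative regression with outlier removal, as
# described above: fit Max = Slope * Min + Intercept over many die, then
# repeatedly drop residual outliers (suspected defective die) and refit
# until the data set is stable. All data here is invented.

def fit_line(points):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n

def characterize(die, k=3.0):
    """Iteratively refit, dropping die whose residual exceeds k sigma."""
    points = list(die)
    while True:
        slope, intercept = fit_line(points)
        residuals = [y - (slope * x + intercept) for x, y in points]
        std = (sum(r * r for r in residuals) / len(residuals)) ** 0.5
        kept = [p for p, r in zip(points, residuals) if abs(r) <= k * std]
        if len(kept) == len(points):            # no outliers left: done
            return slope, intercept, 3.0 * std  # outlier margin = 3 sigma
        points = kept

# (min IDDQ, max IDDQ) per die in mA: twelve defect-free die on a
# ratio-8 line, plus one defective die whose maximum current is far off it
die = [(i, 8.0 * i) for i in range(1, 13)] + [(6.0, 120.0)]
slope, intercept, margin = characterize(die)   # defective die is discarded
```

After the outlier is removed, the fit converges to Slope = 8 and Intercept = 0 on this synthetic data.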

The remaining parameter to be determined is the uncertainty in this prediction: the margin by which the measured maximum must exceed the predicted Max before it can be confidently declared that a defect has been identified. It is needed because of random fluctuations in the IC process, even among die with approximately the same value of Min. We call this amount the "outlier margin," or margin for short. The definition of the uncertainty used here is three times the standard deviation of the residuals from the final regression done to determine the values of Slope and Intercept. This defines an upper acceptance threshold.

The calculation of that threshold is based on the assumption that the measured current for the assumed minimum current vector truly represents the minimum current across the vector set. However, if this vector happens to activate a defect, the measured current will be substantially higher, resulting in an upper threshold that is unrealistic. If the defect is state-dependent, then some subsequent vectors will not activate it, giving rise to a much lower current. This is taken into consideration by having a lower threshold, so that if any vector has a current below this value then the part is also rejected. The lower threshold is set by subtracting the outlier margin from the measured assumed minimum current. If this calculation results in a number less than zero, then zero is substituted.

Finally, an absolute maximum current must be set. The upper process limit of the subthreshold leakage current, together with the transistor count from the design, determines the maximum expected subthreshold leakage current for the design. If the measured minimum current exceeds this value, the part is rejected outright. Also, if the calculated value of the upper threshold is greater than the absolute maximum, then that value is substituted. The threshold calculations are based on the availability of a vector that draws the minimum, or close to the minimum, current. Typically, many vectors fall into this category, so the selection is not critical. The vector can be chosen using a simulation approach or by using further characterization data.
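The full threshold calculation above, which combines the predicted maximum plus margin, a lower bound clamped at zero, and the absolute-maximum override, can be sketched as follows. All parameter values are invented for illustration; the absolute maximum would come from the subthreshold-leakage process limit multiplied by the design's transistor count:

```python
# Hypothetical sketch of the per-die threshold calculation described
# above. Slope, Intercept, and the outlier margin come from the
# characterization regression; abs_max is the design's maximum expected
# subthreshold leakage. All numbers below are invented.

def thresholds(measured_min, slope, intercept, outlier_margin, abs_max):
    """Return (lower, upper) comparator limits, or None for outright reject."""
    if measured_min > abs_max:
        return None                         # part rejected outright
    upper = slope * measured_min + intercept + outlier_margin
    upper = min(upper, abs_max)             # clamp to the absolute maximum
    lower = max(measured_min - outlier_margin, 0.0)  # never below zero
    return lower, upper

# Currents in mA; a die whose minimum-current vector drew 2.0 mA
limits = thresholds(measured_min=2.0, slope=8.0, intercept=0.5,
                    outlier_margin=1.5, abs_max=100.0)
```

With these invented parameters the limits come out to (0.5, 18.0) mA; a smaller measured minimum of 0.5 mA would clamp the lower limit to zero.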

The following test procedure is used for each die:

- Measure the current using the predetermined minimum-current vector.

- Calculate and set the two comparator levels, T upper and T lower, as described above.

- Compare IDDQ for all other states to these two limits. If IDDQ is above the upper limit for any vector, the device under test (DUT) must contain a defect that was activated by the vector set, so the DUT is rejected.

If IDDQ is below the lower limit for any state, then the assumption that the current measured in step 1 was the minimum IDDQ, i.e., that state did not activate a defect, is wrong, and the part is rejected. A DUT will pass only if its IDDQ falls between the two limits for all vectors.

To compare the effectiveness of this strategy with the single-threshold technique, 124 die used for characterization on one design were studied. The range of currents was from 25 mA to 660 mA. The ratio technique rejected 14 of the die. To obtain an equivalent number of rejected die, a single threshold would have to be as high as 330 mA. Even though the total number of die rejected would then be the same, only three of the 14 were rejected by both methods. Many die with significant signatures were passed using a 330-mA threshold, whereas many "normal" die were rejected. Conversely, to detect the same parts that current ratios detect, the threshold would have to be set so low that five times as many parts would be rejected, most of them perfectly good.

For a conventional (i.e., single-threshold) IDDQ test, a defect is detected when one or more vectors activate it. If all vectors activate the defect, then it will still be detected, provided the defect current exceeds the threshold. A signature-based approach is different. To ensure a defect is detected, there must be sufficient variation in current across the entire vector set. Normally this would be achieved by having some vectors that activate the defect and some that do not. The implication is that a much larger number of vectors is required than for single threshold. If only a small number of vectors is available, then although a large number of defects might be activated, there might be insufficient variation in the way they are activated, rendering the signature approach ineffective. The above poses problems if one is limited in the number of IDDQ vectors that can be applied, due to test time.

It is for that reason that Agilent has also adopted a high-speed measurement technique using custom probe card or load board circuitry. Using such a circuit, 1,000 or more IDDQ vectors can be applied in fractions of a second. Of the circuits for which the technique has been used to date, developers have been able to use all the vectors generated by the automatic test pattern generation (ATPG) tool. In the event that too many vectors are generated and/or the vectors will not fit in tester memory along with all the other tests, then some form of trimming the set will be required. This would need to take into account the activation/deactivation principle discussed above, rather than simply deleting the last few generated vectors.

The technique has been implemented on an Agilent 83000 F330 test system. During production test, the characterization parameters were input to a user procedure. For each DUT, the procedure measures the IDDQ for the previously selected minimum-current vector. From that measurement and from the characterization parameters, minimum and maximum current limits were defined for the DUT. Those limits were then converted to voltage limits for the output of the custom IDDQ measurement circuitry. Finally, the user procedure ran a functional test containing all the IDDQ vectors. During this functional test, the measurement circuitry's output was checked against the two calculated voltage limits, and a pass or fail result from the functional test equated to the part passing or failing the current-signature IDDQ test.

IDDQ testing can be successfully carried out on chips with a background current that is orders of magnitude larger than that allowed by the traditional single-threshold approach. As the background current increases, the minimum defect current detectable with reasonable probability increases.

The limiting factors include this rising background floor and the hardware limitations of the IDDQ monitor and tester. The comparator resolution in the tester translates to a current resolution for the system. This dictates the minimum defect current that can be reliably detected, since it affects the value of the outlier margin. The method can be refined by making more measurements, ultimately by measuring IDDQ on every vector.
