Charlotte, N.C. - Chip-testing strategies continue to perplex the semiconductor industry as sub-100-nanometer designs become commonplace. The debate came into focus at the International Test Conference here last week, where the consensus was that one-time testing will no longer cut it. Instead, experts said testing must be done throughout the life of the chip, beginning with design and continuing through its deployment in an application and later in the field.
"No longer can we afford to test a chip once and forget about it," said Rubin A. Parekhji, an engineer with Texas Instruments India in Bangalore, and organizer of a panel of IC test experts. "Contrary to common belief, the need for periodic testing exists today and we may be missing failures during normal operation in dense chips."
Parekhji said techniques developed and deployed in mission-critical systems years ago must be incorporated into chips built in nanometer-process technology today. Among them are fault tolerance, automatic fault detection and self-repair of chips after deployment. Incomplete testing during manufacture is forcing the use of these techniques in today's designs, he added.
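The self-repair idea Parekhji describes can be sketched in highly simplified form. The snippet below is an illustrative model, not any vendor's implementation: a memory wrapper runs a write/read-back self-test in the field and remaps failing addresses to spare rows. All class and method names, the test patterns, and the injected stuck-at-0 fault are assumptions made for the example.

```python
# Illustrative sketch of field self-repair: a simulated memory that
# self-tests and remaps failing addresses to spare rows. All names
# and fault models here are hypothetical, for demonstration only.

class SelfRepairingMemory:
    def __init__(self, size, spares):
        self.main = [0] * size
        self.spare = [0] * spares
        self.remap = {}           # failing address -> spare-row index
        self.hard_faults = set()  # injected faults, for demonstration

    def inject_stuck_at_zero(self, addr):
        self.hard_faults.add(addr)

    def _raw_write(self, addr, value):
        # A stuck-at-0 cell ignores the written value.
        self.main[addr] = 0 if addr in self.hard_faults else value

    def write(self, addr, value):
        if addr in self.remap:
            self.spare[self.remap[addr]] = value
        else:
            self._raw_write(addr, value)

    def read(self, addr):
        if addr in self.remap:
            return self.spare[self.remap[addr]]
        return self.main[addr]

    def self_test_and_repair(self):
        """Write/read-back each address; remap failures to spare rows."""
        repaired = []
        for addr in range(len(self.main)):
            for pattern in (0x5, 0xA):  # alternating bit patterns
                self.write(addr, pattern)
                if self.read(addr) != pattern:
                    if len(self.remap) < len(self.spare):
                        self.remap[addr] = len(self.remap)
                        repaired.append(addr)
                    break
        return repaired
```

Run periodically (at boot or during idle windows), such a routine lets a chip mask hard failures that escaped manufacturing test, which is the essence of the mission-critical techniques the panel discussed.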
Nanometer ICs also face the dual problem of ensuring design correctness both during and after manufacturing. Solutions have been proposed based on design styles and margins, as well as through design-automation tools and techniques. After manufacturing, however, designs are analyzed only in terms of reliability. Errors due to so-called hard or soft failures require periodic testing in the field.
The problem grows as design geometries shrink, with a rising miscorrelation between expected behavior in simulation and actual behavior in silicon. Panelists agreed that testing techniques used in critical applications also need to be applied to complex chip designs.
Across food chain
"Test and packaging on some new chips is costlier than silicon," said panel moderator Rajesh Raina of Motorola Inc. The idea is to spread the cost throughout the design food chain, from testing of intellectual-property (IP) blocks to chips and on up to the system level.
Phil Nigh of IBM Microelectronics said many of the same methods used to improve testing at the chip and package levels can improve reliability at the system level. "The applied methods will be similar to the direction of IC testing, such as memory built-in self-test and logic BIST using on-chip clock generation."
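The memory built-in self-test Nigh mentions typically walks the array with a "march" algorithm. The sketch below is a simplified, software-level model in the style of the well-known March C- sequence, run against a simulated memory with an injected stuck-at-0 cell; the class and function names are assumptions for illustration, and real BIST engines implement this in on-chip hardware.

```python
# Simplified software model of a March C- style memory test.
# Names and the fault model are illustrative, not a real BIST engine.

class FaultyMemory:
    """Simulated memory with one optional stuck-at-0 cell."""
    def __init__(self, size, stuck_at_zero=None):
        self.cells = [0] * size
        self.stuck = stuck_at_zero

    def write(self, addr, value):
        self.cells[addr] = 0 if addr == self.stuck else value

    def read(self, addr):
        return self.cells[addr]


def march_c_minus(mem):
    """Return the set of addresses failing the March C- element sequence."""
    n = len(mem.cells)
    failures = set()

    def read_expect(addr, expected):
        if mem.read(addr) != expected:
            failures.add(addr)

    for a in range(n):                 # M0: up, write 0
        mem.write(a, 0)
    for a in range(n):                 # M1: up, read 0 then write 1
        read_expect(a, 0)
        mem.write(a, 1)
    for a in range(n):                 # M2: up, read 1 then write 0
        read_expect(a, 1)
        mem.write(a, 0)
    for a in reversed(range(n)):       # M3: down, read 0 then write 1
        read_expect(a, 0)
        mem.write(a, 1)
    for a in reversed(range(n)):       # M4: down, read 1 then write 0
        read_expect(a, 1)
        mem.write(a, 0)
    for a in range(n):                 # M5: up, read 0
        read_expect(a, 0)
    return failures
```

A stuck-at-0 cell passes the initial write-0/read-0 elements but is caught when the test expects to read back a 1, which is why the march sequence alternates both data values and address directions.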
Peter Ehlig of Texas Instruments Inc. warned of the potential for field failures due to circuit degradation or event upsets. Nanometer transistors degrade under mechanisms like channel hot carriers, negative-bias temperature instability and particle defects or contamination in the oxide layers. "With six to 10 levels of metal in today's leading designs, we will have more metal migration, defective barriers and contamination as well," said Ehlig.
Previously, new process technologies have been road-tested in memory chips prior to their use in other semiconductor components. But with system-on-chip technology, "we find that all the components come out on the new processes at the same time," Ehlig said. "It is clear that the designs are being ramped to high-volume production before all aspects of the new processes are fully understood."
Both logic blocks and embedded memories require self-correcting intelligence, said Yervant Zorian of Virage Logic Corp. (Fremont, Calif.). "Today's very deep-submicron semiconductor technologies are reaching defect susceptibility levels that result in lowering the manufacturing yield and reliability, and hence lengthening the time-to-volume," said Zorian. "We need to test at the IP level, at the chip level and at the system level, and do it periodically during the course of the system's lifetime."