NEW YORK -- Design-for-yield is becoming a goal for system-on-chip (SoC) designers as today's very deep-submicron semiconductor technologies of 130 nanometers and below are reaching defect susceptibility levels that result in lower manufacturing yields and reliability. Papers from Agere Systems Inc. and Virage Logic Corp. at the 39th Design Automation Conference last month proposed the use of "infrastructure IP" to build in yield considerations from a project's start.
Yervant Zorian, chief scientist at Virage Logic (Fremont, Calif.), provided background on the importance of designing for yield. In his DAC paper, part of a special track on "Designing SoCs for Yield Improvement," Zorian said that in addition to functional intellectual property (IP) cores, today's SoC designs necessitate embedding a special family of what he termed "infrastructure IP blocks." These are meant to ensure the manufacturability of the SoC and to achieve adequate levels of yield and reliability.
Meanwhile, the Agere team suggested in its DAC paper that embedded FPGA cores are an ideal host to implement the test-infrastructure IP for yield improvement in SoC devices.
In Zorian's view, infrastructure IP leverages manufacturing knowledge and feeds back the information into the design phase. In fact, every single phase in the IC realization flow affects yield and reliability, Zorian said. This includes the design phase, prototyping or production ramp-up, volume fabrication, test, assembly, packaging and even the postproduction life cycle of the chip.
"In order to optimize yield and reach acceptable TTV [time-to-volume] levels, the semiconductor industry needs to adopt advanced yield optimization solutions, which need to be implemented at different phases of the chip realization flow," said Zorian.
This role of infrastructure IP is similar to that of a building's infrastructure elements, such as wiring networks or plumbing, which are independent from the actual function of the building, he said.
Due to rising time-to-market pressures, foundries are often forced to start volume fabrication on a given semiconductor technology before reaching the traditional defect densities and, hence, the yield maturity levels necessary prior to volume production. As a result, improving yields as quickly as possible is an important factor in lowering costs and improving profitability. The yield learning curve can be considerably improved, Zorian said, if the yield optimization can start at the design stage.
This can be done by feeding knowledge from the fabrication process back into the design. To optimize the design and obtain better yield, foundries should diagnose yield problems early in the development process, Zorian believes. Collecting manufacturing data, such as defect distribution data, is done using special infrastructure IP blocks called embedded process-monitoring IP.
The overall yield of an SoC design relies heavily on the memory yield. Memory and, therefore, overall die yield can be improved even if intrinsic or native-memory yield is unsatisfactory, Zorian said. The yield challenge is addressed by offering memories with redundancy, but that step alone is not enough, he continued. The key to detecting defects in a memory and allocating the redundant elements lies in manufacturing know-how, specifically defect distributions. To achieve highly optimized yield, the infrastructure IP must embody that know-how.
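The allocation step Zorian describes can be illustrated with a toy spare-row/spare-column repair allocator. This is a hedged sketch of the general technique, not Virage Logic's actual algorithm; the function name and the greedy strategy are assumptions for illustration. A row with more failing bits than the available spare columns can only be fixed by a spare row ("must-repair"), and any leftover faults are covered greedily:

```python
from collections import Counter

def allocate_repair(faults, spare_rows, spare_cols):
    """Hypothetical redundancy allocator for an embedded memory.

    faults: set of (row, col) failing-bit coordinates.
    Returns (rows_to_replace, cols_to_replace), or None if the
    memory cannot be repaired with the given spares.
    """
    row_counts = Counter(r for r, _ in faults)
    col_counts = Counter(c for _, c in faults)
    # Must-repair analysis: a row with more faults than the spare
    # columns can cover must itself be replaced (and vice versa).
    rows = {r for r, n in row_counts.items() if n > spare_cols}
    cols = {c for c, n in col_counts.items() if n > spare_rows}
    if len(rows) > spare_rows or len(cols) > spare_cols:
        return None
    # Greedily cover whatever faults remain with the leftover spares.
    for r, c in sorted(faults):
        if r in rows or c in cols:
            continue  # already covered
        if len(rows) < spare_rows:
            rows.add(r)
        elif len(cols) < spare_cols:
            cols.add(c)
        else:
            return None  # out of spares: unrepairable die
    return rows, cols
```

A cluster of three faults in row 0 plus one stray fault, with one spare row and one spare column, repairs as `({0}, {5})`; three faults on a diagonal with the same spares come back as unrepairable. Real built-in repair analysis engines work from defect-distribution statistics rather than a simple greedy pass, which is exactly the manufacturing know-how Zorian says the IP must embody.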
Improving yield and reliability means leveraging a number of yield feedback loops. Zorian advocated embedding several of the feedback loop components into the SoC design as infrastructure IP. Examples of infrastructure IP used in the yield feedback loops include embedded process monitor IP; embedded test and repair IP; embedded diagnosis IP; embedded timing IP; and embedded fault-tolerance IP.
For its part, the Agere team led by Miron Abramovici from Agere Systems (Murray Hill, N.J.) and colleagues from the University of North Carolina and Wright State University used reconfigurable logic for infrastructure IP to "create the infrastructure only when needed." Instead of having the infrastructure for test always present in the circuit, for example, it can be downloaded into the embedded FPGA only when needed to test the SoC.
FPGA features such as reconfigurability and a regular structure mean FPGA testing can achieve capabilities that are impossible for ASICs, the team argued. For example, FPGAs can be tested pseudo-exhaustively and faults can be located very precisely.
Two types of fault tolerance exist for FPGAs. The first, manufacturer-level fault tolerance, relies on providing spare resources that can be used to replace the faulty ones. The cost of this approach is additional area that's needed for the spare resources and a performance penalty introduced by the column selection hardware.
The second approach, user-level fault tolerance, avoids these costs since it relies on the spares naturally available in an FPGA, where any application uses only a subset of the existing resources. Knowing the user circuit to be implemented in the FPGA, and the exact location of the faulty resources, the researchers contend that it's possible to modify the existing implementation (mapping, placement, routing) to replace the faulty resources with fault-free spares. The experimental results showed that in most practical circuits implemented in FPGAs, a large number of interconnect faults are compatible with the circuit; this happens because many interconnect resources are unused, even in circuits where more than 70 percent of the cells are pressed into service. Every such fault can be tolerated without reconfiguration, the team found.
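The distinction the team draws can be sketched in a few lines: a fault on an interconnect resource the mapped circuit never uses is "compatible" and costs nothing, while a fault on a used resource forces an incremental remap onto an unused, fault-free equivalent. The function names, resource IDs, and the equivalence-map model below are hypothetical simplifications of the re-place/re-route step, not the paper's actual tool flow:

```python
def classify_faults(used, faulty):
    """Split faults into those the current mapping already tolerates
    and those that require rerouting. used/faulty: sets of resource IDs."""
    return faulty - used, faulty & used  # (compatible, must_remap)

def remap(used, faulty, equivalents):
    """Try to move each net off a faulty used resource onto an unused,
    fault-free equivalent (a crude stand-in for incremental rerouting).

    equivalents: dict mapping a resource to candidate replacements.
    Returns {faulty_resource: replacement}, or None if no fix exists.
    """
    mapping = {}
    occupied = set(used)
    for res in sorted(faulty & used):
        for alt in equivalents.get(res, []):
            if alt not in occupied and alt not in faulty:
                mapping[res] = alt       # reroute this net onto the spare
                occupied.add(alt)
                occupied.discard(res)
                break
        else:
            return None  # no fault-free spare reachable for this net
    return mapping
```

With `used = {'w1', 'w2'}` and `faulty = {'w2', 'w9'}`, the fault on `w9` is compatible as-is, and only `w2` needs a replacement drawn from its equivalence list. The high compatibility rates the team reports fall out of the first case dominating: most interconnect resources are simply never occupied.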