System-in-package option aids reuse
By David Sherman, Vice President of Engineering, Alpine Microsystems Inc., Campbell, Calif., email@example.com, EE Times
March 26, 2001 (3:24 p.m. EST)
Designing new products is easier when you don't have to reinvent each detail from scratch. That's the guiding principle behind the current trend to compile and reuse libraries of intellectual property (IP). But for most applications, cell-based design with system-on-chip (SoC) has already plateaued in price, performance and time-to-market, and has reached the point of diminishing returns.
This trend is starting to contradict the philosophy that reusing proven design methodologies is a quick and cost-effective route to new generations of systems. If the approach taken to reusing the IP requires too much time to integrate and test, or if the system becomes too expensive to produce, a designer loses everything he or she has gained.
Even the first pass doesn't guarantee success. Often a design has to fit into a rapidly evolving niche of its own. Many designs can't wait for component updates; they need to get to market as soon as possible. Solutions, therefore, must support rapidly evolving IP. A processor core might be revised every two years, but memory and peripheral cores change more frequently as process enhancements and standards evolve. Designers must plan for this flexibility. If a designer locks himself into too rigid an approach, updating the design might force an expensive and time-consuming restart from square one.
The problem is that conventional approaches to combining IP introduce unnecessary complexity and cost into chip design and development. Those complications can actually discourage reuse. The goal, after all, is to solve problems through IP reuse, not to create new ones.
SoC is the best known methodology for combining core intellectual property into new technology, but it's not the only way. The SoC approach has severe drawbacks that make it unsuitable for all but the highest-volume applications. These limitations include: the lack of component upgrade flexibility; the addition of unnecessary levels of complexity that increase component cost, design time and test time; and increased restrictions on printed-circuit-board layout, performance and power dissipation.
Consider just the mask cost for a new SoC. Standard wafer processes are optimized for either logic or memory. A typical SoC mask set in 0.18 micron already costs $350,000, and that cost is forecast to double at every process node. At 0.10 micron, for instance, the mask cost alone for a product with a volume of 100,000 units would amount to roughly $10 per unit. Further, as processes shrink and designs become more complex, the design time for an SoC is becoming much longer than the time required to design the IP cores themselves, because most SoC designs require a custom integration with unique, buried interfaces. Test access is compromised, and it is difficult to ensure that the cores will work together without interference. Digital-switching noise injected into the substrate, for example, couples into sensitive analog circuits. Interconnect bandwidth, limited by routing, falls short of the bandwidth of the IP blocks, complicating timing closure. The EDA tools available to predict and control those complex interactions are limited in scope, expensive and time-consuming to use.
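The mask-cost economics above reduce to simple amortization. A minimal sketch, using the article's $350,000 figure at 0.18 micron; note that the $10-per-unit figure at 0.10 micron corresponds to a mask set on the order of $1 million spread over 100,000 units:

```python
def per_unit_mask_cost(mask_set_cost: float, volume: int) -> float:
    """Amortized mask cost per unit, ignoring yield loss and respins."""
    return mask_set_cost / volume

# ~$350,000 mask set at 0.18 micron (figure cited in the article):
print(per_unit_mask_cost(350_000, 100_000))    # 3.5 dollars per unit

# A $10-per-unit mask cost at 0.10 micron implies a ~$1M mask set:
print(per_unit_mask_cost(1_000_000, 100_000))  # 10.0 dollars per unit
```

The point of the arithmetic: at volumes below roughly 100,000 units, mask amortization alone becomes a meaningful fraction of component cost at advanced nodes.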
The search for better alternatives has led to an emerging generation of system-level integration, termed system-in-package (SiP). This approach uses IC processing technology to create high-density substrates capable of integrating multiple IP blocks into a single packaged component. It maintains design simplicity by allowing IP blocks to be manufactured separately and by preserving component upgrade flexibility.
The higher integration capacity of SiP reduces the number of components in the system. That reduces both the size and the routing complexity of the pc board. Designers can also reduce or eliminate many passive components, such as resistive terminators and local bypass capacitors. The SiP approach achieves the final assembly advantages of an SoC while retaining the design and manufacturing advantages of separately packaged parts.
With a SiP approach, the designer selects IP components that are optimized to minimize manufacturing costs and take advantage of the high-speed and high-I/O-density characteristics of the substrate. By partitioning functions such as memory, logic or analog onto separate dice, wafer fabrication becomes simpler, die size is reduced and wafer yield increases. IP handled as die can be fully tested and characterized, streamlining the design phase.
In addition to reducing die area through partitioning, I/O driver size and voltage can also be reduced for internal interconnect. Array pads make a much larger number of I/O ports available, widening buses and eliminating multiplexed duplex buses. As in an SoC, the interfaces between dice in a SiP are buried inside the package, but there is an important distinction: the separate dice that comprise the SiP can be tested by standard test flows. The only additional test necessary for the buried buses is the Joint Test Action Group (JTAG) Extest instruction, commonly used on state-of-the-art pc-board designs. Production testing is also simplified, allowing the optimum test to be used on each block.
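The Extest-style interconnect check described above can be illustrated with a toy model. The sketch below is purely conceptual, not a real JTAG implementation: it drives a walking-ones pattern from one die's boundary cells across a buried bus and compares what the other die captures, which is how opens and stuck-at faults on package-level wiring are exposed.

```python
def walking_ones(width: int) -> list[int]:
    """Test vectors with a single 1 walking across a bus of `width` bits."""
    return [1 << i for i in range(width)]

def extest_bus(width, wire_transfer):
    """Drive each vector onto the bus and compare the captured value.

    `wire_transfer` models the passive interconnect between two dice;
    for a fault-free bus it is the identity function.
    Returns a list of (driven, captured) mismatches.
    """
    failures = []
    for vector in walking_ones(width):
        captured = wire_transfer(vector)
        if captured != vector:
            failures.append((vector, captured))
    return failures

# Fault-free 8-bit bus: every vector arrives intact.
print(extest_bus(8, lambda v: v))              # []
# Bit 3 stuck at 0 (e.g., an open solder bump): one failing vector.
print(extest_bus(8, lambda v: v & ~(1 << 3)))  # [(8, 0)]
```

A real flow would shift these vectors through the dice's boundary-scan chains via the TAP controller; the model only captures the pass/fail logic.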
Another significant difference between the SoC and SiP methodologies is the benefit the latter brings to system-level interconnect: lower power and noise, which in turn allow higher operating frequency and higher bandwidth. For example, in Alpine Microsystems Inc.'s SiP approach, the substrate is attached to the pc board using an area-array packaging technique. That provides a very low-inductance path from the pc board to the integrated circuits and improves heat dissipation.
This methodology is based on a patented Microboard substrate, developed with integrated-circuit processing techniques that allow very fine line and via pitch. The Microboard substrate uses a copper-on-low-k dielectric interconnect that offers very high routing resources with high-speed, low-noise 50-ohm signal paths. Solder-bump technology provides ultralow-inductance connections of less than 50 picohenries.
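The value of a sub-50-picohenry connection can be put in perspective with the standard V = L·di/dt relation. The current step and the wire-bond inductance below are illustrative assumptions; the article supplies only the 50-pH bound.

```python
def switching_noise(inductance_h: float, delta_i_a: float, delta_t_s: float) -> float:
    """Voltage transient V = L * di/dt across a package connection."""
    return inductance_h * delta_i_a / delta_t_s

# Assumed 100 mA current step in 1 ns through:
#   a 50 pH solder bump vs. an assumed ~5 nH wire-bonded package lead.
print(switching_noise(50e-12, 0.1, 1e-9))  # ~0.005 V  (5 mV)
print(switching_noise(5e-9, 0.1, 1e-9))    # ~0.5 V    (500 mV)
```

Under these assumptions the bump connection generates two orders of magnitude less switching noise, which is what permits the higher operating frequencies claimed above.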
Designers can preserve IP blocks intact by cloning duplex buses and defaulting them to unidirectional operation. Many I/Os are now available with programmable drive strength and voltage. This means that existing IP can achieve significant power and speed benefits by driving a lower swing signal, as long as off-module interfacing isn't required. Ideally, once a system designer has adopted a SiP approach, he or she can create co-designs that comprehend both on-die and off-die bus design to maximize benefits.
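The power benefit of driving a lower-swing signal follows from the first-order dynamic-power relation P = C·V²·f for a full-swing CMOS driver. The capacitance, frequency, and voltage values below are illustrative assumptions, not figures from the article.

```python
def dynamic_power(c_farads: float, v_swing: float, f_hz: float) -> float:
    """First-order dynamic power of a full-swing CMOS driver: C * V^2 * f."""
    return c_farads * v_swing ** 2 * f_hz

# Assumed 5 pF net switching at 200 MHz: halving the swing
# from 1.8 V to 0.9 V cuts dynamic power by 4x under this model.
full = dynamic_power(5e-12, 1.8, 200e6)
low = dynamic_power(5e-12, 0.9, 200e6)
print(full / low)  # 4.0
```

The quadratic dependence on swing is why programmable-drive I/O pays off so quickly once off-module interfacing is no longer required.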
Once designers can treat off-die buses and on-die SoC buses similarly, they remove many of the barriers to architectural innovation. They can drive down cost by defining a bus whose width falls within the routing density of both the SiP substrate and the die, and then repartitioning the system. For instance, they can use that technique to shift a level of the cache hierarchy from the processor die to a readily available, lower-cost SRAM die. Buses or groups of buses 512 bits wide fit easily within the region where on-die and off-die interconnect densities overlap.
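A 512-bit bus of the kind described above delivers substantial bandwidth even at modest clock rates. The clock frequency below is an illustrative assumption; the article specifies only the bus width.

```python
def bus_bandwidth_gbytes(width_bits: int, clock_hz: float) -> float:
    """Peak transfer rate of a parallel bus in gigabytes per second."""
    return width_bits / 8 * clock_hz / 1e9

# A 512-bit off-die bus at an assumed 200 MHz clock:
print(bus_bandwidth_gbytes(512, 200e6))  # 12.8 GB/s
```

That kind of peak rate is what makes moving a cache level off the processor die plausible: the off-die bus no longer has to be the bottleneck.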
The advantages of reusing IP are clear. Popular technologies like SoC actually raise complexity and cost for most applications. SiP is an attractive alternative that occupies a sweet spot between SoC and traditional, separately packaged parts.