John O'Boyle, Director of Marketing, email@example.com, and Jasleen Raisinghani, Business Development Manager, Strategic Accounts, firstname.lastname@example.org, Cubic Solutions, a division of Samsung Semiconductor Inc., San Jose, Calif.
As we all know, the system-on-chip (SoC) market is complex from both the technology and the business points of view. Today, design cost and time-to-market are the key components influencing the decision-making process. Clearly, the basics of the technology are well-understood: smaller geometry devices mean higher "packing density," which evolves into more gates/transistors per square millimeter. Theoretically, this results in the deployment of a complete system-on-chip.
But let's define exactly what a "system" is. By some manufacturers' offerings, it seems that a system is just a big chip. Just about any semiconductor or design company can make a big chip and call it a system. A complete system, however, should perform some kind of processing. So we define our SoCs as chips that contain one or more embedded processors. This can be a DSP or a complex ARM core. The point is that they do the processing for the system-on-chip. The SoC does not rely on an external processor.
Naturally, because of the inclusion of a processor core, the SoC design needs a fairly advanced manufacturing process as well as the ability to combine various logical and analog functions and even to include the memory on the SoC.
For example, how does one match the transistor requirements for a certain logic device and also aptly design an efficient embedded memory structure? Is it feasible to have mixed-signal devices on the same chip? What issues must the design team consider in order to design and fabricate a new SoC? Also, what additional cores and interfaces should be integrated to support and enable the SoC deployment? Since there are so many complexities to resolve, why are more and more engineers leaning toward the SoC approach instead of a multiple-chip system?
SoCs are attractive to system designers because they allow for a reduction in development time and effort with an increase in predictability. Clearly, the single-chip solution provides lower manufacturing costs, less power consumption, smaller footprint and higher system reliability. These advantages, however, are only achieved with a skilled design team, quality tools, the availability of embeddable memory elements, logic and processor cores and a stable manufacturing process. And of course, no self-respecting SoC would be found in a simple 16-pin DIP. So it also needs an advanced high-density package like the SuperBGA.
In considering the embedding of a memory element into the SoC design, it is important to understand that some types of embedded memory require a significant increase in the number of masks used to combine logic and memory on the same chip. More masks mean higher production cost and lower yield, which in turn means a higher per-chip retail cost.
Is it possible to design an SoC that is too expensive for a given application? Yes. The combination of logic plus embedded memory on a single chip may ultimately cost more to implement than the same functionality in a multiple-chip solution.
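To see how that trade-off can play out, here is a minimal back-of-the-envelope cost sketch in Python. The wafer prices, die areas, defect density and package cost below are invented for illustration, and the simple Poisson yield model ignores edge loss and test cost; it is not real process data.

```python
import math

def die_cost(wafer_cost, die_area_mm2, defect_density_per_mm2,
             wafer_area_mm2=70686.0):
    """Rough per-die cost from a simple Poisson yield model.

    wafer_area_mm2 defaults to a 300-mm wafer; edge loss is ignored.
    All inputs here are illustrative, not real process data.
    """
    dies_per_wafer = wafer_area_mm2 / die_area_mm2
    yield_fraction = math.exp(-die_area_mm2 * defect_density_per_mm2)
    return wafer_cost / (dies_per_wafer * yield_fraction)

# Hypothetical numbers: the embedded-DRAM flow needs extra masks (a costlier
# wafer) and a larger die; the two-chip route adds package/assembly cost.
soc = die_cost(wafer_cost=4000.0, die_area_mm2=80.0, defect_density_per_mm2=0.002)
logic = die_cost(wafer_cost=3000.0, die_area_mm2=60.0, defect_density_per_mm2=0.002)
dram = die_cost(wafer_cost=2500.0, die_area_mm2=30.0, defect_density_per_mm2=0.002)
two_chip = logic + dram + 1.50  # $1.50 assumed extra package/test cost

print(f"single-chip SoC : ${soc:.2f}")
print(f"two-chip system : ${two_chip:.2f}")
```

Even this toy model shows the mechanism: adding masks raises wafer cost, and a larger die both reduces dies per wafer and compounds yield loss, so the answer depends entirely on the specific numbers.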
It is imperative for the design/semiconductor company to have excellent manufacturing processes and leading-edge memory expertise.
It is prudent to limit the number of additional masks. However, when the added complexity and cost of a second chip are weighed against the total cost of an embedded element, the balance often swings to the side of the single-chip solution.
Another factor to consider is that 4-Mbit DRAM is not available as a single chip. The smallest single-chip memory you can buy today is 16 Mbits. In cases where a designer needs 4 Mbits or less, the choice is either to buy a stand-alone memory chip and pay for more memory than is used or embed the ideal amount of memory.
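As a quick illustration, the arithmetic of that choice can be sketched as follows. The discrete-chip price and per-Mbit embedding cost are assumed numbers for the sake of the example, not quoted prices:

```python
# Hypothetical comparison: the design needs 4 Mbits of DRAM, but the smallest
# discrete part available is 16 Mbits. All prices below are illustrative.
need_mbits = 4
smallest_discrete_mbits = 16
discrete_price = 1.20            # assumed price of a 16-Mbit stand-alone chip
embed_cost_per_mbit = 0.20       # assumed incremental cost of embedded DRAM

wasted_mbits = smallest_discrete_mbits - need_mbits
embedded_cost = need_mbits * embed_cost_per_mbit

print(f"discrete: pay for {wasted_mbits} unused Mbits at ${discrete_price:.2f}")
print(f"embedded: exactly {need_mbits} Mbits at ${embedded_cost:.2f}")
```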
At this point, embedded memory may make both good economic sense and good engineering sense. Each case is different, so we can't give specific guidelines. But when the SoC design calls for a small amount of memory, an embedded solution often provides the best answer. How does the designer decide? Let's consider an example:
Example 1. For a number of consumer electronics applications where the price pressure is high, it makes sense to integrate SDRAM/Flash functionality onto the chip. The principal reason is that the memory-intensive calculations are much faster. Also by sharing memory between different blocks there is the potential to reduce redundant memory and further reduce the size and cost of the chip. There are a number of key points to keep in mind in order to make this integration successful:
As with SRAM integration, being able to access all the key pins of the SDRAM memory block for testability is key to making merged memory and logic a success.
To minimize tester time (which affects per-unit price), pad placement has to be given careful thought.
Today it is possible to have DRAM integration up to 16 Mbits, with support for 64 Mbits being introduced by Samsung shortly.
Today, no system is complete without an interface to the real world. The interface on an SoC is through analog-to-digital, digital-to-analog or mixed-signal solutions, phase-locked loops (PLLs), other various input and output circuits, and RF cores for radio and wireless applications.
Mixed-signal problems are now mostly a thing of the past. We've all heard the old joke that when you set out to build an amplifier you get an oscillator, and vice versa. But today the circuit-level methodology needed to build a mixed-signal device is well-understood. Applying it in a monolithic chip, however, is not always a simple matter. The key point is to make sure that the library elements, especially the mixed-signal elements, are matched to the process flow.
Additionally, the mixed-signal design may still need some intervention. Special attention and caution are required to isolate the digital and the analog portions of the chip.
Think about the "noisy" digital logic going rail-to-rail while the last stage in a 10-bit analog-to-digital converter is resolving a few millivolts of signal. Then you have an idea of the sensitivity of the problem. So, experienced designers will take special precautions, including grounded "guard rings" around the analog circuit, and multiple-power and ground taps into the active circuit area of the chip.
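A quick calculation shows how small that signal really is. Assuming a 3.3-V reference (an assumed value; the reference voltage depends on the design), one LSB of a 10-bit converter works out to roughly 3 mV:

```python
# Illustrative LSB size for a 10-bit ADC over an assumed 3.3-V reference.
vref = 3.3                 # assumed full-scale reference, volts
bits = 10
lsb = vref / (2 ** bits)   # voltage step the final bit must resolve

print(f"1 LSB = {lsb * 1000:.2f} mV")  # ~3.2 mV, easily swamped by rail-to-rail digital noise
```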
In the case of an on-chip RF device, the opposite problem exists. It is possible to cross-couple RF energy into combinatorial and sequential logic. In these cases, the designer may shield the RF portion of the chip with a layer of metal appropriately tied off or biased.
Beyond layout issues, mixed-signal performance at one time required multiple process flows on the same chip. It is still common to find CMOS and bipolar process flows combined into one expensive, power-hungry BiCMOS product. But today's advanced CMOS is perfectly capable of implementing virtually all the mixed-signal, RF and other solutions required, short of outright power drivers. We are now seeing complete CMOS wireless chips entering production in 0.13-micron CMOS flows. And some other RF designs are being done in 0.18-micron CMOS. But how can we make the right choice? Let's look at an example SoC for a personal digital assistant:
Example 2. PDAs today are trying to perform such traditional functions as managing personal contact information, performing housekeeping functions like keeping track of expenditures, playing music, browsing the Web, and even cellular communications, GPS and so forth. The key technology enabling all this is the low-voltage process combined with intellectual property (IP) like USB host, Bluetooth, global-positioning satellite, ARM, DSP, PLL, ADC, DAC and so on. In order to make this IP integration a success, the design team must keep the following in mind:
IP used in an SoC typically does not reach its targeted speed in silicon. Because of this, memory access falls on the critical path, so either a custom layout or an extended layout period is required to meet timing. Extra margin in Mips should also be budgeted.
The IP needs to work as a subsystem. We need to have system components such as the timer, interrupt controller, GPIO, UART, USB, DMA controller and the like work in an integrated fashion.
We need to define the architecture: bus protocol, external interface and memory-access scheme.
We need to have built-in self-test and verify functionality of the IP and the interface.
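As a sketch of what such an interface check might look like, the toy Python model below walks classic test patterns through the registers of a hypothetical memory-mapped IP block. The register map and bus model are invented for illustration; a real SoC would implement the equivalent check in BIST hardware or boot firmware.

```python
class IpBlock:
    """Toy model of a memory-mapped peripheral with 32-bit registers.
    This register map is hypothetical, for illustration only."""
    def __init__(self, num_regs=4):
        self.regs = [0] * num_regs

    def write(self, addr, value):
        self.regs[addr] = value & 0xFFFFFFFF

    def read(self, addr):
        return self.regs[addr]

def self_test(block, num_regs=4):
    """Walk classic stuck-at/coupling patterns through every register
    and verify each one reads back exactly as written."""
    patterns = (0x00000000, 0xFFFFFFFF, 0xAAAAAAAA, 0x55555555)
    for addr in range(num_regs):
        for pattern in patterns:
            block.write(addr, pattern)
            if block.read(addr) != pattern:
                return False
    return True

print("interface self-test:", "PASS" if self_test(IpBlock()) else "FAIL")
```

The point of the sketch is the discipline, not the code: every register of every integrated core should be reachable and verifiable through the chip's own test path.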
The single-chip solution provides lower manufacturing costs, less power consumption, a smaller footprint and higher system reliability, all of which translate into lower per-unit prices. These advantages, however, are only achieved when the engineering team understands how to make these trade-offs. The whole idea is to choose standard, tested IP that reduces the time and effort to get to market while also reducing risk. Therefore, picking an experienced design partner is just as critical as selecting the elements that comprise the SoC.