Bill Krenik, Dennis Buss, and Peter Rickert, Texas Instruments
May 24, 2005
The era of the voice-only mobile phone is behind us. While voice will always be a key driver for mobile phone use, today's 3G handsets are multimedia systems with color displays, games, audio, video, cameras, Bluetooth, GPS, WLAN, high-speed wide-area data services, and other advanced features. With so much functionality, the latest 3G phones boost processing complexity by 5X to 10X over voice-only 2G phones. In addition, applications processing requirements jump even more.
Consumers expect all these new features in sleek, ergonomic, reasonably priced handsets with battery life at least equal to what they've become accustomed to in less functional handsets. Studies show that users won't accept handsets with talk time below two hours. And they anticipate ever smaller physical devices that will somehow provide larger displays.
These apparently contradictory requirements (more complex features using less power and space) place considerable pressure on component providers to aggressively integrate the electronics. System-in-package (SiP) and system-on-chip (SoC) are two approaches that help satisfy these integration demands. SoC, with its ability to reduce board real-estate requirements, cut system costs, and lower power consumption, has been embraced by phone makers and their semiconductor suppliers. SiP, capable of marrying semiconductors made with different process technologies in one package, is coming into its own in handsets. Each approach offers advantages and drawbacks.
Mobile phone processors require significant amounts of RAM to keep the processor cores running with minimal wait states. This means that on-chip level one (L1) and level two (L2) cache memory arrays must be large enough to keep the processor core pipelines loaded and prevent excessive off-chip fetches.
To get a sense of the amount of memory required in SoC applications, consider that feature-laden 2.5G handsets may have up to 16 Mbytes of NOR flash memory, plus similar amounts of NAND flash and DRAM. Memories of this size are unsuitable for embedding because of the silicon area required to implement them and the resulting negative impact on die size and cost.
The key to deciding whether these memories should be embedded or off-chip is determining whether the embedded solution delivers more value than integrating the external memory. This means that any embedded solution should leverage the integration from an architectural perspective. One example application is graphics, which requires high memory bandwidth. For this requirement, it's advantageous to use a wide memory bus to deliver the added bandwidth. Bus width in these applications can be 256 or 512 bits wide. This wide bus-width is impractical with external memory. As a result, the embedded solution delivers value beyond a cost reduction.
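The bandwidth argument can be sketched with a quick calculation. The clock rate and the 32-bit external bus width below are illustrative assumptions, not figures from the article; the 256-bit width matches the on-chip case described above.

```python
# Peak bus bandwidth scales linearly with width at a fixed clock rate.
# All figures below are illustrative, not from the article.

def bandwidth_mbytes_per_s(bus_width_bits: int, clock_mhz: float) -> float:
    """Peak transfer rate of a simple synchronous bus, in Mbytes/s."""
    return bus_width_bits / 8 * clock_mhz

# A 32-bit external memory bus vs. a 256-bit on-chip bus, both at 100 MHz
external = bandwidth_mbytes_per_s(32, 100)   # 400 Mbytes/s
embedded = bandwidth_mbytes_per_s(256, 100)  # 3200 Mbytes/s
print(f"external: {external:.0f} Mbytes/s, embedded: {embedded:.0f} Mbytes/s")
```

At the same clock rate, the 256-bit on-chip bus delivers eight times the peak bandwidth of the 32-bit external bus, which is the architectural value an embedded graphics memory can capture.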
However, SoC integration isn't cost-effective if the process complexity of the integrated chip is substantially higher than that of the separate ICs. Consider the example of a 50-mm2 logic chip and a 50-mm2 DRAM chip. Assume that the six-level-metal (6LM) logic chip and the three-level-metal (3LM) DRAM chip each require 26 mask levels. Using mask levels as a proxy for cost, the "cost" of the individual chips is 100 mm2 × 26 masks = 2600 mask-mm2. Now, if the logic and DRAM are integrated, the mask count rises to 32, and the "cost" rises to 3200 mask-mm2, a 23% increase! For most system designs, this increase would not be acceptable.
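The arithmetic above can be reproduced as a short sketch. The figures come from the article's example; the area-times-masks model itself is only the rough cost proxy the article proposes, not a full cost model.

```python
# The article's mask-count cost proxy: "cost" = die area x mask levels.

def mask_cost(area_mm2: float, mask_levels: int) -> float:
    return area_mm2 * mask_levels

separate = mask_cost(50, 26) + mask_cost(50, 26)  # two 50-mm2 chips, 26 masks each
integrated = mask_cost(100, 32)                   # one 100-mm2 chip, 32 masks
increase_pct = (integrated - separate) / separate * 100

print(f"separate:   {separate:.0f} mask-mm2")    # 2600 mask-mm2
print(f"integrated: {integrated:.0f} mask-mm2")  # 3200 mask-mm2
print(f"increase:   {increase_pct:.0f}%")        # 23%
```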
Another issue that arises with SoC memory integration is the fact that embedding a given process technology typically entails nine to twelve months of development beyond the standard CMOS version of that same process technology node. This means that complex embedded memory solutions become available almost a year after the standard CMOS implementation.
SiP offers one cost-effective solution to the embedded memory dilemma. The stacked die SiP implementation provides the same small footprint as a SoC solution. In this implementation one (or more) commodity memory die is stacked on top of the SoC logic device. Low-cost wirebond assembly technology interconnects the devices, which are encapsulated in an inexpensive chip-sized ball-grid array (BGA) package (Fig. 1).
1. This photomicrograph of a stacked-die SiP shows the wire bonding.
Because the stacked-die approach doesn't add process complexity to the high-performance CMOS communications processor, but instead leverages cost-effective commodity memory, it offers the best of both worlds. It doesn't normally require custom chips, which helps reduce time to market. And because it grows vertically rather than horizontally, it preserves the small footprint required for battery-operated consumer products.
Analog and power-management integration
Today, analog and power management functions are normally implemented in analog process technologies, very different from the deep submicron CMOS used for digital baseband chips. If the analog and power-management functions can be implemented in deep submicron digital CMOS without adding process complexity, then the SoC integration of analog and power management functions can be a low-cost approach.
The biggest challenge in implementing high-speed and high-precision analog functions in digital CMOS is the process' low power supply voltage. Other limitations include poor matching of small components, high 1/f noise, and the absence of on-chip passive components (resistors, capacitors, and varactors) with adequate analog characteristics. Because of these limitations, it usually isn't feasible to copy existing analog functions in digital CMOS. Instead, the total system must be re-optimized to take advantage of digital CMOS and to develop new architectures to exploit the benefits of low-voltage and low-cost digital logic. In most cases, these architectures are well-known, but the low-voltage trade-offs are different:
- At low voltage, the power dissipated by flash converters is greatly reduced, thereby making flash architectures advantageous.
- Because flash converters have low power, the multi-bit sigma-delta converters built around them gain an advantage over single-bit architectures.
- Very fast logic allows offset compensation in a fraction of a sample period and enables small, low-power comparators.
- Oversampling DACs and ADCs become increasingly feasible at low voltage, thereby reducing kT/C noise and easing analog filter requirements.
- Digital self-calibration and dynamic element matching have increasing advantages at small feature size and low voltage.
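The kT/C point in the list above can be made concrete with a small sketch. The capacitor value, temperature, and oversampling ratio are illustrative assumptions, not figures from the article.

```python
import math

K_BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K

def ktc_noise_uvrms(capacitance_f: float, temp_k: float = 300.0) -> float:
    """RMS kT/C sampling noise across a capacitor, in microvolts."""
    return math.sqrt(K_BOLTZMANN * temp_k / capacitance_f) * 1e6

def inband_noise_uvrms(capacitance_f: float, osr: int) -> float:
    """Oversampling by OSR spreads the same total noise over a wider band,
    so in-band noise power drops by OSR (voltage by sqrt(OSR))."""
    return ktc_noise_uvrms(capacitance_f) / math.sqrt(osr)

print(f"1-pF sampler at Nyquist rate: {ktc_noise_uvrms(1e-12):.1f} uVrms")
print(f"same sampler, OSR = 16:       {inband_noise_uvrms(1e-12, 16):.1f} uVrms")
```

A 1-pF sampler sees roughly 64 uVrms of kT/C noise at room temperature; oversampling by 16 cuts the in-band portion to about 16 uVrms, which is why oversampled converters can use smaller capacitors.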
In an example of a re-optimized design, a 12-bit delta-sigma ADC takes advantage of the logic density and high-speed switching capability of 90-nm CMOS. Such a converter's high resolution and sampling rate allow more of the radio-channel signal processing to be done in the digital domain. This digital processing improves flexibility and reduces cost and power versus a more heavily analog implementation.
Power management is becoming increasingly distributed, especially in low-power applications, because of the need to reduce standby power by putting unused logic and memory into a standby or sleep mode. Much of this power-management functionality can be accomplished using switches to activate or deactivate logic blocks. In addition, local on-chip voltage regulation is needed, which requires on-chip low-dropout (LDO) regulators. Activating a switch or realizing an LDO at voltages near VDD often requires circuits to operate above VDD. This can be achieved using drain-extended (DE) CMOS, which can sustain drain voltages in excess of the BVdss of a normal MOSFET. DE CMOS can also be used to implement high-voltage battery-charger circuitry in battery-operated products.
Over the past several years, tremendous progress has been made in implementing analog and power-management functions in deep submicron CMOS. Today, many analog functions required in a cellular handset can be implemented cost-effectively in deep submicron digital CMOS. Consequently, SoC integration, coupled with the digital baseband (DBB) chip, offers an attractive path to combined analog and power-management circuitry.
The radio in a modern handset faces some severe performance requirements. Signals with amplitudes of only a few microvolts must be received in the face of strong interferers, high output power levels (roughly 30 dBm) must be produced to drive the antenna, and isolation between various radios within the handset must be accounted for. In addition, radio designs require accurate filtering at high frequencies and good matching between circuits in the signal path. These combined requirements make radio integration a considerable challenge and make the choice of SiP vs. SoC for the radio function a complex decision.
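To put these numbers in perspective, a quick sketch converts between dBm and signal amplitudes. The 2-uV received amplitude and the 50-ohm load are illustrative assumptions; the 30-dBm output level is the figure cited above.

```python
import math

def dbm_to_watts(dbm: float) -> float:
    """Convert a power level in dBm (dB relative to 1 mW) to watts."""
    return 10 ** (dbm / 10) * 1e-3

def uvrms_to_dbm(uv_rms: float, impedance_ohms: float = 50.0) -> float:
    """Power of a signal with the given RMS amplitude into a resistive load."""
    watts = (uv_rms * 1e-6) ** 2 / impedance_ohms
    return 10 * math.log10(watts / 1e-3)

print(f"PA output, 30 dBm:  {dbm_to_watts(30):.1f} W")   # 1.0 W at the antenna
print(f"2-uV received tone: {uvrms_to_dbm(2):.0f} dBm")  # roughly -101 dBm
```

The span of roughly 130 dB between the transmitted and received levels is one reason isolation between the radios and signal paths in a handset is so demanding.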
Looking at the typical functions found in a modern GSM radio, the transceiver contains the small-signal radio electronics needed for up- and down-converting the information signal to the transmission band (Fig. 2). The power amplifier (PA) module amplifies the transceiver output to produce an output signal with adequate power for reliable transmission. The front-end (FE) module normally includes the RF switch function (for separating the time-multiplexed transmit and receive signals) and the RF preselect filters which are normally surface acoustic wave (SAW) devices (the module can be partitioned in other ways, as well). A similar diagram would represent a cellular standard, such as CDMA, that implements full-duplex operation in the air interface. However, the switch function would be replaced by a duplexer.
One possible partitioning option for the radio electronics, an SoC, integrates the radio transceiver with the baseband processor. Alternatively, using SiP integration, the transceiver could be integrated with the PA and FE modules to create one analog radio module. Because interfacing the radio signals between analog functions normally requires matching networks, many passive components are associated with the radio design. Consequently, it's advantageous to pull as many passive elements as possible into the PA and FE modules. Normally, the PA and FE modules remain separate to prevent heat generated in the PA from aggravating thermal-stability issues in the SAW filters. However, some designs integrate the entire radio function into one package.
Ultimately, a discussion of SiP vs. SoC in the RF implementation comes down to whether the transceiver is better integrated through packaging technology with the FE and/or PA module, or if it's better to integrate the transceiver monolithically with the baseband processor.
Because SiP allows use of a conventional analog RF transceiver, no new transceiver architectures or special IC technology is required. The transceiver capability is well established and, apart from layout considerations associated with module integration (bond pad placement, IC aspect ratios, etc.), little stands in the way of SiP integration. However, integrating the transceiver also offers little in terms of improving the overall system. While board area may be reduced, no improvement in power consumption is gained and the overall system cost may increase.
SoC integration of the transceiver is normally undertaken with monolithic integration in deep submicron CMOS. Alternatively, a BiCMOS (SiGe) wafer process might be used to implement a conventional radio architecture. However, the additional reticles required for a SiGe wafer would drive up the cost of the system logic and memory, and the lack of SiGe processes at state-of-the-art lithography would increase the logic area. Also, the benefits associated with tightly coupling the system logic with the radio function wouldn't be fully realized if a conventional radio is employed. Hence, monolithic integration in BiCMOS (or SiGe) isn't an attractive option.
Consequently, SoC integration of the transceiver must be undertaken in CMOS. Fortunately, deep submicron CMOS transistors offer good RF performance and meet the needs of integrated transceiver designs (low noise figures and high transition frequencies are possible). However, conventional RF transceiver designs make extensive use of analog components and require high-performance passive elements. Producing such a design in CMOS would normally require several additional processing steps to produce the resistors, capacitors, and inductors needed.
Given the tremendous logic density and high clock speeds offered by deep submicron logic processes, however, it seems natural to look for ways to exploit this process technology through SoC. Doing so may require developing new radio architectures for implementation in deep submicron CMOS, but it can provide significant advantages. Foremost among them is the fact that, as advances in CMOS wafer processing produce faster switching speeds, it becomes possible to sample at higher rates. Oversampling of the input signal reduces noise aliasing problems and relaxes the design of the input networks. More complex filtering can be added, and A/D conversion can occur closer to the antenna. In addition, SoC integration improves system yield because more of the system function is implemented as logic (versus analog RF, which suffers parametric yield loss). Moving the radio function to an aggressively scaled technology also reduces board area and total silicon area (Fig. 3).
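The filter-relaxation point can be sketched numerically. The 200-kHz channel bandwidth (roughly a GSM channel) and the oversampling ratios are illustrative assumptions, not figures from the article.

```python
def transition_band_khz(signal_bw_khz: float, fs_khz: float) -> float:
    """Anti-alias filter transition band: from the signal-band edge up to
    the frequency where aliases fold back into band (fs - signal bw)."""
    return (fs_khz - signal_bw_khz) - signal_bw_khz

bw_khz = 200.0  # illustrative channel bandwidth
for osr in (1, 4, 16):
    fs_khz = 2 * bw_khz * osr  # Nyquist rate times the oversampling ratio
    print(f"OSR {osr:2d}: fs = {fs_khz / 1000:4.1f} MHz, "
          f"transition band = {transition_band_khz(bw_khz, fs_khz):.0f} kHz")
```

At the Nyquist rate the anti-alias filter needs an impossibly sharp cutoff; each increase in oversampling ratio widens the transition band, allowing a simpler, lower-order input network ahead of the converter.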
A system as complex as a 3G cellular handset may best be implemented with a combination of SiP and SoC technologies. For system memory, SoC integration may require additional reticle steps, and therefore be less cost-effective than SiP die stacking. On the other hand, high-performance A/D and D/A converters and excellent power management have been realized with SoC integration in CMOS.
For RF integration, a mix of SiP and SoC appears optimal. While power amplifiers, SAW filters, RF switches, and their associated passive elements are best implemented as SiP modules, considerable benefits are gained from SoC integration of the RF transceiver function with the system baseband processor in deep sub-micron CMOS. RF SoC integration can reduce power, cost, board area, and test cost while improving performance, phone manufacturability and yield.
About the author
Peter Rickert, TI Fellow, is the director of ASP platform management. He received a BSEE degree from Clarkson University. Rickert can be reached at email@example.com. Dr. Dennis Buss is the vice-president of silicon technology development at TI. He holds bachelor's, master's, and doctoral degrees in electrical engineering from MIT. Buss can be reached at firstname.lastname@example.org. Bill Krenik is a wireless advanced architectures manager in TI's wireless terminals business unit. He received a PhD in electrical engineering from the University of Texas, an MSEE degree from Southern Methodist University, and a BSEE degree from the University of Minnesota. Krenik can be reached at email@example.com.
Copyright © 2003 CMP Media, LLC