Embedded memory has become essential for achieving greater bandwidth and faster processing in SoC designs at 0.13 µm and below. By eliminating off-chip delays and reducing system size, embedded memory can offer a significant competitive advantage for SoC designers targeting high-performance applications.
Indeed, memory has moved well beyond its traditional status as a commodity component, and today designers can differentiate their products through advanced memory technology. As embedded memory begins to dominate SoC designs, however, the choice of memory technology plays an increasingly profound role not only in overall SoC performance and cost, but also in key SoC quality metrics, including yield, reliability and soft error rate (SER).
With the growing sophistication of key consumer applications like cell phones and consumer video, designers are turning to larger embedded memory arrays to satisfy seemingly contradictory requirements for faster access, lower power and more feature-rich products. Driven by these application requirements, embedded memory already accounts for 52% of die area on average in today's leading SoC designs - and will rise to over 90% by the end of the decade, according to the Semiconductor Industry Association.
Because of the continued demand for more effective memory solutions, researchers will continue to leverage increasingly sophisticated design capabilities and advanced process technologies to deliver a steady stream of alternative embedded memory architectures. As in the past, however, new memory technologies are likely to stumble at one or more key hurdles, particularly manufacturability and ease of integration with logic.
Unless it can show some highly compelling advantage, any new memory technology that is unable to achieve high yield using standard processes will not attract a critical mass of manufacturing support and will fail in the marketplace. Furthermore, any memory technology that cannot be easily integrated with logic at the process level will be too costly and will inevitably be considered too exotic for high-volume opportunities.
Conventional embedded DRAM and 6-transistor (6T) SRAM technologies have continued to survive the emergence of alternative embedded memory technologies over the past 30 years, because each has been able to meet prevailing memory requirements. As SoC requirements evolve, however, concerns such as manufacturing cost in the case of DRAM or cell size in the case of 6T SRAM make these approaches less attractive for SoC companies facing growing pressure for high-density, low-cost parts.
DRAM offers the promise of very high density memories, but this advantage is counterbalanced by DRAM's manufacturing disadvantages: Embedded DRAM architectures rely on substantial process changes that alter the performance characteristics of transistors, requiring substantial effort to integrate logic and embedded DRAM memory. Furthermore, the resulting memory arrays are typically unable to meet high-speed access requirements because of DRAM's inherent high latency due to relatively long bit and word lines.
In contrast, embedded 6T SRAM remains compatible with logic processes and delivers good performance, providing a satisfactory solution for small embedded-arrays below about 0.25 Mbit in size. As SoC designers look to create larger, significantly denser arrays, however, this technology is proving less effective. 6T SRAM manufacturers have tried to respond by tightening design rules to create small 6T SRAM bit cells, but this technology does not scale easily.
As 6T SRAM moves to higher-density processes, yields have dropped significantly, falling below 50% for a 10 Mbit memory even at the 0.18 µm node. Although manufacturers have turned to redundant memory bits to improve yield, redundancy not only increases complexity but also adds cost because of the added manufacturing requirements for laser-fuse repair and specialized test.
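The sensitivity of yield to array size can be illustrated with the classic Poisson defect-density model, in which yield falls exponentially with critical area. The sketch below uses array size (in Mbit) as a stand-in for critical area and a purely hypothetical defect density chosen so that a 10 Mbit array lands near the sub-50% figure cited above; none of these numbers come from measured silicon.

```python
import math

def memory_yield(mbits: float, defects_per_mbit: float) -> float:
    """Poisson yield model: Y = exp(-A * D), with array size (Mbit)
    standing in for critical area. Defect density is illustrative."""
    return math.exp(-mbits * defects_per_mbit)

# Hypothetical defect density tuned so a 10 Mbit array yields below 50%.
d = 0.08  # defects per Mbit (assumed, not measured)
for size in (1, 5, 10, 20):
    print(f"{size:>2} Mbit: yield ~ {memory_yield(size, d):.0%}")
# -> roughly 92%, 67%, 45% and 20%
```

The exponential form is what makes large monolithic 6T arrays so punishing: doubling the array size squares the yield loss, which is exactly why redundancy and repair become unavoidable at these densities.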
For all these efforts, SoC designers nevertheless face ongoing reliability challenges because 6T SRAM arrays also exhibit higher SER characteristics with more advanced process technologies at 0.13 µm and below. Measured in errors per megabit of memory, SER is a measure of the rate of functional failures that arise from alpha particles or cosmic rays, which create fault-inducing ionization paths through devices. In the move from 0.18 µm to 0.13 µm technologies, 6T SRAM SER has jumped from about 1,000 failures in time per megabit of memory (FITs/Mbit) to over 100,000 FITs/Mbit. Although error-checking and correction (ECC) methods can help reduce these problems, the addition of ECC circuitry adds significant cost and area.
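To put these FIT figures in concrete terms: one FIT is one failure per billion device-hours, so the article's rates convert directly into a mean time between soft errors for a given array size. The sketch below applies that conversion to a hypothetical 10 Mbit embedded array using the two SER values quoted above.

```python
def mean_hours_between_errors(fits_per_mbit: float, mbits: float) -> float:
    """1 FIT = one failure per 1e9 device-hours, so the mean time
    between soft errors is 1e9 / (rate * array size)."""
    return 1e9 / (fits_per_mbit * mbits)

# The article's SER figures applied to a hypothetical 10 Mbit array:
print(mean_hours_between_errors(1_000, 10))    # 0.18 um: 100000.0 h (~11 years)
print(mean_hours_between_errors(100_000, 10))  # 0.13 um: 1000.0 h (~6 weeks)
```

In other words, the 100x jump in SER turns a once-per-decade event into a roughly once-per-six-weeks event for the same array, which is why designers at 0.13 µm are forced to weigh ECC's area cost against a failure rate that is no longer negligible.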
While 6T SRAM SER jumps with each new technology node, newer memory technologies like 1T-SRAM have emerged to offer extremely low SER characteristics that remain virtually flat across advanced process generations. Using a single transistor and capacitor, this kind of very dense memory technology delivers high densities normally associated only with DRAM and standard logic process compatibility normally associated only with SRAM.
For designers, the result is an embedded memory architecture that translates into faster, simpler designs. Unlike conventional DRAM, this approach uses extremely short bit and word lines that speed access times to match 6T SRAM. Furthermore, this type of memory is organized into huge numbers of separate internal banks able to support parallel operations independent of each other, permitting refresh operations to be hidden from the user. At the system level, these architectural details result in highly efficient memory arrays that consume considerably less active and standby power than 6T SRAM technologies - an increasingly important product differentiator for the growing number of high-volume mobile consumer applications.
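The refresh-hiding idea described above can be sketched with a toy model: because the array is split into many independent banks, each cycle can service one external access while quietly refreshing a different bank, so refresh never appears in the access timing. The bank count, round-robin policy and collision handling below are illustrative assumptions, not the actual 1T-SRAM implementation.

```python
class MultiBankMemory:
    """Toy sketch of hidden refresh across independent banks
    (bank count and refresh policy are assumed for illustration)."""

    def __init__(self, banks: int = 8):
        self.banks = banks
        self.refresh_ptr = 0   # round-robin refresh pointer
        self.refreshes = 0
        self.stalls = 0        # cycles where the user access had to wait

    def access(self, bank: int) -> None:
        # Each cycle one external access is serviced while one *other*
        # bank is refreshed in parallel, so the user never stalls.
        target = self.refresh_ptr % self.banks
        if target == bank:                    # would collide with the access
            target = (target + 1) % self.banks
        self.refreshes += 1
        self.refresh_ptr = target + 1

mem = MultiBankMemory()
for cycle in range(1000):
    mem.access(cycle % 3)  # user traffic concentrated on banks 0-2
print(mem.refreshes, mem.stalls)  # -> 1000 0
```

Even with traffic hammering a few banks, a refresh completes every cycle in some idle bank and the stall counter never moves - the property that lets this style of memory present an SRAM-like, refresh-free interface to the designer.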
Newer embedded memory architectures continue to offer SoC designers dramatic improvements in key characteristics including reliability, speed and power. Ultimately, however, embedded memory must be cost-effective, and advanced high-density architectures can help reduce cost beyond their more obvious ability to reduce die size and enhance yield.
For example, because 1T-SRAM technology's simple internal structure allows the use of logic design rules, it significantly improves yield even at a silicon area equivalent to that of alternative technologies. In combination, smaller die, higher yield and greater efficiency translate into substantial cost savings as manufacturers face growing competition and more severe cost pressures.
Across diverse applications, the clear trend toward memory-dominated SoCs drives a growing urgency for more effective memory architectures. While the gap between traditional memory solutions and embedded memory requirements continues to widen, the emergence of more advanced alternatives promises to deliver higher density, better yield, greater reliability and improved performance of larger memory arrays - and consequently of SoCs themselves.
See related chart