It’s an exciting time for everyone in the semiconductor industry. We are powering one of the most transformative shifts in human history by enabling AI to reshape and improve many aspects of our daily lives. From detecting diseases earlier with AI-powered imaging to automating routine tasks in the workplace, AI is fundamentally changing how we live, how we communicate, and how we work.
All around us, the rapid proliferation of AI is driving unprecedented demand for high-performance compute—both in data centers, for training and inference, and at the edge, through smarter, AI-enabled devices. However, this massive scale-out of compute is increasingly constrained by our ability to deliver power and manage the resulting thermal dissipation.
The numbers are quite stark: the power consumption of individual AI accelerator units per package has increased threefold in the last five years as more logic is integrated into one unit. When compounded with the eightfold increase in deployed die-based units over the last three years, it’s clear that this exponential growth in demand for power makes energy efficiency a key factor for the continued proliferation of AI. We cannot overcome this ‘power wall’ through the efforts of one company alone or by focusing on just one area. Instead, we must accelerate AI breakthroughs by driving innovation together across the semiconductor ecosystem, with a particular focus on advancing logic technology and packaging as the foundations for enabling AI innovation.
With collaboration in mind, nearly 2,000 members of our design community—including ecosystem partners and customers—participated in our Open Innovation Platform® (OIP) Forum in North America this year.
Dr. L.C. Lu, VP of TSMC’s R&D and Design Technology Platform and Senior Fellow, expanded upon the core thesis that we can enable energy-efficient compute for AI through strong and continuous collaboration along multiple vectors, harnessing TSMC’s leading logic and 3DFabric® advanced packaging technologies. He highlighted how innovations in areas such as next-generation backside power delivery networks, design-technology co-optimization (DTCO) for standard cell and SRAM designs, and scaling of compute-in-memory (CIM) technologies each contribute to, and collectively amplify, energy-efficient design.
Additionally, AI processors require large bursts of current in extremely short intervals, creating significant Ldi/dt noise. We were excited to hear Dr. Lu explain how system-level innovations, especially advances in embedded deep trench capacitors (eDTC) and MIM capacitors (UHPMIM), can address these challenges and support 50% more power density without compromising power integrity.
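To see why these load transients matter, here is a minimal back-of-the-envelope sketch of the L·di/dt relationship. All numbers (the effective PDN loop inductance and the size and speed of the current step) are hypothetical assumptions for illustration, not TSMC figures:

```python
# Illustrative L*di/dt voltage-noise estimate for an AI processor's
# power delivery network (PDN). All numeric values are hypothetical.

def ldi_dt_noise(inductance_h: float, delta_i_a: float, delta_t_s: float) -> float:
    """Return the transient voltage excursion V = L * (di/dt), in volts."""
    return inductance_h * (delta_i_a / delta_t_s)

# Assumed values: ~50 pH effective PDN loop inductance,
# and a 300 A load step arriving within 1 microsecond.
v_noise = ldi_dt_noise(50e-12, 300.0, 1e-6)
print(f"Transient noise: {v_noise * 1e3:.1f} mV")  # -> Transient noise: 15.0 mV

# On a sub-1 V core rail, even tens of millivolts of droop erodes timing
# margin, which is why decoupling capacitance placed close to the load
# (e.g., eDTC and MIM capacitors) is so valuable for power integrity.
```

Faster or larger current steps scale the excursion linearly, so as AI workloads grow burstier, the available inductance and decoupling budgets tighten accordingly.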
AI scaling faces two more key challenges: expanding memory capacity and improving access bandwidth and latency. At TSMC, we enable the 3DFabric ecosystem to collaboratively deliver innovation across the entire stack to address these issues. Dr. Lu highlighted a transformative approach: using foundry logic technology for HBM base dies to deliver 1.5X to 2X energy efficiency gains. In particular, he highlighted the use of TSMC’s N12 process for the logic base die in HBM4 designs and TSMC’s N3P process for custom HBM4E designs.
To address the interconnect bottleneck, innovations in co-packaged optics are key to enabling low-power, low-latency, and reliable data communication for future AI systems. To that end, it was invigorating to hear how TSMC’s COUPE™ technology offers a 5X to 10X power efficiency gain, 10X to 20X lower latency, and a compact form factor through interposer-based integration.
Bringing things full circle, Dr. Lu showed us how AI itself helps design AI chips. In fact, AI capabilities are already being infused into EDA tools to transform design exploration, Power-Performance-Area (PPA) target attainment, circuit optimization, electro-/optical co-design and analysis, and substrate layout creation. Multi-faceted collaborations between TSMC and EDA ecosystem partners are enabling customers to achieve better designs and higher levels of productivity.
Collaboration is at the heart of the OIP Forum. This event allows us to come together and celebrate the contributions of our OIP ecosystem partners in enabling, accelerating, and proliferating TSMC technologies for the benefit of our mutual customers. In the words of Dr. Lu, “Our intensive collaborations with ecosystem partners are very important to delivering these innovations.”
Every year, our global OIP ecosystem reunites across North America, Europe, and Asia, bringing together over 5,000 attendees and 750 companies for more than 225 technical talks. At this year’s OIP Forum in North America, the keynotes, customer and partner presentations, and pavilion exhibition highlighted the enduring and deep collaboration within the OIP ecosystem. Together, these collective efforts are accelerating the development of energy-efficient designs, strengthening a robust AI ecosystem, and pushing the boundaries of semiconductor innovation to empower what’s next in AI.