Opinion: The foundry makes all of the logic chips critical for AI data centers, and might do so for years to come.
semiengineering.com, Jun. 16, 2025 –
Large language models (LLMs) such as ChatGPT are driving the rapid expansion of AI data center capacity and performance. More capable LLMs drive demand and require more compute.
AI data centers require GPUs/AI accelerators, switches, CPUs, storage, and DRAM. AI data centers now consume about half of all semiconductors, and that share will be much higher by 2030.
TSMC has essentially 100% market share in AI data center logic semiconductors, making the GPUs/AI accelerators, CPUs, and network switch chips at the heart of these systems. The only essential AI data center chips not made by TSMC are memories: HBM, DDR, and flash.
TSMC has the big four things AI data centers must have:
Advanced Process Technology: AI data center chips, especially AI accelerators, need the most advanced process technology to get the most transistors on a chip. Other foundries, such as GlobalFoundries, could not fund the development of the most advanced finFET nodes. Only Intel, Samsung, and TSMC have 2nm-and-below process technologies and roadmaps.
Advanced Package Technology: LLMs have grown exponentially in size, so a single GPU chip can't process a model even with the most advanced process node and maximum reticle size. Multiple GPUs/AI accelerators are required to run an LLM. With multiple chips, the bottleneck becomes chip-to-chip data rates: copper interconnects at 200 Gb/s are very, very slow compared to on-chip data rates (see the back-of-envelope comparison below).
The solution is advanced packages that integrate multiple GPU chiplets and HBM memories on a multi-reticle substrate with very, very fast chip-to-chip data transfer, creating a GPU compute assembly. TSMC constantly develops packaging ahead of customer needs. Co-packaged optics will replace copper interconnects in future AI accelerators. In 2024, TSMC announced its COUPE process for integrating optical engines into advanced packages, and earlier this year Nvidia announced its first scale-out switch using this technology.
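To make the bandwidth gap concrete, here is a minimal sketch of the arithmetic. The 200 Gb/s copper link figure comes from the text above; the 10 TB/s on-package figure is an illustrative assumption for a CoWoS-class die-to-die interconnect, not a vendor specification.

```python
# Rough comparison: one copper chip-to-chip link vs. on-package
# die-to-die bandwidth. All numbers are illustrative assumptions.

copper_link_gbps = 200                    # copper SerDes link (Gb/s), per the text
copper_link_GBps = copper_link_gbps / 8   # bits -> bytes: 25 GB/s

# Assumed aggregate on-package bandwidth for a multi-chiplet GPU
# assembly (CoWoS-class); 10 TB/s is a round, hypothetical figure.
on_package_TBps = 10.0
on_package_GBps = on_package_TBps * 1000  # 10,000 GB/s

ratio = on_package_GBps / copper_link_GBps
print(f"Copper link : {copper_link_GBps:,.0f} GB/s")
print(f"On-package  : {on_package_GBps:,.0f} GB/s")
print(f"On-package bandwidth is roughly {ratio:,.0f}x a single copper link")
```

Under these assumptions, the on-package interconnect is roughly 400x faster than a single copper link, which is why GPU chiplets and HBM are integrated in one advanced package rather than connected over board-level copper.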