The big picture: During his tenure as Intel's CEO, Pat Gelsinger sought to correct a critical strategic misstep that allowed TSMC to surpass Intel in process technology. Gelsinger promised that Intel's 18A process would enable the company to reclaim its leadership in the foundry space, but that claim has faced scrutiny since Intel reset its guidance in July.
Last week, rumors emerged suggesting Intel's 18A process node suffered from an "abysmal 10 percent yield" and a significantly lower SRAM density compared to TSMC's competing N2 manufacturing node. Naturally, this sparked plenty of discussion in the tech community and prompted responses from industry analysts and even from Intel's now-former CEO, Pat Gelsinger.
Shortly after Patrick Moorhead of Moor Insights & Strategy dismissed reports that Broadcom was dissatisfied with Intel's 18A node, Gelsinger weighed in, expressing confidence in the progress being made:
thank you Pat for helping to set the record straight. I'm so very proud of the TD/18A team for the incredible work and progress they are making.
– Pat Gelsinger (@PGelsinger) December 7, 2024
It's important to note that when Gelsinger returned to Intel as CEO in 2021, the company was in crisis mode. His efforts to reshape Intel's culture and strategy have yet to yield significant results, partly because of the audacious "five nodes in four years" goal he set for the company.
The 18A process represents the culmination of these efforts, but much of the current debate about its viability stems from incomplete or outdated information.
At Deutsche Bank's 2024 Technology Conference in September, Gelsinger reported that Intel's 1.8nm-class process technology had achieved a defect density of less than 0.4 defects per square centimeter. This is a promising metric, especially given that the industry standard for this stage of development is typically below 0.5 defects per square centimeter.
For comparison, TSMC's N7 and N5 nodes also achieved defect densities of approximately 0.33 defects per square centimeter about a year before entering high-volume production.
However, yield analysis is more nuanced than simply looking at defect density in a vacuum. Chip size plays a crucial role, as the number of chips that can fit on a wafer is directly related to their size.
Smaller chips – such as those used in smartphones and IoT devices – are typically the first to adopt new nodes, allowing manufacturers time to optimize yields for larger chips, like GPUs and AI accelerators.
This dynamic explains why companies like Intel and AMD have transitioned away from monolithic chip designs in favor of a chiplet-based approach. Because any individual defect is small and localized, splitting a design into smaller dies means each defect ruins a smaller fraction of the wafer, translating into a higher yield rate, as the rough sketch below illustrates.
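To make that concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes a 300 mm wafer, the commonly used gross-dies-per-wafer approximation, and a simple Poisson yield model; the 0.4 defects/cm² figure comes from the number Intel cited, while the two die sizes are purely illustrative (a small chiplet versus a hypothetical die four times larger), not actual Intel products.

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Common approximation for gross dies per wafer (ignores scribe lines and edge exclusion)."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2: float, d0_per_cm2: float) -> float:
    """Poisson yield model: Y = exp(-A * D0), with the die area A converted to cm^2."""
    return math.exp(-(die_area_mm2 / 100.0) * d0_per_cm2)

D0 = 0.4  # defects per cm^2 (the figure Intel quoted in September)

for area_mm2 in (114, 456):  # small chiplet vs. a hypothetical 4x-larger monolithic die
    gross = dies_per_wafer(area_mm2)
    y = poisson_yield(area_mm2, D0)
    print(f"{area_mm2} mm^2: {gross} gross dies per wafer, ~{gross * y:.0f} good dies ({y:.0%} yield)")
```

Under these toy numbers, the small die nets roughly 350 good dies per wafer while the four-times-larger die nets around 20, a gap of more than an order of magnitude. That economics is the core argument for chiplets.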
Others, such as Broadcom and Nvidia, design their large processors so that partially defective dies can still be sold as usable chips, typically repurposed as lower-end products (a practice known as binning), which makes their real-world yields harder to capture with simplistic "napkin math."
Intel's next-generation Panther Lake CPUs will feature several chiplet configurations, and leaks suggest the largest die containing the CPU and NPU cores will have a surface area of around 114 square millimeters.
At a defect density of 0.4 defects per square centimeter, the theoretical yield for this die would range between 50 and 68 percent, depending on the yield model used. This estimate assumes Intel's 18A defect density has not improved since September, which is unlikely.
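The spread in that estimate comes mostly from the choice of yield model rather than the inputs. The sketch below, assuming the ~114 mm² die and 0.4 defects/cm² figures discussed above, compares a few classic models; the two-layer Bose-Einstein variant is included only as an illustrative, more pessimistic case, not a claim about how 18A is actually modeled.

```python
import math

A = 114 / 100.0   # die area in cm^2 (leaked ~114 mm^2 figure for the largest Panther Lake tile)
D0 = 0.4          # defects per cm^2 (the figure Intel cited in September)

models = {
    "Poisson":               math.exp(-A * D0),                          # Y = e^(-A*D0)
    "Murphy":                ((1 - math.exp(-A * D0)) / (A * D0)) ** 2,  # Y = ((1 - e^(-A*D0)) / (A*D0))^2
    "Seeds":                 1 / (1 + A * D0),                           # Y = 1 / (1 + A*D0)
    "Bose-Einstein (n=2)":   1 / (1 + A * D0) ** 2,                      # two critical layers, illustrative
}

for name, y in models.items():
    print(f"{name:>20}: {y:.1%}")
```

For these inputs, the classic single-parameter models land in the mid-to-high 60s, while the pessimistic multi-layer variant drops below 50 percent, which is roughly the ballpark of the 50 to 68 percent range quoted above and shows why different analysts can report quite different yield figures from the same defect density.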
Given TSMC's dominant position in the semiconductor manufacturing market, Intel's aggressive push to revive its foundry business remains a high-stakes gamble. Gelsinger's recent departure as CEO has been seen by some as a premature move that fails to address Intel's fundamental challenges.
Critics argue that without resolving its core issues, Intel risks being split up, shutting down its factories, or even being acquired by a major player like Qualcomm. For now, we'll have to wait and see.