Evolution of Nvidia's packaging: panel-level fan-out packaging opportunities are arriving early

2024-06-11

CoWoS production capacity is stretched tight; the GB200 is planned to adopt panel-level fan-out packaging a year ahead of schedule.

To ease the tight capacity of CoWoS advanced packaging, supply-chain sources have revealed that NVIDIA plans to move its "world's most powerful AI chip," the GB200, to panel-level fan-out packaging (FOPLP) ahead of schedule, from the original 2026 to 2025, igniting the business opportunities for panel-level fan-out packaging earlier than expected.

A recent report from foreign institutional investors also confirms this, pointing out that the supply chain for NVIDIA's GB200 superchip has been set in motion and is currently in the design fine-tuning and testing phase, with the related business opportunities on the verge of breaking out.

Judging from CoWoS advanced packaging capacity, the same report estimates that 420,000 GB200 units will reach the downstream market in the second half of this year, with output next year expected to reach 1.5 million to 2 million units.


Overall, with CoWoS capacity in short supply and expansion unable to keep up with demand, the industry expects panel-level fan-out packaging, itself an advanced packaging technology, to become a powerful tool for easing the AI chip supply crunch.

Fan-out packaging has two branches: wafer-level fan-out (FOWLP) and panel-level fan-out (FOPLP). High-end chips keep growing in size, while semiconductor devices have nearly reached the limits of further miniaturization. Using packaging to improve computing performance and reliability, including special thin-film processes and chip stacking on glass substrates through heterogeneous integration, has therefore become the industry mainstream, with panel-level fan-out packaging emerging as the market's new direction. In particular, glass substrates are extremely flat, which enables more precise patterning and higher interconnect density.

Among the packaging and testing houses in Taiwan, Powertech Technology is currently moving fastest on panel-level fan-out packaging. Through its Zhubei Fab 3, the company is focusing on panel-level fan-out packaging, TSV CIS (CMOS image sensor), and related technologies, emphasizing that fan-out packaging enables heterogeneous integration of ICs. Powertech has previously said it is upbeat about the business opportunities of the panel-level fan-out era: compared with wafer-level fan-out packaging, each panel offers 2 to 3 times the usable area for chips.
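As a rough illustration of that area advantage, the sketch below compares the usable area of a 300 mm wafer with that of one common FOPLP panel format. The 510 x 515 mm panel size and the edge-exclusion values are assumptions for illustration, not figures from the article or from Powertech.

```python
import math

# Rough, illustrative comparison of usable carrier area for
# wafer-level vs. panel-level fan-out packaging.
# Panel dimensions and edge exclusions are assumptions, not
# figures quoted in the article.

WAFER_DIAMETER_MM = 300          # standard 12-inch wafer
WAFER_EDGE_EXCLUSION_MM = 3      # assumed unusable rim

PANEL_SIZE_MM = (510, 515)       # one common FOPLP panel format (assumed)
PANEL_EDGE_EXCLUSION_MM = 10     # assumed unusable border

def wafer_usable_area_mm2() -> float:
    radius = WAFER_DIAMETER_MM / 2 - WAFER_EDGE_EXCLUSION_MM
    return math.pi * radius ** 2

def panel_usable_area_mm2() -> float:
    w = PANEL_SIZE_MM[0] - 2 * PANEL_EDGE_EXCLUSION_MM
    h = PANEL_SIZE_MM[1] - 2 * PANEL_EDGE_EXCLUSION_MM
    return w * h

if __name__ == "__main__":
    wafer = wafer_usable_area_mm2()
    panel = panel_usable_area_mm2()
    print(f"wafer usable area : {wafer:,.0f} mm^2")
    print(f"panel usable area : {panel:,.0f} mm^2")
    print(f"panel / wafer     : {panel / wafer:.1f}x")
```

The exact multiple depends on the panel format chosen and on how efficiently rectangular packages tile each carrier, which is why quoted figures vary around the 2 to 3 times cited above.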

Panel giant Innolux (Qun Chuang) expects 2024 to be the group's "advanced-packaging mass-production year" as it enters the semiconductor field. Capacity for the first phase of its fan-out panel-level packaging line has already been fully booked, with mass production and shipments planned for the third quarter of this year.

Qun Chuang chairman Hong Jinyang emphasized that its panel-level packaging (PLP) technology connects chips through redistribution layers (RDL) to meet requirements for high reliability, high power output, and high-quality packaging. The process and its reliability have been certified by international tier-one customers, and yields have likewise been affirmed by customers, so mass production can begin this year.


CoWoS capacity issues are difficult to resolve

The four major global cloud service providers (CSPs), Microsoft, Google, Amazon, and Meta, continue to expand their AI infrastructure, with combined capital expenditure expected to reach 170 billion US dollars this year. Institutional investors note that while the surge in AI chip demand is a boon, the growing area of the silicon interposer reduces the number of interposers that can be cut from each 12-inch wafer, keeping TSMC's CoWoS (Chip on Wafer on Substrate) capacity in short supply.

CoWoS can be divided into two steps: CoW and WoS. CoW (Chip-on-Wafer) stacks the chips on a silicon interposer wafer, and WoS (Wafer-on-Substrate) mounts that assembly onto a package substrate. Combined, CoWoS stacks the chips first and then packages them on a substrate.

NVIDIA's GPUs account for about 80% of the global market. Research institutions predict that by the end of 2024, TSMC's CoWoS capacity will reach roughly 40,000 wafers per month, doubling by the end of next year. However, with the release of NVIDIA's B100 and B200, the interposer area used by a single chip will be larger than before, meaning fewer interposers can be obtained from each 12-inch wafer, and CoWoS capacity will keep playing catch-up with GPU demand.

CoWoS's iterations since 2011 show a clear pattern. Each generation of silicon interposer keeps growing, and as interposer area increases, the number of interposers obtainable from a 12-inch wafer drops. At the same time, the number of HBM (High Bandwidth Memory) stacks per package keeps rising rapidly, and the HBM standards themselves keep advancing. Industry insiders add that the commonly quoted count is simply the wafer area divided by the interposer area, so the actual number of usable interposers per wafer is even lower.
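To make the point concrete, the sketch below contrasts the naive area-division count described above with a common gross-die-per-wafer approximation that also accounts for rectangular dies lost at the circular wafer edge. The interposer dimensions are placeholders for illustration, not NVIDIA or TSMC figures.

```python
import math

# Naive estimate of interposers per 12-inch wafer, as described in the
# article: wafer area divided by interposer area. The interposer size
# below is a placeholder for illustration, not an NVIDIA specification.

WAFER_DIAMETER_MM = 300
INTERPOSER_MM = (55, 47)   # assumed interposer width x height, in mm

def by_area_division(die_w: float, die_h: float) -> float:
    """Wafer area / interposer area -- the optimistic figure."""
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    return wafer_area / (die_w * die_h)

def gross_die_per_wafer(die_w: float, die_h: float) -> float:
    """A common gross-die-per-wafer approximation that also subtracts
    rectangular dies lost at the circular wafer edge."""
    d = WAFER_DIAMETER_MM
    area = die_w * die_h
    return (math.pi * (d / 2) ** 2) / area - (math.pi * d) / math.sqrt(2 * area)

if __name__ == "__main__":
    w, h = INTERPOSER_MM
    print(f"area division   : {by_area_division(w, h):.0f} interposers/wafer")
    print(f"edge-aware est. : {gross_die_per_wafer(w, h):.0f} interposers/wafer")
```

With the assumed dimensions the naive division suggests roughly twice as many interposers as the edge-aware estimate, which is exactly why insiders caution that the real output per wafer is lower than the simple ratio implies.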

In addition, a CoWoS package places multiple HBM (High Bandwidth Memory) stacks around the GPU, and HBM itself is considered another bottleneck. Manufacturers point to HBM as a significant challenge as the number of Extreme Ultraviolet (EUV) layers gradually increases. Taking SK Hynix, the leader in HBM market share, as an example: the company applied a single EUV layer in its 1-alpha node and this year has begun transitioning to 1-beta, which may increase its use of EUV by 3 to 4 times.

Beyond the added process difficulty, each HBM generation also stacks more DRAM dies. HBM2 stacks 4 to 8 DRAM dies, HBM3/3E raises that to 8 to 12, and HBM4 will increase the stack to 16.
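A minimal sketch of how the DRAM die count per accelerator package grows with HBM generation, using the stack heights above. The stacks-per-package values are illustrative assumptions, not figures from the article.

```python
# Illustrative tally of DRAM dies per accelerator package as HBM
# generations stack more dies. Stack heights follow the ranges in the
# article (upper bounds used); stacks-per-package counts are assumed.

DIES_PER_STACK = {
    "HBM2": 8,      # article: 4 to 8 dies per stack
    "HBM3/3E": 12,  # article: 8 to 12 dies per stack
    "HBM4": 16,     # article: up to 16 dies per stack
}

STACKS_PER_PACKAGE = {  # assumed example configurations
    "HBM2": 4,
    "HBM3/3E": 8,
    "HBM4": 8,
}

for gen, dies in DIES_PER_STACK.items():
    stacks = STACKS_PER_PACKAGE[gen]
    print(f"{gen:8s}: {stacks} stacks x {dies} dies = {stacks * dies} DRAM dies")
```

Under these assumptions the DRAM die count per package roughly quadruples from an HBM2-era part to an HBM4-era one, which is why memory supply scales into a bottleneck alongside interposer capacity.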

These twin bottlenecks, CoWoS interposer capacity and HBM supply, will be hard to overcome in the short term. Competitors have proposed their own solutions, such as Intel's use of rectangular glass substrates to replace the 12-inch wafer interposer, but these approaches also require preparation time and R&D investment, and breakthroughs from the manufacturers are still awaited.
