AEI

ASIA ELECTRONICS INDUSTRY
YOUR WINDOW TO SMART MANUFACTURING

Samsung Tops Industry in High-Stack HBM Solution

Samsung Electronics Co., Ltd. has announced the HBM3E 12H, the industry’s first 12-stack HBM3E DRAM and the highest-capacity High Bandwidth Memory (HBM) product to date. Samsung’s HBM3E 12H provides the industry’s highest bandwidth so far, up to 1,280GB/s, and an industry-leading capacity of 36GB. Compared with the 8-stack HBM3 8H, both bandwidth and capacity have improved by more than 50%.
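The headline figures can be sanity-checked with standard HBM arithmetic. The sketch below assumes the standard 1,024-bit HBM interface and, for illustration only, a 10 Gbps per-pin rate for HBM3E and a 6.4 Gbps/pin, 16GB baseline for HBM3 8H; none of these baseline numbers appear in the announcement itself.

```python
# Rough sanity check of the announced HBM3E 12H figures against an assumed
# HBM3 8H baseline (6.4 Gbps/pin, 16 GB). These baseline values are
# illustrative assumptions, not figures from the article.

HBM_BUS_WIDTH_BITS = 1024  # standard HBM interface width


def hbm_bandwidth_gbs(pin_rate_gbps: float) -> float:
    """Aggregate bandwidth in GB/s from the per-pin data rate in Gbps."""
    return pin_rate_gbps * HBM_BUS_WIDTH_BITS / 8


hbm3e_bw = hbm_bandwidth_gbs(10.0)  # matches the quoted 1,280 GB/s
hbm3_bw = hbm_bandwidth_gbs(6.4)   # assumed HBM3 8H baseline

print(f"HBM3E 12H bandwidth: {hbm3e_bw:.1f} GB/s")
print(f"Bandwidth gain: {hbm3e_bw / hbm3_bw - 1:.0%}")  # more than 50%
print(f"Capacity gain:  {36 / 16 - 1:.0%}")             # more than 50%
```

Under these assumptions, both gains come out above the 50% improvement the article cites.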

HBM3E 12H is the industry’s first 12-stack HBM3E DRAM.

“The industry’s AI service providers are increasingly requiring HBM with higher capacity, and our new HBM3E 12H product has been designed to answer that need,” said Yongcheol Bae, Executive Vice President of Memory Product Planning at Samsung Electronics. “This new memory solution forms part of our drive toward developing core technologies for high-stack HBM and providing technological leadership for the high-capacity HBM market in the AI era.”

Adopts Thermal Compression Non-Conductive Film

The HBM3E 12H applies advanced thermal compression non-conductive film (TC NCF), which allows the 12-layer product to keep the same height specification as 8-layer products and thus meet current HBM package requirements. The technology is anticipated to bring added benefits at higher stack counts, as the industry seeks to mitigate the chip die warping that comes with thinner dies.

Additionally, Samsung has continued to reduce the thickness of its NCF material, achieving the industry’s smallest gap between chips at 7µm while also eliminating voids between layers. These efforts improve vertical density by more than 20% compared with its HBM3 8H product.

Samsung’s advanced TC NCF also improves the thermal properties of the HBM by enabling the use of bumps of various sizes between the chips. During the chip bonding process, smaller bumps are used in areas for signaling, while larger ones are placed in spots that require heat dissipation. This method also helps raise product yield.

As AI applications grow exponentially, the HBM3E 12H will be an optimal solution for future systems that require more memory. Its higher performance and capacity will allow customers to manage their resources more flexibly and reduce total cost of ownership (TCO) for datacenters. When used in AI applications, Samsung estimates that, compared with HBM3 8H, the average speed of AI training can be increased by 34% and the number of simultaneous users of inference services can be expanded by more than 11.5 times.1

Samples of Samsung’s HBM3E 12H are now available to customers, with mass production slated for the first half of this year.

1 Based on internal simulation results