HBM4, the race is on!
For the memory industry, HBM has become the center of attention.
While most of the big memory makers have posted losses over the past two years, the HBM market alone has kept growing, becoming one of the few results they can show off. SK Hynix in particular, which supplies the HBM for Nvidia's H100, has become one of the biggest earners of the AI wave.
Although barely a year has passed since the first HBM3E was announced, the major manufacturers have already put HBM4 on the agenda. The two Korean players, SK Hynix and Samsung, are competing especially fiercely over who will be first to mass-produce next-generation HBM4 memory.
The two companies plan to finish the basic design and reach tape-out, the point at which the design is frozen and handed off for manufacturing, in October and November respectively; at that stage the memory chip is functionally complete. Both are lining up to supply HBM4 for Nvidia's Rubin-based AI chips and thereby secure a leading position in the future market.
Will SK Hynix continue to maintain its leading position, or will Samsung regain its strength?
The dispute over technology routes
First, a brief look at the technical specifications of HBM4. Compared with HBM3E, it doubles the per-stack interface width from 1024 bits to 2048 bits, significantly improving data transfer speed and performance. HBM3E stacks up to 12 DRAM dies and supports 24GB and 36GB capacities, while HBM4 can stack 16 DRAM dies for a 64GB capacity.
According to JEDEC, HBM4 is designed to increase data processing speeds while retaining key features such as higher bandwidth, lower power consumption and greater capacity per chip or stack, which are critical for applications that require efficient management of large data sets and complex calculations, such as generative artificial intelligence, high-performance computing, high-end graphics cards and servers.
According to the preliminary JEDEC specification, HBM4 is expected to have double the number of channels per stack compared with HBM3, allowing more parallel accesses and thus significantly higher performance. Also of note, to preserve device compatibility, the new standard ensures that a single controller can work with both HBM3 and HBM4.
JEDEC noted that HBM4 will specify 24 Gb and 32 Gb die densities and support TSV stacks from 4-high to 16-high. The committee has tentatively agreed on speeds of up to 6.4 Gbps per pin, with higher data rates still under discussion.
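To put those preliminary figures in perspective, the sketch below works out the per-stack capacity and peak bandwidth they imply. It is a back-of-envelope illustration only, using the 2048-bit interface, 32 Gb dies, 16-high stacking and 6.4 Gbps pin speed quoted above; shipping products may differ.

```python
# Back-of-envelope HBM4 per-stack figures, based solely on the preliminary
# numbers quoted above (2048-bit interface, 32 Gb dies, 16-high stack,
# 6.4 Gbps per pin). Illustrative only; real products may differ.

def stack_capacity_gb(die_gbit: int = 32, stack_height: int = 16) -> float:
    """Capacity of one stack in gigabytes (Gb -> GB conversion)."""
    return die_gbit * stack_height / 8

def stack_bandwidth_gbs(bus_width_bits: int = 2048, pin_rate_gbps: float = 6.4) -> float:
    """Peak bandwidth of one stack in GB/s (Gbit/s -> GB/s conversion)."""
    return bus_width_bits * pin_rate_gbps / 8

if __name__ == "__main__":
    print(f"capacity : {stack_capacity_gb():.0f} GB per stack")      # 64 GB
    print(f"bandwidth: {stack_bandwidth_gbs():.0f} GB/s per stack")  # ~1638 GB/s
```

At the quoted numbers, a single 16-high stack of 32 Gb dies reaches 64 GB, and a 2048-bit interface at 6.4 Gbps per pin tops out at roughly 1.6 TB/s per stack.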
Interestingly, JEDEC did not specify how HBM4 will integrate memory and logic semiconductors into a single package, which is one of the main challenges that the industry is eager to solve.
Image source: NVIDIA
As we have discussed before, each generation of the HBM standard is essentially a battle over technology routes: whoever's approach is adopted gains a leading position in the market. SK Hynix, Samsung and Micron have therefore fought fiercely over the standard.
The two Korean manufacturers originally hoped to steer standardization to their own benefit. SK Hynix studied the concept of connecting HBM directly to logic processors, with memory and logic makers jointly designing the chips and foundries such as TSMC manufacturing them. Samsung took a similar view, and it arguably holds an even stronger hand here because it runs both foundry and packaging businesses of its own.
Micron, by contrast, did not plan to integrate HBM and logic into a single device. Its position was that while combination parts such as an HBM-GPU package do deliver faster memory access, relying on a single locked-in chip carries greater risk, and the Korean vendors' approach should therefore not become the standard.
US media previously claimed that as machine learning training models grow larger and training times lengthen, the pressure to shorten runtimes by speeding up memory access and increasing the memory capacity of each GPU will also increase. Giving up the competitive supply advantage of standardized DRAM in order to obtain a locked HBM-GPU combination chip design (although with better speed and capacity) may not be the right way forward.
The Korean media's argument is just the opposite: South Korea's non-memory semiconductor business has struggled for years, and HBM now brings a once-in-a-lifetime opportunity that cannot be missed. Beyond customized DRAM foundry work, they argue, a much larger market may open up, in which even giants like Nvidia would have to design around base dies manufactured by Samsung and SK Hynix.
For now it looks as though the Korean manufacturers, holding both the technology and the market share, will eventually make their own approach the de facto HBM4 standard. Micron once pushed HMC (the Hybrid Memory Cube) hoping to take the lead in data center memory and escape the traditional semiconductor cycle, but that effort ultimately failed. It understands full well what SK Hynix's push for customized memory means and has tried to resist it, but it is already behind and can only follow suit.
A showdown between the two Korean giants
On August 19 this year, SK Hynix Vice President Ryu Sung-soo attended the "SK Icheon Forum 2024" held in Seoul. At the forum's second session, Ryu announced SK Hynix's ambitious strategy to develop products with 20 to 30 times the performance of existing HBM.
“We aim to develop products that offer 20 to 30 times better performance than current HBM, with a focus on differentiated products,” Ryu said. He stressed that the company is focused on addressing the mass market with AI-oriented memory solutions through advanced execution capabilities. This strategy matters because demand for high-performance memory is growing rapidly, driven by the fast development of AI technology.
Ryu stressed that SK Hynix’s HBM has drawn great attention from global companies, especially the so-called M7, the “Magnificent Seven” of Apple, Microsoft, Alphabet, Amazon, Nvidia, Meta, and Tesla. “All members of the M7, which are large U.S. technology companies, have approached us to provide customized HBM solutions,” Ryu revealed.
The Vice President also shared his personal commitment to meeting these demands, stating, “I have been in constant communication with M7 all weekend. Significant engineering resources will be required internally to meet their requirements, and we are making a great effort to secure these resources.” This dedication reflects SK Hynix’s determination to maintain its leadership in the HBM market.
He also said that SK Hynix needs to define its own memory specifications instead of following other companies. "We need to create our own (memory semiconductor) specifications instead of following specific companies." He concluded: "We are at an important turning point in the HBM model, and the demand for customized products is increasing. We will seize these opportunities and continue to grow the memory business."
However, HBM4 requires a more advanced logic base die, an area where SK Hynix is weaker, so it chose to work with TSMC. As the world's largest wafer foundry and a supplier to the M7, TSMC knows their needs well and is naturally practiced at making this kind of product.
Earlier this year, TSMC and SK Hynix formed the so-called AI Semiconductor Alliance, which combines the strengths of the two companies in their respective fields and coordinates their strategies under a "one team" principle. The two then announced a collaboration on HBM4, and TSMC confirmed that it will use its 12FFC+ (12nm-class) and N5 (5nm-class) process technologies to help SK Hynix produce HBM4 base dies.
Image source: Hynix
TSMC’s N5 process allows more logic and functionality to be integrated, with interconnect pitches from 9 microns down to 6 microns, which is critical for on-die integration. The 12FFC+ process, derived from TSMC’s 16nm FinFET technology, will be used to produce cost-effective base dies that connect the memory to the host processor through a silicon interposer.
TSMC is also optimizing its packaging technologies, specifically CoWoS-L and CoWoS-R, to support HBM4 integration. These advanced packaging methods can build interposers of up to eight reticle (mask) sizes and accommodate up to 12 HBM4 memory stacks. The new interposers will have up to eight routing layers to carry more than 2,000 interconnects while maintaining proper signal integrity. So far, experimental HBM4 memory stacks have achieved data transfer rates of 6 GT/s at 14mA, according to TSMC's slides.
A TSMC representative said: “We have also optimized CoWoS-L and CoWoS-R for HBM4. Both CoWoS-L and CoWoS-R use more than eight layers, enabling HBM4 to route more than 2,000 interconnects with [proper] signal integrity. We work with EDA partners such as Cadence, Synopsys, and Ansys to certify HBM4 channel signal integrity, IR/EM, and thermal accuracy.”
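Taken together, those packaging figures hint at the scale of routing and bandwidth a single CoWoS package would have to handle. The sketch below is purely illustrative arithmetic using the maximum values quoted above (12 stacks, more than 2,000 interconnects per stack, a 2048-bit data bus at the experimental 6 GT/s rate); it is not a TSMC specification.

```python
# Rough package-level totals implied by the CoWoS figures quoted above
# (up to 12 HBM4 stacks, >2,000 interconnects per stack, a 2048-bit data
# bus at the experimental 6 GT/s rate). Illustrative arithmetic only; the
# interconnect count also covers command/address lines, not just data.

STACKS_PER_PACKAGE = 12    # upper bound mentioned for CoWoS-L / CoWoS-R
SIGNALS_PER_STACK = 2000   # ">2,000 interconnects" routed per stack
DATA_BUS_BITS = 2048       # HBM4 data width per stack
PIN_RATE_GTPS = 6.0        # experimental per-pin transfer rate

signals_total = STACKS_PER_PACKAGE * SIGNALS_PER_STACK
bandwidth_tbs = STACKS_PER_PACKAGE * DATA_BUS_BITS * PIN_RATE_GTPS / 8 / 1000

print(f"interposer routing : >{signals_total:,} signals")  # >24,000
print(f"aggregate bandwidth: ~{bandwidth_tbs:.1f} TB/s")   # ~18.4 TB/s
```

Even at these rough numbers, a fully populated package would need tens of thousands of interconnects routed with clean signal integrity, which is why the interposer layer count and EDA sign-off work matter.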
However, it should be noted that despite TSMC's advanced process and packaging, the DRAM dies in SK Hynix's HBM4 will still use its fifth-generation 10nm-class (1b) process, and SK Hynix expects to mass-produce 12-layer HBM4 in the second half of 2025.
At the same time, Samsung, as an IDM company with wafer foundry, memory, packaging and other capabilities, is also actively promoting customized HBM AI solutions.
In July 2024, Choi Jang-seok, head of new business planning at Samsung Electronics' memory division, stated at the "Samsung Foundry Forum" that the company intends to develop a variety of customized HBM memory products for the HBM4 generation, and announced cooperation with major customers such as AMD and Apple.
Choi Jang-seok pointed out that the HBM architecture is undergoing profound changes, and many customers are turning from traditional general-purpose HBM to customized products. Samsung Electronics believes that customized HBM will become a reality in the HBM4 generation.
Samsung's plan is to use HBM4 as an opportunity to reverse its disadvantage in the HBM battle. Samsung has both a System LSI division and a foundry division, and the two work together internally to optimize performance from the initial design of the HBM4 base die. And because customers such as NVIDIA would prefer to entrust the entire flow, including foundry and packaging, to a single company, Samsung's so-called "turnkey" strategy is arguably more competitive than the SK Hynix-TSMC partnership.
Image source: Samsung
Samsung formed a new HBM development team of around 400 people within its Device Solutions (DS) division around July and has made progress on HBM4, aiming to complete the product's tape-out by the end of this year. This move is also seen as laying the foundation for its mass production of 12-layer HBM4 products by the end of 2025.
It is reported that the logic base die of Samsung's existing HBM3E uses a 7nm-class process, but for HBM4 Samsung will skip the 5-6nm nodes and move straight to a 4nm logic process. On the memory side it is more aggressive than SK Hynix, planning to use its new sixth-generation 10nm-class (1c) DRAM.
As Samsung plans to use 1c DRAM in its HBM4 core chips, related investments will follow. TrendForce reports that Samsung's P4L plant will become a key location for expanding memory capacity starting in 2025, with DRAM equipment installation expected to begin in mid-2025 and mass production of 1c nanometer DRAM expected to begin in 2026.
For now, Samsung's HBM3E is still working through Nvidia's qualification process. TrendForce notes that, as the company is eager to win back HBM market share from SK Hynix, its 1alpha (1α) production capacity has been reserved for HBM3E.
Is hybrid bonding the future?
It is worth noting that JEDEC's preliminary HBM4 specification does not yet fix the stack height. JEDEC originally planned to release the HBM4 standard at the beginning of this year, but the release was reportedly postponed due to disagreements among member companies over the height limit. JEDEC is understood to intend to relax the limit from the existing 720 micrometers (μm) to 775 μm, because additional headroom is needed to build more layers.
This has put "hybrid bonding" in the spotlight for the memory market. The technology, which can both reduce the thickness of an HBM stack and increase its speed, is regarded as the key that will decide success or failure in this market.
According to Korean media reports, SK Hynix is pursuing a dual-track bonding approach for HBM4, which is expected to be mass-produced next year: the existing "MR-MUF" (Mass Reflow-Molded Underfill) process and hybrid bonding.
Bonding here refers to the process of joining stacked dies. HBM is built by stacking DRAM dies, and in MR-MUF the dies are first heated and joined in a soldering-like step, after which a viscous underfill is injected between the chips and cured; this same step also performs the "packaging" that protects the chips. In this process the DRAM dies are connected to one another through "bumps", small spherical conductive protrusions. Hybrid bonding, by contrast, does away with bumps and joins the DRAM dies directly. That not only reduces the thickness of the stack significantly but also shortens the distance between dies, speeding up data transmission. Because it compensates so well for the weaknesses of conventional bonding, it has attracted great attention from major customers.
A semiconductor industry insider said, "Because hybrid bonding is technically very difficult, SK Hynix may continue to use MR-MUF for 16-layer HBM4, but hybrid bonding is expected to be introduced from the year after next in any case."
Notably, JEDEC recently relaxed the thickness limit in the HBM4 standard from the previous 720 micrometers (μm) to 775 μm. This means memory companies can implement HBM4 with existing bonding methods, and MR-MUF and hybrid bonding are expected to coexist and develop side by side for some time to come.
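The relaxed limit can be put in rough perspective with a simple thickness budget. The sketch below divides the 775 μm ceiling evenly across a 16-high stack; the even split and the per-interface bump-gap value are hypothetical placeholders for illustration only, not vendor data, and the thicker base (logic) die is ignored.

```python
# Illustrative thickness budget for a 16-high HBM4 stack under the relaxed
# 775 um JEDEC limit. The even split and the bump-gap value are assumptions
# for illustration only; the thicker base (logic) die is ignored.

STACK_LIMIT_UM = 775   # relaxed HBM4 stack height limit
LAYERS = 16            # target DRAM stack height
BUMP_GAP_UM = 15       # assumed per-interface gap with bump-based bonding (hypothetical)

per_layer = STACK_LIMIT_UM / LAYERS
print(f"average budget per layer        : {per_layer:.0f} um")  # ~48 um
print(f"silicon budget with bump bonding: {per_layer - BUMP_GAP_UM:.0f} um per layer")
# Hybrid bonding removes the bump gap entirely, so the same 775 um ceiling
# leaves more room per die, or room for more layers (24-high, 32-high).
```

The arithmetic shows why the gap between dies matters as stacks grow taller: at 16 layers the average budget is under 50 μm per layer, and every micrometer saved between dies translates directly into headroom for thicker dies or higher stacks.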
More noteworthy, however, is Samsung Electronics' attempt to use this as a chance to upend the HBM market: the company is said to be very determined to make hybrid bonding work for HBM4. Another industry insider said, "If hybrid bonding proves too difficult, Samsung Electronics may switch from its existing 'TC-NCF' (thermal compression with non-conductive film) method to MR-MUF, but at present it is more likely to focus on hybrid bonding." Samsung currently builds HBM with TC-NCF, which places a thin non-conductive film (NCF) between the chips and then bonds them by thermal compression; so far this method has been considered less competitive than MR-MUF in terms of product integrity and production efficiency.
Samsung Electronics presented a paper at the recent Electronic Components and Technology Conference (ECTC) in Denver, Colorado, arguing that hybrid bonding is necessary for HBM products with more than 16 layers. Despite JEDEC's relaxation of the thickness limit, Samsung still hopes to bring hybrid bonding to production before its competitors to secure market leadership, since the technology will become unavoidable if more advanced 24-layer and 32-layer products are launched in the future.
This move is expected to push SK Hynix, now being chased by Samsung, to accelerate its own hybrid bonding development. SK Group Chairman Choi Tae-won visited SK Hynix headquarters earlier this month and told employees to "realize the commercialization of the sixth-generation HBM ahead of schedule next year," which the industry believes also covers hybrid bonding. SK Hynix's senior executives have indeed mentioned hybrid bonding packaging frequently in public.
Micron is also said to be focusing on hybrid bonding for HBM4, although the industry believes its technology is less mature than that of Samsung Electronics and SK Hynix. Industry insiders said: "Micron is expected to continue using the current TC-NCF method for some time to come."
Final Thoughts
As of now, the HBM market has settled into a pattern of one dominant player, one strong challenger, and one laggard.
SK Hynix has the strongest technology and, as Nvidia's most important supplier, holds the initiative. Samsung has tried hard, but its record in Nvidia's HBM3 and HBM3E qualification has been underwhelming, making HBM4 a must-win. Micron has shipped HBM to Nvidia, but its market share is too small and its influence over the HBM standard correspondingly limited; it will struggle to pose a real threat to the two Korean manufacturers in the short term.
With the advent of HBM4, the industry may usher in a more intense war, and the winner is expected to truly dominate the DRAM market in the next decade.
END