DDR memory technology has evolved over the past 20 years
Shortly after DDR5 memory became mainstream, Samsung took the lead in early development of next-generation DDR6 memory, and it expects to complete the design by 2024.
It is reported that at a recent seminar, Samsung's vice president of Test and System Package (TSP) revealed that as memory performance continues to scale, packaging technology must develop along with it. He confirmed that Samsung is already in the early development stage of next-generation DDR6 memory, which will adopt MSAP (Modified Semi-Additive Process) technology. Samsung's competitors SK Hynix and Micron already use MSAP in their DDR5 products.
According to reports, next-generation DDR6 memory will not only use MSAP to strengthen circuit connections, but will also accommodate the increased number of substrate layers in DDR6 modules. In terms of specifications, DDR6 is expected to be twice as fast as existing DDR5, with a JEDEC transfer rate of up to 12800 Mbps and overclocked speeds exceeding 17000 Mbps.
Samsung is expected to complete its DDR6 design by 2024, with commercialization not likely until after 2025.
Almost without our noticing, DDR memory has gone through five generations since the initial leap from KB to GB, and is now moving toward its sixth.
The memory market is full of twists and turns
What is DDR?
Before reviewing the development history of DDR, let's first understand what DDR is.
Storage is divided into ROM and RAM.
ROM is the abbreviation of Read Only Memory, which is a solid-state semiconductor memory that can only read previously stored data. Its characteristic is that once the data is stored, it cannot be changed or deleted, and the data will not disappear when the power is turned off.
RAM is the abbreviation of Random Access Memory. Random means that data is not stored linearly in sequence and can be accessed in any order, regardless of which location was accessed last time. RAM will lose data after power failure.
As for RAM, it is divided into two categories: SRAM (static random access memory) and DRAM (dynamic random access memory).
SRAM is a memory with static access function. It can save the data stored inside without refreshing the circuit. In other words, when the power is on, it does not need to be refreshed and the data will not be lost. SRAM is the fastest storage device for reading and writing in the early days, but its disadvantages are low integration, high power consumption, large volume for the same capacity, and high price. It is used in small quantities in critical systems to improve efficiency, such as the CPU's first-level cache and second-level cache.
DRAM is the most common system memory and can only keep data for a short time. In order to keep data, DRAM uses capacitors for storage, so it must be refreshed every once in a while. If the storage unit is not refreshed, the stored information will be lost.
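As a rough illustration of this refresh requirement, the interval between refresh commands can be worked out from the retention window; the 64 ms window and 8192-command count below are typical JEDEC figures assumed for illustration, not taken from the text.

```python
# DRAM cells leak charge, so every row must be refreshed within a
# retention window. With a 64 ms window and 8192 refresh commands to
# cover all rows (typical JEDEC DDR3/DDR4 figures), the controller
# must issue a refresh roughly every 7.8 microseconds (tREFI).

RETENTION_MS = 64          # worst-case data retention window
REFRESH_COMMANDS = 8192    # refresh commands needed per window

tREFI_us = RETENTION_MS * 1000 / REFRESH_COMMANDS
print(f"average refresh interval: {tREFI_us} us")  # 7.8125 us
```

This is why DRAM is "dynamic": the controller spends a small but permanent slice of every time window on refresh rather than on reads and writes.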
The subsequent SDRAM and DDR SDRAM are both developed based on DRAM and are also a type of DRAM. SDRAM (Synchronous DRAM) means that the memory needs to synchronize the clock, and the sending of internal commands and the transmission of data are based on the clock.
The difference of DDR SDRAM (Double Data Rate SDRAM) is that it can read and write data twice in one clock, which doubles the data transmission speed. With performance and cost advantages, DDR SDRAM has become the most used memory in computers and servers. This is the DDR memory we are going to discuss today.
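The doubling can be sketched numerically. The helper function below is illustrative (its name and the PC133/DDR-266 example figures are assumptions, not from the text): at the same clock frequency, transferring on both edges doubles peak bandwidth.

```python
# Sketch: SDRAM moves one data word per clock; DDR moves one on the
# rising edge and one on the falling edge, so transfers/s = 2 x clock.
# Peak bytes/s then follows from the 64-bit module bus width.

def peak_bandwidth_mb_s(clock_mhz: float, bus_bits: int = 64,
                        double_rate: bool = True) -> float:
    transfers_per_s = clock_mhz * 1e6 * (2 if double_rate else 1)
    return transfers_per_s * bus_bits / 8 / 1e6   # bytes/s -> MB/s

print(peak_bandwidth_mb_s(133, double_rate=False))  # PC133 SDRAM: 1064.0 MB/s
print(peak_bandwidth_mb_s(133))                     # DDR-266:     2128.0 MB/s
```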
The Evolution of DDR
Like other hardware, memory follows Moore's Law. From the early SIMM to the emergence of DDR, and then through iteration after iteration of DDR, memory standards and specifications have changed enormously.
From the initial leap from KB to GB, and from the evolution from a single 1GB stick to a single 16GB and 32GB stick, the development of memory capacity has gone through a long process.
In early personal computers, memory was installed directly in the motherboard's DRAM sockets in the form of DIP chips. Eight or nine such chips were required, for a capacity of only 64-256KB. Expansion was very difficult, but the capacity was sufficient for the processors and programs of the time. With the emergence of new software and the 80286 hardware platform, however, programs and hardware placed higher demands on memory performance. To increase speed and expand capacity, memory had to move into an independent package, giving rise to the concept of the "memory stick" (memory module).
Memory stick and memory slot
When 80286 motherboards were first introduced, memory modules used the 30-pin SIMM interface with a capacity of 256KB, and a bank had to consist of 8 data bits plus 1 parity bit, which is why 30-pin SIMMs were usually used in groups of four. From the PC's entry into the consumer market in 1982 onward, 30-pin SIMM memory paired with the 80286 processor was the pioneer of the memory field.
30pin SIMM
Around the 1990s, PC technology reached a new peak with the 386 and 486 eras. CPUs had by then moved to 32 bits, and 30-pin SIMM memory could no longer meet demand: its low bandwidth had become a bottleneck that urgently needed solving, and its 8-bit data bus drove up procurement costs and failure rates. At this point, 72-pin SIMM memory appeared.
72pin SIMM
The 72-pin SIMM supports 32-bit fast page mode memory, which greatly improves memory bandwidth. A single 72-pin SIMM generally had a capacity of 512KB-2MB, and only two needed to be used together. Most 386, 486 and later Pentium, Pentium Pro, and early Pentium II systems used this memory. Because it was incompatible with the 30-pin SIMM, the older module was eliminated by the times.
Early memory frequencies were not synchronized with the CPU external frequencies. They were asynchronous DRAMs, which included FPM DRAM (Fast Page Mode DRAM) and EDO DRAM (Extended data out DRAM). Common interfaces included 30-pin SIMM and 72-pin SIMM, and the operating voltage was 5V.
FPM DRAM was an improvement on early Page Mode DRAM: when reading data within the same row, it keeps the row address fixed and transmits only successive column addresses, allowing multiple data words to be read out in a burst. This approach was very advanced at the time.
FPM DRAM
EDO DRAM memory was a popular memory stick between 1991 and 1995. It is a type of 72-pin SIMM. It has a larger capacity and more advanced addressing method. Its reading speed is much faster than FPM DRAM. Its operating voltage is generally 5V, its bandwidth is 32bit, and its speed is above 40ns. It was mainly used in the 486 and early Pentium computers at the time.
EDO DRAM memory of different specifications
When EDO memory was popular from 1991 to 1995, with the rapid development of manufacturing technology, EDO memory had great breakthroughs in cost and capacity, and the capacity of a single EDO memory reached 4MB-16MB. Since the data bus width of Pentium and higher-level CPUs is 64bit or even higher, EDO RAM and FPM RAM are basically used in pairs.
EDO was popular from Pentium to Pentium 3, and was later replaced by SDRAM.
With the continuous upgrade of CPU, the launch of Intel Celeron series and AMD K6 processors and related motherboard chipsets, EDO DRAM can no longer meet the needs of the system. Memory technology has also undergone a revolution. The socket has been upgraded from the original SIMM to DIMM, and memory has ushered in the classic SDR SDRAM era.
SDRAM brought new life to memory. Its 64-bit bandwidth was consistent with the processor bus width at the time, which meant that one SDRAM was enough to keep the computer running normally, greatly reducing the cost of purchasing memory. Since the memory transmission signal was synchronized with the processor external frequency, the DIMM standard SDRAM was much ahead of SIMM memory in terms of transmission speed.
During this period, driven by the frequency war between Intel and AMD, SDRAM developed from the early 66MHz to the later 100MHz and 133MHz, and the memory specification advanced from PC66 to PC100 and PC133, alongside the less successful PC600, PC700 and PC800 (Rambus) specifications.
SDRAM
Although the memory bandwidth bottleneck had not been completely solved, CPU overclocking had become a perennial topic among DIY users. In the era of frequency competition between Intel and AMD, and aiming to dominate the market, Intel joined hands with Rambus to promote Rambus DRAM in the PC market. Rambus DRAM trades a smaller amount of data per clock cycle for a much higher clock frequency, so its memory bandwidth was excellent, and it was once considered a perfect match for the Pentium 4.
Rambus DRAM
Even so, Rambus DRAM was born at the wrong time and priced beyond the mass market, and its throne was later taken by the faster-improving DDR. PC600 and PC700 Rambus RDRAM suffered from the Intel 820 chipset's well-known problems, while PC800 RDRAM was too expensive for the general public to accept. These problems strangled Rambus RDRAM before it could establish itself, and it eventually gave way to DDR memory.
The DDR Era is Coming
DDR
DDR SDRAM (DDR for short) can be considered an upgraded version of SDRAM. DDR transfers data once on the rising edge and once on the falling edge of the clock signal, so its data transfer rate is twice that of traditional SDRAM. Because this merely adds use of the falling edge of the existing clock, it does not increase energy consumption. Addressing and control signals remain the same as in traditional SDRAM and are transmitted only on the rising edge. In addition, DDR uses the 2.5V SSTL2 signaling standard, lower than SDRAM's 3.3V LVTTL standard, so it consumes less power.
DDR SDRAM
DDR memory is a compromise solution between performance and cost. Its purpose is to quickly establish a solid market space, and then gradually increase the frequency to eventually make up for the lack of memory bandwidth.
The first generation of DDR memory ran at an effective 200MHz, followed by DDR266, DDR333 and that era's mainstream DDR400; anything at 500, 600 or 700MHz was considered overclocked. When DDR first came out it supported only a single channel; chipsets supporting dual channels appeared later, doubling memory bandwidth, while module capacities grew from 128MB to 1GB.
DDR2
With the continuous improvement of CPU processor front-end bus bandwidth and the emergence of high-speed local bus, DDR performance has become a bottleneck restricting processor performance. Therefore, in 2003, Intel announced the development plan of DDR2 SDRAM.
The biggest difference from the previous-generation DDR standard is that, although both transfer data on the rising and falling edges of the clock, DDR2 has twice the prefetch capability of DDR (4-bit versus 2-bit).
DDR2 SDRAM
Starting from a 100MHz core clock, DDR2 provides a minimum per-pin bandwidth of 400Mbps, and its interface runs at 1.8V, further reducing heat generation so that frequencies can be raised. In the DDR2 standard laid out by the JEDEC organization, DDR2 for the PC market comes at effective frequencies of 400, 533 and 667MHz, with high-end DDR2 reaching 800, 1000 and 1200MHz.
It is also worth noting that DDR2 abandoned the traditional TSOP package in favor of FBGA packaging, which reduces parasitic capacitance and impedance-matching problems and improves stability.
DDR3
In 2007, the JEDEC Association officially launched the DDR3 SDRAM specification, and DDR3 began to take the stage.
Compared with DDR2, thanks to the improvement of production technology, the operating voltage of DDR3 has been reduced from 1.8V to 1.5V and 1.35V (DDR3L), which further reduces power consumption and heat generation. It also adopts functions such as automatic self-refresh and partial self-refresh based on temperature, which to a certain extent makes up for the disadvantage of longer delay time of DDR3.
At the same time, because DDR3 outputs 8 bits of data per clock cycle where DDR2 outputs 4, its data throughput per unit time is twice that of DDR2. DDR3 speeds start at 800MHz and reach up to 1600MHz. DDR3 uses the same 240-pin DIMM interface as DDR2, but the keying notches are in different positions, so the two cannot be mixed. Common capacities run from 512MB to 8GB; single 16GB DDR3 modules exist but are very rare.
Intel Core i series (such as LGA1156 processor platform), AMD AM3 motherboard and processor platforms are all its "supporters".
Today, DDR2 and DDR3 are gradually being phased out of the market.
Samsung confirmed the end of DDR2 production in Q4 2021, and Samsung and SK Hynix plan to gradually exit the DDR3 market. At DDR3's 2014 peak (an 84% market share), Samsung and SK Hynix together held a 67% share, so in the short term the two memory giants' withdrawal will leave a significant gap on the supply side.
DDR4
As early as 2007, some information about the DDR4 memory standard was made public.
At the Intel Developer Forum held in San Francisco in August 2008, a speaker from Qimonda provided more public information about DDR4. According to that year's description, DDR4 would use a 30nm process, run at 1.2V, offer a regular bus rate of 2133MT/s with enthusiast-grade speeds reaching 3200MT/s, launch on the market in 2012, and see its operating voltage reduced to 1V by 2013.
However, in January 2011, Samsung Electronics announced that it had completed the manufacturing and testing of DDR4 DRAM modules, using a 30nm process, with a data transfer rate of 2133MT/s and an operating voltage of 1.2V. This was also the first DDR4 memory in history. Prior to this, the successful tape-out of Samsung Electronics' 40nm process DRAM chips became the key to the development of DDR4.
Three months later, SK Hynix announced the launch of 2GB DDR4 memory modules with a speed of 2400MT/s, also operating at 1.2V, and announced that mass production was expected to begin in the second half of 2012. In May 2012, Micron announced that it would use the 30nm process to produce DRAM and flash memory particles in late 2012.
However, DDR4 memory was not actually deployed until 2014. The first platform to support it was Intel's flagship X99. At the end of 2014, DDR4 products with a starting frequency of 2133MHz began to launch one after another, and with the release of Intel's Skylake processors and 100-series motherboards in August 2015, DDR4 finally reached the mainstream, marking the arrival of the DDR4 era.
DDR4 SDRAM
Compared with DDR3, the operating voltage of DDR4 is reduced from 1.5V to 1.2V and 1.05V (DDR4L), which means lower power consumption and less heat generation. In terms of speed, DDR4 starts at 2133MHz and can reach a maximum speed of 4266MHz, which is nearly three times that of DDR3.
There are several reasons. First, in addition to traditional single-ended (SE) signals, DDR4 introduces differential signaling for the clock and data strobes. Second, DDR4 adopts a point-to-point design, which simplifies memory module design and makes high frequencies easier to achieve. In addition, DDR4 uses three-dimensional stacked packaging to increase per-chip capacity, and adopts temperature-compensated self-refresh, temperature-compensated auto-refresh and data bus inversion, all of which help reduce power consumption.
In addition, DDR4 has added functions such as DBI, CRC, and CA parity, making DDR4 memory faster and more power-efficient while also enhancing signal integrity and improving data transmission and storage reliability.
From DDR to DDR3, the prefetch width doubled with each generation: 2-bit, 4-bit and then 8-bit, doubling memory bandwidth each time. DDR4, however, keeps DDR3's 8-bit prefetch, because doubling again to 16-bit was too difficult; instead, DDR4 increases the number of banks. The number of banks per rank rises to 16, and each DIMM can carry up to 8 ranks.
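The prefetch arithmetic above can be sketched as follows. The function and the clock figures are illustrative assumptions, not values from the text; the point is that per-pin data rate equals internal core clock times prefetch width.

```python
# An n-bit prefetch buffer lets the I/O pins emit n bits per internal
# core cycle, so data rate (MT/s) = core clock (MHz) x prefetch width.
# DDR4 keeps DDR3's 8-bit prefetch and raises clocks and bank counts.

PREFETCH_BITS = {"DDR": 2, "DDR2": 4, "DDR3": 8, "DDR4": 8}

def data_rate_mt_s(generation: str, core_clock_mhz: float) -> float:
    return core_clock_mhz * PREFETCH_BITS[generation]

# Same 100 MHz core clock, rising data rate purely from prefetch width:
for gen in ("DDR", "DDR2", "DDR3"):
    print(gen, data_rate_mt_s(gen, 100))    # 200, 400, 800 MT/s
print("DDR4", data_rate_mt_s("DDR4", 200))  # 1600 MT/s via a faster core
```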
DDR5
The development of memory technology and the PC market have always complemented each other.
As the processor competition between Intel and AMD intensified, memory performance again became a bottleneck. As early as 2017, JEDEC, the organization responsible for computer memory standards, announced that it would complete the final DDR5 standard in 2018. Memory manufacturers such as Micron and Samsung began developing 16GB DDR5 products in 2018, and by 2019 several manufacturers had begun gradually mass-producing DDR5 memory. However, it was not until July 2020 that JEDEC officially released the DDR5 standard, with a starting point of 4800MHz, much higher than originally envisioned.
According to JEDEC, the DDR5 standard will provide twice the performance of the previous generation and greatly improve power efficiency. Under the DDR5 memory standard, the maximum memory transfer speed can reach 6.4Gbps. In addition, DDR5 also improves the operating voltage of DIMM, reducing the voltage from 1.2V of DDR4 to 1.1V, which can further improve the energy efficiency of memory.
DDR5 can double the number of system channels (Source: Micron)
In terms of memory density, the DDR5 standard allows a single memory chip to reach 64Gbit, four times the 16Gbit maximum of DDR4. Such high density, combined with multi-chip packaging that stacks up to 40 dies, lets a stacked LRDIMM reach an effective capacity of 2TB.
According to DIGITIMES, Samsung Electronics, SK Hynix and Micron Technology have all expanded their DDR5 chip production to accelerate the industry's transition from DDR4 to DDR5. Sources said that 2022 will be regarded as a warm-up year for DDR5, and the penetration rate of DDR5 will increase significantly in 2023.
It has been more than 20 years since Samsung produced the earliest commercial DDR SDRAM chip in 1998. The DRAM memory market has been developing from DDR to DDR2, DDR3, DDR4, DDR5, and then the DDR6 that is under development.
From the evolution of DDR technology and JEDEC specifications, we can see that in order to keep up with the industry's continuous pursuit of performance, memory capacity and power consumption, the operating voltage of the specifications is getting lower, the chip capacity is getting larger, and the IO rate is getting higher.
From the earliest DDR at 200Mbps per pin to today's 6400Mbps DDR5, the per-pin data rate has roughly doubled with each generation.
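That doubling, and the module bandwidth it implies, can be tabulated with a short sketch (the 64-bit module bus width is the standard assumption; the figures are peak theoretical rates, not measured ones):

```python
# Five doublings take the per-pin rate from DDR-200 to DDR5-6400; peak
# module bandwidth scales with it on the unchanged 64-bit data bus.

def module_peak_gb_s(mt_s: int, bus_bits: int = 64) -> float:
    return mt_s * bus_bits / 8 / 1000   # MT/s x bytes per transfer -> GB/s

rate = 200                              # earliest DDR (DDR-200)
while rate <= 6400:
    print(f"{rate:>5} MT/s -> {module_peak_gb_s(rate):5.1f} GB/s")
    rate *= 2
```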
According to Yole's analysis, the transition between two memory generations takes only about two years. That means DDR5's market share should exceed DDR4's by 2023, and DDR4's share should drop below 5% by 2026. The entire DRAM market is expected to reach $200 billion by 2026.
DRAM Branches and Evolution
According to the application scenario, DRAM is divided into three categories: standard DDR, LPDDR, and GDDR. JEDEC defines and develops these three standards to help designers meet the power, performance, and size requirements of their target applications.
Standard DDR: Targets servers, cloud computing, networking, laptops, desktops, and consumer applications, allowing for wider channel widths, higher densities, and different form factors;
GDDR: Graphics DDR, generally referred to as video memory; the "G" stands for Graphics. As the name implies, GDDR is a type of DDR memory specialized for graphics cards. With the growth of PC gaming after 2000, demand for graphics performance kept rising: games require high-speed data exchange between the GPU and its video memory, and 3D texture mapping in particular demands high memory bandwidth and capacity, so GDDR came into being. GDDR suits high-bandwidth computing fields such as graphics, data centers and AI, and is used in conjunction with GPUs;
LPDDR: Low Power DDR, a type of DDR SDRAM also known as mDDR (Mobile DDR SDRAM), is a JEDEC Solid State Technology Association standard aimed at low-power memory. Known for low power consumption and small size, it provides narrower channel widths and is designed specifically for mobile electronics.
DDR, GDDR, and LPDDR, as the memory of computers, graphics cards, and mobile phones respectively, all have their own areas of cultivation and specialization. Although there are many different types, they all have the same essence and are evolved based on some principles of DDR.
However, judging from the development of different types of memory, their current and future relationship will no longer be a relationship of inherited development but a relationship of parallel development. DDR will continue to steadily follow the performance route, while GDDR will also focus more on optimizing bandwidth and capacity. As for the leader of mobile terminals, LPDDR has seen strong market demand in recent years and great pressure for technological iteration, and is expected to continue to lead. At the same time, the new technologies used in the three types of memory can also feed back to the DDR family, providing reference and technical verification for their respective development.
In addition, for applications that urgently require high bandwidth, such as gaming and high-performance computing, high-bandwidth memory (HBM) has become an excellent solution to bypass the evolution of DRAM's traditional IO enhancement mode.
High Bandwidth Memory (HBM)
Because HBM is packaged directly with the processor, it is no longer limited by chip pin counts, breaking through the IO bandwidth bottleneck; and the physical proximity of the DRAM to the CPU/GPU further improves speed.
In terms of size, HBM also makes it possible to shrink the design of the entire system considerably. At present, HBM2 competes largely with GDDR6. In the long run, however, with 2D manufacturing approaching its ceiling, DRAM as a whole is trending strongly toward 3D.
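The bandwidth contrast between the two approaches can be sketched with commonly quoted figures; the per-pin rates and bus widths below are illustrative assumptions, not values from the text.

```python
# HBM: very wide, relatively slow interface; GDDR: narrow, very fast.
# Bandwidth = bus width x per-pin rate, so one HBM2 stack can outrun a
# GDDR6 chip several times over despite a far lower per-pin speed.

def device_bw_gb_s(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits * gbps_per_pin / 8   # Gbit/s total -> GB/s

print(device_bw_gb_s(1024, 2.0))   # one HBM2 stack:  256.0 GB/s
print(device_bw_gb_s(32, 16.0))    # one GDDR6 chip:   64.0 GB/s
```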
DDR market structure and domestic development
High capital and technology barriers have turned the DRAM supply side into an oligopoly. Memory chip design and manufacturing carry high technical and capital barriers, so the leading firms that entered the memory business early enjoy significant competitive advantages. Meanwhile, as wafer processes advance, chip design and R&D grow ever more difficult and the investment required for wafer fabrication lines keeps climbing, leaving IDM-model memory companies with very high capital expenditure.
After decades of industry cycles and technological change, the memory chip market has formed an oligopoly dominated by leading companies in South Korea and the United States. In the DDR memory chip market, the three giants Samsung, SK Hynix and Micron Technology hold a clear lead: statistics put their combined 2021 market share above 90%. Taiwan's Winbond and Nanya Technology and the mainland's Changxin Memory (CXMT) are technology followers.
Currently, only Samsung, SK Hynix and Micron can mass-produce DDR5/LPDDR5. Hefei-based Changxin Memory, the leading domestic memory company, planned trial production of DDR5 in Q1 2022. Founded in 2016, Changxin is a fast-developing industry follower: in September 2019 it announced mass production of its self-developed 8Gb DDR4 chip, built on a 19nm process, and in 2020 its DDR4 and LPDDR4(X) entered the market, used mainly in domestic PCs and mobile phones, with recognized performance and attractive prices.
It is reported that Changxin Memory will also put DDR5 memory on a 17nm process into production this year, with upgrades to the 10G5 process and DDR6 to follow. This shows that China is accelerating its catch-up in memory chips, and may no longer be constrained by foreign monopolies in this field in the future.
In addition, Innosilicon recently broke through 10Gbps in the LPDDR5X field, mass-producing the world's fastest LPDDR5/5X/DDR5 one-stop IP solution on advanced FinFET processes. Besides the speed increase, latency is reduced by 15%, making it well suited to scenarios such as 5G communications, automotive high-resolution AR/VR, and AI edge computing.
Beyond LPDDR5/5X/DDR5, Innosilicon also officially released the world's first GDDR6X high-speed video memory technology. Its GDDR6/6X Combo IP, with a single DQ reaching an ultra-high 21Gbps, has been mass-produced and shipped on multiple advanced FinFET processes. Innosilicon also took the lead in launching Innolink Chiplet, a self-developed physical-layer IP solution compatible with the UCIe standard; it is the first cross-process, cross-package chiplet interconnect solution, and has been mass-produced and verified on advanced processes.
It is worth mentioning that in May this year, another domestic technology company, Montage Technology, officially announced the successful first trial production of its second-generation RCD (Registering Clock Driver) chip for DDR5 memory. The RCD is a buffer between the memory controller and the DRAM ICs that redistributes the command/address signals within the module, improving signal integrity and allowing more memory devices to be connected to a DRAM channel.
In addition, Montage Technology also released the world's first CXL memory expansion controller chip MXC, which can greatly expand memory capacity and bandwidth. Shortly afterwards, Samsung Electronics released the first 512GB memory expander DRAM module, and the CXL memory expansion controller chip of this memory module used Montage Technology's MXC.
Caixin Securities stated that the gradual implementation of high-traffic application scenarios requires higher server performance, and processor manufacturers have successively launched new platforms, marking that DDR5 has begun to replace DDR4, which will lead to an increase in the unit price of memory interface chips. At the same time, the introduction of supporting chips will also bring incremental space.
Final Thoughts
Since Intel launched the first commercial DRAM (the 1103) in 1970, the industry has seen more than 50 years of ups and downs. Leadership of the DRAM industry has shifted from the United States to Japan, and now to South Korea.
There is no doubt that with the birth of domestic DDR5 memory, market competition will be further intensified.
South Korea has dominated the memory market for the past decade, but no one can predict the future. In six years, Changxin Memory has narrowed the gap with Samsung to only one or two generations, and it may catch up within another five.
After all, all great fires begin with a single spark.
*Disclaimer: This article is originally written by the author. The content of the article is the author's personal opinion. Semiconductor Industry Observer reprints it only to convey a different point of view. It does not mean that Semiconductor Industry Observer agrees or supports this point of view. If you have any objections, please contact Semiconductor Industry Observer.