DRAM (Dynamic Random Access Memory) is particularly attractive to designers because it offers a wide range of performance points and is used in the memory systems of many computers and embedded systems. This article reviews DRAM fundamentals and introduces SDRAM, DDR SDRAM, DDR2 SDRAM, DDR3 SDRAM, DDR4 SDRAM, LPDDR, and GDDR.
DRAM
One advantage of DRAM over other memory types is that it requires fewer devices per memory cell on an IC (integrated circuit). DRAM memory cells store data as charge on capacitors. A typical DRAM cell is built from one capacitor and a single FET (field effect transistor); early designs used three FETs per cell. A typical SRAM (static random access memory) cell, by contrast, uses six FETs, which reduces the number of memory cells that fit on an IC of the same size. Compared to DRAM, SRAM is easier to use, has a simpler interface, and has faster data access times.
The DRAM core structure consists of multiple memory cells organized into a two-dimensional array of rows and columns (see Figure 1). Accessing a memory cell requires two steps: first the address of a row is presented, then the address of a specific column within the selected row. In other words, the entire row is read inside the DRAM IC, and the column address then selects which column of that row the DRAM IC's I/O (input/output) pins read or write.
DRAM reads are destructive, meaning that the data in the row of memory cells is destroyed during a read operation. The row data must therefore be written back to the same row at the end of a read or write operation on that row. This write-back is completed by the precharge operation, which is the last operation performed on the row. A precharge must complete before a new row can be accessed, and it is referred to as closing the open row.
An analysis of computer memory accesses shows that the most common type of memory access is reading sequential memory addresses. This makes sense because reading computer instructions is generally more common than reading or writing data. In addition, most instruction reads occur sequentially in memory until an instruction branch or jump to a subroutine occurs.
Figure 1. DRAM memory cells are organized into a two-dimensional array of rows and columns.
A row of DRAM is called a memory page, and once a row is opened, multiple sequential or random column addresses within that row can be accessed. This increases memory access speed and reduces memory latency because the row address does not have to be resent to the DRAM when accessing memory cells in the same memory page. Accordingly, the row address is taken from the computer's high-order address bits and the column address from the low-order address bits. Because the row address and column address are needed at different times, they are multiplexed onto the same DRAM pins, reducing package pin count, cost, and size. The row address is generally wider than the column address because the power consumed in activating a row is related to the number of columns in it.
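As a rough illustration of why multiplexing pays off, consider the following C sketch for a hypothetical 16M x 1 DRAM organized as 4096 rows by 4096 columns; the organization is assumed for illustration, not taken from any specific part.

#include <stdio.h>

/* Hypothetical 16M x 1 DRAM organized as 4096 rows x 4096 columns
   (organization assumed for illustration). Multiplexing the 12-bit row
   and 12-bit column addresses onto the same pins halves the number of
   address pins on the package. */
int main(void)
{
    unsigned rows = 4096, cols = 4096;
    unsigned row_bits = 12, col_bits = 12;

    printf("cells: %u\n", rows * cols);                  /* 16777216 */
    printf("address pins, multiplexed: %u\n",
           row_bits > col_bits ? row_bits : col_bits);   /* 12 */
    printf("address pins, not multiplexed: %u\n",
           row_bits + col_bits);                         /* 24 */
    return 0;
}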
Early DRAMs had control signals such as RAS# (row address strobe, active low) and CAS# (column address strobe, active low) to sequence the row and column addressing operations. Other DRAM control signals include WE# (write enable, active low) to select write or read operations, CS# (chip select, active low) to select the DRAM, and OE# (output enable, active low). These early DRAMs had asynchronous control signals, with numerous timing specifications covering their sequence and time relationships to define the DRAM operating modes.
An early DRAM read cycle had four steps. First, RAS# goes low with the row address on the address bus. Second, CAS# goes low with the column address on the address bus. Third, OE# goes low and the read data appears on the DQ data pins; the time from the first step until data is available on the DQ pins is called latency. Finally, RAS#, CAS#, and OE# go high (inactive) while the internal precharge operation restores the row data after the destructive read. The time from the beginning of the first step to the end of the last step is the memory cycle time. These signals were sequenced asynchronously; early DRAMs had no synchronizing clock.
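The four-step read cycle can be sketched as bit-banged GPIO operations. This is a minimal sketch: the helper functions, pin assignments, and timing values are hypothetical placeholders, while real parts specify tRCD, access time, and precharge time in their datasheets.

#include <stdio.h>

#define LOW  0
#define HIGH 1

/* Stubbed platform helpers so the sketch runs standalone; a real port
   would drive actual GPIO registers. */
static void gpio_write(int pin, int level) { printf("pin %d -> %d\n", pin, level); }
static void bus_set_addr(unsigned a)       { printf("addr <- 0x%X\n", a); }
static unsigned bus_read_dq(void)          { return 0xA5; /* dummy data */ }
static void delay_ns(unsigned ns)          { (void)ns;    /* placeholder */ }

enum { PIN_RAS, PIN_CAS, PIN_OE };   /* hypothetical pin assignments */

static unsigned dram_read(unsigned row, unsigned col)
{
    unsigned data;

    bus_set_addr(row);               /* step 1: row address, RAS# low    */
    gpio_write(PIN_RAS, LOW);
    delay_ns(20);                    /* placeholder for tRCD             */

    bus_set_addr(col);               /* step 2: column address, CAS# low */
    gpio_write(PIN_CAS, LOW);

    gpio_write(PIN_OE, LOW);         /* step 3: data appears on DQ pins  */
    delay_ns(20);                    /* placeholder for access time      */
    data = bus_read_dq();

    gpio_write(PIN_OE, HIGH);        /* step 4: all inactive; internal   */
    gpio_write(PIN_CAS, HIGH);       /* precharge restores the row data  */
    gpio_write(PIN_RAS, HIGH);
    delay_ns(40);                    /* placeholder for precharge time   */
    return data;
}

int main(void)
{
    printf("read: 0x%X\n", dram_read(0x1F3, 0x2A));
    return 0;
}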
DRAM memory cells must be refreshed to avoid losing their data: the capacitors must be recharged before they lose their charge. Refreshing the memory is the responsibility of the memory controller, and the refresh time specification varies among DRAM devices. The memory controller performs a RAS#-only cycle on a row address to refresh that row; at the end of the RAS#-only cycle, a precharge operation restores the row data addressed in the cycle. Generally, the memory controller has a row counter that sequentially generates all the row addresses needed for RAS#-only refresh cycles.
There are two refresh strategies (see Figure 2). In the first strategy, the memory controller sequentially refreshes all rows in a burst of refresh cycles and then returns memory control to the processor for normal operation. The next burst of refresh operations occurs before the maximum refresh time is reached. The second refresh strategy is for the memory controller to interleave refresh cycles with normal processor memory operations. This refresh method spreads out the refresh cycles within the maximum refresh time.
Figure 2. DRAM refresh implementations include distributed refresh and burst refresh.
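The arithmetic behind the two strategies is straightforward. The sketch below assumes a 64 ms maximum refresh period and 8192 rows, which are typical figures for many DRAM generations; the datasheet for a specific part is the authority.

#include <stdio.h>

/* Refresh scheduling arithmetic for the two strategies in Figure 2,
   using an assumed 64 ms maximum refresh period and 8192 rows. */
int main(void)
{
    double t_refresh_ms = 64.0;   /* assumed maximum refresh period   */
    unsigned rows = 8192;         /* assumed number of rows to refresh */

    /* Distributed: one row refresh every t_refresh / rows. */
    printf("distributed: one refresh every %.4f us\n",
           t_refresh_ms * 1000.0 / rows);     /* ~7.8 us */

    /* Burst: all rows refreshed back-to-back, then the remainder of the
       period is available for normal accesses until the next burst. */
    printf("burst: %u back-to-back cycles once every %.0f ms\n",
           rows, t_refresh_ms);
    return 0;
}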
As DRAM evolved, refresh counters were implemented on the DRAM IC itself to generate the sequential row addresses. Inside the DRAM IC, the refresh counter feeds one input of a multiplexer that drives the memory-array row address; the other multiplexer input is the row address from the external address pins. The internal refresh counter eliminates the need for an external refresh-counter circuit in the memory controller. Some DRAMs support a CAS#-before-RAS# (CBR) cycle to initiate a refresh cycle using the internally generated row address.
SDRAM
The asynchronous operation of DRAM presents many design challenges when interfacing to synchronous processors.
SDRAM (Synchronous DRAM) synchronizes DRAM operation with the rest of the computer system and eliminates the need to define every memory operating mode through the sequence and timing of CE# (chip enable, active low), RAS#, CAS#, and WE# edge transitions.
SDRAM adds a clock signal and the concept of memory commands. The type of memory command is determined by the states of the CE#, RAS#, CAS#, and WE# signals on the rising edge of the SDRAM clock. Product datasheets describe the memory commands in a table based on these signal states.
For example, the Activate command sends a row address to the SDRAM, opening a row (page) of memory. This is followed by a sequence of Deselect commands to satisfy timing requirements before a Read or Write command is sent with the column address. Once the row (page) of memory is opened using the Activate command, multiple Read and Write commands can be run on that row (page) of memory. A Precharge command is required to close the row before another row can be opened.
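The command decode can be expressed directly in code. The sketch below follows the widely published JEDEC SDRAM command truth table (the CE# in this article corresponds to the chip-select input, here ce_n); a specific part's datasheet remains the authority.

#include <stdio.h>

/* Decode SDRAM commands from the CE#, RAS#, CAS#, and WE# levels sampled
   on the rising clock edge. 1 = high, 0 = low. */
typedef enum {
    CMD_DESELECT, CMD_NOP, CMD_ACTIVATE, CMD_READ, CMD_WRITE,
    CMD_PRECHARGE, CMD_REFRESH, CMD_BURST_TERM, CMD_LOAD_MODE
} sdram_cmd;

static sdram_cmd decode(int ce_n, int ras_n, int cas_n, int we_n)
{
    if (ce_n) return CMD_DESELECT;                  /* chip not selected */
    if ( ras_n &&  cas_n &&  we_n) return CMD_NOP;
    if (!ras_n &&  cas_n &&  we_n) return CMD_ACTIVATE;  /* open a row   */
    if ( ras_n && !cas_n &&  we_n) return CMD_READ;
    if ( ras_n && !cas_n && !we_n) return CMD_WRITE;
    if (!ras_n &&  cas_n && !we_n) return CMD_PRECHARGE; /* close row    */
    if (!ras_n && !cas_n &&  we_n) return CMD_REFRESH;
    if ( ras_n &&  cas_n && !we_n) return CMD_BURST_TERM;
    return CMD_LOAD_MODE;                           /* all four low      */
}

int main(void)
{
    printf("%d\n", decode(0, 0, 1, 1) == CMD_ACTIVATE);  /* prints 1 */
    return 0;
}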
Table 1. DDR SDRAM data rates and clock speeds.
DDR SDRAM
DDR (Double Data Rate) SDRAM improves memory data rate performance by increasing the clock rate, bursting data, and transferring two data bits per clock cycle (see Table 1). DDR SDRAM bursts multiple memory locations in one Read command or one Write command. A memory read requires an Activate command followed by a Read command. The memory responds after a delay with a burst of two, four, or eight memory locations, at a data rate of two memory locations per clock cycle. Four memory locations are thus read or written in two consecutive clock cycles.
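The peak transfer rate follows directly from the clock rate and bus width, as the arithmetic below shows; the figures are illustrative, not a vendor specification.

#include <stdio.h>

/* Peak transfer rate for DDR SDRAM: two data transfers per clock cycle,
   times the data bus width. */
int main(void)
{
    double clock_mhz = 133.0;   /* e.g. DDR-266 uses a 133 MHz clock */
    unsigned bus_bits = 64;     /* a typical DIMM data bus           */

    double mtps = clock_mhz * 2.0;            /* megatransfers/s */
    double mbps = mtps * (bus_bits / 8.0);    /* megabytes/s     */
    printf("%.0f MT/s, %.0f MB/s peak\n", mtps, mbps);
    return 0;
}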
DDR SDRAM has multiple banks, providing multiple interleaved memory accesses, thereby increasing memory bandwidth. One bank is one memory array, two banks are two memory arrays, four banks are four memory arrays, and so on (see Figure 3). Four banks require two bits for the bank address (BA0 and BA1).
Figure 3. Multiple banks in DDR SDRAM increase access flexibility and improve performance.
For example, a DDR SDRAM with four banks works as follows. First, an Activate command opens a row in the first bank. A second Activate command then opens a row in the second bank. Now any combination of Read or Write commands can be sent to the open rows in the first and second banks. When the Read and Write operations to a bank are complete, a Precharge command closes the row, leaving the bank ready for another Activate command to open a new row.
Note that the power consumed by DDR SDRAM is related to the number of banks with open rows: the more rows that are open, and the larger each row, the higher the power. For low-power applications, therefore, keep only one row open at a time rather than leaving rows open in multiple banks simultaneously.
Interleaving of consecutive memory words in consecutive memory banks is supported when the bank address bits are connected to the lower order address bits in the memory system. When the bank address bits are connected to the higher order address bits in the memory system, consecutive memory words are located in the same memory bank.
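One possible decomposition of a flat memory-system address into row, bank, and column fields is sketched below, assuming 4 banks, 8192 rows, and 1024 columns (an assumed organization). Placing the two bank bits just above the column bits, as here, interleaves consecutive addresses across banks; placing them above the row bits instead keeps consecutive words in one bank.

#include <stdio.h>

#define COL_BITS  10   /* 1024 columns (assumed) */
#define BANK_BITS 2    /* 4 banks                */
#define ROW_BITS  13   /* 8192 rows (assumed)    */

int main(void)
{
    unsigned addr = 0x123456;  /* arbitrary example address */

    unsigned col  =  addr                            & ((1u << COL_BITS)  - 1);
    unsigned bank = (addr >> COL_BITS)               & ((1u << BANK_BITS) - 1);
    unsigned row  = (addr >> (COL_BITS + BANK_BITS)) & ((1u << ROW_BITS)  - 1);

    printf("row %u, bank %u, column %u\n", row, bank, col);
    return 0;
}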
DDR2 SDRAM
DDR2 SDRAM has several improvements over DDR SDRAM. DDR2 SDRAM has a higher clock rate, which increases the memory data rate (see Table 2). As the clock rate increases, signal integrity becomes increasingly important to reliable memory operation. At higher clock rates, the signal traces on the board behave as transmission lines, and proper layout and termination at the ends of the signal lines become more important.
Termination of address, clock, and command signals is relatively straightforward because these signals are unidirectional and terminated on the board. Data signals and data strobes are bidirectional: the memory controller hub drives these signals during write operations and the DDR2 SDRAM drives them during read operations. Multiple DDR2 SDRAMs are connected to the same data signals and data strobes, further increasing the complexity. Multiple DDR2 SDRAMs can be located on the same DIMM or on different DIMMs in the memory system. As a result, the data and data-strobe drivers and receivers change constantly depending on the read/write operation and on which DDR2 SDRAM is being accessed.
Table 2. DDR2 SDRAM data rates and clock speeds.
DDR2 SDRAM improves signal integrity by providing ODT (on-die termination). An ODT signal enables the on-die termination, and the termination value (75 ohms, 150 ohms, etc.) is programmed via the DDR2 SDRAM extended mode register.
The value and operation of the on-die termination are controlled by the memory controller hub and depend on the location of the DDR2 SDRAM DIMM and the type of memory operation (read or write). ODT operation improves signal integrity by creating a larger eye diagram for the data-valid window, increasing voltage margin, increasing slew rate, reducing overshoot, and reducing ISI (inter-symbol interference).
DDR2 SDRAM operates at 1.8 V, which is 72% of the 2.5 V used by DDR SDRAM, reducing memory-system power. In some implementations, the number of columns in a row has also been reduced, cutting the power consumed when a row is activated for reading or writing.
Another benefit of the lower operating voltage is a smaller logic voltage swing. For the same slew rate, the reduced swing shortens logic transition times, supporting faster clock rates. In addition, the data strobes can be programmed as differential signals. Using differential data strobes reduces noise, crosstalk, dynamic power consumption, and EMI (electromagnetic interference), improving noise margin. Differential or single-ended data strobe operation is configured via the DDR2 SDRAM extended mode register.
A new feature introduced in DDR2 SDRAM is additive latency, which gives the memory controller hub the flexibility to send Read and Write commands sooner after the Activate command. This optimizes memory throughput and is configured by programming the additive latency in the DDR2 SDRAM extended mode register. DDR2 SDRAM uses eight memory banks in its 1 Gb and 2 Gb densities to improve data bandwidth; the eight banks increase the flexibility of accessing these large memories by interleaving operations to different banks. In addition, DDR2 SDRAM supports a burst length of up to eight words.
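The posted-CAS timing arithmetic defined for DDR2 is simple: read latency is the programmed additive latency (AL) plus the CAS latency (CL), and write latency is read latency minus one clock. The values below are just an example setting, not a recommendation for any particular part.

#include <stdio.h>

/* DDR2 posted-CAS latency arithmetic: RL = AL + CL, WL = RL - 1. */
int main(void)
{
    unsigned cl = 4;   /* CAS latency, from the mode register               */
    unsigned al = 2;   /* additive latency, from the extended mode register */

    unsigned rl = al + cl;   /* read latency in clocks  */
    unsigned wl = rl - 1;    /* DDR2 write latency rule */

    printf("RL = %u clocks, WL = %u clocks\n", rl, wl);  /* RL = 6, WL = 5 */
    return 0;
}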
DDR3 SDRAM
DDR3 SDRAM is a performance evolution of SDRAM technology, starting at 800 Mb/s, the highest data rate supported by most DDR2 SDRAMs. DDR3 SDRAM supports six data rates and clock speeds (see Table 3). DDR3-800/1066/1333 SDRAMs were introduced in 2007, DDR3-1600/1866 SDRAMs are expected in 2008, and DDR3-2133 SDRAMs in 2009.
DDR3-1066 SDRAM consumes less power than DDR2-800 SDRAM because the operating voltage of DDR3 SDRAM is 1.5 V, which is 83% of the 1.8 V used by DDR2 SDRAM. In addition, the impedance of the DDR3 SDRAM data DQ driver is 34 ohms, while the DDR2 SDRAM driver impedance is lower at 18 ohms.
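The voltage ratio above is first-order arithmetic; since CMOS dynamic power scales roughly with the square of the supply voltage, the switching-power ratio is lower still. The sketch below shows the calculation, not a measured figure.

#include <stdio.h>

/* Supply-voltage scaling from DDR2 (1.8 V) to DDR3 (1.5 V). */
int main(void)
{
    double v_ddr2 = 1.8, v_ddr3 = 1.5;

    printf("voltage ratio: %.0f%%\n",
           100.0 * v_ddr3 / v_ddr2);                       /* 83%  */
    printf("dynamic power ratio (~V^2): %.0f%%\n",
           100.0 * (v_ddr3 * v_ddr3) / (v_ddr2 * v_ddr2)); /* ~69% */
    return 0;
}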
Table 3. Estimated DDR3 SDRAM data rates and clock speeds.
DDR3 SDRAM will start with 512 Mb memory and will grow to 8 Gb memory in the future. Like DDR2 SDRAM, DDR3 SDRAM data output configurations include x4, x8, and x16. DDR3 SDRAM has 8 memory banks, while DDR2 SDRAM has 4 or 8 memory banks, depending on the memory size.
Both DDR2 and DDR3 SDRAM have four mode registers. DDR2 defines the first two, with the other two reserved for future use; DDR3 uses all four. An important difference is in write latency: the DDR2 mode register specifies the CAS latency for read operations, and the write latency is the read latency minus one. In DDR3, the CAS read latency and write latency are programmed independently in the mode registers.
DDR3 SDRAM uses an 8n prefetch architecture to transfer 8 data words in 4 clock cycles. DDR2 SDRAM uses a 4n prefetch architecture to transfer 4 data words in 2 clock cycles.
The DDR3 SDRAM mode register can be programmed to support on-the-fly burst chop, which shortens a transfer from 8 data words to 4 by driving address line A12 low during a Read or Write command. On-the-fly burst chop is similar in concept to the read and write auto-precharge feature on address line A10 in DDR2 and DDR3 SDRAM.
Another DDR3 SDRAM attribute worth mentioning is that the data strobe signal DQS is differential, whereas the DDR2 SDRAM data strobe can be programmed as single-ended or differential via the mode register. DDR3 SDRAM also adds a new pin: an active-low asynchronous RESET# that improves system stability by putting the SDRAM into a known state regardless of its current state. DDR3 SDRAM uses the same FBGA package type as DDR2 SDRAM.
DDR3 DIMMs terminate the command, clock, and address signals on the DIMM itself, whereas memory systems using DDR2 DIMMs terminate these signals on the motherboard. On-DIMM termination supports a fly-by topology, in which each command, clock, and address pin of every SDRAM connects along a single trace that is terminated at its end on the DIMM. This improves signal integrity and supports faster operation than the tree topology of DDR2 DIMMs.
The fly-by topology introduces a new DDR3 SDRAM write-leveling feature, which lets the memory controller account for the timing skew between the clock CK and the data strobe DQS during writes. DDR3 DIMMs are also keyed differently from DDR2 DIMMs to prevent the wrong DIMM type from being inserted into a motherboard.
DDR4 SDRAM
DDR4 SDRAM is on the horizon, with release expected in 2012. The goal is to run these new memory chips from a power supply of 1.2 V or less while achieving data transfer rates of more than 2 gigatransfers per second.
GDDR and LPDDR
Other DDR variants, such as GDDR (Graphics DDR) and LPDDR (Low Power DDR), are also gaining ground in the industry.
GDDR is a memory technology designed specifically for graphics cards, with four variants currently specified: GDDR2, GDDR3, GDDR4, and GDDR5. GDDR technology is very similar to mainstream DDR SDRAM but has different power requirements: power is reduced to simplify cooling and allow higher-performance memory modules. GDDR is also designed to better handle the demands of graphics processing.
LPDDR uses a 166 MHz clock rate and is becoming increasingly popular in portable consumer electronics where low power consumption is required. LPDDR2 improves energy efficiency, operating at voltages as low as 1.2V and clock speeds from 100 to 533 MHz.