Are there any opportunities for these new types of storage?
Source: Compiled from All About Circuits by Semiconductor Industry Observer (ID: icbank).
At a high level, computer architecture has several key components: the CPU (consisting of compute cores and interconnects), memory (consisting of cache, main memory, and hard drives), and input and output hardware. While all parts of the architecture are important to overall efficiency, for modern workloads, the primary performance bottleneck is memory.
As a result, there has been significant research in recent years aimed at increasing memory speed while preserving power efficiency as much as possible.
In this article, we discuss three recent memory announcements from industry and academia to assess the progress in the field. We'll start with a brief review of memory technology to highlight the importance of new developments from SureCore, Lancaster University, Western Digital and Kioxia.
A brief overview of memory technology
Memories can be broadly divided into two categories: non-volatile and volatile. Non-volatile memory retains its contents even when power is removed; the data persists until it is intentionally erased or overwritten. Two examples of non-volatile memory are flash memory and hard disk drives (HDDs). In contrast, volatile memory loses its contents when power is removed. Two common types of volatile memory are SRAM and DRAM.
While both SRAM and DRAM are volatile, DRAM is generally slower because of the way it is implemented. In DRAM, each bit is stored as charge on a capacitor. Because the capacitor leaks charge unless a potential difference is continuously applied across it, DRAM must be refreshed periodically to prevent data loss. This periodic refresh adds latency and slows the memory down.
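As a rough illustration of why refresh matters, the toy Python sketch below models a cell capacitor whose charge leaks away over time and can only be read reliably if it is refreshed before the charge drops below a sense threshold. All numbers are made-up assumptions for illustration, not real DRAM parameters.

```python
import math

# Toy model of a single DRAM cell (all values are illustrative assumptions,
# not real device parameters).
LEAK_TIME_CONSTANT_MS = 50.0   # assumed exponential leakage time constant
SENSE_THRESHOLD = 0.5          # fraction of full charge needed to read a '1'

def charge_after(ms_since_refresh: float) -> float:
    """Remaining fraction of the stored charge after a given time."""
    return math.exp(-ms_since_refresh / LEAK_TIME_CONSTANT_MS)

def readable(ms_since_refresh: float) -> bool:
    """A stored '1' can still be sensed only if enough charge remains."""
    return charge_after(ms_since_refresh) >= SENSE_THRESHOLD

# With the commonly quoted 64 ms refresh interval, this toy cell would
# already have leaked below the threshold, so it must be refreshed sooner.
for t in (8, 16, 32, 64):
    print(f"{t:3d} ms after refresh: charge={charge_after(t):.2f}, readable={readable(t)}")
```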
SRAM, on the other hand, does not use capacitors to store data. Instead, each bit is held in a small circuit of cross-coupled transistors called an SRAM cell. The cells are written and read through a grid of bit lines and word lines.
An SRAM cell can contain six MOSFETs. Bit lines and word lines are used to write values into cells.
To write to a typical SRAM cell, the associated bit line is driven low or high depending on the desired value, and then the word line is driven high. To read from a typical SRAM cell, both bit lines are first driven high, and then the word line is driven high. This is how SRAM distinguishes read operations from write operations.
Bit lines and word lines in SRAM control access to individual bits
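To make the role of the word line and bit lines concrete, here is a minimal, purely behavioral Python sketch of a single 6T-style cell. The class and method names are our own for illustration, and the model deliberately ignores analog details such as precharge timing and sense amplification.

```python
class SramCell:
    """Behavioral model of one 6T SRAM cell: a stored bit that is only
    accessible while the word line is asserted. Analog effects (precharge,
    sense amplification, contention) are deliberately ignored."""

    def __init__(self) -> None:
        self.stored = 0  # the cross-coupled inverter pair, reduced to one bit

    def write(self, word_line: bool, bit_line: int) -> None:
        # Write: drive the bit line to the desired value (its complement goes
        # on the other bit line, omitted here), then assert the word line.
        if word_line:
            self.stored = bit_line & 1

    def read(self, word_line: bool) -> int | None:
        # Read: with both bit lines precharged high, asserting the word line
        # lets the cell pull one of them low; a sense amplifier resolves the
        # stored value. Here we simply return the bit.
        return self.stored if word_line else None


cell = SramCell()
cell.write(word_line=True, bit_line=1)   # word line high -> value latched
print(cell.read(word_line=True))         # -> 1
print(cell.read(word_line=False))        # -> None (cell not selected)
```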
For embedded and mobile computing applications such as smartphones, designers want SRAM that is as energy-efficient as possible in order to extend the device's battery life. Power dissipation can be active (dynamic) or passive (static, caused by leakage current). In SRAM, active power is dissipated as charge moves in and out of the parasitic capacitances of the memory circuit.
SureCore incorporates energy-saving technology into SRAM
A British company called SureCore, which specializes in ultra-low-power embedded IP, has developed a patented technology called the Cascode Precharge Sense Amplifier (CPSA). The company claims this technology can significantly reduce both active and passive power consumption in SRAM.
CPSA works by controlling the bit-line voltage swing in the SRAM, which can otherwise vary significantly with the manufacturing process, thereby reducing power consumption. Although the voltage swing on an individual bit line is typically small, an SRAM contains so many bit lines that the total bit-line swing accounts for the majority of the active power consumption.
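A back-of-the-envelope calculation shows why shrinking the bit-line swing matters: the dynamic energy per bit-line transition scales with the bit-line capacitance and the voltage swing, so reducing the swing reduces that component of the active power roughly in proportion. All parameter values below are assumptions for illustration only, not SureCore's figures.

```python
# Back-of-the-envelope estimate of bit-line dynamic power in an SRAM array.
# Every parameter value below is an assumption for illustration only.
C_BITLINE = 100e-15      # assumed bit-line capacitance per column: 100 fF
V_DD = 0.9               # assumed supply voltage: 0.9 V
N_COLUMNS = 256          # assumed number of bit-line pairs in the array
F_ACCESS = 500e6         # assumed access rate: 500 MHz
ACTIVITY = 1.0           # assume every access toggles each bit line once

def bitline_power(v_swing: float) -> float:
    """Dynamic power of the bit lines for a given voltage swing.

    Energy drawn from the supply per transition is roughly
    C * V_swing * V_DD for a reduced-swing bit line."""
    energy_per_access = C_BITLINE * v_swing * V_DD * N_COLUMNS
    return energy_per_access * F_ACCESS * ACTIVITY

full_swing = bitline_power(V_DD)        # conventional full-rail swing
reduced    = bitline_power(0.2 * V_DD)  # hypothetical 20% swing
print(f"full swing:    {full_swing * 1e3:.2f} mW")
print(f"reduced swing: {reduced * 1e3:.2f} mW ({reduced / full_swing:.0%} of full)")
```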
Lancaster researchers create 'ULTRARAM'
Recent research into improving memory has also focused on non-volatile storage. Researchers at Lancaster University have announced the formation of a spin-out company to create ULTRARAM, a memory technology that combines the non-volatility of flash with the performance and power advantages of DRAM.
ULTRARAM exploits quantum resonant tunneling in the semiconductor materials that make up the RAM. The compound semiconductors used in ULTRARAM belong to the 6.1 Å family (GaSb, InAs, and AlSb), which is particularly well suited to high-speed designs. While the physics behind resonant tunneling is complex, it essentially allows the researchers to create a non-volatile memory that consumes less power and offers faster speeds than DRAM.
In ULTRARAM, each logic state is stored in a floating gate. Due to the characteristics of the 6.1 Å semiconductors, the barrier that isolates the floating gate can be switched from a highly resistive state to a highly conductive state at a low voltage. These features make the memory fast, energy-efficient, and non-volatile.
Kioxia and Western Digital team up to develop 3D Flash
Cell density is another very important property of non-volatile memory: packing more cells into the same area increases storage capacity. In March, Western Digital and Kioxia announced details of a new 3D flash memory technology in which the wafers are manufactured separately and then bonded together to maximize bit density. The memory is scaled both vertically and horizontally to increase bit density and, with it, the capacity of the flash memory.
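To get a rough sense of how vertical and lateral scaling combine, the toy calculation below multiplies the number of stacked layers, the lateral cell density, and the bits stored per cell to estimate bit density. All numbers are hypothetical, not Kioxia or Western Digital figures.

```python
# Toy estimate of 3D NAND bit density from vertical and lateral scaling.
# Every number here is a hypothetical assumption, not a vendor figure.
LAYERS = 200                    # assumed number of stacked word-line layers
CELLS_PER_MM2_PER_LAYER = 8e6   # assumed lateral cell density per layer
BITS_PER_CELL = 4               # e.g. a QLC cell stores 4 bits

bits_per_mm2 = LAYERS * CELLS_PER_MM2_PER_LAYER * BITS_PER_CELL
print(f"approximate density: {bits_per_mm2 / 1e9:.1f} Gbit/mm^2")

# Doubling the layer count or the lateral density (or storing more bits per
# cell) scales bit density linearly, which is why the technology pushes on
# both the vertical and the horizontal dimension at once.
```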
Innovating modern computers, starting with memory
Modern computers are complex machines. Current designs are large and intricate, with multiple cores, multiple levels of cache, main memory, and elaborate multithreading and speculative-execution mechanisms. However, the basic goal of computer architects has remained essentially the same for decades: to build a machine with sufficient computing power that is still energy-efficient and affordable. The researchers and companies above hope to strike this balance by innovating in different memory technologies, whether ultra-low-power SRAM, ULTRARAM, or 3D flash memory.