Address mapping between main memory and cache

Compared with main memory, the cache capacity is very small; the information it holds is only a subset of the information in main memory, and information is exchanged between the cache and main memory in blocks. The block size in main memory is equal to the block size in the cache. Before information can be placed in the cache, the address correspondence between main memory and the cache must be specified in advance. The address mapping mode defines which main memory blocks a given cache block may hold a copy (image) of. Once the mapping mode is chosen, it determines how a main memory address is interpreted when the cache is accessed, and therefore determines the organization of the cache. There are three address mapping modes: direct mapping, fully associative mapping, and set associative mapping.
Main memory address = main memory block number + address within the block.
If the main memory is divided into 2^n blocks, then the main memory block number is n bits.
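As a small illustration (a sketch with arbitrary example widths, not values taken from this article), splitting a main memory address into its block number and in-block address is just shifting and masking:

```python
# Minimal sketch: split a main memory address into (block number, in-block address).
# Example widths (illustrative only): blocks of 2^10 words each.
BLOCK_OFFSET_BITS = 10                  # width of the in-block address
BLOCK_SIZE = 1 << BLOCK_OFFSET_BITS     # words per block

def split_address(addr: int) -> tuple[int, int]:
    """Main memory address = block number + address within the block."""
    block_number = addr >> BLOCK_OFFSET_BITS
    offset = addr & (BLOCK_SIZE - 1)
    return block_number, offset

print(split_address(0x12345678))        # -> (0x48D15, 0x278)
```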
1. Direct mapping: only one tag comparison is performed. The main memory address is interpreted as three fields: a tag (area code), a block number (the corresponding cache block number), and an address within the block. In other words, the main memory block number is split into a tag and a cache block number, whose width is determined by the number of cache blocks.
With direct mapping, a given cache block can only hold copies of a fixed set of main memory blocks, while a given main memory block corresponds to exactly one cache block.
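A minimal sketch of a direct-mapped lookup (the block and cache sizes are illustrative assumptions, not values from the article); note that only one stored tag is compared:

```python
OFFSET_BITS = 7        # 128 words per block (illustrative)
CACHE_BLOCK_BITS = 6   # 64 cache blocks (illustrative)

cache_tags  = [None] * (1 << CACHE_BLOCK_BITS)    # one tag (area code) stored per cache block
cache_valid = [False] * (1 << CACHE_BLOCK_BITS)

def direct_mapped_lookup(addr: int) -> bool:
    """Return True on a hit; only ONE tag comparison is needed."""
    index = (addr >> OFFSET_BITS) & ((1 << CACHE_BLOCK_BITS) - 1)  # cache block number field
    tag   = addr >> (OFFSET_BITS + CACHE_BLOCK_BITS)               # tag (area code) field
    return cache_valid[index] and cache_tags[index] == tag
```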
2. Fully associative mapping: the tag is compared with all blocks in the cache. The main memory address is interpreted as two fields: the tag and the address within the block; the tag is simply the main memory block number.
With fully associative mapping, any cache block can hold a copy of any main memory block, and a main memory block can be placed in any position in the cache. Because of this, the tag stored with each cache block must record the complete main memory block number. For example, if the main memory is divided into 2^n blocks, the block number is n bits and the tag must also be n bits.
To determine whether there is a hit, the tag field of the main memory address must be compared with the tags of all cache blocks. To shorten the comparison time, this comparison is carried out against all cache tags simultaneously. On a hit, the matching cache block (the one whose tag equals the tag in the main memory address) is accessed using the in-block address; on a miss, main memory is accessed instead.
The advantages of fully associative mapping are flexibility and high cache utilization. It has two disadvantages: first, the tag is wider (it must record the complete main memory block number), which enlarges the cache circuitry and raises its cost; second, the comparator is difficult to design and implement (a "content-addressable" associative memory is usually used). For these reasons, only small-capacity caches use this mapping method.
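The lookup under fully associative mapping might be sketched as follows (the cache and block sizes are illustrative assumptions; hardware performs the loop below as one parallel comparison in associative memory):

```python
OFFSET_BITS      = 7    # 128 words per block (illustrative)
NUM_CACHE_BLOCKS = 4    # illustrative; matches the 4-block cache in the example below

cache_tags  = [None]  * NUM_CACHE_BLOCKS   # each tag holds the FULL main memory block number
cache_valid = [False] * NUM_CACHE_BLOCKS

def fully_associative_lookup(addr):
    """Return the cache block number on a hit, or None on a miss."""
    tag = addr >> OFFSET_BITS              # tag = main memory block number
    # Hardware compares against every stored tag simultaneously; here it is a loop.
    for i in range(NUM_CACHE_BLOCKS):
        if cache_valid[i] and cache_tags[i] == tag:
            return i
    return None
```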
3. Set associative mapping: the tag is compared with all blocks in one cache group. The main memory address is interpreted as three fields: tag, group number, and address within the block. The main memory block number is split into the tag and the group number, with the group number occupying the low-order bits of the block number; its width is determined by the number of cache groups.
Set associative mapping is a compromise between direct mapping and fully associative mapping. Suppose the cache has m blocks. Under set associative mapping, the m cache blocks are divided into u groups (sets) of k blocks each (i.e. m = u × k), with direct mapping between groups and fully associative mapping within a group (as shown in Figure 3.42). Direct mapping between groups means that the cache blocks of a given group can only hold copies of a fixed set of main memory blocks.
Fully associative mapping within a group means that a main memory block assigned to a cache group can be placed in any cache block of that group.
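A sketch of a set associative lookup following the scheme just described (the group count and associativity are illustrative assumptions, not values from the article):

```python
OFFSET_BITS = 7     # 128 words per block (illustrative)
NUM_SETS    = 16    # u groups (illustrative)
WAYS        = 4     # k blocks per group, so m = u * k cache blocks in total

tags  = [[None]  * WAYS for _ in range(NUM_SETS)]   # tag stored in each block of each group
valid = [[False] * WAYS for _ in range(NUM_SETS)]

def set_associative_lookup(addr):
    """Direct mapping between groups, fully associative mapping within a group."""
    block_number = addr >> OFFSET_BITS
    set_index    = block_number % NUM_SETS    # group number = low-order bits of the block number
    tag          = block_number // NUM_SETS   # remaining high-order bits form the tag
    for way in range(WAYS):                   # compare with all k tags of the selected group
        if valid[set_index][way] and tags[set_index][way] == tag:
            return set_index, way             # hit
    return None                               # miss
```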


Example:
The cache and main memory use a fully associative address mapping method. The cache capacity is 4MB, divided into 4 blocks, each block is 1MB, and the main memory capacity is 256MB. If the main memory read and write time is 30ns, the cache read and write time is 3ns, and the average read and write time is 3.27ns, then the cache hit rate is ___(3)___%. If the address conversion table is as follows, when the main memory address is 8888888H, the cache address is ___(4)___H.
  Address conversion table (cache block number → main memory block tag):
    cache block 0 : 38H
    cache block 1 : 88H
    cache block 2 : 59H
    cache block 3 : 67H

  (3)A. 90 B. 95 C. 97 D. 99
  (4)A. 488888 B. 388888 C. 288888 D.188888
The main memory capacity is 256MB, so a main memory address is 28 bits, i.e. 7 hexadecimal digits such as 8888888H. Each block is 1MB, so the address within the block is a 20-bit binary number, i.e. the 5 hexadecimal digits 88888H. The remaining 2 hexadecimal digits, 88H, are the main memory block number, which serves as the tag under fully associative mapping. According to the address conversion table, tag 88H is held in cache block 1, so the cache address is 188888H. For the hit rate, 3h + 30(1 − h) = 3.27 gives h = 0.99, i.e. 99%. The answers are therefore (3) D and (4) D.
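The same calculation can be sketched in a few lines (assuming the simple timing model in which a hit costs the cache access time and a miss costs the main memory access time, which reproduces the 99% figure; the table contents follow the address conversion table above):

```python
# Hit rate from the average access time.
t_cache, t_main, t_avg = 3.0, 30.0, 3.27        # ns
hit_rate = (t_main - t_avg) / (t_main - t_cache)
print(f"hit rate = {hit_rate:.2%}")             # 99.00%

# Fully associative translation of main memory address 8888888H.
# Address conversion table: tag (main memory block number) -> cache block number.
table = {0x38: 0, 0x88: 1, 0x59: 2, 0x67: 3}

addr   = 0x8888888
offset = addr & 0xFFFFF           # 1MB block -> 20-bit (5 hex digit) in-block address
tag    = addr >> 20               # remaining 2 hex digits = main memory block number
cache_block = table[tag]          # 88H is held in cache block 1
cache_addr  = (cache_block << 20) | offset
print(f"cache address = {cache_addr:X}H")       # 188888H
```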

 

1. A cache with a capacity of 64 blocks uses set associative mapping. The block size is 128 words and each group consists of 4 blocks. If the main memory capacity is 4096 blocks and memory is addressed by word, then the main memory address is __(7)__ bits and the main memory area number is __(8)__ bits.
(7) A.16 B.17 C.18 D.19
(8) A.5 B.6 C.7 D.
Analysis: This question concerns the working principle of the cache. The cache holds a copy of a local region of main memory and stores the currently active programs and data. Copying a local range of main memory into the cache allows the CPU to read from the cache at high speed, much faster than accessing main memory. The cache has three mapping methods.

Here, the main memory capacity is 4096 blocks (or "pages") and each block is 128 words, so the main memory holds 4096 × 128 = 2^12 × 2^7 = 2^19 words; with word addressing, the main memory address is therefore 19 bits.
Each main memory area is the size of the whole cache (64 blocks), so the main memory is divided into 4096 / 64 = 64 = 2^6 areas; the area number is therefore 6 bits.
Answers: (7) D, (8) B.
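A quick sketch verifying both figures from the data given in the question:

```python
import math

words_per_block = 128
main_mem_blocks = 4096
cache_blocks    = 64

# (7) Main memory address width: 4096 blocks * 128 words = 2^19 words, word-addressed.
addr_bits = int(math.log2(main_mem_blocks * words_per_block))
print(addr_bits)        # 19

# (8) Area number width: each area is the size of the whole cache (64 blocks),
# so the main memory contains 4096 / 64 = 64 = 2^6 areas.
area_bits = int(math.log2(main_mem_blocks // cache_blocks))
print(area_bits)        # 6
```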

