The basic task of dynamic memory management is to allocate and recycle dynamic memory efficiently while preserving the speed, reliability, and stability of the system. When a request for memory arrives, the system must find a suitable free block among all free blocks to satisfy it; when the user releases a block it no longer needs, the system must reclaim that block so it can be reallocated when a new request arrives. To this end, the system maintains a description of current memory usage that records all information relevant to allocation and recycling. How this information is recorded, and how allocation and recycling are carried out, is the most fundamental difference between the various algorithms.
1 Characteristics of memory management technology in embedded systems
Real-time systems are divided into hard real-time systems and soft real-time systems. In a hard real-time system, every task must not only execute correctly but also on time; in a soft real-time system, tasks run as fast as possible, but no task is required to complete within a fixed deadline. It follows that dynamic memory allocation cannot be used in hard real-time systems: allocation time is nondeterministic (it depends on the number of memory blocks), and an allocation may simply fail. Hard real-time systems must therefore use static memory allocation, in which the memory a program needs is allocated at compile or link time, so allocation failure cannot occur.
1.1 Static allocation and dynamic allocation
Static allocation gives the system the best reliability and real-time performance; for systems with extremely high requirements in both, it is the only option. However, static allocation inevitably costs flexibility: every possible situation must be anticipated during the design phase and space reserved for every requirement, and the system cannot handle any case that was not foreseen. Static allocation also leads to considerable waste, because memory must be provisioned for the worst case, of which only a small part may be used in actual operation.
Dynamic allocation provides great flexibility for the system and is significantly superior to static allocation in terms of memory utilization and system function expansion.
Most systems use a combination of static and dynamic allocation. Since the tasks in an embedded system differ in their reliability and real-time requirements, there are usually some tasks without particularly strict requirements, for which a certain delay or an occasional failure is acceptable. For such tasks, dynamic allocation can satisfy some or all of their memory needs. Tasks with strict timing constraints should use static allocation to guarantee their reliability and real-time behavior.
1.2 Technical requirements for dynamic allocation
(1) Speed: Because embedded systems have strict real-time requirements, memory allocation must be as fast as possible. The complex, full-featured allocation strategies used in general-purpose operating systems are unsuitable here, so embedded systems generally adopt simple, fast allocation schemes.
(2) Reliability: Since embedded systems have high reliability requirements, the memory allocation process must also have high reliability. However, since the reliability requirements of each embedded system are different, the reliability requirements for memory allocation are also different. Generally, systems with extremely high reliability requirements do not use dynamic allocation.
(3) Efficiency: Since memory resources are relatively limited in embedded systems, in order to ensure the normal operation of the program, the memory management system must use memory as efficiently as possible to reduce unnecessary waste.
2 Buddy System
2.1 Principles of the Buddy System
The basic principle of the buddy system is to allocate memory in power-of-two sizes, which makes allocation and recovery very fast. For example, a 10KB request is satisfied with a 16KB block: since every free block's size is a power of 2, 16KB is the smallest block that can hold 10KB. Similarly, a 70KB request is satisfied with a 128KB block.
2.2 Problems with the Buddy System
The buddy system has an advantage over algorithms that split memory at arbitrary sizes rather than in power-of-two blocks: when a block of size 2^k bytes is freed, the storage manager only needs to examine blocks of size 2^k bytes to decide whether a merge is possible. Strategies that allow blocks to be split arbitrarily must search all free lists. By comparison, the buddy system is faster.
However, the buddy system is very inefficient in memory utilization, because every request must be rounded up to a power-of-two size. A 35KB request requires a 64KB allocation, wasting the remaining 29KB; in the extreme case, nearly 50% of memory is wasted.
3 Contiguous memory allocation
3.1 Problem Statement
The memory management method of the buddy system has a huge waste of space usage. In some cases, due to the waste of resources, subsequent memory requests cannot be met, which seriously affects the normal function of the program.
Consider a system with 1MB of memory managed by a buddy system, receiving the following requests: request A, 300KB; request B, 300KB; request C, 50KB. The results are shown in Table 1.
It can be seen from Table 1 that the buddy system wastes space extremely seriously. Even when there is sufficient physical memory (1MB), there may be situations where requests (650KB) cannot be satisfied.
Since embedded systems generally have no virtual memory support, an unsatisfied memory request seriously affects normal program operation. To guarantee normal operation, memory utilization must be improved so that programs' memory requests are satisfied as far as possible. Based on the above analysis of the buddy system, a contiguous memory allocation method is therefore proposed.
3.2 Principle of continuous memory allocation
From the example above it is easy to see that with contiguous allocation (that is, allocating exactly as much space as the request asks for), request C can be satisfied, and a large amount of space remains for further requests. The results are shown in Table 2.
The continuous memory allocation method is obviously different from other memory management methods that allocate in blocks. It uses the continuous allocation method to allocate space according to the actual size of the requested space.
3.3 Continuous memory allocation implementation method
In order to improve the utilization of memory space, space is allocated according to the actual size of the request rather than allocating space according to the integer multiple of the block size.
The basic method is to organize all blocks (free and used) into a doubly linked list in order of physical address, with each block storing its own bookkeeping information. To allocate, the entire list is searched for the best-fitting free block, which is then split in two: one part satisfies the request, and the other is reinserted into the list as a free block. To release, the physically adjacent blocks are found directly through the list, and after any merging the resulting block is reinserted.
But in fact, since we only need to find the best-fitting block in the free blocks during the allocation process, we can use a separate linked list to link all the free blocks together, which can greatly speed up the search for free blocks during space allocation.
In addition, the analysis of the buddy system shows that keeping the free areas in one or more linked lists sorted by size makes allocation very fast. Borrowing from the buddy system's implementation, the single free list can therefore be organized into multiple free lists ordered by size class. Under frequent dynamic memory operations, this improves the efficiency of memory allocation.
Since the above method easily generates many small free areas that cannot be used any more, when allocating space, if the allocated free area is smaller than a certain value, no space allocation will be performed, but the entire space will be returned as the request result.
To summarize, memory blocks can be organized into two double linked lists, namely the free block linked list and the physical block linked list.
(1) Free block linked list: All free blocks are organized into multiple independent free linked lists in an orderly manner. When allocating space, the corresponding linked list is selected for search according to the size of the space. If the search is successful, the allocated space is returned to the requester after the space allocation process is completed; if the search fails, the search continues in the linked list with a larger space; if no suitable free block is found until all linked lists are searched, the allocation operation is considered to have failed.
(2) Physical block linked list: All independent blocks (free blocks and used blocks) are organized into a double linked list. The nodes in the linked list are linked in the order of physical addresses. Each block stores some independent information of the block itself. When the space is released, there is no need to search, but to directly find the block adjacent to the released block based on this linked list. Then, based on the relevant information in the adjacent blocks, the memory block merge operation can be conveniently performed.
In a concrete implementation, the control information of the two lists can be stored separately from user space, so that it cannot be accidentally corrupted; alternatively, it can be stored in the physical blocks themselves, for better space utilization and flexibility.
3.4 How contiguous memory allocation works
First, assume that the minimum remainder worth keeping is 1KB: when splitting a block would leave a free area smaller than 1KB, the block is not split.
Linked lists are kept for 1KB, 2KB, 4KB, 8KB and so on, up to the first size larger than the total memory capacity. For a memory capacity of 1024KB, there are 12 linked lists, from 1KB to 2048KB. Initially all memory is free, and the 2048KB list contains a single 1024KB free area (each list holds all blocks that are smaller than the list's own size and at least as large as the previous list's size; the 2048KB list thus holds all blocks of at least 1024KB and less than 2048KB). All other lists are empty; the initial state of memory is shown in Figure 1. For clarity, the figure uses single links to show the link relationships: solid lines are physical-list pointers, short dashes are free-list pointers, and dotted lines are null pointers. In the physical list, grey blocks are allocated and white blocks are free.
When a 300KB memory request (call it A) arrives, the system starts at the head of the 512KB list, because that is the first list that could contain a block large enough to satisfy the request. Following the best-fit policy, it searches the list for the smallest sufficient free area. The 512KB list is empty, and so is the 1024KB list, so the 1024KB free area is finally found in the 2048KB list. This area is split into a 300KB block and a 724KB block; the 724KB remainder (well above the 1KB minimum) is inserted into the 1024KB list, and the 300KB block is returned to request A.
Then, there is a memory request for 300KB (indicated by B) and another for 50KB (indicated by C). The result is shown in Figure 2.
Now block B is released. Using the physical block list, its neighbours A and C are found immediately. Neither has been released, so no merge is needed. Since B's size (300KB) lies between 256KB and 512KB, B is inserted into the 512KB free list; the result is shown in Figure 3.
Next, block C is released. The physical block list shows that blocks B and F are adjacent to C and that both are free. B and F are removed from the 512KB free list, the three blocks are merged into a single 724KB block (call it F), and the result is inserted into the 1024KB free list, as shown in Figure 4.
Finally, when block A is released, A and F are merged into a single 1024KB block, returning memory to its initial state with one 1024KB free area.
4 Conclusion
Dynamic memory management has long been an important technology in computing. It gives users considerable freedom: memory blocks can be requested from a system partition or from the user memory area. This increases the flexibility of the system, limits the accumulation of fragmentation (provided system memory partitions are not frequently created and deleted), and improves the portability of some C code.