MCU Resources: Thinking in Time and Space

Publisher: chunli | Last updated: 2017-11-25 | Source: eefocus

    I have been reading some books and blogs lately and listening to discussions among experienced engineers, and I want to write down what I took away from it. This post is a summary of some heap code I wrote recently; although it is a very simple application, I learned a lot from it. A great man once said that life is about making a little progress every day (I forget how the rest goes...). Okay, enough rambling, let's get started.

    First, let me talk about how to use a singly linked list to simulate a heap (a memory pool, to be precise).

    The idea is to use a singly linked list to keep track of the available blocks of memory:

    A custom malloc takes one available block off the linked list, removes it from the list, and returns it to the caller.

    A custom free puts a block that is no longer in use back onto the available-memory list.

    A memory_add function carves a raw piece of memory into blocks and initializes the available-memory linked list with them.

Implementation:
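The original post does not show the supporting definitions, so here is a minimal sketch of what they might look like; the names check_str_t, check_t, s_ptFreeList and CHECK_LIST_ITEM_SIZE appear in the code below, but their exact definitions are my assumption. Overlaying the free-list pointer on the payload with a union is one common way to make the casts in the code legal:

#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct {                        /* the payload type served by the pool (assumed) */
    uint8_t chData[16];
} check_str_t;

typedef union check_t check_t;
union check_t {                         /* free-list node: next pointer overlays the payload */
    check_t     *ptNext;
    check_str_t  tItem;
};

#define CHECK_LIST_ITEM_SIZE    sizeof(check_t)

static check_t *s_ptFreeList = NULL;    /* head of the free list */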

void free_check_str_t(check_str_t *ptItem)
{
    check_t *ptThis = (check_t *)ptItem;
    check_t *ptTempList = s_ptFreeList;
    if (NULL == ptThis) {
        return;
    }

    /* push the returned item onto the head of the free list */
    s_ptFreeList = ptThis;
    ptThis->ptNext = ptTempList;
}


check_str_t *malloc_check_str_t(void)
{
    check_t *ptThis = s_ptFreeList;
    if (NULL == ptThis) {
        return NULL;                    /* pool exhausted */
    }

    /* pop the head of the free list and hand it to the caller */
    s_ptFreeList = ptThis->ptNext;
    ptThis->ptNext = NULL;
    return (check_str_t *)(&ptThis->tItem);
}


bool add_memory_block_to_check_str_t_heap(void *pBlock, uint32_t wBlockSize)
{
    check_t *ptNew = NULL;
    uint8_t *pchTemp = pBlock;
    uint32_t wi = 0;
    if (NULL == pBlock) {
        return false;
    }
    if (CHECK_LIST_ITEM_SIZE > wBlockSize) {
        return false;
    }
    /* carve the raw block into items and add each one to the free list */
    for (wi = 0; wi < wBlockSize; wi += CHECK_LIST_ITEM_SIZE) {
        if ((wi + CHECK_LIST_ITEM_SIZE) > wBlockSize) {
            return true;                /* remainder too small for another item */
        }
        ptNew = (check_t *)(pchTemp + wi);
        free_check_str_t((check_str_t *)ptNew);
    }
    return true;
}


This is how a simple dedicated heap is implemented; strictly speaking, the free list behaves like a stack (last freed, first allocated). What are the advantages of doing this over using static variables?

Let me start with the disadvantages:

1. Each variable costs one extra pointer's worth of space compared with before (not for the linked list itself, but for the pointer that stores the address handed out by the HEAP);

2. This second drawback is the important one: memory leaks can occur if you are not careful.

Given these disadvantages, why do people still use dedicated heaps so enthusiastically?

I did not understand it at first, and for a long time afterwards I still did not. It turns out the point is to be able to trade time for space, or the reverse (which brings us to today's topic). What does that mean?

For example, suppose the code above belongs to a reentrant module that provides some service, and each use of the service consumes one struct.

Three tasks A/B/C all need to call this service. If RAM is sufficient, you can give each task its own struct (preserving reentrancy). The three tasks then never have to wait for one another and can run at the same time; that is, you sacrifice RAM (space) in exchange for the time saved by running the three tasks concurrently.

If RAM is not large enough, only one struct can be afforded. The tasks take turns requesting it: whoever gets it uses it, and releases it when done. The three tasks that could have run in parallel are forced by the RAM limitation to execute sequentially, which is trading time for space. In other words, with the dedicated heap provided by the module, you can easily choose between the two strategies according to the actual RAM size, without touching the upper-level logic. That is the role of the dedicated HEAP.
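As a hedged sketch of what a task body might look like when using the pool above (the function and the retry policy are illustrative, not from the original post):

/* Request a struct from the dedicated heap, use it, give it back.
 * If the pool is empty the task simply waits for the next round:
 * it pays with time instead of RAM. */
void task_body(void)
{
    check_str_t *ptItem = malloc_check_str_t();
    if (NULL == ptItem) {
        return;                     /* pool exhausted: try again next round */
    }

    /* ... use ptItem to carry out the service's work ... */

    free_check_str_t(ptItem);       /* forgetting this line is the memory leak */
}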


Actually, all of the rambling above is mainly to introduce the following blog post (don't call me shameless; I really am a rookie who has only just learned this):

Source: http://www.amobbs.com/thread-5642364-1-1.html

Although stream and block are not concepts invented by Linux, their popularity certainly owes a lot to stream files and block files. Stream and block, however, are more general concepts: they represent two completely opposite data-processing strategies, "trading time for space" and "trading space for time".

1. Block processing: trading space for time
  The most notable feature of block processing is that the data to be processed is kept in one continuous piece of memory, so that we can access and process it randomly. In simple terms, we spend the memory space occupied by the data to buy convenience of access, reducing the time cost of access and processing. Block processing is therefore a typical strategy of spending memory space to save processing time.
  For example, suppose we need to pick a few words out of a paragraph of text. We can simply keep the entire paragraph contiguously in RAM and operate on it with block-based string functions. Since the access method is simple and direct (random access), the efficiency of the string processing is determined mainly by the algorithm; access to the string is not a bottleneck. In short, the faster the algorithm, the faster the whole operation, with no extra delay for accessing the string.
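A minimal sketch of block-style processing in C (the text and the search term are made up for illustration):

#include <string.h>
#include <stdio.h>

int main(void)
{
    /* the whole paragraph sits in RAM, so it can be accessed randomly */
    const char *pchText = "stream and block are two strategies for "
                          "trading time and space";

    /* a block-based function is free to jump around inside the buffer */
    const char *pchHit = strstr(pchText, "block");
    if (pchHit != NULL) {
        printf("found at offset %d\n", (int)(pchHit - pchText));
    }
    return 0;
}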

  In fact, this example may look problematic, because the basic unit of string processing is still the character: even though the target data sits entirely inside the data block, common string operations are still carried out "one character at a time" in sequence, just like a stream (for example, comparing two strings or finding the position of a substring). But I have to use this example, because strings are not only the most common target of both stream and block operations; more importantly, this "one character at a time" property is precisely what confuses people about the distinction. So let me use this example to point out:

> In block processing, the bottleneck of string-processing efficiency is the algorithm itself; compared with it, the cost of data access (random access) is almost negligible. The advantage of block processing is that whatever data in the block we access, in whatever order, the access cost is the lowest possible. Block processing can often be read as batch processing, a synonym for efficiency;

> In stream processing, the bottleneck of string-processing efficiency is data access; compared with it, the efficiency of the algorithm can almost be ignored (sequential access). Either the program processes the current character, drinks tea, takes a nap, and the next character is still not ready; or the time the processor spends on one character is far less than the time needed to read the next character from the stream (think of queue-based communication between tasks). In practice the two situations usually occur together: when a task "anxiously" tries to read the next character during data-frame parsing, the queue-operation time plus the time spent waiting for the character to arrive is typically several orders of magnitude larger than the time needed to process a single character.

  Having said all that, let's summarize: block processing is a method that spends memory space to hold all of the target data so that the processing algorithm can access it randomly, thereby saving access time at the very least (and since access is random, more algorithms become applicable, and picking the optimal one brings extra gains). It is a strategy of exchanging memory space for access time.

  A block manifests itself as a randomly accessible memory space. "Randomly accessible" is its property on the time axis; a block does not require the data to be contiguous in space (the word suggests contiguity, but contiguity is not mandatory).

2. Stream processing: trading time for space
  The most notable feature of stream processing is that large data blocks are deliberately split into small data units, and the data processor can only receive or process one unit at a time. The biggest limitation of stream processing is the restriction on access order: all data units must be accessed in the order in which the stream's sender provides them. It is precisely this restriction that makes stream processing feel awkward in a way that "whoever has used it knows". In many cases, for convenience of access, part of the stream is first buffered into a small data block and then processed; a typical example is the data frame (Frame) common in serial communication.
  The benefit of stream processing is just as obvious: since no large block of memory is needed to stage the data, it is especially popular on resource-constrained processors, above all MCUs with small SRAM. The disadvantages are that, constrained by the access order, stream-processing algorithms tend to be more complicated, and some functions cannot be implemented with stream processing alone (locally buffering part of the stream into a block is then required); at the same time, the data-access time incurred by stream processing is considerable. It is often said that stream processing is in essence a zero-inventory manual workshop where everything must be done step by step. The way to improve stream-processing efficiency is to fill the waiting time required for data transmission with useful work, pipeline style.
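A minimal sketch of stream-style processing in C: a byte-at-a-time matcher that never holds more than one byte of state, let alone the whole text (the names are illustrative):

#include <stdbool.h>
#include <stdint.h>

/* Feed bytes one at a time; returns true when the sequence "OK" has
 * just been completed. The text itself is never stored. */
bool stream_match_ok(uint8_t chByte)
{
    static uint8_t s_chState = 0;
    switch (s_chState) {
        case 0:
            s_chState = (chByte == 'O') ? 1 : 0;
            break;
        case 1:
            if (chByte == 'K') {
                s_chState = 0;
                return true;
            }
            s_chState = (chByte == 'O') ? 1 : 0;
            break;
    }
    return false;
}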

  In summary, stream processing is a method that spends more processor time and accepts restrictions on access order (an order restriction is still a restriction on the time axis) in order to save memory space. It is a strategy of exchanging time for space.

3. Interchanging streams and blocks
  In a typical embedded-system data flow, each data-processing stage (call it a process) has its own preference. The flow may span several subsystems (processors) with different roles, and even remote server systems. Some processes are sensitive to performance (time) and have ample RAM; obviously they should exploit the "space for time" nature of block processing and convert the surplus RAM into a performance advantage. Other processes are sensitive to space (usually because they are sensitive to cost) and do not need high performance (say, a UART carrying data at 9600 or even 2400 baud, or an infrared remote control transmitting key codes). For them, using stream processing in exchange for a cost advantage is almost the only choice.
  So when two adjacent processes in one data flow have different preferences, how do we connect them?

> The producer process uses "stream" processing; the consumer process uses "block" processing (a sketch follows this recipe):
1. The consumer provides a queue Q and hands the memory block MEM used for storing the data to Q as its buffer, initializing Q as an empty queue;
2. Permanently disable the dequeue interface of Q;
3. Expose the enqueue interface of Q to the producer;
4. When the queue is full, take MEM out and hand it to the consumer for block processing;
5. If MEM is allocated from a heap, try to obtain a new MEM from the heap and repeat from step 1; when the consumer finishes processing MEM, it must release MEM back to the heap at a suitable point.
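A hedged sketch of this first recipe in C (the queue type and all names are invented for illustration):

#include <stdint.h>
#include <stdbool.h>

#define QUEUE_SIZE  64

typedef struct {
    uint8_t  chBuffer[QUEUE_SIZE];  /* the block MEM serving as Q's buffer */
    uint16_t hwCount;               /* number of bytes enqueued so far */
} byte_queue_t;

static byte_queue_t s_tQueue = {0}; /* step 1: initialized as an empty queue */

/* step 3: the only interface exposed to the (stream) producer */
bool enqueue_byte(uint8_t chByte)
{
    if (s_tQueue.hwCount >= QUEUE_SIZE) {
        return false;               /* full: the consumer's turn */
    }
    s_tQueue.chBuffer[s_tQueue.hwCount++] = chByte;
    return true;
}

/* step 4: once full, the consumer processes the whole buffer as a block */
void consume_block_if_full(void)
{
    if (s_tQueue.hwCount == QUEUE_SIZE) {
        /* ... block processing over s_tQueue.chBuffer, e.g. frame parsing ... */
        s_tQueue.hwCount = 0;       /* static-MEM variant of step 5: reuse the buffer */
    }
}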

> The producer process uses "block" processing; the consumer process uses "stream" processing:
1. The producer provides a queue Q and hands the memory block MEM containing the data to Q as its buffer, initializing Q as a full queue;
2. Permanently disable the enqueue interface of Q;
3. Expose the dequeue interface of Q to the consumer;
4. When the queue becomes empty: 1) if MEM is allocated from a heap, release MEM back to the heap, wait for the producer to provide the next available data block, and return to step 1; 2) if MEM is statically allocated and unique, return it to the producer at this point for the next round of data production.

  From the descriptions above we can see that the queue plays the key bridging role in the conversion between streams and blocks. Notice that besides the traditional operation of "initializing the queue to empty", there is also the notion of "initializing the queue to full".

  Why is the queue so magical? Because its essence is: spend memory space to buy time. (I trust everyone remembers those perverse fill-and-drain swimming-pool problems from elementary school.) The queue is the typical data structure that trades space for time: what it buys is the time difference created by the speed difference between dequeuing and enqueuing, and how much time it can buy is determined entirely by the size of its buffer.
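To make that concrete with made-up numbers: if an ISR produces UART bytes at 115200 baud, roughly 11520 bytes per second, and the main loop drains the queue only once every 10 ms, then up to about 11520 x 0.01 ≈ 116 bytes can arrive between two drains, so the buffer must hold at least that many bytes. The buffer size directly bounds how much "speed difference x time" the queue can absorb.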

  Many of our "comrades" treat the queue as a panacea: whenever they suspect "there might be a speed-mismatch problem" in a data path, they pop a pill, without asking whether the speed difference is persistent (if enqueuing outpaces dequeuing in the long run, the queue will overflow regardless; the buffer size only decides how long that takes). It is as if using a queue lets you play ostrich, bury your head in the sand, and all communication problems cease to exist:

   Well! What if the problem is not a speed difference at all? "I used a queue, why is there still a problem?" Hahaha!

  Actually, all the foreshadowing in this article was just to set up that final rant. Done! Satisfying! Comfortable~ hahahaha.



<--------------------------------------------Dividing line-------------------------------------------------------------->

I'm too lazy to write a separate article about what I took away from yesterday's exchange, so I'll just append it here.

The topic came up when I saw the following passage:

It is best if the lower-level module B knows nothing about the upper-level module A. If struct A must appear in module B,

then we should at least ensure that only a reference of type struct A * appears, and there must not be any reference to module A's interface.

Do not think that cleverly working around the circular-dependency initialization problem is good enough. This is a design principle, and it should not be violated.

This is the object-oriented design principle of single-direction dependency.

That is, if A is the layer above B, then the implementation of A may depend on B, but the implementation of B must not depend on A.

A quick test: if the code of module A is deleted, does module B still compile?

In reality, this problem certainly arises: B wants to call a method of A. Under what circumstances does it come up?

If B is a thread, why not use a communication mechanism? If no communication mechanism is available, the only option left is a callback (safe callbacks are discussed below).

If B is not a thread, why use a callback at all? In that case you should redesign and look for an alternative; if it truly cannot be avoided and a callback must be used, use a safe callback.

What does that mean? Suppose B wants to call a method of A, but the method needs parameters that live inside A, and A is not visible to B. What to do? Declare a pointer to A in module B and pass the parameters through that pointer. Such a callback is an unsafe callback, and we should avoid it. A safe callback passes ordinary values at the time of the call.
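A hedged sketch of the distinction in C (the module split and every name here are invented for illustration):

#include <stdint.h>
#include <stddef.h>

/* ---- inside lower-level module B: it knows nothing about struct A ---- */

/* safe callback: the signature carries only plain values, no A types */
typedef void (*event_handler_t)(uint8_t chEvent, uint16_t hwParam);

static event_handler_t s_fnHandler = NULL;

void b_register_handler(event_handler_t fnHandler)
{
    s_fnHandler = fnHandler;        /* the upper layer A registers itself */
}

static void b_on_event(void)
{
    if (NULL != s_fnHandler) {
        s_fnHandler(1, 0x1234);     /* ordinary values cross the boundary */
    }
}

/* An unsafe callback, by contrast, would store a struct A * inside B and
 * pass it back through the call, dragging knowledge of A into the lower layer. */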


I'm currently learning OOPC; it's hard going...

<------------------------------------- Gorgeous dividing line ----------------------------------------------------->

Basic concepts of encapsulation


First, terminology: we call a module a "service" rather than a module.

A service must live in its own folder, and the folder is named after the service.

To the outside world a service is a black box; "black box" means the outside world does not care about what is inside the service, that is, the contents of the folder.

In principle, a black box communicates with the outside world only through interfaces. Therefore, a service folder must contain a .h file with the same name as the service, which we call the interface header file. If the outside world wants to use the service, it can do so only by including this header file.

For example, suppose we have a service called byte_queue: then the folder is also called byte_queue, and inside it there is a byte_queue.h.
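As an illustrative layout (the files beyond byte_queue.h are assumptions that follow the conventions described below):

byte_queue\
    byte_queue.h      <- interface header file: the only file outsiders include
    byte_queue.c      <- implementation: invisible to the outside world
    app_cfg.h         <- configuration header file: input from the outside in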

Here, the interface header file provides the interface from the inside of the service to the outside; from the perspective of information flow, it goes from the inside out. We therefore generally say that the interface header file "outputs the interface". Conversely, a service often needs certain input information: the system configuration (macros that configure the code), the basic system environment (variable types and so on), and the other services this service may depend on. This kind of information flows from the outside in.

We use a configuration header file, app_cfg.h, to take on that responsibility.

 

Almost everyone has met a system in which there is a single system.h at the top level, and every .c file includes system.h as a convenient pattern. Even if you have not met it yet, you certainly will; this is the more traditional way of handling header files.

This approach mixes together the two header-file roles described above, and it is exactly this mixing that causes so much confusion:

First, the hybrid approach has no modularity at all.

Second, mixing input and output information together easily leads to tangled inclusion errors.

Therefore, information should be sorted by type and placed in the correct header file.

For example, crystal configuration, hardware connections, and other information that clearly belongs to the configuration or basic-environment category is input, and should go into the configuration header file.

A service must have an interface header file, and it must also have a configuration header file.

The interface header file bears the same name as the service; the configuration header file must be called app_cfg.h (why must it? Because that is what our predecessors stipulated).

The app_cfg.h inside a module looks like this:

#include "..\app_cfg.h"

#ifndef __XXX_APP_CFG_H__
#define __XXX_APP_CFG_H__

...

#endif

 

__XXX_APP_CFG_H__ is an include-guard macro that prevents the same content from being included twice.

Please note: app_cfg.h must include the app_cfg.h of its parent directory before its own include guard.

 

A service is a black box: how it is implemented does not matter, but it must contain an interface header file and a configuration header file.

It is worth emphasizing that none of the .c and .h files inside the black box include the interface header file; in short, the interface header file is an isolated header file. The configuration header file, in contrast, is included by every file in the module.

These are the basic rules of service encapsulation.

