4190 views|3 replies

The OP
 

[RT-Thread reading notes]——The definition of threads and the implementation of thread switching

On April 30, I was a little anxious when I saw that everyone else had already posted their reading experiences, so I hurried to post my first impressions of "RT-Thread Kernel Implementation and Application Development Practical Guide". After posting that article I calmed down and read other people's reading notes, and I found several gaps between mine and theirs: what I wrote did not highlight the key points, which made it hard to read, and the logic connecting one point to the next was often weak, even though I had plenty of impressions.

The projects I have done include CAN communication, LwIP porting, SPWM-driven stepper motors, an AMIS30543 driving a 42 mm stepper motor, and a powerstep01 driving an 86 mm stepper motor. Among all of them, judging by the compiled hex files, the LwIP project is the largest, and even it could still be implemented as a foreground/background (bare-metal) system. I still don't know at what point I will be forced to program with an operating system, but through studying this book I realized that one of the great advantages of an embedded operating system is its real-time performance; compared with a bare-metal system, though, there are many more specifications, requirements, and concepts to absorb.

Because I don't understand multi-threaded systems, I looked up the concept in the book: compared with a foreground/background system, event response in a multi-threaded system is still completed in interrupts, but event processing is completed in threads. In a multi-threaded system, threads have priorities just like interrupts, and higher-priority threads are executed first. When an urgent event is flagged in an interrupt, if the priority of the thread corresponding to that event is high enough, it will be responded to immediately, so the real-time performance of a multi-threaded system is better than that of a foreground/background system. In a multi-threaded system we divide the main body of the program, according to its functions, into independent, infinitely looping, non-returning small programs, which we call threads. Each thread is independent, does not interfere with the others, and has its own priority, and the threads are scheduled and managed by the operating system.

There is one sentence in the book that I don't understand: "Adding an operating system makes our programming simpler. The extra overhead of the entire system is just the tiny bit of Flash and RAM occupied by the operating system." For me, many of the definitions, concepts and working principles of embedded operating systems are still confusing, so this sentence not only failed to boost my confidence, it actually made me question myself.
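To make the idea of an "independent, infinitely looping, non-returning small program" concrete for myself, I wrote down a minimal sketch of what such a thread body might look like. The names here (flag1, flag1_thread_entry, the crude delay) are placeholders of my own for illustration, not code taken from the book.

```c
#include <stdint.h>

/* Crude software delay, just a placeholder for illustration. */
static void delay(volatile uint32_t count)
{
    while (count--) { }
}

uint8_t flag1;   /* a flag the thread toggles so we can watch it run */

/* A thread is an independent function whose body loops forever and never returns. */
void flag1_thread_entry(void *parameter)
{
    (void)parameter;          /* this sketch ignores the thread parameter */

    for (;;)                  /* infinite loop: the thread never returns */
    {
        flag1 = 1;
        delay(50000);
        flag1 = 0;
        delay(50000);
    }
}
```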
The importance of Chapter 6 of the book, on the definition of threads and the implementation of thread switching, is difficult for me to put into words. When we build a high-rise, the content of this chapter is the foundation: if the foundation is not laid well, the building is prone to collapse. If I compare it to our schooling from childhood onward, this chapter is as fundamental as the pinyin we learn first.

So everyone must be wondering: what is a thread? The book describes it like this: in a multi-threaded system, we divide the whole system, according to its different functions, into independent, non-returning functions, and we call such a function a thread. This sentence describes the characteristics of a thread, but I still wondered what advantages threads actually bring, and with that question in mind I could only continue on to how threads are implemented.

In the section on defining the thread stack, the book says: "The difference between a multi-threaded system and a bare-metal system is this: in a bare-metal system with global variables, sub-function calls and interrupts, we don't care, while the system is running, where the global variables are placed, where the local variables go when a sub-function is called, or where the return address is saved when an interrupt occurs. When writing an RTOS, however, we must figure out exactly how these environment parameters are stored. In a bare-metal system they all live in a place called the stack, a continuous block of memory in the microcontroller's RAM. The size of the stack is usually specified in the startup file or the linker script and is set up by the C library startup code. In a multi-threaded system, each thread is independent and must not interfere with the others, so an independent stack space has to be allocated for each thread. This stack space is usually a statically defined array or dynamically allocated memory, but either way it lives in RAM."

Here the key property of the thread stack is emphasized again: independence. I must keep this in mind. A thread is a special kind of function: it is independent, its body loops forever, and it never returns. To implement thread switching in the main function, we also need to define a thread control block. In the section on defining the thread control block, the book says: "In a bare-metal system, the body of the program is executed sequentially by the CPU; in a multi-threaded system, the execution of threads is scheduled by the system. In order to schedule threads smoothly, the system defines an additional thread control block for each thread. This thread control block is like the thread's identity card: it contains all of the thread's information, such as the thread's stack pointer, the thread's name, the thread's formal parameters, and so on. Once a thread has a thread control block it truly exists for the system, and from then on every operation the system performs on the thread can be carried out through this control block. Defining a thread control block requires a new data type, which is declared in the header file rtdef.h and can then be used to define a thread control block instance for each thread."
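To picture what "a statically defined array as the thread stack" and "a thread control block declared in rtdef.h" actually look like, here is a simplified sketch based only on the members mentioned above (stack pointer, entry, parameter, name, list node). It is my own cut-down illustration, not the full rt_thread definition from RT-Thread's rtdef.h.

```c
#include <stdint.h>

#define RT_NAME_MAX        8      /* maximum thread name length (illustrative) */
#define THREAD_STACK_SIZE  512    /* bytes reserved for this thread's stack */

/* Doubly linked list node: next/prev pointers only. */
typedef struct rt_list_node
{
    struct rt_list_node *next;
    struct rt_list_node *prev;
} rt_list_t;

/* Simplified thread control block: the thread's "identity card". */
struct rt_thread
{
    void      *sp;                   /* saved stack-top pointer of the thread */
    void      *entry;                /* thread entry function */
    void      *parameter;            /* formal parameter passed to the entry */
    void      *stack_addr;           /* base address of the thread stack */
    uint32_t   stack_size;           /* size of the thread stack in bytes */
    char       name[RT_NAME_MAX];    /* thread name */
    rt_list_t  tlist;                /* the "hook" used to hang the thread on lists */
};

/* One stack and one control block per thread, both living in RAM. */
static uint8_t          flag1_thread_stack[THREAD_STACK_SIZE];
static struct rt_thread rt_flag1_thread;
```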
Besides defining the thread control block, the thread stack, the control block and the thread's entry function ultimately have to be linked together so that the system can schedule them uniformly. This linking is done by the thread initialization function. Inside its function body there is a very important concept that I still don't know how to apply: the linked list. The book says: "After initializing the thread's list node, we later insert the thread into various linked lists through this node. It is like a hook inside the thread control block that lets us hang the thread control block on all kinds of lists. Before initialization, we need to add such a list node member to the thread control block." I still only partially understand the linked list here, and I hope someone can help explain it!

Here I would like to add Baidu's explanation of linked lists: "A linked list is a storage structure that is non-contiguous and non-sequential on the physical storage units; the logical order of the data elements is realized through the pointer (https://baike.baidu.com/item/%E6%8C%87%E9%92%88/2878304) links in the list. A linked list consists of a series of nodes (each element in the list is called a node), and nodes can be generated dynamically at runtime."

When a list node is initialized on its own, its next and prev pointers both point to the node itself. To insert a new node n after node 1 at the head of a doubly linked list: first, point the prev pointer of the node that currently follows node 1 to the new node n; second, point n's next pointer to the node that node 1's next pointer points to; third, point node 1's next pointer to n and n's prev pointer back to node 1. Inserting a node before a given node and deleting a node from a doubly linked list work in a similar way.

The second step of the thread initialization function is to save the thread entry into the entry member of the thread control block; the other members are filled in the same way. The last step is to initialize the thread stack and return the pointer to the top of the thread stack. When a thread runs for the first time, the environment parameters loaded into the CPU registers must have been initialized in advance. Starting from the top of the stack, the order of initialization is fixed. First come the eight registers that the hardware saves automatically when an exception occurs: xPSR, R15, R14, R12, R3, R2, R1 and R0. Bit 24 of xPSR must be 1 (the Thumb bit), R15 (the PC) must hold the thread's entry address, R0 must hold the thread parameter, and the remaining R14, R12, R3, R2 and R1 are initialized to 0. Then come the eight registers that have to be loaded into the CPU manually, R4 to R11, which are initialized to 0xdeadbeef by default.

After the thread is initialized, the book goes on to explain how to implement the ready list. Once a thread has been created, we add it to the ready list to indicate that the thread is ready and can be scheduled by the system at any time. The book then goes further and explains how to implement the scheduler.
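Since the list node is the part I understand least, here is a small sketch of the node initialization and the insert-after steps described above. rt_list_init and rt_list_insert_after are the names RT-Thread uses for these operations, but take the bodies below as my own illustration of the three steps rather than the library's exact source.

```c
/* Repeated from the previous sketch so this compiles on its own. */
typedef struct rt_list_node
{
    struct rt_list_node *next;
    struct rt_list_node *prev;
} rt_list_t;

/* A node initialized on its own points to itself in both directions. */
static inline void rt_list_init(rt_list_t *node)
{
    node->next = node;
    node->prev = node;
}

/* Insert node n right after node l, following the three steps above. */
static inline void rt_list_insert_after(rt_list_t *l, rt_list_t *n)
{
    l->next->prev = n;     /* step 1: the node that follows l points back to n */
    n->next = l->next;     /* step 2: n points forward to that node            */
    l->next = n;           /* step 3: l points forward to n ...                */
    n->prev = l;           /*         ... and n points back to l               */
}
```

And to check my understanding of the fixed initialization order of the stack, here is a sketch of a stack-initialization routine for a Cortex-M target: R4 to R11 below, the hardware-saved frame on top, everything pre-filled with 0xdeadbeef. The function name stack_init_sketch and the struct layout are my own simplification; the corresponding routine in the RT-Thread port (rt_hw_stack_init) may differ in its details.

```c
#include <stdint.h>

/* Register layout built on a thread's stack before its first run.
 * R4-R11 are saved/restored manually by the context-switch code; the rest
 * mirror the frame the Cortex-M core pushes automatically on exception entry. */
struct stack_frame
{
    uint32_t r4, r5, r6, r7, r8, r9, r10, r11;   /* manually loaded registers  */
    uint32_t r0;                                  /* thread parameter           */
    uint32_t r1, r2, r3, r12;                     /* initialized to 0           */
    uint32_t lr;                                  /* R14, initialized to 0 here */
    uint32_t pc;                                  /* R15: thread entry address  */
    uint32_t psr;                                 /* xPSR: bit 24 (Thumb) = 1   */
};

/* Build the initial frame and return the new stack-top pointer.
 * stack_top points at the last byte of the thread's stack array. */
uint8_t *stack_init_sketch(void (*entry)(void *), void *parameter,
                           uint8_t *stack_top)
{
    struct stack_frame *frame;
    uint8_t  *stk;
    uint32_t  i;

    /* Round the top of the stack down to an 8-byte boundary, then reserve
     * room for the whole frame. */
    stk   = (uint8_t *)(((uintptr_t)stack_top + sizeof(uint32_t)) & ~(uintptr_t)7u);
    stk  -= sizeof(struct stack_frame);
    frame = (struct stack_frame *)stk;

    /* Pre-fill every slot with 0xdeadbeef: easy to spot in a debugger. */
    for (i = 0; i < sizeof(struct stack_frame) / sizeof(uint32_t); i++)
        ((uint32_t *)frame)[i] = 0xdeadbeef;

    /* Then set the hardware-saved part in the fixed order described above. */
    frame->r0  = (uint32_t)(uintptr_t)parameter;
    frame->r1  = 0;
    frame->r2  = 0;
    frame->r3  = 0;
    frame->r12 = 0;
    frame->lr  = 0;
    frame->pc  = (uint32_t)(uintptr_t)entry;
    frame->psr = 1uL << 24;

    return stk;   /* stored into the thread control block as the thread's sp */
}
```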
The scheduler is the core of the operating system. Its main job is to switch threads, that is, to find the thread with the highest priority in the ready list and execute it. The scheduler must be initialized before it is used. First a local variable is defined, qualified with the C keyword register so that it is not optimized away by the compiler. Then the thread ready list is initialized; after initialization the whole ready list is empty. Then the current thread control block pointer is initialized to null. rt_current_thread is a global pointer that points to the thread control block of the currently running thread. Generally, we put scheduler initialization after hardware initialization and before thread creation.

Then we start the scheduler. How is the scheduler started? The startup function is rt_system_scheduler_start(). When the scheduler starts, it takes the thread control block of the highest-priority thread out of the ready list and then switches to that thread. However, our threads do not support priority yet, so we manually specify that the first thread to run is the one hanging on the list at index 0 of the ready list. rt_list_entry() is a macro that derives the starting address of a structure from the address of one of its members. The rt_hw_context_switch_to() function performs the very first switch to a new thread; it is implemented in assembly, and the place where the thread context switch actually happens is the PendSV_Handler() exception handler.

Now that we know how the scheduler is implemented, the next step is to use it to perform system scheduling. System scheduling means finding the highest-priority ready thread in the ready list and executing it. Since we do not support priority yet, we only make two threads switch back and forth in turn, which is implemented by the system scheduling function rt_schedule(); the rt_hw_context_switch() function is what triggers the context switch.

Finally, there is the whole main() function. main() first initializes the hardware, then initializes thread 1 and inserts it into the ready list, then initializes thread 2 and inserts it into the ready list, then starts the system scheduler, and thread switching is triggered inside the threads themselves.

Reproduction or use for commercial purposes requires the author's consent and an indication of the source.
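rt_list_entry() was the line I had to stare at the longest, so here is a small self-contained demo of the idea: given only the address of the tlist node inside a thread control block, step back by the member's offset to recover the control block itself. The macro below uses the standard container_of/offsetof pattern and a cut-down demo struct, so treat it as an illustration of the principle rather than RT-Thread's exact definition.

```c
#include <stdio.h>
#include <stddef.h>

/* Cut-down types, redeclared so this demo compiles on its own. */
typedef struct rt_list_node { struct rt_list_node *next, *prev; } rt_list_t;

struct rt_thread_demo
{
    void      *sp;
    char       name[8];
    rt_list_t  tlist;      /* the node that gets hung on the ready list */
};

/* Given the address of `member` inside a struct of type `type`, step back
 * by the member's offset to recover the address of the whole struct. */
#define rt_list_entry(node, type, member) \
    ((type *)((char *)(node) - offsetof(type, member)))

int main(void)
{
    struct rt_thread_demo thread = { 0, "flag1", { 0, 0 } };

    /* Pretend all we have is the list node, e.g. pulled off the ready list. */
    rt_list_t *node = &thread.tlist;

    /* Recover the owning thread control block from the node. */
    struct rt_thread_demo *owner =
        rt_list_entry(node, struct rt_thread_demo, tlist);

    printf("owner == &thread ? %s\n", (owner == &thread) ? "yes" : "no");
    return 0;
}
```

This is exactly the situation the scheduler is in when it takes a node out of the ready list and needs to get back to the thread control block that owns it.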



Personal signature: Hold on to your own persistence, and eventually the day will come when the clouds part.
 

2
 
Personally, I'd rather understand one chapter than speed-read ten.

 
 
 

3
 
Digital Leaflet published on 2019-5-6 12:59: Personally, I would rather understand one chapter than speed-read ten. It's pointless
This is just one chapter! A few pages in the book! I learn very slowly, and I'm not particularly smart.
 
Personal signature: Hold on to your own persistence, and eventually the day will come when the clouds part.
 
 

4
 

Well written. Keep it up!

 
 
 
