Introduction
By reading the source code of a sequential program line by line, it is easy to tell which operations the program will ask the processor to perform and in what order. In fact, if all the inputs to a sequential program are known, not only can the series of machine instructions executed by the processor be predicted exactly, but the final output value or system behavior can also be calculated. No matter how fast or slow the program runs, the result is the same. In reality, however, purely sequential programs are rare. The main() function of an embedded C program, for example, looks sequential, but at some moment it will be interrupted by one of the system's hardware interrupts. When a peripheral interrupt occurs, the corresponding interrupt service routine runs in place of main(). This process is called preemption.
Preemption means that main() executes more slowly than expected, because its effective speed depends on the number of interrupts in the system, the execution time of the interrupt service routines, and the time spent saving and restoring processor state. Unless the interrupt service routines consume so many processor cycles that a time constraint is violated, the interrupts themselves do not change the output of the rest of the system; they only slow the program down.
Since most interrupt service routines handle interrupts from system devices, their execution inevitably changes the system state. That change eventually alters the behavior of the main instruction sequence, which must respond to it appropriately. At this point it is hard to predict not only which operations the processor will perform, but also when and in what order it will perform them.
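As a minimal illustration (the peripheral, the ISR name, and the shared variables are all hypothetical, not from the article), an interrupt service routine might record new state in shared variables that the main instruction sequence then has to notice and respond to:

```c
#include <stdint.h>
#include <stdbool.h>

/* Shared state written by the ISR and read by the main sequence.
 * 'volatile' tells the compiler the values can change outside normal
 * program flow, so they must not be cached in registers. */
static volatile bool     sample_ready = false;
static volatile uint16_t sample_value = 0;

/* Hypothetical interrupt service routine for a sensor peripheral.
 * In a real system it would be attached to an interrupt vector and
 * would read the peripheral's data register. */
void sensor_isr(void)
{
    sample_value = 42;      /* stand-in for reading the data register */
    sample_ready = true;    /* the state change main() must notice    */
}

int main(void)
{
    for (;;) {
        if (sample_ready) {         /* respond to the state changed by the ISR */
            sample_ready = false;
            /* process sample_value ... */
        }
        /* other sequential work; it can be preempted at any instruction */
    }
}
```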
Most processors support nested interrupts: an interrupt service routine that has preempted the main instruction sequence can itself be interrupted by a higher-priority interrupt service routine. When the higher-priority routine completes, the original interrupt service routine continues, and only then does the main instruction sequence resume. Each time preemption occurs, the processor flags, the current program counter (PC), and the contents of key registers must be saved, usually in RAM; this saved information is called the context of the preempted program. It is restored to the processor just before the preempted program resumes running. Most processors save these values automatically when an interrupt occurs, so all that remains is to execute the entry and exit code of the interrupt service routine.
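The exact contents of a context are architecture-specific; the sketch below only shows what a small 32-bit MCU might need to keep for each preempted program (the register names assume a Cortex-M-like core and are illustrative):

```c
#include <stdint.h>

/* Illustrative saved context for one preemptable unit of execution. */
typedef struct {
    uint32_t r[13];   /* general-purpose registers r0..r12            */
    uint32_t sp;      /* stack pointer                                */
    uint32_t lr;      /* link register / return address               */
    uint32_t pc;      /* program counter of the preempted code        */
    uint32_t psr;     /* processor status flags                       */
} cpu_context_t;

/* One such structure per preemptable program, kept in RAM. */
static cpu_context_t saved_context;
```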
1 Pseudo-parallelism
A similar technique is to make the processor treat software events the way it treats hardware events. To achieve this, the system is divided into a set of independent pieces of work to be processed, namely tasks. Preemptive scheduling makes this idea practical: the scheduler manages the system software's use of the processor and ensures that time-critical events are handled efficiently.
Each task is a function that executes sequentially and typically never returns, running instead in an infinite loop, so that from its own point of view it appears to have exclusive use of the processor. Each task is given a specific job, such as reading a sensor, scanning a keyboard, logging data, or refreshing a display, and each has its own priority and its own stack space in RAM. Together, this set of tasks implements the function the whole system is supposed to perform.
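A task function might look like the sketch below; read_sensor, log_sample, and task_delay are hypothetical placeholders for whatever the application and RTOS actually provide:

```c
#include <stdint.h>

/* Hypothetical helpers standing in for application and RTOS services. */
extern int  read_sensor(void);
extern void log_sample(int value);
extern void task_delay(uint32_t ticks);

/* A task is just a C function that never returns. */
void sensor_task(void *arg)
{
    (void)arg;
    for (;;) {                      /* the task behaves as if it owned the CPU */
        int value = read_sensor();  /* the specific job given to this task     */
        log_sample(value);
        task_delay(100);            /* give up the CPU until the next period   */
    }
}
```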
When a high-priority task preempts a low-priority task, the scheduler does much the same thing the processor does when it handles an interrupt. First, the context of the currently running task is saved somewhere in memory, and then the new task is started. If the new task has run before, it has a saved context, which is restored so that it continues where it left off. When the high-priority task finishes, the scheduler saves its final context and resumes the preempted task as if it had never been interrupted.
With this division, each task function can be written as if it had exclusive use of the processor. In practice, most systems have only one processor, so only one task or interrupt service routine can actually be executing at any given time. When no interrupt is active, the scheduler decides the order of execution based on the priorities of the tasks that are ready.
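A minimal sketch of that decision, assuming a small fixed task table (real schedulers typically use priority bitmaps or queues rather than a linear scan):

```c
#define MAX_TASKS 8

typedef enum { TASK_DORMANT, TASK_READY, TASK_RUNNING, TASK_WAITING } task_state_t;

typedef struct {
    task_state_t state;
    unsigned     priority;   /* lower number = higher priority (a common convention) */
} tcb_t;

static tcb_t task_table[MAX_TASKS];

/* Return the index of the highest-priority ready task, or -1 if none is ready. */
static int pick_next_task(void)
{
    int best = -1;
    for (int i = 0; i < MAX_TASKS; i++) {
        if (task_table[i].state != TASK_READY)
            continue;
        if (best < 0 || task_table[i].priority < task_table[best].priority)
            best = i;
    }
    return best;
}
```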
Figure 1 shows the execution of two tasks of different priorities and an interrupt service routine. First, the interrupt service routine preempts the low-priority task, which had been in the running state, and it puts a higher-priority task into the ready state. When the interrupt service routine completes, the scheduler therefore selects the high-priority task to run, delaying the resumption of the preempted task. Note that the processor treats even the lowest-priority interrupt in the system as more important than the highest-priority task.
2 Task Control
Information about each task, such as its start address (in C, the name of the task function), its priority, and the stack space its execution requires, must be provided to the scheduler. A system call uses this information to create a new task. The details vary between operating systems, but the purpose is the same.
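The creation call below is hypothetical; real kernels name and order these parameters differently (µC/OS-II's OSTaskCreate, for instance), but they all need the same kind of information:

```c
#include <stddef.h>
#include <stdint.h>

typedef void (*task_entry_t)(void *arg);

/* Hypothetical task-creation system call. */
int create_task(task_entry_t entry,         /* start address: the task function    */
                void        *arg,           /* argument passed to the task         */
                unsigned     priority,      /* scheduling priority                 */
                uint32_t    *stack,         /* RAM reserved for this task's stack  */
                size_t       stack_words);  /* size of that stack                  */

/* Example registration of the sensor task sketched earlier. */
extern void sensor_task(void *arg);
static uint32_t sensor_stack[256];

void system_init(void)
{
    create_task(sensor_task, NULL, 4, sensor_stack, 256);
}
```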
Inside a task function, system calls related to software events or timed events may be made. Many tasks wait for a particular kind of event and respond to it: one task may generate a software event, while another may wait 100 ns and then repeat its work.
Software events and timeout events can be generated by other tasks or by interrupt service routines. Figure 1 shows the latter case: an interrupt service routine generates an event that a high-priority task is waiting for, thereby waking that task. The interrupt might just as well be the system clock tick, with the high-priority task simply waiting for a counter to reach a certain value. Either way, because a new software event has arrived, the high-priority task is put into the running state at the next scheduling point.
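The pattern might look like this sketch; the event API (event_wait, event_signal_from_isr) and the names are placeholders for whatever primitive the RTOS actually provides (a semaphore, event flag, or message queue):

```c
#include <stdbool.h>

typedef struct event event_t;

extern event_t *sample_event;   /* created during system start-up            */
extern bool event_wait(event_t *e, unsigned timeout_ticks);
extern void event_signal_from_isr(event_t *e);

/* High-priority task: sleeps until the ISR signals the event or a timeout expires. */
void consumer_task(void *arg)
{
    (void)arg;
    for (;;) {
        if (event_wait(sample_event, 100)) {
            /* woken by the event: handle the new data */
        } else {
            /* woken by the timeout: take recovery or periodic action */
        }
    }
}

/* Interrupt service routine: generates the software event the task waits on. */
void device_isr(void)
{
    /* acknowledge the hardware here */
    event_signal_from_isr(sample_event);   /* makes the waiting task ready */
}
```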
Task priorities can be assigned in different ways, even arbitrarily. However, the rate-monotonic algorithm (RMA) gives us a sound method of assigning them that ensures the deadlines of critical tasks are always met (see reference 1).
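As a quick illustration with made-up numbers (the periods and execution times below are assumptions, not figures from the article), rate-monotonic assignment gives the shortest-period task the highest priority, and the Liu and Layland utilization bound gives a sufficient test of schedulability:

```c
#include <math.h>
#include <stdio.h>

typedef struct { double period_ms; double exec_ms; } task_t;

int main(void)
{
    task_t tasks[] = {
        {  10.0,  2.0 },   /* shortest period -> highest priority */
        {  50.0, 10.0 },
        { 100.0, 15.0 },   /* longest period  -> lowest priority  */
    };
    int n = sizeof tasks / sizeof tasks[0];

    double u = 0.0;                              /* total CPU utilization */
    for (int i = 0; i < n; i++)
        u += tasks[i].exec_ms / tasks[i].period_ms;

    /* Liu & Layland bound: if U <= n(2^(1/n) - 1), all deadlines are met. */
    double bound = n * (pow(2.0, 1.0 / n) - 1.0);
    printf("U = %.3f, bound = %.3f -> %s\n", u, bound,
           u <= bound ? "schedulable" : "needs further analysis");
    return 0;
}
```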
3 Trade-offs
In a system using a preemptive scheduling strategy, the memory cost consists mainly of additional ROM for the system-call code and the RAM used by each task's stack. Another cost is CPU time: the scheduling policy itself consumes processor cycles, and context switches and clock ticks can eat a considerable share of CPU time when they occur frequently.
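For a rough sense of scale, here is a back-of-envelope estimate with purely illustrative numbers (tick period, handler cost, switch rate, and switch cost are all assumptions, not measurements from the article):

```c
#include <stdio.h>

int main(void)
{
    double tick_period_us   = 1000.0;  /* 1 ms system tick               */
    double tick_handler_us  = 10.0;    /* cost of one tick handler       */
    double switches_per_sec = 500.0;   /* context switches per second    */
    double switch_cost_us   = 20.0;    /* cost of one context switch     */

    double tick_overhead   = tick_handler_us / tick_period_us;        /* 1% */
    double switch_overhead = switches_per_sec * switch_cost_us / 1e6; /* 1% */

    printf("tick: %.1f%%, switches: %.1f%%, total: %.1f%% of CPU time\n",
           100.0 * tick_overhead, 100.0 * switch_overhead,
           100.0 * (tick_overhead + switch_overhead));
    return 0;
}
```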
When tasks share system resources such as global variables, data structures, or peripheral control and status registers, a mechanism called mutual exclusion is used to prevent them from competing for those shared resources. Mutual exclusion is an effective way to avoid resource conflicts, but it brings a new problem of its own, priority inversion (see reference 2).
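A typical use might look like the sketch below; the mutex API and names are hypothetical stand-ins for whatever the RTOS provides (a mutex, a binary semaphore, and so on):

```c
typedef struct mutex mutex_t;

extern mutex_t *sensor_lock;             /* guards the shared data below */
extern void mutex_lock(mutex_t *m);
extern void mutex_unlock(mutex_t *m);

static int shared_reading;               /* global state used by several tasks */

void update_reading(int value)
{
    mutex_lock(sensor_lock);             /* no other task can enter this region */
    shared_reading = value;
    mutex_unlock(sensor_lock);           /* hold the lock as briefly as possible
                                            to limit the window for priority inversion */
}
```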
In some applications, dividing the system into independent tasks and using a preemptive scheduling policy will simplify the design, but the pros and cons of this approach need to be weighed. Only by considering these trade-offs fully can we judge whether the method is suitable for our application.
References
1 Barr Michael, David Stewart. Rate Monotonic Scheduling. Embedded Systems Programming, 2002(3)
2 Barr Michael, David Kalinsky. Priority Inversion. Embedded Systems Programming, 2002(4)
3 www.ucos-ii.com, www.bmrtech.com