Preemptive Scheduling Strategy in Embedded Operating Systems


Introduction
By reading the source code of a sequential program line by line, it is not difficult to tell which operations the program will ask the processor to perform, and in what order. In fact, if all of the inputs to a sequential program are known, not only can the series of machine instructions the processor executes be predicted exactly, but the final output value, that is, the system's behavior, can also be calculated. No matter how fast or slowly the program runs, the result is the same. In reality, however, purely sequential programs are rare. Consider the main() function of an embedded C program: although it looks sequential, at some moment it will be interrupted by one of the system's hardware interrupts. When a peripheral device raises an interrupt, the corresponding interrupt service routine runs in place of main(). This process is called preemption.

Preemption means that main() executes more slowly than expected, because its progress now depends on how many interrupts occur, how long each interrupt service routine runs, and the time spent saving and restoring the processor state. In effect, a share of the processor's cycles is taken by the interrupt service routines. Unless the interrupted program has timing constraints of its own, the interrupts do not change the output of other parts of the system; they merely slow the program's execution.

Because most interrupt service routines handle interrupts from the system's devices, their execution inevitably changes the system's state. That change eventually alters the behavior of the main instruction sequence, which must respond to it appropriately. At that point it is difficult not only to predict which operations the processor will perform, but also when and in what order it will perform them.

Most processors support nested interrupts: an interrupt service routine that has interrupted the program sequence can itself be interrupted by another interrupt service routine of higher priority. Once the higher-priority routine finishes, the original interrupt service routine resumes, and only then does the main instruction sequence continue.

Whenever a preemption occurs, the processor flags, the current program counter, and the contents of the key registers must be saved, usually in RAM. This saved information is called the context of the preempted program, and it is restored to the processor before that program runs again. Most processors automatically save these values when an interrupt occurs; all that remains is to execute the entry and exit code of the interrupt service routine.
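As an illustration (not taken from the article), the saved context might look something like the following C sketch; the exact register set and layout depend on the processor and the operating system, so every name here is an assumption.

```c
#include <stdint.h>

/* Hypothetical layout of a saved context for an ARM Cortex-M-like core.
 * The exact registers and their order are architecture- and RTOS-specific;
 * this struct only illustrates what "context" contains. */
typedef struct {
    uint32_t r4_to_r11[8];  /* callee-saved general-purpose registers */
    uint32_t r0_to_r3[4];   /* caller-saved registers                 */
    uint32_t r12;
    uint32_t lr;            /* link register                          */
    uint32_t pc;            /* program counter of the preempted code  */
    uint32_t psr;           /* processor status flags                 */
} task_context_t;

/* Each task has its own stack in RAM; its context is pushed onto it. */
typedef struct {
    task_context_t *saved_context;  /* where the context was saved    */
    uint8_t         priority;       /* the task's fixed priority      */
} task_control_block_t;
```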

1 Pseudo-parallelism

A similar technique lets the processor treat software events the same way it treats hardware events. To achieve this, the system is divided into a series of independent activities to be processed, namely tasks. Preemptive scheduling makes this idea workable: it manages the system software's use of the processor and allows the system to guarantee that time-critical events are handled promptly.

Each task is a function that executes sequentially and usually ends in an infinite loop, so each task appears to have exclusive use of the processor. Every task is given a specific job, such as reading a sensor, scanning the keyboard, logging data, or refreshing a display; each has its own priority and its own stack space in RAM. Taken together, the tasks implement the function the whole system is supposed to perform.
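A task function might therefore look like the hedged sketch below; read_sensor(), record_sample() and wait_for_next_period() are hypothetical helpers standing in for driver and RTOS calls, not the API of any particular kernel.

```c
/* Hypothetical helpers, stand-ins for driver and RTOS calls. */
extern int  read_sensor(void);
extern void record_sample(int value);
extern void wait_for_next_period(void);

/* Illustrative task body: it does one job, then blocks until it is needed
 * again. The task is also given its own priority and stack when created. */
void sensor_task(void *arg)
{
    (void)arg;                    /* argument supplied at task creation */
    for (;;) {                    /* tasks typically never return       */
        int value = read_sensor();
        record_sample(value);
        wait_for_next_period();   /* give the processor back until next period */
    }
}
```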

When a high-priority task preempts a low-priority one, the scheduler does essentially what the processor does when handling an interrupt. First the context of the currently running task is saved somewhere in memory, and then the new task is started. If the new task has run before, it has a saved context of its own, which is restored so that it continues where it left off. When the high-priority task finishes, the scheduler saves its final context and resumes the preempted task as if it had never been interrupted.
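That switch could be sketched as follows; schedule(), save_context(), restore_context() and highest_priority_ready_task() are hypothetical names, and on a real port the register moves would be architecture-specific assembly.

```c
typedef struct tcb tcb_t;            /* opaque task control block            */

extern void   save_context(tcb_t *t);            /* architecture-specific    */
extern void   restore_context(const tcb_t *t);   /* (assembly on a real port)*/
extern tcb_t *highest_priority_ready_task(void);

static tcb_t *current_task;

/* Hedged sketch of the switch decision only; the register save/restore
 * itself is done by the architecture-specific routines declared above. */
void schedule(void)
{
    tcb_t *next = highest_priority_ready_task();
    if (next != current_task) {
        if (current_task != NULL)
            save_context(current_task);  /* preserve the preempted task      */
        current_task = next;
        restore_context(current_task);   /* resume as if never interrupted   */
    }
}
```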

With this division, each task function can be written as if it had the processor to itself. In practice, most systems have only one processor, so only one task or interrupt service routine is actually executing at any given moment. When no interrupt is active, the scheduler decides which task runs next based on the priorities of the ready tasks.
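One common way to make that decision is a ready bitmap with one bit per priority level; the sketch below is an assumption about how such a scheduler might be built, not the algorithm of any particular kernel.

```c
#include <stdint.h>

/* One bit per priority level; here bit 0 = priority 0 = highest priority
 * (the numbering convention is an assumption, kernels differ). */
static uint32_t ready_bitmap;

void mark_ready(unsigned prio)   { ready_bitmap |=  (1u << prio); }
void mark_blocked(unsigned prio) { ready_bitmap &= ~(1u << prio); }

/* Return the priority of the highest-priority ready task, or -1 if none. */
int highest_ready_priority(void)
{
    for (unsigned p = 0; p < 32; p++)
        if (ready_bitmap & (1u << p))
            return (int)p;
    return -1;    /* nothing ready: run the idle task */
}
```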

Figure 1 shows the execution of two tasks of different priorities together with an interrupt service routine. First the interrupt service routine preempts the low-priority task that is currently running, and in doing so it moves a higher-priority task into the ready state. When the interrupt service routine finishes, the scheduler therefore selects the high-priority task to run, which delays the resumption of the preempted task. Note that the processor always treats even the lowest-priority interrupt as more important than the highest-priority task.

2 Task Control

Information about each task, such as its start address (in C, the address bound to the function name), its priority, and the stack space it needs, must be provided to the scheduler. A system call uses this information to create the new task. The exact information varies between operating systems, but its purpose is the same.
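As a concrete example, the µC/OS-II kernel cited in the references bundles exactly this information into its creation call. The fragment below assumes the standard OSTaskCreate() interface; the priority, stack size, and task body are made up for illustration.

```c
#include "ucos_ii.h"            /* µC/OS-II header, assumed to be available */

#define APP_TASK_PRIO       5   /* illustrative priority                    */
#define APP_TASK_STK_SIZE 128   /* illustrative stack depth (in OS_STK)     */

static OS_STK AppTaskStk[APP_TASK_STK_SIZE];   /* the task's stack in RAM   */

static void AppTask(void *p_arg)               /* the task's start address  */
{
    (void)p_arg;
    for (;;) {
        /* ... the task's work ... */
        OSTimeDly(OS_TICKS_PER_SEC);           /* then sleep for one second */
    }
}

void create_app_task(void)
{
    /* start address, argument, top of stack, priority */
    OSTaskCreate(AppTask, (void *)0,
                 &AppTaskStk[APP_TASK_STK_SIZE - 1], APP_TASK_PRIO);
}
```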

Inside a task function, system calls related to software events or timed events may be used. Many tasks wait for a specific type of event and then respond to it: some may generate a software event themselves; others may wait, say, 100 ms and then repeat their work.

Software events and timeout events can be generated by other tasks or by interrupt service routines. Figure 1 shows the latter case: an interrupt service routine generates the event a high-priority task is waiting for, thereby waking that task. It is equally possible that the interrupt service routine is simply the clock tick and the high-priority task is waiting for a counter to reach a certain value. Either way, because of the new event, the high-priority task is put into the running state at the next scheduling decision.
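Assuming µC/OS-II again, the interrupt-to-task signalling of Figure 1 could look roughly like the following; the initialization and the interrupt handler body are simplified for illustration.

```c
#include "ucos_ii.h"

static OS_EVENT *DataReadySem;          /* counting semaphore used as the event */

void app_events_init(void)
{
    DataReadySem = OSSemCreate(0);      /* no event pending at start-up         */
}

/* High-priority task: sleeps until an interrupt signals the semaphore. */
void HighPrioTask(void *p_arg)
{
    INT8U err;
    (void)p_arg;
    for (;;) {
        OSSemPend(DataReadySem, 0, &err);   /* block (forever) for the event    */
        /* ... handle the data the interrupt announced ...                      */
    }
}

/* Called from the device's interrupt service routine: posting the semaphore
 * makes the high-priority task ready, so the scheduler runs it as soon as
 * the interrupt service routine returns. */
void device_isr_body(void)
{
    OSSemPost(DataReadySem);
}
```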

Task priorities can be set in many ways, even arbitrarily. However, the rate monotonic algorithm (RMA) provides a sound method for ensuring that the deadlines of critical tasks are always met.
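The article does not state the RMA schedulability test, but the classic Liu and Layland utilization bound used in rate monotonic analysis gives a quick sufficient check; below is a minimal sketch with made-up task parameters.

```c
#include <math.h>
#include <stdio.h>

/* Liu & Layland sufficient test for RMA: n periodic tasks all meet their
 * deadlines if the total utilization sum(Ci/Ti) <= n * (2^(1/n) - 1). */
int rma_schedulable(const double exec_time[], const double period[], int n)
{
    double u = 0.0;
    for (int i = 0; i < n; i++)
        u += exec_time[i] / period[i];
    return u <= n * (pow(2.0, 1.0 / n) - 1.0);
}

int main(void)
{
    double c[] = { 1.0, 2.0, 3.0 };     /* worst-case execution times (ms) */
    double t[] = { 10.0, 20.0, 40.0 };  /* periods (ms)                    */
    /* utilization = 0.275, bound for n = 3 is about 0.780, so schedulable */
    printf("schedulable: %d\n", rma_schedulable(c, t, 3));
    return 0;
}
```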

3 Trade-offs

In a system that uses a preemptive scheduling strategy, the memory cost consists mainly of the additional ROM for the system-call code and the RAM used for each task's stack. Another cost is CPU time: the scheduling policy itself consumes processor cycles, and context switches and clock ticks can take a considerable share of CPU time, especially when they occur frequently.
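To make the CPU-time cost concrete, a rough estimate can be computed as below; the tick rate, tick handler time, switch rate, and switch time are illustrative assumptions, not measurements from the article.

```c
/* Back-of-the-envelope overhead estimate; all four figures are assumptions.
 * Example: a 1000 Hz tick with a 10 us handler costs 1.0% of the CPU, and
 * 200 context switches per second at 25 us each cost another 0.5%. */
double scheduler_overhead_fraction(double tick_hz, double tick_us,
                                   double switches_per_sec, double switch_us)
{
    return (tick_hz * tick_us + switches_per_sec * switch_us) / 1e6;
}
```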

When tasks share system resources such as global variables, data structures, or peripheral control and status registers, a mechanism called mutual exclusion is used to prevent competition for those shared resources. Mutual exclusion is an effective way to avoid resource conflicts, but it introduces a new problem of its own, priority inversion (see reference 2).
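As one concrete (assumed) form of such a mechanism, µC/OS-II provides a mutex whose priority-ceiling argument also counters the priority inversion just mentioned; a minimal sketch follows, with the ceiling priority and shared variable invented for illustration.

```c
#include "ucos_ii.h"

#define SENSOR_MUTEX_PRIO  4            /* priority ceiling used by the mutex */

static OS_EVENT *SensorMutex;
static int shared_sensor_value;         /* resource shared by several tasks   */

void sensor_mutex_init(void)
{
    INT8U err;
    SensorMutex = OSMutexCreate(SENSOR_MUTEX_PRIO, &err);
}

void update_shared_value(int v)
{
    INT8U err;
    OSMutexPend(SensorMutex, 0, &err);  /* block until the resource is free   */
    shared_sensor_value = v;            /* critical section                   */
    OSMutexPost(SensorMutex);           /* release it for other tasks         */
}
```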

In some applications, dividing the system into independent tasks and using a preemptive scheduling policy simplifies the design, but the approach has costs as well as benefits. Only by weighing these trade-offs carefully can we judge whether it suits a particular application.

References
1 Michael Barr, David Stewart. Rate Monotonic Scheduling. Embedded Systems Programming, 2002(3)
2 Michael Barr, David Kalinsky. Priority Inversion. Embedded Systems Programming, 2002(4)
3 www.ucos-ii.com, www.bmrtech.com
