Linux Kernel Hands-On Course (Process): SMP Load Balancing
- Multi-core architecture
- CPU topology
- Scheduling domains and scheduling groups
  - Scheduling domain (sched_domain)
  - Scheduling group (sched_group)
  - Summary
- When to do load balancing?
- The basic process of load balancing
Our previous discussions of scheduling were all based on the default scheduling policy on a single CPU. We know that, to reduce "interference" between CPUs, each CPU has its own run queue. At runtime, some CPUs may become busy while others sit idle, as shown in the following figure:
To avoid this problem, the Linux kernel performs load balancing across the per-CPU run queues: it migrates tasks from a heavily loaded CPU to a relatively lightly loaded one. Because load balancing operates across multiple cores, before explaining it we first look at the multi-core architecture.
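The idea can be sketched as a user-space toy model. This is only an illustration of the "pull tasks from the busiest queue" principle; all names here are invented, and the real kernel balancer works with weighted loads, scheduling domains, and many heuristics rather than raw queue lengths:

```python
# Toy model of per-CPU run queues and a simple load-balance pass.
# Each run queue is just a list of task names; "load" is queue length.

def balance(runqueues):
    """Move tasks from the busiest queue to the idlest until roughly even."""
    while True:
        busiest = max(runqueues, key=len)
        idlest = min(runqueues, key=len)
        # Stop when queues differ by at most one task.
        if len(busiest) - len(idlest) <= 1:
            break
        # Migrate one task from the busiest CPU to the idlest CPU.
        idlest.append(busiest.pop())
    return runqueues

# Example: CPU0 is overloaded, CPU2 is idle.
rqs = [["t1", "t2", "t3", "t4"], ["t5"], []]
balance(rqs)
print([len(rq) for rq in rqs])  # -> [2, 2, 1]
```

After the pass, no CPU holds more than one task above any other, which is the balanced state the figure above is aiming for.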
multi-core architecture
Here we take the NUMA (Non-Uniform Memory Access) architecture on Arm64 as an example to look at how a multi-core system is composed.
As you can see from the picture, this is a non-uniform memory access architecture. Each CPU accesses its local memory quickly, with low latency. Because of the Interconnect module, all of the memory forms a single pool, so a CPU can also access remote memory, but more slowly and with higher latency than local memory.
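This local-versus-remote cost difference is commonly expressed as a node distance table (the kind of output `numactl --hardware` reports, following the ACPI SLIT convention where local access is normalized to 10 and remote access is larger). The snippet below is a hypothetical two-node example; the distance values and function names are invented for illustration:

```python
# Hypothetical 2-node NUMA distance table (SLIT convention:
# local access = 10, remote access > 10).
distance = [
    [10, 20],  # distances from node 0 to nodes 0, 1
    [20, 10],  # distances from node 1 to nodes 0, 1
]

def nearest_node(cpu_node, candidates):
    """Pick the candidate memory node that is cheapest to reach from cpu_node."""
    return min(candidates, key=lambda n: distance[cpu_node][n])

# A task running on node 0 prefers its local memory node.
print(nearest_node(0, [0, 1]))  # -> 0
```

This preference for local memory is also why the load balancer must be topology-aware: blindly migrating a task to a remote node can make it slower even if that node's CPUs are idle.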