
Linux Kernel Practical Course (Process): SMP Load Balancing

Latest update time:2023-06-25
  • Multi-core architecture

  • CPU topology

  • Scheduling domains and scheduling groups

    • Scheduling domain sched_domain

    • Scheduling group sched_group

  • Summary

  • When to do load balancing?

  • The basic process of load balancing

Our previous discussions of scheduling were all based on the default scheduling policy on a single CPU. We know that, to reduce "interference" between CPUs, each CPU has its own queue of runnable tasks. At runtime this means some CPUs may be busy while others sit idle, as shown in the following figure:

To avoid this problem, the Linux kernel implements load balancing across the per-CPU queues of runnable tasks: the process of moving tasks from a heavily loaded CPU to a relatively lightly loaded one is load balancing. Because load balancing operates across multiple cores, before explaining it we first look at the multi-core architecture.
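Before turning to the hardware topology, the idea can be illustrated with a toy user-space model: each CPU has a count of runnable tasks, and a task migrates from the busiest queue to the least loaded one. This is only a conceptual sketch; the kernel's real balancer (in kernel/sched/fair.c) works on scheduling domains and weighted loads rather than raw task counts, and the CPU count and task numbers below are made up for the example.

```c
#include <stdio.h>

#define NR_CPUS 4

/* Hypothetical per-CPU counts of runnable tasks. */
static int nr_running[NR_CPUS] = { 6, 1, 3, 2 };

int main(void)
{
    int busiest = 0, idlest = 0;

    /* Pick the most and least loaded CPUs. */
    for (int cpu = 1; cpu < NR_CPUS; cpu++) {
        if (nr_running[cpu] > nr_running[busiest])
            busiest = cpu;
        if (nr_running[cpu] < nr_running[idlest])
            idlest = cpu;
    }

    /* Migrate one task only if there is a real imbalance. */
    if (nr_running[busiest] - nr_running[idlest] > 1) {
        nr_running[busiest]--;
        nr_running[idlest]++;
        printf("migrated one task: CPU%d -> CPU%d\n", busiest, idlest);
    }

    for (int cpu = 0; cpu < NR_CPUS; cpu++)
        printf("CPU%d: %d runnable tasks\n", cpu, nr_running[cpu]);

    return 0;
}
```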

Multi-core architecture

Here we take the Arm64 NUMA (Non-Uniform Memory Access) architecture as an example to look at how a multi-core system is composed.

As you can see from the figure, memory access here is non-uniform. Each CPU accesses its local memory faster and with lower latency. Because of the Interconnect module, all of the memory forms a single pool, so a CPU can also access remote memory, but doing so is slower and has higher latency than accessing local memory.
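The local-versus-remote distinction can be made concrete with a minimal user-space sketch using libnuma (an assumption on my part; the article itself does not use it). It looks up the node local to the current CPU and allocates memory on the local node and, if one exists, on a remote node; accesses to the remote allocation go over the interconnect and are slower. Build with -lnuma.

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <sched.h>
#include <numa.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    int cpu = sched_getcpu();                 /* CPU we are running on  */
    int local_node = numa_node_of_cpu(cpu);   /* its local memory node  */
    int nodes = numa_max_node() + 1;          /* number of memory nodes */

    printf("running on CPU %d, local node %d, %d node(s) total\n",
           cpu, local_node, nodes);

    /* Allocate one page on the local node and, if possible, on a remote node. */
    size_t sz = 4096;
    void *local_mem  = numa_alloc_onnode(sz, local_node);
    void *remote_mem = nodes > 1 ?
            numa_alloc_onnode(sz, (local_node + 1) % nodes) : NULL;

    printf("local allocation:  %p\n", local_mem);
    printf("remote allocation: %p\n", remote_mem);

    numa_free(local_mem, sz);
    if (remote_mem)
        numa_free(remote_mem, sz);
    return 0;
}
```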
