"Run Linux Kernel 2: Debugging and Case Analysis" 1- Concurrency and Synchronization [Copy link]

Thanks to eeworld and the BarRunBa community for the opportunity to read this book. After receiving it, I read the first chapter, which covers concurrency and synchronization in Linux. Concurrency and synchronization are related but distinct concepts; in computer science they are usually discussed together, because multitasking and multithreading require implementing and coordinating both, and that is exactly what this chapter is about. The Linux kernel provides a variety of mechanisms for protecting concurrent accesses: atomic operations, spin locks, semaphores, mutexes, read-write locks, RCU, and so on. Below I introduce these mechanisms one by one and include test programs for some of them.

1. Atomic Operations

Atomic operations guarantee that an instruction sequence executes as a single, uninterruptible unit. For example, even a simple i++ may be executed concurrently on a multi-processor system, or be interrupted on a single processor. i++ is really three steps: "read - modify - write back". If atomicity is not guaranteed, another processor or an interrupt can preempt the sequence after the read, and the update is lost. For such operations we must therefore complete "read - modify - write back" atomically (without interruption). From reading the book I took away the main trade-off of atomic operations: the advantage is low overhead, and the disadvantage is that they are not applicable to complex data structures.

2. Spin lock

If the critical section contains only a single variable, an atomic variable solves the problem. In most cases, however, a critical section operates on a set of data with a "read - modify - write" pattern, so atomic operations are not suitable; this is the disadvantage mentioned above. Spin locks solve this problem well. A spin lock can be held by only one kernel code path at a time. If another kernel code path tries to acquire a spin lock that is already held, it must wait (spin) until the holder releases the lock. If the lock is not contended by other kernel code paths, it can be acquired immediately.

Spin lock features:

  1. Operating systems have two kinds of lock mechanisms: busy-waiting and sleep-waiting. A spin lock is the former: when the lock cannot be acquired, the caller keeps retrying until it succeeds.
  2. Only one code path can hold the lock at a time.
  3. The critical section must be short, otherwise the CPUs busy-waiting outside are wasted. In particular, sleeping inside the critical section is not allowed.
  4. Spin locks can be used in interrupt context.

Note:

While a spin lock protects a critical section, no interrupt (hard or soft) on that CPU may try to take the same lock. Otherwise the following happens: a code path acquires the spin lock, a hardware interrupt arrives, and the interrupt handler also tries to acquire the lock. The handler can only busy-wait, but the lock holder can never release the lock, because it has been preempted by the interrupt; the handler spins forever, which is a deadlock. For this situation the Linux kernel provides spin lock variants such as spin_lock_irq(), which disables local-processor interrupts before acquiring the lock.

Test program (it deliberately acquires the same spin lock twice, without releasing it in between, to trigger a lockdep report):

#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/kthread.h>
#include <linux/freezer.h>
#include <linux/delay.h>
#include <linux/mm.h>   /* alloc_pages() / __free_pages() */

static DEFINE_SPINLOCK(hack_spinA);      /* the spin lock under test */
static struct page *page;                /* pages allocated inside the critical section */
static struct task_struct *lock_thread;  /* kernel thread that exercises the lock */

static int nest_lock(void)
{
 int order = 5;

 spin_lock(&hack_spinA);
 /* deliberate bug #1: a GFP_KERNEL allocation may sleep under a spin lock */
 page = alloc_pages(GFP_KERNEL, order);
 if (!page) {
    printk("cannot alloc pages\n");
    spin_unlock(&hack_spinA);   /* do not leak the lock on the error path */
    return -ENOMEM;
 }

 /* deliberate bug #2: acquire the lock a second time without releasing it
  * (recursive acquisition leads to self-deadlock, which lockdep reports) */
 spin_lock(&hack_spinA);
 msleep(10);                    /* deliberate bug #3: sleeping under a spin lock */
 __free_pages(page, order);
 spin_unlock(&hack_spinA);
 spin_unlock(&hack_spinA);

 return 0;
}

static int lockdep_thread(void *nothing)
{
 set_freezable();
 set_user_nice(current, 0);

 while (!kthread_should_stop()) {
    msleep(10);
    nest_lock();
 }
 return 0;
}

static int __init my_init(void)
{
    lock_thread = kthread_run(lockdep_thread, NULL, "lockdep_test");
    if (IS_ERR(lock_thread)) {
        printk("create kthread fail\n");
        return PTR_ERR(lock_thread);
    }

    return 0;
}
static void __exit my_exit(void)
{
 kthread_stop(lock_thread);
}
MODULE_LICENSE("GPL");
module_init(my_init);
module_exit(my_exit);

3. MCS lock

The MCS lock is an optimization of the spin lock. Its purpose is to eliminate the cache-line bouncing that ordinary spin locks suffer when many CPUs spin on the same lock word, while preserving queued (FIFO) ordering. The idea of the MCS algorithm is that each lock applicant spins only on a variable local to its own CPU, not on a global one; in the Linux kernel the waiters are managed as a per-CPU linked queue. (The book illustrates the acquisition process with a flowchart, which is not reproduced here.)

4. Semaphores and mutexes

A semaphore allows a process to go to sleep while it waits. In simple terms, a semaphore is a counter that supports two operation primitives: the P operation and the V operation. It is a mechanism for coordinating access by multiple processors to a shared resource in a parallel-processing environment. A mutex is used for mutual-exclusion operations; anyone who has studied real-time operating systems will be familiar with it, so I will not describe it in detail.

Semaphore features:

  1. Allows the waiting process to enter the sleep state, i.e. sleep-waiting.
  2. As long as enough resources are available, multiple processes can hold the semaphore at the same time.

5. RCU

RCU is designed to address the poor cache behavior caused by multiple CPUs contending for shared variables under the earlier synchronization primitives, read-write semaphores for example. The goal of the RCU mechanism is that reader threads incur no synchronization overhead, or an overhead small enough to be negligible: no extra locking, no atomic-operation instructions, no memory-barrier instructions — essentially unimpeded access. The work that does require synchronization is handed over to the writer thread, which defers reclaiming old data until existing readers have finished. This reduces read-side overhead and improves overall performance.

Summary: Reading this chapter gave me a deeper understanding of the relationship between concurrency and synchronization in Linux, made me more familiar with the kernel's synchronization operations, and introduced me to some new ones, such as RCU.

This post is from Embedded System

Replies:

Jacktang (2024-3-19 07:45): The synchronous operation of the Linux system is a learning focus.

OP (2024-3-25 09:08), replying to Jacktang: Yes