Synchronization governs the interaction between processes, and between processes and system resources. Because the Linux kernel is multitasking, it must provide synchronization mechanisms so that concurrently running processes coordinate correctly.
The Linux kernel offers many synchronization mechanisms. Broadly, they fall into two categories: synchronization at the application layer, that is, coordination between user-space threads, and synchronization inside the kernel itself. This article looks at both, with emphasis on the kernel side.
When a thread makes a system call, it enters kernel state and can use the kernel's synchronization primitives directly. A typical pattern looks like this: thread A enters the kernel and requests an operation (taking a semaphore, for example); if the operation cannot complete immediately, the kernel puts A to sleep, and when the operation completes, the kernel wakes A so it can continue executing.
The synchronization mechanism in the kernel is, in essence, a communication mechanism between threads: coordination between them is achieved through these primitives.
To guarantee correctness and consistency, the Linux kernel uses wait queues (sometimes described as blocking queues) when processes must wait on one another. The idea is simple: a process that needs some condition to become true does not busy-wait; it is placed on a wait queue and put to sleep. When another process makes the condition true, it wakes the sleepers on that queue, and only then do they continue.
In the kernel, this is abstracted as a wait queue object, created through a dedicated initialization function, that holds a list of waiting tasks. A process that must wait adds itself to the list and sleeps; the process that satisfies the condition walks the list and issues a wakeup. In other words, a blocked process must be notified before it can continue performing its task, which is exactly why the wait queue counts as a synchronization mechanism.
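To make this concrete, here is a minimal sketch of the kernel's wait-queue API as it might appear inside a kernel module (the names `my_wq`, `condition_ready`, `consumer`, and `producer` are hypothetical placeholders for illustration, not kernel symbols):

```c
#include <linux/wait.h>
#include <linux/sched.h>

static DECLARE_WAIT_QUEUE_HEAD(my_wq);  /* create the wait queue head */
static int condition_ready;             /* the condition waiters sleep on */

/* Consumer side: sleep until the condition becomes true. */
static int consumer(void)
{
    /* Sleeps (TASK_INTERRUPTIBLE) until condition_ready is non-zero;
     * returns non-zero if interrupted by a signal before that. */
    if (wait_event_interruptible(my_wq, condition_ready != 0))
        return -ERESTARTSYS;
    /* ... condition now holds, proceed ... */
    return 0;
}

/* Producer side: make the condition true, then wake the sleepers. */
static void producer(void)
{
    condition_ready = 1;
    wake_up_interruptible(&my_wq);  /* wake tasks sleeping on my_wq */
}
```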
Semaphores can be used to coordinate sending and receiving between processes. A semaphore is essentially a counter with sleep-and-wake semantics: acquiring it (the down operation) decrements the counter, and if the counter is already zero the caller sleeps until another task releases the semaphore; releasing it (the up operation) increments the counter and wakes one waiter. While a process holds the semaphore, it has exclusive use of the resource the semaphore protects, and no other process can take that away from it.
When a thread holds the semaphore, it can safely communicate with other threads through shared variables; the other threads touch the same shared variables, but only after they in turn acquire the semaphore.
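A short sketch of the in-kernel semaphore API follows (again assuming a kernel-module context; `my_sem`, `init_example`, and `worker` are hypothetical names):

```c
#include <linux/semaphore.h>

static struct semaphore my_sem;

static int init_example(void)
{
    sema_init(&my_sem, 1);  /* initial count of 1: a binary semaphore */
    return 0;
}

static int worker(void)
{
    /* down_interruptible() decrements the count, sleeping if it is
     * already zero; it returns non-zero if a signal arrives first. */
    if (down_interruptible(&my_sem))
        return -ERESTARTSYS;

    /* ... critical section: touch the shared variable ... */

    up(&my_sem);  /* increment the count and wake one waiter */
    return 0;
}
```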
A mutex is aimed mainly at protecting system resources. Within a process, all threads share the same address space, so every thread can reach the shared data; a mutex ensures that only one of them is inside the critical section at any moment, while the others must wait their turn. A mutex also has strict ownership: the task that locked it is the only one allowed to unlock it. Unlike a spinlock, a task that finds the mutex taken does not burn the CPU spinning; it sleeps until the owner releases the lock. That is the mutex's big advantage for critical sections that may be held for a while.
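The kernel's mutex API is compact; a minimal sketch (with the hypothetical names `my_mutex`, `shared_counter`, and `bump_counter`):

```c
#include <linux/mutex.h>

static DEFINE_MUTEX(my_mutex);  /* statically define and initialize */
static long shared_counter;

static void bump_counter(void)
{
    mutex_lock(&my_mutex);    /* sleeps if another task holds the lock */
    shared_counter++;         /* critical section: one task at a time */
    mutex_unlock(&my_mutex);  /* only the owning task may unlock */
}
```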
The emergence of message queues greatly expanded inter-process communication. Besides the synchronous mechanisms above, the kernel also supports an asynchronous mechanism: the message queue. Linux implements message queues in the kernel, in both the System V and the POSIX flavors, and exposes them to user space, so the easiest way to understand them is to start from the application layer.
First of all, what exactly is a message queue?
A message queue is a special queue that satisfies the synchronization needs of multiple application threads: it provides asynchronous communication between an application and other processes or threads. The sender deposits a message and moves on without waiting; the receiver picks the message up whenever it is ready. Whenever we need this kind of decoupled, asynchronous communication, a message queue is the natural fit.
So how do we create a message queue? With the System V API, a queue is created with msgget() and messages are exchanged with msgsnd()/msgrcv(); the POSIX API uses mq_open(), mq_send(), and mq_receive() instead.
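Here is a minimal user-space sketch using the System V calls (the key path "/tmp", project id 'A', and struct name are arbitrary choices for illustration):

```c
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>

/* The long mtype field is mandatory; the payload is free-form. */
struct msgbuf_ex {
    long mtype;
    char mtext[64];
};

int main(void)
{
    key_t key = ftok("/tmp", 'A');               /* derive an IPC key */
    int msqid = msgget(key, IPC_CREAT | 0666);   /* create or open the queue */
    struct msgbuf_ex msg = { .mtype = 1 };

    strcpy(msg.mtext, "hello");
    msgsnd(msqid, &msg, sizeof(msg.mtext), 0);   /* enqueue, blocks if full */

    msgrcv(msqid, &msg, sizeof(msg.mtext), 1, 0); /* dequeue a type-1 message */
    printf("received: %s\n", msg.mtext);

    msgctl(msqid, IPC_RMID, NULL);               /* remove the queue */
    return 0;
}
```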
With shared memory, the memory itself carries no synchronization, so we pair it with a shared lock. Because the lock lives in memory shared with other processes, acquiring it may mean waiting for another process to release it first.
Note that merely marking the shared data volatile is not enough: volatile only stops the compiler from caching the value in a register, and it provides neither atomicity nor ordering. To prevent two processes from racing on the shared data and to actually keep it synchronized, every access must go through the shared lock.
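A common way to build such a lock is a process-shared pthread mutex placed inside the shared mapping itself. A minimal sketch follows (the struct `shared_region` and the helper names are hypothetical; error checks on mmap are omitted for brevity):

```c
#include <pthread.h>
#include <sys/mman.h>

/* The lock lives next to the data it protects, inside anonymous
 * shared memory that is visible to forked child processes. */
struct shared_region {
    pthread_mutex_t lock;
    int value;
};

static struct shared_region *make_shared_region(void)
{
    struct shared_region *r = mmap(NULL, sizeof(*r),
                                   PROT_READ | PROT_WRITE,
                                   MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    /* Mark the mutex usable across processes, not just threads. */
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&r->lock, &attr);
    pthread_mutexattr_destroy(&attr);
    return r;
}

/* Any process that maps the region synchronizes like this: */
static void bump(struct shared_region *r)
{
    pthread_mutex_lock(&r->lock);   /* may wait on another process */
    r->value++;
    pthread_mutex_unlock(&r->lock);
}
```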
Because a shared lock crosses process boundaries, every access to the protected data must go through an acquire/release handshake. Within a single process, a simpler way to serialize access to a shared resource is to hand the work to a thread pool.
A thread pool keeps an internal task queue that effectively plays the role of the lock. When you want the shared resource touched, you do not grab a lock yourself; you submit a request to the queue. A worker thread dequeues the request, performs the operation, and hands you back a response.
The thread pool is a very useful thread-management tool: it lets multiple tasks run concurrently while reducing contention and the risk of deadlock between threads. One of its most important properties is that it reuses a bounded set of threads, making efficient use of the system's memory and improving throughput.
Using a thread pool is simple: you hand the tasks to be executed to the pool, and as soon as a worker thread is free, the task runs. This brings several benefits: threads are created once and reused, the cost of constantly creating and destroying threads disappears, and the number of concurrent threads stays bounded. A sketch of the mechanics follows.
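Here is a minimal fixed-size thread pool built on pthreads (all names such as `pool_submit` and `worker`, and the queue sizes, are hypothetical choices for illustration):

```c
#include <pthread.h>

#define QUEUE_CAP 16
#define NWORKERS  4

typedef void (*task_fn)(void *arg);

struct task { task_fn fn; void *arg; };

/* A fixed-size ring buffer of tasks, drained by the workers. */
static struct task queue[QUEUE_CAP];
static int head, tail, count;
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;

/* Producer side: block while the queue is full, then enqueue. */
void pool_submit(task_fn fn, void *arg)
{
    pthread_mutex_lock(&qlock);
    while (count == QUEUE_CAP)
        pthread_cond_wait(&not_full, &qlock);
    queue[tail] = (struct task){ fn, arg };
    tail = (tail + 1) % QUEUE_CAP;
    count++;
    pthread_cond_signal(&not_empty);   /* wake one idle worker */
    pthread_mutex_unlock(&qlock);
}

/* Worker side: block while the queue is empty, then run one task. */
static void *worker(void *unused)
{
    for (;;) {
        pthread_mutex_lock(&qlock);
        while (count == 0)
            pthread_cond_wait(&not_empty, &qlock);
        struct task t = queue[head];
        head = (head + 1) % QUEUE_CAP;
        count--;
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&qlock);
        t.fn(t.arg);                   /* execute outside the lock */
    }
    return NULL;
}

void pool_start(void)
{
    pthread_t tid;
    for (int i = 0; i < NWORKERS; i++)
        pthread_create(&tid, NULL, worker, NULL);
}
```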
Two application-facing synchronization mechanisms have been introduced above; now let's look at synchronization in kernel state. The kernel relies on four main methods: atomic operations, spinlocks, semaphores, and mutexes.
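Semaphores and mutexes were sketched earlier. Of the remaining two, the spinlock is the workhorse for short critical sections, especially where sleeping is forbidden (such as in interrupt context). A minimal sketch, with hypothetical names:

```c
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(my_lock);
static int shared_state;

/* Disable local interrupts while held, in case interrupt handlers
 * also touch shared_state; the lock is a pure busy-wait and never
 * sleeps, so the critical section must stay short. */
static void update_state(int v)
{
    unsigned long flags;

    spin_lock_irqsave(&my_lock, flags);
    shared_state = v;
    spin_unlock_irqrestore(&my_lock, flags);
}
```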
From the analysis above, it is clear that synchronization is a complex problem. So how is synchronization actually carried out in kernel state?
First of all, multiple processes can be executing kernel code at the same time, and they can reach the same shared kernel resources; every access to a resource that another process may also request therefore has to be synchronized.
When a process blocks on a resource, the kernel places it on that resource's wait queue and the scheduler picks another runnable process to execute. When the resource is released, a waiting process is taken off the wait queue and made runnable again, so blocked processes do not interfere with one another while they wait. Which waiter runs first can be influenced by the priorities the threads set for themselves.