Differences:
1. A thread is the smallest unit of program execution, while a process is the smallest unit to which the operating system allocates resources.
2. A process consists of one or more threads; threads are different execution paths of the code within a process.
3. Thread context switching is much faster than process context switching.
4. Process switching consumes the most resources and is the least efficient; thread switching consumes moderate resources and is moderately efficient.
5. Each process has its own stack, and stacks are not shared between processes; each thread has its own stack but shares the heap with the other threads in its process.
The operating environment of this tutorial: Windows 7 system, Go version 1.18, Dell G3 computer.
Process
Task scheduling in most operating systems (Windows, Linux) uses preemptive, time-slice-based (round-robin) scheduling.
The scheduling works as follows:
- A thread in a process runs for a few milliseconds of its time slice, and then the operating system kernel schedules it out.
- A hardware timer interrupts the processor; the kernel forces the thread to pause and saves the thread's registers to memory.
- The kernel looks at the thread list to decide which thread to run next.
- The kernel then restores that thread's registers from memory and resumes its execution so it can continue with its task.
This mechanism guarantees that every thread gets its turn to run. Because the CPU executes so quickly and the time slices are so short, it switches between tasks rapidly, which gives the impression that multiple tasks are running at the same time. This is what we call concurrency.
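As a rough illustration of the same idea at the goroutine level, here is a minimal sketch that pins the Go runtime to a single logical processor and starts two goroutines; their output interleaves even though only one of them can be running at any instant. The task names and sleep durations are made up for the example.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

func main() {
	// Force everything onto one logical processor so the two tasks
	// can only ever run by taking turns (concurrency, not parallelism).
	runtime.GOMAXPROCS(1)

	var wg sync.WaitGroup
	for _, name := range []string{"task-A", "task-B"} {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			for i := 0; i < 3; i++ {
				fmt.Println(name, "step", i)
				time.Sleep(10 * time.Millisecond) // give up the processor
			}
		}(name)
	}
	wg.Wait()
}
```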
Each thread has its own stack, shares the heap with the other threads in its process, and is also scheduled by the operating system.
Multi-core refers to integrating multiple computing cores on one processor to improve computing power. In other words, there are multiple processing cores performing truly parallel computation, and each processing core corresponds to a kernel thread.
Hyper-threading goes one step further: one physical processing core is simulated as two logical processing cores, that is, two kernel threads. This is why the computers we see are usually dual-core/four-thread or quad-core/eight-thread, and why the operating system reports twice as many CPUs as there are physical cores; a dual-core/four-thread machine, for example, shows 4 CPUs.
For example, the MacBook Pro I am writing this article on has an i7 with 6 cores and 12 threads.
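A quick way to check this from Go itself is shown below; runtime.NumCPU reports the number of logical CPUs, which on a hyper-threaded machine is typically twice the number of physical cores (the exact figures depend on the hardware).

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// On a 6-core / 12-thread CPU this typically prints 12.
	fmt.Println("logical CPUs:", runtime.NumCPU())
	// GOMAXPROCS(0) only reports the current setting without changing it;
	// by default it matches the number of logical CPUs.
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
}
```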
Programs generally do not use kernel threads directly. Instead, they use a high-level interface to kernel threads, the lightweight process (LWP), which is what we usually call a thread.
In traditional applications, a thread is usually created for each network request to execute the business logic; if there are multiple requests, multiple threads are created.
If a thread runs into time-consuming I/O, it stays blocked the whole time. When many threads sit idle like this (waiting for the I/O to finish before they can continue), resources are not fully utilized and the system's throughput drops.
The most common kind of time-consuming I/O is something like a JDBC call: the CPU simply waits for the I/O operation to return its data, and during that time the thread performs no computation at all and just sits idle. Using too many threads at the same time also adds context-switching overhead.
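The sketch below mimics that situation in Go. Each "request" makes a slow blocking call, stubbed here with time.Sleep since no real database driver is involved; while the call blocks, the goroutine does no CPU work, and in a thread-per-request model each of these waits would tie up an entire OS thread.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// slowQuery stands in for a blocking I/O call such as a database query.
// The goroutine that calls it sits idle until the "result" comes back.
func slowQuery(id int) string {
	time.Sleep(100 * time.Millisecond) // simulated I/O wait, no CPU used
	return fmt.Sprintf("result for request %d", id)
}

func main() {
	var wg sync.WaitGroup
	start := time.Now()
	for id := 1; id <= 100; id++ {
		wg.Add(1)
		go func(id int) { // one lightweight goroutine per request
			defer wg.Done()
			_ = slowQuery(id)
		}(id)
	}
	wg.Wait()
	// All 100 requests finish in roughly one query's latency,
	// because the waits overlap instead of occupying 100 OS threads.
	fmt.Println("handled 100 requests in", time.Since(start))
}
```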
One solution to the above problems is the coroutine. When a coroutine hits blocking I/O, the process looks like this (see the sketch after this list):
- When I/O blocking occurs, the coroutine scheduler steps in.
- The coroutine yields immediately (it gives up the processor voluntarily), and its state is recorded on its current stack.
- Once the blocking operation completes, the stack is restored onto a thread and the result of the blocking call is handed to that thread to continue running.
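A minimal Go sketch of that park-and-resume behaviour: the main goroutine blocks on a channel receive (standing in for an I/O wait), the scheduler is free to run other work in the meantime, and the goroutine is resumed once the result arrives. The channel name and delay are invented for the example.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	result := make(chan string)

	go func() {
		// Simulated I/O: once the "blocking" work completes,
		// the result is delivered and the waiting goroutine is woken up.
		time.Sleep(50 * time.Millisecond)
		result <- "data from I/O"
	}()

	fmt.Println("waiting: this goroutine is parked, not burning a thread")
	fmt.Println("resumed with:", <-result) // the scheduler resumes us here
}
```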
A coroutine running on a thread is called a fiber. For example, the `go` keyword in Golang actually opens a fiber and lets the `func` body run on it.
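In code that looks like the snippet below: `go` starts a new goroutine and the supplied `func` body runs on it; the `sync.WaitGroup` is only there so that `main` does not exit before the goroutine finishes.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	wg.Add(1)

	// `go` starts a new goroutine; the func literal's body runs on it.
	go func() {
		defer wg.Done()
		fmt.Println("running inside a goroutine")
	}()

	wg.Wait() // keep main alive until the goroutine has finished
}
```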
A coroutine's suspension is controlled entirely by the program and happens in user mode, whereas a thread's blocking and switching is handled by the operating system kernel and happens in kernel mode.
As a result, the overhead of a coroutine is far smaller than that of a thread, and there is no kernel context-switching overhead.
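The low per-coroutine cost is easy to see in Go. The sketch below starts 100,000 goroutines, something that would be impractical with OS threads at roughly 1MB of stack each, and prints a rough memory figure from the runtime; the exact numbers will vary by machine and Go version.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	const n = 100_000
	var wg sync.WaitGroup
	block := make(chan struct{})

	wg.Add(n)
	for i := 0; i < n; i++ {
		go func() {
			defer wg.Done()
			<-block // park each goroutine until we release them
		}()
	}

	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	// With ~2KB initial stacks, 100,000 goroutines need on the order of
	// a couple hundred MB, not the ~100GB that 1MB thread stacks would.
	fmt.Printf("goroutines: %d, heap+stacks in use: %d MB\n",
		runtime.NumGoroutine(), (m.HeapInuse+m.StackInuse)/1024/1024)

	close(block)
	wg.Wait()
}
```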
| | Thread | Coroutine |
|---|---|---|
| Stack size | Starts at 1MB, fixed and immutable | Starts at roughly 2KB and grows as needed |
| Scheduling | Done by the OS kernel | Done by user code |
| Switching cost | Involves a mode switch (from user mode to kernel mode) and refreshing 16 registers, including PC and SP | Only three register values are modified: PC, SP and DX |
| Creation/destruction | High resource usage; frequent creation and destruction causes serious performance problems | Low resource usage; does not cause serious performance problems |
| Data synchronization | Needs locks and other mechanisms to guarantee data consistency and visibility | In a single-threaded coroutine model no locking is needed: only one coroutine writes variables at a time, and shared resources are controlled inside the coroutine simply by checking state, so execution is much more efficient than with threads |
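One caveat for Go in particular: goroutines can run in parallel on several OS threads, so the "no locking needed" row only applies to single-threaded coroutine schedulers. When multiple goroutines write shared data in Go, you still need a channel or a sync.Mutex, as in this small sketch:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		mu      sync.Mutex
		counter int
		wg      sync.WaitGroup
	)

	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock() // goroutines may run in parallel, so guard the write
			counter++
			mu.Unlock()
		}()
	}

	wg.Wait()
	fmt.Println("counter:", counter) // reliably prints 1000
}
```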