1. Principle of synchronization
The JVM specification stipulates that the JVM implements method synchronization and code-block synchronization by entering and exiting a Monitor object, but the implementation details of the two differ. Code-block synchronization is implemented with the monitorenter and monitorexit instructions, while method synchronization is implemented another way; the details are not spelled out in the JVM specification, although method synchronization can also be achieved with these two instructions. After compilation, a monitorenter instruction is inserted at the beginning of the synchronized block, and a monitorexit is inserted at the end of the block and at each exception exit point. The JVM must ensure that every monitorenter is paired with a corresponding monitorexit. Every object has a monitor associated with it; when the monitor is held, the object is in a locked state. When a thread executes the monitorenter instruction, it attempts to obtain ownership of the monitor associated with the object, that is, it attempts to acquire the object's lock.
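The two forms can be sketched as below; running `javap -c` on the compiled class shows monitorenter/monitorexit around the block body, while the synchronized method carries only a flag (ACC_SYNCHRONIZED) in its method metadata. The class name and counter are illustrative only; both methods deliberately lock the same monitor (`this`) so they exclude each other.

```java
public class SyncDemo {
    private int count = 0;

    // Block synchronization: compiles to monitorenter/monitorexit around
    // the body (plus an extra monitorexit on the exception path).
    public void incrementBlock() {
        synchronized (this) {
            count++;
        }
    }

    // Method synchronization: no explicit monitor instructions; the
    // ACC_SYNCHRONIZED flag tells the JVM to enter/exit the monitor of
    // `this` implicitly on call and return.
    public synchronized void incrementMethod() {
        count++;
    }

    public int getCount() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        SyncDemo demo = new SyncDemo();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 10_000; i++) demo.incrementBlock(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 10_000; i++) demo.incrementMethod(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Both paths use the monitor of `this`, so no increments are lost.
        System.out.println(demo.getCount()); // 20000
    }
}
```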
2. Java object header
The lock used by synchronized is stored in the Java object header. If the object is an array, the virtual machine uses 3 words (word widths) to store the object header; if the object is not an array, it uses 2 words. In a 32-bit virtual machine, one word width equals 4 bytes, that is, 32 bits.
By default, the Mark Word in the Java object header stores the object's HashCode, GC generational age, and lock flag bits. In a 32-bit JVM, the default Mark Word layout is: a 25-bit object hashcode, a 4-bit generational age, a 1-bit biased-lock flag, and a 2-bit lock flag.
At runtime, the data stored in the Mark Word changes as the lock flag bits change. The Mark Word may come to store four kinds of data: the unlocked state (hashcode), a biased lock (thread ID and epoch), a lightweight lock (a pointer to the lock record in the stack), or a heavyweight lock (a pointer to the monitor).
3. Several types of locks
Blocking and waking a thread requires the CPU to switch from user mode to kernel mode, and doing so frequently is very taxing on the CPU.
To reduce the performance cost of acquiring and releasing locks, Java SE 1.6 introduced "biased locks" and "lightweight locks", so in Java SE 1.6 a lock has four states: unlocked, biased, lightweight, and heavyweight, which escalate gradually as contention increases.
Locks can be upgraded but not downgraded: once a biased lock has been upgraded to a lightweight lock, it cannot revert to a biased lock. The purpose of this upgrade-only policy is to improve the efficiency of acquiring and releasing locks.
3.1 Biased locks
The HotSpot authors found through research that in most cases locks are not only free of multi-thread contention but are also always acquired repeatedly by the same thread. The purpose of biased locking is to eliminate the (CAS) overhead of lock reacquisition once a thread has obtained the lock, making the lock appear "biased" toward that thread.
Further understanding of biased lock
Releasing a biased lock requires no action at all: a Mark Word that has been marked biased retains the biased state, so even repeated locking and unlocking by the same thread incurs no overhead.
On the other hand, a biased lock is terminated more readily than a lightweight lock: a lightweight lock is upgraded to a heavyweight lock only when lock contention occurs, whereas a biased lock is generally upgraded to a lightweight lock as soon as a different thread requests the lock. This means that if an object is first locked and unlocked by thread 1 and then locked and unlocked by thread 2, the bias still fails even though there was no lock conflict in the process; the difference is that in this case the object first reverts to the unlocked state and then takes a lightweight lock, as shown in the figure:
In addition, the JVM also optimizes the case where multiple threads lock the same objects but there is no lock contention. It sounds a bit odd, but this does occur in real applications: besides mutual exclusion, threads may also be in a synchronization relationship, and two synchronized threads (one after the other) contending for a shared object's lock are likely to have no actual conflict. For this case the JVM uses an epoch to represent a timestamp of the lock bias (actually generating a timestamp is rather expensive, so think of it as a timestamp-like identifier). The official explanation of the epoch is:

"A similar mechanism, called bulk rebiasing, optimizes situations in which objects of a class are locked and unlocked by different threads but never concurrently. It invalidates the bias of all instances of a class without disabling biased locking. An epoch value in the class acts as a timestamp that indicates the validity of the bias. This value is copied into the header word upon object allocation. Bulk rebiasing can then efficiently be implemented as an increment of the epoch in the appropriate class. The next time an instance of this class is going to be locked, the code detects a different value in the header word and rebiases the object towards the current thread."

The thread ID of the lock bias is stored in the lock record in the stack frame. From then on, the thread needs no CAS operation to lock or unlock when entering and exiting the synchronized block; it simply tests whether the Mark Word in the object header holds a bias pointing to the current thread. If the test succeeds, the thread has obtained the lock. If the test fails, the thread then tests whether the biased-lock flag in the Mark Word is set to 1 (indicating the lock is currently biased). If it is not set, CAS is used to compete for the lock; if it is set, the thread attempts to use CAS to point the object header's bias to itself.
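The fast path above can be modeled with a toy sketch. This is not the real HotSpot Mark Word layout: an AtomicLong stands in for the header, a thread ID stands in for the bias, and 0 means "unbiased"; all names are illustrative.

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy model of the biased-lock fast path (not the real HotSpot layout):
// the "mark word" holds the ID of the thread the lock is biased toward,
// or 0 if it is not yet biased.
public class BiasedLockSketch {
    private final AtomicLong biasOwner = new AtomicLong(0); // hypothetical stand-in for the Mark Word

    /** Returns true if the lock was taken on the fast path. */
    public boolean tryBiasedLock() {
        long self = Thread.currentThread().getId();
        long owner = biasOwner.get();
        if (owner == self) {
            // Already biased toward us: a plain read, no CAS at all.
            return true;
        }
        if (owner == 0) {
            // Unbiased: a single CAS biases the lock toward the current thread.
            return biasOwner.compareAndSet(0, self);
        }
        // Biased toward another thread: the real JVM would revoke the bias
        // at a safepoint and possibly upgrade to a lightweight lock.
        return false;
    }
}
```

Note how reacquisition by the owning thread costs only a volatile read, which is the entire point of biased locking.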
Revocation of biased lock
Biased locks use a mechanism that releases the lock only when contention appears, so when another thread tries to compete for a biased lock, the thread holding it releases the lock. Revoking a biased lock requires waiting for a global safepoint (a point at which no bytecode is executing). The JVM first pauses the thread holding the biased lock and checks whether that thread is still alive. If the thread is not active, the object header is set to the unlocked state. If the thread is still alive, the stack holding the biased lock is traversed and the lock records of the biased object are examined; the lock records in the stack and the Mark Word in the object header are then either re-biased to another thread, reverted to the unlocked state, or marked to indicate that the object is unsuitable for biased locking. Finally the suspended thread is resumed. In the figure below, thread 1 demonstrates the biased-lock initialization process, and thread 2 demonstrates the biased-lock revocation process.
Configuring biased locks
Turning off biased locking: biased locking is enabled by default in Java 6 and Java 7, but it is activated only a few seconds after the application starts. If necessary, you can remove the delay with the JVM parameter -XX:BiasedLockingStartupDelay=0. If you are sure that the locks in your application are usually contended, you can turn off biased locking with the JVM parameter -XX:-UseBiasedLocking; the program then enters the lightweight-lock state by default.
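For reference, the two flags above are passed on the `java` command line; `MyApp` is a placeholder class name.

```shell
# Remove the biased-locking startup delay (bias from the start):
java -XX:BiasedLockingStartupDelay=0 MyApp

# Disable biased locking entirely; locks then start out as
# lightweight locks under contention:
java -XX:-UseBiasedLocking MyApp
```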
3.2 Spin lock
Blocking and waking a thread requires the CPU to switch from user mode to kernel mode, and doing so frequently is a heavy burden on the CPU. At the same time, we can observe that many object locks are held only for a very short time, e.g. while incrementing an integer; blocking and waking a thread for such a short period is clearly not worthwhile, so spin locks were introduced.
The so-called "spin" is to let the thread execute a meaningless loop, and then compete for the lock again after the loop ends. If there is no competition to continue the loop, the thread will always be in the running state during the loop, but JVM-based threads Scheduling will transfer time slices, so other threads still have the opportunity to apply for and release locks.
A spin lock saves the time and space overhead (queue maintenance, etc.) of a blocking lock, but prolonged spinning becomes "busy waiting", and busy waiting is clearly worse than blocking. Therefore the number of spins is generally kept within some bound, such as 10 or 100; beyond that bound the spin lock is upgraded to a blocking lock.
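The bound-then-block idea can be sketched as a minimal user-level spin lock. The class, the spin limit, and the park duration are all illustrative; a real JVM would inflate to a queue-based blocking lock rather than merely parking.

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.concurrent.locks.LockSupport;

// Minimal sketch of a bounded spin lock: spin a fixed number of times,
// then briefly park the thread as a stand-in for "upgrading" to a
// blocking lock.
public class BoundedSpinLock {
    private static final int SPIN_LIMIT = 100; // illustrative bound
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread self = Thread.currentThread();
        int spins = 0;
        while (!owner.compareAndSet(null, self)) {
            if (++spins > SPIN_LIMIT) {
                // Busy waiting is no longer worthwhile: stop burning CPU
                // and sleep briefly before retrying.
                LockSupport.parkNanos(1_000);
                spins = 0;
            }
        }
    }

    public void unlock() {
        // Only the owner may release the lock.
        owner.compareAndSet(Thread.currentThread(), null);
    }
}
```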
Regarding the choice of spin period, HotSpot considers the ideal duration to be that of one thread context switch, but this has not been achieved; currently it merely pauses for a few CPU cycles via assembly. Besides the spin period, HotSpot applies many other spin optimization strategies, as follows:
If the average load is lower than the number of CPUs, keep spinning.
If more than (CPUs/2) threads are already spinning, later threads block directly.
If a spinning thread finds that the Owner has changed, it defers the spin (extends the spin count) or enters blocking; if the CPU is in power-saving mode, it stops spinning.
The worst-case spin duration is the CPU's store latency (the time between CPU A storing a value and CPU B learning of it).
3.3 Lightweight lock
Lightweight lock locking
Before a thread executes a synchronized block, the JVM first creates space for a lock record in the current thread's stack frame and copies the Mark Word from the object header into the lock record; this copy is officially called the Displaced Mark Word. The thread then attempts a CAS to replace the Mark Word in the object header with a pointer to the lock record. If it succeeds, the current thread holds the lock; if it fails, the thread spins to acquire the lock. If spinning also fails, other threads are competing for the lock (two or more threads contending for the same lock), and the lightweight lock inflates into a heavyweight lock.
Lightweight Lock Unlocking
When unlocking a lightweight lock, an atomic CAS operation is used to replace the Displaced Mark Word back into the object header. If it succeeds, the synchronization is complete. If it fails, another thread has tried to acquire the lock, and the suspended thread must be woken while the lock is released. The figure below is a flow chart in which two threads compete for the lock at the same time, causing the lock to inflate.
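The lock/unlock CAS pair can be sketched with a toy model. An AtomicLong stands in for the object header, a thread ID stands in for the pointer to the stack lock record, and a ThreadLocal stands in for the lock record itself; the layout and names are illustrative, not the real HotSpot encoding.

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy model of lightweight lock/unlock (not the real HotSpot layout).
public class LightweightLockSketch {
    private static final long UNLOCKED = 0;           // neutral header value
    private final AtomicLong markWord = new AtomicLong(UNLOCKED);
    // "Displaced Mark Word": the header value saved in the lock record.
    private final ThreadLocal<Long> displaced = new ThreadLocal<>();

    public boolean tryLock() {
        long recordPtr = Thread.currentThread().getId(); // stand-in for a stack pointer
        long old = markWord.get();
        if (old != UNLOCKED) {
            return false; // contended: a real JVM would spin, then inflate
        }
        displaced.set(old); // copy the header into the lock record
        // CAS the header to point at our lock record.
        return markWord.compareAndSet(old, recordPtr);
    }

    public boolean unlock() {
        Long saved = displaced.get();
        if (saved == null) return false; // we do not hold the lock
        displaced.remove();
        // CAS the displaced header back; failure would mean another
        // thread inflated the lock while we held it.
        return markWord.compareAndSet(Thread.currentThread().getId(), saved);
    }
}
```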
3.4 Heavyweight lock
Heavyweight locks are also called object monitors (Monitor) in the JVM. A monitor is very similar to a Mutex in C: besides providing Mutex-style mutual exclusion, it also implements Semaphore-style functionality. That is, it contains at least a queue of threads competing for the lock and a blocking wait queue; the former is responsible for mutual exclusion, the latter for thread synchronization.
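The two queues can be seen in action with plain `synchronized` plus `wait`/`notify`: a thread that calls `wait()` moves to the wait set and releases the mutex; `notify()` moves it back to the entry set, where it competes for the mutex again. The class name and the flag are illustrative.

```java
public class MonitorQueuesDemo {
    private final Object monitor = new Object();
    private boolean ready = false;

    public void produce() {
        synchronized (monitor) {       // entry set: compete for mutual exclusion
            ready = true;
            monitor.notify();          // move one waiter back to the entry set
        }
    }

    public void consume() throws InterruptedException {
        synchronized (monitor) {
            while (!ready) {
                monitor.wait();        // wait set: release the lock and block
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        MonitorQueuesDemo demo = new MonitorQueuesDemo();
        Thread consumer = new Thread(() -> {
            try { demo.consume(); } catch (InterruptedException ignored) {}
        });
        consumer.start();
        Thread.sleep(100);   // give the consumer time to reach wait()
        demo.produce();
        consumer.join();     // completes only because notify() woke the consumer
    }
}
```

Note that `wait()` must be called inside a loop re-checking the condition, because a thread moved back to the entry set may find the condition changed again by the time it reacquires the mutex.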
4. Comparison of advantages and disadvantages of locks