
Java multithreading and concurrency interview questions (1~3 questions, with answers)



1. DelayQueue, a delayed unbounded blocking queue

Before discussing how DelayQueue is used and how it works, let's first introduce it: DelayQueue is an unbounded blocking queue from which an element can be taken only when its delay has expired. The head of the queue is the Delayed element whose delay expired furthest in the past.

The DelayQueue blocking queue comes up often in system development. For example, in a cache design, objects that have exceeded their idle time must be evicted from the cache; a task-scheduling system must know precisely when each task is due to run. In such cases we need threads to process a large amount of time-sensitive data.

If we used ordinary threads, we would have to traverse all the objects and check each one for expiration. That is inefficient, and the polling-based design also badly hurts timing accuracy: a task that should run at 12:00 might not run until 12:01, a serious drawback for systems with strict timing requirements. This is where DelayQueue comes in.

Below we introduce DelayQueue and then work through an example, providing an implementation of the Delayed interface along with sample code. DelayQueue is a BlockingQueue whose type parameter is Delayed.

(If you are not familiar with BlockingQueue, please learn about it first and then come back to this article.) Delayed extends the Comparable interface, with the delay value as the comparison key. For an implementation of Delayed, the deadline behind getDelay's return value should be fixed (effectively final), since it is used for ordering. Internally, DelayQueue is implemented with a PriorityQueue.

DelayQueue = BlockingQueue + PriorityQueue + Delayed

The key elements of DelayQueue are BlockingQueue, PriorityQueue, and Delayed: DelayQueue is a BlockingQueue implemented with a priority queue (PriorityQueue) whose comparison key is time.

Their basic definitions are as follows:

public interface Comparable<T> {
    public int compareTo(T o);
}

public interface Delayed extends Comparable<Delayed> {
    long getDelay(TimeUnit unit);
}

public class DelayQueue<E extends Delayed> implements BlockingQueue<E> {
    private final PriorityQueue<E> q = new PriorityQueue<E>();
}

DelayQueue is implemented internally with a priority queue. When DelayQueue's offer method is called, the Delayed object is added to the priority queue q, as follows:

public boolean offer(E e) {
    final ReentrantLock lock = this.lock;
    lock.lock();
    try {
        E first = q.peek();
        q.offer(e);
        // if the new element becomes the head, wake up waiting takers
        if (first == null || e.compareTo(first) < 0)
            available.signalAll();
        return true;
    } finally {
        lock.unlock();
    }
}

DelayQueue's take method peeks at the head of the priority queue q; if its delay has not yet expired, the thread awaits. As follows:

public E take() throws InterruptedException {
    final ReentrantLock lock = this.lock;
    lock.lockInterruptibly();
    try {
        for (;;) {
            E first = q.peek();
            if (first == null) {
                // queue is empty: wait until an element is offered
                available.await();
            } else {
                long delay = first.getDelay(TimeUnit.NANOSECONDS);
                if (delay > 0) {
                    // head has not expired yet: wait at most its remaining delay
                    long tl = available.awaitNanos(delay);
                } else {
                    E x = q.poll();
                    assert x != null;
                    if (q.size() != 0)
                        available.signalAll(); // wake up other takers
                    return x;
                }
            }
        }
    } finally {
        lock.unlock();
    }
}

An example application of DelayQueue

Ps: For the queue to know when to release an element, the elements stored in a DelayQueue must implement the Delayed interface. Delayed marks the object as a delayed object, giving every element stored in the DelayQueue an activation date. The interface requires the two methods shown below.
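For reference, these are the two required methods (getDelay is declared by Delayed itself; compareTo comes from the Comparable<Delayed> that Delayed extends):

long getDelay(TimeUnit unit); // remaining delay in the given unit; zero or negative means expired
int compareTo(Delayed o);     // orders elements, typically by remaining delay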

Below we use Delayed to implement a cache. It consists of three classes in total: Pair, DelayItem, and Cache.

Pair class:

public class Pair<K, V> {
    public K first;
    public V second;

    public Pair() {
    }

    public Pair(K first, V second) {
        this.first = first;
        this.second = second;
    }
}

The following is the implementation of the Delayed interface:

import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class DelayItem<T> implements Delayed {
    /** Base of nanosecond timings, to avoid wrapping */
    private static final long NANO_ORIGIN = System.nanoTime();

    /** Returns nanosecond time offset by origin */
    final static long now() {
        return System.nanoTime() - NANO_ORIGIN;
    }

    /**
     * Sequence number to break scheduling ties, and in turn to guarantee FIFO order among tied
     * entries.
     */
    private static final AtomicLong sequencer = new AtomicLong(0);

    /** Sequence number to break ties FIFO */
    private final long sequenceNumber;

    /** The time the task is enabled to execute, in nanoTime units */
    private final long time;

    private final T item;

    public DelayItem(T submit, long timeout) {
        this.time = now() + timeout;
        this.item = submit;
        this.sequenceNumber = sequencer.getAndIncrement();
    }

    public T getItem() {
        return this.item;
    }

    public long getDelay(TimeUnit unit) {
        long d = unit.convert(time - now(), TimeUnit.NANOSECONDS);
        return d;
    }

    public int compareTo(Delayed other) {
        if (other == this) // compare zero ONLY if same object
            return 0;
        if (other instanceof DelayItem) {
            DelayItem<?> x = (DelayItem<?>) other;
            long diff = time - x.time;
            if (diff < 0)
                return -1;
            else if (diff > 0)
                return 1;
            else if (sequenceNumber < x.sequenceNumber)
                return -1;
            else
                return 1;
        }
        long d = (getDelay(TimeUnit.NANOSECONDS) - other.getDelay(TimeUnit.NANOSECONDS));
        return (d == 0) ? 0 : ((d < 0) ? -1 : 1);
    }
}

The following is the implementation of Cache, including the put and get methods:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.DelayQueue;
import java.util.concurrent.TimeUnit;
import java.util.logging.Level;
import java.util.logging.Logger;

public class Cache<K, V> {
    private static final Logger LOG = Logger.getLogger(Cache.class.getName());

    private ConcurrentMap<K, V> cacheObjMap = new ConcurrentHashMap<K, V>();

    private DelayQueue<DelayItem<Pair<K, V>>> q = new DelayQueue<DelayItem<Pair<K, V>>>();

    private Thread daemonThread;

    public Cache() {
        Runnable daemonTask = new Runnable() {
            public void run() {
                daemonCheck();
            }
        };
        daemonThread = new Thread(daemonTask);
        daemonThread.setDaemon(true);
        daemonThread.setName("Cache Daemon");
        daemonThread.start();
    }

    private void daemonCheck() {
        if (LOG.isLoggable(Level.INFO))
            LOG.info("cache service started.");
        for (;;) {
            try {
                // blocks until the head item's delay has expired
                DelayItem<Pair<K, V>> delayItem = q.take();
                if (delayItem != null) {
                    // handle the expired object
                    Pair<K, V> pair = delayItem.getItem();
                    cacheObjMap.remove(pair.first, pair.second); // compare and remove
                }
            } catch (InterruptedException e) {
                if (LOG.isLoggable(Level.SEVERE))
                    LOG.log(Level.SEVERE, e.getMessage(), e);
                break;
            }
        }
        if (LOG.isLoggable(Level.INFO))
            LOG.info("cache service stopped.");
    }

    // add an object to the cache
    public void put(K key, V value, long time, TimeUnit unit) {
        cacheObjMap.put(key, value);
        // note: if the key was already cached, its old DelayItem stays in the
        // queue; when that item expires, the compare-and-remove in daemonCheck
        // fails against the new value, so the fresh entry is not evicted early
        long nanoTime = TimeUnit.NANOSECONDS.convert(time, unit);
        q.put(new DelayItem<Pair<K, V>>(new Pair<K, V>(key, value), nanoTime));
    }

    public V get(K key) {
        return cacheObjMap.get(key);
    }
}

Test it with the following main method:

// test entry point
public static void main(String[] args) throws Exception {
    Cache<Integer, String> cache = new Cache<Integer, String>();
    cache.put(1, "aaaa", 3, TimeUnit.SECONDS);

    Thread.sleep(1000 * 2);
    {
        String str = cache.get(1);
        System.out.println(str);
    }
    Thread.sleep(1000 * 2);
    {
        String str = cache.get(1);
        System.out.println(str);
    }
}

The output result is:

aaaa
null

As the output shows, once an entry's delay expires the cached data is automatically evicted, and get returns null.

2. Concurrent (collection) queues: non-blocking queues

Non-blocking queue

First, let's briefly look at what a non-blocking queue is:

In contrast to a blocking queue, operations on a non-blocking queue never block the calling thread, whether a consumer is dequeuing or a producer is enqueuing. Under the hood, non-blocking queues use CAS (compare and swap) so that threads make progress without blocking.

Basic operations on a non-blocking queue

As with blocking queues, the common operations on a non-blocking queue are enqueuing and dequeuing; the main methods are listed below, followed by a short demo.

Enqueue method:

offer(): a method inherited from the Queue interface; it enqueues an element without blocking the thread and returns true if the insertion succeeds.

Dequeue methods:

poll(): moves the head pointer, returns the head element, and dequeues it; if the queue is empty, it returns null.

peek(): returns the head element without moving the head pointer, so the element is not dequeued; if the queue is empty, it returns null.
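As a short demo, here is a minimal sketch using ConcurrentLinkedQueue, the JDK's CAS-based non-blocking queue, exercising the three methods above:

import java.util.concurrent.ConcurrentLinkedQueue;

public class NonBlockingQueueDemo {
    public static void main(String[] args) {
        ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<String>();

        queue.offer("a"); // enqueue; never blocks, returns true on success
        queue.offer("b");

        System.out.println(queue.peek()); // "a": reads the head without removing it
        System.out.println(queue.poll()); // "a": removes and returns the head
        System.out.println(queue.poll()); // "b"
        System.out.println(queue.poll()); // null: queue is empty, no blocking
    }
}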

3. The non-blocking algorithm: CAS

First we need to understand pessimistic locking and optimistic locking.

Pessimistic locking: assume the concurrent environment is pessimistic: if a concurrency conflict occurs, it will destroy consistency, so conflicts must be prevented outright with exclusive locks. The classic metaphor is, "If you don't lock the door, a troublemaker will break in and make a mess," so "you can only open the door to one person at a time, and keep an eye on him."

Optimistic locking: assume the concurrent environment is optimistic: concurrency conflicts will occur, but they can be detected and will not cause damage. So you can proceed without any protection, and only after detecting a conflict decide whether to give up the operation or retry. The corresponding metaphor is, "If you don't lock the door, troublemakers may break in, but you will know if they try to do damage," so "you can let everyone in and only act once you find someone actually doing damage."

Optimistic locking is generally believed to outperform pessimistic locking, especially in some complex scenarios. This is mainly because a pessimistic lock also protects operations that could never cause damage while it is held, whereas contention under optimistic locking occurs only at the smallest point of actual conflict; in pessimistic-lock terms, "the lock granularity is minimal." However, optimistic designs are often more complex, so pessimistic locks are still commonly used in complex scenarios: ensure correctness first, then pursue performance if necessary.
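To make the contrast concrete, here is a minimal illustrative sketch (the class and method names are mine, not from the article) of the same counter protected pessimistically with synchronized and optimistically with AtomicInteger's CAS-based retry:

import java.util.concurrent.atomic.AtomicInteger;

public class CounterComparison {
    // Pessimistic: every increment takes an exclusive lock,
    // even though a lost update is the only possible conflict.
    private int pessimisticCount = 0;

    public synchronized void incrementPessimistic() {
        pessimisticCount++;
    }

    // Optimistic: proceed without a lock; the CAS inside
    // incrementAndGet detects a conflict and simply retries.
    private final AtomicInteger optimisticCount = new AtomicInteger(0);

    public void incrementOptimistic() {
        optimisticCount.incrementAndGet();
    }
}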

Implementing optimistic locking usually requires hardware support. Most processors provide a CAS instruction implementing the semantics of "compare and swap" (where "swap" means "swap in", i.e. set), and this forms the basis of optimistic locking. CAS takes three operands:

The memory location V to read and write

The expected value A to compare against

The new value B to write

If and only if the value at location V equals A, CAS atomically updates location V to the new value B; otherwise it does nothing. In either case it returns the original value of V. An interesting fact: "using CAS to control concurrency" is not equivalent to "using optimistic locking". CAS is just a mechanism, and it can implement both optimistic and pessimistic locking; optimism and pessimism are merely concurrency-control strategies.
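As a concrete sketch of these semantics, here is a small demo using AtomicInteger.compareAndSet, which maps down to the processor's CAS instruction (note that the Java API returns a success flag rather than the old value):

import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger v = new AtomicInteger(10); // location V, current value 10

        // Succeeds: V == A (10), so V is atomically set to B (11).
        boolean ok = v.compareAndSet(10, 11);
        System.out.println(ok + " " + v.get()); // true 11

        // Fails: V (11) != A (10), so nothing happens.
        ok = v.compareAndSet(10, 42);
        System.out.println(ok + " " + v.get()); // false 11

        // Canonical optimistic retry loop: read, compute, CAS, retry on conflict.
        int oldVal, newVal;
        do {
            oldVal = v.get();
            newVal = oldVal * 2;
        } while (!v.compareAndSet(oldVal, newVal));
        System.out.println(v.get()); // 22
    }
}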

