Concurrency is the ability to run multiple programs, or multiple parts of a program, in parallel. If a time-consuming task can be run asynchronously or in parallel, the throughput and responsiveness of the whole program improve greatly. Modern PCs have multiple CPUs, or multiple cores per CPU, so whether an application can make proper use of those cores often becomes the key to how well it performs at scale.
Basic use of threads
There are two ways to write the code that a thread executes: one is to extend the Thread class and override its run method; the other is to implement the Runnable interface. Implementing Callable is a third option: combining Callable with Future makes it possible to obtain a return value after the task completes, which the Runnable and Thread approaches cannot do.
public class ThreadMain {
    public static void main(String[] args) {
        MyThread myThread = new MyThread();
        new Thread(myThread).start();
        new MyThread2().start();
    }
}

// First way: implement the Runnable interface
class MyThread implements Runnable {
    @Override
    public void run() {
        System.out.println("MyThread run...");
    }
}

// Second way: extend Thread and override run()
class MyThread2 extends Thread {
    @Override
    public void run() {
        System.out.println("MyThread2 run...");
    }
}
Once the thread is started, start() returns immediately without waiting for run() to complete, as if run() were executed on another CPU.
Note: A common mistake when creating and running a thread is to call the thread's run() method instead of the start() method, as shown below:
Thread newThread = new Thread(new MyRunnable());
newThread.run(); // wrong: should be newThread.start();
At first nothing seems wrong, because the run() method is indeed called as you expected. In fact, however, run() is not executed by the newly created thread but by the current thread, i.e. the thread that executes the two lines above. If you want the new thread to execute run(), you must call the new thread's start() method.
Combining Callable with Future to obtain the task's return value after execution:
public static void main(String[] args) throws Exception {
    ExecutorService exec = Executors.newSingleThreadExecutor();
    Future<String> future = exec.submit(new CallTask());
    System.out.println(future.get()); // blocks until the result is available
    exec.shutdown();
}

class CallTask implements Callable<String> {
    @Override
    public String call() {
        return "hello";
    }
}
Setting a name for a thread:
MyTask myTask = new MyTask();
Thread thread = new Thread(myTask, "myTask thread");
thread.start();
System.out.println(thread.getName());
When creating a thread, you can give it a name, which helps distinguish between different threads.
volatile
Both synchronized and volatile play important roles in multi-threaded concurrent programming. volatile can be thought of as a lightweight synchronized: it guarantees the "visibility" of shared variables in multi-processor systems, meaning that when one thread modifies a shared variable, other threads can read the modified value. In some cases it is cheaper than synchronized, but volatile does not guarantee the atomicity of operations on the variable.
When a volatile variable is written, a lock-prefixed instruction is emitted at the assembly level. On a multi-core system this instruction does two things:
Write the current CPU cache line back to the system memory.
This write-back invalidates any copies of that memory address cached by other CPUs.
Multiple CPUs follow the cache coherence protocol: each CPU sniffs the data transmitted on the bus to check whether its cached value has become stale. When a CPU detects that the memory address corresponding to one of its cache lines has been modified, it marks that cache line as invalid, and the next operation on that data re-reads it from system memory. For more on volatile, see the in-depth analysis of how volatile is implemented.
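As a minimal sketch of visibility (the class and field names here are illustrative, not from the original text): a reader thread spins on a flag that the main thread later sets; declaring the flag volatile guarantees the reader sees the write.

public class VolatileFlagDemo {
    // Without volatile, the reader thread might never observe the update
    private static volatile boolean stopped = false;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!stopped) {
                // busy-wait until another thread sets the flag
            }
            System.out.println("reader observed stopped = true");
        });
        reader.start();

        Thread.sleep(1000);   // let the reader run for a while
        stopped = true;       // volatile write: made visible to the reader thread
        reader.join();
    }
}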
synchronized
synchronized has long been a veteran of multi-threaded concurrent programming, and many people call it a heavyweight lock. However, with the various optimizations made to synchronized in Java SE 1.6, in some cases it is not so heavy anymore.
Every object in Java can be used as a lock. When a thread tries to access a synchronized code block, it must first obtain the lock and release the lock when it exits or throws an exception.
For synchronized methods, the lock is the current instance object.
For static synchronization methods, the lock is the Class object of the current object.
For synchronized blocks, the lock is the object given in the synchronized parentheses.
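A brief sketch of the three forms (the class, fields, and method names are made up for illustration):

public class Counter {
    private static int total = 0;
    private int count = 0;
    private final Object lock = new Object();

    // Synchronized instance method: the lock is the current instance (this)
    public synchronized void increment() {
        count++;
    }

    // Static synchronized method: the lock is the Class object (Counter.class)
    public static synchronized void incrementTotal() {
        total++;
    }

    // Synchronized block: the lock is the object in the parentheses
    public void incrementWithBlock() {
        synchronized (lock) {
            count++;
        }
    }
}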
The synchronized keyword is not inherited: a method that is synchronized in the base class is not automatically synchronized when overridden in a subclass. Every object in Java can be used as a lock, so where is the lock actually stored? It is stored in the Java object header. If the object is an array type, the virtual machine uses 3 words (word widths) to store the object header; for a non-array type it uses 2 words. For more on synchronized, see Synchronized in Java SE 1.6.
Thread Pool
The thread pool is responsible for managing worker threads and contains a queue of tasks waiting to be executed. The task queue of the thread pool is a collection of Runnables, and the worker thread is responsible for taking out and executing Runnable objects from the task queue.
ExecutorService executor = Executors.newCachedThreadPool();
for (int i = 0; i < 5; i++) {
    executor.execute(new MyThread2());
}
executor.shutdown();
Java provides 4 types of thread pools through Executors:
newCachedThreadPool: Create a cacheable thread pool. For a new task, if there is no idle thread a new one is created; threads that stay idle beyond a certain time are reclaimed.
newFixedThreadPool: Create a thread pool with a fixed number of threads.
newSingleThreadExecutor: Create a single-threaded thread pool that uses only one thread to execute tasks, ensuring that all tasks are executed in FIFO order.
newScheduledThreadPool: Create a fixed-length thread pool to support scheduled and periodic task execution.
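For example, a small sketch of newScheduledThreadPool running a task periodically (the pool size, delays, and task body are illustrative choices):

ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);

// Run after an initial delay of 1 second, then every 3 seconds
scheduler.scheduleAtFixedRate(
        () -> System.out.println("periodic task on " + Thread.currentThread().getName()),
        1, 3, TimeUnit.SECONDS);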
All of the above factory methods ultimately create the pool through the ThreadPoolExecutor constructor.
ThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit, BlockingQueue<Runnable> workQueue)
corePoolSize (core pool size): when a task is submitted, the pool creates a new thread to execute it even if other idle core threads could run it, until the number of threads reaches the core pool size; after that no more core threads are created. If the pool's prestartAllCoreThreads method is called, all core threads are created and started in advance.
maximumPoolSize (maximum pool size): the maximum number of threads the pool is allowed to create. If the queue is full and the number of created threads is below this maximum, the pool creates new threads to execute tasks. Note that this parameter has no effect when an unbounded task queue is used.
keepAliveTime (idle thread keep-alive time): how long an idle worker thread is kept alive. If there are many tasks and each task is short, increasing this time improves thread reuse.
TimeUnit (unit of the keep-alive time): options include DAYS, HOURS, MINUTES, MILLISECONDS, MICROSECONDS (one thousandth of a millisecond), and NANOSECONDS (one thousandth of a microsecond).
workQueue (task queue): a blocking queue that holds tasks waiting to be executed. The following blocking queues can be used:
ArrayBlockingQueue: a bounded blocking queue backed by an array; it orders elements FIFO (first in, first out).
LinkedBlockingQueue: a blocking queue backed by a linked list; it orders elements FIFO, and its throughput is usually higher than that of ArrayBlockingQueue. The static factory method Executors.newFixedThreadPool() uses this queue.
SynchronousQueue: a blocking queue that does not store elements. Each insert operation must wait for another thread to perform a remove, otherwise the insert blocks. Its throughput is usually higher than LinkedBlockingQueue's. The static factory method Executors.newCachedThreadPool() uses this queue.
PriorityBlockingQueue: an unbounded blocking queue that orders elements by priority.
When a new task is submitted to the thread pool, it is processed as follows (see the sketch after this list):
1. First, check whether the core pool is full. If not, create a worker thread to execute the task; otherwise go to the next step.
2. Next, check whether the work queue is full. If not, add the new task to the work queue; otherwise go to the next step.
3. Finally, check whether the whole pool is full (the maximum pool size has been reached). If not, create a new worker thread to execute the task; otherwise hand the task over to the saturation (rejection) policy.
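Putting the parameters together, here is a rough sketch of constructing a ThreadPoolExecutor directly; the specific sizes, queue capacity, and rejection policy are illustrative choices, not prescribed by the text above.

ThreadPoolExecutor pool = new ThreadPoolExecutor(
        2,                                           // corePoolSize
        4,                                           // maximumPoolSize
        60L, TimeUnit.SECONDS,                       // keepAliveTime and its unit
        new ArrayBlockingQueue<>(100),               // bounded work queue
        new ThreadPoolExecutor.CallerRunsPolicy());  // saturation policy when pool and queue are full

pool.execute(() -> System.out.println("task runs on " + Thread.currentThread().getName()));
pool.shutdown();

With a bounded queue and CallerRunsPolicy, tasks that arrive when both the queue and the pool are full are simply run by the submitting thread instead of being rejected.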