
Java NIO: I/O model

By 大家讲道理 | Released: 2017-05-28 11:32:33

Many readers find NIO difficult at first because several of its underlying concepts are unclear. So before we start programming with Java NIO, this article covers the necessary background: I/O models. We begin with the concepts of synchronous and asynchronous, then explain the difference between blocking and non-blocking, then the difference between blocking IO and non-blocking IO, then the difference between synchronous IO and asynchronous IO. After that we walk through the five IO models, and finally introduce two design patterns related to high-performance IO: Reactor and Proactor.

The following is the table of contents outline of this article:

1. What is synchronous? What is asynchronous?

2. What is blocking? What is non-blocking?

3. What is blocking IO? What is non-blocking IO?

4. What is synchronous IO? What is asynchronous IO?

5. The five IO models

6. Two high-performance IO design patterns

If anything below is inaccurate, criticism and corrections are welcome.

1. What is synchronous? What is asynchronous?

The concepts of synchronous and asynchronous have been around for a long time, and there are many competing explanations of them online. The following is my personal understanding:

Synchronous means: if there are multiple tasks or events, they must be carried out one at a time; while one of them is executing, the whole flow waits for it, and the tasks cannot run concurrently.

Asynchronous means: if there are multiple tasks or events, they can run concurrently; the execution of one task or event does not make the whole flow wait for it.

That is the difference between synchronous and asynchronous. A simple example: suppose a task includes two subtasks A and B. With synchronous execution, while A runs, B can only wait; B executes only after A completes. With asynchronous execution, A and B can run concurrently: B does not have to wait for A to finish, so A's execution does not make the whole task wait.

If this is still unclear, first look at the following two pieces of code:




void fun1() {
    .....
}

void fun2() {
    .....
}

void function() {
    fun1();
    fun2();
    .....
}


This code is typical synchronous code: in the method function, fun1's execution keeps the subsequent fun2 from running; fun2 must wait until fun1 finishes before it can execute.

Then look at the following code:

void fun1() {
    .....
}

void fun2() {
    .....
}

void function() {
    new Thread() {
        public void run() {
            fun1();
        }
    }.start();

    new Thread() {
        public void run() {
            fun2();
        }
    }.start();
    .....
}

This code is typical asynchronous code: fun1's execution does not affect fun2's, and neither fun1 nor fun2 makes the subsequent flow wait.

In fact, synchronous and asynchronous are very broad concepts. Their focus is on whether, when multiple tasks or events are present, the occurrence or execution of one of them makes the whole flow wait. I think an analogy can be drawn with the synchronized keyword in Java. When multiple threads access a variable at the same time, each thread's access is an event. With synchronization, the threads must access the variable one by one: while one thread accesses it, the others must wait. Asynchronously, multiple threads need not queue up and may access it at the same time.

So synchronous and asynchronous can be described in many ways, but the key point to remember is: when multiple tasks or events are present, does the occurrence or execution of one of them make the whole flow wait? Generally, asynchrony can be achieved with multiple threads, but do not equate multithreading with asynchrony. Asynchrony is a macro-level pattern; using multiple threads is merely one way to achieve it, and it can also be achieved with multiple processes.

2. What is blocking? What is non-blocking?

The previous section introduced the difference between synchronous and asynchronous. In this section, let's look at blocking versus non-blocking.

Blocking means: while a task or event is executing, it issues a request, and because the conditions required by that request are not met, it waits there until they are.

Non-blocking means: while a task or event is executing, it issues a request, and if the conditions required by that request are not met, a flag value is returned immediately to indicate this, and execution does not keep waiting there.

That is the difference between blocking and non-blocking: when a request is issued and its conditions are not met, does the caller wait indefinitely, or does it get back a flag value?

A simple example: suppose I want to read the contents of a file. If the file currently has nothing readable, blocking waits there until readable content appears; non-blocking immediately returns a flag indicating that there is nothing to read.

Some people online equate synchronous with blocking and asynchronous with non-blocking. In fact, they are two entirely different pairs of concepts, and understanding the difference between the two pairs is essential for understanding the IO models that follow.

Synchronous versus asynchronous is about whether, while multiple tasks are executing, the execution of one task makes the whole flow wait.

Blocking versus non-blocking is about whether, when a request is issued whose conditions are not met, the caller waits or receives a flag value instead.

Blocking can be understood by analogy with thread blocking: when a thread issues a request and the condition is not met, the thread blocks, i.e. waits for the condition to be met.

3. What is blocking IO? What is non-blocking IO?

Before discussing blocking and non-blocking IO, let's first look at how an IO operation actually proceeds.

Generally speaking, IO operations include reading and writing the disk, reading and writing sockets, and reading and writing peripherals.

When a user thread initiates an IO request (this article takes a read request as the example), the kernel checks whether the data to be read is ready. With blocking IO, if the data is not ready, the call keeps waiting until it is. With non-blocking IO, if the data is not ready, a flag value is returned telling the user thread that the data is not ready yet. Once the data is ready, it is copied to the user thread, which completes the IO read request. In other words, a complete IO read request consists of two stages:

1) checking whether the data is ready;

2) copying the data (the kernel copies the data to the user thread).

The difference between blocking IO and non-blocking IO lies in the first stage: if the data is not ready, does the check keep waiting, or does it return a flag value immediately?

Traditional IO in Java is blocking IO, for example reading data from a socket: after read() is called, if the data is not ready, the current thread blocks inside the read call until data arrives. With non-blocking IO, when the data is not ready, read() returns a flag value telling the current thread so, instead of waiting there.

4. What is synchronous IO? What is asynchronous IO?

Let's first look at the definitions of synchronous IO and asynchronous IO. The book "Unix Network Programming" defines them as follows:

A synchronous I/O operation causes the requesting process to be blocked until that I/O operation completes.
An asynchronous I/O operation does not cause the requesting process to be blocked.

Literally: with synchronous IO, if a thread requests an IO operation, the thread is blocked until the operation completes; with asynchronous IO, the requested IO operation does not cause the requesting thread to block.

In fact, the synchronous and asynchronous IO models concern the interaction between the user thread and the kernel:

With synchronous IO, after the user issues the IO request, if the data is not ready, either the user thread or the kernel must keep polling until it is; once ready, the data is copied from the kernel to the user thread.

With asynchronous IO, the user thread only issues the IO request; both stages of the IO operation are completed automatically by the kernel, which then sends a notification that the operation has finished. In other words, in asynchronous IO the user thread is never blocked.

This is the key difference between synchronous IO and asynchronous IO: it shows up in whether the data-copy stage is performed by the user thread or by the kernel. Asynchronous IO therefore requires underlying support from the operating system.

Note that synchronous/asynchronous IO and blocking/non-blocking IO are two different pairs of concepts.

Blocking versus non-blocking IO shows up in the first stage of the IO operation: when the data is not ready, does the check wait there, or does the user thread receive a flag value?

5. The five IO models

"Unix Network Programming" describes five IO models: blocking IO, non-blocking IO, multiplexed IO, signal-driven IO, and asynchronous IO.

Let's now look at the similarities and differences of these five models.

1. Blocking IO model

This is the most traditional IO model: blocking occurs while reading and writing data.

When the user thread issues an IO request, the kernel checks whether the data is ready. If not, the kernel waits for it; the user thread stays in the blocked state and hands over the CPU. When the data is ready, the kernel copies it to the user thread and returns the result, and the user thread leaves the blocked state.

A typical example of the blocking IO model is:

data = socket.read();

If the data is not ready, the thread stays blocked in the read call.

2. Non-blocking IO model

When the user thread initiates a read, it does not need to wait: it gets a result immediately. If the result is an error flag, the thread knows the data is not ready yet and can issue the read again. Once the kernel's data is ready and a request arrives from the user thread again, the kernel immediately copies the data to the user thread and returns.

So in the non-blocking IO model the user thread must keep asking whether the kernel's data is ready; in other words, non-blocking IO does not hand over the CPU but occupies it continuously.

A typical non-blocking IO loop looks like this:

while (true) {
    data = socket.read();
    if (data != error) {
        // process data
        break;
    }
}


But non-blocking IO has a serious problem: the while loop must constantly ask the kernel whether the data is ready, which drives CPU usage very high. In practice, a busy while loop is therefore rarely used to read data.

3. Multiplexed IO model

The multiplexed IO model is the one most commonly used today; Java NIO is in fact multiplexed IO.

In the multiplexed IO model, one thread continuously polls the status of multiple sockets, and the actual IO read/write is invoked only when a socket really has a read or write event. Because a single thread can manage many sockets, the system does not need to create or maintain extra processes or threads, and IO resources are used only when real socket read/write events occur, which greatly reduces resource usage.

In Java NIO, selector.select() is used to query whether any channel has a pending event. If there is none, it blocks there, so this call can block the user thread.
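To make this concrete, here is a minimal, self-contained sketch of how selector.select() behaves. The class name SelectDemo is mine, and an in-process Pipe stands in for real sockets so the example runs without a network; the pattern is the same when registering SocketChannels.

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class SelectDemo {
    public static int readyChannels() throws Exception {
        Selector selector = Selector.open();
        Pipe pipe = Pipe.open();
        // A channel must be in non-blocking mode before it can be registered.
        pipe.source().configureBlocking(false);
        pipe.source().register(selector, SelectionKey.OP_READ);

        // Produce an event: write a few bytes into the pipe's sink.
        pipe.sink().write(ByteBuffer.wrap(new byte[] {1, 2, 3}));

        // select() blocks until at least one registered channel has an event;
        // here it returns at once because data is already waiting to be read.
        int ready = selector.select();

        for (SelectionKey key : selector.selectedKeys()) {
            if (key.isReadable()) {
                ByteBuffer buf = ByteBuffer.allocate(16);
                ((Pipe.SourceChannel) key.channel()).read(buf);
            }
        }
        selector.close();
        pipe.sink().close();
        pipe.source().close();
        return ready;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("ready channels: " + readyChannels());
    }
}
```

If no data had been written, the select() call would simply block, which is exactly the user-thread blocking described above.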

Some readers may say that multi-threading plus blocking IO can achieve a similar effect. However, with multi-threading plus blocking IO, each socket corresponds to one thread, which consumes a lot of resources; for long connections in particular, the thread resources are never released, so a large number of connections eventually causes a performance bottleneck.

In the multiplexed IO model, one thread manages multiple sockets, and resources are occupied for actual reads and writes only when a socket really has a read or write event. Multiplexed IO is therefore well suited to situations with a large number of connections.

In addition, multiplexed IO is more efficient than the non-blocking IO model because in non-blocking IO the user thread keeps asking about socket status, whereas in multiplexed IO the polling of each socket's status is done by the kernel, which is far more efficient than polling from a user thread.

However, note that the multiplexed IO model detects arriving events by polling and responds to them one by one. So if the handling of one event takes a long time, subsequent events go unprocessed for a long time, and new event polling is delayed.

4. Signal-driven IO model

In the signal-driven IO model, when the user thread initiates an IO request, it registers a signal handler for the corresponding socket and then continues executing. When the kernel's data is ready, a signal is sent to the user thread; on receiving it, the user thread calls the IO read/write operations inside the signal handler to perform the actual IO request.

5. Asynchronous IO model

The asynchronous IO model is the most ideal one. In it, when the user thread initiates the read operation, it can immediately start doing other things. From the kernel's perspective, when it receives an asynchronous read it returns immediately, indicating that the read request was successfully initiated, so the user thread is never blocked. The kernel then waits for the data preparation to complete and copies the data to the user thread, after which it sends the user thread a signal saying the read operation has finished. In other words, the user thread does not need to know how the IO operation is actually performed; it only initiates a request, and when it receives the kernel's success signal, the IO operation is complete and the data can be used directly.

In other words, in the asynchronous IO model, neither phase of the IO operation blocks the user thread: both phases are completed automatically by the kernel, which then sends a signal telling the user thread the operation has finished, and there is no need to call an IO function in the user thread for the actual reading and writing. This differs from the signal-driven model: there, the signal tells the user thread that the data is ready, and the user thread must then call an IO function to do the actual read or write; in the asynchronous IO model, the signal means the IO operation itself has completed, so no IO function call is needed in the user thread.

Note that asynchronous IO requires underlying support from the operating system. Java 7 added asynchronous IO (NIO.2).
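As a sketch of Java 7's asynchronous IO, the example below reads a temporary file through AsynchronousFileChannel; the class name AsyncReadDemo and the temp-file setup are mine, for self-containment. The read() call returns a Future immediately, and the thread collects the result later. (NIO.2 also offers a CompletionHandler callback style, which matches the "kernel notifies the user thread" description more closely.)

```java
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.Future;

public class AsyncReadDemo {
    public static String readAsync() throws Exception {
        // Set up a small file to read; in a real program this would already exist.
        Path tmp = Files.createTempFile("aio", ".txt");
        Files.write(tmp, "hello".getBytes(StandardCharsets.UTF_8));

        try (AsynchronousFileChannel ch =
                 AsynchronousFileChannel.open(tmp, StandardOpenOption.READ)) {
            ByteBuffer buf = ByteBuffer.allocate(16);
            // The read request returns immediately with a Future.
            Future<Integer> pending = ch.read(buf, 0);

            // ... the calling thread is free to do other work here ...

            int n = pending.get();   // collect the result when we need it
            buf.flip();
            byte[] bytes = new byte[n];
            buf.get(bytes);
            return new String(bytes, StandardCharsets.UTF_8);
        } finally {
            Files.deleteIfExists(tmp);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(readAsync());
    }
}
```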

The first four IO models are actually synchronous IO; only the last one is truly asynchronous IO, because whether it is multiplexed IO or the signal-driven model, the second stage of the IO operation (the kernel copying data to the user thread) still blocks the user thread.

6. Two high-performance IO design patterns

Among the traditional network service design patterns, there are two classic patterns:

one is multi-threading (thread per connection), the other is the thread pool.

In the multi-threaded mode, when a client arrives, the server creates a new thread to handle that client's read and write events, as shown in the following figure:

Although this mode is simple and convenient, the server uses one thread per client connection, which consumes a lot of resources. When the number of connections reaches the upper limit and another user requests a connection, it directly hits a resource bottleneck, and in severe cases it can crash the server.

Therefore, to solve the problems of the one-thread-per-client mode, the thread pool approach was proposed: create a thread pool of fixed size, and when a client arrives, take an idle thread from the pool to handle it. When the client's reads and writes are done, it gives the thread back. This avoids the waste of creating a thread for every client and lets threads be reused.
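A minimal sketch of the thread-pool pattern, where client connections are modeled as simple tasks rather than real sockets so the example is self-contained (the class PoolServerSketch and the handleClient method are my own placeholders): a fixed pool of worker threads is reused across clients instead of creating one thread per client.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolServerSketch {
    static String handleClient(int id) {
        // Placeholder for reading from / writing to the client's socket.
        return "handled client " + id;
    }

    public static List<String> serve(int clients) throws Exception {
        // Fixed-size pool: at most 4 clients are handled concurrently;
        // the rest queue up until a worker thread becomes idle again.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<String>> pending = new ArrayList<>();
        for (int i = 0; i < clients; i++) {
            final int id = i;
            pending.add(pool.submit(() -> handleClient(id)));
        }
        List<String> results = new ArrayList<>();
        for (Future<String> f : pending) {
            results.add(f.get());   // wait for each task to finish
        }
        pool.shutdown();
        return results;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(serve(10));
    }
}
```

With a long-running handleClient, all four workers can stay busy, which is exactly the long-connection drawback discussed next.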

But the thread pool has its own drawback: if most connections are long connections, all threads in the pool may be occupied for a long stretch, and when another user requests a connection, no idle thread is available to handle it, so the client connection fails and the user experience suffers. The thread pool is therefore better suited to applications with large numbers of short connections.

Therefore, the following two high-performance IO design patterns have emerged: Reactor and Proactor.

In the Reactor pattern, the events of interest are first registered for each client, and then a thread polls the clients for events. When events occur, it processes each client's events in turn; when all events are processed, it goes back to polling, as shown in the following figure:

As can be seen, the multiplexed IO among the five models above uses the Reactor pattern. Note that the figure shows events being processed sequentially; to speed up event handling, the events can of course be processed by multiple threads or a thread pool.
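The Reactor loop described above can be sketched in Java NIO as follows; the Handler interface, the ReactorSketch class, and the use of an in-process Pipe as the event source are my own illustration under those assumptions, not a canonical implementation. Handlers are attached to selection keys, and one thread polls for events and dispatches each ready event to its handler in turn.

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.ReadableByteChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class ReactorSketch {
    interface Handler { void handle(SelectionKey key) throws Exception; }

    public static int dispatchOnce() throws Exception {
        Selector selector = Selector.open();
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);

        final int[] bytesSeen = {0};
        Handler reader = key -> {
            ByteBuffer buf = ByteBuffer.allocate(16);
            bytesSeen[0] += ((ReadableByteChannel) key.channel()).read(buf);
        };
        // Register interest in reads and attach the handler to the key.
        pipe.source().register(selector, SelectionKey.OP_READ, reader);

        // Simulate a client sending two bytes.
        pipe.sink().write(ByteBuffer.wrap(new byte[] {7, 8}));

        selector.select();   // wait until some registered channel has an event
        for (SelectionKey key : selector.selectedKeys()) {
            // Dispatch each ready event to its handler, one by one.
            ((Handler) key.attachment()).handle(key);
        }
        selector.selectedKeys().clear();

        selector.close();
        pipe.sink().close();
        pipe.source().close();
        return bytesSeen[0];
    }

    public static void main(String[] args) throws Exception {
        System.out.println("bytes dispatched: " + dispatchOnce());
    }
}
```

A real reactor would run the select/dispatch cycle in a loop and could hand each event to a thread pool instead of handling it inline.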

In the Proactor pattern, when an event is detected, a new asynchronous operation is started and handed to the kernel for processing. When the kernel completes the IO operation, it sends a notification that the operation has finished. As can be seen, the asynchronous IO model uses the Proactor pattern.


The above is the detailed content of Java NIO: I/O model. For more, see the related articles on php.cn.
