This article shares some of ByteDance's favorite front-end interview questions about computer networks. It has reference value for anyone preparing for interviews, and I hope it is helpful to everyone.
Note: the (xx) number in front of each question indicates how often that question came up. This computer-network section was compiled from 30 front-end interview questions, together with the corresponding answers and reference links, and was put together by the candidate who got the offer.
HTTP caching is divided into strong caching and negotiation caching:
The browser first checks via Cache-Control whether the strong cache is still valid. If it is, the cached copy is read directly, without any request.
If not, the negotiation-cache phase begins and an HTTP request is sent. The request carries the conditional headers If-Modified-Since and/or If-None-Match, and the server uses them to check whether the resource has been updated:
If the resource has been updated, the server returns the new resource with a 200 status code.
If the resource has not been updated, the server returns 304, telling the browser to read the resource directly from its cache.
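A minimal server-side sketch of this flow, assuming Node.js and a hypothetical getResource() helper; it only illustrates the Cache-Control / ETag / If-None-Match comparison:

```typescript
import { createServer } from "node:http";
import { createHash } from "node:crypto";

// Hypothetical resource loader; a real app would read a file or database.
function getResource(): string {
  return "hello, cache";
}

createServer((req, res) => {
  const body = getResource();
  // ETag derived from the content; any stable hash works for this sketch.
  const etag = '"' + createHash("md5").update(body).digest("hex") + '"';

  // Strong cache: let the browser reuse the response for 60s without asking.
  res.setHeader("Cache-Control", "max-age=60");
  res.setHeader("ETag", etag);

  // Negotiation cache: if the client's If-None-Match matches, return 304 with no body.
  if (req.headers["if-none-match"] === etag) {
    res.statusCode = 304;
    res.end();
    return;
  }

  res.statusCode = 200;
  res.end(body);
}).listen(3000);
```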
301 is similar in that it redirects to a new address, but 301 means the resource at the requested address has been permanently moved and the old address should no longer be used; search engines will also replace the old address with the new one when crawling. The new address can be read from the Location header of the response. A typical 301 scenario is a site that has permanently moved to a new domain.
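As a sketch (again assuming Node.js and a made-up new domain), a permanent redirect just sets the status code and the Location header:

```typescript
import { createServer } from "node:http";

// Redirect every request on the old host to the new domain permanently.
createServer((req, res) => {
  res.statusCode = 301; // use 302 instead for a temporary redirect
  res.setHeader("Location", "https://new.example.com" + (req.url ?? "/"));
  res.end();
}).listen(3000);
```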
(3) Q: What is HTTPS? What is the specific process?
HTTPS is HTTP running over TLS/SSL, which encrypts the traffic. The handshake roughly works like this: the browser sends a client_random and a list of supported encryption methods. The server replies with a server_random, its chosen encryption method, and a digital certificate (which contains the public key). The browser then validates the certificate; if validation passes, it generates a pre_random, encrypts it with the public key and sends it to the server. Both sides then combine client_random, server_random and pre_random to compute the same secret, and that secret is used as the symmetric key to encrypt and decrypt all subsequent data.
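A highly simplified sketch of that key exchange using Node's crypto module; real TLS uses a precise PRF and full certificate validation, so this only illustrates that both sides derive the same symmetric key from the three random values:

```typescript
import {
  generateKeyPairSync,
  publicEncrypt,
  privateDecrypt,
  randomBytes,
  createHash,
  createCipheriv,
} from "node:crypto";

// The server's certificate carries a public key (simplified here to a bare RSA key pair).
const { publicKey, privateKey } = generateKeyPairSync("rsa", { modulusLength: 2048 });

// 1. Hello messages: each side contributes a random value.
const clientRandom = randomBytes(32);
const serverRandom = randomBytes(32);

// 2. The browser generates pre_random and sends it encrypted with the server's public key.
const preRandom = randomBytes(32);
const encryptedPreRandom = publicEncrypt(publicKey, preRandom);

// 3. The server decrypts pre_random with its private key.
const serverPreRandom = privateDecrypt(privateKey, encryptedPreRandom);

// 4. Both sides derive the same session secret from the three randoms
//    (real TLS uses a PRF; a hash is enough for this sketch).
const derive = (pre: Buffer) =>
  createHash("sha256").update(Buffer.concat([clientRandom, serverRandom, pre])).digest();

const clientSecret = derive(preRandom);
const serverSecret = derive(serverPreRandom);
console.log(clientSecret.equals(serverSecret)); // true

// 5. Subsequent traffic is encrypted symmetrically with that secret.
const cipher = createCipheriv("aes-256-gcm", clientSecret, randomBytes(12));
```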
(4) Q: Three-way handshake and four-way wave
Three-way handshake
The main process of the three-way handshake:
At the beginning both parties are in the CLOSED state; then the server starts listening on a port and enters the LISTEN state.
The client actively initiates a connection and sends SYN with seq = x, entering the SYN-SENT state.
After the server receives it, it replies with SYN seq = y and ACK ack = x + 1 (acknowledging the client's SYN), and enters the SYN-RCVD state.
The client then sends ACK with seq = x + 1, ack = y + 1 to the server and enters ESTABLISHED. When the server receives this ACK, it also enters ESTABLISHED.
A SYN requires confirmation from the peer, so the ACK number is increased by one; anything that requires confirmation from the peer consumes one sequence number of the TCP stream.
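The sequence-number arithmetic can be written out as a tiny sketch (x and y are arbitrary initial sequence numbers chosen for illustration):

```typescript
// Initial sequence numbers chosen by each side.
const x = 1000; // client ISN
const y = 5000; // server ISN

// 1. client -> server: SYN            seq = x
const syn = { flags: ["SYN"], seq: x };

// 2. server -> client: SYN + ACK      seq = y, ack = x + 1 (the SYN consumed one number)
const synAck = { flags: ["SYN", "ACK"], seq: y, ack: syn.seq + 1 };

// 3. client -> server: ACK            seq = x + 1, ack = y + 1
const ack = { flags: ["ACK"], seq: synAck.ack, ack: synAck.seq + 1 };

console.log(syn, synAck, ack); // both sides are now ESTABLISHED
```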
Why not twice?
With only two handshakes, the server cannot confirm the client's ability to receive.
For example, if the client's first SYN is delayed in the network, TCP assumes the packet is lost and retransmits; the connection is then established with the retransmitted SYN in two handshakes.
Later, after the client has closed that connection, the stale SYN finally reaches the server. With a two-way handshake the server would reply and consider a connection established, but the client has already closed, so the connection resources are wasted.
Why not four times?
Four (or more) handshakes would also work, but three is already enough.
Four-way wave
Both sides start in the ESTABLISHED state. The client sends a FIN with seq = p and enters FIN-WAIT-1.
After receiving it, the server replies with ACK, ack = p + 1, and enters CLOSE-WAIT.
After the client receives this ACK, it enters FIN-WAIT-2.
Some time later, once the server has finished processing its remaining data, it sends FIN and ACK with seq = q, ack = p + 1, and enters LAST-ACK.
After receiving the FIN, the client enters TIME-WAIT (waiting 2 MSL) and sends ACK to the server with ack = q + 1.
After the server receives this ACK, it enters CLOSED.
The client still has to wait 2 MSL. If it receives no retransmitted FIN from the server during that time, the ACK is considered to have arrived, the wave ends and the client moves to CLOSED; otherwise it retransmits the ACK.
Why wait 2 MSL (Maximum Segment Lifetime)?
If the client did not wait and its port were immediately reused by a new application while the server still had packets in flight, the new application would receive stale packets and the data would get confused. The safest approach is to wait until every packet the server sent has died out in the network before reusing the port.
One MSL guarantees that the final ACK from the actively closing side can reach the peer.
One MSL guarantees that, if the peer does not receive that ACK, its retransmitted FIN can still reach the client.
Why four times instead of three times?
If it were three, the server's ACK and FIN would have to be merged into a single segment. But the server may still need time to finish sending its remaining data, so it sends the ACK immediately and the FIN only later; merging them would delay the reply so long that the client might think its FIN was lost and keep retransmitting it.
TCP places the data to be sent into a send buffer and the data it receives into a receive buffer. The sender often produces data faster than the receiver can digest it, so flow control is needed: the sender's output is controlled by the size of the receiver's buffer. If the peer's receive buffer is full, the sender cannot continue sending. This flow-control process requires maintaining a send window at the sender and a receive window at the receiver.
TCP sliding windows come in two kinds: the send window and the receive window.
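A toy sketch of receive-window flow control: the sender may only have as many unacknowledged bytes in flight as the receiver's advertised window allows (class name and sizes are made up for illustration):

```typescript
class FlowControlledSender {
  private nextSeq = 0;    // next byte to send
  private lastAcked = 0;  // highest byte acknowledged by the receiver
  private peerWindow = 4; // receive window advertised by the peer (bytes)

  // Bytes we are still allowed to put on the wire.
  get available(): number {
    return this.peerWindow - (this.nextSeq - this.lastAcked);
  }

  send(bytes: number): boolean {
    if (bytes > this.available) return false; // window full: must wait for an ACK
    this.nextSeq += bytes;
    return true;
  }

  // Each ACK carries the peer's current window size, which may shrink or grow.
  onAck(ackedUpTo: number, advertisedWindow: number) {
    this.lastAcked = ackedUpTo;
    this.peerWindow = advertisedWindow;
  }
}

const sender = new FlowControlledSender();
console.log(sender.send(3)); // true
console.log(sender.send(3)); // false: only 1 byte of window left
sender.onAck(3, 4);          // receiver consumed the data and re-opened the window
console.log(sender.send(3)); // true
```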
Different in essence:
Ajax, short for Asynchronous JavaScript and XML, is a web development technique for creating interactive web applications.
WebSocket is a protocol introduced with HTML5 that enables real-time, two-way communication between the browser and the server.
Different life cycles:
WebSocket is a long-lived connection; the session is kept open the whole time.
An Ajax connection is closed once the request and response complete.
Scope of application: WebSocket suits scenarios that need real-time, server-initiated updates (chat, live quotes, notifications); Ajax suits ordinary request/response data fetching.
Initiator: an Ajax request can only be initiated by the client, whereas with WebSocket, once the connection is established, the server can actively push messages to the client (see the sketch after this list).
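A minimal browser-side sketch of the difference in who initiates traffic, assuming made-up endpoints at example.com:

```typescript
// Ajax: the client initiates every exchange, and the connection ends with the response.
async function pollStock(): Promise<number> {
  const res = await fetch("https://example.com/api/stock?id=42");
  const data = await res.json();
  return data.stock;
}

// WebSocket: a long-lived connection; once open, the server can push at any time.
const ws = new WebSocket("wss://example.com/stock");
ws.onopen = () => ws.send(JSON.stringify({ subscribe: 42 }));
ws.onmessage = (event) => {
  console.log("server pushed:", event.data); // arrives without the client asking
};
```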
For example, in an e-commerce scenario the stock of an item may change and needs to be reflected to the user promptly. With short polling, the client keeps sending requests and the server checks for changes and responds immediately every time, whether or not anything has changed.
With long polling, if nothing has changed the server does not respond right away; it holds the request until a change occurs or a timeout (usually ten-odd seconds) is reached. While a request is pending, the client does not need to keep sending new ones, so the load on both sides is reduced.
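A client-side sketch of long polling, assuming a hypothetical /api/stock-changes endpoint that holds the request until something changes or it times out:

```typescript
declare function updateStockUi(change: unknown): void; // placeholder for real UI code

async function longPoll(): Promise<void> {
  while (true) {
    try {
      // The server holds this request open until the stock changes or ~15s pass.
      const res = await fetch("/api/stock-changes?since=latest");
      if (res.status === 200) {
        const change = await res.json();
        updateStockUi(change);
      }
      // On a timeout response we simply loop and wait again.
    } catch {
      // Network hiccup: back off briefly before re-polling.
      await new Promise((r) => setTimeout(r, 2000));
    }
  }
}

longPoll();
```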
The reason congestion control is needed is that the overall network environment may be very poor and packets are easily lost, so the sender has to take the network condition into account.
For congestion control, TCP mainly maintains two core states: the congestion window (cwnd) and the slow-start threshold (ssthresh), and builds on them with slow start, congestion avoidance, fast retransmit and fast recovery.
A relatively conservative slow-start algorithm is used first so the sender adapts to the network gradually. At the beginning of a transmission the two sides establish a connection via the three-way handshake and learn each other's receive-window size, then initialize their congestion windows; after that, the congestion window doubles after every round of RTT (round-trip time) until the slow-start threshold is reached.
Then congestion avoidance starts. Where the congestion window previously doubled every RTT, during congestion avoidance it only increases by one per RTT.
Fast retransmission
If a packet is lost during TCP transmission, the receiver sends duplicate ACKs. For example, if packet 5 is lost but packets 6 and 7 arrive, the receiver keeps acknowledging packet 4 for each of them. When the sender sees 3 duplicate ACKs it realizes a packet has been lost and retransmits it immediately, without waiting for the RTO (retransmission timeout).
Selective retransmission (SACK): the SACK option can be added to the TCP header; it marks the ranges of data that have already arrived with left-edge/right-edge pairs, so the sender only retransmits the segments that have not been received.
Fast recovery
If the sender receives 3 duplicate ACKs and detects packet loss, it concludes that the network has entered a congested state and enters the fast-recovery phase:
The slow-start threshold is reduced to half of the current congestion window.
The congestion window is then set to that new threshold.
After that, the congestion window grows linearly to adapt to the network condition.
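A toy simulation of how cwnd evolves under slow start, congestion avoidance and fast recovery (units are segments; the rules follow the simplified description above, not a full TCP implementation):

```typescript
let cwnd = 1;       // congestion window, in segments
let ssthresh = 16;  // slow-start threshold

// Called once per RTT when everything was acknowledged in time.
function onRttWithoutLoss() {
  if (cwnd < ssthresh) {
    cwnd *= 2;      // slow start: exponential growth
  } else {
    cwnd += 1;      // congestion avoidance: linear growth
  }
}

// Called when 3 duplicate ACKs signal a lost packet (fast retransmit + fast recovery).
function onTripleDuplicateAck() {
  ssthresh = Math.max(Math.floor(cwnd / 2), 1); // halve the threshold
  cwnd = ssthresh;                              // window drops to the new threshold
  // The lost segment is retransmitted immediately, then cwnd grows linearly again.
}

for (let rtt = 0; rtt < 6; rtt++) onRttWithoutLoss(); // 1 -> 2 -> 4 -> 8 -> 16 -> 17 -> 18
onTripleDuplicateAck();                               // ssthresh = 9, cwnd = 9
```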
Its purpose is to send a probe request to find out which constraints a request to a given target address must satisfy, and then send the real request according to those constraints.
For example, the preflight check for cross-origin resources is sent first with the HTTP OPTIONS method and is used to handle cross-origin (CORS) requests.
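A sketch of both sides of a preflighted request; the server part assumes Node.js, and the origins, header names and paths are made up for illustration:

```typescript
import { createServer } from "node:http";

createServer((req, res) => {
  // Headers that tell the browser which cross-origin requests are allowed.
  res.setHeader("Access-Control-Allow-Origin", "https://app.example.com");
  res.setHeader("Access-Control-Allow-Methods", "GET, POST, PUT");
  res.setHeader("Access-Control-Allow-Headers", "Content-Type, X-Token");

  if (req.method === "OPTIONS") {
    // The preflight probe: answer with the constraints only, no body.
    res.statusCode = 204;
    res.end();
    return;
  }

  // The real request, sent by the browser only if the preflight allowed it.
  res.end(JSON.stringify({ ok: true }));
}).listen(3000);

// Browser side: a non-simple request (custom header) makes the browser send
// OPTIONS first, then the PUT below if the preflight passes.
fetch("https://api.example.com/items/1", {
  method: "PUT",
  headers: { "Content-Type": "application/json", "X-Token": "abc" },
  body: JSON.stringify({ stock: 10 }),
});
```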
Advantages:
Flexible and extensible: apart from the basic rules that spaces separate words and newlines separate fields, there are few other restrictions; it can transmit not only text but also any kind of resource, such as images and video.
Reliable transport: HTTP is based on TCP/IP and inherits its reliability.
Request-response model: every request has a corresponding response.
Stateless: each HTTP request is independent and unrelated to the others, and by default no context information needs to be saved.
Disadvantages:
Plain-text transmission, which is not secure.
Reusing a TCP connection can cause head-of-line blocking.
Stateless: in long-connection scenarios, a large amount of context has to be saved to avoid transmitting lots of repeated information.
OSI seven-layer model
Application layer
Presentation layer
Session layer
Transport layer
Network layer
Data link layer
Physical layer
TCP/IP four-layer model (and how it maps to OSI):
Application layer: OSI application, presentation and session layers, e.g. HTTP
Transport layer: OSI transport layer, e.g. TCP/UDP
Network layer: OSI network layer, e.g. IP
Link layer: OSI data link and physical layers
TCP is a connection-oriented, reliable, transport layer communication protocol
UDP is a connectionless transport-layer communication protocol; it inherits the characteristics of IP and is datagram-based.
Why is TCP reliable? TCP's reliability comes from it being stateful and controlled:
It records exactly which data has been sent, which data the peer has received and which it has not, and it guarantees that packets arrive in order with no errors allowed; this is being stateful.
When it notices that a packet has been lost or the network is in poor condition, TCP adjusts its behavior accordingly, throttling its sending rate or retransmitting; this is being controlled.
By contrast, UDP is stateless and uncontrolled.
HTTP/2 improves performance mainly through:
Header compression
Multiplexing
Server Push
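A minimal sketch of HTTP/2 server push with Node's built-in http2 module (it assumes key.pem/cert.pem exist, since browsers require TLS for HTTP/2; header compression and multiplexing come with the protocol itself):

```typescript
import { createSecureServer } from "node:http2";
import { readFileSync } from "node:fs";

// key.pem / cert.pem are assumed to exist.
const server = createSecureServer({
  key: readFileSync("key.pem"),
  cert: readFileSync("cert.pem"),
});

server.on("stream", (stream, headers) => {
  if (headers[":path"] === "/") {
    // Server push: proactively send a resource the page will need,
    // over the same multiplexed connection.
    stream.pushStream({ ":path": "/style.css" }, (err, pushStream) => {
      if (err) return;
      pushStream.respond({ ":status": 200, "content-type": "text/css" });
      pushStream.end("body { color: teal; }");
    });

    stream.respond({ ":status": 200, "content-type": "text/html" });
    stream.end('<link rel="stylesheet" href="/style.css"><p>hello h2</p>');
  } else {
    stream.respond({ ":status": 404 });
    stream.end();
  }
});

server.listen(8443);
```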