Talk about HTTP pipelining
HTTP pipelining is a technique in which multiple HTTP requests are sent in a batch over a single connection, without waiting for the server's responses in between.
Pipelining requests can dramatically improve HTML page load times, especially on high-latency connections such as satellite Internet. On broadband connections the speedup is less significant, because the server must support HTTP/1.1 and must reply in the same order as the client's requests, so the whole connection remains first-in-first-out and head-of-line (HOL) blocking can occur, causing delays. The asynchronous multiplexing in HTTP/2.0 and SPDY solves this problem. Because multiple HTTP requests can be packed into a single TCP packet, HTTP pipelining needs fewer TCP packets on the wire, reducing network load.
The pipelining mechanism requires a persistent connection. Successive GET and HEAD requests can always be pipelined; whether a sequence of other idempotent requests, such as PUT and DELETE, can be pipelined depends on whether the requests in the sequence depend on one another. Non-idempotent methods such as POST should not be pipelined. In addition, pipelining should not be activated on a newly established connection, because the peer (server) may not support HTTP/1.1.
HTTP pipelining relies on support from both client and server. Servers conforming to HTTP/1.1 must support pipelining. This does not mean the server has to produce pipelined replies, only that it must not fail when receiving pipelined requests.
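As a minimal sketch, pipelining just means writing all the requests before reading any response. The helper names and the `example.com` target below are illustrative, not part of any real API:

```python
import socket

def build_pipeline(host, paths):
    """Concatenate several GET requests so they can go out in one write."""
    return b"".join(
        f"GET {p} HTTP/1.1\r\nHost: {host}\r\nConnection: keep-alive\r\n\r\n".encode()
        for p in paths
    )

def pipelined_get(host, paths, port=80):
    """Send all requests up front; responses then arrive back in FIFO order."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(build_pipeline(host, paths))  # no waiting between requests
        sock.settimeout(2.0)
        chunks = []
        try:
            while True:
                data = sock.recv(65536)
                if not data:
                    break
                chunks.append(data)
        except socket.timeout:
            pass
        return b"".join(chunks)

# Usage (requires a server that accepts pipelined HTTP/1.1 requests):
#   raw = pipelined_get("example.com", ["/index.html", "/style.css"])
```

Note that the responses come back on the same socket in request order, so the client must parse them sequentially to match each response to its request.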
What is http pipelining
Normally, HTTP requests are sent sequentially: the next request goes out only after the response to the current request has been fully received. Because of network latency and bandwidth constraints, this introduces a significant delay before the server can send the next response.
HTTP/1.1 allows multiple HTTP requests to be written to a single socket without waiting for the corresponding responses. The client then waits for the responses, which arrive in the same order as the requests. (Note: all requests form a FIFO queue. Once a request is sent, the next can be sent without waiting for the previous response, and the server returns the responses in FIFO order.) Pipelining can greatly improve page load speed, especially on high-latency connections.
Pipelining can also reduce the number of TCP/IP packets. The MSS is typically 536–1460 bytes, so many HTTP requests can fit in a single TCP/IP packet. Reducing the number of packets needed to load a web page benefits the network as a whole, because fewer packets put less load on routers and links.
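A quick back-of-the-envelope check (the host and paths are made up) shows how easily several minimal requests fit inside even the smallest common MSS:

```python
def request_bytes(host, path):
    """One minimal GET request as raw bytes."""
    return f"GET {path} HTTP/1.1\r\nHost: {host}\r\n\r\n".encode()

paths = ["/index.html", "/style.css", "/app.js"]
total = sum(len(request_bytes("example.com", p)) for p in paths)
# All three requests together are far below even a 536-byte MSS,
# so they could travel in a single TCP segment.
print(total, total <= 536)  # 136 True
```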
HTTP/1.1 requires the server to support pipelining as well. This does not mean the server needs to pipeline its responses, only that it must not fail when a client sends pipelined requests. This could well open up a new category of evangelism bugs, because only modern browsers support pipelining.
When should we pipeline requests
Only idempotent requests (see note 1), such as GET and HEAD, can be pipelined; POST and PUT should not be. We also should not pipeline requests on a newly established connection, because we cannot be sure the origin server or proxy supports HTTP/1.1. Pipelining can therefore only use existing keep-alive connections.
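The rule above can be stated as a tiny predicate. This is a hypothetical helper, reflecting how conservative clients pipeline only GET and HEAD, and never on a fresh connection:

```python
# Methods that browsers conservatively treat as safe to pipeline.
SAFE_TO_PIPELINE = {"GET", "HEAD"}

def can_pipeline(method, connection_is_new):
    """Decide whether a request may join an HTTP/1.1 pipeline."""
    if connection_is_new:
        # The peer may not speak HTTP/1.1, so never pipeline on a new connection.
        return False
    return method.upper() in SAFE_TO_PIPELINE

print(can_pipeline("GET", connection_is_new=False))   # True
print(can_pipeline("POST", connection_is_new=False))  # False
print(can_pipeline("GET", connection_is_new=True))    # False
```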
How many requests should be pipelined
If the connection is closed prematurely, pipelining many requests is not worthwhile: we spend time writing requests to the network only to have to rewrite them on a new connection. Also, if an early request takes a long time to complete, an overly long pipeline can actually increase the delay the user perceives. The HTTP/1.1 standard gives no guidance on the ideal number of requests to pipeline; it does recommend no more than 2 keep-alive connections per server. Clearly this depends on the application. For the reasons above, browsers probably don't need a particularly deep pipeline. 2 is probably a good value, but this has yet to be tested.
What happens if a request is canceled?
If a request is canceled, does that mean the entire pipeline is canceled? Or does it mean only that the response to the canceled request should be discarded, so the other requests in the pipeline are not forced to be resent? The answer depends on many factors, including how much of the canceled request's response has already been received. The most primitive solution is simply to cancel the whole pipeline and resend all the requests; this is only possible if the requests are idempotent. This crude approach can still work well, because the requests in the pipeline likely belong to the same page-load group that is being canceled.
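The "cancel the whole pipeline and resend" strategy can be sketched like this. The `Pipeline` class and its names are illustrative, not a real browser API:

```python
class Pipeline:
    """Toy model of a client-side pipeline with FIFO response matching."""

    def __init__(self):
        self.in_flight = []          # FIFO of (method, path) awaiting responses

    def send(self, method, path):
        self.in_flight.append((method, path))

    def response_arrived(self):
        # Responses map to requests strictly in order.
        return self.in_flight.pop(0)

    def recover(self):
        """On cancel or failure, only idempotent requests may be blindly resent."""
        resend = [r for r in self.in_flight if r[0] in {"GET", "HEAD"}]
        self.in_flight = []
        return resend

p = Pipeline()
p.send("GET", "/a"); p.send("GET", "/b"); p.send("POST", "/form")
p.response_arrived()                 # /a completed before the failure
print(p.recover())                   # [('GET', '/b')]: the POST is not resent
```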
What happens if the connection fails?
If the connection fails, or the server interrupts it while a pipelined response is being downloaded, the browser must be able to resend the lost requests. This situation can be treated the same as the cancellation case discussed above.
Note
- Idempotency of an HTTP method: issuing a request for a resource once or many times should have the same side effects.
Idempotent requests are requests whose result does not change after repeated operations, such as GET: I can fetch the same resource many times, and the resource itself does not change. Whether I GET it 10 times or 100 times, nothing about the resource changes. POST is different: submitting a form 10 times versus 100 times produces different results, at the very least different new rows in the database.
explain
- In effect, HTTP pipelining moves the client's FIFO queue to the server. The client can send all of its pending requests in sequence (under the same domain, of course); after one request is sent, the next can go out without waiting for the previous response. Since the server replies in FIFO order, the client can order its requests by resource importance, for example requesting HTML before CSS, and JS and CSS before images.
- The server needs a buffer to hold responses that have been processed but not yet sent. For example, suppose the server receives requests A and B in that order, A has higher priority than B, A takes 10ms to process and B takes 1ms, and the server can process requests in parallel. B's response finishes first, but it cannot be sent first: it must stay in the buffer until A's response is ready and has been sent, and only then can B's response go out, because the server must follow the FIFO principle.
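The A/B example above can be modeled in a few lines. This is a toy model of the server-side buffer, not real server code:

```python
def flush_in_fifo_order(request_order, finished):
    """Release finished responses only while the head of the queue is done.

    `finished` maps request ids to their completed responses.
    """
    sent, buffer = [], dict(finished)
    for req in request_order:
        if req not in buffer:
            break                      # head-of-line not ready: everything waits
        sent.append(buffer[req])
    return sent

# B finished first (1ms vs 10ms), but nothing can be sent until A is done.
print(flush_in_fifo_order(["A", "B"], {"B": "resp-B"}))                 # []
print(flush_in_fifo_order(["A", "B"], {"A": "resp-A", "B": "resp-B"}))  # ['resp-A', 'resp-B']
```

This buffering requirement is exactly why pipelining still suffers head-of-line blocking: one slow response at the front holds back everything behind it.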
- HTTP pipelining is not part of HTTP/2; it is an improvement within HTTP/1.1, which otherwise cannot handle parallel requests well.
- The ordering in pipelining and the ordering in TCP are fundamentally different. Pipelining order is the order between messages; TCP order is the order among the multiple segments that make up one message. An imperfect analogy: whichever student finishes lunch first gets to play on the computer. Suppose student A enters the cafeteria first and student B second, and both eat at the same speed. By the FIFO principle, even if student A eats apples, pears, rice, and vegetables while student B eats only apples and rice, so that B finishes first having eaten less, in the pipeline student A still goes to play on the computer first and B follows. TCP, by contrast, describes the internal order within one meal: whether student A ate the apples, pears, rice, and vegetables in that order, or the vegetables, rice, and pears first.
- What pipelining accomplishes, as I understand it, is creating a way for the next request to be sent without waiting for the response to the previous one. As for what to watch out for, beyond knowing that some devices do not support it, I have no practical experience with the rest.