Understanding the QUIC protocol in 10 minutes

Before introducing QUIC, here is a quick review of how HTTP developed.

First, we wanted a protocol that could fetch the content of documents on the network through a method called GET. The method was later written into the official specification, and HTTP/1.0 was born. However, this version of HTTP had an obvious flaw: it did not support persistent connections, so the connection had to be torn down after every request/response exchange, which was inefficient.

Not long after, the HTTP/1.1 standard was developed, and it remains one of the most widely used standards on the Internet. HTTP/1.1 fixed the earlier lack of persistent connections and also added caching and cache-control modules. However, even though HTTP/1.1 solved some of the connection performance problems, it is still not very efficient, because HTTP suffers from head-of-line blocking (which I described in my article on HTTP/2.0). Suppose five requests are sent at the same time, as shown in the figure below. If request 1 has not been processed, requests 2, 3, 4, and 5 are blocked on the client side until request 1 completes, and only then can they be sent one by one. When the network is smooth the performance impact is small, but once request 1 fails to reach the server for some reason, or its response is delayed by network congestion, every subsequent request is affected and may block indefinitely, and the problem becomes serious.

HTTP/1.1 introduced a pipelining design to address head-of-line blocking, but pipelined requests must still be sent in order, so the problem is not fundamentally solved. As the protocol continued to evolve, HTTP/2.0 was introduced.
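The head-of-line blocking described above can be sketched in a few lines of Python. This is a toy model, not real HTTP: the `stalled` set stands in for a request that never completes, and everything queued behind it stays blocked.

```python
from collections import deque

def process_pipelined(requests, stalled):
    """Toy model of HTTP/1.1 pipelining: responses must come back in
    request order, so one stalled request blocks everything behind it."""
    completed = []
    queue = deque(requests)
    while queue:
        if queue[0] in stalled:
            break  # head of line never completes; the rest wait forever
        completed.append(queue.popleft())
    return completed, list(queue)

# If request 1 stalls, requests 2-5 never even get their turn.
done, blocked = process_pipelined([1, 2, 3, 4, 5], stalled={1})
```

With `stalled={1}`, `done` is empty and all five requests sit in `blocked`, which is exactly the failure mode the paragraph above describes.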
HTTP/2.0

HTTP/2.0 solves head-of-line blocking at the HTTP layer by splitting traffic into streams and frames. A single TCP connection is divided into multiple streams, each with its own stream ID, and a stream can carry data from client to server or from server to client. HTTP/2.0 also splits the information to be transmitted into frames encoded in binary format: the headers and body of a message are carried in separate frames, and frames belonging to different streams travel interleaved on the same connection. Here's a diagram.

As you can see, through these two mechanisms HTTP/2.0 distributes multiple requests across different streams and transmits them as binary frames, so frames from different streams can be sent in any order, without a global ordering guarantee.

QUIC Protocol

Although HTTP/2.0 solves HTTP-level head-of-line blocking, every HTTP/2.0 connection is still established and transported over TCP, which enforces strict ordering on the packets it processes. This means that when a packet carrying part of one stream is lost, the server cannot process the other streams either; it must wait for the client to retransmit the lost data first. For example, if a request uses three streams and a packet of stream 2 is lost for some reason, processing of stream 1 and stream 3 also blocks, and the server resumes only after it receives the retransmitted data of stream 2. This is the crux of the TCP connection problem.

With this problem in mind, let's set TCP aside for a moment and get to know the QUIC protocol. QUIC, lowercase "quic", sounds like "quick". It is a UDP-based transport protocol proposed by Google, so QUIC is also known as Quick UDP Internet Connections. The first notable feature of QUIC is that it is fast. Why do we say it is fast, and what exactly is fast about it?
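The stream-and-frame mechanism can be illustrated with a small Python sketch. The frame layout here is purely illustrative (real HTTP/2.0 frames are binary structures with typed headers), but it shows how stream IDs let interleaved frames be sorted back into independent requests:

```python
from itertools import zip_longest

def split_into_frames(stream_id, headers, data, chunk=4):
    """Split one request into a HEADERS frame plus DATA frames,
    each tagged with its stream id (frame fields are illustrative)."""
    frames = [(stream_id, "HEADERS", headers)]
    for i in range(0, len(data), chunk):
        frames.append((stream_id, "DATA", data[i:i + chunk]))
    return frames

def interleave(*per_stream_frames):
    """Frames from different streams may be sent in any interleaved
    order; here we round-robin them onto the 'wire'."""
    out = []
    for group in zip_longest(*per_stream_frames):
        out.extend(f for f in group if f is not None)
    return out

def frames_for(stream_id, wire):
    """Receiver side: pick out one stream's frames, in arrival order."""
    return [f for f in wire if f[0] == stream_id]
```

Because each frame carries its stream ID, the receiver can reconstruct every request regardless of how the frames were interleaved, which is why no global ordering across streams is needed.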
As we all know, the HTTP protocol uses TCP for message transmission at the transport layer, and HTTPS and HTTP/2.0 additionally use the TLS protocol for encryption. Establishing such a connection costs roughly three round trips of handshake delay: one for the TCP three-way handshake and two for the TLS (1.2) handshake, as shown in the figure below. For the many short-connection scenarios, this handshake latency has a large impact and cannot be eliminated.

In contrast, QUIC establishes connections faster. It uses UDP as its transport-layer protocol, which removes the TCP handshake delay. Moreover, QUIC's encryption layer uses the latest version of the TLS protocol, TLS 1.3. Compared with TLS 1.1 and 1.2, TLS 1.3 allows the client to start sending application data without waiting for the handshake to fully complete, and it supports both 1-RTT and 0-RTT modes, thus achieving fast connection establishment.

As we mentioned above, HTTP/2.0 solves HTTP-level head-of-line blocking, but its connections are still TCP-based, so blocking at the transport layer remains. UDP itself has no notion of ordered connection delivery, and the streams QUIC builds on top of it are isolated from one another: loss on one stream does not block the processing of other streams, so QUIC avoids transport-level head-of-line blocking.

TCP ensures data reliability with a sequence number + acknowledgment mechanism. Once a packet carrying a sequence number is sent, the client expects the server to respond within a certain period, and it retransmits the packet until the server receives it and acknowledges. So how does TCP determine its retransmission timeout? TCP generally uses an adaptive retransmission algorithm, in which the timeout is dynamically adjusted according to the measured round-trip time (RTT). But because a retransmitted TCP segment reuses the same sequence number, the sender cannot tell whether an acknowledgment refers to the original transmission or the retransmission, so the RTT samples are not accurate. QUIC does not use the TCP protocol, yet it still guarantees reliability.
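The adaptive retransmission timeout mentioned above can be sketched as follows. The smoothing formula and constants follow the standard RTO computation (RFC 6298); the class itself is just an illustration, not TCP's actual kernel implementation:

```python
class RtoEstimator:
    """Sketch of TCP's adaptive retransmission timeout (RFC 6298):
    the RTO is derived from a smoothed RTT plus a variance term,
    so the timeout tracks current network conditions."""

    def __init__(self):
        self.srtt = None    # smoothed round-trip time
        self.rttvar = None  # round-trip time variance

    def sample(self, rtt):
        """Feed one RTT measurement (seconds); return the new RTO."""
        if self.srtt is None:
            # First measurement initializes both estimators.
            self.srtt = rtt
            self.rttvar = rtt / 2
        else:
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - rtt)
            self.srtt = 0.875 * self.srtt + 0.125 * rtt
        # RFC 6298 recommends a 1-second lower bound on the RTO.
        return max(1.0, self.srtt + 4 * self.rttvar)
```

The weakness the text points out is in the *samples* fed to this estimator: with ambiguous sequence numbers, TCP cannot always tell which transmission an ACK belongs to, so `rtt` itself may be wrong; QUIC's unique packet numbers remove that ambiguity.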
The mechanism by which QUIC achieves reliability is the Packet Number (PN), which can be thought of as a replacement for TCP's sequence number and is likewise monotonically increasing. The difference is that the Packet Number increases with every packet sent, regardless of whether the server has received the previous one, whereas a TCP sequence number only advances after the data has been acknowledged. For example, if the packet with PN = 10 is delayed for some reason, the client retransmits the same data in a new packet with PN = 11. When the acknowledgment for PN = 10 arrives later, the client knows exactly which transmission it refers to, so the measured RTT, the time the PN = 10 packet actually spent in the network, is relatively accurate.

Packet numbers guarantee that packets get delivered, but how is the reliability of the data itself guaranteed? QUIC introduces the concept of the stream offset: a stream can carry multiple chunks of data, each identified by its offset within the stream and carried in PN-identified packets. When all the chunks have reached the server, they are reassembled in offset order, which restores the original byte sequence and thus ensures both the reliability and the ordering of the data.

As we all know, the TCP protocol is implemented by the operating system kernel; applications can only use it and cannot modify it. Although mobile networks are developing very fast, client systems are updated very slowly. I still see many computers running Windows XP, even though far newer systems have been available for years. Server-side systems do not depend on users upgrading, but they are also conservative and slow to change, because operating system upgrades involve updates to low-level software and runtime libraries. An important feature of the QUIC protocol is pluggability: living in user space, it can be updated and upgraded dynamically.
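The stream-offset reassembly described above can be sketched like this, assuming a toy receiver that has already collected every chunk of one stream:

```python
def reassemble(chunks):
    """Receiver-side sketch: each chunk carries (stream offset, bytes).
    Chunks may arrive in any order; sorting by offset restores the
    original byte sequence once every chunk is present."""
    buffer = {offset: data for offset, data in chunks}
    result = b""
    for offset in sorted(buffer):
        # A gap here would mean a chunk is still missing, and the
        # stream would have to wait for its retransmission.
        assert offset == len(result), "gap in stream: chunk missing"
        result += buffer[offset]
    return result

# Chunks arriving out of order are still assembled correctly:
message = reassemble([(6, b"world"), (0, b"hello ")])
```

Crucially, this buffering happens per stream, so a missing chunk stalls only its own stream, which is the isolation property that lets QUIC avoid transport-level head-of-line blocking.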
QUIC implements its congestion control algorithms at the application layer, requiring no support from the operating system or kernel. Switching to a different congestion control algorithm only requires reloading it on the server, with no shutdown and reboot. (If you are not familiar with sliding windows, you can read my article TCP Basics; some sliding-window concepts are used below.)

QUIC also implements flow control, again using window_update to tell the peer how many bytes it can accept. And unlike TCP, whose headers are neither encrypted nor authenticated and can therefore be tampered with in transit, QUIC messages are authenticated and encrypted. Any modification to a QUIC message is detected by the receiver in time, ensuring security.

In summary, QUIC has the following advantages over HTTP/2.0 over TCP:

- It uses the UDP protocol, eliminating the three-way connection handshake and shortening the time needed to establish a TLS connection.
- It solves the head-of-line blocking problem.
- It achieves dynamic pluggability: congestion control is implemented at the application layer and can be switched at any time.
- Message headers and bodies are separately authenticated and encrypted, ensuring security.
- It supports smooth connection migration: your phone or mobile device can switch between 4G and networks such as WiFi without disconnecting and reconnecting, even without the user noticing anything.

QUIC-related information

The QUIC protocol is relatively complex, and it would be difficult for the author to implement a full stack alone. Interested readers can first look at the available open-source implementations.

1) Chromium: https://github.com/hanpfei/chromium-net

This is the official implementation.
Its advantages are naturally many: it is officially maintained by Google, has essentially no pitfalls, and can follow Chrome updates to the latest version at any time. However, compiling Chromium is troublesome, as it has its own separate build toolchain. This option is not recommended for the time being.

2) proto-quic: https://github.com/google/proto-quic

A part of the QUIC protocol stripped out of Chromium, but its GitHub home page announces that it is no longer supported and is for experimental use only. Not recommended.

3) goquic: https://github.com/devsisters/goquic

goquic wraps libquic, also stripped from Chromium, in a Go package. It has not been maintained for several years and supports only up to QUIC-36. goquic provides a reverse proxy, but testing shows that the latest Chrome browsers cannot use it because its QUIC version is too old. Not recommended.

4) quic-go: https://github.com/lucas-clemente/quic-go

quic-go is a QUIC protocol stack written entirely in Go. Development is very active, it is already used in Caddy, and it is MIT licensed. It currently looks like the best option.

So, for small and medium-sized teams or individual developers, the recommended solution is to deploy QUIC with Caddy: https://github.com/caddyserver/caddy/wiki/QUIC. Caddy is not built specifically to provide QUIC; it is intended to be a hassle-free HTTPS web server (Caddy obtains and renews certificates automatically), and QUIC is just an auxiliary feature of it (though in practice, it seems more people use it for QUIC).