Google says its homegrown QUIC networking protocol can speed up web browsing – enough so that it's planning to propose it to the IETF standards body to make it part of the next-generation internet.
The online advertising giant has been quietly working on QUIC since 2013, after its work on the SPDY protocol successfully formed the groundwork for the IETF's HTTP/2 standard.
The idea behind QUIC is to speed up latency-sensitive web applications (like, say, search) by reducing the network round-trip time (RTT) that it takes to establish a connection to a server.
"The standard way to do secure web browsing involves communicating over TCP + TLS, which requires 2 to 3 round trips with a server to establish a secure connection before the browser can request the actual web page," Google's Chrome team explained in a Friday blog post. "QUIC is designed so that if a client has talked to a given server before, it can start sending data without any round trips, which makes web pages load faster."
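The round-trip accounting behind that claim can be sketched in a few lines. This is a toy illustration, not real protocol code – the function name and the protocol labels are made up, and the round-trip counts are the rough figures the article describes (a full TLS-over-TCP handshake versus QUIC's cached-credentials resumption):

```python
# Toy sketch (not a real QUIC implementation): how many network round
# trips each setup needs before the browser can send its first request.

def round_trips_to_first_request(protocol: str, seen_server_before: bool) -> int:
    """Return round trips spent on connection setup alone."""
    if protocol == "tcp+tls":
        # 1 round trip for the TCP handshake, plus 2 for a full TLS
        # handshake or 1 for an abbreviated/resumed one.
        return 1 + (1 if seen_server_before else 2)
    if protocol == "quic":
        # First contact needs one round trip to fetch the server's
        # credentials; repeat connections can send data immediately.
        return 0 if seen_server_before else 1
    raise ValueError(f"unknown protocol: {protocol}")

print(round_trips_to_first_request("tcp+tls", False))  # 3
print(round_trips_to_first_request("quic", True))      # 0
```

The "talked to a given server before" case is the interesting one: the client reuses credentials it cached on a previous visit, so setup costs zero round trips.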
QUIC achieves this by running a flavor of TLS (Transport Layer Security) over the internet's UDP protocol instead of the customary TCP. This allows it to avoid some of the bottlenecks that can happen when a TCP connection loses packets, as explained in Google's FAQ, here.
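The bottleneck in question is head-of-line blocking: TCP delivers one strictly ordered byte stream, so a single lost packet holds up everything behind it, even data belonging to unrelated requests. Over UDP, QUIC can multiplex independent streams so a loss only stalls the stream it hit. A toy simulation (stream names and packet layout invented for illustration):

```python
# Toy sketch: one lost packet stalls everything on a single TCP
# connection, but only one stream under QUIC-style multiplexing.

packets = [
    {"seq": 1, "stream": "a", "lost": False},
    {"seq": 2, "stream": "b", "lost": True},   # dropped in transit
    {"seq": 3, "stream": "a", "lost": False},
]

# TCP: a single ordered byte stream -- nothing after the gap reaches
# the application until the lost packet is retransmitted.
tcp_delivered = []
for p in packets:
    if p["lost"]:
        break  # head-of-line blocking: everything behind the gap waits
    tcp_delivered.append(p["seq"])

# QUIC: streams are delivered independently, so stream "a" keeps
# flowing and only stream "b" waits for the retransmission.
quic_delivered = [p["seq"] for p in packets if not p["lost"]]

print(tcp_delivered)   # [1]
print(quic_delivered)  # [1, 3]
```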
With QUIC enabled, a web browser connecting to a server it has never communicated with before needs just one round trip to set up a secure connection – about the same as an ordinary, unencrypted TCP handshake.
Fewer connection round trips mean a faster web: how QUIC speeds up browsing
The reduction in latency from this "zero-round-trip" feature is even more significant when compared to a secure TCP connection. Establishing an initial connection over TLS typically takes three round trips – three times as many as an ordinary, unencrypted TCP connection – and each subsequent connection, while faster, still needs two. QUIC, meanwhile, offers security equivalent to TLS over TCP but with lower latency.
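Those multiples translate into real milliseconds on a slow link. A back-of-the-envelope comparison, assuming a (made-up) 100 ms round-trip time and the round-trip counts quoted above:

```python
# Hypothetical numbers: time spent on connection setup alone before the
# first request can be sent. The 100 ms RTT is an assumption.
rtt_ms = 100

setup = {
    "plain TCP":            1 * rtt_ms,  # one handshake round trip
    "TCP + TLS (first)":    3 * rtt_ms,  # ~3x plain TCP
    "TCP + TLS (repeat)":   2 * rtt_ms,  # ~2x plain TCP
    "QUIC (first contact)": 1 * rtt_ms,  # same as plain TCP
    "QUIC (repeat)":        0 * rtt_ms,  # zero-round-trip resumption
}

for name, ms in setup.items():
    print(f"{name:22s} {ms:4d} ms before the request can be sent")
```

On a link with a long round-trip time – a congested mobile network, say – that 200-300 ms saved per connection is exactly where Google's one-second page-load improvements come from.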
Chrome users: You're the guinea pigs
It's a clever idea, but how well does it actually work? To find out, Google has added QUIC support to recent builds of Chrome and enabled it for some of its online services, making it possible to test the protocol's real-world performance at scale.
The early results are encouraging. Google says that even on highly optimized sites like Google Search it has seen a 3 per cent reduction in average page load times – which, while not huge, is nothing to sneeze at.
The effects are more dramatic over poor or slow internet links; think mobile networks. Google says the very crappiest connections saw their Google Search page load times reduced by one full second when using QUIC instead of TCP/TLS. And YouTube videos streamed over QUIC had to rebuffer 30 per cent less often.
So when do we all get to benefit from this? The answer is someday, maybe. Google says around half of all requests from Chrome browsers to Google's servers are already being served over QUIC. Convincing the rest of the world to get on board may take some time.
"We plan to formally propose QUIC to the IETF as an Internet standard but we have some housekeeping to do first," the Google team said.
For one thing, Google's current implementation of QUIC still uses SPDY instead of HTTP/2 – tut tut, Google – and the development team still has work to do on efficiency, error correction, and congestion control, as well as adding support for multipath connections.
Anyone who would like to give the reference implementation a good knocking around, however, is invited to grab the source code and take it for a spin. More information is available at the official QUIC project site, here. ®