A Quick Introduction To HTTP/2

30th July, 2019

We all know that it’s important to keep up with advances in technology. However, whether you’re a web developer, a small business owner, or even a sole trader, the pace of development in the digital world makes it tough to stay fully up to date. For many people, the development and spread of HTTP/2 have gone under the radar. Indeed, you’d be forgiven for not knowing that HTTP/2 even exists, let alone how it’s transforming data transfer across the internet.


Getting hyper

HTTP/2’s origins lie in a Google project called SPDY (pronounced ‘speedy’). The idea behind SPDY was to create a faster network connection than HTTP/1.1 or HTTPS could offer. Hypertext Transfer Protocol is the language your browser uses to talk to a web server, and every website relies on it to deliver content. HTTP/1.1 was the standard version of the protocol for over 20 years. HTTPS is not a separate version; it is HTTP running over an encrypted (TLS) connection. Both kinds of connection can suffer from high latency.


The challenge of high latency

In the context of network connections, latency is the round-trip delay: how long it takes a request to travel from your computer to a server and the response to come back. It’s an unavoidable cost, because that information travels at close to the speed of light – but no faster. Opening a web page requires many of those round trips, as your browser sends multiple requests to the server and receives responses in turn.

An HTTP/1.1 or HTTPS connection only allows one request or response to travel at a time. The numerous requests needed to open a website happen consecutively, not concurrently. Even if a browser opens more than one HTTP/1.1 connection, requests often have to wait in a virtual queue. This queuing of requests and responses is called ‘head-of-line blocking’, and it’s the main reason HTTP/1.1 connections suffer from high latency. It was also the main issue Google set out to solve with SPDY, and later HTTP/2.
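A rough back-of-the-envelope sketch makes the cost visible. The round-trip time and request count below are illustrative assumptions, not measured values:

```python
# Rough model of head-of-line blocking. With one request/response in
# flight at a time, page load time grows linearly with the number of
# round trips; with multiplexing, one round trip can carry them all.

RTT_MS = 100       # assumed round-trip time to the server
NUM_REQUESTS = 30  # assumed number of resources on the page

# HTTP/1.1 over a single connection: each request waits its turn.
sequential_ms = NUM_REQUESTS * RTT_MS

# Idealised HTTP/2: every request shares one connection concurrently,
# so the whole page costs roughly a single round trip.
multiplexed_ms = RTT_MS

print(f"HTTP/1.1 (queued):    {sequential_ms} ms")   # 3000 ms
print(f"HTTP/2 (multiplexed): {multiplexed_ms} ms")  # 100 ms
```

This ignores bandwidth, server processing time, and connection setup, but it captures why the queue, rather than the wire speed, dominates HTTP/1.1 page loads.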


Your local multiplex

There are a number of facets of HTTP/2 that allow it to largely eliminate this queuing. The most notable (and most effective) is a process called multiplexing, which means a single connection can carry multiple requests and responses at once. It’s like adding an extra lane or two to a freeway. Browsers don’t have to open extra connections, and requests and responses don’t have to ‘queue up’ for a free one.
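The mechanism can be sketched in a few lines. This toy model is purely illustrative (real HTTP/2 frames are binary and carry headers, flags, and flow-control information), but it shows how tagging each chunk with a stream ID lets one connection interleave several responses:

```python
# Toy model of HTTP/2 multiplexing: responses are split into frames,
# each tagged with a stream ID, so frames from different responses can
# be interleaved on one connection and reassembled at the other end.

from collections import defaultdict
from itertools import zip_longest

def to_frames(stream_id: int, body: str, size: int = 4):
    """Split one response body into (stream_id, chunk) frames."""
    return [(stream_id, body[i:i + size]) for i in range(0, len(body), size)]

def interleave(*streams):
    """Merge frames from several streams onto one 'connection'."""
    wire = []
    for group in zip_longest(*streams):
        wire.extend(frame for frame in group if frame is not None)
    return wire

def demultiplex(wire):
    """Reassemble each response from the interleaved frames."""
    out = defaultdict(str)
    for stream_id, chunk in wire:
        out[stream_id] += chunk
    return dict(out)

html = to_frames(1, "<html>...</html>")
css = to_frames(3, "body{margin:0}")
wire = interleave(html, css)  # frames from both streams share the wire
print(demultiplex(wire))      # {1: '<html>...</html>', 3: 'body{margin:0}'}
```

Neither response has to wait for the other to finish, which is exactly the queuing problem that HTTP/1.1 could not avoid.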

One HTTP/2 connection can handle all the transfers needed to open a web page far more quickly, without changing the data being transferred. For that reason, implementing HTTP/2 for your site can be quick and easy. You don’t need to write any new code – often, all you have to do is update your server software. The improvements can be profound, especially for SEO, since page load time is a known factor in search rankings.
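As an example of the ‘just update your server software’ route: on nginx 1.9.5 or later built with the ngx_http_v2_module, enabling HTTP/2 is a one-word change to the listen directive. The domain and certificate paths below are placeholders:

```nginx
server {
    # 'http2' upgrades this TLS listener to HTTP/2; browsers only
    # negotiate HTTP/2 over encrypted connections.
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;
}
```

After reloading nginx, `curl -sI --http2 https://example.com` should show `HTTP/2` on the first line of the response if negotiation succeeded.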


Following protocol

Along with limited bandwidth, high latency is one of the two factors most likely to slow down a network connection. HTTP/2 allows requests and responses to travel concurrently between a browser and a server, via multiplexing. That dramatically reduces the effect of latency and makes the new kind of connection much faster. And that is exactly what’s needed in today’s always-on streaming society, where streaming platforms such as Netflix and YouTube are estimated to account for a large share of all internet traffic.