Although the internet is underpinned by plenty of cutting-edge technology, its structure is surprisingly basic. The unexpected love child of a 1960s military defense system and a 1990s data distribution protocol, today’s World Wide Web is actually quite brittle. Despite a design intended to offer robust and dependable connections across a spider’s web of nodes, we’ve ended up with an inherently fragile network where everything from server outages to distributed denial of service attacks can knock key services offline.
This fragility is typified by the individual data packets that underpin everything that happens online. Digital data is composed of binary bits – the on-or-off signals computers rely on to process information. Bytes, each containing eight bits, are bundled together into standalone packets containing up to 1,500 bytes. When you consider that a typical smartphone photograph or music file contains several million bytes of information, the logistics of transporting an HD movie or FPS game over a domestic internet connection come into sharp focus. Multiply that by several billion, and the challenges facing today’s internet are clear.
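The arithmetic above can be sketched in a few lines. This is a minimal illustration (the function name is hypothetical) that splits a payload into chunks, assuming the common 1,500-byte limit of an Ethernet link:

```python
MTU = 1500  # assumed maximum bytes per packet on a typical Ethernet link

def packetize(payload: bytes, mtu: int = MTU) -> list[bytes]:
    """Split a byte string into chunks no larger than mtu bytes."""
    return [payload[i:i + mtu] for i in range(0, len(payload), mtu)]

# A 5 MB "photo" becomes several thousand packets.
photo = bytes(5_000_000)
packets = packetize(photo)
print(len(packets))     # 3334 packets (3333 full ones plus a 500-byte remainder)
print(len(packets[0]))  # 1500
```

Even a modest file fans out into thousands of independent packets, each of which must find its own way to the destination.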
In a traditional Internet Protocol model, data packets may each follow a different route between a host server and a recipient device. Each route might be the fastest available at that precise millisecond, or it could simply be the most direct path on offer. It may be allocated almost arbitrarily, or micromanaged by a packet switching protocol to avoid blockages or dropped connections. Packets generally pass through a number of waypoint devices and routers (known as nodes), which forward the packet on to the next leg of its journey.
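Per-packet routing can be illustrated with a toy simulation (all node names here are hypothetical): each packet independently picks whichever path the network offers at that instant, so two packets from the same file may traverse entirely different nodes.

```python
import random

# Hypothetical paths between a host and a recipient, via intermediate nodes.
ROUTES = [
    ["host", "node-A", "node-C", "recipient"],
    ["host", "node-B", "node-C", "recipient"],
    ["host", "node-A", "node-D", "node-C", "recipient"],
]

def route_packet(seq: int) -> list[str]:
    """Return the route offered to this packet (randomised in this toy model)."""
    return random.choice(ROUTES)

for seq in range(3):
    print(f"packet {seq}: {' -> '.join(route_packet(seq))}")
```

Real routers consult routing tables rather than rolling dice, but the key point survives the simplification: the network, not the sender, decides each packet's path.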
Packets are physically transferred along fiber cabling at something approaching the speed of light. They could travel half a mile from the nearest data center to the destination device, or they may have to journey halfway around the world. And the final leg of this journey could either be along ultra-fast Fiber to the Premises broadband, or down a much slower telephone-based connection which throttles achievable speeds.
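A back-of-the-envelope calculation shows why distance matters, assuming light in fiber travels at roughly two-thirds its vacuum speed (about 200,000 km per second):

```python
FIBRE_SPEED_KM_S = 200_000  # approximate speed of light in optical fiber

def one_way_delay_ms(distance_km: float) -> float:
    """Propagation delay in milliseconds over a given fiber distance."""
    return distance_km / FIBRE_SPEED_KM_S * 1000

print(one_way_delay_ms(0.8))      # half a mile: about 0.004 ms
print(one_way_delay_ms(20_000))   # halfway around the world: about 100 ms
```

This covers propagation alone; queuing at each node and the slower final leg into the home add further delay on top.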
Once the original file has been disassembled, packets depart from a host server or machine in a logical order. Inevitably, the arrivals process is more chaotic. That’s why data packets are distributed with unique identifiers, indicating their position within the mass of arriving packages. A packet’s header will summarize the distribution protocol and conduct a roll call of delivered packets, while a trailer confirms everything has arrived safely. If it has, the destination device will reassemble the document or file into its original form. If not, remedial steps will be undertaken within fractions of a second…
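Reassembly can be sketched with sequence numbers (the field layout here is a simplification, not the actual IP header format): however chaotically packets arrive, sorting by their identifiers restores the original order.

```python
import random

original = b"The quick brown fox jumps over the lazy dog"

# Tag each 8-byte chunk with a sequence number before it departs.
packets = [(seq, original[i:i + 8])
           for seq, i in enumerate(range(0, len(original), 8))]

random.shuffle(packets)  # arrivals are rarely in sending order

# The recipient sorts by sequence number and stitches the chunks back together.
reassembled = b"".join(chunk for seq, chunk in sorted(packets))
print(reassembled == original)  # True
```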
If a packet has gone AWOL, the trailer should flag up its absence and request a replacement. If subsequent transfers also fail, the packet’s continuing absence might result in a stuttering media file, missing graphics on a web page, or some other visible sign of malaise. Voice over IP (or VoIP) calls are particularly susceptible to packet loss, resulting in robotic-sounding dialog and pixelated video footage. However, thanks to error-concealment techniques, conversations should remain audible even if 20% of packets are lost along the way.
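Spotting a missing packet amounts to comparing the sequence numbers that arrived against those expected. A minimal sketch (the function name is hypothetical):

```python
def missing_packets(expected: int, received_seqs: set[int]) -> list[int]:
    """Return the sequence numbers the roll call shows as absent."""
    return [seq for seq in range(expected) if seq not in received_seqs]

arrived = {0, 1, 3, 4, 6}           # packets 2 and 5 went AWOL in transit
print(missing_packets(7, arrived))  # [2, 5] -> request these again
```

Reliable protocols such as TCP retransmit the gaps automatically; real-time traffic like VoIP usually presses on without them, which is why loss shows up as audible glitches rather than delays.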
The process of distributing data packets gets a little more complicated when information is being sent securely. HTTPS websites use digital certificates to prove their identity and negotiate encryption keys with recipient machines, ensuring sensitive data is distributed through an encrypted connection for optimal security. Data may arrive more slowly because routers can’t drop data packets to optimize the quality of service, or to prioritize more important transfers. Nevertheless, HTTPS is rightly regarded by search engines as the gold standard for secure web-based transactions.
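The underlying idea of a secure connection – a shared secret that lets the recipient verify each piece of data – can be sketched with Python's standard hmac module. This is illustrative only: HTTPS actually negotiates its keys through a TLS handshake, and the key name below is hypothetical.

```python
import hashlib
import hmac

SECRET = b"session-key-agreed-during-handshake"  # hypothetical shared key

def sign(payload: bytes) -> bytes:
    """Compute an authentication tag for a packet's payload."""
    return hmac.new(SECRET, payload, hashlib.sha256).digest()

def verify(payload: bytes, tag: bytes) -> bool:
    """Check the tag in constant time; any tampering changes it."""
    return hmac.compare_digest(sign(payload), tag)

packet = b"card number: 1234"
tag = sign(packet)
print(verify(packet, tag))                 # True: payload is intact
print(verify(b"card number: 9999", tag))  # False: tampering detected
```

Because every byte is covered by this kind of integrity check, intermediate routers can't silently discard or alter encrypted traffic – which is part of why secure transfers trade a little speed for a lot of trust.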