Latency is an inevitable by-product of the internet. The World Wide Web’s intricate network of connections encompasses everything from speed-of-light fiber optic cabling to sluggish copper telephone cables. Individual data packets have to travel from servers to screens through an ever-changing roster of nodes and connection points, each incurring a fractional delay. The time between a user device issuing a request and a response being received is known as latency.
While small amounts are unavoidable, too much latency can cripple the prospects of an online business or service provider. It directly affects almost every online experience – ecommerce or gaming, SEO or communications. You are witnessing the effects of latency whenever a live news feed becomes pixelated, or when an online game unexpectedly glitches.
Too much, or just right?
Establishing whether an online service is incurring too much latency depends on the service itself. Different industries have varying levels of tolerance regarding server-to-screen delays:
- For online gamers, latency north of 45 milliseconds (ms) has been shown to shorten game session lengths. And while some games can tolerate up to 120 ms of delay, a quarter of that figure – around 30 ms – could represent too much latency in fast-paced FPS and MMORPG titles.
- In terms of database access, Microsoft and Oracle have both published best practice guides suggesting latency shouldn’t rise above 50 ms. This spills into SEO, as Google and Bing will downgrade websites which are slow to respond.
- On VoIP calls, fractional delays of 20 ms are unavoidable, though anything over 150 ms will represent too much latency for a stable data stream. This comparatively high threshold is thanks to clever techniques like replicating packets to fill gaps.
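The packet-replication technique mentioned above can be sketched in a few lines. This is a deliberately simplified illustration – real VoIP stacks combine jitter buffers with codec-level loss concealment – but it shows the core idea: when a packet goes missing, the last received packet is repeated to fill the gap.

```python
# Simplified sketch of loss concealment by packet replication.
# Real VoIP systems use jitter buffers and codec-aware concealment;
# here we just repeat the previous payload to paper over gaps.

def conceal_gaps(packets):
    """packets: list of audio payloads, with None marking a lost packet."""
    repaired = []
    last_good = b""  # silence if the very first packet is lost
    for payload in packets:
        if payload is None:
            repaired.append(last_good)  # replicate the previous packet
        else:
            repaired.append(payload)
            last_good = payload
    return repaired

stream = [b"frame1", None, b"frame3", None, None]
print(conceal_gaps(stream))
# gaps are filled with copies of the most recent good frame
```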
It’s worth noting that other industries have been able to successfully tackle server-to-screen delays in different ways. As an example, the explosive growth in streaming media services has been driven by clever technologies like adaptive bitrates. Content is encoded at multiple bitrates, and players respond to bandwidth fluctuations in real time, requesting the highest-quality version a client device can sustain at each precise moment.
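The selection logic behind adaptive bitrates can be sketched simply: given a bandwidth estimate, pick the highest rendition that fits, with some headroom so momentary dips don’t stall playback. The bitrate ladder and headroom factor below are illustrative assumptions, not values from any real streaming service.

```python
# Hedged sketch of adaptive bitrate selection. The rendition ladder
# (in kbps) and the 0.8 headroom factor are illustrative assumptions.

RENDITIONS_KBPS = [400, 1200, 2500, 5000]

def pick_bitrate(measured_kbps, renditions=RENDITIONS_KBPS, headroom=0.8):
    # Leave headroom so a momentary bandwidth dip doesn't stall playback.
    budget = measured_kbps * headroom
    candidates = [r for r in renditions if r <= budget]
    return max(candidates) if candidates else min(renditions)

print(pick_bitrate(3500))  # 3500 * 0.8 = 2800, so the 2500 kbps rendition fits
print(pick_bitrate(300))   # nothing fits: fall back to the lowest rendition
```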
If adaptive bitrates aren’t applicable in your chosen industry, don’t despair. There are other ways to ensure too much latency doesn’t become a major issue:
1. Use domestic rather than international servers.
Fewer nodes and shorter distances mean faster data distribution and a reduction in latency. Multinational enterprises could benefit from 100TB’s global network of efficient and secure data centers.
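One way to act on this is to probe each candidate server and route users to the closest one. The sketch below times a TCP handshake as a rough latency estimate; the hostnames are placeholders, and the probe function is injectable so the selection logic can be demonstrated offline.

```python
import socket
import time

# Hedged sketch: estimate latency to candidate servers by timing a TCP
# handshake, then pick the lowest. Hostnames below are placeholders.

def connect_time_ms(host, port=443, timeout=2.0):
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000

def closest_server(hosts, probe=connect_time_ms):
    # probe is injectable so the logic can be tested without a network
    return min(hosts, key=probe)

# Offline demonstration using pre-measured latencies:
samples = {"eu.example.com": 18.0, "us.example.com": 95.0}
print(closest_server(samples, probe=samples.get))  # → eu.example.com
```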
2. Ensure those servers are able to cope with data spikes.
From press coverage to product debuts, various events might trigger a surge in visitor levels. The best server hosts will offer expandable bandwidth, sharing traffic loads with other servers.
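Sharing traffic loads across servers can be as simple as a round-robin dispatcher that grows the pool when a spike hits. This is a minimal sketch under that assumption – real hosts add health checks, connection counting, and automated scaling – and the server names are invented.

```python
from itertools import cycle

# Hedged sketch of spreading traffic across a server pool via round-robin.
# Real load balancers add health checks and autoscaling; names are placeholders.

class RoundRobinPool:
    def __init__(self, servers):
        self.servers = list(servers)
        self._iter = cycle(self.servers)

    def dispatch(self):
        # Hand the next request to the next server in rotation.
        return next(self._iter)

    def scale_out(self, server):
        # Add capacity when a traffic spike is detected.
        self.servers.append(server)
        self._iter = cycle(self.servers)

pool = RoundRobinPool(["web-1", "web-2"])
print([pool.dispatch() for _ in range(4)])  # → ['web-1', 'web-2', 'web-1', 'web-2']
```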
3. Employ routing protocols.
Conventional routing calculates the shortest paths, rather than the ones experiencing the least latency. Latency-aware routing algorithms can determine more efficient paths for dispensing information across a network.
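The idea can be illustrated by running a standard shortest-path algorithm (Dijkstra’s) over measured latencies rather than hop counts, so routes favour the lowest total delay. The topology and millisecond figures below are invented for illustration.

```python
import heapq

# Hedged sketch: Dijkstra's algorithm weighted by measured latency (ms)
# instead of hop count. The network topology below is invented.

def lowest_latency_path(graph, src, dst):
    # graph: {node: [(neighbour, latency_ms), ...]}
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, ms in graph.get(node, []):
            nd = d + ms
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    # Reconstruct the path by walking predecessors back from dst.
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return list(reversed(path)), dist[dst]

net = {"A": [("B", 5), ("C", 2)], "B": [("D", 4)], "C": [("B", 1), ("D", 20)]}
print(lowest_latency_path(net, "A", "D"))  # → (['A', 'C', 'B', 'D'], 7.0)
```

Note that the hop-count-shortest route (A→B→D, two hops, 9 ms) loses to the three-hop route through C at 7 ms – exactly the distinction a latency-aware protocol exploits.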
4. List minimum client-side requirements.
Latency may be caused by factors outwith a provider’s control, including overtaxed CPUs and sluggish broadband. Recommend hardwired connections rather than wifi, and publish minimum system specs.
5. Employ universal file formats.
MPEG-DASH is to streaming video services what JPGs are to photography. These industry standards won’t just display on the highest number of client devices – they’ll be compressed and optimized for efficient transfers.
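For a sense of what this standardization looks like in practice, below is an illustrative skeleton of an MPEG-DASH manifest (MPD). All attribute values are invented for demonstration, and a real manifest carries considerably more detail (segment URLs, codecs, and so on); the point is that one manifest advertises multiple pre-encoded bitrates for any compliant player to choose between.

```xml
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static"
     mediaPresentationDuration="PT60S" minBufferTime="PT2S">
  <Period>
    <AdaptationSet mimeType="video/mp4" segmentAlignment="true">
      <!-- Three renditions of the same content; the player picks one -->
      <Representation id="low"  bandwidth="400000"  width="640"  height="360"/>
      <Representation id="mid"  bandwidth="1200000" width="1280" height="720"/>
      <Representation id="high" bandwidth="2500000" width="1920" height="1080"/>
    </AdaptationSet>
  </Period>
</MPD>
```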
6. Test systems prior to launch.
Beta testing a new website or service might reveal issues in certain browsers or across wifi networks. Recruit a network of people using different hardware and software, identifying issues or performance drop-offs.
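A pre-launch test pass can be partly automated: collect response-time samples per client configuration and flag any whose tail latency blows a budget. The sketch below uses a 150 ms budget at the 95th percentile as an illustrative assumption, with invented sample data.

```python
import statistics

# Hedged sketch of a pre-launch latency check: flag any client
# configuration whose 95th-percentile response time exceeds a budget.
# The 150 ms budget and the sample data are illustrative assumptions.

def p95(samples_ms):
    # statistics.quantiles with n=20 yields 19 cut points; the last is p95
    return statistics.quantiles(samples_ms, n=20)[-1]

def flag_slow_configs(results, budget_ms=150.0):
    return [cfg for cfg, samples in results.items() if p95(samples) > budget_ms]

results = {
    "chrome-wifi":   [40, 55, 48, 62, 300, 45, 50, 47, 52, 49,
                      51, 46, 53, 44, 58, 43, 55, 48, 60, 41],
    "firefox-wired": [30, 32, 28, 35, 31, 29, 33, 30, 34, 27,
                      31, 28, 32, 30, 29, 33, 31, 28, 30, 32],
}
print(flag_slow_configs(results))  # → ['chrome-wifi']
```

The single 300 ms outlier is enough to push the wifi configuration over budget at the tail, even though its typical response is fine – which is exactly the kind of drop-off beta testing across varied hardware and networks is meant to surface.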