What Is Edge Computing?

14th August, 2019

The IT industry is notorious for complicated or unnecessary jargon, and there are times when it more than lives up to that reputation. Wikipedia describes ‘edge computing’ as ‘a distributed computing paradigm which brings computation and data storage closer to the location where it is needed’. In other words, local data processing.

That’s perhaps an overly simplistic definition, though it’s more relatable than Wikipedia’s alternative. To fully understand edge computing, it’s important to consider how the cloud works. The concept of drawing information from remote web servers and displaying it on user terminals has underpinned social media sites and webmail services from the internet’s formative years. What’s new is our growing reliance on the cloud, for everything from entertainment to smart speakers. Most services that can be transferred online and hosted remotely already have been. The focus is now switching to delivering information more quickly.


Living on the edge

Cloud computing often relies on a centralized server or a network of servers. Data travels between these servers and recipient devices, like tablets or smartphones, at speeds constrained by network infrastructure and, ultimately, by the physical distance the signal has to cover. And since our computers and mobile devices are increasingly dependent on rapid internet connectivity, any delays caused by slow information transfer materially affect the user experience. The delay between an action being requested and a response being received is known as latency. Latency of just 100 milliseconds can hinder activities as diverse as online gaming and video calling.
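To make the idea concrete, here’s a minimal Python sketch of measuring round-trip latency; the URL is a placeholder rather than a real service, and the 100-millisecond threshold simply mirrors the figure above.

```python
# A minimal latency check; the URL is a placeholder, not a real endpoint.
import time
import urllib.request

def round_trip_ms(url: str) -> float:
    """Time one HTTP round trip to the given URL, in milliseconds."""
    start = time.perf_counter()
    urllib.request.urlopen(url, timeout=5).read()
    return (time.perf_counter() - start) * 1000

latency = round_trip_ms("https://example.com/")
verdict = "noticeable lag" if latency > 100 else "comfortably interactive"
print(f"{latency:.0f} ms - {verdict}")
```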

At its simplest, edge computing involves moving the data processing currently handled by cloud-based servers closer to (or even onto) end-user devices. Consider a smart speaker. You ask it to check the weather. It has to record your voice and feed a digitized file down a fiber broadband line to a server, which then takes a guess at what you’re asking, summons the relevant data in response, pipes it back, and finally broadcasts it as computerized speech. That’s a lot of processing, even if the server is located next door. Imagine the logistical challenges if the server is a thousand miles away. Slashing the total distance data has to travel minimizes latency, providing a smoother UX more suited to our busy lives.
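As a rough illustration, the sketch below simulates both paths for that weather query; every delay figure is an invented assumption, chosen only to show how the round trips add up.

```python
# Toy comparison of cloud vs. edge handling of a voice query.
# All delay figures are invented assumptions for illustration.
import time

NETWORK_DELAY = 0.08   # assumed one-way trip to a distant server, seconds
LOCAL_STEP = 0.005     # assumed on-device processing step, seconds

def cloud_query(audio: bytes) -> str:
    time.sleep(NETWORK_DELAY)   # upload the digitized voice recording
    time.sleep(0.02)            # server-side speech recognition
    time.sleep(NETWORK_DELAY)   # response travels all the way back
    return "Sunny, 24 degrees"

def edge_query(audio: bytes) -> str:
    time.sleep(LOCAL_STEP)      # speech recognized on the device itself
    time.sleep(NETWORK_DELAY)   # only the small weather payload is fetched
    return "Sunny, 24 degrees"

for handler in (cloud_query, edge_query):
    start = time.perf_counter()
    handler(b"\x00" * 1024)     # stand-in for recorded audio
    print(f"{handler.__name__}: {(time.perf_counter() - start) * 1000:.0f} ms")
```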


The edge of reason

This might sound like a complete reversal of the cloud, back to the dark days of C drives and POP email accounts, but it’s highly selective. Edge computing delegates as much processing as possible to local devices, yet it retains centralized connectivity. Think of a production line capable of instantly identifying a rogue part on a conveyor belt and summoning human assistance. The cloud alternative might involve having to send a request to a software control system hosted halfway around the world, potentially receiving a response only after the item has moved beyond the reach of line workers.
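A minimal sketch of that local decision, assuming a hypothetical width sensor; the target and tolerance values are invented for illustration.

```python
# Hypothetical on-the-line check; the target and tolerance are invented.
TARGET_MM = 30.0
TOLERANCE_MM = 0.5

def part_in_spec(width_mm: float) -> bool:
    """Decide locally, with no round trip, whether a part passes."""
    return abs(width_mm - TARGET_MM) <= TOLERANCE_MM

for part_id, width in enumerate([30.1, 29.8, 31.2, 30.0]):
    if not part_in_spec(width):
        # The decision happens at the belt; only the exception is
        # reported upstream, never the raw sensor stream.
        print(f"Part {part_id} out of spec ({width} mm) - summon assistance")
```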

Of course, for web-enabled devices to process data locally, they need more sophisticated hardware than some of today’s cut-price dumb terminals. Devices must be able to process and interpret data themselves, or relay it to a local hub, rather than automatically sending it to a central server. That hub might be a central IT repository in an office building, and companies like Hewlett Packard are already marketing highly customizable hardware designed to extract insights in data-rich local environments.
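One way to picture that device-to-hub-to-cloud tiering, sketched with hypothetical names and thresholds:

```python
# A sketch of tiered routing: devices filter, the hub aggregates,
# and only a summary ever reaches the central server. All names
# and thresholds are hypothetical.
ALERT_THRESHOLD = 75.0
hub_buffer: list[dict] = []

def handle_reading(device_id: str, value: float) -> None:
    """Runs on the device: discard routine data, escalate anomalies."""
    if value < ALERT_THRESHOLD:
        return                                   # handled locally
    hub_buffer.append({"device": device_id, "value": value})

def hub_summary() -> dict:
    """Runs on the local hub: forward one aggregate, not raw readings."""
    if not hub_buffer:
        return {"alerts": 0}
    summary = {"alerts": len(hub_buffer),
               "peak": max(r["value"] for r in hub_buffer)}
    hub_buffer.clear()
    return summary                               # all the cloud ever sees

for dev, val in [("sensor-1", 42.0), ("sensor-2", 91.5), ("sensor-3", 60.2)]:
    handle_reading(dev, val)
print(hub_summary())   # e.g. {'alerts': 1, 'peak': 91.5}
```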


Cloud forecast

It’s important to note that the cloud isn’t being abolished in this model. Instead, data transfers are reduced to minimize latency. Security is also enhanced by limiting the information distributed online, since local devices are easier to protect against eavesdropping. And given the limitations on global internet bandwidth, those traffic reductions could prove crucial as edge computing streamlines our online activities in the coming years. Learn more about how cloud computing can transform your business at 100TB.com.
