Edge Computing Definition and Examples
edge computing: [ej kuhm-pyoo-ting] noun.
According to the National Institute of Standards and Technology (NIST), edge computing is “the network layer encompassing the end-devices and their users, to provide, for example, local computing capability on a sensor, metering or some other devices that are network-accessible. This peripheral layer is also often referred to as (Internet of Things / IoT) network.” In other words, edge computing puts computing power close to the edge devices that need it, in order to reduce latency.
For example, instead of centralizing server processing power in a data center in a faraway location, an edge computing implementation might employ many smaller, closer-in data centers set up to serve individual neighborhoods.
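To make that concrete, here's a minimal Python sketch of the routing decision at the heart of such a setup: send each request to whichever data center can answer it fastest. The node names and latency figures are made up for illustration, not taken from any real deployment.

```python
# A minimal sketch of the core routing decision: send each request to
# whichever data center answers fastest. The node names and latency
# figures below are hypothetical, for illustration only.

EDGE_NODES = {
    "central-cloud (another region)": 120,    # round-trip latency, ms
    "regional-edge (same city)": 25,
    "neighborhood-edge (down the street)": 5,
}

def pick_node(nodes):
    """Return the name of the node with the lowest round-trip latency."""
    return min(nodes, key=nodes.get)

best = pick_node(EDGE_NODES)
print(f"Routing request to: {best} ({EDGE_NODES[best]} ms round trip)")
```

In a real system the latencies would be measured continuously rather than hard-coded, but the principle is the same: closer usually wins.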
For context, edge computing is not a new term, but according to Google Trends data, interest in the concept spiked in 2016. The term's recent popularity has to do with our increasing reliance on sensors and IoT devices, which often create vast amounts of data. And as more IoT devices come online, there will be more and more data that needs to be processed quickly.
[Chart: Google Trends interest in “edge computing” over time. Source: Google Trends]
And while cloud computing has many advantages, edge computing is increasingly seen as a way to address some of the cloud's shortcomings, such as limited bandwidth, high latency, and service disruptions.
Many expect edge computing to become increasingly important as more everyday objects contain computer chips. (Consider the recent smart-device boom, which has included cars, drones, autonomous robots, sensors, and more.)
Edge Computing Examples
Now that we've covered the concept behind edge computing, let's examine some real-world use cases that help explain when edge computing implementations make sense.
Autonomous vehicles — Modern self-driving cars are basically huge computers on wheels, packed full of sensors. When you've got a vehicle flying down the road at high speed, constantly analyzing its surroundings and communicating with other cars, a lot of data-crunching has to happen, and time is of the essence. If a hazard arises, the autopilot system needs to respond instantaneously. There's no time to send data to a cloud server for processing when a split second could make a world of difference.
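A little back-of-the-envelope math shows just how much a split second matters. The snippet below uses illustrative latency figures (they're assumptions, not measurements from any real vehicle or network) to work out how far a car travels while waiting for an answer.

```python
# Back-of-the-envelope: how far does a car travel while waiting for a
# decision? The latency numbers are illustrative assumptions, not
# measurements from any real vehicle or network.

SPEED_KMH = 100
speed_m_per_s = SPEED_KMH * 1000 / 3600   # ~27.8 metres per second

for label, latency_s in [("round trip to a distant cloud", 0.200),
                         ("on-board (edge) processing", 0.010)]:
    print(f"{label}: {speed_m_per_s * latency_s:.1f} m travelled")

# round trip to a distant cloud: 5.6 m travelled
# on-board (edge) processing: 0.3 m travelled
```

Several metres of blind travel is more than enough room for an accident, which is why the data-crunching has to happen in the car itself.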
Real-time facial-recognition systems — For advanced security solutions that use cameras to identify suspicious people or potentially dangerous behaviors on the fly, the system must identify threats quickly. By processing information either on the device itself or on a server located very close to the source, such a system might surface insights in time to prevent security incidents before they occur, as in the sketch below. Additionally, given the sensitive nature of this type of data, some organizations might hesitate to process or store it in a public cloud provider's data center.
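Here's a rough sketch of that "process in place" pattern. Everything in it is hypothetical: detect_faces() stands in for whatever on-device model would actually run. The key design point is that only small event records cross the network, while the raw footage never leaves the camera.

```python
# A sketch of the "process in place" pattern: run detection on the
# device and send only small event records upstream; the raw footage
# never leaves the camera. detect_faces() is a hypothetical stand-in
# for whatever on-device model would actually run.

import json
import time

def detect_faces(frame):
    """Placeholder for an on-device inference call (hypothetical)."""
    return []  # e.g. a list of {"label": ..., "confidence": ...} dicts

def process_frame(frame, send_event):
    for detection in detect_faces(frame):
        # Only metadata crosses the network; the frame stays local.
        send_event(json.dumps({"ts": time.time(), "event": detection}))

# Example wiring: print events instead of sending them anywhere.
process_frame(frame=b"raw-camera-bytes", send_event=print)
```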
Remote safety monitoring — In some far-flung industrial centers, cloud computing might not make sense, due to inadequate internet connectivity. For example, say you’ve got sensors monitoring an oil rig in the middle of nowhere. If you rely on processing this data in a centralized data center hundreds of miles away, there’s going to be a lot of latency. But if you process and analyze data on site, you might be able to react to a situation as it develops, potentially preventing a safety disaster.
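As a sketch of that idea, the snippet below checks each reading against a safety threshold locally, so the alarm fires even when the link is down, and queues summaries to upload whenever connectivity returns. The threshold, units, and function names are illustrative assumptions, not drawn from any real monitoring product.

```python
# A minimal sketch of on-site monitoring under spotty connectivity:
# decide locally whether a reading is dangerous (no round trip needed),
# and queue summaries to upload whenever a link happens to be up. The
# threshold and function names are illustrative assumptions.

from collections import deque

PRESSURE_LIMIT = 850        # hypothetical safe ceiling, in PSI
pending_uploads = deque()   # readings waiting for a working link

def trigger_local_alarm(reading):
    print(f"ALARM: {reading} PSI exceeds the {PRESSURE_LIMIT} PSI limit")

def upload_to_datacenter(reading):
    pass  # stand-in for a batched upload to the central data center

def handle_reading(pressure_psi, link_up):
    if pressure_psi > PRESSURE_LIMIT:
        trigger_local_alarm(pressure_psi)   # react immediately, on site
    pending_uploads.append(pressure_psi)
    while link_up and pending_uploads:      # sync opportunistically
        upload_to_datacenter(pending_uploads.popleft())

handle_reading(pressure_psi=900, link_up=False)  # alarms even offline
```

The safety-critical decision never depends on the network; the central data center just gets the history eventually.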
Edge Computing Challenge: Comment below!
Now it’s your turn. How would you explain edge computing simply, to others who aren’t necessarily tech savvy?
What experiences have you had with edge computing, and how do you feel about this term? Does it accurately match the concept it describes?