Although latency avoidance is the most widely cited reason for placing workloads at the edge, there are other potential benefits:
- Reduced bandwidth consumption across wide area networks
- A need for limited autonomy or disconnected operation
- Privacy and security
- A requirement for local interactivity
To be sure, Gartner estimates that by 2022, more than 50 percent of enterprise-generated data will be created and processed outside the data center or cloud. Edge computing places content, data and processing closer to the applications, things and users that consume and interact with them.
That is a clue to possible value. If there are 25 billion to 30 billion internet of things sensors in operation at some point in the next decade, and if 50 percent to 75 percent of those sensors actually communicate with some external computing resource, then the traffic and processing load across the wide area network could be substantial.
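A back-of-the-envelope calculation shows how quickly that load adds up. Every figure below other than the sensor count cited above is an assumption chosen for the sake of arithmetic (payload size, reporting frequency, connected share), not a forecast:

```python
# Rough sketch of aggregate WAN load from IoT sensors.
SENSORS = 25e9           # low end of the projected sensor count
CONNECTED_SHARE = 0.50   # assume half talk to an external resource
PAYLOAD_BYTES = 1_000    # assumed message size per report
REPORTS_PER_DAY = 24     # assumed hourly reporting

daily_bytes = SENSORS * CONNECTED_SHARE * PAYLOAD_BYTES * REPORTS_PER_DAY
print(f"{daily_bytes / 1e12:,.0f} TB per day")  # prints: 300 TB per day
```

Even with a modest one-kilobyte hourly report, the low-end assumptions yield hundreds of terabytes crossing wide area networks every day; richer payloads such as video multiply that figure by orders of magnitude.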
Neither deployment at the edge nor the internet of things necessarily requires edge computing, though some apps requiring ultra-low latency might. Devices that must work even when internet or other network connectivity is intermittent provide another use case where edge processing is useful.
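One common pattern for intermittent connectivity is store-and-forward: the device keeps acting on data locally and buffers readings until the link returns. A minimal sketch, where connected() and send_upstream() are hypothetical stand-ins for a real transport layer:

```python
import queue

buffer = queue.Queue()  # local backlog of readings awaiting upload

def handle_reading(reading):
    """Act on the reading locally, then queue it for upstream sync."""
    # ...local control logic runs here regardless of connectivity...
    buffer.put(reading)

def flush_when_connected(connected, send_upstream):
    """Drain the backlog whenever the link happens to be up."""
    while not buffer.empty():
        if not connected():
            return  # link is down; retry on the next cycle
        send_upstream(buffer.get())
```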
In other cases, privacy or security concerns might lend themselves to local processing. Sensors might connect only to local computing devices, including PCs, programmable logic controllers on an assembly line or an autonomous car.
Sensors can stream data to remote data centers, the same way many devices and apps now interact with hyperscale facilities. But there may be advantages to pre-processing, or fully processing, that data at the edge.
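The simplest such advantage is data reduction: summarize a window of raw readings locally so only a small record crosses the wide area network. A minimal sketch of the idea, with illustrative field names and sample values:

```python
import statistics

def summarize(window):
    """Reduce a window of raw sensor readings to one small record."""
    return {
        "count": len(window),
        "mean": statistics.fmean(window),
        "max": max(window),
        "min": min(window),
    }

raw = [20.1, 20.3, 20.2, 35.0, 20.2]  # e.g., one minute of readings
summary = summarize(raw)              # a few dozen bytes instead of the full stream
```

The summary still surfaces the anomalous spike (the max) while sending a small fraction of the raw volume upstream.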