Monday, February 18, 2019

What is New, and Not New, with "Edge Computing"

Every computing device and location, by definition, sits on the edge of some wide area network. Whether we are talking about a hyper-scale data center, an enterprise data center, a client-server network or a single end user device, all are at the edge of a physical network, which in turn is interconnected to all other networks.

So one practical issue with edge computing is that not everybody uses the same definition of “edge,” even if everyone seems to agree edge computing means putting compute capabilities at the logical edge of a network.

But that is not new, and not even technically correct, as all computing happens at the edge of some network, with the possible exception of specialized network devices that operate at baseband, and perform routing or switching operations in the "core" of a network. Core routers and Class 4 voice switches come to mind.

So something other than "computing at the edge" or "computing on the premises" or "computing by the device" is new, here.

The Linux Foundation defines edge computing as “the delivery of computing capabilities to the logical extremes of a network in order to improve the performance, operating cost and reliability of applications and services.” The key concept there is the plurality of “extremes.”

And note the subtlety: personal computers, smartphones and controllers compute at the "edge." Mainframes sit "at the edge." Even hyper-scale data centers sit on the edge of some wide area network.

What is important is whether a computing operation occurs remotely, over the WAN, or close to where you, or the computing device, are physically located.

"Edge computing" as a new concept only makes logical sense if what is new is "near" versus "remote" or "on-board" or "on the premises" computing.

On-board computing is not new. "On the premises" computing is not new. Remote computing is not new.

In a real sense, the singular new capability is latency performance for real-time operations that cannot be performed on the device, on the premises or at a remote cloud data center.

When you move from reliance on WAN-based hyperscale cloud data centers to metro-level cloud facilities, the issue is latency performance.

“By shortening the distance between devices and the cloud resources that serve them, and also reducing network hops, edge computing mitigates the latency and bandwidth constraints of today's Internet, ushering in new classes of applications,” the Linux Foundation definition states.
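The latency argument is easy to see with a back-of-the-envelope calculation. The sketch below, in Python, estimates round-trip delay for a device talking to an on-premises gateway, a metro edge facility and a distant hyperscale region; the distances, hop counts and per-hop costs are illustrative assumptions, not figures from the Linux Foundation paper.

# Rough round-trip delay: propagation out and back, plus per-hop overhead.
# All distances, hop counts and per-hop costs are illustrative assumptions.

FIBER_US_PER_KM = 5.0   # light in fiber covers roughly 1 km per 5 microseconds
PER_HOP_US = 50.0       # assumed queuing/forwarding cost per router hop

def round_trip_us(distance_km: float, hops: int) -> float:
    """Round-trip time in microseconds for one path."""
    return 2 * (distance_km * FIBER_US_PER_KM + hops * PER_HOP_US)

scenarios = {
    "on-premises gateway (~1 km, 1 hop)": (1, 1),
    "metro edge facility (~30 km, 3 hops)": (30, 3),
    "distant hyperscale region (~2,000 km, 12 hops)": (2_000, 12),
}

for name, (km, hops) in scenarios.items():
    print(f"{name}: ~{round_trip_us(km, hops) / 1000:.1f} ms round trip")

Under those assumptions, propagation and hop overhead alone put the distant region tens of milliseconds away, while the metro facility sits under a millisecond, which is the gap edge computing is meant to close for real-time workloads that cannot run on the device.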


In practical terms, this means distributing new resources and software stacks along the path between today's centralized data centers and the increasingly large number of devices in the field, concentrated in particular, though not exclusively, in close proximity to the last-mile network, on both the infrastructure and device sides.

That likely has implications for where and how augmented intelligence (artificial intelligence) gets implemented in wide area networks. Just how much applied machine learning or augmented intelligence gets deployed in the core, as opposed to the edge, to create new service capabilities or gain operational advantages remains an open question.

By definition, the whole advantage of edge computing is to avoid "hairpinning" (long transits of core networks) when local access can be provided. If so, once edge computing is widely deployed, the upside of using AI to groom traffic or reduce latency in the core is smaller.

Nor is it entirely clear which new network-based capabilities can be created in the core network using AI, for example, and which AI-based features are possible only, or more easily implemented, at the edge. Some security features or location features might be possible only at the edge, some argue.


Debates over how to apply new technologies such as AI will remind you of similar debates and decisions around "making core networks smarter" versus relying on a "smart edge" and simply building high-performance but "dumb" core networks. Those of you with long memories will recall precisely those debates around the use of ATM and IP for core networking.
