Application latency varies almost directly with how far away processing happens, which is the core argument for greater amounts of edge computing in the future. On-device processing has almost no latency. An on-premises server might have latency of a millisecond or so.
Edge computing at nearby locations in a metro area might have latency as low as one millisecond, or as high as six milliseconds, depending on where in the metro area the processing happens. Far-end processing at a hyperscale data center has latency between 50 milliseconds and 55 milliseconds.
But that, of course, refers only to transmission latency, not processing latency.
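As a rough illustration of why distance dominates, one-way transmission latency can be approximated from route distance alone, assuming signal propagation in fiber at roughly 200,000 km/s (about two-thirds the speed of light). The route lengths in this sketch are hypothetical stand-ins for each tier, not measured figures.

```python
# Rough one-way transmission latency estimate, assuming fiber propagation
# at about 200,000 km/s and ignoring switching, queuing and processing delays.

FIBER_SPEED_KM_PER_MS = 200.0  # ~200,000 km/s, expressed as km per millisecond

def transmission_latency_ms(route_km: float) -> float:
    """One-way propagation delay in milliseconds for a fiber route of route_km."""
    return route_km / FIBER_SPEED_KM_PER_MS

# Illustrative (assumed) route lengths for each processing location.
for label, km in [
    ("on-premises server", 0.2),
    ("metro edge site", 200),
    ("distant hyperscale data center", 10_000),
]:
    print(f"{label}: ~{transmission_latency_ms(km):.3f} ms one way")
```

At those assumed distances, the estimates land close to the ranges cited above: sub-millisecond on premises, about a millisecond within the metro, and tens of milliseconds to a distant hyperscale facility.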
That noted, there are other business advantages for edge computing, including reduced spending on long-haul, wide area networking. In other cases, edge computing supplies value in the form of uninterrupted processing, where WAN connections are intermittent or where systems must be designed to keep operating if WAN connectivity is momentarily lost.
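A minimal store-and-forward sketch of that pattern, assuming a hypothetical send_over_wan() uplink: processing continues locally, results are buffered when the link is down, and the backlog is flushed once connectivity returns.

```python
import queue
import time

# Store-and-forward sketch: process locally, buffer results while the WAN
# link is down, and flush the backlog once connectivity returns.
# send_over_wan() is a hypothetical stand-in for whatever uplink is used.

backlog: "queue.Queue[dict]" = queue.Queue()

def send_over_wan(record: dict) -> bool:
    """Placeholder uplink call; returns False when the WAN link is unavailable."""
    return True

def handle_reading(record: dict) -> None:
    # Local processing continues regardless of WAN state.
    record["processed_at"] = time.time()
    if not send_over_wan(record):
        backlog.put(record)          # keep the result until the link comes back

def flush_backlog() -> None:
    while not backlog.empty():
        record = backlog.get()
        if not send_over_wan(record):
            backlog.put(record)      # still down; try again on the next pass
            break
```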
In other cases, security can be an advantage, essentially allowing apps to run “sandboxed” and compartmentalized from each other. That might be especially true where regulations restrict the movement of data out of a given area.
Processing efficiency can also be an advantage when a huge amount of data, most of it essentially noise, has to be filtered before analytics can be performed. Likewise, if data formatting is necessary and data volumes are significant, doing that at the edge, before sending it across the WAN, might make sense.
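A minimal sketch of that kind of edge-side reduction, with an assumed noise threshold and illustrative summary fields: raw samples are filtered and collapsed into one compact record before anything crosses the WAN.

```python
import random
from statistics import mean

# Edge-side reduction sketch: drop readings that are effectively noise and
# forward only a compact summary across the WAN. The threshold and the
# summary fields are illustrative assumptions, not a fixed scheme.

NOISE_THRESHOLD = 0.05  # assumed: readings below this carry no useful signal

def reduce_at_edge(readings: list[float]) -> dict:
    signal = [r for r in readings if abs(r) >= NOISE_THRESHOLD]
    return {
        "count_raw": len(readings),
        "count_kept": len(signal),
        "mean": mean(signal) if signal else None,
        "peak": max(signal, default=None),
    }

# A batch of 10,000 raw samples collapses to one small record before it
# crosses the WAN, instead of 10,000 individual payloads.
raw_samples = [random.gauss(0, 0.1) for _ in range(10_000)]  # stand-in local data
print(reduce_at_edge(raw_samples))
```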