Sunday, October 20, 2019

Who Will Own the Edge Data Center?

It remains unclear how ownership of edge computing facilities will develop. For starters, much depends on which “edge” is the venue. As has often been the case, computing modes determine whether there is a role for connectivity providers or not.

On-device computing and on-premises computing might not create unusual needs for wide area network connectivity. Interactions with remote facilities (remote data centers and cloud computing; metro data centers or other distributed, off-premises computing) necessarily create demand for WAN or other outside-the-building communications.

But even where edge computing is required, it is not yet clear how ownership and operation of such facilities will develop. Connectivity providers, cell tower companies, specialized third-party venue communications providers, new wholesale (neutral host) providers or existing cloud computing providers might have logical roles. It also is possible that some suppliers of edge compute infrastructure might eventually assume broader roles in edge computing, despite the channel conflict.

Some believe existing cloud stakeholders are not the logical candidates to supply edge computing infrastructure, in part because edge computing will virtually require app providers to use facilities owned by hundreds of suppliers globally. That suggests an open model, with standardized interfaces, perhaps along the lines of the developing multi-cloud capability.

The same argument might be made for legacy content delivery network providers, which could face the same constraints as the hyperscale platforms: openness and scale.

Other logical candidates include cell tower companies, which own distributed real estate assets, or mobile operators, with central office and mobile switching facilities that could be converted into edge data centers. 

Edge computing facilities must provide somewhat distinct capabilities for use cases of various types, a study by STL Partners finds. 

Augmented reality and virtual reality application developers want graphics processing units (GPUs) and persistent, fast solid-state storage. As has been the case for content delivery networks in general, the requirements are standard application programming interfaces, guaranteed processing speeds and controlled latency.

Unmanned aerial vehicle applications require sub-10-millisecond latency. Location-based services require ubiquity.
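A rough, back-of-envelope latency budget (the figures below are illustrative assumptions, not numbers from the STL Partners study) suggests why a sub-10-millisecond target pushes compute toward the edge: fiber propagation alone costs roughly five microseconds per kilometer each way, before radio access and processing time are counted.

    # Back-of-envelope latency budget for a sub-10 ms edge use case.
    # Every figure here is an illustrative assumption, not a measured value.

    FIBER_MS_PER_KM = 0.005   # ~5 microseconds per km of fiber, one way
    RADIO_ACCESS_MS = 4.0     # assumed air-interface round trip
    PROCESSING_MS = 2.0       # assumed compute time at the server

    def round_trip_ms(distance_km):
        """Total round-trip latency to a server distance_km away over fiber."""
        propagation = 2 * distance_km * FIBER_MS_PER_KM
        return RADIO_ACCESS_MS + propagation + PROCESSING_MS

    for km in (10, 100, 500, 1000):
        print(f"server {km:>4} km away: ~{round_trip_ms(km):.1f} ms round trip")

    # With these assumptions only the nearest tiers stay under 10 ms; at
    # 1,000 km, propagation alone consumes the entire budget.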

Even if latency reduction is the obvious and primary advantage of edge computing, as it was for content delivery networks in general, there might be any number of use cases where the value actually lies in reducing wide area network hairpinning: traffic that traverses the wide area network only to reach another device located on the same local network.
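As a hypothetical sketch of that point (with assumed figures, not measurements), consider two devices on the same site that exchange messages through an application broker; the only variable is whether that broker runs in a distant cloud region, a metro edge data center or an on-premises node.

    # Illustrative comparison of hairpinned vs. locally switched traffic.
    # Two devices on one site exchange messages through a broker; the only
    # question is where the broker runs. All figures are assumptions.

    MSG_SIZE_KB = 2
    MSGS_PER_SEC = 500

    BROKER_RTT_MS = {
        "distant cloud region (hairpin over the WAN)": 40.0,
        "metro edge data center": 8.0,
        "on-premises edge node": 1.0,
    }

    # Traffic that must cross the WAN if the broker is remote (up and back).
    wan_load_mbps = MSG_SIZE_KB * 8 * MSGS_PER_SEC * 2 / 1000

    for location, rtt in BROKER_RTT_MS.items():
        load = wan_load_mbps if "hairpin" in location else 0.0
        print(f"{location:45s} ~{rtt:5.1f} ms, WAN load ~{load:4.1f} Mbps")

Under these assumptions, keeping the broker local removes the WAN traffic entirely and cuts round-trip latency by an order of magnitude, even though neither device has moved.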

A third class of use cases could offer security or privacy advantages, as when anonymized and aggregated data sets are sent back to central data centers for analysis while actual user data remains local.
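A minimal sketch of that pattern, using hypothetical field names and an edge node that holds the raw records, would strip identifiers locally and upload only per-cell aggregates.

    # Sketch of the privacy pattern above: raw, user-level records stay on
    # the edge node; only an anonymized, aggregated summary is uploaded.
    # Field names and values are hypothetical.

    from collections import defaultdict
    from statistics import mean

    raw_events = [                      # held only at the edge, never transmitted
        {"user_id": "u1", "cell": "A", "latency_ms": 12},
        {"user_id": "u2", "cell": "A", "latency_ms": 18},
        {"user_id": "u3", "cell": "B", "latency_ms": 9},
    ]

    def aggregate_for_upload(events):
        """Drop identifiers and reduce to per-cell aggregates."""
        by_cell = defaultdict(list)
        for event in events:
            by_cell[event["cell"]].append(event["latency_ms"])  # user_id dropped
        return [{"cell": cell, "samples": len(vals), "avg_latency_ms": mean(vals)}
                for cell, vals in by_cell.items()]

    print(aggregate_for_upload(raw_events))  # only this summary leaves the site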



