Friday, February 28, 2020

Is Walmart Looking at Some Forms of 5G Support for its Edge Operations?

Walmart has been in talks with some mobile operators about installing 5G antennas on Walmart store roofs. Precisely what the perceived advantage might be is not clear, but if Walmart also operates edge computing facilities open to third-party customers, it might want to ensure that ultra-low-latency 5G is available to those edge computing customers.

Walmart might also be looking at millimeter wave connectivity for shoppers inside its store locations, perhaps to enable video-heavy applications inside the stores. 

Walmart also plans to build edge computing facilities in its stores, not only for its own use but as a commercial service available to third-party customers. Third parties also will be allowed to buy use of Walmart's warehouses and shipping services.

Thursday, February 27, 2020

Global Carriers Work to Develop Common Edge Computing Framework

China Unicom, Deutsche Telekom, EE, KDDI, Orange, Singtel, SK Telecom, Telefonica and TIM have joined forces, with the support of the GSMA, to develop a multi-access edge computing platform that is interoperable. 

The platform, to be developed in 2020, will make local operator assets and capabilities, such as low latency, compute and storage, available to application developers and software vendors, enabling them to fulfill the needs of enterprise clients.

There are other interoperability efforts also underway, including 3GPP efforts, in addition to work by ETSI.

In the end, standards are not a business model, but an enabler of business models. Some note that the edge is a possible business battleground, as many in the ecosystem hope to capitalize on the opportunity as providers of computing as a service, computing platform as a service or colocation. 

Hyperscale cloud computing firms now are moving into the incipient business at the same time 5G suppliers hope to secure a position. But tower owners and some retailers also might expect to play a role.

In the past, connectivity providers have often failed to compete successfully with cloud computing suppliers or with independent data center providers. Verizon and AT&T, for example, already seem to be taking different roles in an effort to monetize existing assets for edge computing. 

Colocation and connections might be the role telcos eventually are forced to assume, their aspirations notwithstanding. Interoperability will be important for connectivity providers as they seek to serve global enterprises, allowing a single “throat to choke” offer for facilities scattered around the globe. 

At least so far, though, such capabilities seem more advanced at the level of colocation and hosting than actual computing as a service, at least in part because the current standards mostly apply to the ways different telcos operate their networks to support edge computing. 

The ETSI GS MEC 003 addresses the implementation of MEC applications as software-only entities that run on top of a virtualization infrastructure, which is located in or close to the network edge. A key new aspect of the Phase 2 version of this specification is the addition of the MEC-in-NFV reference architecture, which defines how MEC-compliant edge deployments can be part of an overall NFV cloud architecture. 

ETSI aims for a unified edge computing architectural framework and reference platform, to be deployed across multiple markets in Europe and progressively extended to other operators and geographies to achieve global reach. 

The Framework and Reference Architecture standard includes the functional elements, the reference points between them, and a number of MEC services. In the updated version from 2019, a MEC variant was described with network functions virtualization (NFV) functional elements.

The Technical Requirements specification covers generic principles of MEC, such as NFV alignment and deployment independence. The 2018 update, Use Cases and Requirements, added application mobility to and from an external system, as well as 13 additional possible MEC use cases.

In 2019, the MEC ISG added 16 terms to the Terminology paper, including those often used in the ETSI MEC standards.

Mobile Edge Management has a two-part standard. The first, on system, host and platform management, defines the management protocol of the mobile edge system, hosts, and platforms. 
The second, on application lifecycle, rules, and requirements, defines the application lifecycle management protocol and lists the associated rules and management requirements.

The User Equipment (UE) application interface standard details how applications running on user devices request lifecycle operations, such as lookup, instantiation and termination, for applications in the mobile edge system.

The Bandwidth Management API standard primarily deals with bandwidth concerns when multiple devices use the same edge network. It focuses on application policy information, and how to address certain application program interface (API) scenarios that affect bandwidth usage and the network edge.
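To make the shape of such an API concrete, here is a minimal sketch of building a bandwidth-allocation request. The field names and encodings are loosely modeled on the BwInfo data type in the ETSI spec and should be treated as illustrative rather than authoritative:

```python
def bw_allocation_request(app_instance_id: str, bits_per_second: int,
                          direction: str = "00") -> dict:
    """Build a bandwidth-allocation request body in the general style of
    the ETSI Bandwidth Management API. Field names and the direction
    encoding ("00" downlink, "01" uplink) are illustrative assumptions,
    not a verbatim copy of the specification."""
    return {
        "appInsId": app_instance_id,       # requesting MEC application instance
        "requestType": 0,                  # application-specific allocation
        "fixedAllocation": str(bits_per_second),
        "allocationDirection": direction,
    }

req = bw_allocation_request("app-123", 50_000_000)
print(req["fixedAllocation"])  # -> '50000000'
```

An edge application would POST a body like this to the platform's bandwidth-management resource; the platform then arbitrates competing allocations across applications sharing the same edge network.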

ETSI’s standard for the UE Identity API mainly focuses on a way to tag and track the user’s equipment in the network to enforce traffic rules. The standard for the Location API establishes guidelines for detecting the user’s device location information on the edge network.

ETSI created the Radio Network Information API to provide a uniform Radio Network Information Service (RNIS) across MEC deployments. It informs edge applications of radio network conditions so they can optimize network usage.
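As a sketch of what an RNIS query might look like, the helper below composes a radio-bearer information request URL. The resource path and parameter name follow the general style of the published API but are illustrative assumptions here:

```python
from urllib.parse import urlencode

def rab_info_query(mec_host: str, cell_id: str) -> str:
    """Compose a radio-access-bearer information query in the style of
    the ETSI MEC Radio Network Information API. The path and parameter
    name are illustrative; consult the published spec for exact resources."""
    return f"https://{mec_host}/rni/v2/queries/rab_info?{urlencode({'cell_id': cell_id})}"

# Hypothetical host and cell identifier, for illustration only.
print(rab_info_query("mec.example.net", "0x800000B"))
```

An application might poll such a resource, or subscribe to notifications, and adapt its behavior, for example lowering video bitrate, when the reported radio conditions degrade.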

The Mobile Edge Platform Application Enablement document focuses on how the Mp1 reference point enables applications to communicate with the mobile edge platform.

The General Principles for MEC Service APIs standard catalogs design principles for RESTful mobile edge service APIs, and highlights API guidelines and templates. The 2019 update added more RESTful API patterns.

The Support for Regulatory Requirements standard describes infrastructure to allow for Lawful Interception and Retained Data when implementing MEC within a larger network, giving full support to the regulatory requirements for both practices.

Saturday, February 15, 2020

Is Edge Computing a Substitute for Network Slicing?

Virtual private networks are not new in the core network. But network slicing, which allows 5G mobile networks to create end-to-end virtual private networks, is new. Up to this point, best effort has been the only possible quality of service level for mobile networks. 

Network slicing creates the ability to add quality of service, end to end, for mobile devices and networks. That will allow 5G service providers to create virtual networks, operating end to end, with defined network performance or features. 

Much will depend on how other trends, such as edge computing, play out. If guaranteed throughput or specified low latency are requirements, one way of satisfying those needs is to put local computing into place. 

In many proposed use cases--factory automation, IoT sensors for agriculture or connected car apps--edge computing is an alternative to network slices that run through the mobile core network. A high-bandwidth local network with local servers then becomes an alternative to a network slice. 

To be sure, a network slice might also be set up to connect a metro edge computing asset to “in the metro” sensors, assuring a minimum level of bandwidth and latency performance. But it might often be possible to use premises computing and local area networks to provide the required levels of bandwidth or latency performance. 

About 75 percent of service provider executives surveyed by Amdocs believe the internet of things, connected cars and smart homes represent early use cases for network slicing. 

About 20 percent of respondents also believe there will be network slicing use cases in industries such as mining, agriculture and health.


There are other alternatives to network slicing as well. Enterprises long have created their own VPNs for security purposes, purchased wholesale capacity or built and run their own wide-area optical networks. 

So the issue is whether mobile operators will want to supply enterprise customers with control of their own network slices, or whether some functional substitute for enterprise control can be created instead. In other words, is there a possible role for network slicing as the enabler of a new type of wholesale service?

Wholesale customers often are other service providers, and that could be an attractive market. 

In principle, a network slice could be purchased by a mobile operator as an alternative to some other capacity arrangement, possibly with the upside of quality of service guarantees to the network edge. 

Perhaps that happens in a horizontal way, as often is the case when functions are supplied wholesale. As 3GPP defines a slice, it is “an end to end logical communication network, within a Public Land Mobile Network (PLMN) and includes the Core Network (CN) Control Plane, User Plane Network Functions and 5G Access Network (AN).”

That allows the functions necessary to create a data connection between devices and servers. 
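In practice, a 5G device identifies a slice by an S-NSSAI (Single Network Slice Selection Assistance Information) value, defined in 3GPP TS 23.501. A minimal sketch in Python, with the slice differentiator value chosen purely for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class SNssai:
    """Single Network Slice Selection Assistance Information (3GPP TS 23.501).
    sst: Slice/Service Type; standardized values include 1 = eMBB,
         2 = URLLC, 3 = massive IoT.
    sd:  optional Slice Differentiator, distinguishing slices that share
         the same SST."""
    sst: int
    sd: Optional[str] = None

# A device can be configured with several slices at once, e.g. best-effort
# broadband plus a low-latency slice for a factory application.
urllc_slice = SNssai(sst=2, sd="0000A1")  # sd value is hypothetical
embb_slice = SNssai(sst=1)
```

The point of the structure is that the same physical network can carry many such logically separate networks, each with its own performance profile.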

At least in principle, a private 5G network, operated by a single enterprise, on its own facilities, would offer full vertical control of the whole network. The analogy is a complete 5G network featuring only five or six cells. 

Some might see this as a threat to the mobile operator wide area business. Some tend to think even a vertical 5G private network is an extrapolation of the local area network concept, and so not a threat to WAN services. 




Friday, February 7, 2020

Streamed Apps Would Favor Edge Computing

Local computing on devices or local servers and remote computing using communications to reach servers have in the past been partial product substitutes. That will be true for edge computing as well. In fact, for new classes of applications, edge computing will provide the same business value as content delivery networks have provided for cloud apps generally: better performance and lower wide area network charges. 


Just as latency performance and bandwidth cost savings are the reasons content delivery networks have had value, so edge computing will allow a reduction of north-south traffic. That might be especially true for use cases involving huge amounts of raw data to be processed with artificial intelligence, including real-time use cases.
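A rough back-of-envelope calculation shows why distance matters: light in optical fiber covers roughly 200 km per millisecond (about two thirds the speed of light in a vacuum), so propagation delay alone sets a floor on round-trip latency. The distances below are illustrative:

```python
def fiber_rtt_ms(distance_km: float, speed_km_per_ms: float = 200.0) -> float:
    """Round-trip propagation delay over optical fiber. Ignores queuing,
    serialization and processing delay, so real round-trip times are higher."""
    return 2 * distance_km / speed_km_per_ms

print(fiber_rtt_ms(2000))  # distant cloud region: 20 ms of propagation alone
print(fiber_rtt_ms(20))    # metro edge site: a fraction of a millisecond
```

For an application with a single-digit-millisecond latency budget, a server 2,000 km away is disqualified before any processing even starts, which is the core argument for edge placement.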

One development that could accentuate the tradeoffs is if apps could be streamed to devices as video and audio now are streamed. That would substitute remote servers and communications for local processing and storage on a device.

As always, the precise tradeoffs would depend on processing intensity and distance from the remote servers. Edge computing obviously would be helpful for processing-intensive apps and use cases.

Tuesday, February 4, 2020

Cloud Infrastructure Revenue Grew 38% in 2019, Edge Impact Comes Next

AWS remains the biggest provider of cloud infrastructure services, with $34.6 billion in revenue. Microsoft generated $18.1 billion, according to Canalys.

Global cloud infrastructure services spending grew 37.6 percent to US$107.1 billion in 2019.

Google grew revenue the fastest, at 87.8 percent in 2019, albeit from a relatively low base, to $6.2 billion. 

Microsoft grew 64 percent to $18.1 billion, while Alibaba also grew 64 percent, to $5.2 billion. AWS grew 36 percent to $34.6 billion. 

The “law of a few” or “winner take all” trend also seems to hold in the cloud infrastructure market. 

All four cloud kings made their gains at the expense of "others," which lost nearly five percent of collective market share.


In terms of edge computing, AWS seems to be moving most aggressively at the moment, launching three different edge computing services. AWS Wavelength embeds AWS compute and storage services within telecommunications provider data centers at the edge of 5G networks.

AWS Local Zones extend edge computing service by placing AWS compute, storage, database, and other select services closer to large population, industry, and IT centers where no AWS Region exists today. 

AWS Local Zones are designed to run workloads that require single-digit millisecond latency, such as video rendering and graphics-intensive virtual desktop applications. Local Zones are intended for customers that do not want to operate their own on-premises or local data center.
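As a small illustration of how a customer might distinguish Local Zones from a parent Region's ordinary availability zones, the sketch below filters a response shaped like the EC2 DescribeAvailabilityZones output; the zone names are examples:

```python
def local_zone_names(zone_descriptions: list) -> list:
    """Pick out opted-in Local Zones from EC2 DescribeAvailabilityZones-style
    records, where ZoneType distinguishes 'local-zone' entries from a parent
    Region's 'availability-zone' entries."""
    return [
        z["ZoneName"]
        for z in zone_descriptions
        if z.get("ZoneType") == "local-zone"
        and z.get("OptInStatus") != "not-opted-in"
    ]

# Sample records shaped like the EC2 API response.
zones = [
    {"ZoneName": "us-west-2a", "ZoneType": "availability-zone",
     "OptInStatus": "opt-in-not-required"},
    {"ZoneName": "us-west-2-lax-1a", "ZoneType": "local-zone",
     "OptInStatus": "opted-in"},
]
print(local_zone_names(zones))  # -> ['us-west-2-lax-1a']
```

In a live account the records would come from a boto3 `describe_availability_zones` call; the point is that Local Zones appear as additional zones attached to a parent Region, so existing deployment tooling mostly carries over.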

Likewise, AWS Outposts puts AWS servers directly into an enterprise data center, creating yet another way AWS becomes a supplier of edge computing services. “AWS Outposts is designed for workloads that need to remain on-premises due to latency requirements, where customers want that workload to run seamlessly with the rest of their other workloads in AWS,” AWS says.  

AWS Outposts are fully managed and configurable compute and storage racks built with AWS-designed hardware that allow customers to run compute and storage on-premises, while seamlessly connecting to AWS’s broad array of services in the cloud.