Thursday, October 29, 2020

IBM, AT&T Collaborate on Mobile Edge

IBM and AT&T are working together to enable hybrid cloud environments that use mobile network devices (4G or 5G). The collaboration mates AT&T Multi-access Edge Computing with IBM Cloud Satellite, enabling multi-cloud computing controlled from a single dashboard.


Installed on-site, the AT&T multi-access edge computing server acts as an intelligent traffic controller and data processor, channeling mobile device data based on enterprise rules. 


High-priority, mission-critical data is processed and immediately sent back to the appropriate end-point within the enterprise private wireless network environment, rather than forwarded to a remote location for processing. 


AT&T’s MEC service essentially is a specialized way of supporting local mobile device processing, where the enterprise runs mobile applications supported by both local and remote servers. If I understand correctly, the other important element is that the IBM Cloud Satellite service applies only to IBM cloud computing customers.
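To make that concrete, here is a purely illustrative sketch of the sort of rule-based routing such an MEC server performs. The priority tags, thresholds and destinations are my own assumptions, not details of the AT&T or IBM products.

# Illustrative sketch only: a rule-based edge router of the general kind described
# above. The application names, priority rules and destinations are hypothetical.

MISSION_CRITICAL = {"robot-control", "safety-alarm", "agv-telemetry"}

def route(message: dict) -> str:
    """Decide where a mobile-device message is processed."""
    if message.get("app") in MISSION_CRITICAL or message.get("latency_budget_ms", 1000) < 20:
        return "local-mec"      # process on premises, return result to the endpoint
    if message.get("contains_pii"):
        return "local-mec"      # keep regulated data inside the private wireless network
    return "remote-cloud"       # everything else can go to a distant cloud region

print(route({"app": "robot-control", "latency_budget_ms": 10}))   # local-mec
print(route({"app": "inventory-sync", "contains_pii": False}))    # remote-cloud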


Sunday, October 25, 2020

How Much Revenue Upside for Telcos From Edge Computing?

Connectivity service providers are quite hopeful about possible roles in edge computing. Analysts tend to concur. 


Analysys Mason says there is a $17 billion market opportunity for service providers between 2020 and 2025, if operators are able to insert themselves into one or more of three roles: supplying edge real estate, providing access or enabling applications at the edge. 


About 59 percent of that opportunity is in user-facing application and service platforms, which means operators need to offer services closer to the end user to gain a greater revenue share, Analysys Mason says. Some of us would argue that is an unlikely avenue for success. 


Connectivity providers--as befits their role in the ecosystem--are good at connectivity, not end-user line-of-business or other apps. That’s not a slam; simply a recognition that telco skills and capabilities are aligned with their core business model. 


Research by Analysys Mason suggests the difficulty. As you would expect, only about six percent of respondents polled by the firm said they would select a telecom service provider to support their edge computing deployment. 


Fully 41 percent said they would prefer a technology company to manage their edge cloud implementation and another 31 percent said they would trust a public cloud provider most of all. 


That should be expected. Edge computing still is computing. The trusted suppliers still are likely to be the suppliers trusted with either cloud computing or premises computing. 


The upshot is that edge computing is likely to generate less connectivity provider revenue than many expect.


Wednesday, October 21, 2020

As Video Drives Need for CDNs, New Apps Drive Edge Computing Demand

Global traffic now is dominated by video, all reports suggest. According to Rethink Research, between 60 percent and 75 percent of total wide area network traffic consists of video. As entertainment video created the demand for edge caching and content delivery networks, so new applications are driving demand for edge computing. 


source: Rethink Research


The edge computing value proposition is clearest for apps that require ultra-low latency, but privacy, minimized WAN costs and protection from intermittent connectivity all are reasons to use edge computing as well. 


Improved isolation and security, possibly to support regulatory or corporate compliance rules, can be the edge computing value driver. Workloads running on-premises do not send data into the public cloud. Edge-processed data also can be obscured, transformed, or encrypted prior to sending it upstream to a far-edge data center for archiving or backup. 
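As a minimal sketch of that last idea, the snippet below pseudonymizes an identifier and strips the raw payload before a record leaves the premises. The field names and salting scheme are illustrative assumptions only.

# Minimal sketch: obscure sensitive fields at the edge before archiving upstream.
# Field names and the salting scheme are illustrative assumptions only.
import hashlib
import json

SALT = b"site-local-secret"  # in practice this would come from a local secret store

def prepare_for_upstream(record: dict) -> str:
    """Pseudonymize the device ID and drop the raw payload before upload."""
    out = dict(record)
    out["device_id"] = hashlib.sha256(SALT + record["device_id"].encode()).hexdigest()
    out.pop("raw_payload", None)          # raw sensor data stays on premises
    return json.dumps(out)

print(prepare_for_upstream({"device_id": "cam-042", "raw_payload": "...", "temp_c": 21.5}))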


Improved jitter performance is another advantage of edge computing, for applications that require predictable packet arrival. That might be similar to the value provided by edge caches of entertainment video, which do not traverse the WAN, and which generally consist of non-real-time content. 


Edge facilities might not help for video conferencing using the public internet, which must, almost by definition, traverse the WAN. 


In some instances, where connectivity suffers from intermittent availability, on-premises processing provides continuity. Venues such as cruise ships, airplanes, oil rigs and vehicles provide examples. 


Tuesday, October 20, 2020

Edge Value and Cost: Lots of Moving Parts

The oft-stated value of edge computing is support for ultra-low latency applications. Still, as content delivery networks have shown, reduction of wide area network load or cost also can be a primary value of edge storage or computing. 


But improved isolation and security, possibly to support regulatory or corporate compliance rules, also provide value. Workloads running on-premises do not send data into the public cloud. Edge-processed data also can be obscured, transformed, or encrypted prior to sending it upstream to a far-edge data center for archiving or backup. 


Improved jitter performance is another advantage of edge computing, for applications that require predictable packet arrival. That might be similar to the value provided by edge caches of entertainment video, which do not traverse the WAN, and which generally consist of non-real-time content. 


Edge facilities might not help for video conferencing using the public internet, which must, almost by definition, traverse the WAN. 


In some instances, where connectivity suffers from intermittent availability, on-premises processing provides continuity. Venues such as cruise ships, airplanes, oil rigs and vehicles provide examples. 


But those advantages carry higher per-cycle or per-workload unit costs, since far-end hyperscale data centers benefit from huge economies of scale. Retail pricing for Amazon Web Services “Wavelength,” which supplies AWS functionality at the edge, is about 20 percent to 30 percent higher than the same operations conducted at an AWS region facility, for example. 


Analysts at AvidThink note that the cost of an on-premises AWS Outposts deployment--which drops a rack of AWS servers into an on-premises data center--runs between 20 percent and 50 percent higher than the equivalent volume of workloads conducted remotely at an AWS hyperscale site. 
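A back-of-the-envelope comparison shows why that premium can still pencil out once avoided WAN transport is counted. Every number below is hypothetical, chosen only to illustrate the trade-off.

# Back-of-the-envelope comparison of edge vs. region economics.
# All prices and volumes are hypothetical, chosen only to illustrate the trade-off.
region_compute_cost = 10_000          # monthly compute bill at a hyperscale region, USD
edge_premium = 0.30                   # edge premium (20% to 50% per the text above)
wan_transfer_tb = 50                  # data that would otherwise cross the WAN each month
wan_cost_per_tb = 90                  # hypothetical transport/egress cost per TB, USD

edge_compute_cost = region_compute_cost * (1 + edge_premium)
wan_savings = wan_transfer_tb * wan_cost_per_tb

print(f"Edge premium paid:   ${edge_compute_cost - region_compute_cost:,.0f}/month")
print(f"WAN transport saved: ${wan_savings:,.0f}/month")
print("Edge wins on cost" if wan_savings > edge_compute_cost - region_compute_cost
      else "Region wins on cost")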


So the decision to use remote far-end or on-premises edge computing is not simple. Information technologists have to consider the following factors, pulled together in the toy scoring sketch after this list:


• Where the data is generated

• Where data should be processed

• Where processed data needs to be consumed (by end-users or other services)

• Where information is eventually stored

• Performance needs of the application in terms of response times and throughput

• Security and compliance

• Capabilities of the underlying infrastructure platform, especially if specialized hardware (GPUs or FPGAs) is required

• Cost of data transport

• Cost of transient and permanent storage of data

• Cost of computing and memory at each location

• Availability constraints for the application and reliability of infrastructure at each location

• Budget issues
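
Here is the toy weighted-scoring sketch mentioned above. The factors, weights and scores are placeholders, not a recommended methodology; a real evaluation would use measured latency, quoted prices and actual compliance requirements.

# A toy weighted-scoring pass over candidate placement locations.
# Factors, weights and scores are placeholders, not a methodology.
weights = {"latency": 0.3, "compliance": 0.2, "transport_cost": 0.2,
           "compute_cost": 0.2, "availability": 0.1}

candidates = {
    "on-prem edge": {"latency": 9, "compliance": 9, "transport_cost": 8, "compute_cost": 4, "availability": 6},
    "metro edge":   {"latency": 7, "compliance": 6, "transport_cost": 6, "compute_cost": 6, "availability": 7},
    "cloud region": {"latency": 3, "compliance": 5, "transport_cost": 3, "compute_cost": 9, "availability": 9},
}

for name, scores in candidates.items():
    total = sum(weights[factor] * scores[factor] for factor in weights)
    print(f"{name:13s} {total:.1f}")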


The decision therefore is not a simple matter of functionality or cost alone.


Thursday, October 15, 2020

Is There Really a Telco Cloud Market?

The decision by Nokia to outsource its internal cloud computing requirements to Google Cloud is designed to shift Nokia’s internal information systems from an “on-premises and owned” model to computing as a service. 


But that move also illustrates the fuzziness of efforts to characterize “telco cloud” as a distinct market, as well as the impact of virtualization on telco internal information technology architecture. 


Virtualization essentially means that computing support can be abstracted from network elements and placed anywhere it makes sense. In Nokia’s case, it seems that “place” is Google Cloud data centers, at least for the internal applications that support the business.


It is unclear how much of the computing workload to support Nokia’s service provider or enterprise customers actually will follow, as major connectivity service providers seek to become “cloud native” in their computing operations. 


For mobile operators, cloud native is required to support 5G, which uses a software-driven architecture and Network Functions Virtualization.


source: VMware 


As often is the case, virtually every major product category (sold or bought) already is counted somewhere, as part of some other “market.” Cloud computing is no different in that regard. 


The term might include both internal computing (to support virtualized networks, for example) and cloud computing offered “as a service” to external customers (in the manner of Amazon Web Services, Google Cloud or Microsoft Azure). 


Internal computing using cloud mechanisms includes support for business operations, such as billing, provisioning, traffic management, marketing, customer support, inventory management, customer relationship management and so forth. 


Cloud computing as a service can be thought of as retail products sold to business customers, and could include computing, storage, platform, infrastructure or software sold “as a service.”


One sometimes sees a market forecast for “telco cloud” that could include both expressions of computing: internal information technology to support the business as well as a retail product sold to customers. 


The issue in the former instance is that the “market” in question is already counted as part of the enterprise software and hardware revenue stream. Fulfillment of most software products now is by “cloud” delivery. 


The issue in the latter instance is that relatively few telcos actually are in the retail cloud computing business as brand-name suppliers, though some revenue might be earned supplying edge computing data center real estate. 


There arguably was more optimism about the retail upside a decade ago, before the ecosystem began to stabilize. Not many still believe telcos will be major retail suppliers of cloud computing services to business or consumer end users.

Monday, October 12, 2020

After Edge, "Split Computing?"

Edge computing, most agree, is among the hottest of computing ideas at the moment. But technologists at Samsung believe more distribution of computing chores is possible. They use a new term “split computing” to describe some future state where computing chores are handled partly on a device and partly at some off-device site.


In some cases a sensor might compute partially using a phone. In other cases a device might augment its own internal computing with use of a cloud resource. And in other cases a device or sensor might invoke resources from an edge computing resource. 

source: Samsung


Conventional distributed computing is based on a client-server model, in which the implementation of each client and server is specific to a given developer, Samsung notes. 


To support devices and apps using split computing, an open source split computing platform or standard would be helpful, Samsung says.


With split computing, mobile devices can effectively achieve higher performance even as they extend their battery life, as devices offload heavy computation tasks to computation resources available in the network.
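A rough sketch of the offload decision a device might make follows. The latency model and the numbers in it are invented for illustration; they are not Samsung's design, just the general shape of the trade-off.

# Rough sketch of a device-side offload decision under "split computing."
# The latency model and all constants are invented for illustration.
def should_offload(task_cycles: float, payload_bytes: int,
                   local_hz: float = 2e9, edge_hz: float = 20e9,
                   uplink_bps: float = 100e6, rtt_s: float = 0.005) -> bool:
    """Offload when the remote path (uplink + edge compute) beats local compute."""
    local_latency = task_cycles / local_hz
    remote_latency = rtt_s + payload_bytes * 8 / uplink_bps + task_cycles / edge_hz
    return remote_latency < local_latency

# A heavy inference task with a small input favors offloading; a tiny task does not.
print(should_offload(task_cycles=5e9, payload_bytes=200_000))   # True
print(should_offload(task_cycles=1e7, payload_bytes=200_000))   # False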


5G May Well be the Era of Edge Computing

At least as Samsung sees matters, if 4G was intertwined with cloud computing, then 5G will be the era of edge computing. The reason is that mobile devices used by humans and machine sensors will have limited computation capability.


source: Samsung 


As in the past, computing and communications are functional substitutes: Communications can be used to provide access to computing, or computing can be used to avoid use of communications.


So as computational intensity continues to grow, one solution is to offload computation tasks from end user devices or sensors to more powerful devices or servers.


In the case of real-time, computation-intensive tasks, hyper-fast data rates and extremely low latency communications are required. And that means edge computing, in the 5G era. 


As with all prior digital generations, better latency performance and higher bandwidth--by at least an order of magnitude--can be expected from 6G. As we approach zero in terms of air latency, we might have to start thinking about “negative latency,” the ability of the network and computing infrastructure to anticipate problems and prevent them from occurring. 


source: Samsung 


That obviously will be a virtual concept, as the latency performance advantage will not derive in a physical sense from the network but from avoided latency issues. Samsung notes that the interest in multi-access edge in the telecom industry is precisely this ability to support real-time and mission-critical functions with computing at the edge of the network.


Wednesday, October 7, 2020

Data Center Spending Dips in 2020, Returns to Growth in 2021

End-user spending on global data center infrastructure is projected to reach $200 billion in 2021, an increase of six percent from 2020, according to Gartner analysts. 


Despite a 10.3 percent decline in data center spending in 2020 due to restricted cash flow during the pandemic, the data center market is still expected to grow year-over-year through 2024, Gartner predicts. 


Global Data Center Infrastructure End-User Spending

                 2019     2020     2021
Spending ($B)     210      188      200
Growth (%)        0.7    -10.3      6.2

source: Gartner 


Bare Metal as a Service: Manage Apps Instead of Infrastructure?

Can data center operations become an outsourced function, allowing data center owners to focus on managing apps instead of infrastructure? That is what Dell suggests with its bare metal as a service pitch, illustrating the growing range of “as a service” computing functions an enterprise can purchase. 


source: Dell Technologies 


Besides the existing infrastructure as a service, platform as a service, software as a service and bare metal as a service, suppliers now speak of function as a service (FaaS) and database as a service.


Using FaaS, users manage only functions and data. The cloud service provider manages the applications. This option is especially popular among developers, Intel argues, since customers do not pay for services when their code isn’t running. 


Common functions include data processing, data validation or sorting, and back ends for mobile and IoT applications. FaaS providers include AWS Lambda, Azure Functions, and Google Cloud Functions.
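To give a flavor of the model, a function on AWS Lambda can be as small as the handler below. Only the handler(event, context) calling convention is Lambda's; the event fields and the validation rule are hypothetical.

# Minimal sketch of a FaaS handler in the AWS Lambda style.
# The event fields and validation rule are hypothetical examples of data validation.
def lambda_handler(event, context):
    reading = event.get("temperature_c")
    if reading is None or not (-40 <= reading <= 125):
        return {"statusCode": 400, "body": "rejected: implausible reading"}
    return {"statusCode": 200, "body": f"accepted: {reading} C"}

# Local test invocation; in production the platform invokes the handler per event.
print(lambda_handler({"temperature_c": 21.5}, None))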


Database as a service (DBaaS), such as Microsoft Azure SQL Database, is useful for hybrid cloud, since applications can be moved between on-premises and cloud infrastructure without disrupting user experience. 


Bare Metal as a Service provides a way for enterprises to complement virtualized cloud services with a dedicated server environment with the same agility, scalability, and efficiency as the cloud. 


It also is useful to enterprises for short-term, data-intensive processing such as media encoding or render farms. Shifting such activities to an owned bare metal solution also avoids payment to cloud computing suppliers for those workloads.


Friday, October 2, 2020

Cloud, Edge Cloud and Edge Computing

Most of us are familiar with the paradigm of “cloud computing, fog computing, edge computing” originally developed by firms such as Cisco. In that topology, cloud computing happens remotely, at some distant end of a wide area network. Fog computing happens closer to the end points, but on some more-centralized server, possibly someplace in a metro network. 


Edge computing might further include processing on a device or on a server located on the same premises as the devices. That might be phrased “device edge, premises edge or metro edge.” Multi-access edge computing or infrastructure edge computing provide examples. 


source: Simform 


As hyperscale cloud computing giants move into the edge computing space, some additional notions arise. Though the notion of “cloud computing” (far end of a wide area network) still holds, there is a newer concept, where many of the same functions typically performed at a cloud data center happen locally: on premises or within a metro area, perhaps. 


The Amazon Web Services Wavelength service illustrates the metro computing example. AWS Outposts is an example of premises-based functions normally supplied at an AWS cloud data center. 


Some might call that “edge cloud.”


source: TechTarget

New Twilio Platform, Microsoft Azure Sphere Module Show How IoT can be Deployed Brownfield

Twilio’s new Microvisor platform and Microsoft’s new Guardian module illustrate one way the internet of things might be implemented. 


With the caveat that it typically is easier for an application provider to add new features and create new platforms, compared to a connectivity provider, Twilio’s new Microvisor platform shows how an enabler of voice, text, chat, video and email services in any application is migrating toward support for the internet of things. 


The move illustrates how a communications enabler adapts its current business to develop new markets, much as connectivity providers hope to add support for internet of things and edge computing. 


Microvisor is said by Twilio to be “an IoT connectivity and device management platform that offers embedded developers a one stop shop for building connected devices, keeping them secure, and managing them through their lifetime.”


As was the case with the original Twilio platform that basically would voice-enable any application, the Microvisor platform abstracts away many of the common infrastructure challenges burdening IoT developers, the company says.


That includes functions such as security, debugging and updates, as well as language and embedded operating system support, including hardware such as Cortex-M microprocessors and select STMicroelectronics chips.


Microsoft’s Azure Sphere guardian module provides a hardware element. A guardian module is add-on hardware that attaches to a port on a "brownfield" device that already is in use.  


Starbucks uses guardian modules to track the types of beans used, water temperature, and water quality, for example. 


Beyond predictive maintenance, Azure Sphere allows Starbucks to transmit new recipes directly to machines in 30,000 stores rather than manually uploading recipes via thumb drives.


source: Microsoft 


A guardian module adds IoT capabilities to equipment that either doesn't support internet connectivity or doesn't support it securely. 


A guardian module can cull data from the brownfield device, process it, and transmit it securely to a cloud endpoint or to multiple endpoints. 


It also can provide environmental data for use with operating data from the brownfield device, or act as a data storage device in case connectivity momentarily is lost.


A guardian module requires at least one high-level application. The high-level application communicates upstream with the internet (including the Azure Sphere Security Service and other cloud services) and downstream with the brownfield device.
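To give a feel for that data flow (though not for Azure Sphere's actual C-based programming model), a guardian-style loop might look like the following sketch. The port reader, endpoint URL and message format are all assumptions.

# Illustrative data flow of a guardian-style module: read from a legacy device,
# process locally, forward upstream. This is NOT the Azure Sphere API (which is
# C-based); the reader, endpoint and message format are assumptions.
import json
import time

CLOUD_ENDPOINT = "https://example-iot-hub.invalid/telemetry"   # placeholder URL

def read_brownfield_device() -> dict:
    """Stand-in for reading a value from a serial or fieldbus port on the device."""
    return {"water_temp_c": 91.0, "ts": time.time()}

def forward(reading: dict) -> None:
    payload = json.dumps(reading)
    # A real module would send this over TLS with device-attested credentials;
    # here we just print what would be sent.
    print(f"would send to {CLOUD_ENDPOINT}: {payload}")

if __name__ == "__main__":
    forward(read_brownfield_device())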

Thursday, October 1, 2020

Edge Computing Service Revenue $7 Billion by 2025

Mobile Experts expects connected edge computing services to grow to about $7 billion worth of revenue in 2025.  As much as 66 percent of service revenue will be earned by cloud computing firms. 


Telcos, neutral host providers, and enterprises themselves will play important roles in hosting sites and providing connectivity, Mobile Experts believes. 


source: Mobile Experts 


Telcos and other internet service providers will attempt to add value through wholesale aggregation of local computing capacity, as well as by offering connectivity, the firm says. Some mobile operators will create offers based on ultra-low-latency or high-reliability connections, while others will be satisfied with local real estate hosting of edge data centers, Mobile Experts says. 


"In 2025, we predict that more than half of 'edge data centers' will be on-premises, hosted by an enterprise,” says Mobile Experts. “Another 20 percent to 25 percent will be hosted by the telcos or ISPs. 


Perhaps the key takeaway is that the edge computing business will be led by computing-as-a-service hyperscalers. Edge computing is, after all, computing. Much of the 5G or other access provider upside could hinge on how much the mobile network is used to directly connect sensors and servers to wide area networks.