Monday, November 29, 2021

MEC Opportunities for Mobile Operators Might Vary from Region to Region

Multi-access Edge Computing, as envisioned by mobile operators, has been viewed as a way to leverage 5G to create a revenue-generating role in edge computing. It is, to put it mildly, complicated. Connectivity providers can supply the access and often the real estate. 


More complicated is creating a role in the actual computing-as-a-service function. SK Telecom, for example, is creating its own branded MEC service, supplying the actual compute cycles, analytics, operations and vertical applications as well as connectivity. 

source: RCR 


To do so, it builds on Intel and Dell servers, with VMware supplying compute cycles and computing-as-a-service management. SK contributes the connectivity, real estate and some vertical market solutions. 


Another model involves mobile operators supplying connectivity and real estate, with a hyperscale computing-as-a-service supplier providing the actual compute cycles. 

source: RCR 


Some mobile operators are likely to try the former approach; others will choose the latter. Much could depend on the maturity of hyperscale cloud computing in different regions. Where cloud computing services are immature, there arguably are greater opportunities for a telco-developed role in providing edge computing. 


In markets where hyperscale cloud computing is well developed, there arguably are fewer opportunities for a mobile operator to create an edge computing services business.




Tuesday, November 16, 2021

How Will Edge Change Data Traffic Patterns?

Data center and consumer end user data usage are virtually mirror images of each other: most end user data consumption involves remote server access, while most data center data consumption is based on other local servers. In other words, consumer data usage now is WAN centric, while data center demand is LAN centric. 


How those patterns might change once edge computing becomes ubiquitous is not yet clear. By definition, more computing and server access will come from a local source (on the premises or within a metro area) rather than from a remote data center.


Computation-focused operations (image analysis, virtual reality, augmented reality) will still rely on external server access, but those servers will be closer to the end user.


Many other content-related operations will rely on the existing metro data center architecture (content delivery networks), while highly customized operations relying on large data stores will still happen mostly remotely.


The huge amount of “within the data center” traffic is partly caused by applications that involve lots of queries. Many internet applications are extremely “chatty”: a single search query might trigger hundreds of server requests inside the data center, for example. 


A social networking transaction has a similar multiplier effect, as it draws in an entire social graph to respond to a single query. 
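The multiplier effect can be sketched with a toy fan-out model. All of the fan-out and message-size numbers below are hypothetical illustrations, not measurements:

```python
# Toy fan-out model: one user-facing request triggers many
# intra-data-center ("east-west") requests. All values are hypothetical.

def east_west_bytes(user_requests, fan_out, avg_internal_bytes):
    """Bytes exchanged inside the data center to serve user_requests."""
    return user_requests * fan_out * avg_internal_bytes

# One search query fanning out to 200 index/ranking servers at 2 KB each:
internal = east_west_bytes(user_requests=1, fan_out=200, avg_internal_bytes=2_000)
north_south = 50_000  # ~50 KB results page returned to the user (illustrative)

print(internal)                 # 400000 bytes moved inside the data center
print(internal / north_south)   # 8.0x more east-west than north-south traffic
```

Even at these modest illustrative values, internal traffic dwarfs the traffic actually delivered to the user, which is why data center demand is LAN centric.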

source: Cisco 


The architecture of data centers contributes to traffic volume as well, with separate storage arrays, development and production server pods, and application server clusters that all need to talk to one another.


A decade ago, data center traffic moving to end users was a larger percentage of total wide area network data volume. That has been steadily changing, with more traffic moving between data center locations.  


source: Cisco 


In 2021, the volume of data moving between data centers is about equal to the amount of data moving to end users. Content caching accounts for some of the data center to data center increase. Content mirroring accounts for an additional amount of inter-data-center traffic. 

source: Cisco 


Still, wide area network bandwidth now is about equally composed of traffic heading for end users and traffic moving between data centers, a trend itself driven by the dominance of content as a driver of network capacity. 

source: Telegeography 


Content drives as much as 83 percent of trans-Atlantic traffic and 66 percent of trans-Pacific traffic, for example. 

source: Telegeography 


Monday, November 15, 2021

Which Edges Will American Tower Pursue, After CoreSite Acquisition?

American Tower’s acquisition of data center provider CoreSite is intended to support American Tower’s edge computing ambitions. It is not yet clear how many of the multi-access edge computing segments the acquisition will support, at least initially. 


source: American Tower 


What might be most obvious are ways to support the access edge (tower sites), aggregation edge or regional data center (metro edge) venues.


Friday, November 12, 2021

Edge Computing Sales $17.8 Billion Globally by 2025

The sale of edge computing products, services and solutions will grow to reach US$17.8 billion in 2025, up from an estimated US$8 billion in 2019, a compound annual growth rate (CAGR) of 15.6 percent, according to GlobalData.
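For reference, the standard CAGR formula applied to those endpoints gives roughly 14.3 percent over the six years from 2019 to 2025, so the published 15.6 percent figure presumably reflects a slightly different base period. A minimal sketch:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate: (end/start)^(1/years) - 1."""
    return (end_value / start_value) ** (1 / years) - 1

# GlobalData's endpoints: US$8 billion (2019) to US$17.8 billion (2025)
rate = cagr(8.0, 17.8, 6)
print(f"{rate:.1%}")  # 14.3%
```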


As always, such forecasts include sales of infrastructure to create edge computing capabilities, system integration and installation, real estate investments, computing-as-a-service offerings and connectivity. 


In North America, sales of edge computing products, services and solutions will amount to US$6.85 billion by 2025, which is equivalent to 38 percent of the total global market. 


Sales in Asia Pacific and Western Europe will amount to US$4.65 billion and US$3.39 billion, respectively, equivalent to 26.4 percent and 19.3 percent of the total global market, the firm estimates.
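The regional shares can be checked directly against the US$17.8 billion global total; small rounding differences against the firm's stated percentages are expected:

```python
# Sanity-check regional shares against the global 2025 forecast (US$ billions).
total = 17.8
regions = {"North America": 6.85, "Asia Pacific": 4.65, "Western Europe": 3.39}

for name, sales in regions.items():
    print(f"{name}: {sales / total:.1%}")
# North America ~38.5%, Asia Pacific ~26.1%, Western Europe ~19.0%
```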


Tuesday, November 2, 2021

Metaverse Should Drive Edge Computing

The name change from Facebook to Meta illustrates why remote computing and computing as a service are driving computing to the edge. 


“The metaverse is a shared virtual 3D world, or worlds, that are interactive, immersive, and collaborative,” says Nvidia. 


Facebook says “the metaverse will feel like a hybrid of today’s online social experiences, sometimes expanded into three dimensions or projected into the physical world.”


As 3D in the linear television world has been highly bandwidth intensive, metaverse applications are expected to require lots of bandwidth. As fast-twitch videogaming has relied on low-latency response, metaverse applications will require very low latency. As web pages are essentially custom built for each individual viewer, so metaverse experiences will be custom built for each user, in real time, often requiring content and computing resources from different physical locations. 


All of that places new emphasis on low-latency response and high-bandwidth computing and communications network support. Metaverse experiences also will be highly compute intensive, often requiring artificial intelligence. 


As with earlier 3D television, high-quality video conferencing apps and immersive games, metaverse experiences also require choices about where to place compute functions: remote or local. Those decisions in turn drive decisions about required communications capabilities. 


Those choices always involve cost and quality decisions, even as computational and bandwidth costs have fallen roughly in line with Moore’s Law for decades. 


source: Economist, What's the Big Data


As low computational costs enabled packet switching and the internet, so low computational costs support both remote and local computing. Among the choices app designers increasingly face are latency performance and communications cost. Local resources inherently have the latency advantage, and the cost of wide area bandwidth can be material for the remote case. Energy footprint also varies between local and remote computing.  


On the other hand, remote computing means less investment in local servers. The point is that “remote computing plus wide area network communications” is a functional substitute for local computing, and vice versa. When performance is equivalent, designers have choices about when to use remote computing and local, with communications cost being an integral part of the remote cost case. 
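The substitution argument can be made concrete with a toy response-time and cost comparison between an edge site and a remote cloud region. All of the parameters below (round-trip times, per-GB transfer price, infrastructure cost) are hypothetical placeholders, not benchmarks:

```python
# Toy model of the local-versus-remote placement tradeoff.
# Every parameter here is a hypothetical illustration.

def response_ms(network_rtt_ms, compute_ms):
    """End-to-end response time: network round trip plus server-side compute."""
    return network_rtt_ms + compute_ms

def monthly_cost(gb_moved, price_per_gb, fixed_infra):
    """WAN transfer charges plus amortized infrastructure cost."""
    return gb_moved * price_per_gb + fixed_infra

edge = {"rtt": 5, "per_gb": 0.00, "infra": 4000}     # local servers: no WAN charge
remote = {"rtt": 60, "per_gb": 0.05, "infra": 1500}  # remote region: WAN egress fees

gb = 100_000  # monthly traffic, GB
print(response_ms(edge["rtt"], 10), response_ms(remote["rtt"], 10))  # 15 vs 70 ms
print(monthly_cost(gb, edge["per_gb"], edge["infra"]))      # 4000.0
print(monthly_cost(gb, remote["per_gb"], remote["infra"]))  # 6500.0
```

Under these assumed numbers the edge site wins on both latency and cost; at lower traffic volumes the remote option's lower fixed cost would win on cost, which is exactly the tradeoff designers face.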


Metaverse use cases, by contrast, are driven to the edge (local) for performance reasons. Highly compute-intensive use cases with low-latency requirements are, in the first instance, about performance, and only secondarily about cost. 


In other words, fast compute requirements and the volume of requirements often dictate the choice of local computing. And that means metaverse apps drive computing to the edge. 


source: Couchbase

Why "Ecosystem" has Grown in Importance for Data Centers

The data center and interconnection markets have changed over the past couple of decades. Early on, it was connectivity providers that needed to interconnect their networks. With the internet, new requirements were created for internet service providers and internet transport providers to connect to each other. 


Now application providers often must connect with other app providers and connectivity providers. All of that is why there is a growing emphasis on “ecosystems” in the data center colocation market. 


source: Canalys 


Where service provider interconnection drove the older business model, over time additional interconnection participants have emerged, including application, platform and content providers. In some cases, the newer participants are engaged in direct counterparty trades between themselves. 

source: Equinix 


Monday, November 1, 2021

Cloud Infrastructure Service Revenue Grew 35% in 3Q 2021, Says Canalys

Global spending on cloud infrastructure services increased 35 percent in the third quarter of 2021, reaching US$49.4 billion for the quarter and implying full-year revenue for cloud infrastructure service providers in the range of US$185 billion or so. 
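The full-year figure can be sanity-checked by annualizing the quarterly number. Assuming steady sequential growth that compounds to the reported 35 percent year-over-year rate (an assumption, not a Canalys figure), the trailing four quarters sum to roughly US$177 billion, broadly consistent with the "$185 billion or so" estimate:

```python
# Back-of-envelope annualization of Q3 2021 cloud infrastructure revenue.
# The steady sequential growth rate is an assumption for illustration.

q3 = 49.4                 # US$ billions, Q3 2021
q_growth = 1.35 ** 0.25   # implied sequential growth, ~7.8% per quarter

# Sum Q4 2020 through Q3 2021, working backward from the Q3 figure.
trailing_year = sum(q3 / q_growth ** n for n in range(4))
print(round(trailing_year, 1))  # ~177
```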


source: Canalys 


Canalys defines cloud infrastructure services as those that “provide infrastructure as a service and platform as a service, either on dedicated hosted private infrastructure or shared infrastructure.”


One source of possible definitional difference is how to count “application as a service” revenue, particularly important when evaluating Microsoft, Oracle or IBM figures, for example. 


Canalys says it excludes the value of “software as a service” but includes the revenue generated by the infrastructure services consumed to host and operate those applications.


Other analyses tend to confirm the Canalys market share figures, which rank Amazon Web Services at 32 percent market share, Microsoft Azure at 21 percent and Google Cloud at eight percent. 


source: Statista


sources: Synergy Research Group, Statista