Friday, October 21, 2022

AT&T Deploys Edge Zones

AT&T says it has deployed 5G edge zones in 10 areas, with plans to expand to 12 zones by the end of 2022. Those data centers “will be located…close to cross connect facilities that have fast connections to nearby cloud facilities run by the ‘hyperscaler’ cloud providers,” says Jeremy Legg, AT&T chief technology officer. 


Presumably AT&T is referring to the edge zones it has created using Azure and Google resources. The objective is to bring computing locations closer to where end users are.


Undoubtedly the concept is similar to the way AWS and Verizon have created Wavelength zones.


source: AWS 

Global Interconnection Bandwidth Growing at 40% Per Year, Says Equinix

Global interconnection bandwidth is forecast to grow at a 40 percent five-year compound annual growth rate, reaching 27,762 Tbps, which is equivalent to 110 zettabytes of data exchanged annually, according to the Equinix Global Interconnection Index. 


source: Equinix Global Interconnection Report
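
As a rough sanity check, the reported annual data volume follows from the bandwidth figure if one assumes the installed interconnection capacity runs fully utilized around the clock. That is an illustrative simplification rather than Equinix’s actual methodology, but the arithmetic, sketched below in Python, lands close to the reported 110 zettabytes.

# Back-of-the-envelope conversion of interconnection bandwidth (Tbps)
# into data exchanged per year, assuming full utilization year-round.
TBPS = 27_762                          # forecast interconnection bandwidth
SECONDS_PER_YEAR = 365 * 24 * 3600     # about 31.5 million seconds

bits_per_year = TBPS * 1e12 * SECONDS_PER_YEAR
zettabytes_per_year = bits_per_year / 8 / 1e21   # bits -> bytes -> zettabytes
print(f"~{zettabytes_per_year:.0f} ZB exchanged per year")   # ~109 ZB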

Thursday, October 20, 2022

Will Edge Computing Be Essential for Either Mass-Scale AR or Metaverse in a Decade?

The Telecom Infra Project has formed a group to look at metaverse-ready networks. Whether one accepts the notion of “metaverse” or not, virtually everyone agrees that future experiences will include use of extended, augmented or virtual reality on a wider scale. 


And widespread use of edge computing is likely to be crucial, both for reducing latency and for working around possible limitations on access bandwidth. Fully immersive and persistent environments will be highly compute-intensive. So local computing is likely to be required for at-scale VR or metaverse use cases.
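
A simple propagation-delay budget shows why compute location matters. The sketch below assumes light travels roughly 200 kilometers per millisecond in fiber and ignores radio access, queuing and processing delays, all of which add more; the distances are hypothetical examples, not measured deployments.

# Illustrative round-trip propagation delay for compute placed at
# different distances from the user.
def round_trip_ms(distance_km, km_per_ms=200.0):
    # Two-way propagation delay, in milliseconds, for a one-way distance.
    return 2 * distance_km / km_per_ms

for label, km in [("on-premises edge", 5),
                  ("metro edge zone", 50),
                  ("regional cloud region", 1500)]:
    print(f"{label:>22}: {round_trip_ms(km):6.2f} ms round trip")

With motion-to-photon targets for comfortable VR often cited at around 20 milliseconds, a distant cloud region can consume most of that budget on propagation alone, before any rendering or radio-access delay is counted.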


The metaverse, or simply AR and VR, will deliver immersive experiences that require better performance from both fixed and mobile networks, TIP says. 


And therein lie many questions. If we assume both ultra-high data bandwidth and ultra-low latency for the most-stringent applications, both “computing” and “connectivity” platforms will have to adjust in some ways. 


Present thinking includes more use of edge computing and probably quality-assured bandwidth in some form. But it is not simply a matter of “what” will be required, but also “when” and “where” those resources will be required.


As always, any set of performance requirements might be satisfied in a number of ways. What blend of local versus remote computing will work? And how “local” is good enough? What mix of local distribution (Wi-Fi, Bluetooth, 5G and other) is feasible? When can--or should--remote resources be invoked?


And can all that be done relying on Moore’s Law rates of improvement, Edholm’s Law of access bandwidth improvement or Nielsen’s Law of internet access speed? If improvements must come faster than those historic rates, where are the levers to pull?


The issue really is timing. Left to its own internal logic, headline-speed service in most countries will reach terabits per second by perhaps 2050. The problem for metaverse or VR experience providers is that they might not be able to wait that long. 


If the Nielsen’s Law trend holds, the top-end home broadband speed could be 85 Gbps to 100 Gbps by about 2030. 

source: NCTA  
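
Those figures are roughly what compounding the Nielsen’s Law rate produces. The sketch below assumes a top consumer tier of about 3 Gbps in 2022; that starting point is an illustrative assumption rather than NCTA’s baseline, and a 3 Gbps to 4 Gbps start lands in the 77 Gbps to 102 Gbps range by 2030.

# Compounding a 50 percent annual speed increase from an assumed
# 3 Gbps top consumer tier in 2022.
start_year, start_gbps, growth = 2022, 3.0, 1.5

for year in (2030, 2040, 2050):
    speed_gbps = start_gbps * growth ** (year - start_year)
    print(f"{year}: ~{speed_gbps:,.0f} Gbps")

# Prints roughly 77 Gbps for 2030, about 4,400 Gbps for 2040 and about
# 256,000 Gbps (hundreds of terabits per second) for 2050.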


But most consumers will not be buying service at such rates. Perhaps fewer than 10 percent will do so. So what could developers expect as a baseline? 10 Gbps? Or 40 Gbps? And is that sufficient, all other things considered? 


And is access bandwidth the real hurdle? Intel argues that the metaverse will require computing resources 1,000 times better than today’s. Can Moore’s Law rates of improvement supply that? Sure, given enough time. 


As a rough estimate, vastly improved platforms--improving faster than the Nielsen’s Law rate--might be needed within a decade to support widespread VR/AR or metaverse use cases, however one wishes to frame the matter. 


Though the average or typical consumer does not buy the “fastest possible” tier of service, the growth of headline-tier speeds since the time of dial-up access has been remarkably consistent (linear on a log scale). 


And that growth trend of 50 percent per year speed increases, known as Nielsen’s Law, has operated since the days of dial-up internet access.


The simple question is: if the metaverse requires 1,000 times more computing power than we generally use at present, how do we get there within a decade? Given enough time, the normal increases in computational power and access bandwidth would get us there, of course.
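
The arithmetic is straightforward: a 1,000-fold gain is just under 10 doublings, so everything depends on the doubling cadence. The cadences in the sketch below (12, 18 and 24 months) are illustrative assumptions.

# Years needed to reach a 1,000-fold compute improvement at various
# doubling cadences.
import math

doublings_needed = math.log2(1_000)   # just under 10 doublings

for months_per_doubling in (12, 18, 24):
    years = doublings_needed * months_per_doubling / 12
    print(f"doubling every {months_per_doubling} months: ~{years:.0f} years to 1,000x")

At a 24-month cadence the answer is about two decades; reaching 1,000x within a decade implies roughly a doubling every year, which is why levers beyond raw transistor density come into play.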


But the metaverse, or extensive AR and VR, might require that the digital infrastructure foundation already be in place before apps and environments can be created. 


What that will entail depends on how fast the new infrastructure has to be built. If we are able to upgrade infrastructure roughly on the past timetable, we would expect to see a 1,000-fold improvement in computation support perhaps every couple of decades. 


That assumes we have pulled a number of levers beyond expected advances in processor power, processor architectures and declines in cost per compute cycle. Network architectures and appliances also have to change. Quite often, so do applications and end user demand. 


The mobile business, for example, has taken about three decades to achieve a 1,000-times change in data speeds. We can assume raw compute changes faster, but even then, based strictly on Moore’s Law rates of improvement in computing power alone, it might still require two decades to achieve a 1,000-times change. 


source: Springer 


And all of that assumes underlying demand drives the pace of innovation. 


For digital infrastructure, a 1,000-fold increase in supplied computing capability might well require any number of changes. Chip density probably has to change in different ways. More use of application-specific processors seems likely. 


A revamping of cloud computing architecture towards the edge, to minimize latency, is almost certainly required. 


Rack density likely must change as well, as it is hard to envision a 1,000-fold increase in rack real estate over the next couple of decades. Nor does it seem likely that cooling and power requirements can simply scale linearly by 1,000 times. 
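
One way to see the squeeze: if delivered compute must grow 1,000-fold while floor space and per-rack power grow only modestly, the remainder has to come from performance per watt. The factors in the sketch below are purely illustrative assumptions, not forecasts.

# Illustrative decomposition of a 1,000x compute target into footprint,
# power density and efficiency gains.
compute_target = 1_000        # required growth in delivered compute
floor_space_growth = 3        # assumed growth in rack real estate
power_per_rack_growth = 4     # assumed growth in power and cooling per rack

perf_per_watt_needed = compute_target / (floor_space_growth * power_per_rack_growth)
print(f"implied performance-per-watt improvement: ~{perf_per_watt_needed:.0f}x")   # ~83x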


So the timing of capital investment in excess of current requirements is really the issue. How soon? How much? What type?


The issue is how, and when, to accelerate rates of improvement. Can widespread use of AR/VR or the metaverse happen if we must wait two decades for the platform to be built?

Thursday, October 6, 2022

Ofcom to Study Cloud Computing Market Structure

Cloud computing market concentration is something Ofcom says it will study. The obvious issue is market power. In the U.S., U.K. and European markets, for example, a few hyperscale cloud services providers dominate. Just three firms generate about 81 percent of cloud computing “as a service” revenues in the United Kingdom. 


source: Ofcom 


Beyond that, Ofcom also will examine other digital markets, including online personal communication apps and devices for accessing audiovisual content. Among the issues Ofcom says it will explore are the ways services such as WhatsApp, FaceTime and Zoom are affecting the role of traditional calling and messaging, and how competition and innovation in these markets may evolve over the coming years. 


Beyond the obvious fact that communications and computing are scale-dependent businesses, there also are industrial policy considerations. Many in Europe are worried that the continent has “fallen behind” the United States and China in global innovation related to applications and computing. 


So efforts to address market competition will tend to include measures that increase the likelihood that local suppliers can win market share. The long-term outcomes are anything but assured. 


Winners in scale businesses, by definition, have scale. In the case of the cloud computing business, that advantage tends to be global in nature. Government policy aimed at restricting the growth of market leaders can provide some breathing room for local competitors. 


Still, in the end, if global scale really does matter, it will always be hard for local contestants to create such global scale. It is not impossible; merely hard. 


And to a greater extent than competition authorities might like to acknowledge, eventual emergence of scale competitors often requires other scale competitors to enter a market. 


Pro-competition policies designed to support new entrants can stimulate market entry, up to a point. Long-term, significant market share gains often happen only when local firms partner with, or are acquired by, other firms with existing scale. 


Fostering competition often is a compelling policy goal. But it is frighteningly difficult to achieve, in terms of market share outcomes. On the other hand, such policies almost always allow smaller firms to gain scale that they ultimately monetize by exiting the market through a sale to larger contestants. 


So even when policies to promote competition essentially fail at disrupting market structure, they often create opportunities that new entrants and smaller firms can monetize.


Saturday, October 1, 2022

Digital Infra Acquisitions by Private Equity Grow

Synergy Research Group says 87 data center mergers or acquisitions happened in the first six months of 2022, with an aggregate value of $24 billion. 


Some $18 billion of deals are pending. 


Synergy logged 209 deals that closed in 2021, with an aggregate value of over $48 billion. A notable trend in the industry has been the recent influx of private funds.


From 2015 to 2018, private equity buyers accounted for 42 percent of deal value. From 2019 to 2021, private equity’s share of total deal value increased to 65 percent, while in the first half of 2022 it jumped to over 90 percent, Synergy Research notes.

source: Synergy Research 


Dealmaking has been led by a few big transactions, including the $15 billion acquisition of CyrusOne by investment firms KKR and Global Infrastructure Partners, and the pending acquisition of Switch by DigitalBridge for $11 billion. In 2021 the acquisitions of CoreSite and QTS, each for around $10 billion, were the big transactions. 


Prior to these four transactions, the biggest data center deals were Digital Realty’s $8.4 billion acquisition of Interxion, Digital Realty’s $7.6 billion acquisition of DuPont Fabros, the Equinix acquisition of Telecity for $3.8 billion, the Equinix acquisition of Verizon’s data centers for $3.6 billion and the acquisition of Global Switch by the Jiangsu Shagang Group of China.


Apart from these mega deals, some of the most notable serial acquirers have been Equinix, Digital Realty, EQT, DigitalBridge/Vantage, CyrusOne, GDS, GI Partners, Keppel, Macquarie, Mapletree and NTT, Synergy says. 


Data Center Colocation Market Remains Fragmented

The data center colocation market remains fragmented globally, Synergy Research Group suggests, even though the six leading colocation providers account for 37 percent of the worldwide market. 


Chinese operators have 13 percent share, “thanks to virtually controlling their home market,” Synergy Research says, leaving half the market contestable by a wide range of suppliers.


The market is led by Equinix, Digital Realty and NTT, which together have about 30 percent of all colocation revenues. 


CyrusOne, DigitalBridge and KDDI have single-digit shares. 


The largest of the Chinese operators are China Telecom, GDS and VNET, according to Synergy. 


Smaller operators with high growth rates include STACK Infrastructure, Mapletree, Chindata, Iron Mountain, Switch and H5 Data Centers. 


The United States and China account for almost half of the world market. They are followed by Japan, the UK, Germany, Singapore and India, which together represent another 24 percent of the total. 


The large country markets with the highest growth rates are China, Brazil, India and Singapore.

source: Synergy Research