Wednesday, April 19, 2023

Edge Computing Necessarily Drives Additional Interconnection Demand

The definitions of “interconnection” and “interconnection revenue” in the data center space might be changing, especially as edge computing grows.


Traditionally, such functions might mostly have been applied to in-building connections between servers. In such instances, even when there was a fee for such connections, the operations fell clearly within the realm of local area networking. 


Edge computing pushes connectivity needs out from hyperscale and tier-one data centers to many local edge locations, by definition increasing the need for additional interconnections. And it does not stretch credulity to argue that non-traditional connectivity service providers, including data centers themselves, will benefit.


Today, interconnection increasingly refers to operations and potential revenue connecting servers and buildings across public networks or wide area networks. Equinix, for example, reported 2022 full-year interconnection revenues representing about 17.5 percent of total revenue. 


Granted, much of that revenue comes from in-building cross connects, not “outside the building” connections between data centers, peering or interconnect locations. In the following chart, amounts are in billions of dollars. So Equinix in 2022 earned about $1.27 billion in interconnection revenue. 


Assume 47 percent of that was earned supplying cross connects inside Equinix facilities, suggesting $597 million in such revenue. That leaves about $673 million in wide area network or access network interconnection revenue. 
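
As a quick check of that arithmetic, here is a minimal sketch in Python; the 47 percent cross-connect share is an assumption used for illustration, not a figure Equinix breaks out.

```python
# Rough split of Equinix 2022 interconnection revenue.
# Assumption: 47 percent of interconnection revenue comes from in-building cross connects.
total_interconnection = 1.27e9        # ~$1.27 billion in 2022
cross_connect_share = 0.47            # assumed cross-connect share

cross_connect_revenue = total_interconnection * cross_connect_share
wan_revenue = total_interconnection - cross_connect_revenue

print(f"Cross connects: ${cross_connect_revenue / 1e6:.0f} million")        # ~$597 million
print(f"WAN/access interconnection: ${wan_revenue / 1e6:.0f} million")      # ~$673 million
```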



source: Equinix 


Back in 2020, Equinix said Equinix Fabric, one of the firm’s “outside the building” connectivity lines of business, represented about 10 percent of total interconnection revenues. But that part of the business has grown. “Virtual” connections were said to be growing at about 32 percent annually.  At such rates, 2022 contributions from that source could have reached 17 percent or so of total interconnection revenue. 
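
A rough sketch of that compounding, assuming the roughly 32 percent annual rate simply compounds off the 2020 base:

```python
# If Equinix Fabric was ~10% of interconnection revenue in 2020 and grew ~32% per year,
# rough compounding gives its weight two years later (relative to the 2020 base).
fabric_share_2020 = 0.10
annual_growth = 0.32
years = 2

fabric_weight_2022 = fabric_share_2020 * (1 + annual_growth) ** years
print(f"Roughly {fabric_weight_2022:.0%} of the 2020 interconnection base")  # ~17%
```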


But Equinix has several product lines within the interconnection area. 


Equinix Fabric provides secure, on-demand, software-defined interconnection globally. 


The biggest current revenue driver is likely the Cross Connects service, which provides a point-to-point cable link between two Equinix customers in the same data center.


Equinix Internet Exchange enables networks, content providers and large enterprises to exchange internet traffic through peering points. 


Equinix Internet Access operates in 40 markets. Fiber Connect provides dark fiber links between customers and partners across multiple Equinix data centers.


Assume cross connects still represent as much as 47 percent of interconnection revenue. That also means as much as 53 percent of revenue comes from services that traditionally are provided by telcos, internet service providers or wide area networking specialists. 


Those products include lit fiber services, access to dark fiber, Ethernet transport or wavelength transport. 


The point is that Equinix earns a substantial amount of “connectivity” revenue as part of its data center services operations. 

source: Paolo Gorgo


Of note is the value Equinix seems to derive from its interconnection function. Where “pure play” connectivity suppliers grudgingly expect lower prices virtually every year, Equinix appears to capture much more value from its interconnection services. 


To be sure, interconnection is not the primary driver of revenue. But the Equinix experience also illustrates that some interconnections are deemed more valuable than others. 


source: Hoya Capital 


It is not unrealistic to predict that, with the growth of edge computing, interconnection will drive even more activity and revenue. 

 

According to Viavi, interconnection “refers to the technology used to link together two or more individual data centers to pool resources, balance employee workloads, replicate data, or implement disaster recovery plans and provide business data closer to the edge.”


Data center interconnections, in other words, are now distinct from in-building interconnections, spanning campus connections up to about five kilometers, metro connections up to 100 km, or traditional WAN connections of hundreds of kilometers. 

Data Center Interconnect Network Diagram (TIA-942-A); source: Viavi 


As always, infrastructure providers are direct beneficiaries of new needs for data center interconnections, as they supply the appliances required to create the connections. Data center operators might monetize their DCI capabilities directly or indirectly, perhaps separately from in-building cross connects and peering.


Tenants might be charged for use of ports or higher-level networking protocols, for example, whether direct (cross connects) or virtual (WAN connections).


Perhaps the point here is that interconnection revenue seems to have higher value in the context of application and computing performance, compared to more generic enterprise data networking. 


For an industry perpetually concerned with declining profit margins caused, in large part, by perceived commodity status, the higher value apparently connected with Equinix ecosystem connections is significant.


Tuesday, April 18, 2023

Interconnection Needs Grow with Edge Computing

The definitions of “interconnection” and “interconnection revenue” in the data center space might be changing. That is especially true as we move towards greater use of edge computing, which will both increase demand for local and metro connections and add some wide area network interconnection demand between data centers.


Traditionally, such functions might mostly have been applied to in-building connections between servers. In such instances, even when there was a fee for such connections, the operations fell clearly within the realm of local area networking. 


Today, interconnection increasingly refers to operations and potential revenue connecting servers and buildings across public networks or wide area networks. 


According to Viavi, interconnection “refers to the technology used to link together two or more individual data centers to pool resources, balance employee workloads, replicate data, or implement disaster recovery plans and provide business data closer to the edge.”


Data center interconnections, in other words, are now distinct from in-building interconnections, spanning campus connections up to about five kilometers, metro connections up to 100 km, or traditional WAN connections of hundreds of kilometers. 

Data Center Interconnect Network Diagram (TIA-942-A); source: Viavi 


As always, infrastructure providers are direct beneficiaries of new needs for data center interconnections, as they supply the appliances required to create the connections. Data center operators might monetize their DCI capabilities directly or indirectly, perhaps separately from in-building cross connects and peering.


Tenants might be charged for use of ports or higher-level networking protocols, for example, whether direct (cross connects) or virtual (WAN connections). 


Friday, April 14, 2023

Nobody Knows What to Call the Next Era of Computing

That we cannot yet agree on what to call the present era of computing likely speaks to the embedding of computing operations into most areas of life and the economy. Most would agree that the mainframe, minicomputer and PC eras were characterized by the types of computing devices. If so, some might suggest we are now in the era of mobile computing. 


Others might focus on the ubiquity of computing devices, moving from just a few mainframes to widespread mobile devices to embedded sensors. That might be said to imply a shift from “few, monolithic” to “many, distributed” to “everywhere, embedded.” 


source: Jeremy Abbett 


But architecture began to matter in the client-server era, which many consider the successor to the PC era. The big change is the role of connectivity and communications: the “device” no longer is the essential foundation for computing. Instead, it is the networked nature of computing, or the lead applications and use cases, that defines the era. 


Some might broadly characterize the movement as from tabulating to programmable systems to cognition.  


So some might characterize the present era as being about “big data” or “cloud,” while the coming era might be foundationally about artificial intelligence, quantum computing or autonomous systems. 


Using that framework, we have moved from devices to the internet and web. Some might point to the coming era as based on artificial intelligence. Some might prefer to focus on “how” we do computing, harkening back to an earlier set of descriptors based more on devices. Those who believe we are headed for an era of “quantum” computing might take this view. 

source: Morgan Stanley 


Yet others might see the progression, since the time of the PC, as moving from “one computer, one user” to “one user, many computers” to “many computers, many users.” The movement there is from personal to collective.


In that typology, we might refer to the mainframe and minicomputer eras as “few computers, few users.”


Most could agree that cloud computing is the era we presently inhabit. Again, it is the architecture that is foundational, not the devices. Others might call it the internet era, but the idea is the same: computing now is routinely conducted remotely, and computing operations extend to support for transactions and content consumption, beyond “work” functions. 


We are likely to have even broader suggested appellations for what comes next, as we could see changes in devices, locations, applications, architectures, use modes or core computing foundations, all at once. 


But one unifying theme is that connectivity or communications has become foundational for any of the characterizations since the “internet” era dawned. We have moved to remote computing, but are evolving towards heterogeneous computing as well.


Thursday, April 13, 2023

Data Center Investment Dipped in 2022

Data centers are the hidden infrastructure underpinning all digital activities, it is safe to say, as remote computing now is the standard method for supporting most content and internet apps and use cases. That, in turn, has fueled the inclusion of digital infrastructure assets to the class of alternative assets investors consider. 


The global colocation data center market is forecast to grow at an 11.3 percent compound annual growth rate from 2021 to 2026, and the hyperscale market is expected to grow even faster, at approximately a 20 percent CAGR, according to JLL. 
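
As a rough illustration of what those forecast rates imply, assuming the growth compounds over the full five-year period:

```python
# Market-size multipliers implied by the forecast growth rates (2021 base).
colo_cagr = 0.113        # colocation CAGR, per JLL
hyperscale_cagr = 0.20   # hyperscale CAGR, approximate
years = 5

print(f"Colocation multiplier over five years: {(1 + colo_cagr) ** years:.2f}x")        # ~1.71x
print(f"Hyperscale multiplier over five years: {(1 + hyperscale_cagr) ** years:.2f}x")  # ~2.49x
```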


source: JLL 


With the caveat that investment capital raised is not the same as sums invested, it seems as though data center investment also dipped in 2022.

Tuesday, April 11, 2023

Do You Pay for Transport or Interconnection When Linking Domains?


In pre-internet times, carriers and enterprises mostly paid for transport when buying capacity services across wide area networks. These days, carriers and enterprises are more likely to pay for interconnection of domains than transport, as such. 

That is one example of the "death of distance" trend in connectivity, where moving bits is less dependent on distance, and more dependent on other elements of the interconnection arrangements. 

Cloud Computing a Material Profit Driver for AWS, Not So for Google Cloud

Amazon and Google share at least one characteristic: their respective cloud “computing as a service” operations make up a small part of total revenue. For Google, that segment represented about nine percent in 2022. 

Google segment revenue

source: Eric Sprague


For Amazon, AWS represents about 13 percent of total revenue, though it supplies the bulk of Amazon profits at the moment. So far, as Google Cloud still seems to be losing money, it makes no direct contribution to Alphabet profits. 


source: Visual Capitalist


Saturday, April 8, 2023

True Platform Business Models Slowly Developing for Data Center, Cloud Computing Suppliers

Network as a service, computing, storage or infrastructure as a service might easily be confused with a platform business model. After all, platform business models tend to involve use of remote or cloud computing, an application for ordering, provisioning, payments and customer service. So do many XaaS offerings. 


XaaS can provide value including reduced cost, plus greater agility and security maintained at industry-leading levels. Sometimes XaaS also can provide advantages in terms of innovation or potential customer scale. 


But the difference in business models is not “buy versus build” or “virtualized” access, scale or innovation but the mechanism by which revenue is earned. A virtualized service offered by a “pipeline” business model provider is still an example of a traditional pipeline model: the seller creates the service and sells it to the customer. 


Amazon Web Services computing and storage functions, for example, are “sold as a service,” but that does not make those AWS products part of a platform business model. The Amazon Web Services Marketplace, on the other hand, is an example of a platform business model.


The marketplace supports transactions between third-party sellers and Amazon customers, with Amazon earning a commission on each sale.
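
A minimal sketch of the difference, using purely hypothetical figures: pipeline revenue is price times units sold directly, while platform revenue is a commission (take rate) on third-party transaction volume.

```python
# Hypothetical comparison of pipeline and platform revenue models.
def pipeline_revenue(unit_price: float, units_sold: int) -> float:
    """Pipeline: the seller creates the service and sells it directly."""
    return unit_price * units_sold

def platform_revenue(gross_transaction_volume: float, take_rate: float) -> float:
    """Platform: a marketplace matches buyers and third-party sellers, keeping a commission."""
    return gross_transaction_volume * take_rate

# Illustrative figures only, not reported numbers.
print(f"Pipeline revenue: ${pipeline_revenue(500.0, 1_000):,.0f}")             # $500,000 in direct sales
print(f"Platform revenue: ${platform_revenue(500_000, take_rate=0.05):,.0f}")  # $25,000 commission on the same volume
```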


The general observation is that, at this point, though many firms are trying to add platform business model operations, those operations remain at a low level, compared to traditional pipeline operations. 


Classically, a platform earns revenue by taking a commission for arranging a match between buyer and seller. AT&T’s online marketplace, for example, allows third parties to offer internet of things products for purchase by AT&T customers. 


AT&T also once hosted its own advertising platform, Xandr, which was sold to Microsoft. It allowed firms to place advertising on AT&T’s websites and apps. 


So far, revenue contributions have been small enough not to identify as distinct revenue streams. 


Likewise, Verizon once operated Verizon Media, which placed ads on Verizon content assets, but that business was sold to Apollo Funds.


Some might consider the use of application programming interfaces evidence that a platform business model is in operation, but that is incorrect. APIs might be used to support a platform business model, but use of APIs, in and of itself, does not change the business model. 


APIs, though, are often a capability exploited by business model platforms, to connect users of the platform; to allow third-party developers to contribute value; to collect user data or to create revenue by charging fees for use of the APIs.
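
As an illustration of the last point, a hypothetical metered API pricing scheme might look like the sketch below; the free tier and per-call price are assumptions, not any provider's actual pricing.

```python
# Hypothetical metered API billing: charge for calls above a free monthly allowance.
def api_usage_charge(calls: int, free_calls: int = 10_000, price_per_call: float = 0.001) -> float:
    """Return the monthly charge for API usage beyond the free tier."""
    billable_calls = max(0, calls - free_calls)
    return billable_calls * price_per_call

print(f"${api_usage_charge(250_000):.2f}")  # 240,000 billable calls -> $240.00
```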


The GSMA Open Gateway initiative supporting APIs usable across networks supports a traditional pipeline model, where the firms create, support and sell their products directly to customers. 


So at least so far, few tier-one connectivity providers have shifted a significant portion of their operations to platform business models, or made it the key strategic direction. Recent asset dispositions by AT&T and Verizon suggest that approach remains experimental and non-core. 


Thursday, April 6, 2023

Edge and 5G Network Slicing are Supposed to Control Latency, Reduce Bandwidth Demand: Will They Always?

Electronic newsgathering and remote capture of distributed video feeds have been use cases that often require specialized backhaul networks, such as fixed wireless microwave-equipped vans or satellite links on those vans. 


In other cases, where venues are fixed, such as sports stadiums or concert venues, dedicated optical backhaul connections often can be used. 


Remote backhaul of video feeds is one possible use of network slices on 5G networks, Ericsson and others might note. 

source: Ericsson 


Such use cases also might highlight a shift in expected new revenue sources for 5G and possibly 6G and future networks. At least so far, fixed wireless has been the clear early driver of new revenue on 5G networks that can be clearly attributed to the network itself, and that is a business-to-business or business-to-consumer use case. 


For consumers, 5G has actually supplied benefits mostly in the area of “faster connection speed.” But that might be the case for most consumer 5G value for some time, and perhaps for the whole duration of 5G use as the primary mobile network.


Early speculation about 6G centers on new or exotic apps, but 6G might similarly see new use cases emerge in the business and enterprise spheres, while consumer benefits remain largely driven by faster speeds and lower latency. 


And users might need both, especially if use of virtual private networks increases. The issue with VPNs, as users quickly find out, is a hit to performance. Though the rule of thumb is that VPNs can reduce experienced connection speed by 10 percent to 20 percent, I have found that speeds can be reduced as much as 50 percent or more. 
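
A simple way to see the impact is to apply those rule-of-thumb overhead figures to a hypothetical connection speed:

```python
# Effective throughput under assumed VPN performance penalties (illustrative only).
def effective_speed(base_mbps: float, overhead: float) -> float:
    """Connection speed after a fractional VPN performance penalty."""
    return base_mbps * (1 - overhead)

base = 300.0  # Mbps, hypothetical 5G connection
for overhead in (0.10, 0.20, 0.50):
    print(f"{overhead:.0%} penalty -> {effective_speed(base, overhead):.0f} Mbps")
```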


It is unclear to me whether the VPN speed tax can be eliminated (I cannot see this happening) or vastly reduced, when using a 5G network slice. In principle, a network slice would have to increase processing, which would add some latency, and also reduce connection speed, as do all other VPNs.  


The good news for remote video backhaul is that latency is not an issue for non-real-time or real-time video that is streamed, once the stream is launched. So long as the connection has enough capacity, the VPN-imposed processing should not be a troublesome issue, either. 


That might not be the case for latency-sensitive, highly-interactive sessions, however. Which is ironic, if you consider the amount of hype about network slicing and its ability to control and limit latency and also provide guaranteed bandwidth.

 

Wednesday, April 5, 2023

Edge Networks, Content Delivery Networks, Multicasting All are Ways to Deal with Video Entertainment Capacity

Unicasting of entertainment video is the primary reason why consumer data demand keeps growing. Edge computing and content delivery networks are one method for containing wide area network capacity demand.


So is multicasting.


In the days of broadcast TV and audio, essentially one copy of an item was sent out to all potential users at one time (one to all). 

source: Biamp 


In other words, unicast delivery is demand sensitive, while multicasting is not. If N users request a particular file, unicast delivery must create N separate streams. A multicast system, in principle, creates just one stream. 


As a simple example, a single unicast file requested by three users requires three separate streams. At bandwidth N, total consumed bandwidth is 3N. 


When three users request a single file and multicasting is possible, one copy is launched to three addresses. Capacity consumed across the wide area network is just one stream. Total WAN bandwidth consumed is N. 

source: iSchool 


If one video stream requires 4 Mbps, then a broadcast system consumes only 4 Mbps of total capacity. A unicast alternative requires 4 Mbps times N of bandwidth, where N is the number of users. 
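
A minimal sketch of that capacity math, using the 4 Mbps per-stream figure above and a hypothetical range of viewer counts:

```python
# WAN bandwidth to deliver one title to N viewers, unicast versus multicast.
def unicast_bandwidth(stream_mbps: float, viewers: int) -> float:
    """Unicast: one discrete stream per viewer."""
    return stream_mbps * viewers

def multicast_bandwidth(stream_mbps: float, viewers: int) -> float:
    """Multicast: a single stream regardless of viewer count."""
    return stream_mbps if viewers > 0 else 0.0

stream = 4.0  # Mbps per video stream
for n in (3, 1_000, 1_000_000):
    print(f"{n:>9,} viewers: unicast {unicast_bandwidth(stream, n):>12,.0f} Mbps, "
          f"multicast {multicast_bandwidth(stream, n):.0f} Mbps")
```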


The network capability implications are enormous. As a one-to-one delivery mechanism, unicast networks require discrete network consumption for every identical file delivered to any number of end users who request it, whether demand is synchronous (all want the file at the same time) or asynchronous (at different times). 


A multicast network operates more like a broadcast network, in the sense that all potential users of one piece of content, requesting it at the same time, are sent one copy addressed to all users requesting the file. Multicasting is a one-to-many delivery mechanism. 


The point is that unicast content delivery is why consumer data demand keeps growing. As linear video consumption decreases, much more bandwidth is required to deliver content to the same number of active viewers. 

source: ESDS 


As a practical matter, broadcasting and multicasting are functionally the same: all viewers who want to watch a particular event or item essentially receive the same copy. In the former case the selection is made by the user device and channel selection (device on or off; tuned to a particular channel or not). 


In the latter case, copies are delivered to those user addresses who have requested delivery. The capacity implications are roughly the same, either way. 


So content delivery networks and edge computing, storage and delivery are ways of containing unicast delivery costs. Signal compression, offline delivery and non-real-time delivery are methods for alleviating some of the peak capacity demand. 


At the very least, serving up content closer to end users reduces capacity demands on core networks. Edge content delivery also improves latency performance.


Tuesday, April 4, 2023

AWS, Alibaba, Microsoft, Google Cloud Operations Profit Contributions Vary

Alibaba's cloud computing operations do not produce as much value for that firm as Amazon Web Services does for Amazon. Cloud operations represent about nine percent of total revenue for Alibaba, while AWS constitutes about 16 percent of Amazon total revenue. 


AWS arguably represents all of the profit for Amazon, while cloud computing drives about 24 percent of Alibaba profit. 


Cloud computing revenues are perhaps 22 percent of Microsoft revenue. It is unclear to me what percentage of profit cloud computing contributes. Google's cloud operations generate perhaps nine percent of total revenue and contribute nothing (yet) to Google profits. 


Alibaba's impending breakup into six different companies sheds light on the valuation of conglomerate firms. The Chinese e-commerce giant is splitting into six distinct firms: Cloud Intelligence Group, Taobao Tmall Commerce Group, Local Services Group, Cainiao Smart Logistics Group, Global Digital Commerce Group and Digital Media and Entertainment Group. 


So the result would be firms focused on domestic e-commerce, cloud computing, international e-commerce and media, among others. The smallest unit might be conceptually related to X lab, Alphabet’s development unit. And there would be firms focusing on food delivery and logistics. 


Even as Alibaba carries a single valuation figure, no matter which metric is used, each of the six lines of business has a different set of potential metrics, based on existing market valuations of media, e-commerce and cloud computing, for example. 


source: Reuters  


A sum-of-the-parts valuation of the broken-up Alibaba, using a price/sales method, might value Alibaba's core e-commerce business at $100 billion, while the cloud computing business is valued at $50 billion. 


The media business might be valued at about $25 billion, while the logistics unit is valued at perhaps $15 billion.


Using an EV/EBITDA method, Alibaba’s e-commerce business might be worth $250 billion, while the cloud computing business is valued at $100 billion. 


The media business might be worth about $50 billion, while the development unit is valued at about the same level. 
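
A minimal sum-of-the-parts sketch, using only the illustrative figures above (hypothetical valuations, not market quotes):

```python
# Illustrative sum-of-the-parts totals for a broken-up Alibaba (in $ billions).
price_to_sales_parts = {"e-commerce": 100, "cloud": 50, "media": 25, "logistics": 15}
ev_to_ebitda_parts = {"e-commerce": 250, "cloud": 100, "media": 50, "development": 50}

print(f"Price/sales method total: ${sum(price_to_sales_parts.values())} billion")  # $190 billion
print(f"EV/EBITDA method total: ${sum(ev_to_ebitda_parts.values())} billion")      # $450 billion
```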


One might make the same observation about Amazon, which might be broken out into perhaps six distinct lines of business, each potentially valued differently from the others. Growth rates, also a key factor in valuation, differ wildly, from about two percent for online e-commerce to 20 percent for Amazon Web Services, 23 percent for advertising and 24 percent for third-party fulfillment services. 


source: Deep Tech Insights 


source: Deep Tech Insights 


Some, such as Ben Alaimo of Deep Tech Insights, would value the e-commerce operations at a price/sales ratio of 2.4, while AWS is valued at a six times P/S ratio, for example. 


Other valuation metrics might include free cash flow generation, enterprise value or discounted cash flow. In 2022, for example, AWS might have had an EV/EBITDA ratio in the range of 43, while the rest of Amazon carried a ratio closer to 21. 


In 2022, AWS produced about $53 billion in free cash flow, while the rest of Amazon produced perhaps $14 billion. 


In 2022, AWS produced a bit more than twice the discounted cash flow as did the rest of Amazon. 


The point is that many firms in the media, software, online services and data center or access businesses are functionally conglomerates, with lines of business with distinct growth profiles, revenue contributions and profit margins. 


Blending all those sources shows the possible value of creating new pure play assets.


Monday, April 3, 2023

What is a Data Center?


A building full of servers, to be sure. But also a server site connected to the rest of the global internet and many private networks, internet peering points, cloud computing providers, enterprises and application providers. Some of those connections are inside the building, going server to server. Others require use of wide area networks. 

Traditionally there has been a big business model separation between private local networks and public wide area networks. Some of that has converged, as connections between internet domains now generate connectivity revenue both inside and outside the building, blurring the distinctions between local and WAN connections and revenue models. 

In a virtualized computing environment, the boundaries will become even more porous.