Friday, September 30, 2022

IoT Priorities Shift, Eseye Survey Suggests

A study of U.K. and U.S. executives dealing with internet of things deployments finds perceptions of value have shifted over the last year. A year ago, respondents said “competitive advantage” was the IoT value proposition, Eseye says. 


This year, it appears a majority of respondents are looking for operational efficiencies and lower costs. Where a year ago the strategic rationale was prevalent, today’s emphasis seems more focused on the tactical: revenue and profit gains (though what is the point of competitive advantage if not higher revenue and profit?). 


source: Eseye


The 500 surveyed organizations operate between 1,000 and 5,000 IoT devices. There are differences between respondents in the U.S. and U.K. markets, however. 


The top benefit cited for IoT projects in the U.S. was operational efficiencies (31%). The top benefit cited by U.K. respondents was increasing profit (30%).


The biggest hurdle for U.K. respondents to overcome with their IoT initiatives was managing multiple carrier or provider contracts (24%). U.S. respondents cited three challenges in equal shares: security of the devices and environment; device onboarding, testing and certification; and importing existing MNO contracts into the IoT estate (23% each), Eseye says. 


Comparing the biggest challenges year over year, security of the devices and environment was the biggest hurdle for all respondents in 2021, and remains so for U.S. respondents. U.K. respondents, by contrast, said their biggest challenge in 2021 was cellular connectivity across countries, regions and locations.


In the U.K., private 5G/LTE networks were the top technology driver (36%), while intelligent edge hardware (37%) was the most popular in the U.S. market. 


It might also be hard to clearly separate IoT from edge computing. The desire to gain operational efficiencies also was viewed as linked to the value of computing at the edge, the report notes.


Wednesday, September 28, 2022

Interconnection Rules are in Play, Again, and They Affect Edge Computing Economics

You might think new debates about application providers paying fees to internet service providers mostly affect the funding mechanisms for internet access networks or the costs of doing business for hyperscale content providers. 


In fact, they also affect edge computing, in particular content delivery network economics. In South Korea, such rules--requiring payments to terminating networks by content sources--actually affect the economics of creating and using content delivery networks.


Arbitrage is the problem (or the strategy, for some providers). Arbitrage is the business strategy of exploiting price differences between at least two different markets. In the case of termination rates, arbitrage can occur when networks are of vastly different sizes, in terms of traffic exchanged. 


In South Korea, internet and content providers are subject to a “Sending Party Network Pays” regime that requires ISPs and some content providers to pay to send traffic to another ISP. 


In essence, that is a codification of interconnection rules that used to apply only to carriers (connectivity service providers) and which exists between internet domains as well, though those arrangements normally are the result of commercial deals, not government regulations. 


If a large ISP hosts a content delivery network, then the large ISP would pay to send traffic to a smaller ISP. That same dynamic has been seen in voice markets, when similar “calling party pays” rules have applied to large and small voice providers in about the same way: larger providers send more traffic to small providers than the small providers send to the larger networks. Arbitrage is the result. 
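The traffic imbalance described above can be sketched as a toy settlement calculation. The volumes and per-terabyte rate below are illustrative assumptions, not actual Korean interconnection terms:

```python
def net_settlement(sent_tb: float, received_tb: float, rate_per_tb: float) -> float:
    """What a network owes (positive) or earns (negative) under a
    sender-pays regime at a flat per-terabyte rate."""
    return (sent_tb - received_tb) * rate_per_tb

# A large ISP hosting a CDN sends far more traffic than it receives:
print(net_settlement(sent_tb=900, received_tb=100, rate_per_tb=20))  # 16000  (net source pays)
print(net_settlement(sent_tb=100, received_tb=900, rate_per_tb=20))  # -16000 (net sink earns)
```

The sign flip is the arbitrage incentive: under sender pays, being a net traffic sink is a revenue position.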


If that CDN is then moved outside of the country, then all ISPs have to pay for the transit to get the data from the CDN. This motivates smaller ISPs to host a CDN on their own networks, because they could save on international IP transit fees. 


source: RIS


The point is that interconnection rules can shape the economics of CDNs serving South Korea, and indeed the business models of access and content providers. 

 

In 2017, two ISPs, SK Broadband and LG Uplus, asked for payment from KT for the Facebook traffic KT was sending them using the Facebook CDN KT was hosting. In other words, KT had to pay for sending excessive traffic to the other networks.


KT tried to pass the charges on to Facebook, which refused to pay. After the breakdown of negotiations between KT and Facebook, Facebook disabled the CDN, which meant that Korean telcos had to access other CDNs, such as the one in Hong Kong. 


As a result, connections to Facebook took 4.5 times longer on SK Broadband’s network and 2.4 times longer on LG Uplus’s network. 


KT complained to the regulator, KCC, which issued Facebook a fine of 396 million won (US$ 321,000), arguing that the removal of the cache had significantly harmed users. On 11 September 2020, the Seoul High Court sided with Facebook and dismissed the KCC case. 


But the issue continues to loom large. 


The debate over how to fund access networks, as framed by some policymakers and connectivity providers, relies on how access customers use those networks. The argument is that a disproportionate share of traffic, and therefore demand for capacity investments, is driven by a handful of big content and app providers. 


It is a novel argument in the area of communications regulation. Business partners (other networks) have been revenue contributors when paying to terminate their voice traffic on another network, for example. 


But some point to South Korea as an example of cost-sharing mechanisms applied to hyperscale app providers.


South Korean internet service providers levy fees on content providers that represent more than one percent of access network traffic or have one million or more users. Fees amount to roughly $20 per terabyte ($0.02 per gigabyte).
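The quoted rate converts cleanly between units (using decimal terabytes, 1 TB = 1,000 GB). A quick sketch, with a purely hypothetical monthly volume added for illustration:

```python
rate_per_tb = 20.00                # the fee level cited above, in USD
rate_per_gb = rate_per_tb / 1000   # decimal units: 1 TB = 1,000 GB
print(rate_per_gb)                 # 0.02

monthly_tb = 5_000                 # hypothetical volume: 5 PB in a month
print(monthly_tb * rate_per_tb)    # 100000.0 -> a $100,000 monthly fee
```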


The principle is analogous to the bilateral agreements access providers have with all others: when a traffic source uses a traffic sink (sender and receiver), network resources are used, so compensation is due. 


Such agreements, in the past, have been limited to access provider networks. What is novel in South Korea is the notion that some application sources are equivalent to other historic traffic sources: they generate remote traffic terminated on a local network. 


So far, such claims are not officially bilateral, which is how prior arrangements have worked. The South Korean model is sender pays, similar to a “calling party pays” model. 


Those of you with long memories will recall how the vested interests play out in any such bilateral agreements when there is an imbalance of traffic. Any payment mechanisms based on sender pays (calling party pays) benefit small net sinks and penalize large net sources. 


In other words, if a network terminates lots of traffic, it gains revenue. Large traffic generators (sources) incur substantial operating costs. 


Of course, as with all such matters, it is complicated. There are domestic content implications and industrial policy interests. In some quarters, such rules might be part of other strategies to protect and promote domestic suppliers against foreign suppliers. 


At the level of network engineering, the imbalances and costs are a direct result of choices about network architectures, namely the shift of content delivery from broadcast or multicast to unicast or “on demand” delivery. 


This is a matter of physics. Some networks are optimized for multicast (broadcast). Others are optimized for on-demand and unicast. Satellite networks, TV and radio broadcast networks are optimized for multicast: one copy to millions of recipients. 


Unicast networks (the internet, voice networks) are optimized to support one-to-one sessions. 


So what happens when we shift broadcast traffic (multicast) to unicast and on-demand delivery is that we change the economics. In place of bandwidth-efficient delivery (multicast or broadcast), we substitute bandwidth “inefficient” delivery.


In place of “one message, millions of receivers” we shift to “millions of messages, millions of recipients.” Instead of launching one copy of a TV show, sent to millions of recipients, we launch millions of copies to individual recipients. 


Bandwidth demand grows to match. If a multicast event requires X bandwidth, then one million copies of that same event requires 1,000,000X. Yes, six orders of magnitude more bandwidth is needed. 
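The arithmetic can be sketched directly. The stream bit rate and audience size below are illustrative assumptions:

```python
def delivery_bandwidth_mbps(stream_mbps: float, viewers: int, unicast: bool) -> float:
    """Aggregate bandwidth: one shared copy for multicast,
    one copy per viewer for unicast."""
    return stream_mbps * (viewers if unicast else 1)

stream = 5.0           # a single 5 Mbps video stream (assumed)
audience = 1_000_000   # one million viewers

print(delivery_bandwidth_mbps(stream, audience, unicast=False))  # 5.0 (one copy serves all)
print(delivery_bandwidth_mbps(stream, audience, unicast=True))   # 5000000.0 (i.e., 5 Tbps)
```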


There are lots of other implications. 


Universal service funding in the United States is based on a tax on voice usage and voice lines. You might argue that made lots of sense in prior eras where voice was the service to be subsidized. 


It makes less sense in the internet era, when broadband internet access is the service governments wish to subsidize. Also, it seems illogical to tax a declining service (voice) to support the “now-essential” service (internet access). 


The point is that what some call “cost recovery” and others might call a “tax” is part of a horribly complicated shift in how networks are designed and used.


Thursday, September 15, 2022

Backblaze Study Confirms SSD Advantages over HDD When Used for Boot Drives in Data Centers

One of the touted advantages of solid state drives, compared with mechanical devices of similar function, is that with no moving parts, solid state drives should be less likely to fail. A study by cloud storage provider Backblaze suggests that is the case. 


A year-long study of boot drive reliability, including both hard disk drives and solid state drives, showed lower average failure rates for SSDs. 
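Backblaze expresses drive reliability as an annualized failure rate (AFR) computed from failures per drive-day of service. A minimal sketch of that calculation, using made-up counts rather than Backblaze's actual data:

```python
def afr_percent(failures: int, drive_days: int) -> float:
    """Annualized failure rate: failures per drive-day of service,
    scaled to a 365-day year, expressed as a percentage."""
    return failures / drive_days * 365 * 100

# Illustrative fleet counts, not Backblaze's figures:
print(round(afr_percent(failures=25, drive_days=1_000_000), 2))  # 0.91
print(round(afr_percent(failures=10, drive_days=365_000), 2))    # 1.0
```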


“At this point we can reasonably claim that SSDs are more reliable than HDDs, at least when used as boot drives in our environment,” says Backblaze.

source: Backblaze 


Boot drives store log files and temporary files produced by storage servers. Each day a boot drive reads, writes, and deletes files depending on the activity of the storage server itself. 


Backblaze began replacing its HDD boot drives with SSDs in late 2018. 


source: Backblaze 


What Backblaze does not yet know is how the SSD aging process will affect average failure rates for boot drives. It simply does not yet have the same amount of longevity data for those devices. 


In this case, it appears that intuition or opinion about failure rates of SSD compared to HDD is confirmed. 


Of course, there are trade-offs. SSDs cost more than HDDs. Some might also argue that, despite the Backblaze findings, HDDs might still have longer useful lives. Also, some will point out that SSDs are arguably “better” for boot drive use cases, while HDDs might still be deemed superior for long-term storage. 

source: Avast 


All engineering decisions involve trade-offs.


Wednesday, September 14, 2022

Oracle Cloud Growing Fast from a Smallish Base

Oracle often does not show up in tallies of global cloud infrastructure market share, registering only about two percent. But Oracle says its cloud revenues are growing fast. 


For the fiscal 2023 first quarter, Oracle reported cloud services and license support revenues were up 14 percent in U.S. dollars and up 20 percent in constant currency to $8.4 billion. 


Cloud license and on-premise license revenues were up 11 percent in USD and up 19 percent in constant currency to $0.9 billion. 


As with Microsoft’s cloud revenues, Oracle earns a mix of revenue from infrastructure services and application services. First quarter infrastructure as a service plus software as a service revenues were $3.6 billion, up 45 percent in USD, up 50 percent in constant currency. 


But most of that was SaaS. First quarter IaaS revenue was $0.9 billion, up 52 percent in USD, up 58 percent in constant currency. SaaS revenue was $2.7 billion, up 43 percent in USD, up 48 percent in constant currency. 
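A quick sanity check that the reported segments add up (figures in billions of U.S. dollars, as stated above):

```python
iaas = 0.9   # Q1 IaaS revenue, $ billions
saas = 2.7   # Q1 SaaS revenue, $ billions
print(round(iaas + saas, 1))  # 3.6 -> matches the combined IaaS-plus-SaaS figure
```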


Historically, Oracle cloud revenue has been concentrated among smaller firms. 


source: Enlyft 


That could change as Oracle pushes multi-cloud support.


Friday, September 9, 2022

Real Estate Often Drives Value Connectivity Service Providers Bring to the Edge

Many connectivity industry participants have been hoping that edge computing might create new roles for access providers--beyond connectivity--in the same way that app stores, data centers, cloud computing, content ownership or devices sometimes have been seen in that same light. 


It would be fair to say that success has been quite mixed. It might also be fair to say that “real estate” has emerged as a clear business model for data center operators, and might develop as a key driver of edge computing value for connectivity providers. 


At a functional level, data center real estate adds value by shifting compute and storage operations to remote locations, essentially substituting remote racks, cabinets and rooms for on-premises facilities. 


Computing services drive revenue but the real estate investment trust often drives the asset model. That already seems to be emerging for much of the edge computing opportunity for connectivity providers as well. 


“The world’s network operators have the most valuable real estate in the world,” says Dennis Hoffman, SVP and GM, Dell Technologies Telecom Systems Business. “At the end of the day, they own an awful lot of edge.” 


source: Xenonstack 


And it is the real estate (location) that is meant, and not the ability to provide connectivity. Perhaps it also is fair to note that “ownership” of the computing services and ownership of real estate already are bifurcating.


“Some large enterprises will likely build their own edge infrastructure, but most of the world’s businesses are going to be renting it from telcos,” Hoffman says. The implication is that enterprises will choose between their own infrastructure and reliance on somebody else’s facilities. 


But that already often is not the case. Enterprises buy their compute services from AWS or another cloud computing provider. Connectivity providers benefit largely as the suppliers of some amount of edge real estate (racks, cabinets, power, air conditioning, security). 


For the most part, it seems as though connectivity provider roles will often not be as the branded supplier of edge computing services but as the supplier of edge “data center” real estate. 


Wednesday, September 7, 2022

How Do We Count "Edge" Computing Growth?

Some new and emerging markets are very hard to quantify in terms of growth. “Edge computing,” for example, includes a range of market segments, from devices and sensors to apps, computers and servers to remote “computing as a service” purchases. 


And since many existing categories of products can plausibly also be considered to engage in edge computing, one has to separate functions. Content delivery networks are an existing form of edge computing. But so are personal computers and smartphones. 


We face similar problems trying to quantify the growth of the “internet of things.” Even restricting the definition to include only things that communicate over the internet, many “things” already do so: smartphones, PCs, gaming devices, TVs, printers and so forth. 


Presumably the big nearer-term growth of IoT comes from newer use cases such as industrial sensors. Eventually all sorts of “things” might connect using the internet. 


source: Techspot 


The issue with market forecasts for edge computing and IoT is that so much existing economic activity and sales are for products that have internet access functionality. By that standard the IoT market--just for devices--is substantial. In 2022 alone possibly 1.3 billion smartphone units will be shipped. 


Some forecasts even include instances of devices connected to any LAN and WAN as part of the IoT market. The methodological issue with that approach is that it simply takes a lot of existing markets and essentially rebrands them, often mixing such rebranded activity with incrementally new use cases. 


source: IoT Analytics


Edge computing is going to pose the same sorts of questions. On-device computing is probably not what most people think of when evaluating edge computing. Whether it makes sense to rebrand premises computing of any sort as “edge” computing also is an issue. 


By definition, enterprise computing facilities “on the premises” are a form of edge computing. Perhaps most would agree that edge computing centers on the use of more-distributed computing someplace within a metro area. 


But even there, cloud computing service suppliers are developing “edge computing” systems that can be deployed on any enterprise’s property. Such systems tend to run the same protocols as the remote hyperscale data centers provide, but execute code locally. 


How we account for such systems is a question any forecaster has to answer. What counts as “edge” computing?


Friday, September 2, 2022

Why Edge Computing is a Functional Substitute for Remote Computing

The cost of local versus remote computing often varies with volume. At low volumes, remote computing often is more affordable in terms of total cost of ownership. At high volumes, local computing often makes more sense. 
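A minimal sketch of that fixed-versus-variable cost crossover, with entirely hypothetical prices (integer cents keep the arithmetic exact):

```python
REMOTE_CENTS_PER_UNIT = 10     # hypothetical pay-as-you-go price per compute unit
LOCAL_CENTS_PER_UNIT = 2       # hypothetical marginal cost of owned hardware
LOCAL_FIXED_CENTS = 5_000_000  # hypothetical up-front outlay ($50,000)

def remote_cost(units: int) -> int:
    """Pure usage pricing: cost scales linearly with volume."""
    return units * REMOTE_CENTS_PER_UNIT

def local_cost(units: int) -> int:
    """Owned hardware: large fixed outlay, low marginal cost."""
    return LOCAL_FIXED_CENTS + units * LOCAL_CENTS_PER_UNIT

# Local ownership wins once the fixed outlay is amortized across volume:
crossover = LOCAL_FIXED_CENTS // (REMOTE_CENTS_PER_UNIT - LOCAL_CENTS_PER_UNIT)
print(crossover)  # 625000 units: below this, remote is cheaper; above, local
print(remote_cost(crossover) == local_cost(crossover))  # True
```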


This example of the electrical energy requirements for computing costs--done locally versus remotely--illustrates the principle. 


source: Researchgate 


The same sort of trade-off often exists for latency-dependent apps, where the allowed time to analyze data and then cause a system response is stringent. In such cases the trade-off is not financial but performance related: distant computing might simply be inadequate because of total network transmission delay between a local controller and a distant computing resource. 
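A rough sketch of why distance alone can disqualify remote computing: propagation delay in fiber, at the common rule-of-thumb figure of about 200,000 km/s (roughly two-thirds the speed of light in a vacuum), sets a hard floor on round-trip time no matter how fast the remote computer is.

```python
FIBER_KM_PER_S = 200_000.0  # rule-of-thumb propagation speed in optical fiber

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds, ignoring
    queuing, serialization and processing delays."""
    return 2 * distance_km / FIBER_KM_PER_S * 1000

print(round(round_trip_ms(10), 3))     # 0.1  -> a metro edge site
print(round(round_trip_ms(2_000), 3))  # 20.0 -> a distant hyperscale region
```

A control loop with a few milliseconds of budget can tolerate the metro edge site but not the distant region, before queuing and processing delays are even counted.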


The point is that, to a large extent, communications and local computing are essentially substitutes for each other in a computing solution. Communications enables remote computing (hyperscale cloud computing, for example) and now also some forms of edge computing. 


You can see that in this chart from IBM showing various computing functions that can be virtualized through the use of remote resources. 

source: IBM 


In principle, such trade offs are embedded in choices made between network slicing and private networks as well; or wide area private networks and local computing; or edge computing versus remote hyperscale facilities use. 


The point is that the physical siting of computing resources is a choice: communications networks can support remote computing as a way to lower computational cost, when app performance requirements do not dictate otherwise.  


That contributes to the range of drivers transforming traditional “telco” networks into “data networks” and making communications a part of the computing value chain, just as communications networks now are part of the internet ecosystem.