Wednesday, November 27, 2019

AT&T, Microsoft Trialing Edge Computing

AT&T’s software-defined and virtualized 5G core now can deliver Microsoft Azure services, and AT&T is making the capability available to select customers in Dallas. Next year, Los Angeles and Atlanta are targeted for select customer availability.

The move is part of AT&T’s pursuit of edge computing opportunities. Through AT&T Foundry, AT&T and Microsoft are exploring proofs-of-concept including augmented and virtual reality scenarios and drones. 

For example, both companies are working with Israeli startup Vorpal, helping its VigilAir product track drones in commercial zones, airports, and other areas with near-instant positioning. 

The companies also recently demonstrated the use of Microsoft HoloLens to provide 3D schematic overlays for technicians making repairs to airplanes and other industrial equipment.

5G Embedded and Edge Solutions

Bryan Jones, Dell Technologies SVP & GM, OEM | Embedded & Edge Solutions

Monday, November 25, 2019

From Pipes to Platforms?

At least some mobile service providers hold out hope that edge computing could eventually become a platform for connectivity service providers, generating new revenues beyond data processing services, as helpful as those would be.

In fact, the ability to generate revenue from acting as an intermediary or marketplace for different sets of market participants is the functional definition of whether some entity is a platform, or not. 

That can be glimpsed in service provider video subscription businesses, where revenue is earned directly from subscribers, but also from advertisers and in some cases from content suppliers. It is the sort of thing eBay must do, daily, in a more direct way. 

But it seems logical to predict that few such platform opportunities will emerge early and directly. Instead, the more likely path is that some initial direct product sold to one type of customer becomes the foundation for creation of the marketplace or platform.

The first million people who bought VCRs bought them before there were any movies available to watch on them. That might strike you as curious, akin to buying a TV when there are no programs being broadcast. 

In fact, though commercialized around 1977, the VCR was not firmly established as lawful to sell until 1984, when the U.S. Supreme Court ruled that Sony could sell VCRs without violating copyright law, despite Hollywood studios’ claims to the contrary.

So what were those people doing with their VCRs? Taping shows to watch later. Time shifting, we now call it. Only later, after Blockbuster Video was founded in 1985, did video rentals become a mass market phenomenon. 

So here is the point: quite often, a new market is started one way, and then, after some scale is obtained, can develop into a different business model and use case. 

Once there were millions of VCR owners, and content owners lost their fear of cannibalizing their main revenue stream (movie theater tickets), it became worthwhile for Hollywood to start selling and renting movies to watch on them. 

Eventually watching rented movies became the dominant use of VCRs, and time shifting a relatively niche use. 

That strategy might be called stand-alone use, creating a new market by directly satisfying a customer need, before a different two-sided or multi-sided market can be created, where at least two distinct sets of participants must be brought together, at the same time, for the market to exist. Virtually any online marketplace is such a case. 

Others might call it single-player. OpenTable, which today has a marketplace revenue model, originally provided only a reservation system to restaurants, operating in a single-sided mode, before it could create a two-sided model in which restaurants pay for the reservations booked by consumers.

In other words, OpenTable, which now operates a two-sided marketplace connecting restaurants and diners, started out selling reservation systems to restaurants before it could act as a marketplace for both diners and restaurants.

Whether that also will be true for some internet of things platforms is unclear, but it seems likely, even for single-sided parts of the ecosystem.

The value of any IoT deployment will be high when there is a robust supply of sensors, apps, devices and platforms. But without many customers, the supply of those things will be slow to grow, even in the simpler single-player markets. Just as likely, though, is the transformation of at least some of the single-player revenue models to two-sided marketplaces. 

In other words, a chicken-and-egg problem will be solved by launching one way, then transitioning to another, more complicated two-sided model requiring scale and mutual value for at least two different sets of participants. In a broad sense, think of any two-sided market as one that earns revenue by creating value for multiple sets of participants.

Amazon makes money from product sellers and buyers, while at the same time also earning revenue from advertisers and cloud computing customers. 

Telcos have faced this problem before. 

Back in the 1870s and 1880s, when the first telephone networks were created, suppliers faced a severe sales problem. The value of the network depended on how many other people a customer could call, and that number was quite small. A communications service has a network effect: it becomes more valuable as the number of users grows.

These days, that generally is no longer the case. The number of people, accounts and devices connected to the networks is so large that the introduction of a new network platform does not actually face a network effect problem. The same people, devices and accounts that were connected on the older platform retain connectivity while the new platform is built.

There are temporary supply issues as the physical facilities are built and activated, but no real chicken and egg problem. 


The point is that at least some internet of things or other new services ventures attempted by telcos will eventually require the building of a marketplace of some sort providing value to multiple sets of participants. 

The video entertainment business already provides an example, where service providers earn direct subscription fees from viewers, but also advertising revenues from third parties. 

In some cases, where content subscription providers also own content assets, they may earn revenue from content licensing to other third party distributors as well. 

It remains to be seen whether some connectivity providers also will be able to create multi-sided markets for internet of things or other new industries. There are potential opportunities around edge computing, for example.

The initial value might simply be edge data center functions. Later, other opportunities could arise around the use of edge computing, the access networks, customer bases and app providers. It would not be easy; it rarely is. But selling edge computing cycles to customers who simply want them could create a foundation for other revenue streams as well.

Friday, November 22, 2019

Some Apps Moving Back from Cloud to Premises

While applications with unpredictable usage may be best suited to the public clouds offering elastic resources, workloads with more predictable characteristics can often run on-premises at a lower cost than public cloud, a study by Nutanix suggests.
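The intuition is easy to illustrate with a toy cost model. The sketch below compares a steady, always-busy workload with a bursty one that needs the same peak capacity only part of the time; all prices, utilization figures and amortization periods are illustrative assumptions, not figures from the Nutanix study.

```python
# Toy comparison of on-premises vs. public cloud cost for two workload shapes.
# All prices and utilization figures are illustrative assumptions, not data
# from the Nutanix study.

HOURS_PER_MONTH = 730

def cloud_monthly_cost(vm_hourly_rate, vms_needed, utilization):
    """On-demand cloud: pay only for the hours actually used."""
    return vm_hourly_rate * vms_needed * HOURS_PER_MONTH * utilization

def on_prem_monthly_cost(server_capex, amortization_months, monthly_opex, servers):
    """On-premises: capex amortized over the hardware's life, plus fixed opex."""
    return servers * (server_capex / amortization_months + monthly_opex)

# Predictable workload: runs flat-out around the clock.
steady_cloud = cloud_monthly_cost(vm_hourly_rate=0.60, vms_needed=10, utilization=1.0)
steady_prem = on_prem_monthly_cost(server_capex=8000, amortization_months=36,
                                   monthly_opex=120, servers=10)

# Bursty workload: the same peak capacity, but busy only 15 percent of the time.
bursty_cloud = cloud_monthly_cost(vm_hourly_rate=0.60, vms_needed=10, utilization=0.15)
bursty_prem = steady_prem  # on-prem capacity must be sized for the peak regardless

print(f"Steady workload: cloud ${steady_cloud:,.0f}/mo vs. on-prem ${steady_prem:,.0f}/mo")
print(f"Bursty workload: cloud ${bursty_cloud:,.0f}/mo vs. on-prem ${bursty_prem:,.0f}/mo")
```

With these assumed numbers the steady workload is cheaper on premises, while the bursty workload is far cheaper in the cloud; the crossover point obviously shifts with real prices and utilization levels.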

As a result, many organizations plan to move at least some applications off the public cloud and back on premises. Some 73 percent of respondents reported they are doing so.

While 37 percent of enterprise workloads are running in some type of cloud today, some applications seem candidates for moves back in house:
* desktop and application virtualization
* customer relationship management
* enterprise resource planning
* data analytics and business intelligence
* databases
* development and testing
* data backup and recovery

About 22 percent of those users are moving five or more applications back in house. 

Savings are also dependent on businesses’ ability to match each application to the appropriate cloud service and pricing tier. Since plans and fees change frequently, enterprises and organizations must “remain diligent about regularly reviewing service plans and fees.”

That also suggests the importance of a multi-cloud strategy, to avoid lock-in. Fully 95 percent of survey respondents suggested it is essential or desirable to be able to easily move applications between cloud environments.

Security remains the biggest single issue related to further cloud decisions. Some 60 percent of respondents said that the state of security among clouds would have the biggest influence on their plans. 

Also, data security and compliance were listed by 26 percent of respondents as the top concerns for deciding where an enterprise runs a given workload.

The survey also suggests the shift to at least some use of cloud computing continues. About 24 percent of respondents are not using cloud computing today. In perhaps a year, the number of enterprises with no cloud deployments is expected to fall to seven percent.


In two years, the “no cloud” percentage might drop to three percent.

IT Pros Say Edge Computing is Biggest Technology Driver of Thinking on Cloud

Though 72 percent of surveyed information technology professionals report digital transformation is the business trend having the biggest impact on their cloud deployments, edge computing and internet of things are said by 50 percent to be the biggest technology trends affecting cloud computing strategy.


As you might expect, the influences driving cloud strategy, beyond security, employee skill sets, regulations and application portability, include the need to support low-latency use cases: evaluating data quickly, before the value of the insights is gone, and evaluating more data at the edge.



Monday, November 18, 2019

Data Gravity and Zero Gravity

Gravity and dark energy might now be analogies for information technology forces: one concentrating and attracting, the other repelling and distributing.

Data gravity is the notion that data becomes harder to move as its scale and volume increase. The implication is that processing and apps move to where the data resides.

As the law of gravity holds that the attraction between objects is proportional to their mass, so big data is said to attract applications and services.

But we might be overstating the data gravity argument, as perhaps it is the availability of affordable processing or storage at scale that attracts the data, not vice versa. And just as gravity concentrates matter in the known universe, dark energy is seen as pushing the universe to expand.

At the same time, scale and performance requirements seem also to be migrating closer to the places where apps are used, at the edge, either for reasons of end user experience (performance), the cost of moving data, security and governance reasons, the cost of processing or scope effects (ability to wring more value from the same data). 

Some might call this a form of “anti-gravity” or “dark energy” or “zero gravity” at work, where processing happens not only at remote big data centers but also importantly locally, on the device, on the premises, in the metro area, distributing data stores and processing. 

"Gartner predicts that by 2022, 60 percent of enterprise IT infrastructures will focus on centers of data, rather than traditional data centers,” Digital Reality says. 

It remains to be seen how computing architecture evolves. In principle, either data gravity or zero gravity could prevail. In fact, some of us might argue data gravity is counterbalanced by the likely emergence of edge computing as a key trend.


Zero gravity might be said to be a scenario where processing happens so efficiently wherever it is needed that gravitational pull, no matter what causes it, is nearly zero. In other words, processing and storage grow everywhere, at the same time, at the edge and center. 

A better way of imagining the architecture might be local versus remote, as, in principle, even a hyperscale data center sits at the network edge. We seem to be heading back towards a balance of remote and local, centralized and decentralized. Big data arguably pushes to decentralization while micro or localized functions tend to create demand for edge and local processing. 

Sunday, November 17, 2019

Digital Realty Launches PlatformDigital

Digital Realty wants to be a platform for enterprise data infrastructure that builds on data gravity at the same time as data storage and processing are intensifying and moving to the edge of the network. 

Data gravity is the notion that data becomes harder to move as its scale and volume increase. The implication is that processing and apps move to where the data resides. As the law of gravity holds that the attraction between objects is proportional to their mass, so big data is said to attract applications and services.

But we might be overstating the data gravity argument, as perhaps it is the availability of affordable processing at scale that attracts the data. 

At the same time, scale and performance requirements seem also to be migrating closer to the places where apps are used, at the edge, either for reasons of end user experience (performance), the cost of moving data, security and governance reasons, the cost of processing or scope effects (ability to wring more value from the same data). 

Digital Realty’s new PlatformDigital architecture essentially tries to ride both trends, data gravity that centralizes and also performance and cost issues that push processing to the edge. 

"Gartner predicts that by 2022, 60 percent of enterprise IT infrastructures will focus on centers of data, rather than traditional data centers,” Digital Reality says. 

It remains to be seen how computing architecture evolves. In principle, either data gravity or zero gravity could prevail. In fact, some of us might argue data gravity is counterbalanced by the likely emergence of edge computing as a key trend.

Zero gravity might be said to be a scenario where processing happens so efficiently wherever it is needed that gravitational pull, no matter what causes it, is nearly zero. In other words, processing and storage grow everywhere, at the same time, at the edge and center. 

A better way of imagining the architecture might be local versus remote, as, in principle, even a hyperscale data center sits at the network edge. We seem to be heading back towards a balance of remote and local, centralized and decentralized. Big data arguably pushes to decentralization while micro or localized functions tend to create demand for edge and local processing.

Many Other Uses for Edge, Beyond Latency Performance

New technologies and platforms sometimes wind up being used in ways not originally intended. Common Channel Signaling System 7 was developed as an internal network control platform for telecom networks, allowing networks to communicate with each other for purposes including roaming and billing. But SS7 also allowed mobile operators to create short messaging service (texting), which was the first ubiquitous data service sold to consumers. 

In 2007, for example, very few predicted that email would be a major driver of 3G and smartphones such as the Research in Motion BlackBerry. Instead, attention was focused on video calling and entertainment video.

In the 4G era, perhaps few expected that turn-by-turn directions or the camera function would turn out to be so popular. 

So it might be with edge computing, typically thought of as uniquely supporting ultra-low-latency use cases. Some industry professionals believe edge computing will be important for optimizing network traffic as much as improving app performance. 



Thursday, November 14, 2019

U.K. IT Pros Focusing on Security, Remote Workers, not AI, IoT

Most organizations spend most of their time solving key problems related to their core business processes and objectives, and little effort on "preparing for the future."

U.K. information technology firm Softcat finds this to be true of its customers. This year’s annual study by Softcat included 1,600 customers across 18 different industries. The findings might not surprise you. It seems the top priorities are eminently practical.


Some 83 percent of industries rank cybersecurity as their biggest technology priority. 

But supporting remote workers also is a top priority. More than 66 percent of global employees now work remotely, so on-demand access to secure and optimised data is now a business necessity. This is reflected in the report with 56 percent of industries ranking end user computing and mobility as their second biggest technology priority.

The construction industry, education and healthcare rank end user computing as their number one priority, ahead of cyber security investment.

Investment in the data center and cloud is ranked third overall, highlighting hybrid cloud.

“Surprisingly, emerging technologies are the second lowest tech priority for the third year running, despite the hype surrounding the areas where the UK has the potential to be a global leader,” Softcat says. 

That low ranking of the advanced technologies might not surprise you. Information technology priorities generally must be focused on the most-immediate problems. 

Business continuity and security nearly always rank as "most important" challenges, when IT professionals are asked to rank issues. "Preparing for the future" often ranks much lower, if not lowest, as CDW suggests.

Wednesday, November 13, 2019

U.S. Streaming Video Users Consume 520 GB Per Month on Fixed Networks

OpenVault now reports that U.S. consumers of streaming video use more than half a terabyte of data per month. Average consumption by cord-cutter subscribers was 520.8 GB, an increase of seven percent in the third quarter of 2019 alone.

Average U.S. subscriber usage on fixed networks was about 275.1 GB in the third quarter, a year-over-year increase of 21 percent.

During the same period, the median monthly weighted average usage increased nearly 25 percent from 118.2 GB to 147.4 GB, indicating that consumption is increasing across the market as a whole.
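For a sense of scale, those monthly totals translate into fairly modest average sustained bit rates. The quick conversion below is a back-of-the-envelope calculation based only on the OpenVault figures cited above; the 30-day month is an assumption.

```python
# Back-of-the-envelope: convert monthly data consumption into an implied
# average sustained bit rate. Usage figures are from the OpenVault report
# cited above; the 30-day month is an assumption.

SECONDS_PER_MONTH = 30 * 24 * 3600  # roughly 2.59 million seconds

def avg_mbps(gb_per_month):
    """Average sustained throughput implied by a monthly usage total."""
    bits = gb_per_month * 8e9          # decimal gigabytes to bits
    return bits / SECONDS_PER_MONTH / 1e6

print(f"Cord cutters (520.8 GB/mo): {avg_mbps(520.8):.2f} Mbps average")
print(f"All subscribers (275.1 GB/mo): {avg_mbps(275.1):.2f} Mbps average")
```

Half a terabyte a month, in other words, works out to an average of well under 2 Mbps per account, even though peak-hour demand is far higher.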

Some believe increased usage is a business opportunity for retail internet access providers. Others are not so sure. Usage once mattered greatly for telecom service provider revenues, as most revenue (and most of the profit) was generated by products billed “by the unit.” 

Capital investment was fairly easy to model, and profits from incremental investment likewise were easy to predict.

All that has changed: as all observers will acknowledge, usage (data consumption) of communications networks is not related in a linear way to revenue or profit. And that fact has huge implications for business models, as virtually all communication networks are recast as video transport and delivery networks, whatever other media types are carried.

Video will represent something on the order of 75 percent of total mobile network traffic in 2021, Cisco predicts. Globally, IP video traffic will be 82 percent of all consumer internet traffic by 2021, up from 73 percent in 2016, Cisco also says.

The basic problem is that entertainment video generates the absolute lowest revenue per bit, and entertainment video will dominate usage on all networks. Conversely, while narrowband services generate the highest revenue per bit, the “value” of those services, expressed as retail price per bit, keeps falling, and usage is declining in mature markets.

Some even argue that cost per bit exceeds revenue per bit, a long term recipe for failure. That has been cited as a key problem for emerging market mobile providers, where retail prices per megabyte must be quite low, requiring cost per bit levels of perhaps 0.01 cents per megabyte.
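A rough calculation shows why the revenue-per-bit gap is so large. The figures below are round, illustrative assumptions rather than carrier data, but the orders of magnitude are the point: a text message can earn hundreds of dollars per megabyte, while subscription video earns a small fraction of a cent.

```python
# Rough revenue-per-megabyte comparison across service types.
# Prices and data volumes are illustrative assumptions, not carrier figures.

def revenue_per_mb(revenue_usd, megabytes):
    """Revenue earned per megabyte of data delivered."""
    return revenue_usd / megabytes

services = {
    # service: (revenue earned in USD, megabytes delivered)
    "SMS message":    (0.10, 140 / 1e6),   # ~140 bytes per message
    "Voice minute":   (0.10, 0.5),         # ~64 kbps codec for 60 seconds
    "Streamed video": (10.00, 100_000),    # $10/mo subscription, 100 GB viewed
}

for name, (revenue, mb) in services.items():
    print(f"{name:>14}: ${revenue_per_mb(revenue, mb):,.6f} per MB")
```

Under those assumptions, messaging earns on the order of $700 per megabyte, voice about 20 cents, and streamed video about 0.01 cents, the same order of magnitude as the cost-per-bit target cited above.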

Of course, we have to avoid thinking in a linear way. Better technology, new architectures, huge new increases in mobile spectrum, shared spectrum, dynamic use of licensed spectrum and unlicensed spectrum all will change revenue per bit metrics.

Yet others argue that revenue per application now is what counts, not revenue per bit or cost per bit. In other words, as with products sold in a grocery store, each particular product might have a different profit margin, and what really matters is overall sales and overall profit, not the profit on any specific product.

So the basic business problem for network service providers is that their networks now must be built to support low revenue per bit services. That has key implications for the amount of capital that can be spent on networks to support the expected number of customers, average revenue per account and the amount of stranded assets.

Not many who were in the communications business 50 years ago would have believed that would be the case, or how dramatic the change would be.

Groups Launch 5G-DIVE for Factory Automation and Drone Use Cases

InterDigital Europe, Ericsson AB, Telefonica, ADLINK, ASKEY Computer Corp., FarEasTone, Industrial Technology Research Institute, Institute for Information Industry, Universidad Carlos III de Madrid, RISE-SICS Swedish ICT AB, National Chiao Tung University, and Telcaria Ideas are working on a research project called 5G-DIVE, aiming to develop intelligent edge solutions for unmanned aerial vehicles and industrial automation. 

“The consortium builds on the success of the H2020 EU-Taiwan Phase 1 5G-CORAL project and adds new layers of distributed data analytics and intelligence over an end-to-end 5G system deployment tailored to each vertical,” says InterDigital. 

The 24-month project, which began October 1, 2019, received a total of 4.3M€ in funding – 2M€ from the European Commission and 2.3M€ from the Ministry of Economic Affairs in Taiwan.

InterDigital Europe will act as the technical manager and leader of the working group.

Tuesday, November 5, 2019

Latency Now is Just One Value of Edge Computing

Time, money, safety, workloads, volume or privacy all can be reasons enterprises and organizations gain value from edge computing. Staff workload reduction, as always, is a key driver. There is little point in forcing operations staff to spend lots of time monitoring spurious events, routine events or normal conditions. 

As sensor and transaction data increase, it often will be the case that much of the data is low value, or close to no value, while a small number of instances are really important. In other words, it is efficient and useful to focus operations attention only on the events that drive action on the part of the operations staff. 

One example is variance, where significant implications and action might flow from any reading outside the nominal or intended range. Data showing that conditions are what they are supposed to be is helpful, but it is not data that needs to be transmitted. Instead, it often makes more sense to send only reports of variances from nominal. 

So one instance when edge computing is useful is as a means of locally processing irrelevant or low-value data, culling it and sending only the “something has changed and conditions now are out of tolerance” messages. 
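A minimal sketch of that exception-reporting pattern is shown below, assuming a hypothetical temperature sensor feed, an operator-defined tolerance band and a placeholder uplink function; none of the names refer to a particular product or API.

```python
# Minimal edge-filtering sketch: inspect sensor readings locally and forward
# only out-of-tolerance readings upstream. The tolerance band, the sample
# data and the send_upstream callable are all hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    value: float

NOMINAL_LOW, NOMINAL_HIGH = 18.0, 27.0   # tolerance band, e.g. degrees Celsius

def is_exception(reading: Reading) -> bool:
    """True when the value falls outside the nominal range."""
    return not (NOMINAL_LOW <= reading.value <= NOMINAL_HIGH)

def process_locally(readings, send_upstream):
    """Cull in-tolerance data at the edge; forward only the exceptions."""
    forwarded = 0
    for reading in readings:
        if is_exception(reading):
            send_upstream(reading)
            forwarded += 1
    return forwarded

# Example: three in-range readings and one variance worth reporting.
sample = [Reading("temp-01", 21.5), Reading("temp-01", 22.0),
          Reading("temp-02", 35.2), Reading("temp-02", 23.1)]
sent = process_locally(sample, send_upstream=lambda r: print("ALERT:", r))
print(f"Forwarded {sent} of {len(sample)} readings")
```

Only the single out-of-range reading crosses the wide area network; the rest of the data stays, and can be summarized or discarded, at the edge.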

Ultra-low latency often is cited as the unique capability edge computing provides for applications that require extremely low detection and response time. When delay times are significant enough to cause danger or inefficiencies in command, control or actionable intelligence needs, edge computing makes sense. The classic cases are when human life and safety require split-second response, as when people are in autonomous vehicles or surgery is being performed. 

In other cases, bandwidth costs are the rationale. When huge amounts of data are generated, but much of the data is routine, it might not make sense to send all the raw data over the wide area network. Instead, data might be processed locally, sending only information about exceptions to distant data centers for action or review. That might frequently be the case for video surveillance footage or heavy sensor data at a factory, for example. 

In other cases, the device or premises edge might be advantageous if a function, at a particular location, cannot be interrupted by temporary loss of network connectivity, or where interruptions are expected.