Tuesday, May 28, 2019

Nvidia Announces AI Platform for Edge Computing

Nvidia has announced Nvidia EGX, a computing platform for low-latency artificial intelligence at the edge. EGX starts with the tiny Nvidia Jetson Nano™, which can deliver one-half trillion operations per second (TOPS) of processing in just a few watts, for tasks such as image recognition.

EGX combines the full range of Nvidia AI computing technologies with Red Hat OpenShift and Nvidia Edge Stack, together with Mellanox and Cisco security, networking and storage technologies.

That allows telecom, manufacturing, retail, healthcare and transportation enterprises to quickly stand up state-of-the-art, secure, enterprise-grade AI infrastructures, Nvidia says.

The solution uses Nvidia T4 servers, delivering more than 10,000 TOPS for real-time speech recognition and other real-time AI tasks, the company says.

The solution also uses Red Hat to integrate and optimize Nvidia Edge Stack with OpenShift, the leading enterprise-grade Kubernetes container orchestration platform.

Nvidia Edge Stack is optimized software that includes Nvidia drivers, a CUDA Kubernetes plugin, a CUDA container runtime, CUDA-X libraries and containerized AI frameworks and applications, including TensorRT, TensorRT Inference Server and DeepStream.

Nvidia Edge Stack is optimized for certified servers and downloadable from the Nvidia NGC registry.
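As a minimal sketch of working with one of those containerized components, the snippet below checks whether a running TensorRT Inference Server instance is live before sending it inference requests. The host and port are assumptions for illustration, not deployment details from the announcement.

import requests

# Assumed deployment details: localhost and port 8000 are placeholders, not
# part of the Nvidia announcement; TensorRT Inference Server exposes HTTP
# health routes that a client can poll.
SERVER_URL = "http://localhost:8000"

def server_is_live(base_url: str = SERVER_URL) -> bool:
    """Return True if the inference server reports itself as live."""
    try:
        resp = requests.get(f"{base_url}/api/health/live", timeout=2)
        return resp.status_code == 200
    except requests.RequestException:
        return False

if __name__ == "__main__":
    print("Inference server live:", server_is_live())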

Friday, May 24, 2019

IoT Location Information Without Using GPS



Polte Corporation, a supplier of cloud-based location information for mobile networks, has launched a commercial beta of its mobile IoT location platform, allowing IoT devices to report their location without using GPS or other GNSS radios. That, in turn, means lower-cost IoT devices and longer battery life.

Also, Polte’s Strategic Partner Program allows early adopters to get a head start on integrating cloud-based device location into their products and solutions.

SPP members will have access to Polte-enabled mobile IoT devices, direct engineering support, plus the Polte Cloud API.

Polte’s location solution offers highly accurate tracking indoors, even when objects are inside vehicles or in shipping containers.
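To illustrate the cloud-side pattern described here, a hypothetical sketch of how a backend might fetch a device's computed position from a cloud location service follows. The endpoint, parameters and response fields below are invented for illustration and are not the actual Polte Cloud API.

import requests

# Hypothetical endpoint and fields, shown only to illustrate the cloud-location
# pattern described above; this is not the actual Polte Cloud API.
API_BASE = "https://api.example-location-cloud.com/v1"
API_TOKEN = "YOUR_API_TOKEN"  # placeholder credential

def get_device_location(device_id: str) -> dict:
    """Fetch the most recently computed location for a device."""
    resp = requests.get(
        f"{API_BASE}/devices/{device_id}/location",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"lat": ..., "lon": ..., "accuracy_m": ...}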

Wednesday, May 22, 2019

Lattice Upgrades AI Solution

Lattice Semiconductor Corporation, a supplier of low power programmable logic devices, announced performance and design flow enhancements for its Lattice sensAI solutions stack.

The Lattice sensAI stack provides a comprehensive hardware and software solution for implementing low power (1mW-1W), always-on artificial intelligence (AI) functionality in smart devices operating at the Edge.

According to the company, new enhancements to the Lattice sensAI solution stack include:

  • A 10-times performance increase
  • Expanded neural network and ML framework support, including Keras (a minimal model sketch follows this list)
  • Support for quantization and fraction-setting schemes in neural network training, eliminating iterative post-processing
  • Simple neural network debugging via USB
  • New customizable reference designs that accelerate time to market for popular use cases such as object counting and presence detection
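As a rough illustration of the kind of small, always-on model such a stack targets, here is a minimal Keras definition of a tiny CNN for a presence-detection-style task. The input shape and layer sizes are arbitrary assumptions; this is not Lattice's reference design or training flow.

import tensorflow as tf
from tensorflow.keras import layers, models

# Arbitrary, illustrative model: a very small CNN of the sort low-power,
# always-on edge inference targets; not the Lattice sensAI reference design.
def build_presence_detector(input_shape=(64, 64, 1)):
    model = models.Sequential([
        layers.Conv2D(8, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(16, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # presence / no presence
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    build_presence_detector().summary()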

Arrow to Supply IoT Device Support Globally

Arrow Electronics announced a global agreement with Infineon and Arkessa, supplying OEMs, system integrators and enterprises with connectivity for their internet of things devices anywhere in the world.

Infineon supplies security, while Arkessa provides network access and provisioning.

Infineon provides the secured hardware controllers based on GSMA’s Embedded Subscriber Identity Module (eSIM) specification that underpin the new service. Arkessa provides secured mobile data services with the ability to provision and manage IoT devices from the factory into the field easily and effectively, supporting mobile, NB-IoT and LTE-M services.

Infineon’s SLM family of security controllers is optimized for industrial applications, providing higher levels of endurance over an extended temperature range of -40°C to +105°C.

Tuesday, May 21, 2019

Home and Industrial Automation Are the Top Focus Areas for IoT Developers

A survey of internet of things developers--66 percent of whom reported they are working on IoT projects now, or will be within 18 months--finds industrial and home automation to be the areas where most are working.


AWS, Azure, and GCP are the leading IoT cloud platforms. Also, IoT developers mostly use C, C++, Java, JavaScript, and Python.

MQTT is the dominant communication protocol leveraged by developers. At this point, most of the local network connections use Wi-Fi or Ethernet at the physical layer.
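For context, here is a minimal sketch of how an IoT device might publish a telemetry reading over MQTT using Python's paho-mqtt client. The broker address and topic are placeholder assumptions, not details from the survey.

import json
import paho.mqtt.client as mqtt

# Placeholder broker and topic, for illustration only.
BROKER = "broker.example.com"
TOPIC = "sensors/device42/temperature"

client = mqtt.Client(client_id="device42")
client.connect(BROKER, 1883, keepalive=60)

# Publish a JSON-encoded reading; QoS 1 asks the broker to acknowledge delivery.
payload = json.dumps({"temperature_c": 21.7})
client.publish(TOPIC, payload, qos=1)
client.disconnect()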



Edge Computing in India



India almost inevitably will eventually become one of the world’s largest edge computing markets, if edge computing develops as expected. Prasanna Sarambale, CEO, Data Center Business and Group Head - Business Development at Sterling and Wilson, discusses how edge computing is shaping the company, and how it deploys and manages modular data centers.

Will Response Time Become a Bigger Issue than Latency?

Latency always has been an issue for users of the internet, but response time arguably is becoming the bigger problem. In a strict sense, “latency” refers to transmission network issues.

Response time is the amount of time a computing system takes to react to a request, once it has received one. The time between invoking an application programming interface and the instant this API returns the result of its computation is an example of response time.
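As a rough illustration, a minimal Python sketch that times a request to a placeholder API endpoint follows. Note that the measured interval includes both round-trip network latency and the server's processing time, which is exactly why the two are easy to conflate.

import time
import requests

URL = "https://api.example.com/compute"  # placeholder endpoint

start = time.perf_counter()
resp = requests.get(URL, timeout=10)
elapsed_ms = (time.perf_counter() - start) * 1000

# The elapsed time combines network latency with the server's response time
# (the processing the far end does before it answers).
print(f"Status {resp.status_code}, total time {elapsed_ms:.1f} ms")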

In the era of cloud computing, when the servers are located remotely, issues such as broker service policy, load balancing technique and scheduling algorithm affect cloud data center response time. But there are multiple sources of potential response time lag.

A study by Decibel suggests latency and response time still are issues. Conversely, web pages that load fast are at the top of the list of attributes of a website that contribute to a positive user experience.

Over time, the amount of network-induced latency has been dropping, for a number of reasons, including use of content delivery networks that cache content at the network edge. Optical wide area networks these days also minimize latency, and each mobile network generation has featured lower latency than the one before it.

The 5G network will essentially eliminate access network latency as an issue for user experience. But that only shifts the latency problem to application server response, which is why edge computing, with its promise of lower latency, now is becoming an issue.

As network congestion and bandwidth become less problematic, end user experienced latency increasingly becomes a matter of far-end application server response time.

In the past, propagation delay, or delay caused by transmission networks, arguably has been a chief cause of user-experienced delay.

A basic rule of thumb for web page sessions has been that 0.1 second is about the limit for having the user feel that the system is reacting instantaneously.

Latency of about one second is about the limit for the user's flow of thought to stay uninterrupted, even though the user will notice the delay.

Some argue that 10 seconds is about the limit for keeping the user's attention focused on the session.

But it is gaming and other coming applications where latency really gets tested. Though the solution to user experience issues often includes an increase in delivered bandwidth and a reduction in transmission congestion, that does not solve all latency issues.

Observers often focus on network latency, or ping time, the time it takes in milliseconds for your network to connect to the internet host and start uploading or downloading data. That matters, of course.


Most 4G LTE connections have featured an average latency of less than 70 milliseconds, which is far below the recommended maximum latency of 150 milliseconds for Xbox Live.

For many use cases, the time it takes a server to respond now becomes the chief latency culprit, however.

According to a study by Ashraf Zia and M.N.A. Khan of the Department of Computing, Shaheed Zulfikar Ali Bhutto Institute of Science & Technology, Islamabad, Pakistan, when the user base and data centers are located in the same region, the average overall response time is 50.35 milliseconds.

When the user base and data centers are located in different regions, the response time increases significantly, to an average of 401.72 milliseconds. That study probably reflects transmission latency more than cloud data center performance, however.

Still, one advantage of edge computing is that it removes most of the indeterminacy of transport across wide area networks.

Wednesday, May 15, 2019

Ericsson's Edge Gravity Pivots from CDN



Partners include Limelight Networks and Equinix. 

Many Value Drivers for IoT Edge Computing

At least one survey suggests edge computing already is used by 44 percent of enterprises that already use the internet of things. Latency reduction appears to be only one of four major values obtained by edge computing. More responsive customer service, lower data transmission costs and lower computing costs all seem to be nearly equal drivers of value, according to survey data generated by Strategy Analytics.

Strategy Analytics believes that data will be processed (in some form) by edge computing in 59 percent of IoT deployments by 2025.


Monday, May 13, 2019

Edge Computing is Not Really New

It has to be said: edge computing is not really new. In a real sense, nearly all computing happens at the edge: it just might not be your local edge. About the only computing that happens “in the core” are operations conducted by transmission networks to deliver bits from one place to another.

Still, we normally analyze the latest eras of computing as using centralization versus decentralization models. The mainframe era was centralized. The client-server era was more decentralized. The internet era has been centralized. Now, with the emerging internet of things era, we seem to be moving back towards decentralization, in part.

The drivers for edge computing utility tend to center on internet of things use cases, where data must be analyzed really fast (requiring ultra-low latency), or where the sheer volume of data (full motion, very-high definition video) makes transport costs to remote computing centers a cost issue.

We might ultimately even find that many of the latency or transport cost dependent use cases can be handled by metro-level computing, not strictly “at the edge” or at remote hyperscale data centers.

Saturday, May 11, 2019

90% of Edge Data is Ephemeral

At least in principle, perhaps 10 percent of data generated by internet of things sensors (consumer and enterprise) is useful for analysis (perhaps 90 percent being ephemeral and not producing useful insights long term).

Cisco estimates that nearly 850 zettabytes (ZB) of data will be generated by all people, machines, and things by 2021, up from 220 ZB generated in 2016.

Most of the more than 850 ZB that will be generated by 2021 will be ephemeral in nature and will be neither saved nor stored.

Cisco estimates that 85 ZB of that data will be stored or used in 2021. Useful data also exceeds data center traffic (21 ZB per year) by a factor of four. That is why some believe edge computing (on the device, on the premises, at the network edge) will be useful and needed.
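A quick back-of-the-envelope check of those ratios, using the Cisco figures quoted above (the figures themselves are estimates, not measurements):

# Cisco estimates quoted above (zettabytes per year, for 2021)
generated_zb = 850   # data created by people, machines and things
useful_zb = 85       # data actually stored or used
dc_traffic_zb = 21   # annual data center traffic

print(f"Share of generated data that is useful: {useful_zb / generated_zb:.0%}")  # ~10%
print(f"Useful data vs. data center traffic: {useful_zb / dc_traffic_zb:.1f}x")   # ~4x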



Friday, May 10, 2019

Where are IoT Picks and Shovels?

One gets asked, from time to time, where the internet of things “picks and shovels” (underlying technologies) can be found. As this chart by Frost and Sullivan analysts might suggest, the platform picks and shovels cover most of the current mix of computing and communications firms. That wide diversity also partly explains why estimates of IoT sales vary so widely.


According to IDC, worldwide spending on the Internet of Things reached $772.5 billion in 2018, up 15 percent from the $674 billion that was spent on IoT in 2017. By other estimates, spending on IoT endpoints alone already is in the $2 trillion range.

For the moment, consider only enterprise IoT.

According to IDC, by the end of 2020, close to half of new IoT applications built by enterprises “will leverage an IoT platform that offers outcome-focused functionality based on comprehensive analytics capabilities.”

The global IoT market will grow to $457 billion by 2020, at a compound annual growth rate of 28.5 percent, by some estimates, but is quite a bit larger if consumer wearables are included. In that case, current IoT spending exceeds $3 trillion, reaching $9 trillion by perhaps 2020.

Looking only at enterprise markets, and excluding consumer IoT, spending might reach $6.7 trillion in 2020, if those projections are close to reality.

Thursday, May 9, 2019

Are Metro Data Centers Good Enough for Low-Latency Edge Apps?

How great are the advantages of infrastructure edge computing that is widely distributed within a single metro area? Not so clear, says Raul Martynek, DataBank CEO.

To be sure, 5G dramatically reduces the latency of the air interface, dropping round-trip times from an average of 50 ms to 60 ms (or more) down to sub-10 ms, Martynek says.

“With 5G, most cloud data centers will be 25 ms to 50 ms away from end-users, a significant improvement over 4G,” he argues. Deploying five smaller sites within a single metro area might improve round-trip latency by one to two milliseconds, he says.

“Once an application is deployed in a single location in a given market, the actual latency to reach any eyeball in that market is dramatically reduced compared to the common East/Central/West configuration of web scale data centers and the incremental benefits of micro data centers evaporate,” when 5G access is deployed, he argues.

So for many applications requiring round-trip latency of three to five milliseconds, for example, a single metro data center might work.
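A rough way to sanity-check those numbers is to compute fiber propagation delay directly. The sketch below assumes light travels at roughly 200,000 km/s in fiber (about two-thirds of its speed in a vacuum) and ignores routing, queuing and serialization delays, so the real figures would be somewhat higher.

# Round-trip propagation delay over fiber, ignoring routing and queuing delays.
FIBER_SPEED_KM_PER_MS = 200.0  # ~200,000 km/s, i.e. ~200 km per millisecond

def round_trip_ms(distance_km: float) -> float:
    """Return the round-trip propagation delay in milliseconds."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

for distance in (10, 100, 1000):  # metro site, regional site, distant region
    print(f"{distance:>5} km one-way -> {round_trip_ms(distance):.2f} ms round trip")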

“It seems logical to us that before the large cloud and content players deploy at 10,000 cell tower locations, they will first deploy a single cluster in a traditional data center in the top metro markets that they are looking to service and be able to reach any eyeball in those geographies with very low latency,” Martynek says.

Such improvements obviously are relevant for any discussion of whether network slicing, for example, might allow creation of quality-assured customized networks with guaranteed levels of service (latency, packet loss, bandwidth, for example).

Many believe a new opportunity to supply quality-of-service guarantees on virtualized, 5G and other networks will exist. The counter argument is that routine levels of service will be so much better that it will be hard to convince most customers to pay for the higher-cost QoS-assured tiers of service.

That might especially be true for consumer customers.

Some might argue that 5G and better networks, growing competition for internet access services, plus content encryption, have killed the means and the demand for quality-assured consumer broadband services.

Even if ISPs wanted to sell services that prioritize quality, the technical ability to do so, and the consumer demand, are not present.

Ignore for a moment the politics of network neutrality. It can be argued that internet service providers rapidly are losing the technological ability to “degrade” or “slow down,” much less identify, packets they deliver. And without packet visibility, it is impossible to apply the quality of service mechanisms net neutrality proponents fear.

Keep in mind that any attempt to categorize and apply service-level (QoS) features to internet content becomes impossible when the data is encrypted. Packets are encrypted at the edge, by the app providers themselves.

When that happens, service providers cannot prioritize, because they have no idea what actual class or category of content is delivered. In other words, they cannot “tamper with” what they cannot see. And that is the growing trend, as most traffic gets encrypted.

By about 2020, estimates Openwave Mobility, fully 80 percent of all internet traffic will be encrypted. In 2017, more than 70 percent of all traffic was encrypted.

There are other technical and business reasons packet discrimination is not possible.

Can 5G service providers charge a premium for low-latency performance guarantees, when the stated latency parameters--best effort--are already so low? Could they charge a higher fee for faster speeds, when faster speeds are the norm?

That, essentially, is among the implications of fast 5G networks and metro-level computing.

Wednesday, May 8, 2019

AI at the Edge

Will Google bring deep learning to the device? It appears that will be the case. In fact, AI is expected to be deployed mostly at the edge, and not in traditional data centers.


Traditionally, hardware limitations have made this prohibitive. But prices are dropping quickly.


Google CEO Sundar Pichai says Google deep learning models, which were previously up to 100 GB in size, have been scaled down to 0.5 GB.


“What if we could bring the AI that powers Assistant right onto your phone?” said Scott Huffman, VP of Engineering for Google Assistant.




For purposes of clarity, smartphones have not traditionally been considered “things.” That appellation has been reserved for other appliances or sensors not intended to be used by humans.


For purposes of plotting the value of edge computing, the distinction might not matter, as much of the consumer IoT space (smartwatches, for example) features devices which are used by humans.




Tuesday, May 7, 2019

Microsoft Makes Play for Edge

SQL Server and Azure SQL Database are the leading data engines for enterprise workloads on-premises and in the cloud, respectively, and Microsoft wants to leverage those assets at the edge with Azure SQL Database Edge.

Azure SQL Database Edge runs on ARM processors and provides capabilities such as data streaming and time-series data, with in-database machine learning and graph support. And because Azure SQL Database Edge shares the same programming surface area with Azure SQL Database and SQL Server, you can easily take your applications to the edge without having to learn new tools and languages, while preserving consistency in application management and security control, says Microsoft.
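To illustrate the "same programming surface" point, here is a minimal sketch of querying a time-series table with T-SQL from Python via pyodbc. The server name, database, credentials, table and columns are placeholder assumptions; the same code would target SQL Server, Azure SQL Database or an edge deployment by changing only the connection string.

import pyodbc

# Placeholder connection string and schema, for illustration only.
CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=edge-device.local;DATABASE=Telemetry;UID=app;PWD=secret"
)

with pyodbc.connect(CONN_STR) as conn:
    cursor = conn.cursor()
    # Average the last hour of readings per sensor (hypothetical table/columns).
    cursor.execute(
        """
        SELECT sensor_id, AVG(temperature_c) AS avg_temp
        FROM SensorReadings
        WHERE reading_time >= DATEADD(hour, -1, SYSUTCDATETIME())
        GROUP BY sensor_id
        """
    )
    for sensor_id, avg_temp in cursor.fetchall():
        print(sensor_id, round(avg_temp, 2))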

In addition, Microsoft has developed IoT Plug and Play, a new open modeling language to connect IoT devices to the cloud seamlessly, enabling developers to navigate one of the biggest challenges they face — deploying IoT solutions at scale.

Previously, software had to be written specifically for the connected device it supported, limiting the scale of IoT deployments. IoT Plug and Play provides developers with a faster way to build IoT devices and will provide customers with a large ecosystem of partner-certified devices that can work with any IoT solution, Microsoft says.

Cloud-Based IoT Now a Growing Battleground

Cloud-based or edge-based Internet of things use cases are widely expected to grow over the coming decade. And that means there is a chance for contestants to solidify or gain market share.

Microsoft has a 23-percent share of the IoT cloud market, but only a 17 percent share of the general cloud market, illustrating the potential upside. Google has about 10-percent share of cloud computing, but perhaps 20 percent of cloud-based IoT.

Amazon now represents 34 percent of the IoT cloud market and 32 percent of the general cloud computing market.

In April 2018, Microsoft announced it would invest $5 billion in IoT efforts over the next four years.



Friday, May 3, 2019

Arrow Launches Smart Airport Service

Arrow Electronics has launched what it calls the Smart Airport Asset Management Solution, in collaboration with IBM.

The solution will provide airport operators with temperature, vibration and motor fluid level data from sensors on motors and other moving parts in escalators, moving walkways and baggage handling systems.

Additionally, sensors on potable water cabinets measure temperature, open/closed door, water flow and leakage to determine if the cabinet is functional.

The solution is based on an Arrow-designed and sourced set of IoT sensors and gateways, the IBM Watson IoT platform, and IBM asset management software.

Data is collected from the sensors connected to wireless gateways, and then consolidated and analyzed on the IBM Watson IoT platform.
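As an illustration of that sensor-to-platform path, here is a minimal sketch of a gateway publishing a JSON sensor event over MQTT with Python's paho-mqtt client. The organization ID, device identifiers and token are placeholders, and the client-ID and topic shapes follow the general pattern documented for the Watson IoT Platform's MQTT interface, treated here as an assumption rather than a detail of this particular solution.

import json
import paho.mqtt.client as mqtt

# Placeholder identifiers; the client-ID and topic formats follow the pattern
# documented for Watson IoT Platform MQTT devices (an assumption, not a detail
# disclosed for the Arrow/IBM airport solution).
ORG = "myorg"
DEVICE_TYPE = "escalator-motor"
DEVICE_ID = "motor-017"
AUTH_TOKEN = "DEVICE_TOKEN"

client = mqtt.Client(client_id=f"d:{ORG}:{DEVICE_TYPE}:{DEVICE_ID}")
client.username_pw_set("use-token-auth", AUTH_TOKEN)
client.connect(f"{ORG}.messaging.internetofthings.ibmcloud.com", 1883, keepalive=60)

# Publish one temperature/vibration reading as a JSON event.
event = {"temperature_c": 41.3, "vibration_mm_s": 2.8}
client.publish("iot-2/evt/status/fmt/json", json.dumps(event), qos=1)
client.disconnect()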