Just about everything in engineering is a trade-off: speed versus cost, distance versus cost, latency versus computational power. The latency-versus-computational-power trade-off is among the most important for edge computing.
Latency is lowest when edge devices themselves do the computing, but those devices have limited computational power. On-premises servers offer far more processing power, and therefore faster results, with only a small latency penalty.
(image source: Cloud Native Computing Foundation)
When still more computational power is needed, edge computing facilities located in a metro area may be required. Latency increases somewhat, but so does the available computational capacity.
Remote cloud computing facilities offer massive amounts of computing, but at the highest latency.
So engineering cost-effective solutions requires knowledge of the actual use cases and the drivers of value. In essence, engineers grapple with how to solve a particular problem at the lowest cost and complexity, given the latency requirements.
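One way to picture that trade-off is as a simple tier-selection problem: among the tiers that satisfy a workload's latency budget and compute requirement, pick the cheapest. The sketch below is purely illustrative; the tier names, latency figures, and cost figures are assumptions for the example, not data from any source cited here.

```python
# A minimal sketch of the latency-vs-compute trade-off: choose the cheapest
# tier that meets both the latency budget and the compute requirement.
# All numbers below are illustrative assumptions, not measured values.

from dataclasses import dataclass


@dataclass
class Tier:
    name: str
    round_trip_ms: float   # assumed round-trip latency to the tier
    compute_units: int     # assumed relative processing capacity
    relative_cost: float   # assumed relative cost per unit of work


TIERS = [
    Tier("on-device",      round_trip_ms=1,  compute_units=1,    relative_cost=0.0),
    Tier("on-prem server", round_trip_ms=2,  compute_units=10,   relative_cost=1.0),
    Tier("metro edge",     round_trip_ms=10, compute_units=100,  relative_cost=2.0),
    Tier("remote cloud",   round_trip_ms=60, compute_units=1000, relative_cost=0.5),
]


def choose_tier(latency_budget_ms: float, required_compute: int):
    """Return the lowest-cost tier meeting the latency budget and compute need."""
    candidates = [
        t for t in TIERS
        if t.round_trip_ms <= latency_budget_ms and t.compute_units >= required_compute
    ]
    return min(candidates, key=lambda t: t.relative_cost, default=None)


if __name__ == "__main__":
    # e.g. a workload needing ~50 compute units within a 20 ms latency budget
    tier = choose_tier(latency_budget_ms=20, required_compute=50)
    print(tier.name if tier else "no tier satisfies the requirements")
```

In this toy example the workload lands on the metro edge tier: the device and on-premises tiers are too small, and the remote cloud misses the latency budget even though it is the cheapest per unit of work.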