Greg Kerr


Edge Computing

May 8th 2018
Image credit: Ian Barker

The world is well served by Moore's Law, named for Intel co-founder Gordon Moore, which accurately describes humanity's ability to double computing capacity roughly every two years. Extrapolating Moore's math, by 2040 the world's energy production will not be able to support our computing demands. [1]

Solving this unsustainable imbalance will require spreading data processing across all devices. As processing requirements grow, edge computing will fill the void left by the limited capacity of modern cloud computing.

Gleaning the Cloud’s Edge

Traditional network pipelines are quickly overwhelmed by high data flow. Case in point: a modern household can produce over a gigabyte of data per second to maintain optimal functionality, and processing all of that data in the existing cloud suffers from inherent network delays. Organizing this data at the edge of the network's computational reach addresses those latency issues. Solving latency at the computing edge extends a mobile application's capabilities to middle-tier, mobile-based micro data centers. These data centers sit at the crux of the Internet, the cloud, and the mobile device itself, executing dense computation as well as low-latency tasks. [2] Femtoclouds, REPLISOM, CloudAware, and ParaDrop are already offloading computation to edge devices.
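The offloading trade-off described above can be sketched in code. The snippet below is a minimal, illustrative model (not the actual logic of Femtoclouds, REPLISOM, CloudAware, or ParaDrop): it assumes each tier can be summarized by a network round-trip cost and a compute throughput, with all numbers invented for illustration, and picks the tier with the lowest estimated completion time.

```python
# Illustrative sketch: choosing where to run a task (device, edge micro
# data center, or distant cloud) by estimated completion time.
# All latency and throughput figures are assumptions, not measurements.

from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    round_trip_ms: float  # network latency to reach this tier
    ops_per_ms: float     # compute throughput once the task arrives

def completion_time_ms(tier: Tier, task_ops: float) -> float:
    """Estimated time = network round trip + compute time."""
    return tier.round_trip_ms + task_ops / tier.ops_per_ms

def choose_tier(tiers: list[Tier], task_ops: float) -> Tier:
    """Pick the tier minimizing estimated completion time."""
    return min(tiers, key=lambda t: completion_time_ms(t, task_ops))

tiers = [
    Tier("device", round_trip_ms=0,  ops_per_ms=1),    # no network, weak CPU
    Tier("edge",   round_trip_ms=5,  ops_per_ms=200),  # nearby micro data center
    Tier("cloud",  round_trip_ms=80, ops_per_ms=500),  # distant but powerful
]

print(choose_tier(tiers, task_ops=1).name)        # tiny task: stays on-device
print(choose_tier(tiers, task_ops=10_000).name)   # dense task: edge wins
print(choose_tier(tiers, task_ops=100_000).name)  # massive task: cloud wins
```

Under these assumed numbers, small tasks stay local (no round trip to pay), moderately dense tasks land on the nearby edge, and only very large workloads justify the cloud's 80 ms round trip, which is the intuition behind the middle-tier micro data centers described above.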

Proximity’s Role

The increasing complexity of the world's mesh of networked devices is bringing new value to proximity, which plays a pivotal role in delivering low latency, high bandwidth, and broader access to network resources.

[1] Intel, “50 Years of Moore’s Law.” Accessed 20 March 2018.

[2] Shi Weisong, Cao Jie, Zhang Quan, Li Youhuizi, and Xu Lanyu. “Edge Computing: Vision and Challenges.” IEEE Internet of Things Journal, Vol. 3, October 2016.
