Edge Computing

Edge computing has risen to prominence as a byword for innovative technology, and it is now widely assumed to be the future of computing. Until recently, however, the discussion remained largely hypothetical because the infrastructure to support it did not exist.

Several edge computing resources are now in the hands of application developers, entrepreneurs, and large enterprises, ranging from micro data centers to specialized processors and software abstractions. 

The practical implications of edge computing can now be examined beyond the theoretical, which raises the question: is the buzz surrounding edge computing deserved or misplaced?

In this article, we will delve into the current state of the edge computing industry. Despite the hype surrounding edge computing, the evidence suggests it is based on a growing need to decentralize applications for cost and performance reasons. 

The hype has overstated some aspects of edge computing while leaving others overlooked. The points below aim to provide a realistic assessment of the edge’s capabilities, today and tomorrow:

Latency isn’t the only factor in edge computing

Edge computing differs from the traditional cloud model, which centralizes computation in hyper-scale data centers. In edge computing, computation and data storage move closer to the user. The edge can therefore sit anywhere nearer to the end-user or device than a traditional cloud data center: on-device, on-premises, one mile away, or 100 miles away.
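To make those distances concrete, here is a back-of-the-envelope sketch of the best-case round-trip time over fiber at each distance. The distances and labels are illustrative assumptions, and the figures are physics lower bounds only; real networks add routing and queuing delay on top.

```python
# Lower-bound round-trip time (RTT) over fiber for several edge distances.
# These are speed-of-light floors, not measured latencies.

SPEED_OF_LIGHT_KM_S = 300_000      # vacuum, km/s
FIBER_FACTOR = 2 / 3               # light travels at roughly 2/3 c in fiber

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round trip for a given one-way distance, in milliseconds."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return 2 * one_way_s * 1000

for label, km in [("on-premises", 1),
                  ("access edge (~1 mile)", 1.6),
                  ("regional edge (~100 miles)", 160),
                  ("distant cloud region (~2000 miles)", 3200)]:
    print(f"{label}: >= {min_rtt_ms(km):.2f} ms")
```

Even this idealized arithmetic shows why proximity matters: a workload 100 miles away has a latency floor roughly twenty times lower than one served from a cloud region 2,000 miles away.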

Wherever the edge sits, the traditional narrative has emphasized latency reduction as its purpose, whether to improve the user experience or to enable new latency-sensitive applications. This framing does edge computing a disservice.

Despite its importance, latency mitigation is probably not the most valuable use case. Edge computing can also reduce the network traffic flowing to and from the cloud, or what some call cloud offload, and cloud offload will probably provide just as much economic value as latency reduction.

The enormous growth in data generated by users, devices, and sensors is what drives cloud offload. Moving that data to the cloud costs money, so many organizations would rather not move it unless they have to.

The edge computing approach extracts value from data where it is generated rather than shipping it away from the point of generation. Pruning the data locally means only a subset needs to be sent to the cloud for further analysis or storage.
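A minimal sketch of this pruning pattern, with assumed data shapes rather than any real product API: an edge node filters raw sensor readings locally and forwards only the anomalous subset to the cloud.

```python
# Edge-side pruning: keep routine readings local, forward only anomalies.
from typing import Iterable

def prune_readings(readings: Iterable[float], threshold: float) -> list[float]:
    """Return only the readings that exceed the threshold; the rest stay at the edge."""
    return [r for r in readings if r > threshold]

raw = [20.1, 20.3, 87.5, 20.2, 91.0, 20.0]   # e.g., temperature samples
to_cloud = prune_readings(raw, threshold=50.0)
print(to_cloud)                               # the anomalous subset
print(f"traffic avoided: {1 - len(to_cloud) / len(raw):.0%}")
```

In this toy run, four of six readings never leave the edge; real deployments apply the same idea with far richer filters (aggregation, compression, on-device inference).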

Cloud offload is typically used for video or audio processing, among the most bandwidth-intensive data types. It is also an excellent fit for industrial equipment, which generates large amounts of data.
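Some illustrative arithmetic shows why video is the canonical offload case. The bitrate and event sizes below are assumptions for the sake of the example, not vendor figures: a camera that streams raw footage to the cloud versus one whose edge node analyzes the footage locally and uploads only detection events.

```python
# Assumed figures: how much bandwidth edge-side video analysis can save
# versus streaming raw footage to the cloud.

RAW_BITRATE_MBPS = 8.0        # assumed 1080p camera stream, megabits/s
EVENT_BYTES = 2_000           # assumed size of one detection-event payload
EVENTS_PER_HOUR = 120

raw_per_hour_mb = RAW_BITRATE_MBPS * 3600 / 8           # megabytes per hour
events_per_hour_mb = EVENT_BYTES * EVENTS_PER_HOUR / 1e6

print(f"raw stream to cloud:  {raw_per_hour_mb:,.0f} MB/hour")
print(f"events only to cloud: {events_per_hour_mb:.2f} MB/hour")
print(f"reduction: ~{raw_per_hour_mb / events_per_hour_mb:,.0f}x")
```

Under these assumptions the edge node cuts cloud-bound traffic from 3,600 MB/hour to a fraction of a megabyte, which is where the economic value of cloud offload comes from.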

Edge makes the cloud accessible

Early predictions held that the edge would displace the cloud, but it is more accurate to say that the edge makes the cloud more accessible. Because cloud computing is defined by on-demand resource availability and the abstraction of physical infrastructure rather than by any particular location, it has in recent years dispersed well beyond traditional data centers, and workloads continue to migrate to it unabated.

Over time, edge locations will be managed with cloud-style tools and approaches, blurring the line between cloud and edge management. The edge computing initiatives of public cloud providers like AWS and Microsoft Azure show that the edge and the cloud are interconnected.

You can now integrate on-premises edge computing with Amazon’s AWS Outposts: fully assembled racks of compute and storage that mimic the hardware in Amazon’s data centers. Customers install them in their own facilities, and Amazon monitors, maintains, and upgrades them.

It’s important to note that Outposts provide many of the AWS services cloud users are accustomed to, such as EC2 compute instances, making the edge operationally comparable to the cloud.

There is a phased approach to edge infrastructure

Some applications are better suited for on-premises deployment, but in many cases, application owners would rather minimize their on-premises footprint while still reaping the benefits of edge computing. That requires a new kind of infrastructure: one that resembles the cloud but is far more geographically distributed than the dozens of hyper-scale data centers that make up the cloud today.

Although still in its infancy, this type of infrastructure is expected to develop in three stages, each broadening the edge’s geographic reach.

Stage 1: Multi-Region and Multi-Cloud

This first stage sits at the least distributed end of the spectrum of edge computing approaches. It takes advantage of the multiple regions offered by public cloud providers, and of multiple clouds, to place workloads closer to users than any single region allows.

Stage 2: The Regional Edge

The second stage of the edge’s evolution leverages infrastructure across hundreds of locations rather than a small number of hyper-scale data centers.

Stage 3: The Access Edge

The third stage pushes the edge further outward still, to within a few network hops of the end user. In traditional telecommunications terminology, this architecture is known as the Access Edge.

The Access Edge typically takes the form of a micro data center, ranging in size from a single rack to a semi-trailer, installed at the base of a cellular tower or by the side of the road.


Edge computing has made real progress over the past few years, but it is still in its infancy. The cloud as we know it today is itself barely 15 years old, so it is worth remembering that edge computing, like the cloud, will advance rapidly and make its mark on the computing landscape.