by Wolfram Hempel (@wolframhempel)
Thu Aug 09 2018

What the f*** is the edge?

I still remember a call with a potential investor I had a while ago at deepstreamhub, a company selling realtime data servers to enterprises. Things were going well: we churned through the technical concepts, use cases, customer adoption and business metrics, and she seemed to like what she heard. But then her tone of voice lowered, a long pause ensued, and I could tell that she was about to ask something really, REALLY important. I braced myself - here it came:

"But is it ... on the edge?"

The edge?!? Wut?!? I mean, don't get me wrong: I knew what "the edge" was - or at least I thought I did - but something in her voice made me doubt that.

"Well,..." I stammered "... it could be on the edge". WRONG!!! "could be on the edge" clearly wasn't good enough. It already needed to be there.

"Hmmm..." a long silence on her part. I felt that I had already lost this one, so I dared to ask - not straightforward of course, but just to make sure we were on the same boat: "Erm...what are...your thoughts about - the edge?"

"The Edge" - she said with a sort of solemn glory, probably more apt to Mufasa showing the other animals their new king "is the future!".

Oh dear...

So, what actually is the edge?

(Figure: Edge architecture example)

The edge is a computer that's closer to you than another computer. That's it. Of course, things aren't that simple:

The business model of cloud providers such as AWS, Google Cloud or Microsoft Azure is largely based on economies of scale. If everyone looking to host a website had to run their own datacenter, complete with facility-wide climate control, water-cooled server towers, highly specialized fulltime staff and physical access controls, things would be very expensive indeed.

Cloud providers operate at the very opposite end of this spectrum. Few of the costs of running a data center scale linearly with the computing power it provides. Consequently, the tremendous price drops in cloud computing are largely a function of more efficient hardware utilization (virtualization, serverless on-demand scheduling etc.) and - most importantly - of larger and larger datacenters.

That means that the best strategy for a cloud provider to maximize profit margins is centralization. Every additional server sharing the existing resources of a facility means an increase in profitability for the site.

But centralization comes at a cost: physical locations are exposed to catastrophic environmental events, power grid failures and - worst of all - loss of internet connectivity. And the higher the degree of centralization, the higher the share of customers impacted by any single event.

Yet even without a major earthquake, power outage or meteor strike there's a more pervasive problem: network latency. Information travels through optical fiber at about two-thirds the speed of light. Add a couple of switches, routers and other network hops along the way, and real-world round-trip times between e.g. London and San Francisco end up at around 150ms. That's acceptable for website loading and HTTP traffic, a nuisance for online gamers, and a catastrophe for self-driving cars needing to make realtime decisions.
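
To get a feel for where that 150ms figure comes from, here is a quick back-of-the-envelope sketch. All the constants - distance, route stretch, per-hop overhead - are rough assumptions for illustration, not measurements:

    // Rough sanity check of the London <-> San Francisco round trip quoted above.
    // Every constant here is an illustrative assumption, not a measured value.
    const C_VACUUM_KM_S = 299_792;                // speed of light in vacuum
    const C_FIBER_KM_S = C_VACUUM_KM_S * (2 / 3); // light in fiber travels at ~2/3 c
    const GREAT_CIRCLE_KM = 8_600;                // approx. London-San Francisco distance
    const ROUTE_STRETCH = 1.5;                    // real cable paths are far from straight
    const HOP_OVERHEAD_MS = 20;                   // assumed total for switches and routers

    const oneWayMs = ((GREAT_CIRCLE_KM * ROUTE_STRETCH) / C_FIBER_KM_S) * 1000;
    const roundTripMs = 2 * oneWayMs + HOP_OVERHEAD_MS;

    console.log(`one way: ${oneWayMs.toFixed(0)}ms`);       // ~65ms
    console.log(`round trip: ${roundTripMs.toFixed(0)}ms`); // ~149ms

Physics alone puts a hard floor under cross-continental latency - no amount of server optimization gets you below it. The only way down is to move the computation closer.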

To mitigate both problems, cloud providers run multiple datacenters, though with an incentive to strike a balance between cost and latency/distribution. At the time of writing, the three biggest contenders run 18 datacenters (AWS), 50 datacenters (Azure) and 17 datacenters (Google Cloud) respectively, each with its own subdivision into isolated networks and availability zones.

This satisfies moderately latency-sensitive use cases, such as video conferencing, multiplayer gaming, chat or collaboration - but it is not enough for cases requiring sub-millisecond latency and uninterrupted availability: fields such as IoT metric processing and control, autonomous vehicles, or financial analytics and trading (excluding HFT, which comes with its very own set of challenges). But even without life-or-death latency and stability requirements, being able to process data faster and gain insights earlier can be a very real business advantage.

So how can we achieve this level of latency and stability? By moving computational resources closer to where the data is produced or consumed - a.k.a. "the edge". Again, it is important to stress that there is no "edge" per se - just a spectrum of proximities of computational resources.

  • When e.g. Amazon CloudFront, AWS's content delivery network, talks about "edge caching", it simply refers to the Amazon datacenter closest to whoever requests the site.

  • For other cases, e.g. industrial IoT, facility automation or sensor data processing, the "edge" tends to be a physical computer within the same building or site that runs pre-processing steps before the data is forwarded to the wider cloud. This front layer of computing power is also referred to as "fog computing" or "dew computing".

  • But the edge can be even closer. As mobile devices and browsers grow ever more powerful, more and more computational tasks, e.g. resizing images before uploading or P2P communication, can be handled before the cloud even gets involved - see the sketch after this list.
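
To make that last point concrete, here is a minimal browser-side sketch that downscales an image on the client before uploading it, so the heavy pixel work never leaves the device. The /upload endpoint, the maximum width and the JPEG quality are hypothetical placeholders:

    // Minimal sketch: do the image resizing at the very edge - the user's browser.
    // "/upload", maxWidth and the JPEG quality below are placeholder assumptions.
    async function resizeBeforeUpload(file: File, maxWidth = 1280): Promise<Blob> {
      const bitmap = await createImageBitmap(file);
      const scale = Math.min(1, maxWidth / bitmap.width);

      const canvas = document.createElement("canvas");
      canvas.width = Math.round(bitmap.width * scale);
      canvas.height = Math.round(bitmap.height * scale);
      canvas.getContext("2d")!.drawImage(bitmap, 0, 0, canvas.width, canvas.height);

      // Re-encode the downscaled image; quality 0.8 is an arbitrary choice.
      return new Promise((resolve, reject) =>
        canvas.toBlob(
          (blob) => (blob ? resolve(blob) : reject(new Error("encoding failed"))),
          "image/jpeg",
          0.8
        )
      );
    }

    // Usage: shrink locally, then upload the much smaller blob.
    async function handleFileInput(file: File): Promise<void> {
      const small = await resizeBeforeUpload(file);
      await fetch("/upload", { method: "POST", body: small });
    }

The server still stores the result, but the compute-heavy part of the task happened entirely at the edge.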

Cynics might of course argue that we have come full circle - from thick client to cloud and back to thick client - but that would miss the point of what "the edge" is all about: in a world of increasingly ubiquitous computing power, we are well advised to reflect on where our computation happens and how we can make the most efficient use of the resources at our disposal.