03 June 2015

An Open Fabric at the Edge

Fog Computing pushes the Cloud Computing paradigm towards the edge of current networks, leveraging distributed processing and storage resources around Users. Apparently just another buzzword, it is in reality the progressive maturation of a technology trend: hardware miniaturization, increasing performance and falling costs, together with pervasive ultra-broadband connectivity. The result is more and more powerful devices, smart terminals and intelligent machines scattered in the environment around Users, capable of storing data and executing services locally (or, better, in orchestration with the Cloud).

This floating fog of ICT resources at the edge will create the conditions whereby Users will literally “decide and drive” future networks and services. This fog of edge devices can indeed create a sort of processing and storage fabric that can be used to execute any network function and to provide any sort of ICT services and applications. The components of this fabric can be seen as CPUs/GPUs, SSDs (Solid State Drives), HDDs (Hard Disk Drives) and links (perfectly in line with the “disaggregation of resources” targeted by the Open Compute Project). One may imagine these components aggregating dynamically in an application-driven “flocking”. In the same way as birds with simple local behaviors optimize the aerodynamics of the flock (solving a constrained optimization problem by using very simple local rules), the flocking of components can dynamically follow application-driven network optimizations, as in the toy sketch below.
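
To make the flocking analogy concrete, here is a minimal toy simulation in Python. Everything in it (the component names, the local visibility of four neighbors, the latency figures) is an illustrative assumption; the point is only that one simple local rule, applied independently by each task, lets a balanced, application-driven aggregate emerge without any central optimizer.

    import random

    # Toy "flocking" of edge resources: one simple local rule per task,
    # no central optimizer. All names and figures are assumptions.

    class Component:
        def __init__(self, name, kind, latency_ms):
            self.name = name              # e.g. "cpu-3" (hypothetical)
            self.kind = kind              # "cpu", "gpu", "ssd", "hdd", "link"
            self.latency_ms = latency_ms  # latency from this component to the User
            self.load = 0                 # tasks currently attached here

    def flock(components, tasks, latency_bound_ms):
        """Local rule: among the few components a task can 'see' that meet
        the application's latency bound, join the least loaded one."""
        for _ in range(tasks):
            visible = random.sample(components, k=min(4, len(components)))  # local view only
            candidates = [c for c in visible if c.latency_ms <= latency_bound_ms]
            if candidates:
                min(candidates, key=lambda c: c.load).load += 1

    random.seed(1)
    fabric = [Component(f"cpu-{i}", "cpu", random.uniform(1, 30)) for i in range(10)]
    flock(fabric, tasks=40, latency_bound_ms=20)
    for c in fabric:
        print(c.name, f"{c.latency_ms:.1f} ms", "load:", c.load)

Running it shows the load spreading evenly over the low-latency components, with no component ever seeing more than its four random neighbors: the aggregate behavior emerges from the local rule alone.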

So, imagine providing ICT services by orchestrating the idle local computing and storage resources of millions of smart terminals, nodes and machines at the edge. One may argue that not all types of services and applications can run entirely at the edge; still, there are several examples, such as content aggregation and transformation, data collection, analytics and static databases, which can really benefit from the fog paradigm (a possible placement decision is sketched below).
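
As a sketch of what such orchestration could look like, the following Python fragment places a fog-friendly job on the lowest-latency edge node with enough idle resources, and falls back to the Cloud otherwise. The node names, resource figures and the place function are hypothetical, chosen only to illustrate the decision:

    from dataclasses import dataclass

    @dataclass
    class EdgeNode:
        name: str
        idle_cpu_cores: int
        free_storage_gb: float
        rtt_to_user_ms: float

    @dataclass
    class Job:
        name: str
        cpu_cores: int
        storage_gb: float
        max_latency_ms: float

    def place(job: Job, nodes: list[EdgeNode]) -> str:
        """Pick the lowest-latency edge node that can host the job; else Cloud."""
        fitting = [n for n in nodes
                   if n.idle_cpu_cores >= job.cpu_cores
                   and n.free_storage_gb >= job.storage_gb
                   and n.rtt_to_user_ms <= job.max_latency_ms]
        if fitting:
            best = min(fitting, key=lambda n: n.rtt_to_user_ms)
            return f"{job.name} -> edge node {best.name}"
        return f"{job.name} -> Cloud (no edge node fits)"

    nodes = [EdgeNode("set-top-box-7", 2, 40.0, 5.0),
             EdgeNode("home-gateway-3", 1, 8.0, 3.0)]
    print(place(Job("content-aggregation", 2, 10.0, 20.0), nodes))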


Clearly, end-to-end latency is one of the major problems to be solved.

Imagine, just for didactic purposes, an equivalence between the time of one CPU cycle and the time of one step of a walk. The latency in accessing solid-state memory (e.g., DRAM) can be estimated at tens to a few hundred CPU cycles: tens of steps, more or less, in our example. But if you wish to estimate the average latency in accessing an HDD, i.e. stored data (also including the latency of the network links, the RTT), then overall it amounts to a walk of about 10,000 km!
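
To check the order of magnitude, here is a back-of-the-envelope version of the analogy in Python; the cycle time, step length and latency figures are rough assumptions, not measurements:

    # One CPU cycle = one step of a walk: rough, assumed figures.
    CYCLE_TIME_NS = 0.5    # ~2 GHz CPU: one cycle lasts about 0.5 ns (assumption)
    STEP_LENGTH_M = 0.75   # average length of one walking step (assumption)

    def walk_equivalent_m(latency_ns):
        """Distance of the walk equivalent to a given access latency."""
        cycles = latency_ns / CYCLE_TIME_NS  # cycles "spent" waiting
        return cycles * STEP_LENGTH_M        # one cycle = one step

    dram_ns = 50.0          # DRAM access: tens of ns (assumption)
    hdd_plus_net_ns = 10e6  # HDD seek + network RTT: ~10 ms (assumption)

    print(f"DRAM:          {walk_equivalent_m(dram_ns):.0f} m")
    print(f"HDD + network: {walk_equivalent_m(hdd_plus_net_ns) / 1000:.0f} km")
    # DRAM comes out at ~75 m (about a hundred steps); HDD plus network at
    # ~15,000 km, the same order of magnitude as the 10,000 km walk above.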
