29 November 2018

Towards Quantum Technologies and Services in Telcos



There is growing evidence of increasing efforts and investments in innovation activities on Quantum Technologies. Notable examples are the activities of Microsoft, IBM, HP, Toshiba, Google, NASA, Intel, Alibaba, BT, TID, KT and several other players in Academia and Centers of Excellence.

Quantum technologies and architectures are showing different levels of maturity, but it is believed that the first commercial systems are likely to be available within five to ten years: advanced prototypes, and in some cases commercial solutions, are already available.

A future breakthrough in the development of Quantum technologies and services at affordable prices would have systemic and far-reaching impacts, e.g.:

  • a Quantum Internet capable of exchanging information through fully optical networks and processing it, optically, in the form of encoded photons (a higher level of security than today);

  • the development of disruptive applications in the areas of cryptography, cyber-security, anti-counterfeit transactions with “quantum money” and finance, but also in bioinformatics, quantum machine learning and quantum intelligence;

  • radical implications in other sectors and industries, such as new, faster ways of processing genetic big data, quantum biology and medicine, or the development of new nano-tech smart materials.


Indeed, it is likely that quantum systems will eventually be available within five to ten years.

Current efforts focus on: 1) materials and chipsets; 2) scalability, through the implementation of error-correcting codes; 3) the design and engineering of quantum architectures.

Once available, quantum systems (and quantum algorithms) have the potential to jeopardize current security systems (source: ETSI).

Products and trends tend to follow a standard innovation cycle, starting with early adopters who pay high premiums and ending with commoditized product offerings with abundant competition. Quantum will reset the innovation cycle for many commoditized security products, and the real costs of concern are those of switching to new “quantum-safe” technologies.
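
To make the switching concern concrete, here is a rough sketch in Python (the framing and figures are illustrative, not taken from the ETSI analysis): Shor's algorithm breaks RSA and ECC outright, while Grover's search roughly halves the effective strength of symmetric keys.

```python
# Illustrative sketch: how quantum algorithms change effective security.
# Grover's search inspects N possibilities in ~sqrt(N) steps, so a k-bit
# symmetric key retains only ~k/2 bits of security; Shor's algorithm
# factors / solves discrete logs efficiently, breaking RSA and ECC outright.

def grover_effective_bits(key_bits: int) -> int:
    return key_bits // 2  # square-root speedup halves the security level

schemes = [
    ("AES-128", 128, "symmetric"),
    ("AES-256", 256, "symmetric"),
    ("RSA-2048 (~112-bit classical)", 112, "public-key"),
    ("ECC P-256 (~128-bit classical)", 128, "public-key"),
]
for name, bits, kind in schemes:
    if kind == "public-key":
        print(f"{name}: broken by Shor -> migrate to a quantum-safe scheme")
    else:
        print(f"{name}: ~{grover_effective_bits(bits)} bits left under Grover")
```

Under this reading, AES-256 remains comfortable while AES-128 becomes borderline, and all widely deployed public-key schemes need replacement: that replacement is the main switching cost referred to above.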


Eventually, it can be argued that if the “Softwarization” of Telecommunications is going to “commoditize” the digital infrastructures (by opening an OPEX cycle), a breakthrough in quantum technologies would have the potential to (re-)open a new CAPEX cycle, by requiring large investments for deploying future quantum infrastructures.

Presentation @ GSMA available at this Link 

22 November 2018

The emergence of the 4th Brain …

Neuroscience has provided many important insights about the structure and functions of the human brain. One of the most widely shared models was proposed by the neuroscientist Paul MacLean: the so-called 'Triune Brain'.

Three separate brain structures are often referred to as separate 'brains', operating almost independently but simultaneously:
  1. basal ganglia (found at the center of the human brain) referred to as the reptilian brain, in charge of controlling our innate and automatic self-preserving behaviors, ensuring survivability;
  2. limbic system (which consists of various component brain structures, such as the amygdala and hippocampus), in charge of controlling emotions;
  3. mammalian neocortex (which is implicated in conscious thought, language and reasoning). 
Today's technology advances (in the systemic digitalization of reality and in Artificial Intelligence) are likely to create a 4th, digital brain on top of the mammalian neocortex.

There are several virtual assistants, or intelligent personal assistants, which are becoming very popular today, as they are capable of performing tasks for individuals. In some sense these smart software agents are augmenting human intelligence in the digital cyberspace. In the near future these virtual assistants will become more and more intelligent, capable of being proactive and autonomous.


But there is more.

Ray Kurzweil predicts that within 30 years direct links will be established between the human brain and computer circuitry. The implications are mind-boggling. Such links could mean that the entire contents of a brain could be copied (and preserved) in an external database. Not only would the human brain be supplemented with enormous amounts of digital memory, it would also be linked to vast information resources like the internet — at the speed of thought.


Eventually, in a few years, the three biological human brains will be supplemented with enormous amounts of processing and memory capabilities - which means A.I. - and it is likely that they will be linked, almost instantaneously, to the immense information resources offered by the web.

Indeed, one may look at this as a 4th brain, on top of the mammalian neocortex!


The only big problem is that, this time, it was not Nature that directly designed and created this fourth brain.


So, this will inevitably have enormous implications and terrifying dangers, but hopefully an Artificial Immune System will be developed to defend civilization from these dangers.

10 October 2018

A Digital Nervous System for Industry 4.0

A profound Digital Transformation is impacting the evolution of the Digital Society.

This transformation is driven by the maturity and convergence of a number of techno-economic trajectories, such as: the penetration of fixed and mobile ultra-broadband and the coming of 5G; the down-spiralling cost of IT systems and the simultaneous increase in their performance; the evolution of Cloud Computing towards Edge and Fog Computing; and, remarkably, the consequent Cloudification of Telecommunications.

The Digital Transformation is transforming also the Industry.

In fact, the term Industry 4.0 is becoming more and more popular to refer to the so-called fourth industrial revolution: the first industrial revolution mainly concerned mechanization through water and steam power; the second addressed mass production and assembly lines using electricity; the third was based on the adoption of computers and automation to enhance production and assembly; finally, Industry 4.0 is bringing the concept of Smart Factories, based on the digitalization and seamless interworking of processes and production steps, from the planning stages to the actuators in the field.

Machinery and equipment will be able to improve processes through self-optimization and autonomous adaptation to environmental conditions (e.g., from local conditions to market demands).

Among the major technology drivers for Industry 4.0 are: the availability of huge computational power at low cost, low-latency high-bandwidth connectivity, Big Data analytics and AI systems, human-machine interaction, and digital-to-physical conversion. Industry 4.0 will enable faster, more flexible and more efficient processes (from product development and purchasing, through manufacturing, logistics and service): the digitalization and integration of vertical and horizontal value chains, the digitalization of product and service offerings, and the digitalization of business models and customer access.

Needless to say, market opportunities are huge. For example, according to a report by HSRC (Global Industry 4.0 Market & Technologies 2018-2023), the Industry 4.0 market is projected to reach $214B by 2023.

In Industry 4.0, smart manufacturing is based on cyber-physical systems, or digital twins, i.e. virtual models of processes, products and services. Through ubiquitous, low-latency 5G connectivity, smart sensors transmit data to Cloud-Edge Computing facilities, where the data are processed and analyzed with AI systems to provide contextual and predictive information, so that decisions can be made and then actuated in the physical world.
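
As a minimal sketch of this loop (in Python, with hypothetical names, and a deliberately trivial trend extrapolation standing in for a real AI model), the sense-analyze-decide-actuate cycle of a digital twin could look like this:

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    """Hypothetical virtual model of one machine: it mirrors sensor
    readings and predicts when the physical asset needs intervention."""
    machine_id: str
    temperature_limit_c: float = 80.0
    readings: list = field(default_factory=list)

    def ingest(self, temperature_c: float) -> None:
        # 1) smart-sensor data arriving over low-latency connectivity
        self.readings.append(temperature_c)

    def predict_overheat(self) -> bool:
        # 2) trivial stand-in for an AI model: extrapolate the linear
        #    trend of the last three readings one step ahead
        if len(self.readings) < 3:
            return False
        r = self.readings[-3:]
        trend = (r[-1] - r[0]) / 2
        return r[-1] + trend > self.temperature_limit_c

def actuate(twin: DigitalTwin) -> None:
    # 3) the decision is actuated back into the physical process
    if twin.predict_overheat():
        print(f"{twin.machine_id}: slowing the line down to avoid overheating")

twin = DigitalTwin("press-42")
for t in [70.0, 74.0, 79.0]:
    twin.ingest(t)
actuate(twin)  # -> press-42: slowing the line down to avoid overheating
```

In a real deployment, the ingest step would be fed over 5G/MEC connectivity and the prediction step would run on Cloud-Edge resources, as described above.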


In Industry 4.0, the convergence of manufacturing and services is going to be fueled by the XaaS (Everything as a Service) model which has recently emerged in the Digital Transformation of Telecommunications. In fact, the introduction of technologies such as SDN (Software Defined Networking) and NFV (Network Function Virtualization), supported by the virtualization of any resource and function, puts forward the XaaS model, both in a Telco infrastructure and in any other domain.

This is what we call the Cloudification of Telco Infrastructures: a Digital Transformation of telecom infrastructures spanning from the network PoPs to the Data Centers, up to the edge nodes, Users’ terminals and Smart Things, where “virtualization” acts as a unifying framework.

In summary, the Cloudification of Telco Infrastructures represents an opportunity to develop a Digital Nervous System for the Smart Factories of Industry 4.0. In fact, Cloudification will allow: 1) collecting and processing the Big Data of the Smart Factory; 2) processing these Big Data with A.I. methods/algorithms and comparing results with plans in order to infer decisions; 3) actuating XaaS (even automatically) in terms of actions, through multiple actuators, devices and smart things, to communicate with, control and optimize the Smart Factory’s processes.

03 May 2018

Multi-Access Edge Computing: Telco cooperation in IaaS vs PaaS scenarios


It was 2013 when IEEE Communications Magazine published my paper “Clouds of Virtual Machines in Edge Networks”, one of the first seminal works on Edge Computing. The paper claimed (for the first time, to my knowledge) the advantages of bringing the Cloud model towards the edge of an SDN-NFV network.

In 2014 ETSI released an introductory technical white paper on MEC, at that time standing for Mobile Edge Computing, later renamed Multi-Access Edge Computing (MEC).

Technically speaking, MEC can be seen as an extension of the Cloud Computing paradigm towards the edge (i.e., the aggregation and access segments) of Telecommunications Networks.

As a matter of fact, the use of IT resources (computing, storage/memory and networking) allocated at the edge of the infrastructure can bring a number of advantages, for example: improving QoS/QoE by reducing network latencies, reducing costs in the Cloudification of the infrastructure towards 5G, enabling new service/business models, etc.
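
As a back-of-the-envelope illustration of the latency advantage (all figures below are assumptions chosen for the example, not measurements):

```python
# Rough round-trip-time budget: propagation in fiber (~5 us/km one way,
# i.e. ~200,000 km/s in glass) plus a fixed processing/queuing overhead
# per network hop. Illustrative numbers only.

FIBER_US_PER_KM = 5

def rtt_ms(distance_km: float, per_hop_overhead_ms: float, hops: int) -> float:
    propagation_ms = 2 * distance_km * FIBER_US_PER_KM / 1000
    return propagation_ms + hops * per_hop_overhead_ms

print(f"Remote cloud (800 km, 8 hops): {rtt_ms(800, 0.5, 8):.1f} ms")
print(f"Edge node     (20 km, 2 hops): {rtt_ms(20, 0.5, 2):.1f} ms")
# ~12 ms vs ~1.2 ms: the gap MEC exploits for latency-sensitive
# applications (e.g., V2X, Industry 4.0).
```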

In fact, in the medium-long term, it is likely that network infrastructures will be composed of a physical layer (i.e., IT and network hardware and physical links) hosting dynamic software platforms that execute millions of software processes, implementing both network and service components/functionalities (e.g., VNFs, Virtualized Network Functions).

In this scenario, MEC aims at complementing Cloud Computing (not at replacing it, obviously…): for example, the so-called “slices” of the network infrastructure (e.g., in future networks such as 5G) may integrate both Cloud Computing and MEC resources, thus requiring orchestration capabilities spanning the overall infrastructure.

Today, Operators are exploring different strategies for MEC adoption, motivated by:

  • cost savings in the Cloudification (SDN-NFV) of the infrastructure: for example, using MEC to deploy smaller Central Offices at the edge (e.g., the Cloud CO initiative of the Broadband Forum);
  • revenue generation.


Regarding the latter, among the various approaches and business models for revenue generation, decoupling MEC IaaS from PaaS appears to enable cooperation between Telcos, who can join forces to boost the development of multi-domain open ecosystems (e.g., for V2X, Industry 4.0, etc.).

In particular, decoupling MEC IaaS from PaaS means:

  • Telcos deploy MEC servers (e.g., Cloudlets) to provide infrastructure services (i.e., MEC IaaS);
  • Third Parties (or Telcos themselves) deploy a MEC software platform/framework to provide platform services (i.e., MEC PaaS).




To achieve this decoupling, and to allow different companies to develop solutions and interwork, it is crucial to define clearly what MEC IaaS and MEC PaaS are, and to standardize the interfaces between them.
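
Purely as an illustration of where such a standardized boundary would sit (the interface names below are hypothetical, not taken from any ETSI specification), the decoupling can be thought of as two contracts, with the PaaS layer programmed against whichever Telco's IaaS implements the first one:

```python
from abc import ABC, abstractmethod

class MecIaaS(ABC):
    """Hypothetical contract exposed by a Telco's edge infrastructure."""

    @abstractmethod
    def allocate_compute(self, vcpus: int, memory_gb: int, site: str) -> str:
        """Reserve IT resources on an edge site; returns an allocation id."""

    @abstractmethod
    def release(self, allocation_id: str) -> None:
        """Release previously allocated resources."""

class MecPaaS(ABC):
    """Hypothetical contract exposed to application developers; built on
    top of whichever operator's MEC IaaS is available at a given site."""

    def __init__(self, iaas: MecIaaS):
        self.iaas = iaas  # the PaaS consumes the standardized IaaS contract

    @abstractmethod
    def deploy_app(self, image: str, site: str) -> str:
        """Deploy an edge application, drawing resources via the IaaS."""
```

The point of standardizing the MecIaaS contract is precisely that the same PaaS could then be deployed across the edge sites of different cooperating operators.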

In the meanwhile, a number of initiatives on MEC, and on Edge Computing in general, are emerging and flourishing, such as this one: https://www.akraino.org/

15 February 2018

When Will Networks Have Common Sense? Generative Adversarial Networks are on the way…


Generative Adversarial Networks (GANs) are a relatively new Machine Learning architecture for neural networks, first introduced in 2014 by researchers at the University of Montreal (see this paper).

In order to better capture the value of GANs, one has to consider the difference between Supervised and Unsupervised learning. Supervised neural machineries are trained and tested on large quantities of “labeled” samples. For example, a supervised image classifier would require a set of images with correct labels (e.g. cats, dogs, birds, . . .). Unsupervised neural machineries learn on the job from mistakes and try to avoid errors in the future. One can view a GAN as a new architecture for an unsupervised neural network, able to achieve far better performance compared to traditional ones.

The main idea of a GAN is to let two neural networks compete in a zero-sum game framework. The first network (the generator) takes noise as input and generates samples. The second one (the discriminator) receives samples from both the generator and the training data, and has to distinguish between the two sources.
The two networks play a game in which the generator learns to produce more and more realistic samples, and the discriminator learns to get better and better at distinguishing generated data from real data. The two networks are trained simultaneously, driving the generated samples to become indistinguishable from real data.
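
The following minimal sketch (in Python with PyTorch; a toy 1-D example I am adding for illustration, not the code from the original paper) shows this simultaneous training: the generator learns to mimic a Gaussian distribution while the discriminator learns to tell real from generated samples.

```python
# Minimal 1-D GAN: generator G maps noise to samples, discriminator D
# outputs the probability that a sample came from the real data.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0   # "training data": N(2.0, 0.5)
    fake = G(torch.randn(64, 8))            # generator turns noise into samples

    # Discriminator step: push D towards 1 on real data, 0 on generated data
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: fool D into outputting 1 on generated data
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(f"generated mean ~ {G(torch.randn(1000, 8)).mean().item():.2f} (target 2.0)")
```

After enough steps the generated mean approaches the target; to get there, the discriminator has had to model the data distribution, which is exactly the "density estimator" behaviour discussed next.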

GANs thus allow training the discriminator as an unsupervised “density estimator”, i.e. a contrast function that gives a low value for real data and a higher output for everything else: the discriminator has to develop a good internal representation of the data to solve this problem properly. More details here.

GANs were previously thought to be unstable. Facebook AI Research (FAIR) published a set of papers on stabilizing adversarial networks, starting with image generators using Laplacian Adversarial Networks (LAPGAN) and Deep Convolutional Generative Adversarial Networks (DCGAN), and continuing into the more complex endeavor of video generation using Adversarial Gradient Difference Loss Predictors (AGDL).

As claimed here, it seems that GANs can provide a strong algorithmic framework for building unsupervised learning models that incorporate properties such as common sense.

There is a nice metaphor here about GANs: “In a way of an analogy, GANs act like the political environment of a country with two rival political parties. Each party continuously attempts to improve on its weaknesses while trying to find and leverage vulnerabilities in their adversary to push their agenda. Over time both parties become better operators”.

30 January 2018

A.I. for mitigating the "complexity" of the Digital Transformation

Several techno-economic drivers are paving the way to a profound digital transformation of Telecommunications infrastructures. Among these drivers are: the diffusion of ultra-broadband, the increasing performance of IT systems versus their down-spiralling costs, the emergence of innovative network and service paradigms such as SDN and NFV, the growing availability of open source software, and the impressive advances in Machine Learning and Artificial Intelligence.

This digital transformation will lead the current legacy Telecommunications infrastructures to evolve towards the 5G (the 5th generation of network infrastructures) as an end-to-end network and service platform: in the long term, 5G is set to integrate processing, memory/storage and networking resources, functions and services through a “transparent”, hyper-connected ultra-broadband programmable “fabric”.

In this direction, a high-level reference model is emerging in several Standardization Bodies and Fora, based on two main pillars: 1) an infrastructure physical layer, which will include computing, memory/storage and network resources (up to the edge/fog resources and even Users’ terminals, devices and smart things); 2) a software virtualization layer, which will provide high-level abstractions of all the infrastructure resources, functions and services.

It is well known that Software-Defined Networking (SDN) and Network Function Virtualization (NFV) are two of the key enabling technologies. Their exploitation in 5G will allow Virtualized Network Functions (VNFs) and services to be dynamically combined and orchestrated into specific end-to-end “service chains” for vertical applications; moreover, the infrastructure will provide “slices” of logical resources in which multiple chains can be executed to serve applications with specific QoS requirements.
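
As a toy sketch of these concepts (hypothetical names and a deliberately simplified admission rule, not a standardized information model), a slice can be seen as a budget of logical resources within which service chains with QoS targets are instantiated:

```python
from dataclasses import dataclass

@dataclass
class VNF:
    name: str
    vcpus: int

@dataclass
class ServiceChain:
    vnfs: list             # ordered: traffic traverses the VNFs in sequence
    max_latency_ms: float  # QoS requirement of the vertical application

@dataclass
class Slice:
    tenant: str
    vcpu_budget: int
    chains: list

    def admit(self, chain: ServiceChain) -> bool:
        """Admit a chain only if the slice still has resources for it."""
        used = sum(v.vcpus for c in self.chains for v in c.vnfs)
        needed = sum(v.vcpus for v in chain.vnfs)
        if used + needed <= self.vcpu_budget:
            self.chains.append(chain)
            return True
        return False

automotive = Slice("V2X-provider", vcpu_budget=16, chains=[])
chain = ServiceChain([VNF("firewall", 2), VNF("video-analytics", 6)],
                     max_latency_ms=10)
print(automotive.admit(chain))  # True: 8 of 16 vCPUs now in use
```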

It is also reasonable to expect that this network transformation will reduce costs (e.g., CAPEX and OPEX) and increase the flexibility of the infrastructure, ensuring the high levels of programmability (through APIs) and the performance and security levels required by future 5G scenarios and applications (e.g., Internet of Things, Tactile Internet, Immersive Communications, Automotive, Industry 4.0, Smart Agriculture, Genomics/Omics and E-Health, etc.).

So 5G will be much more than one step beyond today’s 4G-LTE networks: it is expected to become an end-to-end network and service platform where multi-level APIs will allow Operators/Providers, Third Parties or even end-Users to create/operate “service chains”, made of elementary service/function components capable of meeting the applications’ requirements on demand. As a matter of fact, 5G architectural and functional disaggregation is one of the most debated avenues in innovation and standardization activities.

We are witnessing a rapid increase in the “complexity” of the infrastructures subjected to this process of digital transformation, a complexity which will be too high for human operators alone.

In fact, the configuration, control and management of current physical pieces of equipment (in most cases closed boxes) will have to be replaced by automated processes acting over millions of virtual/logical entities (e.g., Virtual Machines, Containers, appliances, etc.). The management (e.g., Fault, Configuration, Accounting, Performance and Security), control and orchestration functions of such future infrastructures will require innovative methods and systems (e.g., self-organization, adaptive control, machine learning, neural networks, etc.) capable of using big data to mitigate this “complexity”.
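
As one concrete, hedged example of such methods (a minimal sketch using scikit-learn's IsolationForest on synthetic telemetry; the metrics, thresholds and remediation action are invented for illustration):

```python
# Flag misbehaving virtual entities from telemetry with an unsupervised
# anomaly detector, then trigger a self-healing action instead of a
# manual trouble ticket.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# synthetic telemetry per VM/container: [cpu %, memory %, packet loss %]
normal = rng.normal([40, 50, 0.1], [10, 10, 0.05], size=(500, 3))
faulty = np.array([[95.0, 97.0, 4.0]])  # a saturated, lossy instance

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
for sample in np.vstack([normal[:2], faulty]):
    if detector.predict(sample.reshape(1, -1))[0] == -1:
        print(f"anomaly {np.round(sample, 1)} -> restart/migrate automatically")
```

The same pattern scales, in principle, to the millions of virtual entities mentioned above, which is exactly where human-only operations break down.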


It is not only a technical “complexity” but also an economic one, concerning business sustainability. The increasing competitive pressure in the Telecommunications market is pushing Network Operators and Service Providers to look for new service scenarios and for solutions to reduce/optimise the overall operations costs, to compensate for cases where revenues are declining.

It is expected that A.I. (e.g., ML over actionable Big Data, etc.) will help to mitigate the “complexity” of this Digital Transformation, but what will its impact be on the architectures of network and service platforms?

It's not just a matter of mathematical methods, algorithms, heuristics, etc.: which A.I. functions, and which interfaces to what, will have to be standardized?