12 September 2017

The “Operating System” model for the Digital Society

We are witnessing a number of techno-economic drivers (e.g., global, low-cost access to IT and network technologies, which are moreover accelerating) that are creating the conditions for a “Cambrian explosion” of new roles, services, value chains, etc. This is true for Telecommunications/ICT and also for several social contexts (e.g., Smart Cities) and industrial ecosystems (e.g., Industry 4.0).

We realize that Telecom infrastructures will have to “tame” a growing “complexity” (e.g., hyper-connectivity, heterogeneity of nodes and systems, high levels of dynamism, the emergence of non-linear dynamics in feedback loops, possible uncontrolled interactions); they will have to be very effective, low-cost and self-adaptable to highly variable context dynamics (e.g., the need to change strategies with other Players, fast provisioning of any service and adaptive enforcement of business policies for end-Users and Vertical Apps requirements, local-vs-global geographical policies, etc.).

We’ve mentioned several times that, in order to face such challenges, we need proper innovative paradigms (e.g., based on DevOps, adopting Computational Intelligence, capable of scaling to millions of VMs/Containers) to manage the future softwarized Telecom infrastructures (i.e., based on SDN and NFV, pursuing the decoupling of HW from SW, virtualization and the Cloudification/Edgification of functions and services). And this implies challenges that are not only technical/engineering but also related to governance, organization, culture, skills, etc.

Now let’s open this vision and extend the concept of infrastructure beyond the Telecoms. A Smart City also has its own physical infrastructure, which is heterogeneous and includes a complex variety of resources whose dynamics are intertwined; so does a smart factory in I4.0. They too will have to be very effective, low-cost and self-adaptable to highly variable context dynamics.

So my take is that we are facing a sort of non-linear phase transition of a complex system (the intertwining of our Society, Industries, Culture…) whose control variables include hyper-connectivity, globalization, digitalization, etc. How can we extract value from this phase transition?

The model of an Operating System (OS) would represent, for any Industry adopting it, the “strategic and unifying approach” to manage this phase transition. Not only does it allow taming the complex oscillations of this transition, it also extracts value from them dynamically, creating and running ecosystems, even new ones.

In essence, this requires the virtualization/abstraction of all resources/services/functions (in a broad sense, including those of a Smart City or an I4.0 Factory) and secure API access to them for End-Users/Developers, Third Parties and other related Operators.
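To make this concrete, here is a minimal Python sketch (all class and resource names are my own illustrative assumptions, not an existing standard) of how heterogeneous resources could be wrapped behind a common abstraction and exposed only through a token-checked API gateway:

```python
# Minimal sketch (my own assumptions, not a standard API): heterogeneous resources
# -- a Telecom function, a Smart City sensor, an I4.0 machine -- wrapped behind one
# abstraction and exposed through a token-checked gateway.
from abc import ABC, abstractmethod

class AbstractResource(ABC):
    """Common abstraction for any physical or virtual resource."""
    @abstractmethod
    def describe(self) -> dict: ...
    @abstractmethod
    def invoke(self, operation: str, **params) -> dict: ...

class CityTrafficSensor(AbstractResource):
    def describe(self) -> dict:
        return {"type": "sensor", "domain": "smart-city", "ops": ["read"]}
    def invoke(self, operation: str, **params) -> dict:
        if operation == "read":
            return {"vehicles_per_minute": 42}   # placeholder reading
        raise ValueError(f"unsupported operation: {operation}")

class SecureAPIGateway:
    """Gatekeeper giving End-Users/Developers/Third Parties controlled access."""
    def __init__(self, valid_tokens: set[str]):
        self._resources: dict[str, AbstractResource] = {}
        self._valid_tokens = valid_tokens
    def register(self, name: str, resource: AbstractResource) -> None:
        self._resources[name] = resource
    def call(self, token: str, name: str, operation: str, **params) -> dict:
        if token not in self._valid_tokens:
            raise PermissionError("invalid token")
        return self._resources[name].invoke(operation, **params)

gateway = SecureAPIGateway(valid_tokens={"demo-token"})
gateway.register("traffic-sensor-01", CityTrafficSensor())
print(gateway.call("demo-token", "traffic-sensor-01", "read"))
```

The point is not the specific classes, but that a City sensor, a Telecom function or a factory machine can all look the same to Developers and Third Parties once abstracted and access-controlled.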


The future sustainability of the Digital Society is about the flourishing and running of 5G Softwarised Ecosystems.

My take is that we need systems thinking to design this Digital Society OS, capable of enabling dynamic trade-offs between Slow-Cheap and Fast-Costly, and between Flexible-General and Inflexible-Special.

Eventually, look at how Nature implemented it... with a very distributed and resilient approach.


08 September 2017

Talking the language of Softwarization: towards Service2Vectors (part 2)

Modularization of SDI functions and services can be achieved through Network and Service Primitives (NSP): this increases the flexibility, programmability and resilience of the SDI, for example improving agility in software development and operations when using DevOps approaches. On the other hand, there is a cost to pay: it increases the complexity of the SDI.

Then the management, control and orchestration (and in general all the OSS/BSS processes) of an SDI have to deal with an enormous number of NSP which must be interconnected/hooked and operated to implement (the logic of) network services and functions. Moreover, these NSP have to be continuously updated and released.

This can be simplified and, above all, automated by using a dynamic multi-dimensional services space in which to encode distributed representations of all the NSP of an SDI. Remember what is done, for example, in the word-embedding approaches adopted in Natural Language Processing (NLP): see, for instance, this tutorial on the word2vec model by Mikolov et al., which is used for learning vector representations of words.
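As a reminder of how word embeddings work in NLP, here is a minimal word2vec sketch (assuming the gensim library, version 4.x, and a toy corpus of my own) in which words that appear in similar contexts end up with similar vectors:

```python
# Minimal word2vec illustration (assumes the gensim library, version 4.x).
from gensim.models import Word2Vec

# Toy corpus: each "sentence" is a list of tokens.
sentences = [
    ["packet", "forwarding", "rule", "installed"],
    ["packet", "inspection", "rule", "installed"],
    ["flow", "scheduling", "policy", "updated"],
    ["flow", "forwarding", "policy", "updated"],
]

# Skip-gram model (sg=1) learning small dense vectors for each word.
model = Word2Vec(sentences, vector_size=16, window=2, min_count=1, sg=1, epochs=200)

# Words appearing in similar contexts end up with similar vectors.
print(model.wv.most_similar("forwarding", topn=3))
```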

Leveraging this thinking, I’ve invented a method (service2Vectors) for the distributed representation of NSP with a vector of several elements, each of which captures the relationships with other NSP. So each NSP is represented by a distribution of weights across the elements of the vector, which comes to represent in some abstract way the ‘meaning’ of an NSP. These NSP vectors can be seen as single points in a high-dimensional service space. This multi-dimensional space can be created and continuously updated by using Artificial Intelligence (A.I.) learning methods (e.g., recurrent neural networks).

In an SDI there might be thousands of different NSPs, or even more: all of them form a sort of vocabulary whose terms can be used to express SDI services (for example through an intent-based language, example below). Let’s assume, for example, that this vocabulary of NSP has 1000 elements: then each vector representing an NSP will have V = 1000 elements, and the NSP can be represented by a point in a space of 1000 dimensions.
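Before any learning, each NSP of such a vocabulary can be represented trivially as a one-hot vector of V elements; a toy sketch (with hypothetical NSP names and a vocabulary far smaller than 1000) could look like this:

```python
# Sketch (hypothetical NSP names): a vocabulary of V primitives, each mapped to a
# one-hot vector of V elements before any embedding is learned.
import numpy as np

nsp_vocabulary = ["packet_forward", "packet_inspect", "packet_drop",
                  "flow_schedule", "rate_limit"]          # in practice V ~ 1000
V = len(nsp_vocabulary)
index = {name: i for i, name in enumerate(nsp_vocabulary)}

def one_hot(nsp_name: str) -> np.ndarray:
    v = np.zeros(V)
    v[index[nsp_name]] = 1.0
    return v

print(one_hot("packet_inspect"))   # [0. 1. 0. 0. 0.]
```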

This distributed representation of NSP in a multi-dimensional services space allows A.I. learning algorithms to process the “language” (e.g., an intent-based language, example below) used by Applications and Users to formulate service requests to the SDI. In fact, NSP vectors can be given as inputs to a recurrent neural network which can be trained, for example, to predict a certain service context given an NSP and/or, vice versa, an NSP given a certain service context. The learning algorithm could go, for example, through sets of thousands of service contexts (existing compositions of NSP).
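One plausible way to sketch this idea in code (not necessarily the exact service2Vectors method, and using a word2vec-style skip-gram instead of a recurrent network) is to treat existing service compositions, i.e. ordered chains of NSP, as “sentences” and learn NSP embeddings from them. NSP names below are hypothetical and gensim 4.x is assumed:

```python
# Plausible sketch of the idea (not necessarily the exact method described here):
# treat existing service compositions -- ordered chains of NSP -- as "sentences"
# and learn NSP embeddings with a skip-gram model instead of a recurrent network.
from gensim.models import Word2Vec

service_compositions = [
    ["classify", "firewall", "nat", "forward"],
    ["classify", "dpi", "firewall", "forward"],
    ["classify", "nat", "load_balance", "forward"],
    ["classify", "dpi", "load_balance", "forward"],
]

nsp_model = Word2Vec(service_compositions, vector_size=32, window=2,
                     min_count=1, sg=1, epochs=300)

# Each NSP is now a point in a 32-dimensional service space.
print(nsp_model.wv["firewall"].shape)        # (32,)
```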

Once the recurrent neural network is trained to make these predictions to some level of accuracy, the output is the so-called space matrix of the trained neural network, capable of projecting any NSP vector into the space. NSPs with similar contexts tend to cluster in this space; for example, the matrix can be queried to find relationships between NSPs, or the level of similarity between them.
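Continuing the sketch above (it reuses the nsp_model object trained there), the learned space can be queried for related NSPs, either through the library helpers or directly on the embedding matrix with cosine similarity:

```python
# Continuing the previous sketch: query the learned space for related NSPs.
# NSPs that occur in similar compositions cluster together.
import numpy as np

print(nsp_model.wv.most_similar("dpi", topn=2))
print(nsp_model.wv.similarity("firewall", "nat"))

# The same query done "by hand" on the embedding matrix with cosine similarity:
E = nsp_model.wv.vectors                      # the matrix of all NSP vectors
i = nsp_model.wv.key_to_index["firewall"]
j = nsp_model.wv.key_to_index["nat"]
cos = E[i] @ E[j] / (np.linalg.norm(E[i]) * np.linalg.norm(E[j]))
print(cos)
```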

An alternative is providing a distributed representation of the SDI services (instead of the single NSP) with a vector of several elements, each of which captures the relationships with other SDI services. So each SDI service is represented by a distribution of weights across the elements of the vector. These SDI service vectors can be seen as single points in a high-dimensional service space. This multi-dimensional space can be created and continuously updated by using Artificial Intelligence (A.I.) learning methods (e.g., recurrent neural networks).

This reminds me of what Prof. Geoff Hinton argued when introducing the term "thought vector": “it is possible to embed an entire thought or sentence (including actions, verbs, subjects, adjectives, adverbs, etc.) as a single point (i.e., a vector) in a high dimensional space”. Then, if the thought-vector structure of human language encodes the key primitives used in human intelligence, the SDI service vector structure could encode the key primitives used by “application intelligence”.

Moreover, thought vectors have been observed empirically to possess some properties: one, for example, is known as "Linear Structure", i.e., certain directions in thought-space can be given semantic meaning, and consequently the whole thought vector is the geometrical sum of a set of directions or primitives. In the same way, certain directions in the SDI service space can be given a context meaning, and consequently the whole SDI service vector can be seen as the geometrical sum of a set of directions or primitives.
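A toy numerical illustration of this "linear structure" idea (entirely synthetic directions, chosen only to make the arithmetic exact) shows how a service vector can be built as the sum of semantic directions, so that analogies reduce to vector arithmetic:

```python
# Toy numeric sketch of the "linear structure" idea with hypothetical directions:
# a service vector is built as the geometric (vector) sum of a few semantic
# directions, and analogies become simple vector arithmetic.
import numpy as np

rng = np.random.default_rng(0)
d_security = rng.normal(size=32)     # direction for "security-related"
d_http     = rng.normal(size=32)     # direction for "HTTP traffic"
d_dns      = rng.normal(size=32)     # direction for "DNS traffic"

firewall_http = d_security + d_http  # an NSP vector as a sum of directions
firewall_dns  = d_security + d_dns

# The analogy firewall_http - http + dns lands (exactly, in this toy) on firewall_dns.
candidate = firewall_http - d_http + d_dns
cos = candidate @ firewall_dns / (np.linalg.norm(candidate) * np.linalg.norm(firewall_dns))
print(round(float(cos), 3))          # 1.0
```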

Hopefully this will pave the way for Humans and non-human Users (apps, avatars, smart or cognitive objects, processes, A.I. entities, ...) "to talk" with Softwarised Infrastructures in a common, quasi-natural language.

06 September 2017

Talking the language of Softwarization: towards Service2Vectors (part 1)

Cost reductions and new revenue flows are key business drivers for the sustainability of Network Operators. Telecommunications infrastructures are growing in heterogeneity and complexity, but at the same time they should be agile and flexible, reliable and programmable... to cope with market dynamics (with increasing "frequencies").
This is a new cycle of "complexity", in one word. Complexity is ever-growing in Nature, by definition, at least until a "tool" is found to "tame" it and make a "phase transition" to a new "state".
We know that recent advances in enabling technologies such as SDN and NFV offer the means of decoupling the hardware and software architectures and introducing the virtualization of resources (the so-called Softwarization of Telecommunications infrastructures). At the same time, the evolution of Cloud Computing towards Edge and Fog Computing, Artificial Intelligence, multi-level APIs, etc. are other technology trends which are intercepting SDN and NFV in shaping future Software Defined Infrastructures (SDI).
Still, the management, control and orchestration systems of an SDI should make sure that the infrastructure services, characterized by specific KPIs (Key Performance Indicators), are provisioned to Applications and Users upon their specific requests. But this implies carrying out operational tasks in a new way: management and control of both physical resources (e.g., physical nodes and IT servers, physical connections) and (millions of) virtual resources (e.g., virtual machines or containers, virtual links and virtual network functions), scheduling and end-to-end orchestration of virtual network functions and services, etc. In fact, Softwarization means that virtualized network functions and services can be dynamically allocated and even executed in Cloud Computing and/or Edge Computing (e.g., in a centralized Data Centre and/or in mini Data Centres, which can be located at network PoPs equipped with processing and storage capabilities).
This also allows exploiting the model of Service Chaining (also known as Service Function Chaining) in an SDI. In general, Service Chaining is about creating and provisioning a network service as a sequence or chain of interconnected network functions and services, by hooking up the logical resources where they are executed through the steering of the traffic.
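As a toy illustration of Service Chaining (function names and packet fields are my own, purely for the example), a "packet" can be steered through an ordered chain of network functions, with the chain stopping if any function drops it:

```python
# Minimal toy sketch of Service Function Chaining: a "packet" (a dict here) is
# steered through an ordered chain of network functions. Names are illustrative only.
from typing import Callable, Optional

Packet = dict
NetworkFunction = Callable[[Packet], Optional[Packet]]   # None = packet dropped

def classifier(pkt: Packet) -> Optional[Packet]:
    pkt["class"] = "web" if pkt.get("dst_port") == 80 else "other"
    return pkt

def firewall(pkt: Packet) -> Optional[Packet]:
    return None if pkt.get("src_ip") == "10.0.0.66" else pkt   # drop blacklisted source

def nat(pkt: Packet) -> Optional[Packet]:
    pkt["src_ip"] = "192.0.2.1"
    return pkt

def run_chain(pkt: Packet, chain: list) -> Optional[Packet]:
    for nf in chain:
        pkt = nf(pkt)
        if pkt is None:          # traffic steering stops if a function drops the packet
            return None
    return pkt

web_service_chain = [classifier, firewall, nat]
print(run_chain({"src_ip": "10.0.0.7", "dst_port": 80}, web_service_chain))
```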
Obviously, just like applications, the network functions and services of an SDI can be modeled and developed by combining software tasks and/or Microservices and network primitives. As is known, an application can be modeled as a core part containing the application logic plus adapters that interface the application with the external world. Examples of adapters include database access components, messaging components that produce and consume messages, or web components that either expose APIs or implement a User Interface (UI). Instead of developing a monolithic application, the Microservices architectural paradigm proposes splitting the application into a set of smaller, interconnected services (called Microservices). Microservices are basically modular software components, each of which runs as a unique process and can be deployed independently, with minimal centralized management. For example, some Microservices can expose an API that is consumed by other Microservices or by the application’s clients, while other Microservices can implement a web UI.
One advantage is that these smaller components can be developed and scaled independently: this improves agility in software development and operations, promoting resilience and scalability. Microservices can also be used for developing network and service functions in an SDI: in fact, a Virtual Network Function can be decomposed into a sequence/combination of Microservices and/or network primitives.
Generalizing, Microservices could be seen as any kind of packet processing primitives (also called network or service primitives) which could be dynamically composed and executed on different hardware architectures. Examples of such packet processing primitives could be: packet forwarding, packet inspection, modification (including dropping a packet), queuing, flow control and scheduling, or any other software tasks/functions (such as those required to create any VNF) providing access to nodes (e.g., node address, interfaces, link status) and their storage and processing resources.
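At this finer granularity, a sketch of the same idea (hypothetical primitive names) is to model each packet-processing primitive as a small callable and to build a VNF as their dynamic composition:

```python
# Toy sketch: finer-grained packet-processing primitives (hypothetical names)
# composed dynamically into a single virtual function. Each primitive is a small
# callable; a VNF is just their composition.
from functools import reduce

def inspect(pkt):
    pkt.setdefault("tags", []).append("inspected")
    return pkt

def modify_ttl(pkt):
    pkt["ttl"] = max(pkt.get("ttl", 64) - 1, 0)
    return pkt

def enqueue(pkt):
    pkt.setdefault("tags", []).append("queued:best-effort")
    return pkt

def compose(*primitives):
    """Build a VNF by chaining primitives left to right."""
    return lambda pkt: reduce(lambda p, prim: prim(p), primitives, pkt)

simple_vnf = compose(inspect, modify_ttl, enqueue)
print(simple_vnf({"ttl": 64}))
```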
Part 2 to come next !

01 September 2017

The rise of a Networked AI with humans-in-the-loop

The programmability, flexibility and high levels of automation of 5G operations will reduce costs (e.g., OPEX) and create new service paradigms which might even be beyond our imagination. Some examples concern applications of the Internet of Things, the Tactile Internet, advanced Robotics, Immersive Communications and, in general, the X-as-a-Service paradigm.

Let us consider some examples. Cloud Robotics and 5G-controlled robotics will have huge impacts in several sectors, such as industrial and agricultural automation, in smart cities and in many domestic applications. In agriculture, autonomous machines will be used for tasks like crop inspection, the targeted use of water and pesticides, and for other actions and monitoring activities that will assist farmers, as well as in data gathering, exchange and processing for process optimization. Interestingly, Cloud Robotics and 5G APIs can be opened to end-users and third parties to develop, program and provide any type of related service or application pursuing specific tasks. In industry, this will pave the way for process automation, data exchange and robotics manufacturing technologies (e.g., Industry 4.0). It is likely that we will soon see robotic applications in the domestic environment: it is estimated that by 2050-2060 one third of European people will be over 65. The cost of the combined pension and health care system could be close to 29% of the European GDP. Remotely controlled and operated robots will enable remote medical/supportive care and open up a new world of domestic applications which may be adopted by the entire population (e.g. cleaning, cooking, playing, communicating, etc.).

5G will have a big impact also on the automotive and transportation markets. Nevertheless, there are still open issues. In fact, even if significant progress has been made in developing self-driving/autonomous machines equipped with sensors, actuators and ICT capabilities, achieving very low reaction times still represents an open challenge. As a matter of fact, autonomous driving in real traffic is a very challenging problem: reaction times of a few milliseconds, or even less, are needed for safety reasons to avoid sudden and unpredictable obstacles. This means that a considerable amount of computing and storage power must always be available through ultra-low latency links. Today, the amount of computing and storage power that can be fitted locally in a machine/vehicle is not enough (for several reasons, e.g., space, dissipation limits, cost restraints, etc.) to cope with these requirements. Huge amounts of data need to be stored and accessed, and the AI methods have to be executed very quickly to reach such levels of reactive autonomy. An ultra-low latency 5G network will allow exploiting the best balance of resources between Cloud and Edge Computing systems, thus offering trade-offs between local and global cognition execution, essential to minimize reaction times.
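As a back-of-the-envelope sketch of this local-vs-global trade-off (all latency and compute figures below are illustrative assumptions, not measurements), one could pick the execution site whose network round-trip plus compute time fits within the vehicle's reaction-time budget:

```python
# Back-of-the-envelope sketch (all numbers are illustrative assumptions): choose
# where to run an inference task so that network round-trip plus compute time
# stays within the vehicle's reaction-time budget.
from typing import Optional

def pick_execution_site(budget_ms: float, sites: dict) -> Optional[str]:
    feasible = {name: s["rtt_ms"] + s["compute_ms"]
                for name, s in sites.items()
                if s["rtt_ms"] + s["compute_ms"] <= budget_ms}
    return min(feasible, key=feasible.get) if feasible else None

sites = {
    "on_board": {"rtt_ms": 0.0,  "compute_ms": 12.0},   # limited local hardware
    "edge_pop": {"rtt_ms": 2.0,  "compute_ms": 3.0},    # mini data centre at a PoP
    "cloud_dc": {"rtt_ms": 25.0, "compute_ms": 1.0},    # big but distant data centre
}

print(pick_execution_site(budget_ms=10.0, sites=sites))  # -> 'edge_pop'
```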

In a similar direction, real-time image/video processing, for example for recognizing shapes, faces or even emotions in photos or live-streamed video, represents another challenging case study for AI in 5G infrastructures. In fact, this could be radically improved by the distributed execution of deep learning solutions in a 5G infrastructure capable of providing ultra-low latency connectivity links. Also in this case, performance will be improved by the flexibility of 5G in dynamically allocating/moving both huge data sets and software tasks/services where/when it is most effective to have them.

Another example is Immersive Communications, which refers to a paradigm going beyond the “commoditization” of current communication means (e.g., voice, messaging, social media, etc.). Immersive Communications will be enabled by new advanced technologies for social communication and interaction, for example through artificially intelligent avatars, cognitive robot-human interfaces, etc. Eventually, the term X-as-a-Service will refer to the possibility of providing (anytime and anywhere) wider and wider sets of 5G services by means of anything from machines to smart things, from robots to toys, etc. If today we are already linking our minds with laptops, tablets, smartphones, wearable devices, and avatars, in the future we will see enhanced forms of interaction between humans, intelligent machines and software processes.

So it is argued that current socio-economic drivers and ICT trends are already bringing Computer Science, Telecommunications and AI to a convergence.

In this profound transformation, mathematics will be the language, computation will be about running that language (coded in software), storage will be about saving this encoded information, and, eventually, the network will be creating relationships, at almost zero latency, between these sets of functions. This trend will also see the rise of the so-called Networked AI with humans-in-the-loop. Today there are already some examples, such as analyst-in-the-loop security systems, which combine human experts’ intuition with machine learning capable of predicting infrastructure cyber-attacks.

Although security and privacy are outside the scope of this work (which focuses on 5G enabling capabilities), these two strategic areas deserve some further consideration. On one side, 5G could provide the means for improving security, for example because information will be available everywhere and the context needed to detect anomalous behavior will be more easily provided; on the other side, enabling technologies such as SDN and NFV have the potential to create situations where all primary personal data and information are held and controlled at a global level, even outside the national jurisdiction of individual citizens. The real-time processing of several thousands of images per second and of live-streamed video has already been mentioned: this will have wide-ranging, but also controversial, applications, from predicting crimes, terrorist acts and social upheaval to law enforcement and psychological analysis. Eventually, in the long term, this might transform everything from policing to the way people interact every day with banks, stores and transportation services: this will have huge security and privacy implications.

Reasonably, privacy and security concerns should be addressed by design, with systemic solutions capable of operating at different levels in future 5G infrastructures: for example, such a design will need to consider issues such as automated mutual authentication, isolation, data access and the management of multiple virtual network slices coexisting on the same 5G infrastructure.

16 May 2017

Operating Systems for Cognitive Cities

The idea of exploiting a sort of Operating System for Smart Cities is not new today. For a few years now, some Cities have been developing and experimenting with it. Just to mention some examples, there are the brilliant experiences of Bristol and Barcelona with the so-called CityOS.

We know that in Computing systems the adoption of Operating Systems facilitated application development and diffusion by providing controlled access to high-level abstractions of the hardware resources (e.g., memory, storage, communication) and information (e.g., files, directories). Similarly, in a Smart City, one may imagine a sort of Operating System facilitating the development of the City's applications and services by providing controlled access to high-level abstractions of the City's resources.

In general, a City Operating System will allow:
  • collecting and sharing data in a city;
  • elaborating said data and inferring decisions (actuation), enacted through multiple actuators, devices and smart things, to communicate, control and optimize the city’s processes, etc.;
  • providing any sort of ICT services for a City.

In other words a City Operating System will allow:
  • sensing, collecting and storing (even locally) massive data sets (through terminals, smart things, intelligent machines);
  • quickly transporting huge sets of data (through high-bandwidth and ultra-low latency network connections) to where it is more convenient (allocation of virtual functions);
  • elaborating big data (with A.I. and Cognitive methods in Cloud and Edge/Fog Computing) to infer decisions for actuating/controlling local actions;

so it will introduce cognitive “control loops” into the City, creating a sort of Nervous System for it ! That's why I like to call the cities of the future, Cognitive Cities.
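A minimal sketch of one such cognitive control loop (sensor, decision logic and actuator are all hypothetical placeholders) for, say, a traffic junction could look like this:

```python
# Minimal sketch (hypothetical sensors/actuators) of one cognitive control loop a
# City OS could run: sense, decide in the cloud/edge, actuate locally.
import random, time

def sense_traffic(junction_id: str) -> float:
    """Pretend sensor reading: vehicles per minute at a junction."""
    return random.uniform(0, 100)

def decide_green_seconds(load: float) -> int:
    """Toy 'cognition': longer green light when the junction is loaded."""
    return 20 + int(load / 100 * 40)       # between 20 and 60 seconds

def actuate_traffic_light(junction_id: str, green_s: int) -> None:
    print(f"{junction_id}: set green phase to {green_s}s")

def control_loop(junction_id: str, cycles: int = 3) -> None:
    for _ in range(cycles):
        load = sense_traffic(junction_id)          # sense + collect
        green = decide_green_seconds(load)         # elaborate / infer decision
        actuate_traffic_light(junction_id, green)  # actuate
        time.sleep(0.1)                            # placeholder for the real cycle period

control_loop("junction-42")
```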

Obviously, a Cognitive City OS will include some of the functions/capabilities which are typical of an Operating System, but referring specifically to the resources and services of a City... and my take is that A.I. will be everywhere around us, fundamental to help tame the cyber-security risks.



In fact, up to today we have been quoting the well-known sentence "Software is eating the World", but looking ahead it will be more like "Cognition will optimise the World"!

Take a look here link

02 May 2017

Technology evolution as a collective phenomenon

Today we are witnessing a growing interest in Artificial Intelligence methods and systems (from the terminals to the Network nodes to the Clouds), and in exploiting cognition capabilities in robots or autonomous vehicles, self-learning avatars, autonomic bots, etc... We are even looking at a sort of Nervous System for the overall Digital Society and Economy (see my previous posts). It looks like we are pursuing the embodiment of the "cognition and autonomic" paradigms in Telecommunications and ICT.

Along this avenue, I believe we need to leverage much more than we are doing on Biology, Neuroscience, Analytical Psychology and all those efforts which target the understanding of Biological Intelligence; not only that, we also need to leverage on Physics for the deeper physical phenomena governing cognition. Some solutions are already there, and maybe we just need to apply them to a specific new context. I could mention several of them.

The theory of F. Varela is one example of a still very popular approach to understanding the roots of cognition in very simple living entities (see “The Embodied Mind: Cognitive Science and Human Experience”, Cambridge, MA: MIT Press).

The theory argues that the adaptive behaviour of a simple living system (e.g., an ant or a bee) is based on two interrelated points: 1) perception consisting of perceptually guided action, and 2) cognitive structures, emerging from the recurrent sensori-motor patterns, enabling action to be perceptually guided. In particular, during their life, living systems cross several diverse cognitive domains (called micro-worlds) which are generated from their (local and maybe also non-local) interactions with the external environment: within a micro-world the behaviour is determined by pre-defined sensori-motor loops, very simple, fast and automatic; from time to time breakdowns occur, i.e. unexpected disruptive situations determining the need to change from one cognitive domain (i.e. from one micro-world) to another. Importantly, this bridging (during breakdowns) is assured by the “intelligence” of the nervous system (allowing a new adaptation and the consequent learning of new sensori-motor loops). So, within a certain micro cognitive domain, the behaviour is directed by a set of sensori-motor loops, which are fast and perform sort of well-trained, automatic reactions to the local situation. When a breakdown occurs, which is an unexpected event, the nervous system reacts by developing a set of possible alternative reactions. During these trial-and-error phases, eventually a specific sensori-motor loop prevails which allows reacting properly to the unexpected event. So the living system, entering this new micro cognitive domain, has learnt a new sensori-motor loop, and so on.

Example: imagine a termite bringing some food into the nest (a sensori-motor loop) when suddenly a gallery collapses: this is a breakdown. The termite has to enter a new cognitive micro-world to try to overcome the obstacle. A new sensori-motor loop is developed and learnt (how to overcome a collapsed gallery). The theory says that the connections between these micro cognitive domains happen through a sort of overall structural coupling with the overall environment (the colony), through a sort of “field”, i.e. space-time gradients of electromagnetic fields (and potentials) and sounds, intertwined with tactile and metabolic information. These gradients trigger the overall collective reactions, e.g. in terms of alignment of micro cognitive worlds. This is collective intelligence; this is how Nature's technologies work in a sustainable way.
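A toy sketch of this breakdown-driven adaptation (my own drastic simplification of the theory, with made-up situations and reactions) could look like this: the agent reacts automatically when it already owns a sensori-motor loop for the situation, and otherwise tries alternatives until one "works" and is learnt:

```python
# Toy sketch (my own simplification of the theory): an agent with a repertoire of
# sensori-motor loops; when none of them handles the current situation (a
# "breakdown"), it tries alternatives and keeps the one that works.
import random

class Agent:
    def __init__(self):
        # known micro-worlds: situation -> learnt reaction
        self.sensorimotor_loops = {"food_found": "carry_to_nest"}
        self.candidate_reactions = ["dig_around", "climb_over", "recruit_others"]

    def act(self, situation: str) -> str:
        if situation in self.sensorimotor_loops:          # fast, automatic reaction
            return self.sensorimotor_loops[situation]
        # breakdown: trial-and-error until some reaction "works" (random here)
        while True:
            reaction = random.choice(self.candidate_reactions)
            if random.random() < 0.5:                     # stand-in for "it worked"
                self.sensorimotor_loops[situation] = reaction   # learn the new loop
                return reaction

agent = Agent()
print(agent.act("food_found"))          # existing loop
print(agent.act("gallery_collapsed"))   # breakdown -> a new loop is learnt
print(agent.sensorimotor_loops)
```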

This cognition model (balancing local vs global cognition) could be perfectly applied to develop swarms of robots or drones in Industry 4.0 scenarios.

More generally, I would argue that technology evolution, in order to be sustainable, should be seen as a collective phenomenon! Have a look at this amazing paper: N. Goldenfeld, “Life is physics: evolution as a collective phenomenon far from equilibrium”.


27 April 2017

A.I.: What's next ? Biological Intelligence (B.I.)

Biological Intelligence concerns all the control and adaptive systems that are not artefacts, but rather are exploited by Nature in living entities after millions of years of evolution.

Normally, when we think about Biological Intelligence we refer to human brains and nervous system functions, but there is much more in Nature. Think about the collective intelligence in colony species like ants and bees, capable of adapting and co-evolving as ecosystems in changing environments. These colonies (as our organs!) are complex adaptive systems, open in the sense that they exchange matter, energy and information with the external environment. This Biological Intelligence is self-organizing.

Biological Intelligence is, obviously, far beyond our most advanced thinking on Artificial Intelligence (A.I.) today. A.I., in most cases, is still based on heuristics and algorithms (e.g., ML, DL, neural networks, etc.), using binary logic, but, above all, it is reductionist. Biological Intelligence leverages the deeper quantum phenomena which are at the most basic level of life: binary logic is very different from the tangled interactions of quantum mechanics.

In A.I. avenues we are making outstanding progress and we have great visions of how to make a business out of that! For example, two amazing projects were announced last week aiming at advancing A.I.: Facebook’s plan to develop a non-invasive brain-computer interface that will let you type at 100 words per minute, and Elon Musk’s proposal that we become superhuman cyborgs to deal with superintelligent AI.

Also, a few days ago Apple suggested at a TED 2017 conference that "instead of replacing humans with robots, artificial intelligence should be used to give us super-human abilities”. 

No doubt that high bandwidth/low latency connectivity + massive A.I. (Cloud/Edge/Fog) + B.C.I. (or similar advanced interfaces for humans) are likely to bring us to the next big Internet, with far-reaching socio-economic implications... but beyond that there is a much more challenging and impactful frontier for us, which is understanding Biological Intelligence and, as such, life.

In fact, this implies looking at more subtle biological processes and interaction paradigms, maybe less familiar in Computer Science but surely nearer to Quantum Biology.

Capturing the essence of Biological intelligence is the biggest bet we can make !