14 October 2019

Photonic Computing paving the way to a "pervasive intelligence"


One of the most in-demand tasks in AI is extracting patterns and features directly from collected big data. Among the most promising approaches for accomplishing this goal, Deep Neural Networks (DNNs) are the top performers.

The reason why DNNs perform so well is not fully explained yet, but one possible explanation, widely elaborated in the literature, is that, since DNNs are based on an iterative coarse-graining scheme, their functioning is somehow rooted in a fundamental tool of theoretical physics (e.g., the Renormalization Group).
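To make the coarse-graining analogy concrete, here is a minimal toy sketch in Python (purely illustrative, not an implementation of the Renormalization Group): each iteration block-averages the data, discarding fine-grained detail in much the same way successive pooling layers of a DNN do.

    # Toy analogy only: iterative block-averaging of a 2D signal,
    # loosely mirroring how pooling layers discard fine detail layer by layer.
    import numpy as np

    def coarse_grain(x, block=2):
        """Average non-overlapping block x block patches, halving the resolution."""
        h, w = x.shape
        return x.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

    np.random.seed(0)
    signal = np.random.rand(16, 16)       # fine-grained "microscopic" data
    for step in range(3):
        signal = coarse_grain(signal)     # each step keeps only coarser features
        print(f"step {step + 1}: shape {signal.shape}")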

The other side of the coin is that this is rather resource-consuming and, as such, energy-demanding. In fact, today DNNs (like other AI models) still rely on Boolean-algebra transistors to perform an enormous amount of computations over huge data sets. This has two major consequences: on one side, chip and processor technologies aren't getting faster at the same pace as AI methods and systems are progressing; on the other, current AI technologies are becoming more and more electricity-hungry.

For example, cloud servers and data centers currently account for around 2% of power consumption in the U.S. According to some forecasts, data centers will consume one fifth of the world's electricity by 2025.

Will this energy-consuming trend really be sustainable in long-term scenarios (e.g., 6G)?

Take a look at this paper - Lovén, Lauri, et al. "EdgeAI: A Vision for Distributed, Edge-native Artificial Intelligence in Future 6G Networks." The 1st 6G Wireless Summit (2019): 1-2.

We recall that in a DNN each layer learns increasingly abstract, higher-level features, providing a useful, and at times reduced, representation of the features received from the layer below. This similarity, more specifically, suggests the intriguing possibility that DNN principles are deeply rooted in quantum electromagnetics. This offers a way to bypass the above roadblocks: developing AI technologies based on photonic/optical computing systems, which are faster and much less energy-hungry than current ones.

Indeed, low-latency and low-energy neural network computations can be a game changer for a pervasive AI. In this direction, fully optical neural networks could offer enhancements in computational speed and reduced power consumption.
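As a rough illustration of why optics helps, several proposals for fully optical neural networks implement each weight matrix through its singular value decomposition, so that the unitary factors can be realized by lossless interferometer meshes and the expensive matrix-vector products are performed by light propagation itself. The Python sketch below reproduces only the linear algebra of such a layer (the function name and the tanh nonlinearity are illustrative assumptions, not a model of any specific photonic hardware):

    # Numerical sketch of an SVD-factored layer, as proposed for optical neural
    # networks; this simulates the math only, not the photonic devices.
    import numpy as np

    def optical_layer(x, W):
        """Apply W = U diag(s) Vh to an input encoded as optical amplitudes.
        U and Vh would map to lossless interferometer meshes, diag(s) to
        attenuators/amplifiers, and the nonlinearity to a saturable absorber."""
        U, s, Vh = np.linalg.svd(W)
        y = U @ (s * (Vh @ x))            # the costly matrix-vector product
        return np.abs(np.tanh(y))         # placeholder optical nonlinearity

    rng = np.random.default_rng(1)
    x = rng.random(4)                     # input amplitudes
    W = rng.standard_normal((4, 4))       # trained weights of one layer
    print(optical_layer(x, W))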

My latest paper on these topics is available at the following link:
https://www.mdpi.com/2624-960X/1/1/11

09 October 2019

A pervasive "edge intelligence"? Yes, but consuming less energy

After about five years of posts addressing various aspects of the evolution towards 5G Cloud-Edge Computing, I've been asked to start elaborating some ideas on "what's next".

My take is that in the next 5-10 years there will be a true techno-economic opportunity to mature and extend the perspective of networks as part of a sort of pervasive "nervous system" of the Digital Society: a vision which I first presented in 2014 at the Plenary of the EuCNC Conference in Bologna.


We know that a biological nervous system is a complex network of nerves and cells that carries messages between the brain and spinal cord and the various parts of the body. The nervous system includes both the central nervous system and the peripheral nervous system: the central nervous system is made up of the brain and spinal cord, while the peripheral nervous system is made up of the somatic and autonomic nervous systems.

Overall, we may summarize that a "nervous system" is about sensing reality, comparing sensations with predictions and, eventually, acting on reality in order to best adapt to the dynamics of the environment. This is a sort of "intelligence", naturally embedded in living organisms.

The idea that the brain, and more generally a nervous system, is like a network of inference engines is not new. As a matter of fact, the main task of the brain is to optimize probabilistic representations of what caused its sensory input: in other words, the brain has a model of the world that it tries to optimize using sensory inputs so as to improve adaptation. This optimization is finessed using a (variational free-energy) bound on surprise.
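For readers less familiar with Friston's formulation, the bound can be stated compactly (in standard notation, added here for clarity rather than quoted from the cited paper):

    F(s, q) = E_{q(\theta)}[ -\ln p(s, \theta) ] - H[ q(\theta) ]
            = -\ln p(s) + D_{KL}[ q(\theta) \| p(\theta | s) ]
            \ge -\ln p(s)

where s is the sensory input, \theta are its hidden causes, q is the brain's approximate posterior over those causes, and -\ln p(s) is the surprise. Minimizing the free energy F with respect to q simultaneously tightens the bound on surprise and improves the internal model of the world.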


And this is done very efficiently, consuming only a few tens of Watts!
This is a great challenge, as today's AI is highly energy-consuming! Current AI technologies are very electricity-hungry, a problem that is manifesting itself both in the cloud and at the edge. Cloud servers and data centers currently account for around 2 percent of power consumption in the U.S. According to some forecasts, data centers will consume one fifth of the world's electricity by 2025.



Take a look at this amazing paper by K. Friston, “The free-energy principle: a unified brain theory?” How can we bring these concepts into a pervasive network to transform it into a "nervous system"? 

In summary, it is likely we'll see a true "internet of intelligence" connecting "minds" with new forms of communications and interactions: sensing reality with the most advanced technologies (e.g., THz sensing), comparing sensations with predictions by means of Optical/Quantum Intelligence (much beyond today's AI) and, eventually, acting on reality to best adapt to the dynamics of the environment.