23 December 2019

Intelligent Metamaterials and Metasurfaces opening new dimensions

Metamaterials are designed and artificially crafted composite materials that derive their properties from their internal microstructure, rather than from the chemical composition found in natural materials.

The core concept of metamaterials is to craft materials by using artificially designed and fabricated structural units (e.g. oscillators) to achieve the desired properties and functionalities. These structural units – the constituent artificial 'atoms' and 'molecules' of the metamaterial – can be tailored in shape and size, the lattice constant and interatomic interaction can be artificially tuned, and 'defects' can be designed and placed at desired locations.  

By engineering the arrangement of these nanoscale unit cells into a desired architecture or geometry, one can tune the refractive index of the metamaterial to positive, near-zero or negative values. Thus, metamaterials can be endowed with properties and functionalities unattainable in natural materials.

The fascinating functionalities of metamaterials typically require multiple stacks of material layers, which not only leads to extensive losses but also brings many challenges in nanofabrication. Many metamaterials consist of complex metallic wires and other structures that require sophisticated fabrication technology and are difficult to assemble. However, these unusual optical effects do not necessarily require volumetric (3D) metamaterials.

Light can also be manipulated with the help of two-dimensional (2D) structures – so-called metasurfaces (or flat optics). Metasurfaces are thin films composed of individual elements, initially developed to overcome the obstacles that metamaterials are confronted with.

According to Coherent Market Insights, the global metamaterials market was valued at US$ 238.9 million in 2018 and is projected to exhibit a CAGR of 39.5% over the forecast period (2019 – 2027).

Some examples of start-ups here:

Take a look at this paper: Intelligent metasurface imager and recognizer 

In this article, we present a proof-of-concept intelligent metasurface working at ~2.4 GHz (the commodity Wi-Fi frequency) to experimentally demonstrate its capabilities in obtaining full-scene images with high resolution and recognizing human-body language and respiration with high accuracy in a smart, real-time and inexpensive way. We experimentally show that our ANN-driven intelligent metasurface works well in the presence of passive stray Wi-Fi signals, in which the programmable metasurface supports adaptive manipulations and smart acquisitions of the stray Wi-Fi signals. This intelligent metasurface introduces a new way to not only “see” what people are doing but also “hear” what people say without deploying any acoustic sensors, even when multiple people are behind obstacles. In this sense, our strategy could offer a new intelligent interface between humans and devices, which enables devices to remotely sense and recognize more complicated human behaviors with negligible cost.
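As a back-of-the-envelope illustration of the "adaptive manipulation" the paper relies on: a programmable (coding) metasurface switches each element between discrete phase states, and the resulting far-field pattern follows from ordinary array theory. Here is a minimal 1-D sketch at ~2.4 GHz; the element count, spacing and 1-bit phase set below are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

def array_factor(coding, d, wavelength, angles):
    """Far-field magnitude of a 1-D, 1-bit coding metasurface.

    coding: sequence of 0/1 element states; state 1 adds a pi phase shift.
    d: element spacing (m); angles: observation angles (rad).
    """
    k = 2 * np.pi / wavelength
    n = np.arange(len(coding))
    phases = np.pi * np.asarray(coding, dtype=float)
    return np.array([abs(np.sum(np.exp(1j * (k * d * n * np.sin(t) + phases))))
                     for t in angles])

wavelength = 0.125                       # ~2.4 GHz in free space
angles = np.linspace(-np.pi / 2, np.pi / 2, 181)
# all elements in state 0: the beam stays at broadside (0 degrees);
# reprogramming the 0/1 pattern redirects the beam, element by element
uniform = array_factor([0] * 16, d=wavelength / 2,
                       wavelength=wavelength, angles=angles)
```

Switching the coding pattern in real time is what lets the metasurface "aim" at different parts of the scene and acquire the stray Wi-Fi signals adaptively.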

In principle, the concept of the intelligent metasurface can be extended over the entire EM spectrum, which will open up a new avenue for future smart homes, human-device interaction interfaces, health monitoring, and safety screening.

20 December 2019

Optical Quantum Computing for a pervasive, fast and low consuming AI

Current AI solutions are quite resource- and energy-hungry, and still time-consuming. In fact, today's DNNs (like other AI models) still rely on Boolean-algebra transistors to do an enormous amount of digital computations over huge data sets. The roadblock is that chipset technologies aren't getting faster at the same pace as AI software solutions are progressing in serving markets' needs.

Will this energy-consuming trend really be sustainable in the long term?

We recall that, in the basic functioning of a DNN, each layer learns increasingly abstract, higher-level features, providing a useful, and at times reduced, representation of the features to the next layer. This suggests the intriguing possibility that DNN principles are deeply rooted in Quantum Field Theory and Quantum Electromagnetics. This, perhaps, offers a way to bypass the above roadblocks: developing AI technologies based on photonic/optical computing systems which are much faster and much less energy-consuming than current ones.

As a matter of fact, while, in line with Moore's law, electronics starts to face fundamental physical bottlenecks, nanophotonic technologies are considered promising candidates to overcome the future limitations of electronics. Consider that DNN operations are mostly matrix multiplications, and nanophotonic circuits can perform such operations almost at the speed of light, and very efficiently, thanks to the nature of photons. In simple words, photonic/optical computing uses electromagnetic signals (e.g., laser beams) to store, transfer and process information. Optics has been around for decades, but until now it has mostly been limited to laser transmission over optical fiber. Technologies that use optical signals to do computations and store data would accelerate AI computing by orders of magnitude in latency, throughput and power efficiency.

Matrix multiplication is the most power-hungry and time-consuming operation in AI algorithms. Extrapolating current trends, the speed of the electronic components performing matrix calculations is likely to be insufficient to support future AI applications, at least in the long term. Reducing electric energy consumption is another strict requirement for sustainability.
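To ground this claim with a sketch (layer sizes below are arbitrary): one dense DNN layer is essentially a single matrix-vector multiplication followed by a cheap nonlinearity, so the multiply-accumulate (MAC) count of the matmul is what dominates the time and energy budget that photonic hardware targets.

```python
import numpy as np

# One dense DNN layer: a matrix-vector multiplication plus a nonlinearity.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 784))   # learned weights (sizes are arbitrary)
x = rng.standard_normal(784)          # input activations
y = np.maximum(W @ x, 0.0)            # the matmul, followed by a ReLU
macs = W.shape[0] * W.shape[1]        # multiply-accumulates for this one layer
```

Even this toy layer needs over 200,000 MACs per input; a full modern model repeats this billions of times.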

Neuroscience may hold the solution. Our brain is not digital, it's analogue, and it makes calculations all the time using electromagnetic signals, consuming only around 30 W.

The advantage of using light to do matrix multiplication pays off significantly in calculation speed and power savings. In fact, instead of using streams of electrons, the calculations are performed by beams of photons that interact with one another, in a medium, and with optical resonators and guiding components. To make it simple: unlike electrons, photons have no mass, travel at light speed and draw no additional power once generated.

Interesting prototypes of all-optical DNNs are already available. For example, the paper [1] presents a feasibility study of an all-optical diffractive DNN. The prototype is made of a set of diffractive layers, where each point (equivalent to a neuron) acts as a secondary source of an electromagnetic wave directed to the following layer.

The amplitude and phase of the secondary wave are determined by the product of the input wave and the complex-valued transmission or reflection coefficient at that point (following the laws of transformation optics). The transmission/reflection coefficient of each point of a layer is a learnable network parameter, iteratively adjusted during the training process (e.g., performed on a computer) using the classical error back-propagation method. After training, the design of each layer is fixed, as the transmission/reflection coefficients of all the neurons of all layers are determined.
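The forward pass just described can be sketched numerically in a few lines: each layer multiplies the incoming complex field by its learned transmission coefficients, and free-space propagation (here via the standard angular-spectrum method) carries the modulated field to the next layer. Grid size, wavelength and spacings below are illustrative assumptions, not the paper's design:

```python
import numpy as np

def diffractive_layer(field, transmission):
    """Each 'neuron' re-emits the incoming field scaled by its learned
    complex transmission coefficient (amplitude and phase)."""
    return field * transmission

def propagate(field, wavelength, dx, z):
    """Free-space propagation to the next layer (angular-spectrum method)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * z), 0.0)   # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Toy forward pass with 3 phase-only layers; after training these
# coefficients would be frozen (here they are random placeholders).
rng = np.random.default_rng(0)
n = 64
field = np.ones((n, n), dtype=complex)                 # plane-wave input
for _ in range(3):
    t = np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n)))
    field = propagate(diffractive_layer(field, t),
                      wavelength=750e-9, dx=1e-6, z=40e-6)
intensity = np.abs(field) ** 2   # a detector array reads these intensities
```

In the physical device the entire loop above happens passively, at the speed of light, with no power drawn by the "computation" itself.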

Other prototypes are based on metamaterials, information metasurfaces, and optical field-programmable gate arrays built from Mach-Zehnder interferometers.
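On the last of these, the basic building block is easy to write down: a lossless Mach-Zehnder interferometer is two 50:50 beamsplitters with two phase shifters, and any such unit is a 2x2 unitary; meshes of these units (in the Reck/Clements sense) can then realize arbitrary NxN unitaries, which is how an optical FPGA performs matrix multiplication. A minimal sketch (conventions for phase placement vary between papers):

```python
import numpy as np

BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # lossless 50:50 beamsplitter

def phase_shifter(p):
    return np.diag([np.exp(1j * p), 1.0])

def mzi(theta, phi):
    """One Mach-Zehnder unit: beamsplitter, phase, beamsplitter, phase.
    Being a product of unitaries, it is itself a 2x2 unitary."""
    return BS @ phase_shifter(theta) @ BS @ phase_shifter(phi)

U = mzi(0.7, 1.3)   # any (theta, phi) pair gives a lossless 2x2 transform
```

Programming the phase pairs across a mesh of such units is the optical analogue of loading a weight matrix.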

[1] Lin, Xing, et al. "All-optical machine learning using diffractive deep neural networks." Science 361.6406 (2018): 1004-1008.

More about this at the following link.

22 October 2019

Edge Computing meets Artificial Intelligence

Edge Computing (EC) is about moving part of the service-specific processing and data storage from Cloud Computing to the edge network nodes. Among the expected benefits of EC deployment in 5G are: performance improvements, traffic optimization and new ultra-low-latency services.

While EC is gaining momentum today, we are witnessing, at the same time, a growing development of Artificial Intelligence (AI) for a wide spectrum of applications, such as intelligent personal assistants, video/audio surveillance, smart-city applications, self-driving and Industry 4.0. The requirements of these applications seem to call for a resource-hungry AI model, whose cloud-centric execution runs in the opposite direction to the migration of computing, storage and networking resources to the edge.

In reality, the two technology trends are crossing in Edge Intelligence (EI): an emerging paradigm meeting the challenging requirements of future pervasive-service scenarios, where optical-radio networks require automatic, real-time joint optimization of heterogeneous computation, communication and memory/cache resources, as well as high-dimensional fast configurations (e.g., selecting and combining optimum network functions and inference techniques).

Moreover, the nexus of EI with distributed ledger technologies will enable new collaborative ecosystems which can include, but are not limited to: network operators, platform providers, AI technology/software providers and Users.

A major roadblock to this vision emerges from long-term extrapolations of the energy consumption of a pervasive Artificial Intelligence embedded into future network infrastructures.

Low-latency and low-energy neural-network computations can be a game changer. In this direction, fully optical neural networks could offer impressive gains in computational speed and power consumption.

14 October 2019

Photonic Computing paving the way to a "pervasive intelligence"

One of the most demanded tasks of AI is extracting patterns and features directly from collected big data. Among the most promising approaches for accomplishing this goal, Deep Neural Networks (DNNs) stand out.

Why DNNs perform so well is not fully explained yet, but one possible explanation, widely elaborated in the literature, is that, being based on an iterative coarse-graining scheme, their functioning is somehow rooted in fundamental theoretical-physics tools (e.g., the Renormalization Group).
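To make the analogy tangible: one Renormalization-Group-style coarse-graining step replaces a block of fine-grained variables with a single summary variable, much like a pooling layer in a CNN reduces a feature map. A toy block-averaging sketch:

```python
import numpy as np

# One coarse-graining step: replace each 2x2 block of fine-grained
# variables by a single summary variable (here, the block average),
# much like average pooling in a CNN.
rng = np.random.default_rng(1)
fine = rng.standard_normal((8, 8))
coarse = fine.reshape(4, 2, 4, 2).mean(axis=(1, 3))   # 2x2 block average
```

Iterating such steps discards microscopic detail while preserving large-scale structure, which is exactly what successive DNN layers appear to do with features.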

The other side of the coin is that this is rather resource-consuming and, as such, energy-demanding. In fact, today's DNNs (like other AI models) still rely on Boolean-algebra transistors to do an enormous amount of computations over huge data sets. This has two major consequences: on one side, chip and processor technologies aren't getting faster at the same pace as AI methods and systems are progressing; on the other, current AI technologies are becoming more and more electricity-hungry.

Today, for example, cloud servers and data centers account for around 2% of power consumption in the U.S. According to some forecasts, data centers will consume one fifth of the world's electricity by 2025.

Will this energy-consuming trend really be sustainable in long-term scenarios (e.g., 6G)?

Take a look at this paper - Lovén, Lauri, et al. "EdgeAI: A Vision for Distributed, Edge-native Artificial Intelligence in Future 6G Networks." The 1st 6G Wireless Summit (2019): 1-2.

Recall that, in a DNN, each layer learns increasingly abstract, higher-level features, providing a useful, and at times reduced, representation of the features to the next layer. This similarity suggests the intriguing possibility that DNN principles are deeply rooted in quantum electromagnetics. This offers a way to bypass the above roadblocks: developing AI technologies based on photonic/optical computing systems which are faster and much less energy-hungry than current ones.

Indeed, low-latency and low-energy neural-network computations can be a game changer for a pervasive AI. In this direction, fully optical neural networks could offer enhancements in computational speed and reduced power consumption.

My latest paper on these topics is available at the following link:

09 October 2019

A pervasive "edge intelligence" ? Yes, but consuming less energy

After about five years of posts addressing various aspects of the evolution towards 5G Cloud-Edge Computing, I’ve been asked to start elaborating some ideas on “what’s next”.

My take is that, in the next 5-10 years, there will be a true techno-economic chance of maturing and extending the perspective of networks as part of a sort of pervasive “nervous system” of the Digital Society: a vision I first presented in 2014 at the Plenary of the EuCNC Conference in Bologna.

We know that a biological nervous system is a complex network of nerves and cells that carry messages to and from the brain and spinal cord to various parts of the body. It includes both the central nervous system and the peripheral nervous system: the central nervous system is made up of the brain and spinal cord, while the peripheral nervous system is made up of the somatic and autonomic nervous systems.

Overall, we may summarize that a "nervous system” is about sensing the reality, comparing sensations with predictions and, eventually, acting on the reality in order to best adapt to the environment dynamics. This is a sort of “intelligence”, naturally embedded in living organisms. 

The idea that the brain, and more generally a nervous system, is like a network with inference engines is not new. As a matter of fact, the main task of the brain is to optimize probabilistic representations of what caused its sensory input: in other words, the brain has a model of the world that it tries to optimize using sensory inputs, to improve adaptation. This optimization is finessed using a (variational free-energy) bound on surprise.
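A deliberately minimal, one-variable reading of this idea (an illustrative toy with an identity generative model, not Friston's full scheme): the brain's estimate descends the gradient of a precision-weighted prediction-error bound until sensory error and prior error balance.

```python
# Toy free-energy minimization for a single Gaussian hidden cause.
def free_energy(mu, s, prior, var_s=1.0, var_p=1.0):
    """Precision-weighted prediction error: sensory term + prior term."""
    return (s - mu) ** 2 / (2 * var_s) + (mu - prior) ** 2 / (2 * var_p)

def update(mu, s, prior, lr=0.1, var_s=1.0, var_p=1.0):
    """One gradient-descent step on the free energy."""
    grad = -(s - mu) / var_s + (mu - prior) / var_p
    return mu - lr * grad

mu, prior, s = 0.0, 0.0, 2.0     # a sensory input that surprises the prior
for _ in range(200):
    mu = update(mu, s, prior)
# with equal precisions, mu settles halfway between prior and data
```

The point of the toy is the loop itself: perception as continuous, cheap, local error-correction rather than brute-force recomputation.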

And this is done very efficiently, consuming only a few tens of watts!
This is a great challenge, as today's AI is highly energy-consuming! Current AI technologies are very electricity-hungry, a problem that is manifesting itself both in the cloud and at the edge. Cloud servers and data centers currently account for around 2 percent of power consumption in the U.S. According to some forecasts, data centers will consume one fifth of the world's electricity by 2025.

Take a look at this amazing paper by K. Friston, “The free-energy principle: a unified brain theory?” How can we bring these concepts into a pervasive network to transform it into a "nervous system"? 

In summary, it is likely we'll see a true “internet of intelligence” connecting “minds” with new forms of communications and interactions: sensing reality with the most advanced technologies (e.g., THz sensing), comparing sensations with predictions by means of Optical/Quantum Intelligence (well beyond today's AI) and, eventually, acting on reality to best adapt to the environment dynamics.

31 July 2019

Complex Deep Learning with Quantum Optics

The rapid evolution towards future telecommunications infrastructures (e.g., 5G, the fifth generation of mobile networks) and the internet is renewing a strong interest in artificial intelligence (AI) methods, systems, and networks.

Processing big data to infer patterns at high speed and with low power consumption is becoming an increasingly central technological challenge. Electronics is facing fundamental physical bottlenecks, whilst nanophotonic technologies are considered promising candidates to overcome the limitations of electronics.

Today, there is evidence of an emerging research field, rooted in quantum optics, where the technological trajectories of deep neural networks (DNNs) and nanophotonics are crossing each other.

This paper elaborates on these topics and proposes a theoretical architecture for a Complex DNN made from programmable metasurfaces; an example is also provided showing a striking correspondence between the equivariance of convolutional neural networks (CNNs) and the invariance principle of gauge transformations.

Paper available at the following link:

12 March 2019

5G EdgeCloud - part 2

The “Cloudification” of telecommunications infrastructures and MEC/Edge-Cloud leverage the (almost) full “virtualization” of both resources (e.g., processing, storage and networking) and network/service functions (e.g., Virtual Network Functions) up to the edge (i.e., the access and distribution segments), or even beyond it, up to the terminals or smart things (i.e., Fog Computing).

MEC/Edge-Cloud is just one piece of the overall puzzle, as the border with Cloud Computing is quickly disappearing!

This innovation trend is offering telecommunications Operators the opportunity to extend their business role: not only Service Providers (for SaaS) but also Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) Providers, in global markets.

This creates the conditions for boosting new open ecosystems, easing the life of application and service developers.

In order to materialize this vision, there should be a clear definition of the services offered by 5G IaaS and PaaS, and of the related open/standard APIs to access them.

For example, the Infrastructure-as-a-Service (IaaS) layer offers:
  • Raw virtual resources for connectivity, processing, storage;
  • Virtual Network Functions and other Network Services (e.g., middleboxes such as bridges, routers, load balancers, firewalls, video optimizers, etc).

The Platform-as-a-Service (PaaS) layer provides controlled access - through standard APIs - to the underlying IaaS. Moreover, the PaaS includes, for example:
  • Operating-system services; service and app development instruments and tools; database management capabilities; business analysis; A.I. tools; etc.

The following picture summarizes a number of possible business models, capable of re-defining the equilibria of the overall industry.

08 March 2019

5G EdgeCloud - part 1

It was mid-2013 when we published one of the first visionary papers on EdgeCloud, in the IEEE Communications Magazine.

Six years later, MEC/EdgeCloud is definitely in the industry spotlight, worldwide. In fact, MEC/EdgeCloud is expected to play a key strategic role in the Digital Transformation towards 5G.

There is evidence that, today, Network and Service Providers are exploring different strategies for MEC/Edge-Cloud introduction and exploitation, mainly (but not only) motivated by the potential opportunities for: (i) saving costs in the Digital Transformation of network and service infrastructures; (ii) generating new revenues, e.g., by improving the performance of current services and enabling new ones, with the related business models.

These topics are addressed by several standardization bodies and fora.
A non-exhaustive list includes:
  • Telecom Infra Project (WGs on Edge Computing)
  • EdgeX Foundry
  • Open Edge Computing
In general, the overall standardization picture is rather fragmented: these bodies address MEC/Edge from different, not fully overlapping, perspectives. Still, there is a common awareness that global interoperability is a “must” for enabling new service ecosystems.

This means that the industry needs to align on open, common APIs that ease the work of service and app developers: this is crucial to promote innovation and accelerate the development of third-party applications and services, enabling Network and Service Providers to capitalize on their EdgeCloud investments.

29 November 2018

Towards Quantum Technologies and Services in Telcos

There is evidence of increasing efforts and investments in innovation activities on Quantum Technologies. Notable examples are the activities of Microsoft, IBM, HP, Toshiba, Google, NASA, Intel, Alibaba, BT, TID, KT and several other players in Academia and Centers of Excellence.

Quantum technologies and architectures show different levels of maturity, but it is believed that the first commercial systems are likely to be available within five to ten years: advanced prototypes, and in some cases commercial solutions, are already available.

A future breakthrough in the development of Quantum technologies and services at affordable prices will have systemic and far-reaching impacts, e.g.:

  • a Quantum Internet capable of exchanging information through fully optical networks and processing it, optically, in the form of encoded photons (with a higher level of security than today);

  • the development of disruptive applications in the areas of cryptography, cyber-security and anti-counterfeit transactions with “quantum money”, finance, but also in bioinformatics, quantum machine learning and quantum intelligence;

  • radical implications in other sectors and industries, such as new faster ways of processing genetic big data, quantum biology and medicine or developing of new nano-tech smart materials.

It is likely that quantum systems will eventually be available in five to ten years.

Current efforts are on: 1) materials/chipsets; 2) scalability by implementing error correcting codes; 3) design and engineering quantum architectures.

Once available, quantum systems (and quantum algorithms) will have the potential to jeopardize current security systems (source: ETSI).

Products and trends tend to follow a standard innovation cycle, starting with early adopters who pay high premiums and ending with commoditized product offerings and abundant competition. Quantum will reset the innovation cycle for many commoditized security technologies, and the real costs of concern are those of switching to new “quantum-safe” technologies.

Eventually, it can be argued that if the “Softwarization” of Telecommunications is going to “commoditize” digital infrastructures (by opening an OPEX cycle), a breakthrough in quantum technologies would have the potential to (re-)open a new CAPEX cycle, by requiring large investments for deploying future quantum infrastructures.

Presentation @ GSMA available at this Link 

22 November 2018

The emergence of the 4th Brain …

Neuroscience has provided many important insights about the structure and functions of the human brain. One of the most widely shared models was proposed by the neuroscientist Paul MacLean: the so-called 'Triune Brain'.

Three separate brain structures are often referred to as separate 'brains', operating almost independently but simultaneously:
  1. basal ganglia (found at the center of the human brain) referred to as the reptilian brain, in charge of controlling our innate and automatic self-preserving behaviors, ensuring survivability;
  2. limbic system (which consists of various component brain structures, such as the amygdala and hippocampus), in charge of controlling emotions;
  3. mammalian neocortex (which is implicated in conscious thought, language and reasoning). 
Today's technology advances (in the systemic digitalization of reality and in Artificial Intelligence) are likely to create a 4th, digital brain on top of the mammalian neocortex.

There are several virtual assistants, or intelligent personal assistants, which are becoming very popular today, as they are capable of performing tasks for individuals. In some sense, these smart software agents are augmenting human intelligence in the digital cyberspace. In the near future these assistants will become more and more intelligent, capable of being proactive and autonomous.

But there is more.

Ray Kurzweil predicts that within 30 years direct links will be established between the human brain and computer circuitry. The implications are mind-boggling. Such links could mean that the entire contents of a brain could be copied (and preserved) in an external database. Not only would the human brain be supplemented with enormous amounts of digital memory, it would also be linked to vast information resources like the internet, at the speed of thought.

Eventually, in a few years, the three biological human brains will be supplemented with enormous amounts of processing and memory capabilities - which means A.I. - and it is likely that they will be linked to the immense information resources offered by the web, almost instantaneously.

Indeed, one may look at this as a 4th brain, on top of the mammalian neocortex!

The only big problem is that, unlike the other three, this fourth brain was not designed and created directly by Nature.

So, this will inevitably have enormous implications and terrifying dangers, but hopefully an Artificial Immune System will be developed to defend civilization from these dangers.

10 October 2018

A Digital Nervous System for the Industry4.0

A profound Digital Transformation is impacting the evolution of the Digital Society.

This transformation is driven by the maturity and convergence of a number of techno-economic trajectories, such as: the penetration of ultra-broadband fixed and mobile access and the coming of 5G; the down-spiralling cost of IT systems and the contemporary increase of their performance; the evolution of Cloud Computing towards Edge and Fog Computing; and, remarkably, the consequent Cloudification of Telecommunications.

The Digital Transformation is transforming also the Industry.

In fact, the term Industry 4.0 is becoming more and more popular to refer to the so-called fourth industrial revolution: the first industrial revolution mainly concerned mechanization through water and steam power; the second addressed mass production and assembly lines using electricity; the third was based on the adoption of computers and automation to enhance production and assembly; and Industry 4.0 is now bringing the concept of Smart Factories, based on the digitalization and seamless interworking of processes and production steps, from planning stages to actuators in the field.

Machinery and equipment will be able to improve processes through self-optimization and autonomous adaptation to environmental conditions (from local conditions to market requests).

Among the major technology drivers for Industry 4.0 are: the availability of huge computational power at low cost, low-latency high-bandwidth connectivity, big-data analytics and AI systems, human-machine interaction and digital-to-physical conversion. Industry 4.0 will allow faster, more flexible and more efficient processes (from product development and purchasing, through manufacturing, logistics and service), the digitalization and integration of vertical and horizontal value chains, the digitalization of product and service offerings, and the digitalization of business models and customer access.

No need to say that the market opportunities are huge. For example, according to a report by HSRC (Global Industry 4.0 Market & Technologies 2018-2023), the Industry 4.0 market is projected to reach $214B by 2023.

In Industry 4.0, smart manufacturing is based on cyber-physical systems, or digital twins, as virtual models of processes, products, and services. Through ubiquitous, low-latency 5G connectivity, smart sensors transmit data to the Cloud-Edge Computing infrastructure, where the data are processed and analyzed with AI systems to provide contextual and predictive insights, so that decisions can be made and then actuated in the physical world.
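The sense-analyze-decide-actuate loop just described can be caricatured in a few lines (all names and thresholds below are hypothetical, for illustration only): the twin keeps a smoothed estimate of a machine variable from streamed sensor readings and requests an actuation when the estimate crosses a threshold.

```python
# Minimal digital-twin loop: sense -> update state -> decide -> actuate.
def twin_step(estimate, reading, alpha=0.5):
    """Exponentially weighted update of the twin's state from a sensor."""
    return (1 - alpha) * estimate + alpha * reading

readings = [60, 62, 65, 70, 78, 85]   # degrees C, streamed from a sensor
estimate = readings[0]
actions = []
for r in readings:
    estimate = twin_step(estimate, r)
    actions.append("cool" if estimate > 75 else "ok")
```

In a real deployment the "update" step would be an AI model running at the edge, and the "cool" branch a command back to the plant's actuators; the loop structure stays the same.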

In Industry 4.0, the convergence of manufacturing and services is going to be fueled by the XaaS model which has recently emerged in the Digital Transformation of Telecommunications. In fact, the introduction of technologies such as SDN (Software Defined Networking) and NFV (Network Function Virtualization), supported by the virtualization of any resource and function, puts forward the XaaS model, both in a Telco infrastructure and in any other domain.

This is what we call the Cloudification of Telco Infrastructures: a Digital Transformation of telecom infrastructures spanning from the network PoPs to the Data Centers, up to the edge nodes, users' terminals and smart things, where “virtualization” acts as a unification framework.

In summary, the Cloudification of Telco Infrastructures represents an opportunity to develop a Digital Nervous System for the Smart Factories of Industry 4.0. In fact, Cloudification will allow: 1) collecting and processing the big data of the Smart Factory; 2) processing these big data with A.I. methods/algorithms and comparing the results with plans in order to infer decisions; 3) actuating those decisions (even automatically) through multiple actuators, devices and smart things, to communicate with, control and optimize the Smart Factory's processes.
