31 August 2015

Digital Economy transition: “must learn how to deal with Complexity”

The “Tragedy of the Commons” was elaborated by Garrett Hardin and first published in the journal Science in 1968. The article described a dilemma in which multiple individuals, acting independently according to their self-interest, ultimately destroy a shared limited resource (the commons), even when it is clearly in no one's long-term interest for this to happen.

However, when economists began to look at ecosystems of commonly managed resources, they discovered that these often work quite well. In the end, Hardin admitted he should have called his article “The Tragedy of the Unmanaged Commons”.

There is another very interesting perspective. Professor E. Ostrom (Indiana University) was awarded the 2009 Nobel Prize in Economic Sciences (shared with Oliver E. Williamson) for her analysis of how communities manage Commons (e.g., grazing lands, pastures and similar natural resources) to their advantage.

The problem formulation was: in a world of depletable resources, where individuals have incentives for survival (which would undermine the long-term viability of such resources), how do coordination and cooperation emerge?

E. Ostrom argued that, with the right information, productive discussion and trust-based institutions, communities can come up with win-win ways to manage commons without being government-regulated or privatized. In synthesis, the theory emphasizes how humans and ecosystems can interact to achieve long-run sustainability, and highlights how diverse arrangements over resources can prevent ecosystem collapse. These models can be applied perfectly to future smart cities, as an example of use-case, or even – more broadly – to the ecosystems created by the Digital Economy undergoing the Softwarization transition.
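To make the contrast concrete, here is a minimal toy simulation, entirely my own sketch (not Ostrom's model – the logistic growth rate, harvest fractions and capacity are illustrative assumptions): a regenerating resource collapses when every agent harvests greedily, but settles at a sustainable level under an agreed quota.

```python
def simulate_commons(per_agent_take, agents=8, capacity=100.0,
                     growth=0.5, rounds=200):
    """Toy logistic-growth commons: each round, every agent removes a
    fixed fraction of the current stock, then the stock regrows."""
    stock = capacity / 2                      # start at half capacity
    for _ in range(rounds):
        stock -= agents * per_agent_take * stock              # total harvest
        stock += growth * stock * (1 - stock / capacity)      # regrowth
    return stock

# greedy agents take 8% of the stock each (64% in total per round)
greedy = simulate_commons(per_agent_take=0.08)
# a negotiated quota of 2.5% each (20% in total) keeps the stock alive
managed = simulate_commons(per_agent_take=0.025)
```

The point of the toy is only the qualitative difference: without coordination the stock decays exponentially to zero, while the quota regime converges to a stable non-zero equilibrium.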

The Nobel Prize lecture is available here. Among its main recommendations and conclusions: « Must learn how to deal with complexity rather than rejecting it; polycentric systems can cope with complexity »

28 August 2015

Nature says that three control loops should be enough

Future Telecommunications infrastructures will be composed of such a large number of highly dynamic nodes, systems and devices that the number of connections will require going far beyond today’s paradigms. In the long term, Softwarization will transform these infrastructures into complex systems of virtual entities, pervasively distributed in our environment.

At the edges (where start-ups are booming today), it is likely that we'll see highly dynamic games of virtual resources (belonging to the same or different Operators, or to any other Players), executing all sorts of services by using local-edge-centralised processing and storage capabilities.

It is pretty clear that it will no longer be possible to adopt traditional management and control.

Modelling and (open- or closed-loop) control will become too complicated and unstable if not supplemented with a variety of novel control techniques, including (non-linear) dynamic systems, computational intelligence, intelligent control (adaptive control, learning models, neural networks, fuzzy systems, evolutionary and genetic algorithms), and artificial intelligence.
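To give a flavour of what "adaptive control" means here, consider a minimal sketch (the first-order plant, the certainty-equivalence law and the normalized-gradient update are my own illustrative assumptions, not a production technique): a controller regulating a system whose input gain is unknown and must be estimated online.

```python
def adaptive_control_demo(steps=50, lr=1.0):
    """Indirect adaptive control of a first-order plant
    y[k+1] = a*y[k] + b*u[k] whose input gain b is unknown.
    The controller keeps a running estimate b_hat, updated online
    from the one-step prediction error (normalized gradient rule)."""
    a, b = 0.7, 2.5                  # true plant; only `a` is assumed known
    b_hat, y, r = 1.0, 0.0, 1.0      # initial estimate, state, setpoint
    for _ in range(steps):
        u = (r - a * y) / b_hat      # certainty-equivalence control law
        y_pred = a * y + b_hat * u   # what the internal model expects
        y = a * y + b * u            # what the real plant actually does
        # adapt the gain estimate toward the observed behaviour
        b_hat += lr * (y - y_pred) * u / (1.0 + u * u)
    return y, b_hat
```

No fixed model is assumed: the controller's model of the plant improves as a side effect of acting, which is exactly the flavour of "learning" that traditional open/closed-loop design lacks.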

Out of this “chaos” of interactions, a sort of collective intelligence will emerge. This recalls the collective intelligence emerging in a beehive. It also brings me back to when I studied the theory of “enactive” behaviour of living systems, developed by F. Varela.

The theory (quite influential in some avenues of Artificial Intelligence) argued that the adaptive behaviour of simple living entities (e.g., bees) is based on two interrelated dimensions: 1) perception, consisting of perceptually guided action, and 2) cognitive structures, emerging from the recurrent sensori-motor patterns, which enable action to be perceptually guided.

So simple living entities cross several diverse cognitive domains (or micro-worlds), which are generated from their interactions with the external environment. Within a micro-world, behaviour is simply determined by pre-defined sensori-motor loops, simple and fast. From time to time, breakdowns occur: unexpected disruptive situations that force a change from one cognitive domain (i.e., one micro-world) to another. Importantly, this bridging (during breakdowns) is assured by the nervous system, allowing a new adaptation and the consequent learning of new sensori-motor loops.

Amazingly, this behaviour has been successfully exploited by Nature with three nested control loops (automatic/sensori-motor, autonomic/nervous, global). I've been impressed to realise that these same principles (e.g., adaptive control based on three control loops, supported by learning) are used for developing auto-pilots, smarter robots... and self-driving cars!
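A minimal sketch of what three nested loops could look like in software (entirely illustrative – the setpoints, gains and the "slow convergence" test standing in for a breakdown are all assumptions of mine): an inner reflex loop regulates the state, a middle loop switches micro-world by changing the setpoint, and an outer loop tunes the reflex gain from accumulated performance.

```python
def regulate(targets=(20.0, 40.0), steps_per_phase=60):
    """Toy three-loop regulator.
    Inner loop: a fast proportional reflex drives the state to a setpoint.
    Middle (autonomic) loop: each change of setpoint plays the role of a
    'breakdown' that switches the active micro-world.
    Outer (global) loop: the reflex gain is tuned from accumulated errors."""
    state, gain, history = 0.0, 0.05, []
    for target in targets:                 # middle loop: one micro-world each
        for _ in range(steps_per_phase):
            error = target - state
            state += gain * error          # inner loop: one reflex step
            history.append(abs(error))
        # outer loop: if convergence was slow, strengthen the reflex
        if sum(history[-10:]) > 1.0:
            gain = min(gain * 1.5, 0.9)
    return state, gain
```

The outer loop never acts on individual steps; it only reshapes the inner loop between breakdowns, which mirrors the adaptation-then-learning sequence described above.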

The same principles can be used for decentralising the "intelligence" required for the execution of services (and data storage) in Fog - Edge - Cloud Computing resources in a SD-I. 

19 August 2015

A Vision of the Future: beyond Robots, ICT and SD - Quantum Communications...

The acceleration of Information Technology will bring about a socio-economic transition very different from past ones. Why? Because of its systemic impact: nothing like it has happened before.

In the past, even very disruptive innovations had a relatively focussed impact, sector by sector: the mechanization of agriculture, industry automation... to mention the most recent ones. Today's transition is quite different: it's much broader, spanning several (if not all) sectors of our society and economy.

In principle, any job that can be broken down into a series of routine tasks is susceptible to being digitalised, automated and done by some sort of robot (or cognitive machine) replacing humans. And maybe it's even more than that: think about IBM's Watson taking the Jeopardy! challenge.

Generally speaking, this avenue of "cognitization" is complementing, and will then replace, the human brain. By the way, this is not pursuing the original dream of emulating or re-building a human-like brain or nervous system: the human mind works another way, and so do ecosystems in Nature. It is a totally different story, as it implies completely different technologies and approaches, dealing with Complexity (in the large) and probably the weirdness of Quantum Physics.

So what is left to humans tomorrow? All those tasks and jobs which are not (yet) susceptible to digitalization, or softwarization. But isn't this like saying: let's start creating, as humans, a new society and economy, and let's leave the current digital society and economy to robots and cognitive machines? They will work for us while we, as humans, create a new world which machines cannot create and rule. The question is: can we make this sustainable? The Digital Society investing in creating a future Quantum Society.

My bet is that investing in this new quantum world will be quite advisable. It will exploit the basic principles of Quantum Mechanics. These technologies, capable of manipulating bosons (photons, bio-photons, etc.), will be much nearer to Nature's behaviour than today's ones, paving the way for us to learn to look better inwards and outwards. By the middle of the next decade, Moore’s law will no longer be sustained by silicon: electronics will be surpassed by photonics. So, we should research how to tame the weirdness of Quantum Physics into a new technology, developing a future economy. The alternative is to colonise space (as argued by S. Hawking).

In the meanwhile, we're witnessing growing attention on Quantum Computing, Quantum A.I. and Quantum Communications. Have a look at this paper on SD-Quantum Communications... but we are still far away from truly Quantum Mechanics-based future technologies.

To paraphrase R. Feynman, there is plenty of room in Quantum Mechanics; rather, we need Engineers to populate it with new technologies.

16 August 2015

Cambrian moment for ICT and Telecommunications

This is a very interesting special report by The Economist. In summary, it argues that "Softwarization" is creating the conditions for an "explosion" of Digital Start-ups and new ICT ecosystems, dramatically changing tomorrow's Industry and Society. And maybe also government.

The metaphor is brilliant: the Cambrian explosion was the relatively short evolutionary event, beginning around 542 million years ago, during which most major animal phyla appeared on Earth, creating an enormous number of new ecosystems.

The report argues that software is eating more and more industries: IT is in fact lowering transaction costs. Industries will have to reshape themselves and turn into ecosystems that rely on large horizontal platforms at one end and a wide variety of modes of production at the other, from start-ups to social enterprises and communities to user-generated content.

The report concludes by arguing: "All in all, the impact of platformisation will be monumental. Those who see the current entrepreneurial explosion as merely another dotcom bubble should think again. Today’s digital primordial soup contains the makings of the economy and perhaps even the government of tomorrow".

13 August 2015

Ask an Edge Network to design its next generation

Have a look at this interesting paper. Researchers led by the University of Cambridge have developed a robot that can autonomously design and build other robots, test which one does best, and automatically use the results to inform the design of the next generation.

The abstract reads: Artificial evolution of physical systems is a stochastic optimization method in which physical machines are iteratively adapted to a target function. The key for a meaningful design optimization is the capability to build variations of physical machines through the course of the evolutionary process. The optimization in turn no longer relies on complex physics models that are prone to the reality gap, a mismatch between simulated and real-world behavior...

This reminds me of a paper I wrote recently, “The Network is the Robot”.

Can you imagine developing an edge network, i.e. a robot, that can also autonomously design and build other (softwarized) network architectures, test which one performs and adapts best, and automatically use the results to select the next generation? Obviously I'm not yet talking about core networks, but about dynamic capillary networks, or the IoT...

It is a matter of defining a sort of “cognition loop” (crunching sensed real-world network big data) to generate a set of new virtual architectures, helping to efficiently explore the next-generation design space… And, even more, to analyse the influence of the physical resources and ambient constraints on the diversity that can be achieved! The same could apply to some application social networks.

As they say in the paper, “the key for a meaningful design optimization is the capability to build variations of physical machines”: that’s true also for softwarized architectures empowered with a proper level of cognition.
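A toy generate/test/select loop over candidate topologies might look like this (a sketch under my own assumptions – the bit-string genome, the fitness function rewarding just enough links, and all parameters are purely illustrative, not taken from the paper):

```python
import random

def evolve_topologies(generations=30, pop=20, nodes=6, seed=1):
    """Evolutionary design sketch: each 'genome' is the upper-triangular
    adjacency bit-string of a candidate topology; fitness rewards having
    about (nodes - 1) links while lightly penalizing total link count."""
    rng = random.Random(seed)
    n_links = nodes * (nodes - 1) // 2

    def fitness(genome):
        links = sum(genome)
        return -abs(links - (nodes - 1)) - 0.1 * links

    population = [[rng.randint(0, 1) for _ in range(n_links)]
                  for _ in range(pop)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop // 2]      # select the best half
        children = []
        for parent in survivors:                # vary: flip one random link
            child = parent[:]
            child[rng.randrange(n_links)] ^= 1
            children.append(child)
        population = survivors + children       # next generation
    return max(population, key=fitness)
```

Replacing the toy fitness with measurements taken on instantiated virtual architectures is exactly where the "cognition loop" crunching real network data would plug in.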

12 August 2015

Leveraging on Neuroscience to design and operate Softwarized Networks

A few posts ago, I quoted Peter Fingar, who argues that the Cognitive Computing Era is upon us. In particular, I argued that, at the end of the day, the Softwarization of Telecommunications will be more and more oriented towards implementing pervasive “Cognition”. See also my keynote at ONDM2015.

In fact, Softwarized Networks – just like a nervous system – will be able to sense and collect massive data (e.g., from pervasive sensors, smart terminals, things, machines, robots); these big data sets will be exchanged and moved very quickly (transported by optical and mobile networks with high bandwidth and low latency); and they will be processed (with Cloud, Edge and Fog Computing) in order to make decisions and then actuate local actions (through pervasive actuators embedded in the reality around us).

In view of that capillarity and dynamism, Softwarized Networks will require a deep paradigm change in design and operations. In fact, such a continuum of virtualized resources is closer to a Complex Adaptive System, as a real nervous system is: no centralized management, but a plethora of interacting processes determining emergent properties and characteristics.

I’ve already mentioned that in the future we’ll be using more and more cognitive methods, heuristics and algorithms, machine learning, knowledge representation and reasoning and, eventually, massively parallel computation to crunch, and make use of, Big Data. But we could go even beyond that, by leveraging directly on Neuroscience analyses and models.

As a matter of fact, Neuroscience argues that nervous system networks represent the best low-cost structural basis for the coexistence of information processing (both segregation and integration) and transfer. Striking analogies between nervous system networks and future Softwarized Networks suggest that Neuroscience theories, models and methods could provide valuable interdisciplinary elements for designing the management and control of future networks.

As an example, I’m fascinated by the Global Workspace Theory (GWT): it argues that human cognition is implemented by a multitude of relatively simple, special-purpose processes (processors). Interactions between them are based on cooperation and competition, allowing, in the end, coalitions of processes to find their way into a Global Workspace (GW).

The GW is a shared virtual space which broadcasts the messages of certain coalitions to all processors, in order to recruit others to join. In summary, the GW serves to integrate many competing and cooperating networks of processes. Global behaviour is driven by a myriad of local micro-behaviours, rather than, as in current networks, by a previously built “knowledge representation” used to manage network behaviour. Practically, the approach permits rehearsing global behaviours prior to enacting local processes; these behaviours are evaluated, and the relative salience of a set of concurrently executable actions can be modulated as a result: those behaviours whose outcome is associated with a gain (or reward) become more and more salient and, in the end, are selected and executed (e.g. with a winner-take-all strategy).
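As a hedged illustration (the processor names, rewards, initial saliences and update rule are assumptions of mine, not GWT itself), a few lines can mimic the competition/broadcast/reinforcement cycle: the most salient processor wins the workspace, and the reward its action earns modulates its future salience.

```python
def global_workspace_demo(rounds=20):
    """Toy GWT-style selection loop: processors bid with a salience
    value; the highest bidder gains the Global Workspace (ties break
    by insertion order); the reward of its action feeds back into its
    salience, so well-rewarded behaviours come to dominate."""
    salience = {"route_via_edge": 1.0, "route_via_core": 1.3,
                "cache_locally": 1.0}
    reward_of = {"route_via_edge": 1.0, "route_via_core": -0.5,
                 "cache_locally": -0.2}
    trace = []
    for _ in range(rounds):
        winner = max(salience, key=salience.get)   # competition
        trace.append(winner)                       # broadcast to all
        salience[winner] += 0.2 * reward_of[winner]  # reinforcement
    return trace
```

In this run the initially dominant (but poorly rewarded) core-routing processor loses salience round by round until the edge-routing processor takes over the workspace for good.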

Examples like GWT could easily be implemented as distributed software frameworks (do you remember Linda and its evolutions?) where processes could be seen as software service components or virtual network functions. A very exciting area of research and innovation...
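For those who don't remember Linda: the core idea is a shared tuple space in which otherwise anonymous processes coordinate only by writing, reading and withdrawing tuples matched against templates. A minimal sketch (my own illustration, not any production framework; the tuple contents are invented examples):

```python
import threading

class TupleSpace:
    """Minimal Linda-style tuple space: out() writes a tuple, rd()
    reads one matching a template, in_() reads and withdraws it.
    A template field of None matches any value; both reads block
    until a matching tuple appears."""
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    @staticmethod
    def _match(template, tup):
        return len(template) == len(tup) and all(
            t is None or t == v for t, v in zip(template, tup))

    def rd(self, template):              # blocking, non-destructive read
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(template, tup):
                        return tup
                self._cond.wait()

    def in_(self, template):             # blocking, destructive read
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(template, tup):
                        self._tuples.remove(tup)
                        return tup
                self._cond.wait()
```

A process (or virtual network function) could announce itself with `ts.out(("vnf", "firewall", 1))` and any peer could later claim it with `ts.in_(("vnf", None, None))`, without the two ever knowing each other's identity.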