30 December 2015

The Search for the “New Big A.I. Network”

The "New Big Network" will be Artificially Intelligent (see my previous post). But how can we exploit it? What are the ICT bottlenecks?

About fifty years ago, Gordon Moore published a paper titled “Cramming More Components onto Integrated Circuits.” It was the first formulation of the principle that, after some revisions, became Moore’s law: every two years the number of transistors on a computer chip doubles.

Moore’s law is one of the foundations of the Digital Society. Integrated circuits are the bricks for building computers and IT systems and, as this nice article, The Search for a New Machine, puts it, “Moore’s law makes computers evolve”: making transistors smaller means increasing computational capacity at the same cost. But now that we are approaching the atomic scale (xx nanometers), we are hitting the scaling limits of this law, the quantum wall. To learn more, have a look at the paper Limits on Fundamental Limits to Computation, where Igor L. Markov addresses both the limiting factors and the salient trends for achieving faster computation with single-core or parallel processors by means other than sheer scaling.
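As a back-of-the-envelope illustration of the doubling principle (a sketch only: the 1971 baseline of roughly 2,300 transistors for the Intel 4004 is my assumption, not something stated in this post):

```python
# Back-of-the-envelope Moore's law: strict doubling every two years,
# starting from the commonly cited ~2,300 transistors of the Intel 4004 (1971).
def transistors(year, base_year=1971, base_count=2300):
    """Projected transistor count under an idealized two-year doubling."""
    doublings = (year - base_year) / 2
    return base_count * 2 ** doublings

print(round(transistors(2015)))  # ≈ 9.6e9: the right order of magnitude for 2015-era chips
```

Even such a naive extrapolation lands within the billions-of-transistors range of real 2015 chips, which is what makes the approaching quantum wall so remarkable.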

It should be noted, nevertheless, that it’s not just a matter of achieving faster CPUs (or GPUs). There is another wall, which might be even more strategic: the memory wall. In fact, the memory hierarchy (e.g., DRAM, SRAM, flash) is responsible for most of the obstacles to faster computation. Faster CPUs are of little use without high-speed, high-capacity memory to store bits and deliver them as quickly as possible.

The real challenge is developing a “universal memory”: a data storage device combining the cost benefits of DRAM, the speed of SRAM, the non-volatility of flash memory, and near-infinite durability. It may then no longer matter whether silicon CPU elements become smaller than seven nanometers or faster than four Gigahertz.

In this direction, Hewlett–Packard is developing chips based on a new type of electronic component: the memristor. The name combines “memory” and “resistor”: a memristor has the curious ability to “remember” how much current previously flowed through it, thus combining storage and random-access memory. Like neurons, memristors transmit and encode information as well as store it.
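To make the “remembering” concrete, here is a minimal simulation sketch of the linear ion-drift memristor model often associated with the HP Labs device; all parameter values below are illustrative, not HP’s:

```python
# Linear ion-drift memristor sketch: the resistance depends on the charge
# that has flowed through the device, which is what lets it "remember".
# Parameter values are illustrative, not from any real device.
R_ON, R_OFF = 100.0, 16_000.0   # fully-doped / undoped resistance (ohm)
D = 10e-9                       # device thickness (m)
MU = 1e-14                      # dopant mobility (m^2 s^-1 V^-1)

def simulate(current, dt=1e-4, w=0.0):
    """Drive the device with a sequence of current samples; return resistances."""
    resistances = []
    for i in current:
        w += MU * R_ON / D * i * dt          # internal state moves with charge flow
        w = min(max(w, 0.0), D)              # state is bounded by the device
        resistances.append(R_ON * w / D + R_OFF * (1 - w / D))
    return resistances

rs = simulate([1e-3] * 1000)    # a steady 1 mA drive
# the resistance drifts from near R_OFF down to R_ON, and the state
# persists when the drive stops: storage and random access in one element
```

Driving current in one direction lowers the resistance and the state persists afterwards, which is exactly the storage-plus-memory behavior described above.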

IBM, too, is introducing a brain-inspired computer: through a number of steps (Phase 0, Phase 1, Phase 2, and Phase 3), from neuroscience to supercomputing, they came up with a new computer architecture, a new programming language, algorithms, applications, and even a chip: TrueNorth, where five billion transistors model a million neurons linked by 256 million synaptic connections.

If the future “New Big Network” will have to interconnect a growing number of IT systems, machines, terminals, smart things and devices, then it will have to transmit - at ultra-low latency - petabytes of information to and from Data Centers for storage and processing. Surely SDN, NFV and, more broadly, Softwarization will make networks highly flexible. But if a breakthrough towards universal memory can bring “supercomputer-like” capabilities to smaller, less energy-hungry edge-fog network elements, then data streams could be stored, and network functions pre-processed, locally. Universal memory, more than ever-faster CPUs or unlimited bandwidth, could change the “equation” of the future “New Big A.I. Network”.

27 December 2015

Artificially Intelligent Internet

At the “SDN & IOT Industry Session” of the World Forum - IoT 2015 (Milan, 13th Dec. 2015) I argued that deploying the SDN-NFV paradigms at the edge of current infrastructures will become a powerful enabler for IoT platforms: in other words, the distinction between Edge SDN and IoT is going to disappear, with a “fusion” of the enabling ICT technologies.

Today I’m arguing that another border is going to blur: the one between ICT and A.I., leading to the emergence of the Artificially Intelligent Internet. There is plenty of evidence around…

Have a look at this report, which argues that A.I. interfaces will replace smart-phones within five years. Machines are starting to learn like humans; researchers are teaching them to see and have even developed a way for A.I. machines to learn from the crowd. Google is working on turning language into a problem of vector-space mathematics. Next, it will bring “intelligence” into Operating Systems to empower devices with the capacity for logic and natural conversation.
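The “language as vector-space mathematics” idea can be sketched with toy, hand-made word vectors (real systems such as word2vec learn these from text; the numbers below are invented purely for the example):

```python
# Toy word vectors over hand-made features (royalty, maleness, femaleness),
# chosen so that king - man + woman lands on queen. Real systems learn such
# vectors from text; these numbers are made up for illustration.
import math

vec = {
    "king":  [1.0, 1.0, 0.0],
    "queen": [1.0, 0.0, 1.0],
    "man":   [0.0, 1.0, 0.0],
    "woman": [0.0, 0.0, 1.0],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def nearest(target, exclude=()):
    """Word whose vector is closest (by cosine similarity) to the target."""
    return max((w for w in vec if w not in exclude),
               key=lambda w: cosine(vec[w], target))

analogy = [k - m + w for k, m, w in zip(vec["king"], vec["man"], vec["woman"])]
print(nearest(analogy, exclude={"king", "man", "woman"}))  # → queen
```

Once words live in a vector space, analogies and relationships become arithmetic, which is exactly what makes language tractable for machines.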

In summary, the equation of the future Internet will be based on a borderless “fusion” of technologies.

A.I. interfaces will be the next terminals for human and non-human Users. Mathematics will be the language, coded in terms of software services and functions; IT processing will run those functions and services in order to make decisions and carry out service actions; IT storage systems will store encoded, actionable information; and, eventually, the Big Network will create a web of relationships between producers and consumers of services by hooking billions of processes together with ultra-low latency connections. Telecommunications, ICT and A.I. will "merge" together.

The Artificially Intelligent Internet will embed a continuum of “networked cognition loops” into reality. This will create the conditions for a new economy.

12 December 2015

Softwarization towards closing a CAPEX cycle

In the past, the barriers to entry for new Players in telecommunications were rather high and primarily concerned the necessity for massive capital expenditures (CAPEX).

The telecommunications infrastructure in fact required capital investments at a level that made it very difficult for any new company to enter. The major Operators took decades to construct their existing infrastructures, and they used to possess an enormous advantage over any new company attempting to establish a presence in the telecommunications market.

Today the story is changing rapidly. Technology drivers (e.g., high-performance standard hardware coupled with the wide deployment of ultra-broadband connectivity) are bringing the telecommunications CAPEX cycle towards its end. Low investments, low RoI. On one side we have created the conditions for the development of a “hyper-connected world”; on the other side Softwarization is “pulverizing” the telecommunications market, which is moreover under the pressure of OTTs providing global ICT services (almost) “for free” and (mostly) “unregulated”.

In brief, Softwarization is enabling and driving:
  1. a converged (fixed-mobile Network and Data Centre) infrastructure (whose CAPEX is gradually decreasing) where the creation/provision of services is likely to become decoupled from Operations, bringing a related split of business roles (infrastructure/service enablers and service providers);
  2. a converged industrial structure covering voice services, Internet access services and “OTT” services, “packaged” in various ways;
  3. a consequent split of roles between vendors supplying the infrastructure/service enablers and the service providers: the market will see high-volume standard hardware (e.g., servers and switches/routers) and a world of “software”.

This transition will have a number of far-reaching implications in terms of jobs, culture and socio-economic transformation. The grand question is how to open a new cycle!

29 November 2015

IEEE SDN Initiative launching the Open Mobile Edge Cloud

As Chair of IEEE SDN, I’m very honored and pleased to post today the report (written by Cagatay Buyukkoc, AT&T) of the Mobile Edge Cloud kick-off workshop (IEEE NJ, November 16, 2015) organized by the IEEE SDN Initiative Preindustrial Committee.

There is some fragmentation in the implementation of SDN/NFV frameworks. The main objective of the Preindustrial Committee (chaired by Cagatay Buyukkoc, AT&T) is to create environments for industrial and academic convergence in the areas we collectively decide are key on the road to the “5G Era.” To this end, the committee is looking at opportunities for Proof of Concept (POC) work and establishing relationships with other groups doing relevant work in Europe, Asia, the Americas, Africa, etc., to increase collaboration and reduce duplication. The idea is to promote experimentation and build consensus around a small set of implementation frameworks; otherwise, there is a danger of SDN/NFV islands that are not interoperable.

A main focus of the workshop was to introduce a small set of frameworks for such a POC at the Mobile Edge that goes beyond the usual rhetoric: actually implementing, in an open lab environment, the ideas that look most promising. In this case we chose to start with ON.Lab, supporting a POC with M-CORD as the platform (the details of the POC are at the end of this brief report), and to continue supporting POCs and trials within industry and academia. Hence, the main focus is on potential gaps in the industry and on addressing the practical side of innovations, creating flexibility as well as interoperability.

Here are the SDN Initiative’s short- and longer-term objectives for this Preindustrial work:
  • Identify use-cases and proof-of-concepts (POCs) for SDN-NFV frameworks for End-to-End (E2E) service scenarios (e.g., CRAN, orchestration of VNFs, fixed + mobile OS, SDX, etc.) and E2E architectural components. E2E coordination/collaboration on policy, QoS and QoE, including device capabilities, are longer-term objectives.
  • Rethinking everything: Software, Automation, Shannon, Control, Cellular structure, Next Generation (NG) Base Stations and Mobile Edge, Spectrum, Software Defined-Air interface, NG Core, NG RAN, NG Edge, Complexity, Resilience, etc.
  • Define the experimental best practices for validating the various use-cases and proof-of-concepts (in coordination with other ongoing initiatives, e.g., ETSI/MEC, ITU-T, ATIS, ONF, OPNFV, etc.). Prepare - as a group - to identify major gaps, create a research community around them and eventually prepare joint contributions to steer standards: there is a broad community addressing the various aspects of SDN/NFV/programmability frameworks.
  • Explore and contribute to (Open) Systems, but provide an architectural framework as a starting point; hence the parentheses around Open!
  • Explore the opportunity of creating IEEE certification services for SDN/NFV to accelerate trust for industrial adoption.

The meeting started with opening remarks from Tim Kostyk (IEEE program director) and Eileen Healy (SDN Initiative co-chair). They both emphasized the importance of industrial and academic collaboration, presented the IEEE SDN Initiative goals and outlined the workshop activities: the work we do here will impact the speed and ubiquity of interoperable software-defined networks for decades to come. Cagatay Buyukkoc then welcomed the group, provided a brief summary of the Preindustrial Subcommittee goals and explained the importance of convergence and Proofs of Concept. The major thrust was to emphasize the 5G Era and how we need to rethink some key concepts to support it and get ready for the future. Towards this end, the collaborations with Princeton University (Prof. Mung Chiang’s group) and Stanford University (Prof. Sachin Katti’s group) were explained and the POC support for ON.Lab was outlined. Several industry trends were summarized, as were potential collaborations with ETSI/MEC and other relevant work in the area of the Mobile Edge.

Based on various collaborations with other SPs, the (Open) Mobile Edge Cloud was defined as:

An (open) cloud platform that uses end-user clients and is located at the “mobile edge” to carry out, in real time, a substantial amount of storage (rather than storing primarily in cloud data centers), computation (including edge analytics, rather than relying on cloud data centers), communication (rather than routing over backbone networks), and control, policy and management (rather than being controlled primarily by network gateways such as those in the LTE core). (Based largely on Prof. Mung Chiang’s work.)

Prof. Sachin Katti described his group’s work on softRAN and how it would fit within the Mobile Edge Cloud concept of the workshop. The key idea of softRAN is bringing SDN concepts to the RAN: the separation of control plane and data plane, and the ability to program and distribute key control-plane and data-plane functions across the RAN (actually, any location), are the innovative pieces that will be part of the IEEE POCs and trials.

Then Guru Parulkar, director of ON.Lab, described the vision, role and capabilities brought by the CORD framework. CORD (Central Office Re-architected as a Datacenter) is a key new direction: building infrastructure using commodity components and enabling open-source and whitebox approaches. It is a revolutionary look at architectures, whose purpose is to enable the introduction of new services at much faster rates than traditional architectures allow; this is made possible by the recent SDN/NFV frameworks and by ONOS, the operating system enabling this direction. Tom Tofigh presented Mobile-CORD, the extensions to support mobility and edge concepts. These architectures are consistent with the vision of the IEEE SDN Initiative and with our support of the POC to follow in 1Q2016.

There were individual presentations from Tao Chen (Coherent and TNF views), Douglas Castor, Arun Jotshi, Laurent Ruckenbusch, Lior Fite, and several others.
After this, all participants were invited to talk for a few minutes, and everybody joined this round of discussions!

In the afternoon there were team exercises in which participants worked on some key topics. The topics identified by the participants were:
  • Service mobility/Content Distribution,
  • Radio, core functions disaggregation & Control loops: what is centralized, what is at the edge?
  • Mobility management.

Two separate teams worked for a few hours on the same topics, then came back and presented their findings. The team leads were Ian Smith and Tom Tofigh.

The key results of the workshop are:
  1. The mobile edge needs a new definition: it should be inclusive, build on existing work elsewhere, and prevent divergence through collaborations and joint POCs.
  2. Leverage cloud concepts at the edge: there is a big drive towards this, which we must quickly realize using a common architecture for content distribution, data analytics, compute and control & steering applications.
  3. The Tactile Internet and IoT with control loops require 1 ms E2E delays, i.e., processing at most ~10 miles from the End User.
  4. The 5G Era is fundamentally refactoring the RAN, Edge and Core architectures: we need to judiciously re-architect E2E functions on a common platform and create a Software-Defined ecosystem.
  5. Some of the work needs to align better with EU Horizon activities.
  6. Cooperate & collaborate globally!
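A rough sanity check of result 3, assuming the textbook propagation speed in optical fiber (about two thirds of c); the figures are approximations, not from the workshop:

```python
# Why 1 ms end-to-end pushes processing to within ~10 miles: propagation
# in fiber runs at roughly 2/3 of c, and the 1 ms budget must also cover
# radio access, switching and compute time.
C_FIBER = 2.0e8                 # m/s, ~2/3 the speed of light in vacuum
MILE = 1609.34                  # m

def propagation_ms(distance_m, round_trip=True):
    """Pure propagation delay over fiber, in milliseconds."""
    trips = 2 if round_trip else 1
    return trips * distance_m / C_FIBER * 1e3

rtt = propagation_ms(10 * MILE)
print(f"{rtt:.3f} ms")          # ≈ 0.161 ms for a 10-mile round trip
```

A 10-mile round trip costs only about 0.16 ms of propagation, leaving the bulk of the 1 ms budget for everything else; push the processing hundreds of miles away and the budget is gone before any computation happens.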

The Preindustrial subcommittee will host another meeting in about six months, with biweekly web meetings in the interim. Let’s make this event the beginning of an important industry collaboration!

Meeting report by: Cagatay Buyukkoc

Please join IEEE SDN: contact antonio.manzalini@telecomitalia.it

19 November 2015

Innovation by Contamination

There is no doubt that the Softwarization of Telecommunications will bring a radical business transformation across a number of industries, not only Telecommunications and ICT.

Software Defined Networks (SDN) and Network Function Virtualization (NFV) will accelerate the process towards converged infrastructures (see my last post), spanning from smart things and terminals, through the network, to the Data Centres (not only centralised, but also at the edge).

Resource virtualization, APIs and agile operational processes (including multi-level orchestration) will constitute the „fil rouge“ of these converged, software-driven infrastructures. All agree on that. So let’s make it happen, jointly, in a Blue Ocean!

We realise that convergence on common reference architectures and on standard Open Source software solutions is highly important for a successful deployment of these converged infrastructures.

The question is: how do we overcome today’s fragmentation in this key transition? There is still a tendency to work in closed „silos“, postponing the change and struggling with competition issues: all of this jeopardises our presence of mind in facing a change that will come anyway, and soon.
And it will also be (or above all) a change of culture.

My take is that we need to make „Innovation by Contamination“.
This means the "energy and courage" to change, crowd-funding/idea initiatives, open source communities, bottom-up integration of test-beds and field-trials, prototyping and standard certification of open source SW solutions to be exploited in "sandboxes".

"Contamination" is sometimes used to describe a smooth and continuous transfer of “organisms” from one natural ecosystem to another: in our case, it will be the propagation of the new “culture of this digital business transformation” even to those ecosystems which are not yet realizing the importance of this “dip” into the Blue Ocean.

Join „Innovation by Contamination“   

18 November 2015

Softwarization will be a Biz Transformation

Softwarization will mean a radical Biz Transformation of Telecommunications and ICT. Enabling technologies such as SDN and NFV, but also Mobile Edge Computing, will accelerate a substantial convergence process, already ongoing, towards highly integrated IT+Network infrastructures.

These software-driven infrastructures will be able to host a wide variety of network and service functions and components. Some of these services (let’s say low-level network services) will have to be executed in the Cloud, in the middle, or at the edge of the network, involving virtualized functions which carry out intermediate processing of information.

Examples are typical network services such as: content distribution networks; authentication, authorisation and access control; content policing and filtering; content-based routing; content-based QoS management (e.g., DPI); intrusion detection; firewalls; content-based performance acceleration and bandwidth optimisation (WAN acceleration)… and other middle-boxes. These virtual network functions (and many others) will have to be dynamically combined into services by constructing specific chainings of functions, called service chains (we are not talking about Consumers’ services yet).
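A service chain of this kind can be sketched as function composition; the VNFs and packet fields below are hypothetical stand-ins, not real middle-box implementations:

```python
# Toy service chain: each VNF does intermediate processing on a packet
# in flight, and any VNF may drop it. Functions and fields are invented
# stand-ins for real middle-boxes.
def firewall(pkt):
    if pkt.get("port") == 23:              # drop telnet traffic, say
        return None
    return pkt

def dpi_marker(pkt):
    pkt["qos"] = "video" if pkt.get("app") == "streaming" else "best-effort"
    return pkt

def wan_accelerator(pkt):
    pkt["compressed"] = True
    return pkt

def chain(pkt, vnfs):
    """Steer the packet through the chained VNFs in order."""
    for vnf in vnfs:
        pkt = vnf(pkt)
        if pkt is None:                    # dropped mid-chain
            return None
    return pkt

service_chain = [firewall, dpi_marker, wan_accelerator]
print(chain({"port": 443, "app": "streaming"}, service_chain))
# tagged with "video" QoS and compressed; a port-23 packet is dropped instead
```

The interesting part is that the chain itself is just data: recombining the same VNFs into a different service is a matter of reordering a list, which is exactly the dynamism the post describes.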

So orchestration can be seen as a key process of such a converged infrastructure: it will take care of the different steps involved in provisioning virtual network functions and services, such as creating and removing logical resources as well as installing, configuring, monitoring, running and stopping software in those logical resources. But these software-driven infrastructures will also have to provide APIs to upper platforms for developing and provisioning Consumers’ services. In this sense, there will have to be another level of orchestration, typical of higher-level IT services (based on more articulated combinations of service logics): just like an OS supporting diverse application platforms. The biz role/model will depend on where this border is set.
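The provisioning steps just listed (create, install, configure, run, stop, remove) amount to a small lifecycle state machine; the state names and transition rules below are illustrative, not from any real orchestrator:

```python
# VNF lifecycle as a tiny state machine; an orchestrator's job is largely
# driving many such machines. States and transitions are illustrative.
VALID = {
    "created":    {"installed"},
    "installed":  {"configured"},
    "configured": {"running"},
    "running":    {"stopped"},
    "stopped":    {"removed", "configured"},   # allow reconfigure-and-restart
}

class VNF:
    def __init__(self, name):
        self.name, self.state = name, "created"

    def to(self, state):
        """Move to a new lifecycle state, rejecting illegal jumps."""
        if state not in VALID[self.state]:
            raise ValueError(f"{self.state} -> {state} not allowed")
        self.state = state
        return self                              # allow chaining

vnf = VNF("vFirewall").to("installed").to("configured").to("running")
print(vnf.state)  # running
```

Encoding the legal transitions explicitly is what lets an orchestrator monitor and recover safely: an illegal jump (say, created straight to running) is caught instead of silently corrupting the deployment.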

On the way to these future software-driven infrastructures we arguably have two main strategies for handling the consequent business transformation:
  1. Evolutionary: inertial evolution of the legacy infrastructure, transforming it gradually into an agile software-driven infrastructure.
  2. Bi-modal: 1) “clean slate” deployment of a new (agile by design) software-based production infrastructure (sandbox) operated in parallel with the traditional legacy infrastructure; 2) progressive decommissioning of legacy systems and gradual migration of (ongoing and future) service provisioning from the legacy to the new software-based production infrastructure. This obviously implies that the two infrastructures will coexist for some time (and should temporarily interwork) and that Customers will have to be smoothly migrated from the legacy to the new software-based production infrastructure.

Join us in IEEE SDN to shape this business transformation !

15 November 2015

A Global Consciousness for Security

Topic of the last post was the feasibility of an Internet of conscious machines.
The consciousness of machines has - obviously - nothing to do with human consciousness, but it shares the same functioning principle: integrating encoded information collected from the world, predicting events and then inferring decisions.
Today A.I. is already used by some Providers for anticipatory-shipping practices, to identify services and items Users may want to buy before they even begin to search. This is a form of consciousness, in which multiple sources of information are integrated to predict events.
Tomorrow, conscious machines may hopefully help us defend our personal security.
Simply consider, as an example, intelligent video analytics: already today, A.I. methods and systems can analyze video in real time and detect abnormal activities that could pose a threat to security.
Imagine extending these capabilities across multiple sources of data, then integrating all the encoded information and eventually sharing the inferred knowledge on a global, planetary scale...
In a certain sense, ICT is developing Teilhard's idea of a Noosphere, i.e., a layer of intelligence enveloping the Earth.
I've always been fascinated by the activities of the Global Consciousness Project: have a look at its link.
Just replace the "eggs" with conscious-like machines...

12 November 2015

Softwarization paving the way to an Internet of Conscious Machines?

Have a look at this amazing talk, in which Joscha brilliantly elaborates a fascinating path from Computation to Consciousness. He argues that humans are likely computational systems capable of encoding and processing information (not necessarily digitally), in very specific ways (e.g., neural or quantum computing).

Out of this, mind emerges from the body (not just from the brain): when our body lives and thinks, it is creating “informational structures” (I would add: breaking down symmetries) at the very basis of our daily behaviors. By the way, this is done by dissipating energy, thus unavoidably increasing the Entropy of the Universe as a whole.

So, it is argued, consciousness comes from the correlation and integration of such “informational structures”, or encoded information. That is, in my opinion, the most challenging future of Artificial Intelligence (A.I.): building conscious-like machines by “networking, combining, integrating” sets of functions capable of encoding and processing very specific information received from the world.

How long will it take? Impossible to predict: like any real and impactful disruption, it will be the result of genius. It might happen earlier than we expect.

In essence, the main difference between non-living and living matter is indeed the way information is coded (bits, qubits, etc.), processed (think about unconventional computing), stored and combined. We can also argue, in this direction, that the concept of machine consciousness extends globally, up to wherever these encoding and processing functions are allocated and run.

It turns out that Mathematics is a sort of language, Computation is about running that language (coded in software), Storage is about saving the encoded information and, eventually, the Network creates relationships between those sets of functions. Thus consciousness has the highest value in the chain.

Here also comes the concept of the future Internet as an “artificial nervous system”. Already today we are coupling our minds/bodies with laptops, tablets, smart terminals, sensors/actuators, avatars, agents, etc., which implement information retrieval, coding and processing functions. Add that softwarization is about making networks flexible and pervasive, capable of hooking billions of processing functions together with ultra-low latency connections.

It’s quite intuitive to predict the future intertwining of the Softwarization and A.I. trajectories and, as such, that the coupling of different forms of human-machine conscious interaction will likely be the final killer application.

10 November 2015

Softwarization to "stop the war of IoT stacks"

During ICT 2015 (Lisbon, Portugal), the biggest EU event on ICT R&I in 2015, the Internet of Things and the SDN/NFV paradigm announced their marriage during the networking session “Stop the war of IoT stacks for (big) data management and orchestration”.

The session was organized by Giovanni Schembra, a Professor at the University of Catania, with the support of the H2020 project INPUT, to discuss how the fragmentation characterizing the IoT landscape is creating one of the major barriers to the full deployment of the Internet of Things (IoT). This leads to the need for a technical framework, possibly based on SDN/NFV, enabling IoT solution developers to build applications that seamlessly exploit services across heterogeneous IoT platforms and that can distill value from the generated Big Data, so opening new market opportunities.

The session was structured as a contest among four teams: network operators, device manufacturers, system integrators, and academia. The teams were represented by the following Captains:
  • Antonio Manzalini from Telecom Italia (Telco Operators team);
  • Wolfgang Dettmann from Infineon Technologies (Device Manufacturers team);
  • Konstantinos Kalaboukas from SingularLogic S.A. (Solution Integrators team);
  • Antonio Jara from University of Applied Sciences - Western Switzerland (Academia team);
  • The role of Barrister was played by Angelos-Christos Anadiotis, from CNIT.
In more detail, Antonio Manzalini introduced the Telco Operator viewpoint with a presentation entitled “A Pervasive and HYper-distributed OS enabling X-as-a-Service”. The second presentation was given by Antonio Jara, who discussed the exploitation roadmap of the IoT through industry-driven standards. The viewpoint of the Solution Integrators was presented by Konstantinos Kalaboukas, who proposed a distributed enterprise service bus (ESB) as a solution to IoT cross-domain/platform interoperability issues. Finally, Wolfgang Dettmann, together with Antonio Escobar, provided an exhaustive overview of the protocol stacks of the main technologies used to support the IoT. After the Captains’ presentations, the audience joined their preferred teams to feed the final discussion.

The main conclusion was that the systemic nature of “softwarization” - from the Things to the Cloud - will open challenging new scenarios, capable of redesigning current ICT value chains and having far-reaching socio-economic impacts. The Networking Session offered the opportunity to set up a community to discuss how to overcome the current fragmentation, thus paving the way towards Blue Ocean scenarios.

Thanks to the relevance of its topic and the experience of the Captains, the session attracted a lot of interest, demonstrated by the number of participants who attended the event (some people remained standing or sat on the floor) and by the number of registrations to the mailing list organized to support the session.

More information on the event is available on the website http://www.input-project.eu/iotstacks.

Giovanni Schembra (University of Catania) schembra@dieei.unict.it

03 November 2015

IEEE SDN Initiative Launches Newsletter Highlighting Global Industry Developments

I am pleased to announce that the inaugural issue of the IEEE SDN Initiative eNewsletter is now published and live.

Link to the eNewsletter on the SDN web portal:  http://sdn.ieee.org/newsletter 

The eNewsletter is a bi-monthly, technically focused online publication that highlights current SDN-related technology developments, innovations, and trends from the world’s top subject matter experts, researchers and practitioners. 

Please join us !

02 November 2015

Softwarization steered by future smart terminals, robots, autonomous machines...

It has been mentioned several times that SDN and NFV address, respectively, a clear separation of hardware and software, and virtualization: two independent paradigms that will benefit each other. The major impact will start at the “edge” of current Telecommunications networks. In fact, technology today makes it possible to distribute, at the “edges” of current Telecommunications networks, services and functions that up to now used to run in centralized systems.

In a few years, smart terminals (including robots, autonomous machines, etc.) will be an integral part of the network, in many cases creating and using sub-networks by themselves. Sensors and actuators will be pervasive, and service execution and data storage will be widely distributed thanks to the availability of ultra-high bandwidth pipes making capillary interconnections at almost-zero latency.

The network will be transformed from a fabric of interconnected closed boxes (today’s nodes, e.g., switches, routers, middle-boxes, etc.) into a continuum of logical containers (e.g., Virtual Machines or Docker containers) executing millions of software processes interacting with each other. If it makes sense to allocate, move or change a functionality in a smart terminal, even a User will be able to do it.

Analysts forecast that the SDN and NFV market will grow at an 86% CAGR, from being worth approx. USD 2 Billion in 2015 to USD 45 Billion+ in 2020. Specifically, North America is expected to lead the market; Asia-Pacific including Japan is the fastest-growing market and is expected soon to replace Europe as the second biggest contributor. But it will be much more than that, as SDN and NFV will impact not only the network but also the service platforms beyond the centralized Data Centers, up to the “edge” (access, home, office…) and the very final terminals.
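The quoted forecast is easy to sanity-check: USD 2 Billion compounded at 86% per year for five years does land just above USD 44 Billion.

```python
# Compound annual growth: value = base * (1 + CAGR)^years
def project(base, cagr, years):
    return base * (1 + cagr) ** years

print(round(project(2.0, 0.86, 5), 1))  # → 44.5 (USD billion) by 2020
```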

Consider that the number of smart-phones sold versus network equipment units is billions against millions, with an economic balance of 70% vs 30%. This means that the overall Telecommunications market is already led by smart-phones, and it is likely that tomorrow’s SDN-NFV market will also be steered by future smart terminals, such as robots, drones and any sort of autonomous machine equipped with processing, storage and communications capabilities and with sensors-actuators, becoming an integrated part of a “continuum of logical containers”.

01 November 2015

A Global Operating System «from the Things to the Clouds»

We know that, in computing, the adoption of Operating Systems facilitated the development and diffusion of applications by providing controlled access to high-level abstractions of the computing hardware resources (e.g., memory, storage, communication) and information (e.g., files, directories). Similarly, in Telecommunications infrastructures - currently undergoing Softwarization - a distributed Operating System would ease and “boom” a new wave of service development (X-as-a-Service) by providing high-level abstractions for all the resources of Telecommunications and ICT infrastructures. This is the vision and concept of a Global Operating System, spanning from the “Things to the Clouds”.

Telecommunications and ICT infrastructures are indeed becoming a giant decentralized super-computer with a wide-area networking fabric: a sort of flexible, highly adaptable and pervasive virtual environment of logical resources, from the cloud up to the terminals and the smart things.

So, in the same way (metaphorically) as a computer has an Operating System - dictating the way it works and providing services as a foundation upon which all applications are built - this wide-area decentralized super-computer will have to have a Global Operating System on top of which any network and service functions (including control and management) will run as “applications”.

This is the vision that I presented last week at the “EAI International Conference on Software Defined Wireless Networks and Cognitive Technologies for IoT”. The presentation can be downloaded from http://sdn.ieee.org/. But it’s more than a vision: believe it or not, we are about to develop it, leveraging the available open source software.

The Global Operating System would not manage the infrastructure itself; rather, it would be a real-time software environment equipped with APIs for supporting a broad spectrum of control and management applications (performing the OSS/BSS tasks) and ICT services.

Practically, Softwarization will make Telecommunications and ICT disappear into reality, whilst the Global Operating System will be its “nervous system”. And it will have “nerve endings” reaching up to the things and the “future smart-terminals” (e.g., autonomous machines, robots, drones, etc.).
In fact, all the infrastructure “elements” will be abstracted with the same formalism, in terms of a common data model (e.g., by using YANG). This would be a huge unification and simplification. Services will run in “virtual slices” made of combinations of computing-storage logical resources (e.g., Containers) interconnected by sets of Virtual Networks (VN). For configurations, see OpenConfig: http://www.openconfig.net/.
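A sketch of what “one formalism for every element” could look like (this is an illustrative schema in Python, not actual YANG; all field names and the example slice are hypothetical): every element carries the same shape of record, and a slice is simply a composition of such elements.

```python
# Illustrative common data model: every infrastructure element, whether a
# container or a virtual network, is described by the same record shape,
# and a service "slice" composes them. Hypothetical schema, not real YANG.
from dataclasses import dataclass, field

@dataclass
class Element:
    name: str
    kind: str                 # e.g. "container" or "virtual-network"
    config: dict = field(default_factory=dict)

@dataclass
class Slice:
    name: str
    elements: list = field(default_factory=list)

    def add(self, element):
        self.elements.append(element)

video_slice = Slice("video-delivery")
video_slice.add(Element("cache-1", "container", {"cpu": 2, "ram_gb": 4}))
video_slice.add(Element("vn-edge", "virtual-network", {"vlan": 101}))
print([e.name for e in video_slice.elements])  # → ['cache-1', 'vn-edge']
```

The unification the post argues for is exactly this: one record shape for everything, so one set of tools can configure everything.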

This is not Science Fiction. X-OS platforms (OpenStack, ONOS) are already there, a reality for the Central Offices, as we learn from ONLab. We want to take a step forward: this (open source) Global Operating System will have to reach the “Things” as well, becoming, as such, capable of “activating and booming” new developments and growth in several ecosystems, with far-reaching positive impacts from the socio-economic viewpoint.

25 October 2015

How Telecommunications will change

In the very beginning of Telecommunications, around 1880, the business seemed to be the sale of telephones: it was up to the buyer of a telephone to roll out the wires needed to connect with another telephone. But it was soon realized that the “connectivity fabric” was the most important, and most expensive, part of the story. So Network Providers started making the huge (Capex) investments needed to deploy (and manage) such network infrastructures.
Telecommunication business didn’t change that much in the following 130 years.
But it will change radically in the coming few years, due to the convergence of a number of (well-known) techno-economic trajectories.
Example: already today, most of the overall Telecommunications business is in the smart-phones, not in the network. Smart-phones are sold in the billions while network equipment units number in the millions, an economic imbalance of roughly 70% versus 30% in favor of the terminals. This means the market is already led by smart-phones, and perhaps tomorrow by future smart terminals, such as robots, drones and autonomous machines of any kind equipped with processing, storage and communications capabilities and with sensors/actuators. This does not mean that the network is no longer important, obviously: it means that it will change radically, and so will our perception of it. It’s the Softwarization of Telecommunications, which I started predicting five years ago, and which is now really becoming reality.

Softwarization will transform Telecommunications infrastructures from today's networks of interconnected closed boxes (today's nodes, e.g., switches, routers, middle-boxes, etc.) into a continuum of logical containers (e.g., Virtual Machines or Docker containers) executing millions of software processes interacting with each other. Whenever it makes sense to allocate, move or change a functionality, whether in a smart-terminal or in an edge DC, even an SME, a User or a machine will be able to do it. Not only humans but also autonomous software entities will be able to produce and consume services in this continuum of ICT virtual resources.
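A toy sketch of what “allocating a functionality wherever it makes sense” could mean in practice (sites, latencies and capacities below are invented for illustration): a function is placed at the lowest-latency execution site in the continuum that still has spare capacity.

```python
# Toy placement policy across the cloud-to-terminal continuum.
# All site names, latencies and slot counts are hypothetical.

sites = [
    {"name": "cloud-dc", "latency_ms": 40, "free_slots": 100},
    {"name": "edge-dc",  "latency_ms": 8,  "free_slots": 3},
    {"name": "terminal", "latency_ms": 1,  "free_slots": 0},
]

def place(sites):
    """Pick the lowest-latency site that still has a free slot."""
    candidates = [s for s in sites if s["free_slots"] > 0]
    return min(candidates, key=lambda s: s["latency_ms"])

# The terminal would be fastest, but it is full, so the edge DC wins.
print(place(sites)["name"])  # → edge-dc
```

Real placement would weigh cost, energy and mobility as well; the point is only that the decision becomes a software policy, not a hardware constraint.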

That’s a radically different perspective for Telecommunications and ICT.
At this level, it makes a lot of sense to investigate how to model, control and steer the dynamics of this software continuum. The mathematics behind this, in fact, may open the way to new models of networked cognition, or even a new theory of information beyond Shannon. It will be about understanding how and why humans or software processes assemble themselves into social networks. Just imagine dynamic logical networks where every logical node is a person or an avatar and every logical link between logical nodes is a relationship between them.
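The "dynamic logical network" picture can be sketched with a few lines of code (a dependency-free toy, with invented node names): nodes are people or avatars, links are relationships, and the topology can be rewired at runtime.

```python
# Minimal dynamic logical network: nodes are persons or avatars,
# undirected links are relationships, and both can change at runtime.

class LogicalNetwork:
    def __init__(self):
        self.links = {}  # node -> set of neighbours

    def connect(self, a, b):
        self.links.setdefault(a, set()).add(b)
        self.links.setdefault(b, set()).add(a)

    def disconnect(self, a, b):
        self.links.get(a, set()).discard(b)
        self.links.get(b, set()).discard(a)

    def degree(self, node):
        return len(self.links.get(node, set()))

net = LogicalNetwork()
net.connect("alice", "bob")        # person-to-person relationship
net.connect("alice", "avatar-7")   # person-to-avatar relationship
print(net.degree("alice"))  # → 2
net.disconnect("alice", "bob")     # the topology rewires dynamically
print(net.degree("alice"))  # → 1
```

Studying how such topologies form and evolve is exactly where the mathematical, social and biological rules mentioned below come into play.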

It’s about the mathematical, social, biological and psychological rules that govern how these logical networks are assembled and operated, and how they will affect our lives and the economy. That’s the mine from which to extract the value.

20 October 2015

Structured Information beyond Shannon…

Information permeates everything: from electrochemical information exchanged in networks of neurons, to biological information stored and processed in living cells, from the information extracted from big data, to the information available on the web…to the information we process and exchange in our daily activities…etc.

On the other hand, our current understanding of information communication is still based on Claude Shannon’s seminal work of 1948, which resulted in a general mathematical theory of reliable communication in the presence of noise. Traditional information theory considers communication from the viewpoint of a channel connecting two endpoints. This approach should be enhanced when considering networks (even social networks, or networks of s/w processes) with massive numbers of sources which relay information in a multi-hop manner over a time-varying logical topology. That’s another story entirely.
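For reference, the classical point-to-point result that this post argues needs extending is Shannon's capacity formula for a noisy channel, C = B·log2(1 + S/N), for bandwidth B and signal-to-noise ratio S/N (the numerical figures below are illustrative only):

```python
# Shannon's 1948 capacity of a single AWGN point-to-point channel:
# C = B * log2(1 + S/N) bits per second. Example figures are illustrative.
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Capacity of an AWGN channel in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 1 MHz channel at 30 dB SNR (linear SNR = 1000):
print(shannon_capacity(1e6, 1000))  # ≈ 9.97 Mbit/s, about 10 bits/s per Hz
```

Everything here is about one channel and two endpoints; it says nothing about how capacity behaves across a massive, time-varying multi-hop topology, which is precisely the gap.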

Frederick P. Brooks, Jr., wrote in “The Great Challenges for Half-Century-Old Computer Science”: “Shannon performed an inestimable service by giving us a definition of Information and a metric for Information as communicated from place to place. We have no theory however that gives us a metric for the Information embodied in structure. . .”

Even more so today, technology acceleration (e.g., in ultra-broadband diffusion, IT performance and miniaturization, systemic softwarization, etc.) is calling for enhancing this model of information, especially when considering the near advent of highly pervasive ICT fabrics with massive numbers of s/w processes relaying information up to the terminals and things. Example: it has been shown that the theoretical aggregate capacity of a multi-hop network grows with the square root of the network size (number of nodes). This promises enormous capacity for ultra-dense network fabrics! A breakthrough is possible here, especially when thinking about soft-RAN.
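The square-root scaling claim refers to the Gupta-Kumar style result for multi-hop wireless networks: aggregate transport capacity grows on the order of √n in the number of nodes n (while per-node throughput shrinks as 1/√n). A sketch of the trend, with arbitrary constants:

```python
# Order-of-magnitude illustration of multi-hop capacity scaling:
# aggregate capacity ~ Theta(sqrt(n)) under the Gupta-Kumar model.
# The per-link rate constant is arbitrary; only the trend matters.
import math

def aggregate_capacity(n, per_link_rate=1.0):
    """Aggregate transport capacity, up to a constant: sqrt(n)."""
    return per_link_rate * math.sqrt(n)

for n in (100, 10_000, 1_000_000):
    print(n, aggregate_capacity(n))
# 100x more nodes → 10x more aggregate capacity
```

So densifying the fabric by a factor of 100 buys roughly a factor of 10 in aggregate capacity, which is what makes ultra-dense deployments (and soft-RAN) attractive despite the per-node penalty.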

But it’s much more than that. This reasoning, once abstracted, is also applicable to deriving new cognitive models, whose basis is, by definition, extracting and elaborating information; new forms of social interaction (even with/between Avatars); or new ways for humans to interact with autonomous machines or environments.

As an example of ongoing activities in this field, the National Science Foundation established, some time ago, the Science and Technology Center for Science of Information, to advance science and technology through a new quantitative understanding of the representation, communication and processing of information in biological, physical, social and engineering systems. The center is located at Purdue University (partners include Berkeley, MIT, Princeton, Stanford, UIUC, UCSD, Bryn Mawr and Howard U.).

My take is that Network and Service Providers, too, will have to shift from the current “one-way value proposition”, where value lies mainly in connectivity, to a “structured information value proposition”: value in the information generated through multiple interactions of humans and/or machines.