30 December 2015

The Search for the “New Big A.I. Network”

The "New Big Network" will be Artificially Intelligent (my previous post). But which way to exploit it ? What are the ICT bottlenecks ?

About fifty years ago, Gordon Moore published a paper titled “Cramming More Components onto Integrated Circuits.” It was the first formulation of the Moore principle that, after some revisions, became a law: every two years the number of transistors on a computer chip doubles.
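As a back-of-the-envelope sketch (the function and the one-million-transistor starting figure below are illustrative assumptions, not numbers from Moore's paper), the doubling rule compounds dramatically:

```python
def transistors(start_count: int, years: float, doubling_period: float = 2.0) -> int:
    """Project a transistor count forward, assuming a clean doubling cadence."""
    return int(start_count * 2 ** (years / doubling_period))

# A hypothetical chip with one million transistors, projected 20 years out:
# 20 years / 2-year doubling period = 10 doublings = a 1024x increase.
print(transistors(1_000_000, 20))  # 1024000000
```

Ten doublings turn a million transistors into a billion, which is roughly the trajectory the industry actually followed.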

Moore’s law is one of the foundations of the Digital Society. Integrated circuits are the bricks for building computers and IT systems: as this nice article, The Search for a New Machine, puts it, “Moore’s law makes computers evolve.” Making transistors smaller means increasing computational capacity at the same cost. But now that we are approaching the atomic scale (xx nanometers), we see the scaling limits of this law: the quantum wall. To learn more, have a look at the paper Limits on Fundamental Limits to Computation, in which Igor L. Markov addresses both the limiting factors and the salient trends for achieving faster computation, for single-core or parallel processors, by means other than scaling alone.

It should be noted, nevertheless, that it is not just a matter of achieving faster CPUs (or GPUs). There is another wall, which might be even more strategic: the memory wall. The memory hierarchy (e.g., DRAM, SRAM, flash) is responsible for most of the obstacles to faster computation: faster CPUs are of little use without high-speed, high-capacity memory to store bits and deliver them as quickly as possible.
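To make the memory wall concrete, here is a small sketch using order-of-magnitude latency figures commonly quoted for each memory tier (illustrative round numbers and a hypothetical 4 GHz core, not measurements of any specific device); the point is how many cycles a fast core wastes waiting on each level:

```python
# Order-of-magnitude access latencies for memory tiers (illustrative round
# numbers, not measurements of any specific device).
latency_ns = {
    "SRAM (on-chip cache)": 1,         # ~1 ns
    "DRAM (main memory)": 100,         # ~100 ns
    "NAND flash (SSD read)": 100_000,  # ~100 microseconds
}

CPU_CYCLE_NS = 0.25  # a hypothetical 4 GHz core: one cycle every 0.25 ns

# Cycles the core spends stalled on a single access to each tier:
stall_cycles = {tier: ns / CPU_CYCLE_NS for tier, ns in latency_ns.items()}
for tier, cycles in stall_cycles.items():
    print(f"{tier}: ~{cycles:,.0f} cycles")
```

A single flash access costs such a core hundreds of thousands of cycles, which is why a memory combining flash-like capacity with SRAM-like speed would change the picture more than a faster CPU would.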

The real challenge is developing a “universal memory”: a data-storage device combining the cost benefits of DRAM, the speed of SRAM, the non-volatility of flash memory, and practically unlimited durability. It may then no longer matter whether silicon CPU elements become smaller than seven nanometers or faster than four gigahertz.

In this direction, Hewlett–Packard is developing chips based on a new type of electronic component: the memristor. The name combines “memory” and “resistor”: a memristor has the strange ability to “remember” how much current previously flowed through it, thus combining storage and random-access memory. Like neurons, memristors transmit and encode information as well as store it.
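That “remembering” behaviour can be sketched with a toy Euler simulation of the linear ion-drift memristor model; all parameter values below are made up for illustration and are not figures for HP's actual devices:

```python
R_ON, R_OFF = 100.0, 16_000.0  # resistance when fully doped / fully undoped (ohms)
K = 1e6                        # made-up drift coefficient (state change per coulomb)

def resistance_after(voltage: float, duration_s: float, dt: float = 1e-6) -> float:
    """Memristance after driving the device at a constant voltage for a while."""
    x = 0.1  # doped fraction of the device: the internal state variable
    for _ in range(int(duration_s / dt)):
        m = R_ON * x + R_OFF * (1.0 - x)       # instantaneous memristance
        i = voltage / m                        # current through the device
        x = min(1.0, max(0.0, x + K * i * dt)) # state drifts with charge flow
    return R_ON * x + R_OFF * (1.0 - x)

# The longer current flows, the lower the resistance the device "remembers":
print(resistance_after(1.0, 0.001) > resistance_after(1.0, 0.01))  # True
```

The final resistance depends on the charge history, not just on the present input, which is exactly the storage-plus-processing property the post describes.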

IBM, too, is introducing a brain-inspired computer: through a number of steps (Phase 0, Phase 1, Phase 2, and Phase 3), from neuroscience to supercomputing, they came up with a new computer architecture, a new programming language, algorithms, applications, and even a chip, TrueNorth, in which five billion transistors model a million neurons linked by 256 million synaptic connections.

If the future “New Big Network” has to interconnect a growing number of IT systems, machines, terminals, smart things, devices … then it will have to transmit - at ultra-low latency - petabytes of information to and from Data Centers for storage and processing. Surely, SDN, NFV… Softwarization will make networks highly flexible. But if a breakthrough towards universal memory can bring “supercomputer-like” capabilities to smaller, less energy-hungry edge-fog network elements, then data streams could be stored, and network functions pre-processed, locally. Universal memory, more than ever-faster CPUs or unlimited bandwidth, could change the “equation” of the future “New Big A.I. Network”.

27 December 2015

Artificially Intelligent Internet

At the “SDN & IoT Industry Session” of the World Forum - IoT 2015 (Milan, 13th Dec. 2015) I argued that the deployment of SDN-NFV paradigms - at the edge of current infrastructures - will become a powerful enabler for IoT platforms: in other words, the distinction between Edge SDN and IoT is going to disappear, in a “fusion” of the enabling ICT technologies.

Today I am arguing that another border is going to blur: the one between ICT and A.I., leading to the emergence of the Artificially Intelligent Internet. There is plenty of evidence around…

Have a look at this report, which argues that A.I. interfaces will replace smart-phones within five years. Machines are starting to learn as humans do; researchers are teaching them to see and have even developed a way for A.I. machines to learn from the crowd. Google is working on converting language into a problem of vector-space mathematics. Next, it will bring “intelligence” into an Operating System to empower devices with the capacity for logic and natural conversation.
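The “vector-space mathematics” idea can be illustrated with a toy example. The hand-made 3-dimensional vectors below are stand-ins for real learned embeddings (word2vec-style models use hundreds of dimensions trained on large text corpora), but they show how word analogies become arithmetic:

```python
import math

# Hand-made toy "word vectors" (real embeddings are learned from text and
# have hundreds of dimensions; these values are purely illustrative).
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 for same direction, 0.0 for orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# The classic analogy as arithmetic: king - man + woman should land near "queen".
target = [k - m + w for k, m, w in zip(vectors["king"], vectors["man"], vectors["woman"])]
best = max((w for w in vectors if w != "king"), key=lambda w: cosine(target, vectors[w]))
print(best)  # queen
```

Once words are points in a vector space, “understanding” relationships reduces to geometry, which is what makes the approach attractive as an Internet-scale building block.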

In summary, the equation of the future Internet will be based on a borderless “fusion” of technologies.

A.I. interfaces will be the next terminals for human and non-human Users. Mathematics will be the language, coded in terms of software services and functions; IT processing will run these functions and services in order to make decisions and take service actions; IT storage systems will store encoded/actionable information; and, eventually, the Big Network will create a web of relationships between producers and consumers of services… by hooking billions of processes with ultra-low latency connections. Telecommunications, ICT and A.I. will “merge” together.

The Artificially Intelligent Internet will embed a continuum of “networked cognition loops” into reality. This will create the conditions for a new economy.

12 December 2015

Softwarization towards closing a CAPEX cycle

In the past, the barriers to entry for new Players in telecommunications were rather high, and primarily concerned the necessity for massive capital expenditures (CAPEX).

The telecommunications infrastructure in fact required capital investments at a level that made it very difficult for any new company to enter. The existing major Operators took decades to construct their infrastructures, and they used to possess an enormous advantage over any new company attempting to establish a presence in the telecommunications market.

Today the story is changing rapidly. Technology drivers (e.g., high-performance standard hardware coupled with a wide deployment of ultra-broadband connectivity) are bringing the telecommunications CAPEX cycle towards its end. Low investments, low RoI. On one side we have created the conditions for the development of a “hyper-connected world”; on the other side, Softwarization is “pulverizing” the telecommunications market, which is moreover under the pressure of OTTs, which provide global ICT services (almost) “for free” and (mostly) “un-regulated”.

In brief, Softwarization is enabling and driving:
  1. a converged (fixed-mobile Network and Data Centre) infrastructure (whose CAPEX is gradually decreasing) where the creation/provision of services is likely to become decoupled from the Operations. This will bring a related split of business roles (infrastructure/service enablers and the service providers);
  2. a converged industrial structure covering voice services, Internet access services, and ‘OTT’ services, “packaged” in various ways;
  3. a consequent split of roles among vendors supplying the infrastructure/service enablers and the service providers: the market will see high-volume standard hardware (e.g., servers and switches/routers) and a world of “software”.

This transition will have a number of far-reaching implications in terms of jobs, culture and socio-economic transformation. The grand question is how to open a new cycle!