The "New Big Network" will be Artificially Intelligent (see my previous post). But how can we exploit it? What are the ICT bottlenecks?
About fifty years ago, Gordon Moore published a paper titled: “Cramming More Components onto Integrated Circuits.” It was the first formulation of the Moore principle that, after some revisions, became a law: every two years the number of transistors on a computer chip doubles.
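The doubling rule compounds dramatically. As a minimal sketch (the baseline of 2,300 transistors for the 1971 Intel 4004 is an illustrative assumption, not a figure from this post):

```python
# Moore's law as compounding doubling: count doubles every two years.
# Baseline (Intel 4004, 1971, ~2,300 transistors) is an assumed example.
def transistors(year, base_year=1971, base_count=2300):
    """Transistor count predicted by doubling every two years."""
    return base_count * 2 ** ((year - base_year) / 2)

# Forty years of doubling multiplies the count by 2**20, about a million.
print(round(transistors(2011) / transistors(1971)))  # -> 1048576
```

The point is the exponent: the same law that gave a million-fold gain in forty years is what hits physical limits once feature sizes reach atomic dimensions.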
Moore’s law is one of the foundations of the Digital Society. Integrated circuits are the bricks from which computers and IT systems are built: as the nice article The Search for a New Machine puts it, “Moore’s law makes computers evolve.” Making transistors smaller means increasing computational capacity at the same cost.
But now that transistor features are approaching the atomic scale, we are running into the scaling limits of this law: the quantum wall.
To learn more about this, have a look at the paper Limits on Fundamental Limits to Computation, in which Igor L. Markov addresses both the limiting factors and the salient trends for achieving faster computation on single-core and parallel processors, by means other than pure scaling.
It should be noted, nevertheless, that this is not just a matter of achieving faster CPUs (or GPUs). There is another wall, which may be even more strategic: the memory wall. The memory hierarchy (e.g., DRAM, SRAM, flash) is responsible for most of the obstacles to faster computation: faster CPUs are of little use without high-speed, high-capacity memory to store bits and deliver them as quickly as possible.
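A back-of-the-envelope model shows why the memory wall dominates. The latency figures below are assumed, order-of-magnitude illustrations (roughly one nanosecond per arithmetic operation, roughly a hundred nanoseconds per uncached DRAM access), not measurements:

```python
# Sketch of the "memory wall": with these assumed, illustrative
# latencies, runtime is dominated by memory access, not arithmetic.
CPU_OP_NS = 1          # assumed cost of one arithmetic operation (ns)
DRAM_ACCESS_NS = 100   # assumed cost of one uncached DRAM fetch (ns)

def runtime_ns(n_ops, miss_rate):
    """Time for n_ops operations when a fraction miss_rate goes to DRAM."""
    return n_ops * (CPU_OP_NS + miss_rate * DRAM_ACCESS_NS)

base = runtime_ns(1_000_000, 0.0)   # pure compute, memory ignored
real = runtime_ns(1_000_000, 0.1)   # only 10% of accesses hit DRAM
print(real / base)                  # roughly 11x slower overall
```

Even when only one access in ten reaches DRAM, the memory term is ten times larger than the compute term, which is why a faster CPU alone changes little.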
The real challenge is developing a “universal memory”: a data storage device combining the cost benefits of DRAM, the speed of SRAM, the non-volatility of flash memory, and virtually unlimited endurance. It may then no longer matter whether silicon CPU elements become smaller than seven nanometers or faster than four gigahertz.
In this direction, Hewlett–Packard is developing chips based on a new type of electronic component: the memristor. The name combines “memory” and “resistor”: the device has the remarkable ability to “remember” how much current has previously flowed through it, thus combining storage and random-access memory. Like neurons, memristors transmit and encode information as well as store it.
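The “remembers the current that flowed through it” behavior can be sketched with a toy model in the spirit of HP’s linear-drift description. All parameter values here (on/off resistances, switching charge) are assumptions for illustration, not real device characteristics:

```python
# Toy memristor (simplified linear-drift-style model): resistance
# depends on the total charge that has flowed through the device.
R_ON, R_OFF = 100.0, 16000.0  # assumed fully-on / fully-off resistances (ohms)
Q_MAX = 1e-4                  # assumed charge needed to switch fully (coulombs)

class Memristor:
    def __init__(self):
        self.q = 0.0  # accumulated charge: the device's "memory"

    def apply_current(self, i, dt):
        # Integrate current over time; clamp the internal state.
        self.q = min(max(self.q + i * dt, 0.0), Q_MAX)

    def resistance(self):
        # Resistance interpolates between R_OFF (no charge) and R_ON (full).
        return R_OFF + (R_ON - R_OFF) * (self.q / Q_MAX)

m = Memristor()
r_before = m.resistance()    # fresh device: high resistance
m.apply_current(1e-3, 0.05)  # drive 1 mA for 50 ms
r_after = m.resistance()     # lower resistance, retained with no power
```

The key property for a universal memory is in the last line: the state survives after the current is removed, so the same element acts as both storage and working memory.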
IBM, too, is introducing a brain-inspired computer: through a number of steps (Phase 0, Phase 1, Phase 2, and Phase 3), from neuroscience to supercomputing, they came up with a new computer architecture, a new programming language, algorithms, applications, and even a chip: TrueNorth, in which five billion transistors model a million neurons linked by 256 million synaptic connections.
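To give a feel for what those million neurons compute, here is a minimal leaky integrate-and-fire neuron, the classic textbook spiking model. The leak factor and threshold are assumed illustrative values; this is not TrueNorth’s actual neuron equation:

```python
# Minimal leaky integrate-and-fire neuron (illustrative parameters;
# not TrueNorth's real model): integrate inputs, leak over time,
# and emit a spike whenever the membrane potential crosses a threshold.
LEAK = 0.9       # assumed per-step membrane leak factor
THRESHOLD = 1.0  # assumed firing threshold

def simulate(inputs):
    """Return the spike train produced by a stream of input currents."""
    v, spikes = 0.0, []
    for i in inputs:
        v = v * LEAK + i      # leak, then integrate the input
        if v >= THRESHOLD:    # threshold crossed: emit a spike
            spikes.append(1)
            v = 0.0           # reset membrane potential
        else:
            spikes.append(0)
    return spikes

print(simulate([0.5, 0.5, 0.5, 0.0, 0.9, 0.9]))  # -> [0, 0, 1, 0, 0, 1]
```

Each neuron stores its own state (the membrane potential) right next to where it computes, which is exactly the storage-plus-processing co-location the memristor discussion above points toward.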
If the future “New Big Network” is to interconnect a growing number of IT systems, machines, terminals, smart things, devices… then it will have to transport, at ultra-low latency, petabytes of information to and from Data Centers for storage and processing. Surely SDN, NFV and, more broadly, Softwarization will make networks highly flexible. But if a breakthrough towards universal memory can bring “supercomputer-like” capabilities to smaller, less energy-hungry edge-fog network elements, then data streams could be stored, and network functions pre-processed, locally. Universal memory, more than ever-faster CPUs or unlimited bandwidth, could change the “equation” of the future “New Big A.I. Network”.