The throughput of a router is mainly limited by the routing (control plane) processing, which caps the maximum number of packets the router can process per unit of time: as a consequence, there is an inevitable trade-off between the number of ports (node degree) and the speed of each port (bandwidth per connection) of a router. Router vendors cannot build a router that has both a large degree and a large bandwidth per connection, mainly because of this routing-processing limitation.
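The trade-off above can be made concrete with a back-of-the-envelope sketch: if the control/lookup processing sustains a fixed aggregate packet rate, then degree and per-port bandwidth divide a fixed budget between them. The numbers below are illustrative assumptions, not vendor figures.

```python
# Illustrative sketch (hypothetical numbers): routing processing sustains a
# fixed aggregate packet rate, so degree and per-port speed trade off.

PROCESSING_BUDGET_PPS = 1_000_000_000   # assumed: 1 Gpps aggregate lookup capacity
AVG_PACKET_BITS = 8_000                 # assumed: 1000-byte average packet

def max_port_speed_gbps(degree: int) -> float:
    """Per-port bandwidth sustainable at full load under the fixed budget."""
    aggregate_bps = PROCESSING_BUDGET_PPS * AVG_PACKET_BITS
    return aggregate_bps / degree / 1e9

for degree in (4, 16, 64):
    # Doubling the degree halves the sustainable speed of each port.
    print(f"degree {degree:3d} -> {max_port_speed_gbps(degree):8.1f} Gbps/port")
```

With these assumed numbers, a 4-port box can run each port far faster than a 64-port box, which is why core routers (few, fast ports) and edge routers (many, slower ports) sit at opposite ends of the same budget.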
Normally, nodes in the core network have a large bandwidth per connection and thus a small degree.
The opposite holds in the edge network: typically, the degree of an edge router is almost five times larger than that of a core router, and grows even larger as we move closer to the end users.
On the other hand, we have to consider that IT advances are making it possible to build software routers at 100 Gbps or more, as well as software router architectures (e.g. RouteBricks) capable of parallelizing routing functionality both across multiple servers and across multiple cores within a single server.
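A minimal sketch of this style of parallelism: incoming packets are spread across workers (cores or servers) by hashing their flow identifier, so each flow stays on one worker and aggregate throughput scales with the worker count. The packet format and function names are illustrative assumptions, not the RouteBricks API.

```python
# Sketch of RouteBricks-style parallel packet processing: hash each flow
# to a worker so per-flow ordering is preserved while load spreads out.
from collections import namedtuple

Packet = namedtuple("Packet", ["src", "dst", "payload"])

def worker_for(packet: Packet, n_workers: int) -> int:
    """Pick a worker by hashing the flow (src, dst) pair."""
    return hash((packet.src, packet.dst)) % n_workers

def dispatch(packets, n_workers):
    """Distribute packets into one queue per worker."""
    queues = [[] for _ in range(n_workers)]
    for p in packets:
        queues[worker_for(p, n_workers)].append(p)
    return queues
```

Because the hash is a pure function of the flow identifier, all packets of one flow land on the same worker, which avoids cross-core reordering and shared state.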
So, thanks also to SDN and NFV, it is likely that it will be possible to build high-speed software routers using low-cost, commodity hardware. This means that the routing-processing limitation could be overcome by using the huge amount of processing power made available in large data centers (in other words, by logically moving the control plane of software routers, separated from the forwarding hardware, into the Cloud).
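The control/forwarding separation can be sketched in a few lines: the forwarding element keeps only a cached flow table and consults a (logically remote) controller on a table miss. All class and method names here are illustrative stand-ins, not a real SDN controller API.

```python
# Toy sketch of SDN-style separation: control logic lives elsewhere
# (e.g. in a data center); the forwarding element only caches its answers.

class Controller:
    """Stands in for control-plane logic hosted remotely, e.g. in the Cloud."""
    def __init__(self, routes):
        self.routes = routes                  # destination -> output port

    def resolve(self, dst):
        return self.routes.get(dst, "drop")   # default action for unknown dst

class ForwardingElement:
    """Data-plane device: forwards from a local flow table, asks on a miss."""
    def __init__(self, controller):
        self.controller = controller
        self.flow_table = {}

    def forward(self, dst):
        if dst not in self.flow_table:        # table miss: query the controller
            self.flow_table[dst] = self.controller.resolve(dst)
        return self.flow_table[dst]           # hit: served locally at line rate

ctrl = Controller({"10.0.0.0/8": "port1"})
fe = ForwardingElement(ctrl)
fe.forward("10.0.0.0/8")   # first packet of the flow: resolved, then cached
fe.forward("10.0.0.0/8")   # subsequent packets: local flow-table hit
```

The point of the sketch is the cost structure: only the first packet of a flow pays the round trip to the remote control plane, so the expensive routing processing can live in a data center while forwarding stays in cheap hardware.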
This would change, in principle, the (economic) equation of the network: over-provisioning connectivity rather than just over-provisioning bandwidth. Over-provisioning connectivity pays off better than over-provisioning capacity: it becomes possible to create a very large number of flexible topologies to choose from, even almost at random, or to program and control the QoS according to users' requests.
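The idea of carving many selectable topologies out of over-provisioned connectivity can be sketched as follows: from one densely connected physical graph, repeatedly extract random logical topologies (here, random spanning trees) and pick one per service request. The graph and the random selection rule are illustrative assumptions.

```python
# Sketch: over-provisioned connectivity lets us carve many logical
# topologies from one physical graph and choose among them per request.
import random

# Physical connectivity: adjacency lists of a small fully connected graph.
physical = {
    "A": ["B", "C", "D"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["A", "B", "C"],
}

def random_spanning_topology(graph, rng):
    """Carve one logical topology: a random spanning tree of the graph."""
    nodes = list(graph)
    root = rng.choice(nodes)
    visited, edges, frontier = {root}, [], [root]
    while frontier:
        node = rng.choice(frontier)
        candidates = [n for n in graph[node] if n not in visited]
        if not candidates:
            frontier.remove(node)     # fully explored, retire this node
            continue
        nxt = rng.choice(candidates)
        visited.add(nxt)
        edges.append((node, nxt))
        frontier.append(nxt)
    return edges

rng = random.Random(0)
topologies = [random_spanning_topology(physical, rng) for _ in range(3)]
chosen = rng.choice(topologies)       # e.g. one topology per service request
```

A real deployment would replace the random choice with a QoS-driven one, but the structural point stands: with enough physical links, topology becomes a per-request decision rather than a fixed design.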
Up to now, over-provisioning connectivity in a network has typically been more expensive than over-provisioning capacity, but tomorrow this equation may change. Furthermore, consider adding to this the over-provisioning of processing and storage capabilities: that is the "IT-ization" of the network, i.e. network adaptability for future ICT services.
In data centers, connectivity is already over-provisioned, but the story there is different: the network accounts for a relatively small fraction of the cost compared to servers, electricity, and cooling, so over-provisioning connectivity makes economic sense (moreover, traffic demands in data centers are quite volatile and not well understood, so over-provisioning connectivity is strictly necessary). In wide-area networks, traffic fluctuation has so far been over time rather than over space, so it has been mitigated by capacity over-provisioning, but this is going to change in the future.