Mobile network operators would have had an easier life if it weren't for smartphones and the flood of data traffic they unleashed. Apps have led to a massive increase in the volume of data moving back and forth over phone networks (not just from users; the ads in free apps have contributed too), and operators are struggling to cope.
And this is before the Internet of Things really takes off, as it's expected to in the coming years, adding millions more devices to these networks (particularly enthusiastic forecasts put the total in the billions). Catering for all this data traffic isn't simply a matter of widening the pipe; it will require a massive expansion of the infrastructure needed to host these networks.
Quite apart from the time it will take to put that infrastructure in place, there’s the cost. Businesses and consumers want more bandwidth for less money, but the money has to come from somewhere.
Enter chip giant Intel, not with its capacious cheque book at the ready but with a notion to commoditise telecommunications network infrastructure by ridding it of expensive, proprietary, function-specific and purpose-built hardware and replacing it with cheap general-purpose kit able to replicate in software the functionality delivered by the old boxes.
Intel’s motivation is not philanthropic, of course. These new, standard devices will, it hopes, be based on its processors.
The 1990s all over again
Today's networks are based around boxes designed to do very specific jobs. Most of those tasks were defined years ago, and the hardware was built more or less on a bespoke basis for each operator. That makes the boxes very expensive. It also means they can't be readily adapted as network demand changes over time. Instead, vendors come up with new kit, timing its availability to tie in with established telco upgrade cycles.
It used to be that way in the server business too, but through the 1990s and early 2000s, x86-based commodity hardware running Linux or Windows proved itself to be much cheaper, more flexible, more scalable and easier to upgrade than older Risc-based machines.
The old way and the new: NFV replaces proprietary, bespoke boxes (left) with as many standard servers as you need
Intel's logic centres on the notion that if relatively low-cost x86 servers can successfully replace pricier servers running on server makers' own silicon, then surely they can likewise replace all those pricey proprietary boxes currently attached to base stations and other parts of the network.
Even the chip giant admits x86 servers aren't going to push out the established hardware in the near term, and certainly not all of it at once. But it scents a shift in the mood of the telcos themselves: this is a change they want, and rather a lot of them are working together to make it happen.
A process has already been established to define how this shift can be made quickly and in a way that better meets the needs of telcos. It is called Network Functions Virtualisation (NFV).
Bespoke hardware out, commodity kit in
NFV essentially replaces proprietary boxes with software running on standard servers. Or, better still, on a single server whose processors' virtualisation capabilities let it host the workloads of multiple boxes, each with its own operating system, on one unit.
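As a rough illustration of that consolidation (the class, function names and capacity figures below are invented for this sketch, not drawn from any NFV specification), one physical server's virtualisation layer carves its CPU and memory among several virtual network functions that would previously each have occupied a dedicated appliance:

```python
# Toy model of NFV consolidation: one commodity server hosts several
# virtual network functions (VNFs) that used to be separate boxes.
# All names and capacities here are illustrative.

class Server:
    def __init__(self, vcpus, ram_gb):
        self.vcpus = vcpus
        self.ram_gb = ram_gb
        self.vnfs = []

    def deploy(self, name, vcpus, ram_gb):
        """Place a VNF on this host if enough capacity remains."""
        used_cpu = sum(v["vcpus"] for v in self.vnfs)
        used_ram = sum(v["ram_gb"] for v in self.vnfs)
        if used_cpu + vcpus > self.vcpus or used_ram + ram_gb > self.ram_gb:
            return False  # would need another commodity server instead
        self.vnfs.append({"name": name, "vcpus": vcpus, "ram_gb": ram_gb})
        return True

host = Server(vcpus=16, ram_gb=64)
for appliance in [("firewall", 4, 8), ("threat-monitor", 4, 16), ("accelerator", 4, 8)]:
    host.deploy(*appliance)

print([v["name"] for v in host.vnfs])
# three former hardware appliances now share one standard server
```

The point of the model is the economics: when demand changes, the operator redeploys software onto spare capacity (or adds another identical server) rather than procuring a new purpose-built box.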
BT is a keen supporter of the scheme. According to Don Clarke, the British telco’s Head of Network Evolution Innovation, the company has been researching NFV for the best part of three years now.
“Two-and-a-half years ago, we started a research programme to build a proof-of-concept platform to test network-type workflows on standard industry servers,” he says. BT took hardware from HP, loaded it with a Wind River embedded software stack and began seeing what network hardware functionality it could replicate in software.
“We implemented a network function that’s well understood in BT, and tested it at scale and at performance,” says Clarke. “Pretty quickly it became apparent that we could get the same or better performance for a quite complex network function from this hardware as we could from a hardware-optimised device we’d bought in volume from one of our network equipment vendors.”
The next stage was to experiment with multiple functions - firewall, threat monitoring, connection acceleration - in parallel. “We asked, ‘OK, if I take an industry-standard server, what hardware appliances that currently have to be procured, installed and supported individually can I integrate by loading them as software equivalents on a single box, and have them deliver the same performance as the individual units?’ I can confidently say that, from a technical perspective, we did that. We proved the concept.”
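The functions BT consolidated can be pictured as a simple in-software service chain. In this sketch (the packet fields, rules and function bodies are invented for illustration; they are not BT's implementation), each packet passes through firewall, threat-monitoring and acceleration stages in turn, the kind of pipeline that once required three separate appliances:

```python
# Minimal in-software service chain: each stage is a plain function,
# standing in for what used to be a dedicated hardware appliance.
# Packet fields and rules are invented for illustration only.

def firewall(packet):
    # Drop traffic to ports that are not explicitly allowed.
    return packet if packet["dst_port"] in {80, 443} else None

def threat_monitor(packet):
    # Flag suspicious payloads for inspection rather than dropping them.
    packet["flagged"] = "attack" in packet["payload"]
    return packet

def accelerator(packet):
    # Stand-in for connection acceleration (e.g. compression/trimming).
    packet["payload"] = packet["payload"].strip()
    return packet

CHAIN = [firewall, threat_monitor, accelerator]

def process(packet):
    for stage in CHAIN:
        packet = stage(packet)
        if packet is None:  # dropped by an earlier stage
            return None
    return packet

print(process({"dst_port": 443, "payload": "  hello  "}))
print(process({"dst_port": 23, "payload": "telnet probe"}))  # blocked
```

Swapping, reordering or adding a stage is a software change on one box, which is exactly the flexibility the bespoke-hardware model lacks.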