A matter of resources
Clearly, NFV can be handled in many different ways, from the scarily efficient to the horrifically inefficient. The how and why of these different implementations will come to matter a great deal as more and more administrators adopt NFV in their data centres. As such, it's time we all started learning.
Virtualisation administrators are horrible about overprovisioning. If you don't believe me, have a talk with VMTurbo or CloudPhysics about the wildly inappropriate resource allocation they see when they first enter a new customer environment.
Based on the available evidence, virtualisation administrators have no idea how much of each resource to allocate to application VMs. And judging from conversations with cloud providers, application administrators have no idea whatsoever how to select appropriately sized instances either.
Network administrators are used to working with hardware appliances that have Easy Button descriptors on them, such as "this unit supports 5,000 users". That this is a best guess based on user types that may or may not match the users at the organisation in question doesn't matter: 5,000 users will use that device and they will like it.
If any or all of these groups are going to be implementing NFV, they need to understand a lot more about how much – and more critically, how little – is actually required to perform a given task.
Administrators used to standing up 4GB web servers that could quite cheerfully run in 256MB are going to have a lot of problems deciding whether the stock NFV offerings are adequate, should be tweaked, or should be replaced with third-party alternatives.
Put simply: we can't go around spinning up 4GB RAM worth of NFV to protect 1GB RAM worth of web server.
Experiment, for science!
The quickest path for admins is to experiment. It doesn't take much to spin up a Linux – or even a Windows – VM and use it as your network edge device. How many users can you support on how small a footprint? Are you sure that's the smallest you can go?
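To get a feel for how small "small" can be, you don't even need a VM to start: a toy experiment on your workstation makes the point. The sketch below (a hypothetical illustration, not a real edge device) runs a minimal TCP echo service in one thread, hammers it with a thousand connections, and then reports peak memory usage via the standard library's `resource` module. The listening port is chosen automatically; nothing here is specific to any NFV product.

```python
# A toy footprint experiment: a minimal TCP echo service plus a client
# that opens many short-lived connections. The point is to see how
# little memory a simple network function actually consumes.
import resource
import socket
import threading

def echo_server(sock: socket.socket) -> None:
    """Accept connections and echo one message back on each."""
    while True:
        conn, _ = sock.accept()
        with conn:
            data = conn.recv(1024)
            if data == b"quit":   # sentinel to shut the server down
                return
            conn.sendall(data)

listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # port 0 = let the OS pick a free port
listener.listen(128)
port = listener.getsockname()[1]

t = threading.Thread(target=echo_server, args=(listener,), daemon=True)
t.start()

# Hammer the service with 1,000 sequential connections.
for _ in range(1000):
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(b"ping")
        assert c.recv(1024) == b"ping"

with socket.create_connection(("127.0.0.1", port)) as c:
    c.sendall(b"quit")

# ru_maxrss is in kilobytes on Linux, bytes on macOS.
rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(f"peak RSS after 1,000 connections: {rss}")
```

On a typical Linux box the whole exercise stays well under the footprint of even a "minimal" stock VM, which is exactly the gap this column is complaining about. Scale the same curiosity up to a real Linux guest and you'll quickly find your own floor.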
Consider my home router. My router is a Netgear WNDR3700v2. It's a fairly weedy MIPS processor with only 64MB of RAM, but it can gleefully run Linux and handle tens of thousands of network connections whilst providing a full suite of security services. It serves as a VPN server and IPv6 tunnel, and handles multiple SSIDs on two wireless radios, all whilst passing 50Mbit/s of internet traffic and juggling four 1Gbit ports on the LANward side.
I've never seen the CPU utilisation go above 50 per cent, and RAM usage is usually less than 75 per cent. Think about that for a moment and then ask yourself: what kind of resources should an NFV VM really need?
Microsoft may actually be ahead of the game. Its Nano Server efforts have been quite well publicised. Windows is well supported and there is a rich ecosystem of tools to manage and configure it.
By the time it comes out, the rest of the market should have done its job teaching everyone what NFV is. Throw in some containerisation and I can see Windows Nano Server being usable – if a little hefty – for NFV, while still being within the comfort zone of many administrators.
Linux is in a tougher place. Most mainstream distributions are also pretty hefty, even on "minimal" installs. There are, of course, distributions that specialise in small footprints, but they lack the marketing budgets and brand awareness. They also tend to be more difficult to work with.
This leads us to the magical land of third-party NFV options. Anyone who has dipped a toe into OpenStack will know that there are innumerable replacements for Neutron's network functions. NFV options that plug into VMware, Hyper-V and so forth are also sprouting like weeds all over Silicon Valley.
Most of us aren't going to roll our own NFV. Even if it's Microsoft-based. We're going to buy a third-party option. Making rational choices about those third-party offerings, however, is going to require that we be somewhat educated.