Network interface cards are coming up trumps

How to pick out the aces

Network interface cards (NICs) are often overlooked in server design. Their sheer ubiquity, combined with the ability to deal with most networking problems in software, can make them a component whose exact specifications are ignored.

Some motherboard-integrated NICs offer features such as jumbo frames and TCP offloading.

These reduce network overhead as well as the processing load network access places on a server’s CPUs. Some NICs do not offer these features, while others make the attempt badly, fail miserably and cause no end of frustration.

In a virtualised environment, this sort of crapshoot isn’t acceptable: basic features such as jumbo frames and TCP offloading just have to work.
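
By way of illustration, here is a minimal sketch, assuming a Linux host with iproute2 and ethtool installed and a hypothetical interface called eth0, of how an admin might check a NIC's offload features and switch on jumbo frames and TCP segmentation offload:

```python
# Minimal sketch (Linux-only, assumes an interface named "eth0" and root
# privileges): query a NIC's offload settings with ethtool and raise the MTU
# for jumbo frames with iproute2. Both tools must be installed.
import subprocess

IFACE = "eth0"  # hypothetical interface name; substitute your own


def show_offloads(iface: str) -> str:
    """Return ethtool's report of offload features (TSO, GSO, GRO, etc.)."""
    return subprocess.run(
        ["ethtool", "-k", iface], capture_output=True, text=True, check=True
    ).stdout


def enable_jumbo_frames(iface: str, mtu: int = 9000) -> None:
    """Set the interface MTU to a jumbo-frame size (commonly 9000 bytes)."""
    subprocess.run(["ip", "link", "set", "dev", iface, "mtu", str(mtu)], check=True)


def enable_tso(iface: str) -> None:
    """Ask the driver to enable TCP segmentation offload."""
    subprocess.run(["ethtool", "-K", iface, "tso", "on"], check=True)


if __name__ == "__main__":
    print(show_offloads(IFACE))
    enable_jumbo_frames(IFACE)
    enable_tso(IFACE)
```

Whether those last two calls succeed, or quietly do nothing useful, depends entirely on the card and its driver.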

The real deal

The growing use of storage protocols such as iSCSI and Fibre Channel over Ethernet (FCoE) means there are further efficiencies to be gained through offloading. Cards supporting these protocols play a critical role in the network-based storage technologies that are becoming commonplace in large data centres.

This is where real NICs come in. Real NICs do all of the above and then start arguing over who supports which sexy new thing.

Consider virtual Ethernet bridging (VEB), which is how a hypervisor's vSwitch allows virtual machines (VMs) to connect to the host’s network. It has also increasingly become a tickbox offload capability for real NICs – a speed boost that is becoming vital for large-scale implementations.
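
To make the idea concrete, the toy sketch below (class, port and MAC names are all invented for illustration, and it is nothing like a production vSwitch) shows the MAC-learning logic a VEB performs in software for every frame. This is exactly the per-frame work a NIC with VEB offload can take off the host's CPUs:

```python
# Illustrative sketch only: a toy MAC-learning bridge showing the logic a
# hypervisor's vSwitch (a VEB) performs in software for every frame. Real
# vSwitches add VLANs, ACLs and offload hooks; names here are invented.
class ToyVEB:
    def __init__(self, ports):
        self.ports = set(ports)          # e.g. VM ports plus an uplink
        self.mac_table = {}              # learned MAC -> port

    def receive(self, in_port, src_mac, dst_mac, payload):
        """Learn the source, then switch the frame locally."""
        self.mac_table[src_mac] = in_port             # learning step
        if dst_mac in self.mac_table:                  # known unicast
            return [(self.mac_table[dst_mac], payload)]
        # Unknown destination or broadcast: flood to every other port
        return [(p, payload) for p in self.ports if p != in_port]


# VM-to-VM traffic never leaves the host: it is switched in the hypervisor.
veb = ToyVEB(ports=["vm1", "vm2", "uplink"])
veb.receive("vm1", src_mac="aa:01", dst_mac="ff:ff", payload=b"arp who-has")
print(veb.receive("vm2", "aa:02", "aa:01", b"arp reply"))  # [('vm1', ...)]
```

Note the consequence: VM-to-VM frames are switched inside the host and never touch the physical network, which becomes important later.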

Name game

Virtual machine queuing has its own offload buzzwords. To VMware it is NetQueue, while Microsoft calls it VMQ. Brocade uses the term "virtual machine optimised ports", while Intel goes with Virtual Machine Device Queues (VMDq). Regardless of the terminology, proper NICs can do the offloading.
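
Conceptually, the offload looks something like the sketch below, in which a hypothetical NIC steers incoming frames into a dedicated receive queue per VM so the hypervisor no longer has to demultiplex them in software. Everything here is invented for illustration:

```python
# Conceptual sketch of what NetQueue/VMQ/VMDq-style offload does: the NIC
# sorts incoming frames into a dedicated receive queue per VM (keyed on the
# VM's MAC address) so the hypervisor doesn't have to do it in software.
from collections import deque


class QueuedNic:
    def __init__(self):
        self.queues = {}                 # VM MAC -> its dedicated queue
        self.default_queue = deque()     # traffic for unknown destinations

    def register_vm(self, vm_mac):
        """The hypervisor programs a filter when a VM's vNIC comes up."""
        self.queues[vm_mac] = deque()

    def on_frame(self, dst_mac, frame):
        """Hardware-side dispatch: steer the frame by destination MAC."""
        self.queues.get(dst_mac, self.default_queue).append(frame)


nic = QueuedNic()
nic.register_vm("aa:01")
nic.register_vm("aa:02")
nic.on_frame("aa:02", b"frame for vm2")   # lands straight in vm2's queue
```

Because each queue can be serviced by a different CPU core, traffic for busy VMs no longer piles up behind a single software sorting job.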

All this is fairly straightforward. Buy the appropriate NIC, put it into a server, make sure your operating system or hypervisor has the right drivers, and things just get faster.

This is a cute trick but definitely not the star attraction. The power of real NICs comes from their ability to impose network management and visibility upon virtual switching environments.

We go to all the trouble of implementing expensive layer-3 switches with more features than your average automobile because of the immense control those devices offer over every aspect of networking.

Until recently, the virtual switches inside a hypervisor removed a degree of that granular control by hiding the last layer of switching from the management software used by the rest of the network.

Enter 802.1Qbg Virtual Ethernet Port Aggregator (VEPA) and 802.1Qbh Bridge Port Extension (often called VN-Tag).

Open contest

Both are competing standards for gaining management control over virtual switching. VN-Tag is backed by Cisco and works by replacing the hypervisor’s vSwitch with a proprietary one that alters Ethernet frames. It isn’t compatible with existing Ethernet anything and you will need special Cisco gear to make it work.

VEPA is the standard backed by pretty much everyone else. Its solution to the management problem is to tell the hypervisor to send all packets out to an external switch for processing.

In the case of VEPA-enabled NICs, that external switch is the network card installed in the virtualisation host itself, but it can just as easily be a switch on the other end of a wire.
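
The contrast with the VEB sketch above can be drawn in a few lines. In this entirely hypothetical sketch the host edge does no local switching at all: every frame goes up to an adjacent bridge, which applies its policy and "hairpins" VM-to-VM traffic back down the same uplink:

```python
# Sketch of the VEPA idea, contrasted with the ToyVEB above: the host edge
# does no local switching, it pushes every frame to an adjacent bridge,
# which applies policy and "hairpins" VM-to-VM traffic back down the same
# uplink. Class and method names are invented for illustration.
class VepaEdge:
    """Host-side edge: no MAC table, everything goes out the uplink."""
    def __init__(self, adjacent_bridge):
        self.bridge = adjacent_bridge

    def receive(self, in_port, src_mac, dst_mac, payload):
        return self.bridge.reflect(src_mac, dst_mac, payload)


class AdjacentBridge:
    """External (or NIC-resident) bridge with visibility of every frame."""
    def __init__(self):
        self.mac_table = {}              # MAC -> "local" VM or "remote"

    def reflect(self, src_mac, dst_mac, payload):
        # This is where ACLs, counters and monitoring see VM-to-VM traffic.
        self.mac_table.setdefault(src_mac, "local")
        if self.mac_table.get(dst_mac) == "local":
            return ("hairpin back down the uplink", dst_mac, payload)
        return ("forward into the physical network", dst_mac, payload)


edge = VepaEdge(AdjacentBridge())
edge.receive("vm1", "aa:01", "ff:ff", b"arp")          # bridge learns vm1
print(edge.receive("vm2", "aa:02", "aa:01", b"reply"))  # hairpinned
```

The trade-off is clear from the sketch: VM-to-VM traffic now crosses the edge twice, but every frame passes a device the network team can actually see and police.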

VEB, VEPA and VN-Tag all fall under the catch-all of Edge Virtual Bridging, the network battle of the moment. Your choice of NIC determines protocol support, and that in turn determines which physical and virtual switches you can use if you want granular visibility of every virtual node on your network.

Open standards are important, but broad vendor implementation is even more so. The role of the humble network card isn’t quite so humble any more. ®
