Soz, switch-fondlers: Doesn't look like 2013 is 10Gb Ethernet's year

Recession-bashed users are waiting for another ball to drop


It is becoming increasingly unlikely that 2013 will be the year that sees widespread adoption of 10 gigabit Ethernet. Of course we'll be told it will be, just as we have been told for years that the wholesale shift is right on the horizon. The reason? It's not a question of technological capability – the technology for 10GbE has been solid for quite some time – but rather a simple question of cost.

When 1000Base-T took over the world, you could take your 10/100 device, plug it into your 10/100/1000 switch and it would "just work." More importantly, the cost per 1GbE switch port dropped so low that we collectively don't feel we are throwing money away if we don't maintain high throughput on those links. 1GbE is there if we need it, even if most of the time we don't.

Some believe that 10GbE looks set to follow the same pattern, but even these optimists are predicting that it will be 2014 before 10Gbase-T sees more annual deployments than the alternatives. Even then, we'll still be dealing with a mishmash of deployed 10GbE interface types.

Price is still the barrier here. 1000Base-T can be had for $10 per switch port. The cost per port of the most common 10GbE interface type – SFP+ – is already kissing $300 (for single-switch purchases), and I expect that to be driven below $200 before the end of the year. 10Gbase-T remains slightly more expensive per port than SFP+, and is significantly rarer amongst the commodity switch vendors.

Cisco (PDF) would like to tell you that the time of 10Gbase-T is now... Of course it has to rely on malarkey like "cost per gigabit" to sell the idea. Most businesses aren't going to flatten their links, so trying to sell 10GbE on the assumption that they will is just a flashing neon sign that the vendor in question is completely unwilling to participate in the commoditisation of the 10GbE market.

But fancy PowerPoint slides talking about "10x the bandwidth at only 3x the cost" deliberately avoid addressing the reality of tepid mainstream adoption: the bulk of businesses just don't need 10x the bandwidth and aren't willing to pay 3x the cost.
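To put numbers on that, here is a back-of-the-envelope sketch of the per-port economics above. The port prices are the figures quoted in this article; the utilisation figure is an assumed, illustrative number, not a measurement:

```python
# Rough per-port economics of 1GbE vs 10GbE (SFP+). Prices from the article;
# the average-utilisation figure is an assumption for illustration only.
PRICE_1GBE_PORT = 10       # USD per 1000Base-T switch port
PRICE_SFP_PLUS_PORT = 300  # USD per SFP+ 10GbE switch port (single-switch pricing)

# The vendor pitch: "cost per gigabit", which quietly assumes the link runs flat out.
print(f"Cost per theoretical gigabit: 1GbE ${PRICE_1GBE_PORT / 1:.0f}, "
      f"10GbE ${PRICE_SFP_PLUS_PORT / 10:.0f}")

# Reality for most shops: links sit mostly idle, so cost per gigabit actually
# carried is what matters. Assume an access link averages ~0.2Gbit/s.
AVG_UTILISATION_GBIT = 0.2
print(f"Cost per used gigabit:        1GbE ${PRICE_1GBE_PORT / AVG_UTILISATION_GBIT:.0f}, "
      f"10GbE ${PRICE_SFP_PLUS_PORT / AVG_UTILISATION_GBIT:.0f}")
```

At full line rate the 10GbE port looks only three times as expensive per gigabit; at the sort of utilisation most offices actually see, it is simply thirty times the price for carrying the same traffic.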

If your switching infrastructure is Cisco end-to-end you probably don't care about cost. You also aren't likely to care about $100 SFP+ direct-attach cables or $300 SFP+ fibre transceivers. If your business relies on commodity SMB switches, then you – like me – are lashing together virtually free 1Gbit links to make do while waiting around for 10GbE to drop to a reasonable price.
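"Lashing together" those links typically means 802.3ad/LACP aggregation. A minimal sketch, assuming a Linux host with iproute2, kernel bonding support, root privileges and hypothetical NIC names eth0/eth1 (the switch ports must be configured as a matching LACP group):

```python
# Minimal sketch: bond two 1GbE NICs with 802.3ad (LACP) so cheap gigabit links
# stand in for a pricier 10GbE uplink. Assumes Linux, iproute2, root privileges,
# and hypothetical interface names eth0/eth1.
import subprocess

def run(cmd: str) -> None:
    """Run an ip(8) command, raising if it fails."""
    subprocess.run(cmd.split(), check=True)

def bond_links(slaves=("eth0", "eth1"), bond="bond0") -> None:
    run(f"ip link add {bond} type bond mode 802.3ad")  # create the LACP bond device
    for nic in slaves:
        run(f"ip link set {nic} down")           # a NIC must be down before enslaving
        run(f"ip link set {nic} master {bond}")  # attach it to the bond
    run(f"ip link set {bond} up")

if __name__ == "__main__":
    bond_links()
```

The catch, and the reason this is only "making do": LACP hashes traffic per flow, so any single connection still tops out at 1Gbit/s – you only get the aggregate bandwidth across many flows.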

Cisco, Juniper and the rest aren't going to drive commoditisation. They are already facing the spectre of their own potential irrelevance in 2013. Software-defined networks look set to become a thing, finally eliminating the proprietary stranglehold these vendors have had on our core infrastructure for decades.

Intel has already put 10GbE onto the motherboard in a big way, commoditising the endpoint part of the equation. Tomorrow's servers will be all dressed up for the 10GbE ball, but for many companies, they'll have no one to talk to.

As one of the giants in the switching silicon space, Intel could choose tomorrow to make the 10Gbase-T market explode in the space of a single quarter. The firm makes excellent 10GbE switching silicon – getting it into the hands of a D-Link or a Netgear at rock-bottom prices would redefine the entire market.

Unfortunately for us, Intel doesn't look set to ride to the rescue of the SME market in 2013 - its switching silicon is currently being deployed as high-end competition to folks like Cisco. Dropping that to commodity class this early in the game just doesn't make sense.

With the mobile world heating up, and pressure to crank out an ever-increasing variety of chips, it looks more and more like nobody else out there has the spare fab capacity to drive down the cost of 10GbE switching silicon; especially not the new 40nm stuff that has finally brought 10Gbase-T into the realm of reasonable power efficiency. (Though if you really care about power consumption, 10Gbase-T is still not a real consideration.)

So we wait – another year, maybe two. If you were holding off on upgrades in the hope that switch prices would magically plummet this year, don't hold your breath. The year of 10GbE will arrive eventually, but 2013 won't be it. ®
