SmartNICs power the cloud, are enterprise datacenters next?

High prices and a lack of software make SmartNICs a tough sell, despite their offload potential


SmartNICs have the potential to accelerate enterprise workloads, but don't expect to see them bring hyperscale-class efficiency to most datacenters anytime soon, ZK Research's Zeus Kerravala told The Register.

SmartNICs are widely deployed in cloud and hyperscale datacenters to offload input/output (I/O)-intensive network, security, and storage operations from the CPU, freeing it up to run revenue-generating tenant workloads. Some more advanced chips even offload the hypervisor, further separating the infrastructure-management layer from the rest of the server.
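
To put that offload in perspective, here is a rough back-of-envelope sketch in Python estimating how many host cores software packet processing can eat at a given line rate, which is roughly the capacity a SmartNIC hands back to tenant workloads. The line rate, packet size, per-packet cycle cost, and clock speed are illustrative assumptions, not figures from any particular device.

```python
# Rough estimate of host CPU cores consumed by software packet processing.
# Every constant here is an illustrative assumption, not a measurement.

LINE_RATE_GBPS = 100        # assumed NIC line rate
AVG_PACKET_BYTES = 800      # assumed average packet size on the wire
CYCLES_PER_PACKET = 2_500   # assumed vSwitch/overlay/firewall cost per packet
CORE_CLOCK_GHZ = 2.5        # assumed server core clock

packets_per_sec = (LINE_RATE_GBPS * 1e9 / 8) / AVG_PACKET_BYTES
cores_consumed = packets_per_sec * CYCLES_PER_PACKET / (CORE_CLOCK_GHZ * 1e9)

print(f"{packets_per_sec / 1e6:.1f} Mpps at {LINE_RATE_GBPS} Gbps")
print(f"~{cores_consumed:.1f} cores spent on packet processing alone")
```

Under these assumptions, a single 100Gbps port can soak up well over a dozen general-purpose cores before any tenant work gets done, which is why the hyperscalers moved that work onto dedicated silicon.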

Despite relative success in the cloud and a flurry of innovation from the still-limited SmartNIC vendor ecosystem, which includes Mellanox (Nvidia), Intel, Marvell, and Xilinx (AMD), Kerravala argues that the use cases for enterprise datacenters are unlikely to resemble those of the major hyperscalers, at least in the near term.

"When cloud was first coming around, the hyperscalers went into their engagements with their customers with the assumption that everybody wanted to run IT like they did," he said. "Businesses aren't hyperscalers. They don't run IT like hyperscalars. They're never going to."

Kerravala believes that for the SmartNIC market to move beyond the high-performance, efficiency-obsessed world of the cloud, the industry needs to reevaluate how SmartNICs can accelerate enterprise workloads.

No shortage of opportunity

With that said, Kerravala sees a plethora of use cases for SmartNICs in enterprise datacenters and the edge.

SmartNICs are particularly well suited to processor-intensive workloads that involve large quantities of data, he said. "Traditional servers weren't really meant for that kind of overhead."

"There's not a company that I talk to that's not telling me that they've got more data on their hands than ever," he added.

This, Kerravala argues, makes SmartNICs particularly appealing to the growing number of industries looking for ways to turn data into a competitive advantage.

This includes markets you might not expect to need high-speed networking and I/O offload capabilities, like retail, hospitality, and entertainment.

Kerravala highlighted sporting events, where a large number of video feeds are streamed live alongside telemetry on players' performance, as one example of where SmartNICs could prove especially useful.

Another area where SmartNICs may be beneficial to enterprise customers is security. Because security is becoming so analytics-driven, Kerravala believes SmartNICs have the potential to put security inspection much closer to the data.

Palo Alto Networks has already demoed this functionality by deploying its virtualized firewalls on Nvidia's BlueField-2 SmartNICs.

Supporting these data-intensive workloads means carefully balancing compute, storage, networking, and security to avoid bottlenecks. "You can't have a fast network and slow storage, just like you can't have fast storage and a slow processor," Kerravala said, adding that letting CPUs do what they're meant to do, rather than processing network transport or encrypting traffic, will save companies money in the long term.
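
A minimal sketch of that balancing act, using hypothetical throughput figures, shows how the slowest stage caps the whole pipeline and leaves the faster stages idle; that is the logic behind pushing transport and encryption off the host CPU.

```python
# The effective throughput of a data pipeline is capped by its slowest stage.
# All throughput figures are hypothetical, chosen only for illustration.

stages_gbps = {
    "network (SmartNIC line rate)": 100,
    "NVMe storage (aggregate reads)": 56,
    "host CPU (encryption in software)": 20,
}

bottleneck = min(stages_gbps, key=stages_gbps.get)
effective = stages_gbps[bottleneck]

for name, rate in stages_gbps.items():
    print(f"{name:36s} {rate:4d} Gbps  (unused headroom: {rate - effective} Gbps)")

print(f"\nPipeline runs at {effective} Gbps, limited by {bottleneck}")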

Barriers to adoption abound

For all the potential SmartNICs have to offer, there remain substantial barriers to overcome. The high price of SmartNICs relative to standard NICs is just one of many.

Networking vendors have been chasing this kind of I/O offload functionality for years, with things like TCP offload engines, Kerravala said. "That never really caught on and cost was the primary factor there."

Another challenge for SmartNIC vendors is the operational complexity associated with managing a fleet of SmartNICs distributed across a datacenter or the edge.

"There is a risk here of complexity getting to the point where none of this stuff is really usable," he said, comparing the SmartNIC market to the early days of virtualization.

"People were starting to deploy virtual machines like crazy, but then they had so many virtual machines they couldn't manage them," he said. "It wasn't until VMware built vCenter, that companies had one unified control plane for all their virtual machines. We don't really have that on the SmartNIC side."

That lack of centralized management could make widespread deployment a tough sell in environments that lack the resources of the major hyperscalers. Most enterprises simply don't have them, Kerravala argued.

Several companies are working closely with SmartNIC vendors to address these challenges. Juniper, for example, has been working with Intel and Nvidia to extend its Contrail orchestration platform to their SmartNICs. Similarly, VMware plans to address this challenge through its Project Monterey initiative.

"Like most technologies, management of the stuff usually gets developed a couple of years after the stuff," Kerravala said.

Where to invest?

Right now, Kerravala believes enterprise datacenters can benefit from SmartNICs, but should deploy them sparingly to accelerate their highest-demand applications.

"If they've got a very high-performance workload, use it there. I wouldn't deploy it everywhere," he said. "I would be putting them in for low-hanging fruit use cases. Use it for all the new stuff, but leave the legacy stuff the way it is."

He also recommends customers consider fully integrated platforms that pair software, management, and hardware in a single package.

Fungible is one such example. The company offers a suite of storage and compute appliances built around its embedded data processing unit (DPU) — a trendy name for SmartNICs — and managed via a common software layer.

The company has focused much of its attention on high-throughput storage applications, but recently applied the technology to compute pooling with a network-addressable appliance that allows GPU resources to be composed on the fly.

However, Kerravala believes that in the long term SmartNIC vendors need to provide more than hardware and a loose SDK. He highlighted Nvidia's efforts to build out an ecosystem of both the hardware and the software libraries needed to bring SmartNIC support into enterprise software.

"That kind of complete systems approach reduces the complexity for the developer, helps to build an ecosystem a lot faster, and really turns what is essentially silicon into a platform," Kerravala said. ®
