Are accelerators the cure to video's power problem or just an excuse to peddle GPUs?
You could just watch less TikTok or Twitch, maybe go outside
Analysis There's no denying that streaming video is insanely popular – now accounting for upwards of 80 percent of all internet traffic, according to some estimates.
Video is also an incredibly resource-intensive medium. As reported by the Financial Times, an ammunition plant in Norway found that out the hard way. The plant, which was producing munitions for the war in Ukraine, had planned to expand, only to discover that there wasn't enough power. Why? Because a new TikTok datacenter was sucking up every spare watt it could find.
And while TikTok may be on uncertain ground as it faces the prospect of an outright ban in the US, it's not even the largest contributor of video traffic. That title goes to Google (YouTube) and Netflix, according to the latest Sandvine report.
So what can be done? Well, if you ask Nvidia CEO Jensen Huang, the answer is accelerated computing: preferably using the company's GPUs to encode and decode the video on the fly, and its data processing units (DPUs) to accelerate the movement of the data as it hurtles across the intertubes to your phone or TV.
"User generated video is driving significant growth and consuming massive amounts of power," Huang said during the company's GTC event in March. "We should accelerate all video processing and reclaim that power."
Accelerated transcoding gear
The idea of using GPUs or other dedicated accelerators to transcode video is hardly new. Nvidia's diminutive P4 and T4 GPUs have been a popular choice for video streaming applications for years, and last month the company unveiled their successor, the L4.
Nvidia claims an eight-L4 node can transcode more than 1,000 720p streams at 30fps when using the P1 preset. If that figure sounds a little optimistic, that's because Nvidia is goosing the numbers a bit by using the lowest-quality preset available.
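To make the preset caveat concrete, here's a minimal sketch of the kind of GPU transcode job being described, driven through ffmpeg's NVENC support. It assumes an ffmpeg build with NVENC/CUDA enabled and an Nvidia GPU in the box; the filenames and bitrate are purely illustrative.

```python
# A minimal sketch of a GPU-accelerated transcode using ffmpeg's NVENC encoder.
# Assumes ffmpeg was built with NVENC/CUDA support and an Nvidia GPU is present.
import subprocess

subprocess.run([
    "ffmpeg",
    "-hwaccel", "cuda",               # decode on the GPU
    "-hwaccel_output_format", "cuda", # keep decoded frames in GPU memory
    "-i", "input_720p.mp4",           # illustrative input file
    "-c:v", "h264_nvenc",             # encode on the GPU's NVENC block
    "-preset", "p1",                  # p1 = fastest, lowest quality; the
                                      # preset behind the 1,000-stream figure
    "-b:v", "3M",                     # illustrative target bitrate
    "output_720p.mp4",
], check=True)
```

Swapping p1 for a slower preset such as p7 raises quality at the cost of throughput, which is exactly the trade-off the headline number glosses over.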
But Nvidia isn't the only company that sees an opportunity to address the growing demand for streaming video and peddle its wares in the process. Earlier this month AMD's Xilinx division unveiled its new Alveo media accelerator card, the MA35D.
AMD says the card can transcode up to 32 1080p AV1 streams at 60fps while consuming just 35W under load.
Even Intel, a company that only recently entered the GPU market, has skin in the game, and actually beat both AMD and Nvidia to market with an AV1-capable chip. Its Flex-series GPUs – codenamed Arctic Sound – were announced during Hot Chips last August and were designed with video in mind.
All of these parts, regardless of whether they're from Nvidia, AMD, or Intel, feature single-slot designs and TDPs under 75W (for the most part anyway), allowing for eight or more of them to be packed into a single system. And that's not the only thing that these cards have in common. All of them are marketed toward live video, cloud gaming, and AI image processing – think Twitch or live sports broadcasting.
- It's time to stop fearing CPU power management
- Why we think Intel may be gearing up to push its GPU Max chips into China
- When will regulators get serious on datacenter emissions reporting?
- Google boffins pull back more of the curtain hiding TPU v4 secrets
No silver bullet for the silver screen
Compared to conventional CPU-based software encoding, these cards might seem like the future, but it's not that simple, because most streaming services, like Netflix, don't have to deal with large volumes of low-latency video. Instead, they can spend hours converting a video master into multiple copies for each format, resolution, and bitrate at which they plan to serve it.
The streaming service can then distribute those copies to content delivery networks (CDNs) and serve them from there. And since software transcoding, while slower, tends to produce higher-quality video at a given bitrate, and storage is relatively cheap compared to a bunch of GPU nodes, the economics are likely to continue to favor this approach for conventional video.
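As a rough sketch of that encode-once, serve-many pipeline, the offline job might look something like the following. The ladder of renditions is invented for illustration, not any real service's settings, and it assumes an ffmpeg build with libx264.

```python
# A rough sketch of offline "bitrate ladder" transcoding: one master file is
# converted into several renditions, which would then be pushed to a CDN.
# The ladder values are illustrative, not any real service's settings.
import subprocess

LADDER = [  # (output height, target video bitrate)
    (1080, "5M"),
    (720, "3M"),
    (480, "1.5M"),
    (360, "700k"),
]

for height, bitrate in LADDER:
    subprocess.run([
        "ffmpeg", "-i", "master.mov",
        "-c:v", "libx264",            # software encode: slow but high quality
        "-preset", "veryslow",        # spend the hours, save the bits
        "-vf", f"scale=-2:{height}",  # scale to target height, keep width even
        "-b:v", bitrate,
        f"rendition_{height}p.mp4",
    ], check=True)
```

Because all of this happens once, ahead of time, the hours a slow software preset takes simply don't matter the way they would for a live stream.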
Software encoders will remain relevant despite the gains in accelerated computing, Sean Gardner, head of video strategy at AMD, told The Register.
"Software encoders continue to evolve ... For some applications – some use cases – [these cards] will be good enough to finally be considered for some of those file-based, not-real-time [applications]. But for Netflix? I don't think so."
AV1 to the rescue
Despite what Jensen would have you believe – remember, he wants to sell you GPUs – accelerated computing isn't the be-all and end-all of video efficiency. The bandwidth consumed getting the video to your phone or TV is another major factor.
"Typically it's somewhere between 6 and 9 percent of their revenue that they spend on bandwidth," he said of AMD's streaming video customers. "One of the biggest power consumers is actually the communication side, the network that that delivers those, those Ethernet packets to the user."
This is where the Alliance for Open Media's AV1 codec comes in. AV1 is a relatively new codec that has a lot of people excited, in part because it's royalty free, but also because it's exceptionally space efficient.
Various tests have shown AV1 to be anywhere from 20 to 40 percent more efficient than popular web streaming codecs, including H.265.
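Some quick back-of-the-envelope arithmetic shows what that range means in practice. The monthly egress figure below is a purely hypothetical number, chosen only for illustration.

```python
# Back-of-the-envelope arithmetic for the 20-40 percent efficiency claim.
# The egress figure is hypothetical, for illustration only.
monthly_egress_pb = 100  # assume a service pushes 100 PB of video a month

for saving in (0.20, 0.40):
    saved = monthly_egress_pb * saving
    print(f"{saving:.0%} smaller streams -> ~{saved:.0f} PB/month less traffic")
```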
For a platform like YouTube or Netflix, the bandwidth savings would be considerable, which is probably one of the reasons why nearly every major streaming service is a member of the Alliance for Open Media.
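Trying the codec out is straightforward on the software side. Here's a minimal sketch using ffmpeg's SVT-AV1 encoder; it assumes a build with libsvtav1 enabled, and the filenames, preset, and quality target are illustrative.

```python
# A minimal sketch of a software AV1 encode via ffmpeg's SVT-AV1 support.
# Assumes an ffmpeg build with libsvtav1; filenames and settings illustrative.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "master.mov",
    "-c:v", "libsvtav1",  # SVT-AV1 software encoder
    "-preset", "8",       # speed preset: 0 = slowest, best quality
    "-crf", "35",         # constant-quality target
    "output_av1.mkv",
], check=True)
```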
However, despite having been around since 2018, AV1 remains in its infancy. Only the latest generation of GPUs from AMD, Nvidia, and Intel supports full AV1 encode and decode. Meanwhile, Apple has yet to add support for the codec on its devices. But given AV1's robust industry backing, this particular problem is really more of a growing pain than anything else. ®