Nvidia unveils $59 Jetson Nano 2GB mini AI board, machine learning that slashes vid-chat data by 90%, and a new super for Britain

We sat through the full day so you didn't have to

GTC 2020 Nvidia on Monday launched its first ever virtual GPU Technology Conference, taking place online over the course of this week across multiple timezones.

The coronavirus pandemic forced the graphics processor giant to cancel its in-person tech event normally held in Silicon Valley. Although the location of the conference this year is, well, wherever you're watching it, its focus on Nvidia-powered AI and machine learning is still the same.

CEO Jensen Huang said, “AI requires a whole reinvention of computing – full-stack rethinking – from chips, to systems, algorithms, tools, the ecosystem,” during the keynote.

If you missed previous announcements streamed from his kitchen, Huang introduced Nvidia’s most powerful Ampere architecture and a new range of GPUs for servers and supercomputers back in May, and a set of real-time ray-tracing cards for gamers last month. Now, for the virtual GTC, Huang on Monday introduced two less powerful Ampere-based GPUs for cloud systems and workstations.

On top of that, there were several other bits of news, which we’ve rounded up here in case you missed them.

New Jetson Nano mini AI computer

The Jetson Nano 2GB Developer Kit, announced this week, is a single-board computer – like the Raspberry Pi – though geared towards machine learning rather than general computing. If you like the idea of simple AI projects running on a dedicated board, such as building your own mini self-driving car or an object-recognition system for your home, this one might be for you.

It runs Nvidia CUDA code and provides a Linux-based environment. At only $59 a pop, it’s pretty cheap and a nifty bit of hardware if you’re just dipping your toes into deep learning. As its name suggests, it has 2GB of RAM, plus four Arm Cortex-A57 CPU cores clocked at 1.43GHz and a 128-core Nvidia Maxwell GPU. There are other bits and pieces like gigabit Ethernet, HDMI output, a microSD slot for storage, USB interfaces, GPIO and UART pins, Wi-Fi depending on your region, and more.

“While today’s students and engineers are programming computers, in the near future they’ll be interacting with, and imparting AI to, robots,” said Deepu Talla, vice president and general manager of Edge Computing at Nvidia. “The new Jetson Nano is the ultimate starter AI computer that allows hands-on learning and experimentation at an incredibly affordable price.”

The Jetson Nano 2GB Developer Kit will be available from the end of the month.

AI video conferencing

Nvidia is well-known for its research in generative adversarial networks (GANs), and now it has applied some of that know-how to improve video calls online.

Its engineers have developed a software platform known as Nvidia Maxine aimed at teleconferencing companies. The idea is that Nvidia provides video-chat app makers with a GAN model capable of cutting the bandwidth of a video call by as much as 90 per cent.

The model does this by automatically constructing and animating your face at the other end of the call, which saves you having to send all those pixels – even when compressed – each frame, reducing the overall data transferred. Here's the blurb:

The Nvidia Maxine platform dramatically reduces how much bandwidth is required for video calls. Instead of streaming the entire screen of pixels, the AI software analyzes the key facial points of each person on a call and then intelligently re-animates the face in the video on the other side. This makes it possible to stream video with far less data flowing back and forth across the internet.

Using this new AI-based video compression technology running on Nvidia GPUs, developers can reduce video bandwidth consumption down to one-tenth of the requirements of the H.264 streaming video compression standard. This cuts costs for providers and delivers a smoother video conferencing experience for end users, who can enjoy more AI-powered services while streaming less data on their computers, tablets and phones.
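As a rough sanity check on that claim, here's some back-of-envelope arithmetic, using illustrative numbers of our own rather than Nvidia's figures, comparing the cost of streaming compressed pixels against streaming a handful of facial keypoints per frame:

```python
# Illustrative comparison of pixel streaming vs keypoint streaming.
# The bits-per-pixel and keypoint-count values below are assumptions
# for the sake of the estimate, not Nvidia-supplied numbers.

def h264_bitrate_bps(width, height, fps, bits_per_pixel=0.1):
    """Rough H.264 bitrate estimate: compressed video needs on the
    order of 0.1 bits per pixel per frame for decent quality."""
    return width * height * fps * bits_per_pixel

def keypoint_bitrate_bps(num_keypoints, fps, bytes_per_keypoint=8):
    """Sending only facial keypoints: a few dozen 2D coordinates,
    each stored as a handful of bytes."""
    return num_keypoints * bytes_per_keypoint * 8 * fps

video = h264_bitrate_bps(1280, 720, 30)   # 720p at 30fps: ~2.76 Mbit/s
keypoints = keypoint_bitrate_bps(70, 30)  # 70 keypoints: ~134 kbit/s
savings = 1 - keypoints / video

print(f"H.264 estimate:    {video / 1e6:.2f} Mbit/s")
print(f"Keypoint estimate: {keypoints / 1e3:.1f} kbit/s")
print(f"Reduction:         {savings:.0%}")
```

Even with these ballpark inputs, sending keypoints rather than pixels shaves off well over 90 per cent of the data, which is consistent with the claim above.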

And here's the tech in action, in this Nvidia-made demo:

[Embedded YouTube video]

Developers or startups interested in integrating such software into their own products or services can apply for early access to the platform now.

AI autocomplete but for grammar in Microsoft Word

Nvidia has paired up with Microsoft to roll out an AI-based grammar editor for people typing in Microsoft Word. The software analyses your prose, and makes suggestions to improve the grammar of a particular sentence. It can be turned on and off via the Editor tab in Word. Nvidia is the brains behind the system, which uses Nv's Triton Inference Server and ONNX Runtime, a set of tools that speed up models running on its GPUs.

It has helped Microsoft handle up to 450 grammar queries per second using a single V100 GPU in real time, we're told.
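That throughput figure implies a tight per-query budget: assuming queries were handled strictly one after another, the GPU would have roughly two milliseconds per query.

```python
# Average per-query budget implied by the figure above: 450 grammar
# queries per second on a single V100, treated as if handled serially.
QUERIES_PER_SECOND = 450
per_query_ms = 1000 / QUERIES_PER_SECOND
print(f"{per_query_ms:.2f} ms per query on average")  # 2.22 ms
```

In practice queries would be batched rather than run serially, but the arithmetic shows why this workload calls for GPU-side inference serving rather than per-request model calls.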

Nvidia DGX SuperPODs are now shipping

The largest AI supercomputers Nvidia offers – its DGX SuperPODs – have now been delivered to some of its early customers, including Naver, Korea’s largest search engine company; Linköping University in Sweden; and to the Indian government’s Centre for Development of Advanced Computing.

DGX SuperPODs can be made from 20 to 140 DGX A100 systems, each one containing eight A100 GPUs. Considering a single DGX A100 sets you back $199,000, building a SuperPOD is not for the faint-hearted. The most powerful SuperPOD configuration can reach up to 700 petaflops, it is claimed.
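A quick bit of arithmetic on the figures above (the $199,000 list price per DGX A100, eight GPUs apiece, and the 20-to-140-system range) shows what the smallest and largest configurations work out to:

```python
# SuperPOD sizing arithmetic from the figures in the article. These
# are list prices; real deployments and discounts will differ.
DGX_A100_PRICE_USD = 199_000
GPUS_PER_DGX = 8

def superpod_cost(num_systems):
    """Total DGX A100 list price for a SuperPOD of this size."""
    return num_systems * DGX_A100_PRICE_USD

def superpod_gpus(num_systems):
    """Total A100 GPU count for a SuperPOD of this size."""
    return num_systems * GPUS_PER_DGX

# Smallest and largest configurations Nvidia describes:
for n in (20, 140):
    print(f"{n:>3} systems: {superpod_gpus(n):>4} GPUs, "
          f"${superpod_cost(n) / 1e6:.2f}M list price")
```

That's roughly $4M at the low end and close to $28M for a full 140-system, 1,120-GPU build, before networking, storage, power, and cooling.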

The next SuperPOD project is the Cambridge-1 behemoth, planned to be Britain's most powerful publicly known supercomputer, and focused on healthcare. It will be built from 80 DGX A100 systems connected by Nvidia’s Mellanox InfiniBand networking, capable of delivering more than 400 petaflops of AI compute performance, and eight petaflops of Linpack benchmark performance. If switched on right now, it would slot in at number 29 in the world's top 500 most powerful publicly known supers, Nvidia said.
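The quoted 400-petaflop figure lines up with Nvidia's rating of five petaflops of AI (mixed-precision) performance per DGX A100:

```python
# Cambridge-1 sanity check: 80 DGX A100 systems, each rated by Nvidia
# at 5 petaflops of AI (mixed-precision) performance.
DGX_SYSTEMS = 80
AI_PFLOPS_PER_DGX = 5
print(DGX_SYSTEMS * AI_PFLOPS_PER_DGX, "petaflops of AI compute")  # 400
```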

Researchers from GSK and AstraZeneca as well as folks from Guy’s and St Thomas’s NHS Foundation Trust, King’s College London and Oxford Nanopore Technologies plan to use the system to design new drugs for public use. Nvidia said it will pour £40m ($51.7m) into Cambridge-1. ®
