Nvidia releases first Jetson AGX Orin module for production deployment
More expensive than a Pi but with a lot more oomph, platform to be embedded in devices at the edge
Nvidia is rolling out a production module of its Jetson AGX Orin platform, designed to be embedded inside devices and enable AI acceleration for a variety of applications such as robotics and edge computing.
The Jetson AGX Orin 32GB production module combines a GPU based on Nvidia’s Ampere architecture with an 8-core Cortex-A78AE Arm-based CPU, 32GB of memory and 64GB of embedded (eMMC) flash storage, on a board measuring 100mm x 87mm (slightly larger than a Raspberry Pi).
However, the price is considerably higher than that of a Raspberry Pi: the Jetson AGX Orin 32GB will set back anyone interested in building with it a cool $999, or £817.32.
This is the first of four Jetson Orin-based production modules announced at Nvidia’s GTC event earlier this year, along with the Jetson AGX Orin developer kit that is already available. A 64GB version of Jetson AGX Orin is set to be available from October, while a pair of less powerful Orin NX production modules are due later this year.
The Jetson AGX Orin 32GB unit is capable of 200 trillion operations per second (TOPS), according to Nvidia’s specs. That is less than the 275 TOPS of the hardware in the Developer Kit, which boasts a 2,048-core Ampere architecture GPU compared with the 1,792-core GPU in this production unit. Nevertheless, Nvidia claims the module offers up to six times the performance of the previous Jetson Xavier generation modules.
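Those figures can be sanity-checked with a quick back-of-the-envelope calculation. The sketch below uses only the numbers quoted by Nvidia; the per-core throughput is a derived illustration, not an official spec:

```python
# Back-of-the-envelope comparison of the quoted Jetson AGX Orin figures.
# Inputs come from Nvidia's published specs; GOPS/core is derived, not official.

dev_kit_tops, dev_kit_cores = 275, 2048        # Jetson AGX Orin Developer Kit
production_tops, production_cores = 200, 1792  # Jetson AGX Orin 32GB module

# Raw throughput gap between the two variants
tops_ratio = dev_kit_tops / production_tops
print(f"Dev kit offers {tops_ratio:.3f}x the TOPS of the 32GB module")

# Per-CUDA-core throughput (GOPS per core) for each variant
dev_per_core = dev_kit_tops * 1000 / dev_kit_cores
prod_per_core = production_tops * 1000 / production_cores
print(f"Dev kit: {dev_per_core:.1f} GOPS/core, production: {prod_per_core:.1f} GOPS/core")
```

The per-core numbers do not quite match, suggesting the production module also runs at lower clocks than the dev kit, not just with fewer cores.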
According to Nvidia, about three dozen technology providers in its global Partner Network are already offering commercially available products powered by the new module.
Like the developer kit, the production modules are supported by Nvidia’s Jetson software stack, which enables developers to build and deploy fully accelerated AI applications on Jetson. This comprises the company’s JetPack SDK which provides a full development environment, plus the CUDA-X collection of libraries and tools for tweaking performance.
Nvidia also states that the Jetson Orin products can be used with various other platforms it provides, such as Isaac for robotics, DeepStream for computer vision, Riva for natural language understanding, and the TAO Toolkit to accelerate model development using ready-trained models.
The module has an array of I/O options, starting with two x8, one x4 and two x1 PCIe 4.0 interfaces, gigabit and 10Gb Ethernet ports, 8K display output, and USB ports. Also available are multiple UARTs, SPI, I2S and I2C interfaces, plus CAN bus and GPIOs. However, some of these I/O options share lanes on the Universal PHY (UPHY) connector.
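Tallying those PCIe interfaces gives a sense of the aggregate bandwidth on offer. The sketch below is a rough estimate: the lane counts come from the spec above, the ~2 GB/s per-lane figure is an approximation of usable PCIe Gen 4 throughput per direction, and since some interfaces share UPHY lanes, not all of this is available simultaneously:

```python
# Rough tally of the Jetson AGX Orin 32GB PCIe 4.0 interfaces listed above.
# Note: some interfaces share UPHY lanes, so the total is a ceiling, not
# a guaranteed simultaneous figure.
interfaces = {"x8": 2, "x4": 1, "x1": 2}  # interface width -> count

total_lanes = sum(int(width.lstrip("x")) * count
                  for width, count in interfaces.items())
print(f"Total Gen 4 lanes: {total_lanes}")  # 2*8 + 1*4 + 2*1 = 22

GBPS_PER_LANE = 2  # approximate usable GB/s per Gen 4 lane, per direction
print(f"Aggregate bandwidth: ~{total_lanes * GBPS_PER_LANE} GB/s per direction")
```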
According to Nvidia, these capabilities enable developers to build and deploy Orin-powered systems equipped with cameras and sensors, suited for edge AI, robotics, IoT and embedded applications.
The company said there will be production-ready systems available from its partners to enable customers to tackle challenges in industries from manufacturing, retail and construction to agriculture, logistics, healthcare, and smart cities. ®