Supermicro's 'universal GPU' system welcomes all elements
Biz claims ultra-composable server platform is future-proof
Supermicro's new "Universal GPU" servers, announced on Monday, are as equal-opportunity as silicon tech can get – they do not discriminate on CPUs, GPUs, storage, or networking technologies.
The boxen can be constructed in a number of ways to include processors from Intel and AMD, and graphics accelerators from AMD and Nvidia. They can be further customized with proprietary technologies such as Nvidia's NVLink or AMD's Infinity Fabric interconnects to link up multiple GPUs.
The datacenter-class systems, which will come in 4U or 5U sizes, have a “modular” architecture based on standards established by the Open Compute Project, such as the OCP Accelerator Modules (OAM). The modular approach allows for more economical CPU and GPU upgrades without replacing entire systems, Supermicro said.
The server is clearly built for applications like artificial intelligence that rely heavily on GPUs, memory, and storage. For those who don't want to use proprietary tech like NVLink, the system supports PCIe 4.0, and will support PCIe 5.0, which Intel and AMD are now starting to embrace with their silicon.
The system will support GPUs including AMD's Instinct MI250 or Nvidia's A100, with a thermal capacity of 700 watts.
The base configurations include dual-socket support for 3rd-Gen Intel Xeon Scalable processors with up to 48 cores per package, or 3rd-Gen AMD Epyc parts with up to 64 cores per package.
Given the future-proof nature of this server, it is assumed that it will support AMD's new Milan-X chips, which were announced on Monday, and Intel's upcoming Sapphire Rapids chips, whose new packaging could add more computing might to the GPU server.
Both the AMD- and Intel-based configurations support up to 8TB of DDR4 memory and have eight PCIe 4.0 slots. Given the future-proof nature of this server, DDR5 support is also assumed.
A 1U expansion module can be added to tack on more GPUs or PCIe-attached devices.
The AMD configuration supports four AMD MI250 or Nvidia HGX A100 GPUs. The Intel system supports only Nvidia's HGX A100 GPUs, and can take up to 12TB of Intel Optane Persistent Memory. ®