Gigabyte passively cooled Radeon 4850 card

How much quieter does your GPU need to be?


Review We've been hugely impressed by the Radeon HD 4850 graphics card thanks to the balance it strikes between price and performance, and we firmly believe that, at £125, it's damn good value.

Gigabyte GV-R485MC-1GH

Gigabyte's GV-R485MC-1GH: double-slot design...

Pretty much our only reservation with the reference HD 4850 centres on the cooling package. The HD 4850's graphics chip employs 800 unified shaders that generate a fair amount of heat, yet AMD chose a single-slot design for the card, where the dual-slot HD 4870 carries an enormous cooler. The single-slot form factor makes it easy to slip an HD 4850 into almost any PC, and there’s more good news: AMD has selected a gentle fan speed that keeps the HD 4850 surprisingly quiet.

The combination of a slimline heatsink and a low fan speed means that the heat produced by the HD 4850 gets trapped inside the PC's case, and we concluded our original review by saying: "We’d give the HD 4850 the nod on this one despite its toasty hotness."

Gigabyte has decided that the cooling package on the HD 4850 could stand some improvement, and the result is the GV-R485MC-1GH, which is passively cooled. The model code breaks down thus: GV for Gigabyte VGA; R485 denotes a Radeon HD 4850; MC stands for Multi-Core cooling; and 1GH refers to the 1GB of memory.
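
As a rough illustration of that naming scheme, here's a minimal sketch that splits a Gigabyte-style model code into those four fields. The field boundaries and mappings are assumptions based on the breakdown above, not an official Gigabyte naming specification.

```python
import re

# Minimal sketch: decode a Gigabyte-style model code such as GV-R485MC-1GH.
# The field boundaries and meanings are assumptions taken from the review's
# breakdown, not an official Gigabyte naming specification.
def decode_model_code(code: str) -> dict:
    # Expected shape: <vendor>-<gpu><cooler>-<memory>H, e.g. GV-R485MC-1GH
    match = re.fullmatch(r"(GV)-(R\d{3})(MC)-(\d+G)H", code)
    if match is None:
        raise ValueError(f"Unrecognised model code: {code}")
    vendor, gpu, cooler, memory = match.groups()
    return {
        "vendor": "Gigabyte VGA" if vendor == "GV" else vendor,
        "gpu": "Radeon HD 4850" if gpu == "R485" else gpu,
        "cooling": "Multi-Core cooling" if cooler == "MC" else cooler,
        "memory": memory + "B",  # e.g. "1G" -> "1GB"
    }

print(decode_model_code("GV-R485MC-1GH"))
# {'vendor': 'Gigabyte VGA', 'gpu': 'Radeon HD 4850',
#  'cooling': 'Multi-Core cooling', 'memory': '1GB'}
```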

You can see an animation that explains the Multi-Core Cooling feature here, but our photos should make things clear enough.

Gigabyte GV-R485MC-1GH

...with some serious metal for passive cooling

One cooling core sits directly on top of the GPU, and two more cooling cores are each connected to the main core by a pair of heatpipes. These cooling cores are quite sizeable affairs, so Gigabyte has used a dual-slot design, which means this HD 4850 has a packaging envelope similar to that of an HD 4870. One of the coolers projects through the mounting bracket by a few millimetres, but this looks like a means of supporting the cooler rather than a way of shedding heat into the air at the rear of the case.
