Intel offers $179 Arc A580 GPU to gamers on a budget
640K, sorry, 1080p ought to be enough for anybody
Intel filled out its Arc graphics portfolio Tuesday with a $179 card aimed at 1080p gamers on a shoestring budget.
The Arc A580 slots in between the x86 giant's entry-level A380 and mid-tier A700-series GPUs introduced last year. In fact, a glance over the card's spec sheet reveals the A580 is essentially a cutdown version of the A750.
Both cards feature 8 GB of GDDR6, good for 512 GB/s of memory bandwidth, but the lower-end card's Xe graphics and ray-tracing units have been pared back from 28 to 24. In addition to having fewer cores, the A580 is also clocked lower at 1.7 GHz versus the A750's 2.05 GHz. However, Intel notes that all three of its launch partners — Asrock, Sparkle, and Gunnir — have overclocked their cards to 2 GHz.
These cuts do bring one advantage: a 40-watt-lower thermal design power (TDP) of 185 watts, compared to the 225 watts demanded by its more powerful siblings.
Intel's Arc A580 GPU slots in between the entry-level A380 and mid-tier A750. Source: Intel
In terms of performance, Intel claims the card is good for 1080p 60-plus FPS gaming at high-quality settings in a variety of titles. The chipmaker even showed charts where the card achieved 94 FPS in Shadow of the Tomb Raider — a game that's become a popular benchmark in recent years — with ray-traced shadows enabled and without resorting to AI-upscaling trickery.
AI upscaling tech has become quite popular in recent years among GPU vendors for extracting more frames from less powerful GPUs. Nvidia has Deep Learning Super Sampling (DLSS), AMD has FidelityFX Super Resolution (FSR), and Intel has Xe Super Sampling (XeSS).
Each works more or less the same way, using various algorithms and machine learning techniques to upscale low-res frames to 1080p or higher, allowing for higher frame rates than would otherwise be possible: the GPU and game engine kick out low-res frames at a comfortable rate, which are then upscaled. More recent implementations by AMD and Nvidia go so far as to use ML to generate whole frames from previously drawn ones. This insertion of AI-made frames boosts the frame rate further.
Of course, game developers have to utilize these technologies for it all to work properly, and Intel's XeSS, perhaps unsurprisingly, isn't as widely supported, with 64 titles claimed. For comparison, AMD's FSR boasts 139 titles and Nvidia's DLSS claims 300-plus in all.
But if your game does happen to support XeSS, Intel claims gamers can expect up to 63 percent higher frame rates with the feature enabled. As with any vendor-supplied numbers, we recommend taking these with a grain of salt.
In addition to gaming, Intel also talked up the A580's dual AV1 media decoders and encoders. AV1 is a relatively new, royalty-free codec that's proven to be very space efficient. As we've discussed previously, the codec is anywhere from 20 to 40 percent more efficient at compressing video than existing codecs such as H.265.
With that said, we suspect the Arc A580's main selling point is going to be its price. Starting at $179, the card undercuts the market price of its "closest competitors" — the Nvidia RTX 3050 and AMD RX 6600 — by $25 to $50 at current prices. ®