AMD trumpets next-gen GPU architecture

The road to the Holodeck

Fusion Summit AMD has trumpeted its next-generation GPU architecture, painting the design as a radical departure that has one foot in the graphics world and the other in what AMD, Microsoft, ARM, and others dub "heterogeneous computing".

Essentially, the new architecture is a parallel-processing throughput engine that can serve both graphics and compute tasks. For some time, AMD GPUs – formerly ATI-branded – have been based on multiple graphics engines with VLIW (very long instruction word) cores. Not so AMD's next-generation parts.

Speaking at the company's Fusion Developer Summit on Thursday, AMD graphics CTO Eric Demers described the new GPU as an MIMD (multiple-instruction-stream, multiple-data-stream) architecture with a SIMD (single-instruction-stream, multiple-data-stream) vector array. "There are four wavefronts, every cycle, executing on the vector and scalar units. And these can come from four completely different applications or from the same application," he explained.

"And then there's up to 40 wavefronts living in a CU [compute unit], that any four of which can run at any cycle, so its sorta got SMT [simultaneous multi-threading] properties."

But he doesn't have a good name for it. "The reality is that it's leveraging all that goodness from all those different architectures, and to put one perfect label on it would not be fair," he said.
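For readers who want to picture that arrangement, the toy C++ sketch below walks a compute unit's worth of wavefronts through the issue pattern Demers describes: 40 resident wavefronts, four picked per cycle, one per SIMD. Those counts come from his talk; the round-robin arbitration and everything else in the code are our own assumptions for illustration, not AMD's design.

```cpp
// Toy model of the scheduling Demers describes: a compute unit holds up to
// 40 resident wavefronts, and each cycle any four of them - one per SIMD -
// get to issue. The round-robin pick used here is an assumption made for
// illustration; AMD has not detailed its actual arbitration policy.
#include <cstddef>
#include <cstdio>
#include <vector>

struct Wavefront {
    int id;
    int instructions_left;   // work remaining before this wavefront retires
};

int main() {
    const int kSimdsPerCu = 4;            // four vector units per CU
    const int kResidentWavefronts = 40;   // maximum in-flight per CU

    std::vector<Wavefront> resident;
    for (int i = 0; i < kResidentWavefronts; ++i)
        resident.push_back({i, 8 + (i % 5)});   // arbitrary workloads

    int cycle = 0;
    std::size_t cursor = 0;               // round-robin starting point
    while (!resident.empty()) {
        // Issue one instruction on each SIMD, drawn from distinct wavefronts.
        int issued = 0;
        for (std::size_t scanned = 0;
             scanned < resident.size() && issued < kSimdsPerCu; ++scanned) {
            std::size_t idx = (cursor + scanned) % resident.size();
            --resident[idx].instructions_left;
            ++issued;
        }
        cursor = (cursor + issued) % resident.size();

        // Retire any wavefronts that have run out of work.
        for (std::size_t i = 0; i < resident.size();) {
            if (resident[i].instructions_left <= 0)
                resident.erase(resident.begin() + i);
            else
                ++i;
        }
        ++cycle;
    }
    std::printf("all wavefronts retired after %d cycles\n", cycle);
    return 0;
}
```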

AMD's goal is to blur the line between the data that CPUs munch on and the data that GPUs munch on. "Our plan is that ... eventually all these devices – whether they're CPUs or GPUs – are in the same unified 64-bit address space."
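The practical payoff, if AMD gets there, is that developers could hand the GPU the very pointer the CPU is already using, rather than staging data through a separate device buffer. The sketch below is purely conceptual – the "kernel" is an ordinary function standing in for device code, since no shipping API exposed such a shared address space at the time – but it shows what changes on the caller's side.

```cpp
// Conceptual contrast only: 'gpu_kernel' is an ordinary CPU function standing
// in for device code, since no shipping API exposed AMD's shared address
// space at the time of the announcement. The point is what changes for the
// calling code, not how the hardware does it.
#include <cstddef>
#include <cstdio>
#include <cstring>
#include <vector>

// Stand-in for a GPU kernel: doubles a buffer in place.
void gpu_kernel(float* data, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) data[i] *= 2.0f;
}

// Today's discrete-GPU model, simplified: copy host data into a separate
// device allocation, run the kernel against that copy, then copy back.
void run_with_copies(std::vector<float>& host) {
    std::vector<float> device(host.size());                  // device buffer
    std::memcpy(device.data(), host.data(), host.size() * sizeof(float));
    gpu_kernel(device.data(), device.size());
    std::memcpy(host.data(), device.data(), host.size() * sizeof(float));
}

// The unified-address-space model Demers sketches: CPU and GPU dereference
// the same 64-bit pointers, so the kernel can consume the host buffer as-is.
void run_shared(std::vector<float>& host) {
    gpu_kernel(host.data(), host.size());                     // same pointer
}

int main() {
    std::vector<float> a(4, 1.0f), b(4, 1.0f);
    run_with_copies(a);
    run_shared(b);
    std::printf("%.1f %.1f\n", a[0], b[0]);   // both print 2.0
    return 0;
}
```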

Although the first parts based on the new architecture should appear by the end of this year, Demers laid out a series of capabilities that AMD plans to roll out "incrementally" between those first new-architecture GPUs and 2014: GPU support for C++ and other "high-level constructs", a virtual address space, support for page faults, memory coherence at the L2 level – shared among the CUs and between the CPU and GPU – and the ability to save and reload the device state.

This last ability, Demers said, will make context switching "much, much easier", and although some fixed-function elements in the pipe will require some work, "fundamentally this core can support and will support context switching and preemption."
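The first roadmap item – C++ on the GPU – is less abstract than it sounds. The snippet below shows the sort of high-level construct at stake: a templated kernel parameterised by a function object, here run on the CPU as a stand-in; treating templates and functors as the target features is our reading, not something Demers itemised.

```cpp
// Illustration of the kind of "high-level construct" a C++-capable GPU core
// could run directly: a templated kernel parameterised by a function object.
// It executes on the CPU here as a stand-in; picking templates and functors
// as the target features is our assumption, not something Demers spelled out.
#include <cstddef>
#include <cstdio>

// A generic element-wise kernel: the per-element operation is a template
// parameter, resolved at compile time rather than hand-written per case.
template <typename T, typename Op>
void transform_kernel(T* data, std::size_t n, Op op) {
    for (std::size_t i = 0; i < n; ++i) data[i] = op(data[i]);
}

struct Saturate {                 // a function object handed to the kernel
    float limit;
    float operator()(float x) const { return x > limit ? limit : x; }
};

int main() {
    float samples[] = {0.2f, 1.7f, 0.9f, 3.1f};
    transform_kernel(samples, 4, Saturate{1.0f});
    for (float s : samples) std::printf("%.1f ", s);   // 0.2 1.0 0.9 1.0
    std::printf("\n");
    return 0;
}
```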

These capabilities are not limited to just discrete graphics. "I'm not talking about APU, I'm not talking about GPU, I'm talking about an IP of a core that's going to be used in all our products going forward," he said. "Over the next few years we're going to be bringing you all of this throughout all our products that have GPU cores."

Demers added that the new architecture won't require apps to be rewritten to take advantage of it. "Almost without exception, everything runs the same or faster," he said. "There are going to be cases, particularly on the compute side and more so on the graphics side, where this really gives you a fourfold jump."

But he aims to provide more than speed. A lot more. "I want to create realities that you can't tell that you're not looking through a window," he said. "In fact, I'd rather that you can't tell you're not inside my reality."

AMD's next-generation graphics architecture, he contends, is one step on what he called "the road to the Holodeck." It's part of the continued progression from the fixed-function, graphics-only GPUs of the mid-1990s to the simple shaders of 2002 to 2006, and on to the introduction of parallel-core, unified shader architectures of 2007 and later.

His point in this historical review wasn't mere misty-eyed reminiscence. He was leading his audience from GPUs' graphics-only past to their increasingly compute-supportive role in what AMD envisions as the heterogeneous-computing future, in which GPUs are equal partners with CPUs and specialized cores. ®
