Nvidia today launched its latest top-of-the-range graphics chip, the GeForce 6800, largely as expected.
Two products derive from the chip formerly known as NV40: the GeForce 6800 and the GeForce 6800 Ultra, both 220 million-transistor beasts.
Both are built on a new architecture that can process 16 pixels in parallel, with full floating-point precision along the entire pipeline and support for DirectX 9.0c's version 3.0 pixel and vertex shaders. That includes the ability to deal with multiple render targets in memory. Version 3.0 of Nvidia's own Intellisample system provides 16x anisotropic filtering and lossless compression of colour, texture and z-buffer data to speed it through the pipeline. Nvidia has also tweaked its UltraShadow shadow-casting technology for better performance, it said.
Nvidia also redesigned the texture engine to allow up to 16 textures to be applied per rendering pass. Non-power-of-two textures are supported, along with the sRGB texture format and both DirectX and S3TC texture compression algorithms.
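S3TC's appeal is its fixed compression ratio, which the arithmetic below sketches for DXT1, the simplest S3TC variant. The block layout shown (two 16-bit reference colours plus a 2-bit index per texel) is standard DXT1, though the article itself doesn't go into this detail:

```python
# DXT1, the simplest S3TC variant: each 4x4 texel block is stored as
# two 16-bit reference colours plus one 2-bit index per texel.
block_texels = 4 * 4
compressed_bits = 2 * 16 + block_texels * 2      # 64 bits per block

uncompressed_rgba_bits = block_texels * 32       # 32-bit RGBA source
ratio = uncompressed_rgba_bits / compressed_bits
print(f"{ratio:.0f}:1")                          # 8:1 for 32-bit sources
```

A fixed 8:1 (or 6:1 against 24-bit RGB) ratio is what lets the hardware fetch compressed blocks directly, without a variable-length decode step.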
The new chips can handle GDDR 3 SDRAM across a 256-bit memory interface. Memory runs at an effective 1.1GHz on the 6800 Ultra for 35.2GBps of bandwidth. The Ultra can churn out 6.4 billion texels per second and process 600 million vertices in the same time.
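Those headline numbers can be sanity-checked with a little arithmetic. The 256-bit bus and 1.1GHz effective memory speed come from the figures above; the 400MHz core clock used for the texel fill-rate check is not stated in the article and is our assumption for the Ultra:

```python
# Sanity-check of the quoted figures.
effective_data_rate_hz = 1.1e9        # GDDR3 effective rate (double data rate)
bus_width_bits = 256                  # 256-bit memory interface

bytes_per_second = effective_data_rate_hz * bus_width_bits / 8
print(f"{bytes_per_second / 1e9:.1f} GBps")                # 35.2 GBps

pixel_pipelines = 16                  # 16 pixels processed per clock
core_clock_hz = 400e6                 # assumed Ultra core clock, not from the article
texels_per_second = pixel_pipelines * core_clock_hz
print(f"{texels_per_second / 1e9:.1f} billion texels/s")   # 6.4 billion
```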
Nvidia also touted the parts' on-chip programmable video processing engine, which supports MPEG 2 and WMV 9 with motion compensation that can also be applied to other formats, such as MPEG 4, H.264 and DiVX. Its programmable nature makes it particularly suitable for tasks such as reversing '3:2 pulldown' - converting a moving image encoded as interlaced fields into a frame-based picture sequence more suitable for playback on a computer display. It can also be put to work on gamma correction, colourspace conversion and a host of video effects.
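The 3:2 cadence the engine has to recognise is simple to sketch: 24 film frames per second are spread across 60 interlaced fields by alternately holding each frame for three fields, then two. A minimal illustration, with integers standing in for frames (the function names are ours, not Nvidia's):

```python
def pulldown_3_2(frames):
    """Spread progressive film frames across interlaced fields
    using the 3:2 cadence (3 fields, then 2, alternating)."""
    fields = []
    for i, frame in enumerate(frames):
        repeats = 3 if i % 2 == 0 else 2
        fields.extend([frame] * repeats)
    return fields

def reverse_pulldown(fields):
    """Recover the original frames by collapsing each run of
    repeated fields back into a single progressive frame."""
    frames = []
    for f in fields:
        if not frames or frames[-1] != f:
            frames.append(f)
    return frames

film = list(range(24))           # one second of 24fps film
fields = pulldown_3_2(film)      # 60 fields, as in NTSC video
assert len(fields) == 60
assert reverse_pulldown(fields) == film
```

Real hardware has the harder job of detecting the cadence in arbitrary video and matching fields that differ slightly, which is where the programmable motion-compensation engine earns its keep.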
Nvidia said the 130nm 6800 and 6800 Ultra are now shipping to card vendors, who are expected to ship boards to end users within the next 45 days. The chips support AGP 8x, but can be used on PCI Express boards courtesy of Nvidia's own bridge chip. ®