Not just Big Navi, but the biggest Navi of them all. AMD’s Radeon RX 6900 XT completes the trio of next-gen graphics card launches from Team Red and on the face of it, it does seem like… the least interesting of them all? In all areas of the specification – bar one – RX 6900 XT is identical to the RX 6800 XT, a graphics card that costs a full $350 less than the new GPU’s colossal $999 price-point. In essence, the only palpable difference between 6800 XT and 6900 XT is that the top-end offering gets the full complement of 80 compute units vs the slightly cut-back 72 in the cheaper product. But perhaps there’s more to the spec than meets the eye and in combination with the fact that this is the absolute top-end, maybe that’s enough to convince the most die-hard of AMD fanatics that this is the card for them.
Of course, the concept of a halo product commanding a colossal price premium is nothing new. Whether it’s CPUs, GPUs, motherboards, storage or memory, it seems that at the top-end, there’s a long history of a bit of extra frequency, a few more shaders or some extra RAM allowing manufacturers to ramp up prices in ways that are divorced from the concept of ‘value’ in every way. Intel had its Extreme Editions, Nvidia built a whole Titan brand around the concept, and while it offers much more memory, more compute and an out-there cooling set-up, the RTX 3090’s price-point is far more audacious than AMD’s.
But on paper at least, what we're offered here with the RX 6900 XT is just 11.1 per cent of extra compute power via the additional CUs – and remarkably, AMD offers no other premium-level spec. The 16GB of GDDR6 is the same, memory bandwidth is unaltered and even power levels are identical. AMD's review materials mention a high-end board design with 16 power stages, a 14-layer PCB with four layers of two-ounce copper, premium IT-170 material and 'extra thermal and acoustic margins' built in, but a visual inspection of the board also looks like a match for the RX 6800 XT. The cooler and chassis also look and feel the same, though AMD has increased the notional top-end frequency limit of the RX 6900 XT to 3000MHz, up from the 6800 XT's 2800MHz.
We're not going to focus on overclocking in this review. While I understand that high-end users may appreciate wringing every iota of extra power out of the hardware, the reality is that both AMD and Nvidia have got awfully good at delivering most of a card's full power to the user straight out of the box via boost clocks, which adjust frequency according to power, thermals and other factors. AMD offers 'Rage Mode' auto overclocking in the control panel, along with manual options, but even in the firm's own benchmarks, the improvements to performance range from circa four per cent to seven per cent. That's an awful lot of effort for limited returns, with the ever-present spectre of instability – a perfectly fine OC in one game can lead to artefacting, instability and crashes in another.
However, there are other features I was interested in testing – such as Smart Access Memory, which uses the new PCI Express resizable BAR feature present in Ryzen 5000-based systems to allow the CPU full access to the GPU’s VRAM, without having to work through the standard 256MB I/O buffer. The good news here is that ‘ReBAR’ is slowly rolling out to Intel-based systems, and later on in this review you’ll find our results there using an Intel 10900K system – but to cut to the chase, Smart Access Memory doesn’t just work on our system, it delivers equivalent boosts to what we saw with Ryzen. Supported titles deliver much more in performance improvements than overclocking can, at 1440p and 1080p resolutions at least.
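On Linux, you can see whether resizable BAR is actually in effect by checking the GPU's memory regions in `lspci -vv` output: with ReBAR active, the VRAM aperture appears as a full-sized region rather than the traditional 256MB window. Here's a minimal Python sketch of that check – the sample output is illustrative rather than captured from our test rig:

```python
import re

# Illustrative excerpt of `lspci -vv` output for a GPU with resizable BAR
# active: the VRAM region spans the full 16GB, not just a 256MB window.
SAMPLE_LSPCI = """\
0b:00.0 VGA compatible controller: Advanced Micro Devices, Inc.
        Region 0: Memory at 7800000000 (64-bit, prefetchable) [size=16G]
        Region 2: Memory at 7c00000000 (64-bit, prefetchable) [size=256M]
"""

def bar_sizes_mib(lspci_text):
    """Extract the memory region sizes (in MiB) from lspci -vv output."""
    unit_to_mib = {"K": 1 / 1024, "M": 1, "G": 1024}
    return [int(n) * unit_to_mib[u]
            for n, u in re.findall(r"\[size=(\d+)([KMG])\]", lspci_text)]

def rebar_active(lspci_text, vram_mib=16 * 1024):
    """ReBAR is in effect when some BAR covers the whole of VRAM."""
    return any(size >= vram_mib for size in bar_sizes_mib(lspci_text))

print(bar_sizes_mib(SAMPLE_LSPCI))  # [16384, 256]
print(rebar_active(SAMPLE_LSPCI))   # True
```

On Windows, tools like GPU-Z expose the same information via a 'Resizable BAR' status field.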
| AMD GPU | CUs | Boost Clock | VRAM | Memory Interface | TDP | Price |
| --- | --- | --- | --- | --- | --- | --- |
| RX 6900 XT | 80 | 2250MHz | 16GB GDDR6 | 256-bit + 128MB Infinity Cache | 300W | $999 |
| RX 6800 XT | 72 | 2250MHz | 16GB GDDR6 | 256-bit + 128MB Infinity Cache | 300W | $649 |
| RX 6800 | 60 | 2105MHz | 16GB GDDR6 | 256-bit + 128MB Infinity Cache | 250W | $579 |
| RX 5700 XT | 40 | 1905MHz | 8GB GDDR6 | 256-bit | 225W | $399 |
| RX 5700 | 36 | 1725MHz | 8GB GDDR6 | 256-bit | 180W | $349 |
| RX 5600 XT | 36 | 1560MHz | 6GB GDDR6 | 192-bit | 150W | $279 |
| RX 5500 XT | 22 | 1845MHz | 8GB GDDR6 | 128-bit | 130W | $199 |
The table above represents the challenge AMD has in delivering much in the way of a meaningful improvement with the RX 6900 XT. On prior GCN architectures, Team Red delivered products based on the same processor with different CU counts that, at stock settings at least, showed vanishingly small performance differences. The Radeon R9 380 and R9 380X had 28 and 32 compute units respectively, yet benched very similarly. A Radeon RX 570 had fewer CUs, lower clocks and pared-back memory, yet could still overclock to match a stock RX 580. In many situations, RX Vega 56 got remarkably close to the top-end Vega 64 too. RX 6900 XT has to beat the RX 6800 XT – and do so convincingly through compute power alone, according to the specs at least. Compute is the only differentiating factor based on this technical read-out: everything else is the same.
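The 11.1 per cent figure falls out of a simple peak-throughput calculation – a sketch assuming RDNA 2's 64 stream processors per CU and two FLOPs per clock via fused multiply-add:

```python
def fp32_tflops(cus, boost_mhz, shaders_per_cu=64):
    """Peak FP32 throughput: shaders x 2 FLOPs per clock (FMA) x clock."""
    return cus * shaders_per_cu * 2 * boost_mhz * 1e6 / 1e12

rx_6900_xt = fp32_tflops(80, 2250)   # ~23.0 TFLOPS
rx_6800_xt = fp32_tflops(72, 2250)   # ~20.7 TFLOPS
uplift_pct = (rx_6900_xt / rx_6800_xt - 1) * 100
print(f"{uplift_pct:.1f} per cent")  # 11.1 per cent
```

With identical boost clocks, the clock terms cancel and the uplift reduces to the bare CU ratio of 80/72.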
But this is Big Navi, this is RDNA 2 – an architecture that offers a lot of improvements and new features when stacked up against the RDNA 1 technology that debuted in the RX 5700 XT (also benchmarked in this article). Hardware-accelerated ray tracing is added to the Radeon line-up for the first time, ensuring that DXR-equipped titles can now run on AMD cards, not just Nvidia's RTX line. DirectStorage is also supported, as are DX12 Ultimate's variable rate shading and mesh shaders – next-gen graphics features we should expect to see more fully deployed across the upcoming gaming generation.
At the nuts and bolts level, AMD has also delivered some exceptional engineering here. As the saying goes, there’s no replacement for displacement and Team Red ticks this off by delivering big, big improvements to core frequency. While the boost clock here is rated at 2250MHz, I saw a range of frequencies during my testing, starting at just beneath 2.0GHz and even hitting 2.3GHz – which may explain a couple of benchmarks in our line-up that do actually see 6900 XT vs 6800 XT performance exceed the notional 11.1 per cent of extra compute power.
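That overshoot tracks with simple arithmetic: if the 6900 XT sustains, say, 2.3GHz while a 6800 XT holds its 2.25GHz rated boost (sustained clock figures assumed here purely for illustration), the compute advantage compounds beyond the CU count alone:

```python
cu_ratio = 80 / 72          # ~1.111: the CU advantage on its own
clock_ratio = 2300 / 2250   # hypothetical sustained-clock advantage
combined_pct = (cu_ratio * clock_ratio - 1) * 100
print(f"{combined_pct:.1f} per cent")  # 13.6 per cent
```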
At its core, it is still the same Navi 21 processor though, and this means that all of the impressive innovations we saw in RX 6800 and 6800 XT are present and correct in RX 6900 XT. One of the most intriguing features is the dramatically named Infinity Cache, which essentially allows a bog-standard 256-bit memory interface with GDDR6 memory to deliver performance that shouldn't really be possible with such a narrow bus paired with such a large processor. According to AMD, Infinity Cache allows for 2.17x the effective bandwidth, while also delivering power efficiency advantages. There is a cost in terms of silicon area, but it's offset by AMD using fewer memory controllers on the die, and it also means that fewer memory chips are needed – and AMD doesn't require the new, higher-end GDDR6X modules Nvidia has deployed for RTX 3080 and RTX 3090. The cost saving from sticking with GDDR6 may also explain how AMD is able to deliver 16GB of memory across the stack, whereas Nvidia opts for 10GB and 8GB for RTX 3080 and RTX 3070 respectively. We've yet to find games that require more than 10GB at 4K, but more memory suggests further future-proofing.
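A quick back-of-the-envelope calculation shows what that multiplier means in practice, assuming the 16Gbps GDDR6 modules AMD specifies for the card and taking the firm's 2.17x figure at face value:

```python
bus_width_bits = 256
data_rate_gbps = 16                            # 16Gbps GDDR6 modules
raw_gbs = bus_width_bits / 8 * data_rate_gbps  # 512 GB/s raw bandwidth
effective_gbs = raw_gbs * 2.17                 # AMD's claimed multiplier
print(raw_gbs, round(effective_gbs))           # 512.0 1111
```

That notional ~1.1TB/s comfortably exceeds the 936GB/s the RTX 3080 gets from its 320-bit GDDR6X interface – though as with all cache schemes, the real-world benefit depends on hit rates.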
We've already discussed the cooling set-up in the RX 6800/6800 XT review, but suffice it to say that the chassis, vapour chamber and tri-axial fan set-up look to be the same on the RX 6900 XT. I noted maximum temperatures of 80 degrees Celsius on an open PC set-up (though usually in the 70s), but AMD is at pains to tell us that Big Navi is designed to run hot. Regardless, the cooler itself is very, very quiet – though I did hear coil whine from the board when running at very high frame-rates in some titles. To be fair, I tested the same titles in the same situations on an RTX 3080 and while not quite as noticeable, the effect was there too. As we are looking at the same board as the RX 6800 XT, there's parity in display outputs too: an HDMI 2.1 output capable of 8K at 60Hz and 4K at 120Hz, working in combination with dual DisplayPort 1.4a outputs and USB-C – which can be used as a third DisplayPort, HDMI or even DVI-D if you have access to the right adapters.
On to the description of our test set-up. We considered AMD's Ryzen 9 3950X as a candidate for our GPU test bench build, as this allows access to the full PCIe 4.0 bandwidth supported by AMD's 6000 series and Nvidia's 30-series cards, but we eventually settled on an Intel Core i9 10900K for our test system, replacing the Core i7 8700K we used for our prior generation tests. Though the Intel system is technically constrained to PCIe 3.0 bandwidth, potentially limiting performance in transfer-heavy scenarios, we still got higher 4K frame-rates with the Intel chip than with the fastest Ryzen processor available at the time. The Intel system showed a more significant frame-rate lead at 1080p and 1440p, making it our best bet for minimising CPU bottlenecking and allowing different graphics cards to differentiate themselves. We'll consider Ryzen 5000 or its successors the next time we update our test bench, but for now it makes sense to stick with the 10900K so we don't have to retest dozens of graphics cards in the busiest period of the year (and during a new console cycle).
To ensure our results are comparable, the Core i9 10900K is locked to a 5GHz all-core frequency and cooled by a 240mm Alphacool Eisbaer Aurora AiO, which keeps the overclocked system at around 75C under full load. The 10900K is backed by an Asus Maximus XII Extreme Z490 motherboard and two 8GB sticks of G.Skill Trident Z Royal 3600MHz CL16 memory. Our games are run from a capacious 2TB Samsung 970 Evo Plus NVMe drive provided by Box. The whole rig is powered by an 850W gold-rated Gamer Storm power supply.
Now that you’re familiar with our test bed and have a good overview of the cards themselves, let’s get into some gaming tests to see how they perform.