The red team fires back, finally.
AMD has finally responded to Nvidia’s Turing architecture launch by throwing down a 7nm gauntlet named the Radeon VII. It’s the successor to the 2017 Radeon Vega GPUs, with twice as much High Bandwidth Memory (HBM) and a die shrink all the way down to 7nm, making it the world’s first GPU manufactured on that node. At $699 it’s looking to cut the RTX 2080’s legs off, and it promises to excel at both 4K gaming and GPU compute tasks thanks to its massive 16GB of memory. AMD is sweetening the deal even further by bundling three AAA games: Resident Evil 2, The Division 2, and Devil May Cry 5, worth roughly $180 combined. I’m not sure if the game bundle applies only to the AMD-branded card or to partner cards as well, so be sure to check before you pull the trigger.
AMD Radeon VII – Design and Features
The Radeon VII is a lot like the previous Vega GPUs in that it’s a GCN card using what AMD calls its “second generation Vega” architecture. In addition to the die shrink from the previous generation’s 14nm FinFET down to 7nm, the company says it’s added a raft of optimizations to boost clock speeds, reduce latency, and increase memory bandwidth, along with tuning for workstation tasks. AMD is claiming performance similar to Nvidia’s RTX 2080, but for less money, with the added benefit of being more future-proof and better for rendering and video editing thanks to its 16GB of HBM. Let’s take a look at the spec chart to see how they stack up.
As you can see in the chart, AMD pulled no punches with this GPU, stacking an outlandish 16GB of High Bandwidth Memory onto the package right next to the die and clocking the GPU ten percent higher than the previous Vega GPU. Double the memory and higher clock speeds are the sales pitch this time around.
The memory is a pretty big deal, as AMD is the only company using HBM in a consumer GPU right now. Nobody knows for sure why Nvidia chose GDDR6 over HBM for its Turing cards, but most people speculate it comes down to HBM’s high cost and few practical advantages in gaming over higher-clocked GDDR memory. Since HBM is mounted on the same package as the GPU die instead of out on the PCB, the bus width and overall bandwidth are much higher than what’s possible with GDDR6. For example, whereas the RTX 2080 uses a 256-bit bus, the Radeon’s is a massive 4,096-bit, allowing up to 1TB per second of data to traverse it, as opposed to the 448GB/s possible on the GeForce cards.
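If you want to sanity-check those numbers, peak memory bandwidth is just the bus width times the per-pin data rate, divided by eight to convert bits to bytes. Here’s a quick sketch; the per-pin rates (2.0Gbps for the Radeon VII’s HBM2, 14Gbps for the RTX 2080’s GDDR6) are the standard rated speeds for each memory type, which I’m taking as a given:

```python
def peak_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width x per-pin rate, bits -> bytes."""
    return bus_width_bits * pin_rate_gbps / 8

# Radeon VII: 4,096-bit HBM2 bus at 2.0Gbps per pin
print(peak_bandwidth_gbs(4096, 2.0))  # 1024.0 GB/s, i.e. ~1TB/s

# RTX 2080: 256-bit GDDR6 bus at 14Gbps per pin
print(peak_bandwidth_gbs(256, 14.0))  # 448.0 GB/s
```

The wide-and-slow HBM approach and the narrow-and-fast GDDR6 approach arrive at their headline numbers very differently, which is why the bus widths look so lopsided on the spec sheet.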
A mock-up of the Radeon VII package showing the four stacks of HBM alongside the GPU die.
Indeed, the fat 16GB memory buffer is easily the Radeon VII’s biggest selling point, as it’s really the only thing not offered by its competition at this time (software and other sundries excluded). AMD claims this not only makes the card more future-proof but also lets it be more effective in GPU compute scenarios, since it can handle larger data sets. AMD also says the larger memory buffer allows for more consistent frames per second (FPS) in certain memory-challenged games, especially when HDR is a factor.
Otherwise you’ve got a boost clock of 1,750MHz (with a peak of 1,800MHz), and the VII requires two eight-pin power connectors as it’s a 300W GPU, 40W more than Nvidia’s flagship RTX 2080 Ti. The card comes in a sleek aluminum shell with a Radeon logo on the side and a cube in the corner with an “R” on it, both of which light up in red when the GPU is powered on. The card is heavy and feels like a lead brick. The top of the GPU, the part you can see when it’s installed, has an interesting design with slits placed all over to break up the monolithic look somewhat. Out back it’s rocking three DisplayPort connectors and one HDMI port.
AMD Radeon VII – Benchmarks
To test the Radeon VII I tossed it into the IGN test bench that was assembled with love in our office. It sports a Core i7-7700K CPU, 16GB of DDR4 memory, an Asus Z270 motherboard, an Intel SSD, and an EVGA power supply. I ran it through our gauntlet of gaming benchmarks, using Fraps to record the average framerate when a game didn’t include a built-in benchmark. Since the GPU costs $699, I’m comparing it to similarly priced GPUs at all three common resolutions.
Just as the previous Vega GPU was close to the GTX 1080, the Radeon VII is close to its successor, the GTX 1080 Ti, as well as the RTX 2080, depending on the game. Overall this is a good showing from AMD, though just like with the previous Vega card, it’s surprising it took this long to catch up to a GPU that launched two years ago. Still, AMD fans will take whatever they can get, and this is finally a genuinely capable 4K GPU from the red team. It’s a lot like the GTX 1080 Ti in that it can hit 4K/60fps at max settings in some games but not all, though with a few settings tweaks it should be doable in most. Keep in mind my tests were run at maximum possible settings, sans anti-aliasing.
Overall, I’d say AMD’s claim of matching the RTX 2080 is mostly true. In games like The Witcher, Battlefield 1, and Shadow of the Tomb Raider, performance was quite close at all resolutions. I did see some wild fluctuations in titles where I was using Fraps, such as PUBG, which could come down to variations from the maps our previous tester used. Still, it stayed over 60fps at every resolution. It performed best at 4K in Far Cry 5 and BF1, however. It’s also a very solid upgrade from the original Vega, so if you’re Red Team for life and have been saving your pennies, it’s a no-brainer upgrade, assuming cards are actually available at MSRP. Now that the mining craze has died down, I hope that’s the case. It’s selling for $699 on AMD’s website now, so that gives me hope.
AMD Radeon VII – Overclocking
Just like with the Vega cards, I didn’t have much luck overclocking the VII, and when I tried I saw some funky behavior. My advice: just let the card do its thing and don’t worry about it too much. The main issue was that none of my usual tools recognized the card (MSI Afterburner, mainly), so I had to use AMD’s Wattman software, which isn’t easy to use, nor is it exactly intuitive. It does have a button labeled “Auto overclock,” which seems to work a little, but when I typed in values for the max clock and hit apply, they never seemed to take.
Left to its own devices, the GPU clocked itself at around 1,720 to 1,750MHz, though I did see it register a peak of 1,875MHz or so. Again, I saw some weird behavior when I enabled “Auto overclock,” such as applications crashing or warning me about not having enough VRAM (see image below). When I clicked it with Heaven running, the application bugged out and looked like an acid trip, though on subsequent runs things worked just fine.
One other thing to note: this GPU ships with a really aggressive fan curve, making it super loud at full load. Echoes of Vega once again. The good news is that the aggressive curve is totally unnecessary, as the card has plenty of thermal headroom and the cooling seems over-engineered. The card never got higher than 66C with the default fan curve in the Radeon Wattman software, and after I lowered the curve to the point where I could barely hear the GPU at all, it still never got hotter than 70C. I have no idea why the default fan speed is so high, but it doesn’t need to be.
AMD Radeon VII – Some Highlights Worth Noting
There are a few things worth pointing out that weren’t covered in my official testing. The first is that this card is much faster in DX12 than in DX11 in games that offer both, such as Shadow of the Tomb Raider. At 4K I went from 47fps to 59fps using DX12, and from 57fps to 77fps at 2560 x 1440. That’s a massive increase and a major benefit for the Radeon, though not all games use DX12, even newer ones like Far Cry 5.
My benchmark assistant, Donut. He has a head tilt.
AMD is also promoting this GPU as one that excels at workstation tasks thanks to its huge 16GB frame buffer and massive 1TB/s of memory bandwidth. I’m admittedly not an expert in these types of applications, which include image rendering and video editing with programs like Adobe Premiere. However, I ran a quick test using the OpenCL benchmark LuxMark to see if I could ascertain any benefit in this type of workload. In my tests there was quite a difference, though as always it will boil down to the programs you use and the kind of data you’re working with. That said, here’s a quick rundown of the scores I saw in LuxMark with several GPUs:
- Radeon VII: 51,680
- RTX 2080: 30,133
- GTX 1080 Ti: 21,632
As you can see, that’s a pretty massive win for AMD, but I don’t know enough about these programs to make a declarative statement beyond that, in this one test, it certainly outperformed similarly priced GPUs by a wide margin.
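To put a number on that “wide margin,” here’s the quick arithmetic, normalizing each result against the similarly priced RTX 2080 (the scores are just the LuxMark numbers listed above):

```python
# LuxMark scores from the list above
scores = {"Radeon VII": 51680, "RTX 2080": 30133, "GTX 1080 Ti": 21632}

baseline = scores["RTX 2080"]
for card, score in scores.items():
    # Relative performance vs. the RTX 2080 in this one OpenCL test
    print(f"{card}: {score:,} ({score / baseline:.0%} of the RTX 2080)")
```

By that math the Radeon VII lands at roughly 170 percent of the RTX 2080’s score in this test, which is an unusually large gap for two cards at the same price.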
Finally, this card obviously doesn’t support features like ray tracing, but it’s safe to say its absence isn’t a dealbreaker right now. The technology has only been implemented in one game so far, and even in Battlefield V the game looks great without it. Most people agree ray tracing is a potential game changer, but it will take several years for both the hardware and the software to make notable progress on that front, so for now its exclusion won’t be missed by too many people.
The AMD Radeon VII is launching today for $699 with a three game bundle. As we went to press it was only available on AMD’s website.