
AMD Radeon Vega RX 64 8GB Review

Board Layout & Components

Board Layout

AMD’s RX Vega 64 and Vega Frontier Edition employ the exact same board and components. The only differences are the package soldered to the board (RX Vega 64 gets half of the Vega Frontier Edition’s memory) and adjusted firmware. The only reason the card is this long is to accommodate the cooler and its large radial fan; it could easily be shortened to the size of a Radeon R9 Nano.

Each of the two eight-pin auxiliary power connectors has its own coil; these help smooth out voltage peaks. Interestingly, though, there are no large capacitors to be seen.

On the back of the board, we spy a densely-packed area under the GPU/memory, a PWM controller, and several other surface-mounted components.

GPU Power Supply

At the center of it all is International Rectifier's IR35217, a poorly documented, dual-output, multi-phase controller able to provide six phases for the GPU plus two additional phases. A closer look reveals 12 regulator circuits, though, not just six. This is the result of doubling, which distributes each phase's load between two regulator circuits.
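As a rough illustration of what doubling buys, a hypothetical load figure (not a measured one) shows how the average current per regulator circuit is halved:

```python
# Illustrative arithmetic only: how phase doubling spreads load across
# regulator circuits. The 240 A GPU load is a hypothetical number.

def per_circuit_current(total_current_a, phases, circuits_per_phase=1):
    """Average current each regulator circuit carries."""
    return total_current_a / (phases * circuits_per_phase)

gpu_current = 240.0  # hypothetical GPU load in amps

# Six controller phases, each driven as a single circuit:
single = per_circuit_current(gpu_current, phases=6)
print(single)  # 40.0 A per circuit

# The same six phases doubled onto 12 circuits:
doubled = per_circuit_current(gpu_current, phases=6, circuits_per_phase=2)
print(doubled)  # 20.0 A per circuit
```

Lower current per circuit means lower conduction losses and less heat concentrated in any one spot on the board.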

Six IR3598 interleaved MOSFET drivers on the back of the board are responsible for the doubling; these are the parts we pointed out earlier. The following video, which we created using AMD's Frontier Edition card, starts at idle and shows how the PWM controller switches the load back and forth between circuits. Using only one phase keeps efficiency high at idle, while alternating circuits avoids overloading any single circuit over prolonged periods of time.
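The alternation can be pictured as a simple round-robin schedule. This is our own sketch of the observed behavior, not the controller's actual firmware logic:

```python
# Minimal sketch (our own illustration) of a controller alternating an
# idle load between the two circuits of a doubled phase, so that neither
# circuit carries the load continuously.

def alternate_circuits(pulses):
    """Yield which of the two circuits (0 or 1) handles each PWM pulse."""
    for i in range(pulses):
        yield i % 2

schedule = list(alternate_circuits(6))
print(schedule)  # [0, 1, 0, 1, 0, 1]
```

Each circuit ends up active only half the time, which spreads thermal stress without sacrificing the efficiency benefit of running a single phase at idle.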

The actual voltage conversion for each of the 12 regulator circuits is handled by an IRF6811 on the high side and an IRF6894 on the low side, which also contains the necessary Schottky diode. Both are International Rectifier HEXFETs that we've seen AMD use before.

For the coils, AMD went with encapsulated ferrite-core chokes soldered to the front of the board. At 190nH, their inductance is a bit lower than the 220nH we often see.
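The practical consequence of a smaller inductance is slightly higher ripple current per circuit. A back-of-the-envelope buck-converter calculation, with an assumed input voltage, output voltage, and switching frequency (none of which come from the review), shows the effect:

```python
# Back-of-the-envelope buck-converter ripple current, showing why a
# smaller inductance (190 nH vs. 220 nH) means somewhat higher ripple.
# The 12 V input, 1.2 V output, and 400 kHz switching frequency are
# assumptions for illustration, not measured values.

def ripple_current(vin, vout, l_henry, f_hz):
    """Peak-to-peak inductor ripple current of an ideal buck converter."""
    duty = vout / vin
    return (vin - vout) * duty / (l_henry * f_hz)

vin, vout, f = 12.0, 1.2, 400e3  # hypothetical operating point
print(ripple_current(vin, vout, 190e-9, f))  # ~14.2 A peak-to-peak
print(ripple_current(vin, vout, 220e-9, f))  # ~12.3 A peak-to-peak
```

The difference is modest, and the doubled circuits (whose effective ripple frequencies interleave) help cancel much of it at the output.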

Memory Power Supply

The IR35217 also supplies the memory's single phase. One phase is plenty, since on-package HBM2 needs far less power. A CHL815 gate driver sits on the back of the board. For the voltage conversion, AMD went with ON Semiconductor's NTMFD4C85N, a dual N-channel MOSFET that covers both the high and low sides.

It’s interesting that AMD went with flat SMD capacitors instead of can caps. Their somewhat lower capacitance is compensated for by simply running two of them in parallel on the back of the board. This also spreads out the hot-spots, making the thermal solution's job a little easier; waste heat is kept to a minimum, as is the cost associated with cooling.
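The parallel arrangement works because capacitances add while effective series resistance (ESR) drops. A quick sketch with hypothetical part values, not figures from the board:

```python
# Why two flat SMD capacitors in parallel can stand in for one larger can
# cap: capacitances add, and effective ESR drops. The 220 µF / 5 mΩ
# values below are hypothetical, chosen only to show the arithmetic.

def parallel_caps(capacitances_f, esrs_ohm):
    """Total capacitance and effective ESR of capacitors in parallel."""
    c_total = sum(capacitances_f)
    esr_total = 1.0 / sum(1.0 / r for r in esrs_ohm)
    return c_total, esr_total

c, esr = parallel_caps([220e-6, 220e-6], [5e-3, 5e-3])
print(c, esr)  # 440 µF total, 2.5 mΩ effective ESR
```

Halving the ESR also halves the ripple-induced heating, split across two packages, which is exactly the hot-spot spreading noted above.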

At 220nH, these coils have a somewhat higher inductance. The ones serving the "partial voltage" converters, which operate at a much lower switching frequency, are larger still at 820nH. They don’t have to deal with the same amount of power, though.
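The connection between switching frequency and coil size follows from the same ripple relationship: to hold ripple current constant, the required inductance scales inversely with frequency. The frequencies below are assumptions chosen only to demonstrate the scaling, not measurements from the card:

```python
# Rough check of why lower-frequency converters need larger coils: for
# the same ripple current, required inductance scales inversely with
# switching frequency. Both frequencies here are hypothetical.

def required_inductance(l_ref, f_ref, f_new):
    """Inductance needed at f_new for the same ripple as (l_ref, f_ref)."""
    return l_ref * f_ref / f_new

# A 220 nH coil at an assumed 400 kHz, moved to an assumed 100 kHz:
l_new = required_inductance(220e-9, 400e3, 100e3)
print(l_new)  # 8.8e-07, i.e. 880 nH
```

A four-fold drop in switching frequency calls for roughly four times the inductance, which is the right ballpark for the jump from 220nH to 820nH parts.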

Other Voltage Converters

Generating VDDCI isn’t a difficult task, but it's an important one: this voltage governs the transition between the GPU's internal signal levels and the memory's. It is essentially the I/O bus voltage between GPU and memory, supplied as two constant rails of 1.8V and 0.8V.

Underneath the GPU, there’s an Anpec APL5620 low-dropout linear regulator, which provides the very low voltage for the phase-locked loop (PLL) area.

ON Semiconductor's MC74HC238A decoder/demultiplexer drives the LED bar that shows the power supply’s load. It’s a fun gimmick, but its brightness does get annoying in a dark room at night.
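A 3-to-8 decoder's job is easy to model: a 3-bit input code selects exactly one of eight outputs. This sketch captures the logic only, not the board's actual wiring or the chip's enable pins:

```python
# Logic-level model of a 3-to-8 decoder like the MC74HC238A: three
# address bits select one of eight active-high outputs, here imagined
# as segments of the LED load bar. This is an illustration of the
# decoding logic, not the card's actual LED wiring.

def decode_3to8(a0, a1, a2):
    """Return the 8 output lines (Y0..Y7); exactly one is high."""
    index = a0 | (a1 << 1) | (a2 << 2)
    return [1 if i == index else 0 for i in range(8)]

print(decode_3to8(1, 0, 1))  # Y5 high: [0, 0, 0, 0, 0, 1, 0, 0]
```

With three GPIO-style lines from the board's monitoring logic, eight load levels can be indicated without dedicating a pin to every LED.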



  • 10tacle
    We waited a year for this? Disappointing. Reminds me of the Fury X release which was supposed to be the 980Ti killer at the same price point ($649USD if memory serves me correctly). Then you factor in the overclocking ability of the GTX 1080 (Guru3D only averaged a 5% performance improvement overclocking their Vega RX 64 sample to 1700MHz base/boost clock and a 1060MHz memory clock). This almost seems like an afterthought. Hopefully driver updates will improve performance over time. Thankfully AMD can hold their head high with Ryzen.
    Reply
  • Sakkura
    For today's market I guess the Vega 64 is acceptable, sort of, since the performance and price compare decently with the GTX 1080. It's just a shame about the extreme power consumption and the fact that AMD still has no answer to the 1080 Ti.

    But I would be much more interested in a Vega 56 review. That card looks like a way better option, especially with the lower power consumption.
    Reply
  • envy14tpe
Disappointing? What? I'm impressed. It sits near a 1080. Keep in mind that FreeSync monitors sell for around $200 less than G-Sync ones. So pair one with this GPU and you have awesome 1440p gaming.
    Reply
  • SaltyVincent
    This was an excellent review. The Conclusion section really nailed down everything this card has to offer, and where it sits in the market.
    Reply
  • 10tacle
    20060001 said:
    Disappointing? what. I'm impressed. Sits near a 1080.

The GTX 1080 has been out for 15 months now, that's why. If AMD had priced this GPU $50 lower, it would be an uncontested better value (something AMD has historically delivered in both GPUs and CPUs). At the same price point as a comparable GPU that's a year and three months old, however, there's nothing to brag about, especially when looking at power consumption. But I will agree that if you include the cost of a G-Sync monitor vs. a FreeSync monitor, at face value the RX 64 is the better value than the GTX 1080.
    Reply
  • redgarl
It's not a bad GPU, but I would not buy one. I have an EVGA 1080 FTW that I've learned to hate (2 RMAs in 10 months), but even if I wanted to switch to Vega, it might not be a good idea. It would not change anything.

However, two Vega 56 cards in CF might be extremely interesting. I did that with two 290Xs two years ago, and it might still be the best combo out there.
    Reply
  • blppt
    IIRC, both AMD and Nvidia are moving away from CF/SLI support, so you'd have to count on game devs supporting DX12 mgpu (not holding my breath on that one for the near future).
    Reply
  • cknobman
I game at 4K now (just bought a 1080 Ti last week), and it appears that, for the time being, the 1080 Ti is the way to go.

    I do see promise in the potential of this new AMD architecture moving forward.
    As DX12 becomes the norm and more devs take advantage of async then we will see more performance improvements with the new AMD architecture.

    If AMD can get power consumption under control then I may move back in a year or two.

It's a shame, too, because I just built a Ryzen 7 rig and felt a little sad combining it with an Nvidia gfx card.
    Reply
  • AgentLozen
I'm glad that AMD has a video card for enthusiasts who run 144Hz monitors at 1440p. The RX 580 and Fury X weren't well suited for that. I'm also happy to see that Vega64 can go toe to toe with the GTX 1080. Vega64 and a FreeSync monitor are a great value proposition.

That's where the positives end. Like everyone else, I'm upset with the lack of progress since Fury X. There was a point when Fury X was evenly matched with Nvidia's best cards of the Maxwell generation. Nvidia then released its Pascal generation, and a whole year went by before a proper response from AMD came around. If Vega64 had launched in 2016, this would be a totally different story.

    Fury X championed High Bandwidth Memory. It showed that equipping a video card with HBM could raise performance, cut power consumption, and cut physical card size. How did HBM2 manifest? Higher memory density? Is that all?

    Vega64's performance improvement isn't fantastic, it gulps down gratuitous amounts of power, and it's huge compared to Fury X. It benefits from a new generation of High Bandwidth memory (HBM2) and a 14nm die shrink. How much more performance does it receive? 23% in 1440p. Those are Intel numbers!

Today's article is a celebration of how good Fury X really was. It still holds up well today with only 4GB of video memory. It even beat the GTX 1070 in several benchmarks. Why didn't AMD take the Fury X, shrink it to 14nm, apply architecture improvements from Polaris 10, and release it in 2016? That thing would be way better than Vega64.

    edit: Reworded some things slightly. Added a silly quip. 23% comes from averaging the differences between Fury X and Vega64.
    Reply
  • zippyzion
    Well, that was interesting. Despite its flaws I think a Vega/Ryzen build is in my future. I haven't been inclined to give NVidia any of my money for a few years now, since a malfunction with an FX 5900 destroyed my gaming rig... long story. I've been buying ATI/AMD cards since then and haven't felt let down by any of them.

Let us not forget how AMD approaches graphics cards and drivers. This is base performance, and barring any driver hiccups, it will only get better. On top of that, this testing covers the air-cooled version. We should see better performance from the water-cooled version, which could land it between the 1080 and the Ti.

    Also, I'd really like to see what low end and midrange Vega GPUs can do. I'm interested to see what the differences are with the 56, as well as the upcoming Raven Ridge APU. If they can deliver RX 560 (or even just 550) performance on an APU, AMD will have a big time winner there.
    Reply