AMD gathered press and partners late last year to talk about a hectic 2017 roadmap that includes Ryzen for the desktop in Q1; FreeSync 2; new server and laptop platforms later in the year; a big push into machine learning; and, of course, Vega, the hotly anticipated successor to its fourth-generation Graphics Core Next (GCN) design.
Let’s cut right to it: Products based on AMD’s high-end Vega GPU architecture aren’t ready yet, but they’re expected sometime between now and the end of June.
In the meantime, AMD is milking this one for all it’s worth. We have the early details the company is willing to disclose, plus a promise that there will be a lower-level tech day closer to Vega’s launch. So let’s dig in, as far as we’re able.
Raja Koduri, senior vice president and chief architect of the Radeon Technologies Group, stood onstage at AMD’s Tech Summit and excitedly held up a newborn Vega-based GPU, emphasizing the 200+ changes that went into creating Vega. This may be a revision to GCN, but AMD clearly wants the world to consider it a fresh endeavor. And why not? As DirectX 12-based games surface with increasing frequency, the architecture’s previously under-utilized strengths become more apparent. Match-ups that previously favored Nvidia under DX11 are more commonly turning up even or going the other way in DX12. Despite all of those modifications, though, Koduri narrowed his presentation down to just four main points.
A Scalable Memory Architecture Based on HBM2
As a lead-in to Vega’s first architectural upgrade, Koduri presented slides showing the rapidly increasing sizes of game installs on desktop PCs, petabyte-class professional graphics workloads, and exabyte-plus training set sizes applicable to machine learning. He also mapped the growth of compute performance versus memory capacity.
Both AMD and Nvidia are working on ways to reduce host processor overhead, maximize throughput to feed the GPU, and circumvent existing bottlenecks—particularly those that surface in the face of voluminous datasets. Getting more capacity closer to the GPU in a fairly cost-effective manner seemed to be the Radeon Pro SSG’s purpose. And Vega appears to take this mission a step further with a more flexible memory hierarchy.
It’s no secret that Vega makes use of HBM2; that information was on roadmaps through 2016. But we now know that AMD wants to call this pool of on-package memory (previously the frame buffer) a high-bandwidth cache. Got it? HBM2 equals high-bandwidth cache now. Why? Because AMD says so.
No really, why? Well, according to Joe Macri, corporate fellow and product CTO, the vision for HBM was to have it be the highest-performance memory closest to the GPU. However, he also wanted system memory and storage available to the graphics processor. In the context of this broader memory hierarchy, sure, it’s logical to envision HBM2 as a high-bandwidth cache relative to slower technologies. But for the sake of disambiguation, we’re going to continue calling HBM2 what it is.
After all, HBM2 in and of itself represents a significant step forward. An up-to-8x capacity increase per vertical stack, compared to first-gen HBM, addresses questions enthusiasts raised about Radeon R9 Fury X’s longevity. Further, a doubling of bandwidth per pin significantly increases potential throughput. If AMD uses the same 4-hi (four HBM dies) stacks of 700 MHz HBM2 that Nvidia introduced with the Tesla P100 accelerators last year, it would have a 16GB card pushing up to 720GB/s. There’s room for more capacity and bandwidth from there.
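For a sense of where those numbers come from, here is the back-of-the-envelope math under that assumed P100-style configuration. The stack count and die density are our assumptions based on the P100, not AMD-confirmed Vega specs:

```python
# Back-of-the-envelope HBM2 math for a Tesla P100-style configuration:
# four 4-hi stacks of 8Gb dies, a 1024-bit interface per stack, 700 MHz DDR.
stacks = 4
capacity_gb = stacks * 4 * 1                # 4 dies per stack x 1GB per die = 16
pin_rate_gbps = 0.7 * 2                     # 700 MHz, double data rate = 1.4 Gb/s per pin
per_stack_gbs = 1024 * pin_rate_gbps / 8    # 179.2 GB/s per stack
total_gbs = stacks * per_stack_gbs          # 716.8 GB/s, i.e. the "up to 720GB/s"
print(capacity_gb, total_gbs)               # 16 716.8
```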
That’s the change we expect to have the largest impact on gamers as far as Vega's memory subsystem goes. However, AMD also gives the high-bandwidth cache controller (no longer just the memory controller) access to a massive 512TB (49-bit) virtual address space for those large datasets Raja discussed earlier in his presentation. Clearly, there’s some overlap between this discussion and the Radeon Instinct lineup AMD announced back in December.
We came away from this presentation with plenty of questions about how the Vega architecture’s broader memory hierarchy will be utilized, but Scott Wasson, senior product marketing manager at AMD, helped add clarity to the discussion by describing some of what the high-bandwidth cache controller can do. According to Wasson, Vega can move memory pages in fine-grained fashion using multiple, programmable techniques. It can receive a request to bring in data and then retrieve it through a DMA transfer while the GPU switches to another thread and continues work without stalling. The controller can go get data on demand but also bring it back in predictively. Information in the HBM can be replicated in system memory like an inclusive cache, or the HBCC can maintain just one copy to save space. All of this is managed in hardware, so it’s expected to be quick and low-overhead.
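AMD hasn't published the HBCC's actual policies or interfaces, but the demand-fetch behavior Wasson described maps onto a familiar software pattern. Here's a toy model of that behavior; the class, its names, and the LRU eviction policy are our own illustration, not AMD's hardware:

```python
class HBCacheModel:
    """Toy model of demand paging as described above: HBM2 holds hot pages,
    and a miss triggers a DMA-style fetch from a larger backing store."""

    def __init__(self, capacity_pages):
        self.capacity = capacity_pages
        self.resident = {}  # page -> data; dict insertion order doubles as LRU order

    def read(self, page, backing_store):
        if page in self.resident:
            data = self.resident.pop(page)                    # hit: refresh LRU position
        else:
            if len(self.resident) >= self.capacity:
                self.resident.pop(next(iter(self.resident)))  # evict the coldest page
            data = backing_store[page]                        # miss: fetch on demand
        self.resident[page] = data
        return data

cache = HBCacheModel(capacity_pages=2)
backing = {n: "data-%d" % n for n in range(8)}
cache.read(0, backing); cache.read(1, backing)
cache.read(2, backing)  # evicts page 0, the least recently used
```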
A New Programmable Geometry Pipeline
The Hawaii GPU (Radeon R9 290X) incorporated some notable improvements over Tahiti (Radeon HD 7970), one of which was a beefier front end with four geometry engines instead of two. The more recent Fiji GPU (Radeon R9 Fury X) maintained that same four-way Shader Engine configuration. However, because it also rolled in goodness from AMD’s third-gen GCN architecture, there were some gains in tessellation throughput, as well. Most recently, the Ellesmere GPU (Radeon RX 480) implemented a handful of techniques for again getting more from a four-engine arrangement, including a primitive discard accelerator that filters out triangles too small or degenerate to affect the final scene.
AMD’s backup slides tell us that Vega’s peak geometry throughput is 11 polygons per clock, up from the preceding generations' four, a boost of up to 2.75x. That figure comes from a new primitive shader stage added to the geometry pipeline. Instead of relying on fixed-function hardware, the primitive shader runs on the shader array.
Mike Mantor, AMD corporate fellow, described the primitive shader as having access similar to a compute shader's for processing geometry: it's lightweight and programmable, with the ability to discard primitives at a high rate. AMD’s Wasson clarified further that the primitive shader’s functionality includes much of what the DirectX vertex, hull, domain, and geometry shader stages can do, but is more flexible about the context it carries and the order in which work is completed.
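AMD hasn't shown primitive shader code, but high-rate discard of the sort Mantor described typically means rejecting triangles that can never contribute to the image. A minimal sketch of one such test, back-face and zero-area culling in screen space (the function and its winding assumption are ours):

```python
def cull_primitives(triangles):
    """Drop back-facing and zero-area triangles before rasterization.
    Each triangle is three (x, y) screen-space vertices; counter-clockwise
    winding is assumed to be front-facing."""
    kept = []
    for v0, v1, v2 in triangles:
        # Twice the signed area, via the 2D cross product.
        area2 = (v1[0] - v0[0]) * (v2[1] - v0[1]) - (v1[1] - v0[1]) * (v2[0] - v0[0])
        if area2 > 0:  # <= 0 means back-facing or degenerate: discard
            kept.append((v0, v1, v2))
    return kept

tris = [((0, 0), (4, 0), (0, 4)),   # front-facing: kept
        ((0, 0), (0, 4), (4, 0))]   # back-facing: discarded
print(len(cull_primitives(tris)))   # 1
```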
The front end also benefits from an improved workgroup distributor, responsible for load balancing across the programmable hardware. AMD said this work grew out of its collaboration with efficiency-minded console developers, and that effort now stands to benefit PC gamers, as well.
The Vega NCU (Next-Generation Compute Unit)
Nvidia is surgical about segmentation across its many Pascal-based GPUs. The largest and most expensive GP100 processor offers a peak FP32 rate of 10.6 TFLOPS (at its peak GPU Boost frequency). A 1:2 ratio of FP64 to FP32 cores yields a double-precision rate of 5.3 TFLOPS, and support for half-precision compute/storage enables up to 21.2 TFLOPS. The more consumer-oriented GP102 and GP104 processors naturally offer full-performance FP32 but deliberately handicap FP64 and FP16 rates so you can’t get away with using cheaper cards for scientific or training workloads.
AMD, on the other hand, looks like it’s trying to give more to everyone. The Compute Unit building block, with 64 IEEE 754-2008-compliant shaders, persists, only now it’s being called an NCU, or Next-Generation Compute Unit, reflecting support for new data types. With 64 shaders each executing a peak of two floating-point operations per cycle (a fused multiply-add), you end up with a maximum of 128 32-bit ops per clock. Using packed FP16 math, which fits two half-precision values into each 32-bit register, that number turns into 256 16-bit ops per clock. AMD even claimed it can do up to 512 eight-bit ops per clock. Double-precision is a different animal: AMD doesn’t seem to have a problem admitting it sets FP64 rates based on target market.
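Those per-NCU figures, and the headline TFLOPS numbers derived from them, fall out of simple multiplication. A quick sketch; note that the 4096-shader, 1.5 GHz configuration at the end is our hypothetical example, not an announced Vega spec:

```python
def peak_ops_per_clock(shaders=64, pack=1):
    # Each shader retires one FMA (2 ops) per clock; packed math fits
    # `pack` lower-precision values into each 32-bit lane.
    return shaders * 2 * pack

print(peak_ops_per_clock(pack=1))  # 128 FP32 ops/clock per NCU
print(peak_ops_per_clock(pack=2))  # 256 FP16 ops/clock per NCU
print(peak_ops_per_clock(pack=4))  # 512 INT8 ops/clock per NCU

def peak_tflops(total_shaders, clock_ghz, pack=1):
    return total_shaders * 2 * pack * clock_ghz / 1000

# Hypothetical 4096-shader GPU at 1.5 GHz:
print(peak_tflops(4096, 1.5))          # ~12.3 TFLOPS FP32
print(peak_tflops(4096, 1.5, pack=2))  # ~24.6 TFLOPS FP16
```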
The impetus for this flexibility may well have come from the console world. After all, we know Sony’s PlayStation 4 Pro can use half-precision to achieve up to 8.4 TFLOPS, twice its 4.2 TFLOPS using 32-bit operations. Or perhaps it started with AMD’s aspirations in the machine learning space, resulting in products like the upcoming Radeon Instinct MI25 that aim to chip away at Nvidia’s market share. Either way, consoles, datacenters, and PC gamers alike stand to benefit.
AMD claimed the NCUs are optimized for higher clock rates, which isn’t particularly surprising, but it also implemented larger instruction buffers to keep the compute units busy.
A Next-Generation Pixel Engine
The fourth topic of AMD’s early Vega disclosures is actually a two-parter. First up is the draw stream binning rasterizer, attached to the traditional rasterization hardware, which Koduri said improves performance and saves power. At a high level, an on-chip bin cache lets the rasterizer fetch the data for overlapping primitives only once, and shade each pixel only once by culling pixels not visible in the final scene.
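AMD didn't detail its binning scheme, but the general tile-binning idea is well established: sort primitives by the screen tiles they touch, then process one tile's worth of work out of fast on-chip storage. A minimal sketch of the sorting step (the tile size and data layout are our assumptions):

```python
from collections import defaultdict

def bin_triangles(triangles, width, height, tile=32):
    """Map each triangle's screen-space bounding box to the tiles (bins)
    it overlaps, so each tile's primitives can later be rasterized
    together out of a small on-chip cache."""
    last_tx, last_ty = (width - 1) // tile, (height - 1) // tile
    bins = defaultdict(list)
    for tri_id, verts in enumerate(triangles):
        xs = [x for x, _ in verts]
        ys = [y for _, y in verts]
        x0 = max(int(min(xs)) // tile, 0)
        x1 = min(int(max(xs)) // tile, last_tx)
        y0 = max(int(min(ys)) // tile, 0)
        y1 = min(int(max(ys)) // tile, last_ty)
        for ty in range(y0, y1 + 1):
            for tx in range(x0, x1 + 1):
                bins[(tx, ty)].append(tri_id)
    return bins

# Example: one triangle spanning four 32x32 tiles on a 128x128 target
print(bin_triangles([((10, 10), (50, 12), (30, 40))], 128, 128))
```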
Second, AMD is fundamentally changing its cache hierarchy by making the render back-ends clients of the L2. In architectures before Vega, pixel and texture memory access were non-coherent, meaning there was no shared point for the pipeline stages to synchronize. Take texture baking, where a scene is rendered to a texture for later reuse and then read back through the shader array: the data had to make a round trip through off-die memory. Now the architecture has coherent access, which AMD said particularly boosts performance in applications that use deferred shading.
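To see why that matters, consider the producer-consumer shape of deferred shading: one pass writes per-pixel surface data (a G-buffer), and a later pass reads it back to compute lighting. A minimal CPU-side sketch of that data dependency in NumPy (this illustrates the data flow only, not actual GPU code):

```python
import numpy as np

h, w = 4, 4

# Pass 1 (geometry pass): write per-pixel normals and albedo to a G-buffer.
# On the GPU, these writes go out through the render back-ends.
normals = np.tile(np.array([0.0, 0.0, 1.0]), (h, w, 1))
albedo = np.full((h, w, 3), 0.8)

# Pass 2 (lighting pass): read the G-buffer back to shade each pixel.
# On the GPU, these reads come in through the texture units; that handoff
# is exactly what a shared L2 now covers without an off-die round trip.
light_dir = np.array([0.0, 0.0, 1.0])
n_dot_l = np.clip((normals * light_dir).sum(axis=-1, keepdims=True), 0.0, 1.0)
color = albedo * n_dot_l
print(color[0, 0])  # [0.8 0.8 0.8]
```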
Although much of what Koduri presented requires development effort on the software side to be fully exploited, he still wrapped up his slide deck with a demo of Doom running at 4K, under its most taxing detail settings, on an early Vega board in the 70+ FPS range. Based on the numbers from our Titan X review, that puts it somewhere between a GTX 1080 and a Titan X.
As AMD's driver team gets its feet wet with Vega, we expect to see those performance numbers increase. We just hope it doesn't take the company another six months to feel comfortable giving us the full story. By then, its principal competition will be over a year old.