According to YouTube channel Moore's Law Is Dead, Intel's successor to the DG1, the DG2, could arrive later this year with significantly more firepower than Intel's current DG1 graphics card. Of course it will be faster — that much is a given — but the latest rumors have it that the DG2 could perform similarly to an RTX 3070 from Nvidia. Could it end up as one of the best graphics cards? Never say never, but yeah, big scoops of salt are in order. Let's get to the details.
Supposedly, this new Xe graphics card will be built entirely on TSMC's 6nm N6 node rather than Intel's own fabs. This isn't surprising, as Intel is already planning to use TSMC silicon in some of its future Meteor Lake CPUs. But we do wonder if a DG2 successor based on Intel silicon could arrive further down the road.
According to MLID and previous leaks, Intel's DG2 is specced out to have up to 512 execution units (EUs), each with the equivalent of eight shader cores. The latest rumor is that it will clock at up to 2.2GHz, a significant upgrade over current Xe LP, likely helped by the use of TSMC's N6 process. It will also have a proper VRAM configuration with 16GB of GDDR6 over a 256-bit bus. (DG1 uses LPDDR4 for comparison.)
Earlier rumors suggested power use of 225W–250W, but now the estimated power consumption is around 275W. That puts the GPU somewhere between the RTX 3080 (320W) and RTX 3070 (250W), but with RTX 3070 levels of performance. But again, lots of grains of salt should be applied, as none of this information has been confirmed by Intel. TSMC N6 uses the same design rules as the N7 node, but with some EUV layers, which should reduce power requirements. Then again, we're looking at a completely different chip architecture.
Regardless, Moore's Law Is Dead quotes one of its 'sources' as saying the DG2 will perform like an RTX 3070 Ti. This is quite strange since the RTX 3070 Ti isn't even an official SKU from Nvidia (at least not right now). Put more simply, this means the DG2 should be slightly faster than an RTX 3070. Maybe.
That's not entirely out of the question, either. Assuming the 512 EUs and 2.2GHz figures end up being correct, that would yield a theoretical 18 TFLOPS of FP32 performance. That's a bit less than the 3070, but the Ampere GPUs share resources between the FP32 and INT32 pipelines, meaning the actual throughput of an RTX 3070 tends to be lower than the pure TFLOPS figure would suggest. Alternatively, 18 TFLOPS lands halfway between AMD's RX 6800 and RX 6800 XT, which again would match up quite reasonably with a hypothetical RTX 3070 Ti.
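The napkin math behind that 18 TFLOPS figure is straightforward. As a rough sketch, assuming the rumored specs and the usual convention of counting a fused multiply-add as two floating-point operations per shader core per clock:

```python
# Back-of-the-envelope FP32 throughput for the rumored DG2 configuration.
# All inputs here (512 EUs, 8 shader cores per EU, 2.2GHz) are unconfirmed rumors.
eus = 512
cores_per_eu = 8       # shader cores (ALU lanes) per execution unit
ops_per_clock = 2      # one fused multiply-add counts as 2 FP32 ops
clock_ghz = 2.2

tflops = eus * cores_per_eu * ops_per_clock * clock_ghz / 1000
print(f"{tflops:.1f} TFLOPS")  # → 18.0 TFLOPS
```

The same formula applied to the RTX 3070 (5,888 CUDA cores at a 1.73GHz boost clock) gives about 20.3 TFLOPS — but, as noted above, Ampere's shared FP32/INT32 pipelines mean its delivered gaming throughput falls short of that peak number.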
There are plenty of other rumors and 'leaks' in the video as well. For example, at one point MLID discusses a potential DLSS alternative called, not-so-creatively, XeSS — and the Internet echo chamber has already begun to propagate that name around. Our take: Intel doesn't need a DLSS alternative. Assuming AMD can get FidelityFX Super Resolution (FSR) to work well, it's open source and GPU vendor agnostic, meaning it should work just fine with Intel and Nvidia GPUs as well as AMD's offerings. We'd go so far as to say Intel should put its support behind FSR, just because an open standard that developers can support and that works on all GPUs is ultimately better than a proprietary standard. Plus, there's not a snowball's chance in hell that Intel can do XeSS as a proprietary feature and then get widespread developer support for it.
Other rumors are more believable. The encoding performance of DG1 is already impressive, building off Intel's existing QuickSync technology, and DG2 could up the ante significantly. That's less of a requirement for gaming use, but it would certainly enable live streaming of content without significantly impacting frame rates. Dedicated AV1 encoding would also prove useful.
The DG2 should hopefully be available to consumers by Q4 of 2021, but with the current shortages plaguing chip fabs, it's anyone's guess as to when these cards will actually launch. Prosumer and professional variants of the DG2 are rumored to ship in 2022.
We don't know the pricing of this 512EU SKU, but there is a 128EU model planned down the road, with an estimated price of around $200. More importantly, we don't know how the DG2 or its variants will actually perform. Theoretical TFLOPS doesn't always translate into real-world performance, and architecture, cache, and above all drivers play a critical role in gaming. We've encountered issues testing Intel's Xe LP-equipped Tiger Lake CPUs with some recent games, for example, and the Xe HPG architecture in DG2 would presumably build off the same driver set.
Again, this info is very much unconfirmed rumor, and things are bound to change by the time DG2 actually launches. But if this data is even close to true, Intel's first proper dip into the dedicated GPU market in over 10 years (DG1 doesn't really count) could leave it decently competitive with Ampere's mid-range and high-end offerings, and by that token with AMD's RDNA2 GPUs as well.