Intel's First Discrete GPU, the DG1, Listed with 96 EUs
A leak gives some interesting details about Intel's first discrete GPU.
According to information that has appeared online, Intel's first discrete graphics chip, referred to as DG1, will pack 96 execution units (EUs), as many as Tiger Lake's integrated graphics. However, by working together with the CPU, DG1 might double Tiger Lake's graphics performance.
The information originated from the Eurasian Economic Union on Wednesday and was posted on Twitter. The registry entry lists a development kit whose name contains the number of execution units: ‘DG1 External FRD1 96EU Accessory Kit’.
DG1 is Intel’s first discrete graphics chip based on the Xe LP architecture. It is known that Tiger Lake’s integrated graphics will contain 96 Xe EUs, up from 64 on Ice Lake. This would imply that the DG1 is basically just a discrete version of Intel’s ordinary integrated graphics, probably with limited upside performance potential.
However, a Linux patch in October indicated that Intel is working on running its integrated and discrete graphics together. In that case, the combined graphics hardware would consist of 192 EUs, a respectable 3x gain over Ice Lake and 8x that of the last few generations of 14nm processors.
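As a quick sanity check on those multipliers (a sketch assuming the commonly cited EU counts: 24 EUs for the Gen9 GT2 iGPUs found in recent 14nm processors, 64 for Ice Lake's Iris Plus G7):

```python
# Back-of-the-envelope check of the combined iGPU + dGPU scaling claim.
GEN9_EUS = 24        # e.g. HD Graphics 530 (Skylake GT2, 14nm) - assumed baseline
ICE_LAKE_EUS = 64    # Iris Plus G7
TIGER_LAKE_EUS = 96  # Xe LP integrated graphics
DG1_EUS = 96         # per the EEC listing

combined = TIGER_LAKE_EUS + DG1_EUS   # integrated and discrete working together
print(combined)                       # 192 EUs
print(combined / ICE_LAKE_EUS)        # 3.0x over Ice Lake
print(combined / GEN9_EUS)            # 8.0x over Gen9 GT2
```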
According to a recent rumor, DG1 would indeed be just a fraction faster than Tiger Lake's GPU and is facing development issues. Intel Graphics responded on Twitter that development is humming along, echoed by Raja Koduri's optimism.
DG1: Everything we Know so Far
The first signs of DG1 appeared this summer through an Intel graphics driver, which listed its name, but not execution unit count. The accompanying Gen 12 LP information indicated that it would be targeted at the low power (mobile) segment, with a TDP likely no higher than 25W. The driver also mentioned DG2 with Gen 12 HP and possibly up to 512 EUs.
In October, Raja Koduri might have hinted in a tweet that the first Xe graphics would be released in June 2020.
Later in October, Intel formally announced DG1 as its first discrete graphics chip and disclosed that it had gone through the power-on process in the third quarter of the year. This would be roughly in line with a mid-2020 launch. It is based on the new Xe architecture.
Meanwhile, Raja Koduri has described Xe HP (DG2?) as amongst the “largest [silicon designed] anywhere”.
-
twotwotwo Even if nothing in the silicon were inherently better, a discrete card might be able to run a bit faster just because it gets its own power and thermal budget apart from the CPU's. It could also get a latency/BW boost from RAM on the card, especially GDDR or HBM, but even eDRAM like Crystalwell.
If they ship it reasonably close to their schedule in sufficient volume (to be seen given manufacturing issues), and the initial pricing and marketing aren't delusional, they've probably successfully wedged their way into the discrete GPU market. It doesn't have to be the new performance leader, just offer a legit jump over iGPUs of the time at a proportional price. -
bit_user
"as many as Tiger Lake’s integrated graphics."
This is the key point. If these EUs are comparable to those of their traditional iGPUs, you're only talking about a 768-shader GPU. So, maybe about GTX 1050 Ti-level performance.
Of course, they could always make the new EUs much wider... which would only be reasonable, considering how much narrower they are than everybody else's (8 vs 32 or 64).
"Although by working together with the CPU, DG1 might double Tiger Lake's graphics performance."
Yeah, I'll believe it when I see it. IMO, it would make more sense to run physics on the iGPU and do rendering on the dGPU, for instance. Perhaps some engines already do that, since they wouldn't even have to be the same architecture.
"The information originated from the Eurasian Economic Union on Wednesday"
Not saying it's fake news, or anything, but...
From http://www.eaeunion.org/?lang=en#about : "The Member-States of the Eurasian Economic Union are the Republic of Armenia, the Republic of Belarus, the Republic of Kazakhstan, the Kyrgyz Republic and the Russian Federation."
...interesting.
"The driver also mentioned DG2 with Gen 12 HP and possibly up to 512 EUs."
This is nuts! And I really mean nuts, because that's not the most efficient way to scale. As I said, they should really make the EUs wider, before making them more numerous. There's a reason for SIMD's enduring popularity. You'd think Intel, with their 16-wide AVX-512 would get this. -
TerryLaze
bit_user said: "This is nuts! And I really mean nuts, because that's not the most efficient way to scale. As I said, they should really make the EUs wider, before making them more numerous. There's a reason for SIMD's enduring popularity. You'd think Intel, with their 16-wide AVX-512 would get this."
Adding more of already available units is much more efficient than designing something new from the ground up...
Let them release their first gen,they will make wider units in the future. -
JayNor Intel states 8K60 video encoding on Tiger Lake. Their Xe gpu is probably going to occupy more layout than all their cores combined. Interesting design choices... -
InvalidError
JayNor said: "Their Xe gpu is probably going to occupy more layout than all their cores combined. Interesting design choices..."
You make the choices that should get you the most sales in your target markets. As for other stuff accounting for more die area than the cores themselves, you can say the same thing about Ryzen: Zen 1/1+ had cores occupying 1/3 of the die area, and that goes down to around 1/4 with Zen 2, largely thanks to doubling L3 cache. The cores are getting smaller but the amount of infrastructure required to keep them fed is increasing.
SBnemesys From all the leaks, I am far from impressed. But I won't make any judgements until we get to see the final product. I honestly think Intel is making a mistake though. They really need to focus on their cpus considering amd is smashing it right now. -
InvalidError
SBnemesys said: "I honestly think Intel is making a mistake though. They really need to focus on their cpus considering amd is smashing it right now."
The CPU stuff is already done; Intel is two generations ahead on architecture. What is bogging Intel down is lacking the process required to make it actually work without the substantial clock frequency penalty seen on mobile Ice Lake vs mobile Coffee Lake. The Tiger Lake ES leaks look promising, and may very well give Zen 3 a run for its money.
bit_user
TerryLaze said: "Adding more of already available units is much more efficient than designing something new from the ground up..."
They've been working on this for... how long? Since probably 2016, at least, with Gen11 as a way point.
And their iGPUs were not designed to scale up, so a lot of redesign was clearly necessary.
At the ISA level, they already introduced major ABI incompatibilities.
Nearly every instruction field, opcode, and register type is updated and there are other big changes like removing the hardware register scoreboard logic that leaves it up to the compiler now for ensuring data coherency between register reads and writes and a new sync hardware instruction.
Source: https://www.phoronix.com/scan.php?page=news_item&px=Intel-Gen12-Gfx-Compiler-Big
Compared to all that, doubling or quadrupling their SIMD width should be easy.
Even on a superficial level, it wouldn't make sense for Intel to release something that's not competitive, simply because it's "easy". That's how companies lose large amounts of money and how product lines get cancelled. They can't afford to go to market with something they're not confident will be competitive.
I'm just wondering why they think an entry level card with 96 EUs will be competitive, when you consider that Radeon VII has only 60 CUs and even the mighty Titan V has just 80 SMs. What AMD and Nvidia both know is that each unit adds overhead. So, it's important to strike the right balance between the number and their width.
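The shader-count gap behind this point can be sketched with simple arithmetic, assuming each Xe LP EU stays 8 lanes wide like Intel's current EUs, and using the usual 64 FP32 lanes per AMD CU and per Nvidia Volta SM:

```python
# Rough FP32 ALU-lane ("shader") totals per GPU.
# Lane widths per unit are assumptions based on current architectures:
# 8 per Intel EU, 64 per AMD Vega CU, 64 per Nvidia Volta SM.
def alus(units, lanes_per_unit):
    return units * lanes_per_unit

dg1 = alus(96, 8)          # DG1 / Tiger Lake iGPU
radeon_vii = alus(60, 64)  # Radeon VII
titan_v = alus(80, 64)     # Titan V
print(dg1, radeon_vii, titan_v)  # 768 3840 5120
```

So despite having more "units" than either card, DG1 would field a fraction of their total shader lanes under these assumptions.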
If you're going to offer counter-points, please try harder. -
bit_user
SBnemesys said: "I honestly think Intel is making a mistake though. They really need to focus on their cpus considering amd is smashing it right now."
Intel's 2018 revenues were $70.8 B. AMD's were $6.48 B. The two companies are on completely different scales.
Intel can walk and chew gum, at the same time. They have like 5 major divisions, in the company. And they've been building GPUs or iGPUs for well over 20 years.
https://en.wikipedia.org/wiki/List_of_Intel_graphics_processing_units
In fact, it would be a lot harder for Intel to redirect personnel and resources to their CPU team (or manufacturing, per I.E.'s point). Once an organization reaches a certain size, it's a lot easier for it to branch out into more areas, rather than have people stepping all over each other, trying to work on the same products. Ideas along these lines were most famously described in a classic book:
https://en.wikipedia.org/wiki/The_Mythical_Man-Month
Finally, they need competitive iGPUs, for the lucrative laptop market. Specifically because AMD is showing resurgence on the CPU front, and has long had an edge on the GPU front, Intel can't afford not to have a competitive GPU offering, if they want any hope of keeping this market locked up. -
InvalidError
bit_user said: "I'm just wondering why they think an entry level card with 96 EUs will be competitive, when you consider that Radeon VII has only 60 CUs and even the mighty Titan V has just 80 SMs."
Why are you expecting Intel's 96 EUs to be competitive with VII or a Titan when Intel itself is labeling it as entry-level, which would be more along the lines of RX570/RX5500/GTX1650 at best? It isn't the number of EUs that matters, it is how much stuff you cram into them (not very much for DG1) and how fast you can run them.
Having more EUs may have some amount of overhead, but so does making EUs wider: you need to widen everything to match (IO from registers, IO from local buffers, IO from cache, register count, etc.), and all of that extra width needs to be managed and scheduled too. I would expect Intel to model its architectures in C or some other language to determine the optimal combination of EU count and EU width for its 3D rendering pipeline by throwing scenes grabbed from actual games and apps at it. I'm fairly sure Intel did its homework before settling on plentiful narrow EUs instead of fewer fatter ones.