Hardware sleuth Tum_Apisak spotted not one but two unannounced Intel Gen12 Xe GPUs. According to the SiSoftware submission, the two GPUs worked in unison in some sort of dual-GPU configuration. This will likely fuel speculation that Tiger Lake-U processors could theoretically work in a multi-GPU arrangement with Intel's discrete graphics card, which became possible through the driver updates issued last year, but that remains unconfirmed.
The thought of a discrete GPU working in tandem with the integrated graphics sounds interesting, especially since the leaked benchmarks of Xe devices have always pointed to a single GPU. Nonetheless, we can't discard the possibility of SiSoftware misreporting the GPU. Xe is unreleased hardware, after all, and benchmark tools often have trouble identifying such devices correctly.
The dual-GPU setup reportedly features a total of 192 Execution Units (EUs), which comes out to 1,536 shader cores (each EU houses eight shading units). Apparently, there's 6.2GB of memory and 2MB of L2 cache on board as well. The dual-GPU setup completed the benchmark at a 1.25 GHz clock speed. If we dissect the setup, it falls perfectly in line with our theory.
Intel's DG1 GPU is rumored to sport up to 96 EUs. Subtracting that from the 192-EU total would leave another 96 EUs. As a recap, the maximum graphics configuration on Tiger Lake-U is 96 EUs, so the leftover EUs could belong to the processor. The SiSoftware entry states that the test platform is based on a Tiger Lake-U processor, which lends further weight to our theory.
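The EU math above can be sketched in a few lines; the figures come from the SiSoftware entry and the rumors cited here, and the eight-shaders-per-EU ratio is the standard layout for Intel's Gen graphics architecture:

```python
# Figures reported in the SiSoftware entry and the DG1/Tiger Lake-U rumors.
TOTAL_EUS = 192
SHADERS_PER_EU = 8          # standard for Intel Gen-architecture EUs
DG1_EUS = 96                # rumored maximum DG1 configuration
TIGER_LAKE_U_MAX_EUS = 96   # maximum iGPU configuration on Tiger Lake-U

shader_cores = TOTAL_EUS * SHADERS_PER_EU
leftover_eus = TOTAL_EUS - DG1_EUS

print(shader_cores)                            # 1536, matching the reported shader count
print(leftover_eus == TIGER_LAKE_U_MAX_EUS)    # True: the remainder fits the iGPU exactly
```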
The Tiger Lake-U (UP3-class) parts with 96 EUs feature maximum clock speeds between 1.3 GHz and 1.35 GHz. Similar to Nvidia's SLI or AMD's CrossFire technology, our hypothesis is that the Intel DG1 and the Xe iGPU would have to match clocks to play nice together. On SLI and CrossFire setups, the GPUs normally operate at the slower of the two clock speeds. Intel doesn't list iGPU base clock speeds for its Tiger Lake-U chips, so for now we can only assume that the 1.25 GHz figure corresponds to the DG1.
However, Intel could be preparing more powerful offerings. Last week, one particular Gen12 GPU (via @Tum_Apisak) emerged with 128 EUs and 3GB of memory. The unit also had 1MB of L2 cache and operated at a 1.4 GHz clock speed. This particular sample resided on a Coffee Lake-S platform, suggesting that it could be the desktop DG1.
It's clear that Intel is progressing quite nicely with the Xe lineup, and we can expect to see more 'inadvertent' teasers of the GPUs in the near future.
It would make sense to create a technology that gives an advantage when components from the same brand are put together.
Examples: accelerating ray tracing with the iGPU, or something like PhysX, or something that would accelerate VR stuff, I don't know... something.
For now, iGPUs are shut down when a discrete GPU is installed. That's dumb. It's raw computing power left unused!
Would be funny if Intel makes less powerful CPUs than AMD, and less powerful GPUs than Nvidia, but at the same time CPU+GPU put together gives an edge. They would sell like hotcakes.
With Intel, the split seems to be even (50/50), so there's a real probability this may work. My concern is the i5 or i3 parts, which have fewer iGPU EUs to pair with a potential discrete option; there it would likely be better to just use the discrete GPU, which will likely have dedicated VRAM and a larger power budget.
And the million-dollar question: what does this do against the real competition, Nvidia? Are we talking 1050/1060M to 1650M performance? How well does an Intel 192 EU setup stack up (disregarding infant drivers and all those issues)?
Also, don't get your hopes up for Xe either: computational workloads might work on multiple GPUs, but gaming is way too complex to balance between uneven GPUs.
With that being said, it wouldn't really make sense for anyone to buy a low-end AMD CPU and then pair it with a higher-end GPU...
If they started putting iGPUs in every processor, then that could work.
One Tile: 10,588 GFLOPS (10.6 TF) of FP32
Two Tile: 21,161 GFLOPS (21.2 TF) of FP32 (1.999x)
Four Tile: 42,277 GFLOPS (42.3 TF) of FP32 (3.993x)
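As a sanity check on those quoted Xe-HP figures, dividing each multi-tile result by the one-tile baseline reproduces the near-linear scaling factors in parentheses:

```python
# FP32 throughput figures (in GFLOPS) as quoted in the Xe-HP demo numbers above.
one_tile = 10588
two_tile = 21161
four_tile = 42277

print(round(two_tile / one_tile, 3))   # 1.999 -> two tiles scale almost perfectly
print(round(four_tile / one_tile, 3))  # 3.993 -> four tiles stay near-linear
```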
Part of the problem is that most games do not know how to take advantage of crazy amounts of EUs/Shaders.
Xe-HP is intended for the data center.
I see plenty of indication that gaming at 4K can take advantage of bigger processors.
The primary advantage demonstrated is that Intel can scale up performance with their tiled architecture. Their Xe-HP demo scaled linearly to 4 tiles.
Intel has a 16-tile Xe-HPC GPU in development that adds 64-bit operations... on the roadmap for 2021.
Intel reported an Xe-HPG GPU is in the lab, which is their DG2 gaming GPU that includes hardware ray tracing.
Or at least divide the tasks. Have, for example, the integrated GPU run the desktop, Explorer, and all the mundane OS stuff, while the dedicated GPU would be reserved and truly dedicated to the actual graphics-intensive applications.
Until we see the DG2 actually in action in the hands of a reputable reviewer, we won't actually know how it performs.
At this point I'm quite skeptical, at least for the first two generations, that Intel will be able to make anything actually competitive, both in terms of price and power/performance ratios.
Sure, Intel spends the most on R&D and has a huge engineering department, but they aren't, and never were, focused on GPUs. I would find it difficult to believe that this generation or the next will catch Nvidia or AMD with their pants down.
I surely hope so, though; extra competition is always better for us consumers.