Dual-GPU Intel Xe Graphics Shows Up in New Benchmark Leak

(Image credit: Shutterstock)

Hardware sleuth Tum_Apisak spotted not one but two unannounced Intel Gen12 Xe GPUs. According to the SiSoftware submission, the two GPUs worked in unison in some sort of dual-GPU configuration. This will likely fuel speculation that Tiger Lake-U processors could work in a multi-GPU arrangement with Intel's discrete graphics card, a capability that last year's driver updates theoretically enabled but that remains unconfirmed.

The thought of a discrete GPU working in tandem with the integrated graphics sounds interesting, especially since leaked benchmarks of Xe devices have always pointed to a single GPU. Nonetheless, we can't rule out the possibility that SiSoftware misreported the GPU. Xe is unreleased hardware, after all, and detection tools are notoriously unreliable with hardware they don't yet recognize.

The dual-GPU setup reportedly features a total of 192 Execution Units (EUs), which comes out to 1,536 shader cores. Apparently, there's 6.2GB of memory and 2MB of L2 cache onboard as well. The setup completed the benchmark at a 1.25 GHz clock speed. If we dissect those numbers, they fall neatly in line with the multi-GPU theory.

Intel's DG1 GPU is rumored to sport up to 96 EUs. Subtracting those from the total would leave 96 EUs unaccounted for. As a recap, the maximum graphics configuration on Tiger Lake-U is 96 EUs, so the leftover EUs could belong to the processor. The SiSoftware entry states that the test platform is based on a Tiger Lake-U processor, which lends further weight to the theory.
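
If you want to sanity-check that arithmetic, here's a minimal sketch (the eight FP32 ALUs per Gen12 EU is the commonly cited figure; the DG1/iGPU split is our speculation, not something the SiSoftware entry spells out):

```python
# Gen12 Xe: each EU is commonly cited as packing 8 FP32 ALUs ("shader cores").
ALUS_PER_EU = 8

total_eus = 192                     # reported by the SiSoftware entry
print(total_eus * ALUS_PER_EU)      # 1536 shader cores, matching the leak

# Speculative split: rumored DG1 maximum plus the remainder.
dg1_eus = 96                        # rumored DG1 configuration
print(total_eus - dg1_eus)          # 96, exactly Tiger Lake-U's maximum iGPU
```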

The Tiger Lake-U (UP3-class) parts with 96 EUs feature maximum graphics clock speeds between 1.3 GHz and 1.35 GHz. Our hypothesis is that, similar to Nvidia's SLI or AMD's CrossFire technology, the Intel DG1 and the Xe iGPU would have to match clocks to play nice together. On SLI and CrossFire setups, the GPUs normally operate at the slower of the two clock speeds. Intel doesn't list the iGPU base clock speeds for its Tiger Lake-U chips, so for now, we can only assume that the 1.25 GHz figure corresponds to the DG1.
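
Here's a minimal sketch of that clock-matching hypothesis, plus a back-of-the-envelope peak-throughput figure. The min() behavior mirrors how SLI/CrossFire handle mismatched clocks, and the shaders x 2 ops x clock formula is the standard FMA-based theoretical peak; neither comes from the SiSoftware entry itself:

```python
def effective_clock_ghz(dgpu_ghz: float, igpu_ghz: float) -> float:
    """SLI/CrossFire-style pairing: both GPUs drop to the slower clock."""
    return min(dgpu_ghz, igpu_ghz)

def fp32_tflops(shader_cores: int, clock_ghz: float) -> float:
    """Theoretical FP32 peak: shaders x 2 ops per clock (FMA) x clock (GHz)."""
    return shader_cores * 2 * clock_ghz / 1000

# If the DG1 runs at 1.25 GHz and the Tiger Lake-U iGPU tops out around 1.3 GHz,
# the pair would settle at 1.25 GHz, the clock the benchmark reported.
clock = effective_clock_ghz(1.25, 1.30)

# 192 EUs -> 1,536 shaders at that clock.
print(fp32_tflops(1536, clock))     # ~3.84 TFLOPS theoretical peak
```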

However, Intel could be preparing more powerful offerings. Last week, one particular Gen12 GPU (via @Tum_Apisak) emerged with 128 EUs and 3GB of memory. The unit also had 1MB of L2 cache and operated at a 1.4 GHz clock speed. This particular sample resided on a Coffee Lake-S platform, suggesting that it could be the desktop DG1.
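
Plugging this part's figures into the same hypothetical fp32_tflops sketch from above gives a rough ceiling for the rumored desktop card:

```python
# 128 EUs x 8 ALUs = 1,024 shaders at the reported 1.4 GHz.
print(fp32_tflops(1024, 1.4))       # ~2.87 TFLOPS theoretical peak
```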

It's clear that Intel is progressing quite nicely with the Xe lineup, and we can expect to see more 'inadvertent' teasers of the GPUs in the near future. 

Zhiye Liu
News Editor and Memory Reviewer

Zhiye Liu is a news editor and memory reviewer at Tom’s Hardware. Although he loves everything that’s hardware, he has a soft spot for CPUs, GPUs, and RAM.

  • neojack
    I am still astonished as to why AMD is not pushing a technology like this. I mean, they make CPUs, they make GPUs.
    It would make sense to create a technology that gives an advantage when components from the same brand are put together.

    Examples: accelerating ray tracing with the iGPU, or something like PhysX, or something that would accelerate VR stuff, I don't know... something.
    For now, iGPUs are shut down when a discrete GPU is installed. That's dumb. It's raw computing power left unused!

    It would be funny if Intel made less powerful CPUs than AMD and less powerful GPUs than Nvidia, but at the same time the CPU+GPU combination gave an edge; they would sell like hotcakes.
    Reply
  • JfromNucleon
    neojack said:
    I am still astonished as to why AMD is not pushing a technology like this. I mean, they make CPUs, they make GPUs.
    It would make sense to create a technology that gives an advantage when components from the same brand are put together.

    Examples: accelerating ray tracing with the iGPU, or something like PhysX, or something that would accelerate VR stuff, I don't know... something.
    For now, iGPUs are shut down when a discrete GPU is installed. That's dumb. It's raw computing power left unused!

    It would be funny if Intel made less powerful CPUs than AMD and less powerful GPUs than Nvidia, but at the same time the CPU+GPU combination gave an edge; they would sell like hotcakes.
    A very interesting proposition
    Reply
  • cyrusfox
    neojack said:
    I am still astonished as to why AMD is not pushing a technology like this. I mean, they make CPUs, they make GPUs.
    It would make sense to create a technology that gives an advantage when components from the same brand are put together.

    Examples: accelerating ray tracing with the iGPU, or something like PhysX, or something that would accelerate VR stuff, I don't know... something.
    For now, iGPUs are shut down when a discrete GPU is installed. That's dumb. It's raw computing power left unused!

    It would be funny if Intel made less powerful CPUs than AMD and less powerful GPUs than Nvidia, but at the same time the CPU+GPU combination gave an edge; they would sell like hotcakes.
    AMD did try this with their Trinity APU. I bought one, and the synergy was non-existent. Either they didn't put enough effort into the drivers or the APU's GPU was too anemic to really offer any benefit. I found the system ran better with the discrete GPU (it was either a 7650M or 7750M). Nothing but a bad taste in my mouth from that laptop (won't buy HP again; went through 5 motherboards trying to keep that thing alive, garbage).

    With Intel, the two GPUs seem to be on equal footing (a 50/50 split), so there's a real probability this may work. My concern is the i5 or i3 parts, which have fewer iGPU EUs to pair with a potential discrete option; it would likely be better to just use the discrete GPU, which will likely have dedicated VRAM resources and a larger power budget.

    And the million-dollar question: what does this do against the real competition, Nvidia? Are we talking 1050/1060M to 1650M performance? How well does an Intel 192 EU setup stack up (disregarding infant drivers and all those issues)?
    Reply
  • TerryLaze
    neojack said:
    For now, iGPUs are shut down when a discrete GPU is installed.
    No, they aren't; they work perfectly fine. There is just no software that can use vastly different GPUs at the same time.

    Also, don't get your hopes up for Xe either; computational workloads might scale across multiple GPUs, but gaming is way too complex to balance between uneven GPUs.
    Reply
  • xtc-604
    neojack said:
    I am still astonished as to why AMD is not pushing a technology like this. I mean, they make CPUs, they make GPUs.
    It would make sense to create a technology that gives an advantage when components from the same brand are put together.

    Examples: accelerating ray tracing with the iGPU, or something like PhysX, or something that would accelerate VR stuff, I don't know... something.
    For now, iGPUs are shut down when a discrete GPU is installed. That's dumb. It's raw computing power left unused!

    It would be funny if Intel made less powerful CPUs than AMD and less powerful GPUs than Nvidia, but at the same time the CPU+GPU combination gave an edge; they would sell like hotcakes.
    Because, for one... none of AMD's high-end chips have iGPUs.

    With that being said... it wouldn't really make sense for anyone to buy a low-end AMD CPU and then pair it with a higher-end GPU...

    If they started putting iGPUs in every processor, then that could work.
    Reply
  • JayNor
    Intel demoed 1-, 2-, and 4-tile linear scaling with their Xe-HP GPU. I haven't seen any similar demo of Xe-LP.

    "

    One Tile: 10588 GFLOPs (10.6 TF) of FP32

    Two Tile: 21161 GFLOPs (21.2 TF) of FP32 (1.999x)

    Four Tile: 42277 GFLOPs (42.3 TF) of FP32 (3.993x)"

    https://www.anandtech.com/show/16018/intel-xe-hp-graphics-early-samples-offer-42-tflops-of-fp32-performance
    Reply
  • xtc-604
    JayNor said:
    Intel demoed 1-, 2-, and 4-tile linear scaling with their Xe-HP GPU. I haven't seen any similar demo of Xe-LP.

    "

    One Tile: 10588 GFLOPs (10.6 TF) of FP32

    Two Tile: 21161 GFLOPs (21.2 TF) of FP32 (1.999x)

    Four Tile: 42277 GFLOPs (42.3 TF) of FP32 (3.993x)"

    https://www.anandtech.com/show/16018/intel-xe-hp-graphics-early-samples-offer-42-tflops-of-fp32-performance
    As we all know, TFLOPS figures do not translate directly to actual gaming capability; TFLOPS calculations are only useful for specific compute workloads or mining.

    Part of the problem is that most games do not know how to take advantage of huge numbers of EUs/shaders.
    Reply
  • JayNor
    xtc-604 said:
    As we all know, TFLOPS figures do not translate directly to actual gaming capability; TFLOPS calculations are only useful for specific compute workloads or mining.

    Part of the problem is that most games do not know how to take advantage of huge numbers of EUs/shaders.

    Xe-HP is intended for the data center.

    I see plenty of indication that gaming at 4K can take advantage of bigger processors.

    The primary advantage demonstrated is that Intel can scale up performance with their tiled architecture. Their Xe-HP demo scaled linearly to 4 tiles.

    Intel has a 16-tile Xe-HPC GPU in development that adds 64-bit operations... on the roadmap for 2021.

    Intel reported that an Xe-HPG GPU is in the lab; it's their DG2 gaming GPU, which includes hardware ray tracing.
    Reply
  • veldrane2
    neojack said:
    I am still astonished as to why AMD is not pushing a technology like this. I mean, they make CPUs, they make GPUs.
    It would make sense to create a technology that gives an advantage when components from the same brand are put together.

    Examples: accelerating ray tracing with the iGPU, or something like PhysX, or something that would accelerate VR stuff, I don't know... something.
    For now, iGPUs are shut down when a discrete GPU is installed. That's dumb. It's raw computing power left unused!

    It would be funny if Intel made less powerful CPUs than AMD and less powerful GPUs than Nvidia, but at the same time the CPU+GPU combination gave an edge; they would sell like hotcakes.


    Yes

    Or at least divide the tasks. Have, for example, the integrated GPU run the desktop, Explorer, and all the mundane OS stuff, while the dedicated GPU would be reserved, and truly dedicated, to the actual graphics-intensive applications.
    Reply
  • xtc-604
    JayNor said:
    Xe-HP is intended for the data center.

    I see plenty of indication that gaming at 4K can take advantage of bigger processors.

    The primary advantage demonstrated is that Intel can scale up performance with their tiled architecture. Their Xe-HP demo scaled linearly to 4 tiles.

    Intel has a 16-tile Xe-HPC GPU in development that adds 64-bit operations... on the roadmap for 2021.

    Intel reported that an Xe-HPG GPU is in the lab; it's their DG2 gaming GPU, which includes hardware ray tracing.
    One can only hope that all titles/engines will scale as linearly as the selected demo did.

    Until we see the DG2 in action in the hands of a reputable reviewer, we won't actually know how it performs.

    At this point, I'm quite skeptical that, at least for the first two generations, Intel will be able to make anything actually competitive, both in terms of price and power/performance.

    Sure, Intel spends the most on R&D and has a huge engineering department, but they aren't, and never were, focused on GPUs. I would find it difficult to believe that this generation, or even the next, will catch Nvidia or AMD with their pants down.

    I surely hope so, though; extra competition is always better for us consumers.
    Reply