
Intel Xe Graphics: Release Date, Specs, Everything We Know

Intel Xe Graphics mock up
(Image credit: Intel)

Note (8/13/2020): We just dropped a ton of new information on Intel Xe Graphics and are in the process of updating this article. You can read more about Xe LP, Xe HPG, and Xe HP / HPC elsewhere for now. We will remove this note once we are finished updating this central hub.

Intel announced Xe Graphics last year, along with its intention to re-enter the discrete GPU space, which will give us the first dedicated Intel GPU since the i740 back in 1998. The competition among the best graphics cards is fierce, and Intel's current integrated graphics solutions barely register on our GPU hierarchy (UHD Graphics 630 delivers just 2.5% of the Titan RTX's performance, or about one third that of the Nvidia GT 1030).

Could Intel, purveyor of low performance integrated GPUs—"the most popular GPUs in the world"—possibly hope to compete? Yes, actually, it can. But while it appears ready to do so inside the data center, questions remain as to what we'll see for the consumer segment.

This year promises a massive shakeup in the PC graphics card market. AMD is working on Big Navi / RDNA 2, Nvidia's RTX 3080 / Ampere GPUs are coming, and along with Intel's Xe Graphics there are rumblings of a fourth player potentially entering the PC GPU space. Huawei is entering the data center GPU market, so it's not a huge leap to imagine it making consumer models at some point. But for this article, we're focusing on Intel.

(Image credit: @IntelGraphics Twitter)
Intel Xe Graphics At A Glance:

Specs: Up to 512 EUs / 4096 shader cores
Performance: We're hoping for at least RTX 2080 level
Release Date: September 2, 2020
Price: Intel will need to be competitive

Intel's Xe Graphics aspirations hit center stage in 2018, with the hiring of Raja Koduri from AMD, followed by chip architect Jim Keller and graphics marketer Chris Hook, to name just a few. Raja was the driving force behind AMD's Radeon Technologies Group that was created in November 2015, along with the Vega and Navi architectures, and clearly the hope is that he can help lead Intel's GPU division into new frontiers.

Not that Intel hasn't tried this before. Besides the i740, Larrabee and the Xeon Phi had similar goals back in 2009, though the GPU aspect never really panned out. So, third time's the charm, right?

Of course, there's a lot more to building a good GPU than just saying you want to make one, and Intel has a lot to prove. Here's everything we know about the upcoming Intel Xe Graphics, including release date, specifications, performance expectations, and pricing. 

Intel's Gen11 Graphics at a high level appears to be quite similar to Xe Graphics. (Image credit: Intel)

Intel Xe Graphics Architecture 

Intel may be a newcomer to the dedicated graphics card market, but it's by no means new to making GPUs. Current Intel Ice Lake CPUs use the Gen11 Graphics architecture, which as the name implies is the 11th generation of Intel GPUs. Incidentally, the first generation powered Intel's only previous discrete graphics card, the i740, as well as Intel's 810/815 chipsets for Socket 370 Pentium III and Celeron CPUs, circa 1998-2000. Xe Graphics is round 12 for Intel GPU architectures, in other words, with Gen5 through Gen11 having been integrated into Intel CPUs over the past decade. Also note that Gen10 Graphics never actually saw the light of day, as it was part of the aborted Cannon Lake CPU line.

While it's common for each generation of GPUs to build on the previous architecture, adding various improvements and enhancements, Intel is reportedly making major changes with Xe Graphics. Some of those changes focus on enabling the expansion of GPU cores, others address the need for dedicated VRAM, and there will also be changes focused on improving per-core performance and IPC.

Recent Intel GPUs have been divided up into a number of 'slices' and 'sub-slices,' with the sub-slices being somewhat analogous to AMD's CUs and Nvidia's SMs. Gen9/Gen9.5 Graphics has a sub-slice size of 8 EUs, and each EU has two 128-bit floating point units (FPUs). For FP32 computations, each EU can do up to eight instructions per clock, and FMA (fused multiply add) instructions count as two FP operations, giving a maximum throughput of 16 FP operations per clock.

So: EUs * 8 * 2 * clock speed = GFLOPS. In that sense, an EU counts as eight 'GPU cores' when compared with AMD and Nvidia GPUs, and a sub-slice of eight EUs is equal to an AMD CU or Nvidia SM. Intel tends to refer to these as ALUs (arithmetic logic units), which is arguably a better definition than a 'core.' It's also worth pointing out that in the supercomputer world, each SM or CU from an Nvidia or AMD GPU gets counted as a 'core,' which is why Nvidia's Selene is listed as having 272,800 cores. (That's 280 DGX A100 systems, each with two 64-core EPYC 7742 CPUs and eight Nvidia A100 GPUs: 275 * 128 CPU cores + 275 * 8 * 108 SMs, with the five additional DGX A100 systems apparently serving as hot spares.)
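The math is easy to sanity check. Here's a minimal sketch (our own, purely illustrative) of both the EU throughput formula and the supercomputer-style core counting described above:

```python
# Theoretical FP32 throughput for recent Intel graphics: each EU has two
# 4-wide FPUs, and an FMA counts as two FP operations, so an EU peaks at
# 16 FP32 ops per clock (the "EUs * 8 * 2 * clock" formula above).
def intel_gflops(eus, clock_ghz):
    return eus * 16 * clock_ghz

print(intel_gflops(24, 1.2))  # UHD Graphics 630 in a Core i9-9900K: ~460.8

# Supercomputer-style 'core' counting for Nvidia's Selene:
# 275 active DGX A100 systems, each with two 64-core EPYC 7742 CPUs
# and eight A100 GPUs (108 SMs each, one 'core' per SM).
cpu_cores = 275 * 2 * 64      # 35,200
gpu_cores = 275 * 8 * 108     # 237,600
print(cpu_cores + gpu_cores)  # 272,800, matching the Top500 listing
```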

Stepping out one level, the slices in previous Intel graphics have been classified as GT1, GT2, GT3, and GT4 (with Ice Lake / Gen11 adding a GT1.5 option). For Gen9, GT2 models have three sub-slices with eight EUs each, GT1 has two sub-slices with six EUs enabled in each, and GT3 has six sub-slices and eight EUs each. Gen11 changed to each slice having four sub-slices of eight EUs, so Ice Lake GT2 has two slices, 64 EUs, and 512 cores. For Xe Graphics, Intel will be going for significantly higher EU counts and larger GPU sizes.

Gen11 was a big jump from Gen9, and Xe Graphics could scale to eight or more slices.  (Image credit: Intel)

Current indications are that the base 'slice' size for Xe Graphics will have up to 64 EUs enabled, with different configurations having different numbers of slices and sub-slices that can be partially disabled as needed. The fundamental building block for Xe Graphics ends up being basically the same as Gen11 Graphics, at least for the first iteration. The big changes will involve adding all the logic for dedicated VRAM and scaling to much higher core counts and multi-chip support, along with any other architectural changes that have yet to be revealed. Xe Graphics will have full DX12 and Vulkan support, but beyond that little is known.

Intel has talked about three broad classifications of Xe Graphics: Xe LP for low power / low performance devices, Xe HP for high performance solutions, and Xe HPC for data center applications. Xe LP, as far as we can tell, is mostly for integrated graphics solutions, and the upcoming Tiger Lake CPUs as well as the Xe Graphics DG1 developer card appear to have 96 EUs. It will be the next iteration of Intel's processor graphics, in other words.

At the other end of the spectrum, there have been images and details regarding Xe HPC and Intel's Exascale ambitions for supercomputers, which as you might imagine means incredibly powerful and expensive chips—we don't anticipate Xe HPC GPUs showing up in consumer cards any time soon. Our current understanding, based on the most recent tweets from Raja Koduri, is that Xe HPC refers specifically to Ponte Vecchio, the 7nm successor to the first generation Xe Graphics.

The most interesting chips from our perspective will fall under the Xe HP umbrella, which spans a wider gamut. These should show up in a variety of professional and consumer graphics cards. Raja has confirmed that the above package is Xe HP hardware, and indications are that it's a dual-chip (or perhaps quad-chip) configuration. That's not going into any consumer graphics card built for gaming, unless Intel really pulls a 180 on graphics.

One thing that's still unclear is whether the first Xe Graphics solutions will support hardware ray tracing or not. Intel has said it will support ray tracing, but it hasn't specifically stated that it will happen with the initial Xe Graphics architecture. It seems more likely that ray tracing will come in the second generation of Xe Graphics, the 7nm Ponte Vecchio and related chips. Or perhaps ray tracing support will be in a limited subset of the first gen parts—high-end Xe HP but not Xe LP, for example. We don't know yet, but it would be quite surprising to have full ray tracing arrive before AMD's ray tracing solution.

These architectural updates are critical, as current Intel GPUs are at best underwhelming when it comes to gaming performance. Take UHD Graphics 630 as an example: 24 EUs (192 cores) at 1.2 GHz in a Core i9-9900K gives a theoretical 460.8 GFLOPS—or 422.4 GFLOPS in the slightly lower clocked (1.1 GHz) Core i3-9100. The AMD Ryzen 5 3400G by comparison has 11 CUs, 704 GPU cores, and a 1.4 GHz clock speed, yielding 1971.2 GFLOPS of theoretical performance. It's no surprise that AMD's Vega 11 Graphics are roughly three times faster than Intel's UHD Graphics 630—it could have been more, but both integrated graphics solutions are at least somewhat limited by the system memory bandwidth. 
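If you want to check those numbers yourself, the same peak-FLOPS arithmetic works for both vendors. A quick sketch (ours, using the clocks and core counts from above):

```python
# Theoretical FP32 GFLOPS = cores * 2 ops/clock (FMA) * clock (GHz)
def gflops(cores, clock_ghz):
    return cores * 2 * clock_ghz

uhd630_9900k = gflops(192, 1.2)  # ~460.8 GFLOPS (24 EUs * 8 cores)
uhd630_9100  = gflops(192, 1.1)  # ~422.4 GFLOPS
vega11_3400g = gflops(704, 1.4)  # ~1971.2 GFLOPS (11 CUs * 64 cores)

print(vega11_3400g / uhd630_9900k)  # ~4.28x on paper
# Measured gaming performance is closer to 3x, since both iGPUs are
# partly limited by shared system memory bandwidth.
```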

Intel's Ice Lake processors have a 64 EU GPU that gives clues on how Xe Graphics could scale.  (Image credit: Intel)

Intel Xe Graphics Die Shots and Analysis 

Besides mostly undisclosed architectural changes, there are some other interesting tidbits on Xe Graphics worth discussing. For example, we can get a pretty good idea of what to expect in terms of size and transistor counts. First, looking at Intel's Ice Lake wafer, we can see how big the 64 EU GPU is on Intel's 10nm node. Analyzing the die shot, it looks like 64 EUs with Gen11 take up roughly 40-45 mm² of die space. That's actually quite small, and it means Intel can scale to much larger GPUs.

Even if we take the higher end of that estimate (45 mm²), and then assume the Xe Graphics architecture will increase the size by nearly 50% for enhancements and architectural changes, we're still only at 65 mm² per 64 EU slice. There's a lot of logic related to display outputs, video codecs and more that doesn't need to be duplicated on a larger GPU, but let's aim high.

Doubling that to 130 mm² would give Intel a 128 EU chip, 260 mm² would be 256 EUs, and 520 mm² would yield 512 EUs. And again, actual chip sizes should be quite a bit smaller, as our initial 50% overhead estimate is quite excessive.
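Here's that back-of-the-napkin area scaling in code form; note that the 65 mm² per 64 EU slice figure is our deliberately padded estimate, not an Intel number:

```python
# Gen11 die-shot estimate: 64 EUs occupy roughly 40-45 mm² on Intel 10nm.
# Pad the high end by ~45% for Xe architectural changes (an overestimate).
XE_MM2_PER_64EU_SLICE = 65.0

for eus in (128, 256, 512):
    area = (eus // 64) * XE_MM2_PER_64EU_SLICE
    print(f"{eus} EUs: ~{area:.0f} mm²")
# 128 EUs: ~130 mm², 256 EUs: ~260 mm², 512 EUs: ~520 mm²
# Actual chips should be smaller: display outputs, media engines, and
# similar logic don't need to be duplicated for every slice.
```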

Making a GPU that's around 500 mm² might seem a huge jump for a company best known for consumer CPUs of less than 200 mm² (the Core i9-9900K is right around 180 mm², for reference). However, Intel's HEDT and Xeon processors are substantially larger. The 18-core HCC Cascade Lake-X CPUs, for example, have a die size of around 470 mm², and the 28-core XCC chips are close to 690 mm². Obviously those cost more to make, but GPUs from AMD and Nvidia routinely hit 500 mm² or more.

Given the die sizes we're talking about, it would make sense for Intel to go with custom silicon for various products. There's a good indication the top Xe HP chip will have 512 EUs, but we'll likely see a different Xe Graphics solution with 128 EUs in a dedicated card (because anything less would be pointless), and maybe there's a 256 EU chip staking out the middle ground as well. All three could be lumped under the Xe HP umbrella. If Intel goes the custom silicon route, the 128 EU GPU could be about 150 mm², 256 EUs could fit into about 250 mm², and a large 512 EU chip might only need 400-450 mm². Such sizes are absolutely within reach for GPUs.

512 EUs in a single chip would mean the equivalent of 4096 AMD/Nvidia 'GPU cores,' which would be pretty impressive. AMD's RX 5700 XT by comparison has 2560 GPU cores, while Nvidia's RTX 2080 Ti sports 4352 GPU cores—not that the AMD, Intel, and Nvidia cores are all equivalent, but it's at least a baseline measure of potential performance. Theoretical compute for a 512 EU chip could actually surpass the current kings of the desktop graphics card sector. Does that sound like fantasy land? Check out this Xe Graphics wafer shot Raja Koduri posted on Twitter in February 2020.

We've analyzed that photo, which presumably shows the first generation 10nm+ Xe HP GPU. Frankly, the die appears to be massive! We've seen other analyses, but our own estimate is that the GPU die on that wafer approaches the maximum reticle size of around 800 mm², give or take. Or, given the blurriness, perhaps the chips are half or even one quarter that size. Either way, that coincides with what Intel has publicly stated regarding its second generation Ponte Vecchio architecture, which will move to a 7nm node.

Ponte Vecchio will include Foveros, Intel's die stacking technology, and Intel mentioned in its 2019 Investor Meeting that with the current PC-centric approach, product size is "restricted by reticle." In other words, the maximum size of a chip is a hard limit based on the fabrication machinery. This applies to all microprocessors, and the limit is right around 850 mm². Intel's future plans move to a data-centric model that will allow further scaling through die stacking, but that doesn't apply to the current 10nm+ Xe HP GPU.

More recently, the @IntelGraphics account posted a tweet showing the chip package:

That's almost certainly a dual- or quad-die package, with four HBM2e chips under the IHS as well. The total package size looks to be similar to the current Xeon and EPYC CPUs, measuring roughly 80 x 52 mm (give or take). Raja Koduri also tweeted that the latest chips he's working on (i.e., Xe HP) have "tens of billions of transistors and tens of thousands of ops/clk." That second part is important, because it implies at least 30K ops per clock, and potentially more.

So, reading between the lines, some Xe HP data center solutions will use the above GPU die and package, which appears to be quite large. Again, that's not going into a consumer product, but given what we know of Intel's Gen11 Graphics, such a GPU could easily house 2048 EUs total across all four (or two?) chips. Intel has talked about future GPUs moving to "thousands of EUs," and it seems to be on track. Toss in some HBM2e memory, add INT8 and FP64 support, and data centers should come running.
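Raja's "tens of thousands of ops/clk" comment lines up with that speculation. A quick sketch, where the 2048 EU total is our assumption rather than a confirmed spec:

```python
# Hypothetical Xe HP package: 2048 EUs across all dies (speculative).
eus = 2048
ops_per_eu_per_clock = 16  # two 4-wide FPUs, FMA counted as 2 ops

print(eus * ops_per_eu_per_clock)  # 32,768 FP32 ops per clock
# Comfortably in the "tens of thousands of ops/clk" range from the tweet.
```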

But the consumer parts don't need to be quite so extreme. An accidental Intel graphics driver posting from June 2019 gave a clear indication of what to expect. Intel apparently has 128 EU, 256 EU, and 512 EU Xe HP graphics cards in the works, in addition to Xe LP models that most likely will be limited to 64/96 EUs. That also coincides with statements Intel has made regarding Xe LP scaling from 5W to 20W designs—there's no need for a dedicated graphics card with a 20W TDP GPU. That brings us to the actual Xe Graphics specifications. 

Intel's Xe Graphics DG1 dev board, shown at CES 2020.  (Image credit: Intel)

Potential Intel Xe Graphics Specifications 

There have been various leaks and rumors about Intel Xe Graphics, each becoming slightly more credible. Intel also demonstrated the Xe Graphics DG1 developer board at CES 2020. While Intel insisted the board was not a final design for consumers, we wouldn't be surprised to see something similar shipping to consumers in the future. However, Xe Graphics DG1 supposedly uses Xe LP silicon, which means that it's a low-power dedicated GPU for test purposes only right now.

Intel revealed that there are three brands of Xe Graphics, scaling from ultra mobile through gaming desktops, then on to workstations and data center applications. Given what we've said above, Intel plans to release a suite of Xe Graphics cards, presumably using Xe HP silicon, and here are the configurations we expect to see:

Intel Xe Graphics Potential Specifications
GPU | Xe High-End | Xe Mid-Range | Xe Budget
Process (nm) | 10+ | 10+ | 10+
Transistors (billion) | ~3.0x | ~1.67x | x
Die size (mm²) | 3y | 1.67y | y
EUs | Up to 512 | Up to 256 | Up to 128
GPU cores (ALUs) | Up to 4096 | Up to 2048 | Up to 1024
Clock (GHz) | 1.5-2.0? | 1.5-2.0? | 1.5-2.0?
VRAM Speed (Gbps) | 14? | 14? | 14?
VRAM (GB) | 8-16 GDDR6? | 6 GDDR6? | 4 GDDR6?
Bus width (bits) | 256? | 192? | 128?
ROPs | 64? | 48? | 32?
TMUs | 256? | 128? | 64?
TFLOPS | 12.3-16.4? | 6.1-8.2? | 3.1-4.1?
Bandwidth (GB/s) | 448? | 336? | 224?
TBP (watts) | <300? | <150? | <75?
Launch Date | 2020 | 2020 | 2020
Launch Price | $599? | $299? | $149?
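The TFLOPS and bandwidth rows follow directly from the other (heavily question-marked) rows in the table. Here's the arithmetic as a short sketch, using our speculative clocks, core counts, and memory configurations:

```python
# Derive theoretical throughput and bandwidth from the speculative specs.
def tflops(alus, clock_ghz):
    return alus * 2 * clock_ghz / 1000  # FMA = 2 ops per ALU per clock

def bandwidth_gb_s(bus_bits, vram_gbps):
    return bus_bits / 8 * vram_gbps     # bits -> bytes per pin, * data rate

for name, alus, bus in (("High-End", 4096, 256),
                        ("Mid-Range", 2048, 192),
                        ("Budget", 1024, 128)):
    print(f"{name}: {tflops(alus, 1.5):.1f}-{tflops(alus, 2.0):.1f} TFLOPS, "
          f"{bandwidth_gb_s(bus, 14):.0f} GB/s")
# High-End: 12.3-16.4 TFLOPS, 448 GB/s
# Mid-Range: 6.1-8.2 TFLOPS, 336 GB/s
# Budget: 3.1-4.1 TFLOPS, 224 GB/s
```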

Based on the chip shots and other information, we expect the Xe HP GPUs to be the fundamental building block of the consumer Xe Graphics cards. Intel's EMIB (Embedded Multi-Die Interconnect Bridge) could make an appearance, allowing for multi-chip GPU configurations but without the complexity of AMD CrossFire or Nvidia SLI. It's like AMD's chiplet approach on the Ryzen CPUs, except applied to graphics instead.

EMIB would effectively allow two or four chips to behave as one, more or less, sharing rendering duties and memory. It's a bit ironic, as Intel initially made fun of AMD's 'gluing' chiplets together with Ryzen, and we can see how that turned out: in looking at AMD vs Intel CPUs, Ryzen has quickly scaled to much higher core counts and performance that Intel currently can't match. But Intel is smart enough to recognize the advantages of such an approach, and applying it to GPUs could make a lot of sense. However, it's far more likely that EMIB will be reserved for the data center variants.

That leaves us with the three Xe HP configurations. Whether Intel will call these 1 slice, 2 slice, and 4 slice or something else isn't known, but a base building block of 128 EUs would mean the equivalent of 1024 GPU cores (ALUs), and as noted above the underlying architecture of the cores could be improved in various as-yet-undisclosed ways. Depending on what Intel does, it could end up with GPU cores that are closer to parity with AMD and Nvidia GPU cores—that's sort of the best-case scenario, and what we're hoping happens.

Adding more GPU cores, slices, EUs or whatever you want to call them will help Intel a lot. 128 EUs / 1024 cores doesn't exactly set our hearts racing, considering Nvidia already offers GPUs with up to 4608 cores (Titan RTX), AMD has offered GPUs with up to 4096 cores (RX Vega 64), and both AMD and Nvidia are likely going to go even higher with the upcoming Big Navi and Ampere architectures. By the end of the year, we could see AMD and Nvidia GPUs with anywhere from 5120 to 8192 GPU cores.

Intel doesn't appear to be shooting quite that high for the consumer space, but we expect to see Xe Graphics models sporting anywhere from 96 EUs up to 512 EUs, and everything in between. Combine that with clock speeds of 1.5-2.0 GHz, which seems reasonable considering previous designs plus the move to 10nm+, and Intel could be pushing 12-16 TFLOPS of computational power on a 512 EU chip. Add in 8GB of GDDR6 memory (or maybe double that to 16GB) and Intel's highest performance Xe Graphics card could be a viable competitor to AMD and Nvidia GPUs. That's the theory at least, though we still don't know if ray tracing support will happen.

Drop down to a smaller GPU and we get mid-range performance. Half the EUs and GPU cores, half the raw compute, but drop to 6GB VRAM and keep six memory channels—a mid-range tier GPU in 2020 without at least 6GB VRAM simply isn't going to fly. Theoretical performance of 6-8 TFLOPS would put this middle class Xe Graphics solution in the same ballpark as Nvidia's RTX 2060 and AMD's RX 5600 XT, though of course drivers and other factors still need to be tested.

And finally, at the bottom of the heap, we have the budget Xe Graphics configuration. This could have a single GPU chiplet or the smallest Xe Graphics dedicated GPU, 4GB or 8GB of VRAM, and roughly half the performance of the mid-range model. With 128 EUs, that's 1024 cores and a potential 3-4 TFLOPS of compute, depending on clock speeds. There would likely be a higher and lower tier model, one with 96 EUs and no PCIe power connection required, and a second higher performance budget card with 128 EUs and a 6-pin power connector.

It's worth noting that Intel did say at one point during its CES presentation that Xe Graphics would be four times as fast as Gen9 Graphics. The above configurations would certainly hit that mark, and even exceed it. However, we don't know if Intel was saying four times simply from the architecture, i.e. when equalizing clock speeds and EU counts, or four times as fast overall. Xe LP integrated solutions already appear to be targeting the 4x increase relative to an integrated GT2 UHD Graphics 630 configuration. A 128 EU dedicated Xe Graphics card should have no trouble surpassing everything Intel has previously offered.
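The architectural reading of that 4x claim is easy to verify: a 96 EU Xe LP has exactly four times the EUs of the 24 EU UHD Graphics 630, so at matched clocks the theoretical throughput quadruples. A one-liner to confirm:

```python
# Xe LP (96 EUs) vs. Gen9.5 GT2 / UHD 630 (24 EUs) at the same clock.
def intel_gflops(eus, clock_ghz):
    return eus * 16 * clock_ghz  # 16 FP32 ops per EU per clock

print(intel_gflops(96, 1.2) / intel_gflops(24, 1.2))  # 4.0
```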

This concept rendering of Intel's Xe Graphics is probably a reasonable guess at what a larger card could look like.  (Image credit: Intel)

Intel Xe Graphics Card Models 

What will Intel call the dedicated Xe Graphics cards? It showed off the DG1 SDV (Discrete Graphics 1 Software Development Vehicle) at CES 2020, and while it repeatedly stated that the design wasn't representative of the final product, it was a nice looking card and doesn't really need any major changes in our view. Of course, it might also be too nice, particularly given the DG1 SDV would be the 'budget' version—there's no PCIe power connector. The metal shroud is certainly an extravagance that isn't needed for a sub-75W card.

Regardless, we anticipate multiple consumer models will be released, potentially using two or more variants of each chip. There should be a budget card without a PCIe power connector, probably with 96 EUs and a 50W TDP (give or take). A step up from that will be the full single chip variant with 128 EUs, a 75W TDP and a 6-pin PCIe power connector—just to be safe, but maybe also to enable a bit of overclocking. It could still easily work with the cooling shown on the test vehicle card.

Then there could be mid-range cards with larger 256 EU chips—one partially enabled (i.e., 192 EUs) and one fully enabled. Those could have <150W TDP for the former, and perhaps 175-200W TDP for the latter, depending on clock speeds.

Finally, the top consumer cards would have 512 EUs on a chip, but with some of those disabled to improve yields. Give the chip higher boost clocks and up to a 300W TDP with dual PCIe 8-pin power connectors. A step down model could have slightly lower clocks and 400 EUs (give or take) with a 225-250W TDP.

What about the naming of the various models? Intel could play it straight with names like Intel Xe Graphics 96/128/192/256/384/512, though that might be too easy. Intel might also go with DG2 branding—DG1 would be reserved for the test platform, and it would let the original i740 keep the title of Intel's first discrete graphics solution. Or maybe Intel will go with something similar to its Core branding: Xe9, Xe7, Xe5, and Xe3 families, with varying suffixes based on clock speeds. The latter seems more probable, though likely with a completely new brand reveal in the coming days.

There's also a question about whether Intel will be the sole provider of Xe Graphics cards, or if it will partner with other companies for third-party cards. For the initial launch, or at least until performance is more of a known quantity, we expect Intel will be the only provider of cards. That's basically what AMD and Nvidia do at launch as well with their reference designs, and it helps set the baseline expectations for performance, power and pricing. If Xe Graphics proves to be capable and desirable, third-party boards from the various motherboard and AIB (add-in board) companies could come later.

Intel does have a history of keeping things in-house as much as possible—it makes CPUs and chipsets, SSDs, Xeon Phi cards, NUCs and more. However, graphics cards have a lot of similarities to motherboards, so it's not hard to imagine a future where Intel mostly focuses on providing the GPUs, leaving the graphics card production and assembly to its partners. Well, except for Xe HPC, which will almost certainly be an in-house only product (i.e., like Xeon Phi).

Intel's Tiger Lake processors will feature Xe Graphics and are set to launch on September 2, according to the latest rumors.  (Image credit: Intel)

Intel Xe Graphics Release Date 

Intel has repeatedly targeted a 2020 release, and all indications are that it's still on track. Coronavirus delays seem to have pushed everything back (e.g., we initially expected Comet Lake to launch in March, and it didn't arrive until May 20). Given that Intel is primarily responsible for manufacturing its CPUs and GPUs, a late summer or early fall 2020 launch is likely.

Current indications are that Intel will be revealing Tiger Lake and perhaps additional hardware on September 2, so at least the integrated version of Xe Graphics will arrive by then. We expect the upcoming Tiger Lake line of CPUs to target laptops and mobile devices, just like the current Ice Lake lineup. But will there be desktop Tiger Lake chips any time soon, with more than 4-core/8-thread CPU configurations? That's far less clear. We should also see Rocket Lake CPUs with up to 8-core CPU configurations and perhaps 64 or 96 EU Xe Graphics solutions by the end of 2020, or early in 2021.

What about dedicated Xe Graphics solutions? We still want to see those, but Intel isn't saying much on the subject. It has shown off engineering prototypes and said it plans to make dedicated graphics cards, or at least GPU accelerators. Much will depend on performance; our thinking is that Intel isn't likely to release a dedicated GPU for consumers if it can't compete with AMD and Nvidia. A GPU targeting machine learning and data center applications to replace the current Xeon Phi lineup might be the best we can hope for in the near term, in light of Intel's ongoing 10nm and now 7nm struggles.

How Much Will Intel Xe Graphics Cost? 

This is perhaps the most difficult question to answer. Intel traditionally doesn't like dealing with low margin parts. It entered, left, and re-entered the SSD storage market multiple times over the past decade due to profitability concerns. We also know that Intel traditionally wants to sell even its lowest performance Core i3 processors for at least $125, with Core i5 usually being priced closer to $200, Core i7 at $300 and up, and Core i9 at $500 or more. We mention those as a point of reference, and note that building graphics cards inherently means higher base costs compared to CPUs.

With a CPU all you get is a small package and maybe a cooler. A graphics card needs the GPU, VRAM for the GPU, a PCB to hold the GPU and VRAM and other components, all the video ports, power connectors, and a good cooling solution. That means higher costs and lower margins. However, unlike the CPU realm, Intel is completely unproven in the GPU world. Actually, that's not true: Intel has repeatedly proven over the past decade that it makes inferior GPUs and bundles them into its CPUs.

Put simply, there's no way Intel can charge a price premium with consumer Xe Graphics (data center Xe HP / HPC is a different matter). It needs to clearly beat Nvidia on performance as well as pricing—and matching Nvidia on features would help as well. AMD has been coming in second place for ages, and we can see how that's turned out (Nvidia runs about 80% of gaming PCs and over 90% of professional GPU solutions). Intel marketing isn't going to make up for a performance deficit or an inferior product in the GPU space.

Realistically, then, an Intel budget Xe Graphics solution needs to be priced around $125-$150 and be able to clearly match or exceed the performance of the GTX 1650 Super and RX 5500 XT. A $200-$250 model needs to at least match if not beat the GTX 1660 Super and hopefully come close to RX 5600 XT performance, while high-end models priced at $300 and up will need to take on Nvidia's RTX GPUs. Except, Xe Graphics will also have to contend with AMD's RDNA 2 / Big Navi as well as Nvidia's Ampere / RTX 30-series. We fully expect both of those to deliver better performance than the current RTX 20-series, and Intel will need to keep up.

Right now, that's a massive hurdle to clear. We expect Nvidia Ampere graphics cards to arrive in the next month or so, and based on the Nvidia A100 details, they're going to be beastly. A high-end 512 EU Xe Graphics solution probably won't even match RTX 2080 Super, never mind the upcoming RTX 3080.

Intel's steampunk Oblivion concept graphics card, coming in 2035. Or 1865.  (Image credit: Intel)

Final Thoughts on Intel Xe Graphics 

The bottom line is that Intel has its work cut out for it. It may be the 800-pound gorilla of the CPU world, but even there Intel has stumbled and faltered over the past several years. AMD's Ryzen has gained ground, closed the gap, and is now ahead of Intel in most metrics, and Intel's manufacturing woes have recently caused stock prices to tumble about 20%. It really needs some good news, and soon.

As the graphics underdog, Intel needs to come out with aggressive performance and pricing, and then iterate and improve at a rapid pace. And please don't talk about how Intel sells more GPUs than AMD and Nvidia. Technically, that's true, but only if you count incredibly slow integrated graphics solutions that are at best sufficient for light gaming and office work. Then again, a huge chunk of PCs and laptops are only used for office work, which is why Intel has repeatedly stuck with weak GPU performance.

If Intel quadruples the performance of its current Gen9.5 Graphics, meaning UHD Graphics 630, that will still fall well short of the GTX 1650 Super and RX 5500 XT. Not only does Intel need to deliver better performance at viable prices, but it needs to prove that it can do more in the way of graphics drivers and regular releases. A 'game ready' Intel driver that basically recommends you set everything to minimum quality and run at 720p, and then hope you can still break 30 fps, is not a viable solution. Intel needs drivers and GPUs that keep up with AMD and Nvidia if it wants to become a viable graphics card provider.

Ideally, competition from Intel should help the graphics industry. A viable third player—maybe even a fourth if Huawei starts doing consumer GPUs—means more choice, and hopefully better prices. But that's all contingent on Intel actually delivering the goods. We'll find out in the coming months if Intel can finally join the dedicated GPU market in a meaningful way, or if it needs to head back to the drawing board yet again. Stay tuned.

  • waltc3
    Reminds me of the kind of nonsense people wrote about Larrabee years ago--although hyped for years by people who knew nothing about it at all, Intel cancelled it before the first product hit the market. Intel has a long, long way to go before it will catch AMD/nVidia on the discrete GPU side of the street.
  • 2Be_or_Not2Be
    I had an i740 - that model was called "Starfighter", if I recall correctly.

    If they had stuck with their graphics development, they could have been on or even surpassing the levels of Nvidia/AMD today, given all of the resources they could have devoted to it.
  • Deicidium369
    :Intel has repeatedly proven over the past decade that it makes inferior GPUs and bundles them into its CPUs : Yet those Inferior IGP graphics are one of the reasons AMD cannot get a hold in the OEMs for business customers - as "inferior" as they are - they mean that a video card is not needed in the BoM and no need to be supported by IT departments.
  • mdd1963
    admin said:
    Intel's Xe Graphics will join the dedicated graphics card market this year, promising a new architecture and vastly improved features and performance for Intel GPUs.

    Intel Xe Graphics: Release Date, Specs, Everything We Know : Read more

    Ahhh, the good 'ole i740....; I bought one, installed it in my K6-2/350 rig and proceeded to reformat, reinstall OS/ chipset /i740 drivers, install everything fresh 4 times trying to get it to work, but Quake 2 consistently played with what looked like a moving broken glass wireframe superimposed on top of the well rendered game. Forced to return it and go to the 2 card solution, with a Voodoo2!
  • mdd1963
    They (Intel) don't need to beat or even match AMD or Nvidia at the mid/upper end of cards, even a good '1650 Super' and/or 5600XT equivalent card for a lesser price would sell like hotcakes...
  • JarredWaltonGPU
    waltc3 said:
    Reminds me of the kind of nonsense people wrote about Larrabee years ago--although hyped for years by people who knew nothing about it at all, Intel cancelled it before the first product hit the market. Intel has a long, long way to go before it will catch AMD/nVidia on the discrete GPU side of the street.
    Larrabee actually had a ton of potential and was mostly killed by internal politics at Intel -- the CPU guys didn't want it to take over any of their turf, and Intel viewed gaming as something for kids. Still, Larrabee's descendants would live on in the form of the Xeon Phi -- not the best at everything, but certainly capable in the right workloads.

    Xe Graphics is a completely different beast, however. Intel isn't trying to do software GPU running on x86 cores this time. It's doing a proper scale up of its existing GPU, with hopefully better driver support. Yes, Intel UHD Graphics 630 is weak compared to any modern dedicated GPU. But it's also only a 460 GFLOPS architecture, built with 24 EUs. Scaling that up isn't trivial, but Gen11 already did a lot of the legwork. Instead of one slice, eight sub-slices, and 64 EUs, Xe Graphics will scale up to at least eight slices, 64 sub-slices, and 512 EUs. Get properly functioning drivers behind that and it's going to be pretty impressive.

    Question is, will Intel sell such a GPU at a reasonable price? 512 EUs for $500 would be very competitive I think. 512 EUs for $1000? Not so much.
  • digitalgriffin
    Deicidium369 said:
    :Intel has repeatedly proven over the past decade that it makes inferior GPUs and bundles them into its CPUs : Yet those Inferior IGP graphics are one of the reasons AMD cannot get a hold in the OEMs for business customers - as "inferior" as they are - they mean that a video card is not needed in the BoM and no need to be supported by IT departments.

    Business leases favor laptops these days. And AMD, thanks to the Bulldozer through Excavator era, just didn't have anything competitive in this space power efficiency/heat wise.

    Now look at the design wins of Intel's latest versus AMD's latest laptop CPUs and you see a big difference. A smaller power envelope and greater performance at a similar price point is causing a lot of manufacturers to look twice at AMD. It's just a matter of convincing people at the top to change their ways. They are more concerned with running a business than the latest tech advancements. Their life is mired in the day-to-day operations and what they know. They no longer look at the latest and greatest, but rather at what they know works, even if it's not the best choice.
  • Zizo007
    Deicidium369 said:
    :Intel has repeatedly proven over the past decade that it makes inferior GPUs and bundles them into its CPUs : Yet those Inferior IGP graphics are one of the reasons AMD cannot get a hold in the OEMs for business customers - as "inferior" as they are - they mean that a video card is not needed in the BoM and no need to be supported by IT departments.
    Do you even read tech reviews, or is it your first time? AMD has had iGPUs for a very long time. AMD's integrated GPUs are superior to Intel's. It has always been like that.

    The $599 Xe GPU seems decent for the price if Intel can provide it with fast, bug-free, and widely compatible software.
  • spongiemaster
    Zizo007 said:
    Do you even read tech reviews, or is it your first time? AMD has had iGPUs for a very long time. AMD's integrated GPUs are superior to Intel's. It has always been like that.

    Sure, which has always landed it in that wonderful no man's land of unnecessary power for business desktops while still not fast enough for any real gaming. A feature with no market.
  • Zizo007
    spongiemaster said:
    Sure, which has always landed it in that wonderful no man's land of unnecessary power for business desktops while still not fast enough for any real gaming. A feature with no market.
    Who told you it doesn't game loll? Even the Intel can play games like CSS. AMD is just better at gaming also. It can play any game at 720p. It can play not heavy games at 1080p.
    There's many ppl here on the forum with APUs.

    Both ways, he's wrong by saying Intel is better because AMD doesn't have iGPU lmao