Intel revealed the first details of its DG1 discrete graphics here at CES 2020, but chose to peel back the covers slowly on its fledgling project, sprinkling a few details into a pre-brief and then holding an onstage demo of a discrete laptop graphics chip inside a Tiger Lake-powered notebook. Today the company went further, revealing a new DG1 development kit that packages its DG1 mobile graphics in a standard graphics card form factor. Intel seeds the card to independent software vendors (ISVs) to help them optimize for its new graphics architecture, and it also demoed the card running Warframe at 1080p.
Intel says the new Gen12 architecture is a major step forward, delivering four times the performance of the Gen9.5 graphics architecture inside its current processors, and given that this is a scalable design, that could prove impressive when ported over to larger chips.
Intel is adamant that this current card is simply a vehicle to speed testing of its mobile GPU, but the Xe architecture will eventually arrive in some form as a discrete graphics card. It certainly looks an awful lot like an actual discrete graphics card, and even comes with addressable RGB lighting that Intel had set to its favorite shade of blue. When asked why Intel would go to this much trouble to create such an elaborate design for a development card, the company said it wants to show off its first discrete graphics unit in style and get feedback from the community.
While Intel's first reveal of the new discrete mobile graphics unit was disappointing for some, particularly because it didn't come in the form of a graphics card for desktop PCs, it foreshadows Intel's imminent assault on the stomping grounds of AMD and Nvidia. It takes time to build out a full stack, and these are just the first steps on Intel's long journey to deliver a new end-to-end family of graphics solutions that spans from integrated graphics up to gaming cards and the data center.
From Intel's prior disclosures, we know that the Xe architecture will span across three segments: Xe-LP for low-power chips like the DG1 that will land in ultra-mobile products, Xe-HP for high performance variants, and Xe-HPC for chips destined for the high performance computing market.
The development card, which houses Intel's DG1 graphics chip, is surprisingly heavy given that this is an Xe-LP model and likely consumes less than 25W, though Intel won't confirm any hard power details. The card clearly doesn't consume more than 75W, as evidenced by the lack of a PCIe auxiliary power connector (such as a six- or eight-pin), meaning it pulls all of its power from the PCIe slot. The PCIe interface is x16-capable, but we aren't sure if it uses all 16 lanes, or whether it runs at PCIe 3.0 or 4.0 speeds.
Here we see one HDMI port and three DisplayPort outputs, which implies the chip can drive at least four displays. The hefty outer shroud, which appears to be metallic, covers a thinner heatsink inside that runs the length of the card and is cooled by the fan.
Intel didn't show the actual DG1 silicon, or provide any technical details, like architectural design points, die size, or transistor counts. Intel also didn't verify the process node, though it is largely thought to be 10nm+. We also don't know what type of memory, or how much, feeds the graphics processor.
Here we see the metal backplate that covers the rear of the card. Intel emblazoned the backplate with Xe branding and "Software Development Vehicle."
We also see exhaust ports on the rear I/O bracket, but air can also flow through the front edge of the card via its slotted grill.
Intel currently samples the graphics card to ISVs via its developer kit, which is pretty much a standard Coffee Lake-based computer that comes in a small chassis with glass panels on either side. While Intel demoed DG1 in a Tiger Lake system at its press conference, the company said the GPU works with other processors as well. With the graphics card under power, we can see a row of LEDs that line the rear of the shroud near the I/O plate, and we're told these LEDs are addressable. Naturally, Intel had them set to a shade of blue.
Intel demoed the card running Warframe at 1080p with unspecified quality settings. Warframe is a third-person shooter that isn't very graphically demanding: the game merely requires a "DirectX 10-capable graphics card." Intel cautions that this is early silicon, and the game did appear to run slowly, with visible hitching and tearing during both the demo and our own brief hands-on. At times, the game appeared to use some type of reduced-rate shading technique, such as variable rate shading (VRS), but that's hard to confirm given our limited time and the single title.
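VRS itself is simple to illustrate. The sketch below is a hypothetical, purely conceptual model of coarse-rate shading (it is not Intel's implementation): the pixel shader runs once per tile instead of once per pixel, and the result is broadcast across the tile, cutting shading work roughly by the square of the rate.

```python
# Illustrative sketch of variable rate shading: shade once per coarse
# tile and broadcast the result, instead of shading every pixel.
# All names and the toy "shader" below are made up for illustration.

def shade(x, y):
    # Stand-in for an expensive per-pixel shader.
    return (x * 31 + y * 17) % 256

def render_full(width, height):
    # Baseline: one shader invocation per pixel (width * height total).
    return [[shade(x, y) for x in range(width)] for y in range(height)]

def render_vrs(width, height, rate=2):
    # Coarse shading: one invocation per rate x rate tile, broadcast
    # to every pixel in the tile.
    image = [[0] * width for _ in range(height)]
    invocations = 0
    for ty in range(0, height, rate):
        for tx in range(0, width, rate):
            color = shade(tx, ty)
            invocations += 1
            for y in range(ty, min(ty + rate, height)):
                for x in range(tx, min(tx + rate, width)):
                    image[y][x] = color
    return image, invocations

full = render_full(8, 8)               # 64 shader invocations
coarse, n = render_vrs(8, 8, rate=2)   # 16 invocations, ~4x less work
print(n)  # → 16
```

Real implementations apply coarse rates selectively, keeping full-rate shading where detail matters most, which is why the effect is subtle when done well.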
The DG1 is rumored to come with 96 EUs, and given the surfacing of drivers designed to enable multiple graphics units to work in tandem, it's possible the card could work in a multi-GPU arrangement with the integrated graphics in Intel's processors. That would make sense because, as we learned from the Ponte Vecchio architecture announcements, Xe's inherent design principles appear to be based on a multi-chiplet arrangement. Intel hasn't verified either of those speculations, but it did mention that the card supports dynamic tuning to provide balanced power delivery. This technology sounds similar to AMD's new SmartShift technology, unveiled here at CES, which allows mobile platforms to shift power to the component under heaviest use, be it the CPU or GPU, to boost overall performance. Intel already has the underpinnings of this technology to manage processor thermals in low-power devices, but now it will extend that framework to the GPU, too.
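The general idea behind this kind of power shifting is easy to sketch: the CPU and GPU share one package power budget, and the controller reallocates headroom toward whichever component is busier. The following is a hypothetical illustration; the algorithm and all numbers are ours, not Intel's or AMD's actual implementation.

```python
# Hypothetical sketch of dynamic power balancing between a CPU and GPU
# that share one power budget. Floors and proportions are illustrative.

def balance_power(total_budget_w, cpu_util, gpu_util, floor_w=5.0):
    """Split a shared power budget in proportion to demand,
    guaranteeing each component a minimum floor."""
    demand = cpu_util + gpu_util
    if demand == 0:
        # Idle: split the budget evenly.
        return total_budget_w / 2, total_budget_w / 2
    shiftable = total_budget_w - 2 * floor_w
    cpu_w = floor_w + shiftable * (cpu_util / demand)
    gpu_w = floor_w + shiftable * (gpu_util / demand)
    return cpu_w, gpu_w

# A GPU-bound game: most of a 25W budget flows to the GPU.
cpu_w, gpu_w = balance_power(25.0, cpu_util=0.2, gpu_util=0.8)
print(round(cpu_w, 1), round(gpu_w, 1))  # → 8.0 17.0
```

The real mechanism sits in firmware and drivers and reacts to thermals as well as utilization, but the budget-arbitration principle is the same.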
Intel currently has a graphics install base of over a billion users, which comes courtesy of the integrated graphics chips in its CPUs that make Intel the world's largest GPU manufacturer. The company also has an IP war chest (at one point, it owned more graphics patents than the other vendors combined) to do battle. Intel pointed out that DG1 has "fabulous" media display engines, supports the latest codecs, and has "superfast" QuickSync capabilities. Intel also previously disclosed that the graphics engine supports INT8 for faster AI processing. That capability will enable exciting new levels of performance for applications, like creative and productivity apps, designed to use INT8.
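The appeal of INT8 is that 32-bit floating point values are mapped to 8-bit integers via a scale factor, quartering memory traffic and letting the hardware process more values per cycle, in exchange for a small rounding error. A minimal sketch of symmetric quantization, the common scheme for this (purely conceptual, not Intel's API):

```python
# Illustrative sketch of symmetric INT8 quantization: floats are mapped
# to integers in [-127, 127] with a single scale factor. Conceptual
# only; not Intel's hardware interface.

def quantize_int8(values):
    """Map floats to [-127, 127] using the largest magnitude as range."""
    max_abs = max(abs(v) for v in values) or 1.0
    scale = max_abs / 127.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    # Recover approximate floats; error per value is bounded by scale/2.
    return [x * scale for x in q]

weights = [0.5, -1.27, 0.02, 1.0]
q, scale = quantize_int8(weights)   # q fits in one byte per value
approx = dequantize(q, scale)
print(q)  # → [50, -127, 2, 100]
```

For inference workloads, that small per-value error usually has a negligible effect on accuracy, which is why INT8 support matters for AI-accelerated apps.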
However, the company faces an uphill battle in getting developers to optimize for its new architecture, particularly because the software community is notoriously slow to optimize for new architectures. To be successful, Intel has to lay the groundwork and provide both hardware and software tools to developers to speed the process. Remember, Nvidia, the dominant graphics manufacturer, has more software engineers than hardware engineers, so the importance of having mature drivers, software, and development tools can't be overstated.
Intel has also taken initial steps to ramp up its graphics drivers by moving to a faster cadence of day-zero driver updates for its integrated graphics, and deploying its Graphics Command Center, which is a new user interface for Intel's control panel that allows you to alter several key aspects of your integrated graphics. We also know from our recent exclusive Intel overclocking lab tour that the company is already integrating the Xe graphics cards into its one-click overclocking software and other utilities. All of these efforts are designed to lay the groundwork for the eventual arrival of the more powerful models.
Intel is also engaging the enthusiast community through its far-ranging "Join the Odyssey" outreach program, which keeps enthusiasts up to date on the latest developments through an Intel Gaming Access newsletter, outreach events, and even a beta program. The information-sharing goes both ways, though: the company also plans to use the program to gather feedback from gamers to help guide its design decisions, and it recently added support for retro scaling as a direct result of that feedback.
So, it's actually happening. Even though Intel announced its new Xe graphics project seemingly eons ago, and even though we've seen plenty of evidence the silicon is real, the vestiges of Intel's failed Larrabee project hang thick in the air, serving as a reminder that even the world's largest semiconductor producer can fail in its attempt to penetrate a market that hasn't seen meaningful competition outside the AMD-Nvidia duopoly in 20 years.
But now it's real. The arrival of actual working silicon, even though we can't see the chip and have no information on the key specs, lends credibility to Intel's efforts and shows that it is serious about fleshing out the full stack of Xe graphics solutions for the gaming market. To help speed the uptake, Intel will also help ISVs with its own team of ISV engineers and developer teams.
The company has already gotten the stamp of approval from the Department of Energy, the world's leading supercomputing organization, for its Ponte Vecchio graphics cards that will debut in the Aurora supercomputer. That design is far more complex than what we'd see in the consumer space, but it undoubtedly pulls in many of the same architectural components we'll see on Intel's future graphics cards.
And that sharing can go both ways: True to Intel's new six-pillar strategy, Intel's Ponte Vecchio uses all of the company's next-gen tech, like intricate interconnects and 3D chip stacking via Foveros, along with innovative new features like Rambo Cache. That gives the company an incredibly vast technology canvas to paint a new picture of the desktop GPU, but as always, it all boils down to process technology and software support.
As we know, software is often the biggest challenge, but Intel has certainly invested heavily to ensure its new graphics initiative has experience at the helm, recruiting top talent such as Raja Koduri and Jim Keller from other companies. Intel's developer outreach is a crucial step in bringing up its graphics silicon, but it could be tough sledding, particularly if the architecture diverges too far from traditional designs. Only time will tell how long this effort will take, and complexity will be a huge factor in uptake.
Intel has also amassed an amazing array of companies as it seeks to move into lucrative new markets that span from the edge to the cloud, but its success in all of these areas hinges on one central factor that has proven problematic: Process technology.
Intel's transition to its 10nm process has been painful, and many speculate that it still isn't suitable for economical high-volume manufacturing of large dies, so even though GPUs are well-suited to absorb high defect rates thanks to their repeating structures, using multiple small dies would still be wise. Unfortunately, moving to a multi-chip arrangement as the key underlying design would be a first for a mainstream GPU, which could slow game developer optimizations. We've already seen dual-GPU systems die out over time, so implementing a multi-chip architecture transparently is going to be key.
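The economics of small dies can be illustrated with a simple Poisson yield model: the chance a die escapes defects falls exponentially with its area, so splitting a big GPU into chiplets lets far more of each wafer survive. The defect density below is a made-up figure chosen for illustration, not a real number for any Intel process.

```python
import math

# Simple Poisson yield model: yield = exp(-defect_density * die_area).
# The defect density is hypothetical, purely for illustration.

def die_yield(defect_density_per_mm2, die_area_mm2):
    return math.exp(-defect_density_per_mm2 * die_area_mm2)

d = 0.002  # defects per mm^2 (made-up figure)

big = die_yield(d, 600)    # one large 600 mm^2 monolithic GPU die
small = die_yield(d, 150)  # one 150 mm^2 chiplet

print(round(big, 2), round(small, 2))  # → 0.3 0.74
```

Roughly 30% of the big dies survive versus about 74% of the chiplets, and because defective chiplets are discarded individually rather than dragging down a whole monolithic die, far more of the wafer ends up in sellable parts.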
We don't know all of the details of the underlying hardware yet, but it is clear that Intel is moving quickly to get its silicon validated and in the hands of ISVs. We expect to learn more as the company moves closer to the launch of its Tiger Lake laptop processors, which are expected to debut this year.