Intel's Ponte Vecchio Xe Graphics Card Shows Up in Add-In Card Form Factor

As spotted by @momomo_us, Intel is preparing its Ponte Vecchio graphics cards for shipment in a reference validation platform (RVP), a type of system vendors use to begin optimizing hardware and software. According to the listing at the Eurasian Economic Commission, the graphics cards come in three flavors and a standard AIC form factor.

The finer-grained details of the Ponte Vecchio graphics cards are still shrouded in a fog of secrecy, but Intel has shared some of the broader details in several disclosures. We know that the cards, which are designed for exascale computing and will debut in the Aurora supercomputer at Argonne National Laboratory in 2021, will eventually be built on Intel's 7nm process. Intel plans to pair six of the cards, along with HBM, with two Sapphire Rapids CPUs in each node, using an innovative new Xe Link fabric based on the CXL interface to tie them to a central Rambo Cache.

This blindingly complex arrangement combines eight 7nm chiplets into each GPU with Foveros 3D packaging, and each GPU is then apparently mounted onto a large motherboard. It's easy to imagine that these validation cards, which are listed as pre-alpha and come in an add-in-card form factor like a standard GPU, consist of a trimmed-back version with either fewer chiplets or even a single chiplet.

Reference validation platforms are designed to foster the ecosystem of software developers before the launch of new devices. Given the nature of Intel's new oneAPI programming model, which Intel designed to simplify programming across its GPUs, CPUs, FPGAs, and AI accelerators, there'll be plenty of work on the development side.
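To illustrate what that single programming model looks like in practice, here is a minimal, hypothetical sketch of a vector-add kernel written in DPC++/SYCL, the C++ dialect at the heart of oneAPI. It assumes a SYCL 2020-capable compiler (such as Intel's DPC++ compiler); nothing in it is specific to Ponte Vecchio, and the same kernel source can target a GPU, CPU, or other accelerator depending on which device the queue selects.

```cpp
// Minimal DPC++/SYCL sketch (assumes a SYCL 2020 compiler such as Intel's DPC++).
// The same kernel runs on whatever oneAPI device the queue picks: GPU, CPU, etc.
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    constexpr size_t N = 1024;
    std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

    // default_selector_v picks an available accelerator (GPU if present, else CPU).
    sycl::queue q{sycl::default_selector_v};
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    {
        // Buffers manage host<->device data movement automatically.
        sycl::buffer<float> bufA(a.data(), sycl::range<1>(N));
        sycl::buffer<float> bufB(b.data(), sycl::range<1>(N));
        sycl::buffer<float> bufC(c.data(), sycl::range<1>(N));

        q.submit([&](sycl::handler& h) {
            sycl::accessor A(bufA, h, sycl::read_only);
            sycl::accessor B(bufB, h, sycl::read_only);
            sycl::accessor C(bufC, h, sycl::write_only, sycl::no_init);
            // The lambda is ordinary C++; the runtime decides where it executes.
            h.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
                C[i] = A[i] + B[i];
            });
        });
    } // Buffers go out of scope here, so results are copied back to the host vectors.

    std::cout << "c[0] = " << c[0] << " (expected 3)\n";
    return 0;
}
```

The portability is the point: developers write one kernel, and the oneAPI runtime dispatches it to whichever device is selected, which is what a reference validation platform lets software vendors start exercising before hardware ships.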

Intel splits the Xe architecture into three designs that each address a different segment: Xe HP for the data center, consumer graphics cards, and AI use cases; Xe LP for integrated graphics on its processors; and the high-end Xe HPC for high performance computing, with the latter (Ponte Vecchio) designed specifically for compute. The consumer version of the Xe graphics card for gaming will lead the way in 2020, likely on the 10nm process.

It's unknown whether the Ponte Vecchio validation cards use Intel's 10nm or 7nm process, with the former being far more likely in the early stages of development. We do know that Intel's first GPU silicon, the DG1, uses the 10nm process, and the company has already powered on the leading-edge prototypes.

In either case, the listing implies these systems are (or will be) shipping soon, which could mean Intel is moving along on its Ponte Vecchio architecture as planned. That's a good sign given that the company's previous attempts at developing its own Larrabee GPUs ended without a single productized model, and nagging delays to its 10nm process have hindered its more recent efforts.

Paul Alcorn
Managing Editor: News and Emerging Tech

Paul Alcorn is the Managing Editor: News and Emerging Tech for Tom's Hardware US. He also writes news and reviews on CPUs, storage, and enterprise hardware.

  • jimmysmitty
    That's a good sign given that the company's previous attempts at developing its own Larrabee GPUs ended without a single productized model

    I mean, not quite true, considering that Knights Ferry was launched and was basically the HPC Larrabee. We just never saw a consumer version of it since it probably wouldn't have kept up with AMD and Nvidia well enough to sell. But in HPC, it performed well and could be sold for a healthy margin.
    Reply
  • Deicidium369
    When the Larrabee-derived Knights Landing was released, the move to GPU-based accelerators was in its infancy - it was NEVER going to be a consumer card. Same with Itanium - it was never going to be a consumer-facing product (Itanium was a VLIW 64-bit CPU developed with and for HP).
    Reply
  • jimmysmitty
    Deicidium369 said:
    When the Larrabee-derived Knights Landing was released, the move to GPU-based accelerators was in its infancy - it was NEVER going to be a consumer card. Same with Itanium - it was never going to be a consumer-facing product (Itanium was a VLIW 64-bit CPU developed with and for HP).

    Intel's original plan for Itanium was actually to go server first and eventually move consumers to it, since it was a distinct uArch from x86 and Intel would not have to share licensing with AMD and VIA; they would have x86 and, of course, x86-64.

    I am somewhat sad, as I think a pure 64-bit uArch would have been better than holding onto the ancient x86 base.
    Reply
  • Deicidium369
    jimmysmitty said:
    Intel's original plan for Itanium was actually to go server first and eventually move consumers to it, since it was a distinct uArch from x86 and Intel would not have to share licensing with AMD and VIA; they would have x86 and, of course, x86-64.

    I am somewhat sad, as I think a pure 64-bit uArch would have been better than holding onto the ancient x86 base.
    Itanium was designed with HP for HP. It was never to be released outside of that agreement. It was never going to be the next gen, or replace x86, or anything other than meet HP's goals for the CPU.

    Intel could revoke AMD's x86 license, which means that AMD would have to recall 100% of its products from the channel and stop taking delivery from TSMC. AMD would be bankrupt overnight, and Intel could offer AMD a lifeline - the purchase of x64 outright.

    Thing is, AMD serves a useful function for both Nvidia and Intel - they are considered "competition" and as such allow both Nvidia and Intel to stave off antitrust / monopoly charges, while not actually providing real competition.
    Reply
  • Conahl
    Deicidium369 said:
    Itanium was designed with HP for HP. It was never to be released outside of that agreement. It was never going to be the next gen, or replace x86, or anything other than meet HP's goals for the CPU.

    Intel could revoke AMD's x86 license, which means that AMD would have to recall 100% of its products from the channel and stop taking delivery from TSMC. AMD would be bankrupt overnight, and Intel could offer AMD a lifeline - the purchase of x64 outright.

    Thing is, AMD serves a useful function for both Nvidia and Intel - they are considered "competition" and as such allow both Nvidia and Intel to stave off antitrust / monopoly charges, while not actually providing real competition.


    Wrong: "Although Itanium did attain limited success in the niche market of high-end computing, Intel had originally hoped it would find broader acceptance as a replacement for the original x86 architecture" - from Wikipedia. Also an interesting read about IA-64: https://www.techworld.com/tech-innovation/will-intel-abandon-the-itanium-2690/
    Intel can't revoke AMD's x86 license, because then AMD could revoke Intel's license to use AMD64, and they would both have to recall their respective products that use x86 and AMD64.

    " and Intel could offer AMD a lifeline - the purchase of x64 ouright. " um amd owns x64, aka amd64 i assume that should of been x86

    A simple Google search would have told you this.
    https://www.quora.com/Could-Intel-revoke-AMD-s-licence-to-produce-x86-cpu-if-they-wanted-to
    Reply
  • bit_user
    Deicidium369 said:
    Itanium was designed with HP for HP. It was never to be released outside of that agreement. It was never going to be the next gen, or replace x86, or anything other than meet HP's goals for the CPU.
    Oh, this is very wrong, indeed.

    The first gen Itanium processors even had a hardware engine to accelerate emulation of x86. Intel's plan was that IA64 (the name for Itanium's ISA) would trickle down to consumers and be the 64-bit replacement for x86.

    Why else do you think AMD beat Intel to extending x86 to 64-bit? Intel didn't want 64-bit x86, but AMD succeeded to such an extent that Intel had to embrace it and scrap its plans for world domination with IA64.

    Deicidium369 said:
    Intel could revoke AMD's x86 license,
    I'm not sure about that. Is that an assumption you're making, or is it based on actual information about the license terms?
    Reply
  • bit_user
    Deicidium369 said:
    When the Larrabee-derived Knights Landing was released, the move to GPU-based accelerators was in its infancy - it was NEVER going to be a consumer card.
    Not true. The original Larrabee was in fact intended to be a consumer GPU. It had hardware texturing engines, and there are prototype boards floating around with video outputs, not just for Larrabee but for the next generation or two (Linus Tech Tips got hold of one of these cards and tried to get it up and working as a GPU).

    As for Knights Landing being released when GPU compute was in its infancy, I have no idea where you got that. CUDA and Nvidia's Tesla line of data center accelerators were launched way back in 2007. KNL launched in 2013.

    References:
    https://www.anandtech.com/show/3738/intel-kills-larrabee-gpu-will-not-bring-a-discrete-graphics-product-to-market
    https://en.wikipedia.org/wiki/Xeon_Phi
    https://en.wikipedia.org/wiki/CUDA
    https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units#Tesla
    Reply
  • Deicidium369
    bit_user said:
    Oh, this is very wrong, indeed.

    The first gen Itanium processors even had a hardware engine to accelerate emulation of x86. Intel's plan was that IA64 (the name for Itanium's ISA) would trickle down to consumers and be the 64-bit replacement for x86.

    Why else do you think AMD beat Intel to extending x86 to 64-bit? Intel didn't want 64-bit x86, but AMD succeeded to such an extent that Intel had to embrace it and scrap its plans for world domination with IA64.


    I'm not sure about that. Is that an assumption you're making, or is it based on actual information about the license terms?
    Wrong. For HP, with HP. It was NEVER going to be a consumer chip - and no, AMD did not beat Intel to 64 bits because of this. I worked at HP during this time, and I know of what I speak. What you know you synthesized yourself, and it is based on nothing in reality.

    It was to replace PA-RISC for HP - it was a VLIW processor and server-specific. It had nothing to do with "AMD beating Intel to x64".

    Head canon is not canon.
    Reply
  • bit_user
    Deicidium369 said:
    Wrong. For HP, with HP. It was NEVER going to be a consumer chip - and no, AMD did not beat Intel to 64 bits because of this. I worked at HP during this time, and I know of what I speak. What you know you synthesized yourself, and it is based on nothing in reality.

    It was to replace PA-RISC for HP - it was a VLIW processor and server-specific. It had nothing to do with "AMD beating Intel to x64".

    Head canon is not canon.
    Nope. I don't agree.

    You can work at HP and be focused on it as a replacement for PA-RISC, while still missing Intel's larger plans for it as their 64-bit successor to x86. Both can be true.

    Why the heck do you think Intel messed around with PAE, and why wasn't Intel the one to extend x86 to 64-bit? It's because their only plan for 64-bit was IA64, and they did the PAE hack because it was running late.

    All of the tech press, at the time, was focused on IA64 as the 64-bit successor for x86. It's the main reason Intel had a hardware x86 front end integrated into Itanium. In the late 90's, all of the messaging was around Itanium being the next big thing. Sure, it was going to start out in the server realm, because consumers had no need for 64-bit, but then it was going to trickle down.

    See: https://en.wikipedia.org/wiki/Itanium#Other_markets (which cites: http://features.techworld.com/operating-systems/2690/will-intel-abandon-the-itanium/ ).

    Unfortunately, it was late, slow, and expensive. And the Pentium 3 turned out to be quite good, for the time. Then, when AMD came along and dropped Opteron, it was game over for IA64. Its window had closed, and Intel had no choice but to embrace AMD64.

    BTW, if you wanted to show real street cred, you'd describe IA64 as EPIC or "VLIW-like". EPIC was constraint-based, still requiring run-time scheduling by the CPU, which was done for binary compatibility between various models and generations. VLIW is entirely statically scheduled, and most commonly used in embedded scenarios, where having to compile for a specific CPU model isn't necessarily a problem. The main benefit of EPIC is that it saves the CPU from having to work out data dependencies on the fly.

    I think some of these ideas could resurface, as x86-64 eventually falls out of favor, with CPU designers struggling to find ever more ways to increase perf/W.
    Reply
  • bit_user
    Deicidium369 said:
    ...
    P.S. I'm glad you're back.

    Your extensive knowledge is welcome here, as long as you buttress your strong opinions with sound logic and quality sources, rather than insults or vitriol.

    Something I keep facing, myself, is the fact that internet arguments are fundamentally unwinnable. All you can really do is make your best case. If your counter-party remains unconvinced, accept there's nothing more you can do and move on. I usually let them have the last word - especially if I'm the one who "started it" (i.e. called them out on something they said).
    Reply