Intel Unveils Sunny Cove, Gen11 Graphics, Xe Discrete GPU, 3D Stacking

Intel's new heads of silicon development, led by Raja Koduri, Senior Vice President of Core and Visual Computing, and Jim Keller, Senior Vice President of Silicon Engineering, hosted the company's Architecture Day here in Santa Clara to outline Intel's broad new vision for the future. Dr. Murthy Renduchintala, Intel's chief engineering officer and group president of the Technology, Systems Architecture & Client Group (TSCG), also presented at the event, which was held in the former home of Intel co-founder Robert Noyce.

Highlights included the unveiling of the company's new Sunny Cove CPU microarchitecture, its new Gen11 integrated graphics, its 'Foveros' 3D chip-stacking technology, a teaser of the company's new Xe line of discrete graphics cards, and a new "One API" software initiative designed to simplify programming across Intel's entire product stack. We also caught a glimpse of the first 10nm Ice Lake processor for the data center.


Intel has amassed a treasure trove of new technologies over the last several years as it has diversified into areas like AI, autonomous driving, 5G, FPGAs, and IoT. It has even added GPUs to the list. Intel's process technology touches every one of those segments, as well as the chips that power them, but the company's delayed 10nm process has slowed its progress.

To help get back on track, Intel brought in Raja Koduri and Jim Keller to outline a new cohesive vision that spans all facets of its operations. Together with the company's leadership, the pair identified six key building blocks that the company will focus on over the coming years. Those pillars include process technology, architectures, memory, interconnects, security, and software. The company hopes that focusing on these key areas will accelerate its pace of innovation and help it regain its competitive footing.


The event was a wide-ranging affair with an almost overwhelming amount of information and insight into the company's plans for the future, but a few key announcements stood out as particularly promising. Let's take a look at some of the most interesting new technologies Intel is working on.

Comments from the forums (60 comments)
  • R_1
    Three words jumped out at me: tile-based rendering? Kyro/PowerVR did tile-based rendering, didn't they? Is this that, or another implementation of the same idea, I wonder?
  • ervit
    Kevin Costner has a new job? O_O
  • hannibal
    Hmm... Intel's new plan is to kill AMD on the GPU and CPU fronts too, so that AMD will never come back and Intel can go back to 1% improvements after AMD is killed... Let's see if they can do that to Nvidia too...
    So far Intel's GPU department has been, well, not so stellar, but they now have much more muscle than before... I almost hope that they don't do too well. If Intel can kill the competition, it is bad for customers. Of course I hope to see good Intel products too, but this seems so ambitious that AMD may run out of money if Intel gets all its gears moving.
    Well, let's see. Hopefully AMD will get everything ready during this year, so that they can react to Intel's manufacturing power with even better products!
  • nufelevas
    Intel always crashes and burns at driver support. Its graphics driver support is just pathetic.
  • salgado18
    Every Intel fanboy should send a letter to AMD saying 'thank you'. If it weren't for Ryzen, there wouldn't be so much effort to get back in the game.
  • SkyBill40
    So... a refresh of a refresh of a refresh? Or did I miss something?
  • AgentLozen
    I found this article to be really interesting. It's a good look into future Intel technology, and I'm happy to see that they still have some tricks up their sleeves for the next few years.
  • stdragon
    Intel has a history of developing dedicated GPU hardware, then killing it partway through the cycle. Eventually, they pick up the pieces and adopt them for the iGPU. It's almost as though their dedicated GPU development is really just an R&D path to circle back around for iGPU incorporation. Of course, I'm joking... sorta.

    Can't recall where, but Intel's past efforts in dedicated GPU hardware were something between a traditional Nvidia/AMD approach and an iGPU/APU design. Meaning, they sacrificed some dedicated hardware for greater flexibility in GPU programmability. It's a sliding scale - you can go with a dedicated ASIC, which is really fast in terms of IPC performance but rigid in capability, or go with greater programmability, which gives more flexibility for future standards but lower IPC performance.

    Having something of a hybrid approach is far more useful in the data center market, where you need GPU performance for VM infrastructure. It would provide a longer lifespan in production while still allowing greater flexibility to adapt to changes in the hypervisor and the VMs that need dedicated video hardware acceleration.
  • JamesSneed
    I swear it looks like Raja and Jim are at Intel's funeral with the roses and greenery. :)

    I wonder when they changed the slide to move process from the outer, largest ring to the inner, smallest ring. Just saying, I thought it was cute of them to de-emphasize the main point that carried Intel for the last 20 years. I'll guess that when they get their 7nm EUV process going, they'll switch that back around as a selling point.
  • jimmysmitty
    Anonymous said:
    Hmm... Intel's new plan is to kill AMD on the GPU and CPU fronts too, so that AMD will never come back and Intel can go back to 1% improvements after AMD is killed... Let's see if they can do that to Nvidia too...
    So far Intel's GPU department has been, well, not so stellar, but they now have much more muscle than before... I almost hope that they don't do too well. If Intel can kill the competition, it is bad for customers. Of course I hope to see good Intel products too, but this seems so ambitious that AMD may run out of money if Intel gets all its gears moving.
    Well, let's see. Hopefully AMD will get everything ready during this year, so that they can react to Intel's manufacturing power with even better products!


    Intel won't kill the competition. They never have. AMD was down pretty low, and Intel probably could have taken a kill shot if they wanted. They didn't, and they won't. It is better business to have AMD around.

    Anonymous said:
    Every Intel fanboy should send a letter to AMD saying 'thank you'. If it weren't for Ryzen, there wouldn't be so much effort to get back in the game.


    It wasn't just Ryzen. It was also the 10nm problems and delays. In fact, I would say that was more of the issue. If they hadn't run into those issues, they were planning on having 10nm in 2015, which would have made for a very different roadmap, especially with the density they were planning for it.

    Anonymous said:
    So... a refresh of a refresh of a refresh? Or did I miss something?


    Sunny Cove is quite a bit more than a refresh. There are a lot of actual changes to the way the CPU works. It will be interesting to see.
  • keeperofthestones01
    Make everything big again, please. Not everything needs to be shrunk. BIGGER CPUs and GPUs make sense... easier to cool, can take much higher voltage, have higher reliability for infrastructure-critical deployments, and are cheaper to produce... I don't care if my CPU takes up half a motherboard... it's good for all the reasons above. Shrinking stuff to make it smaller may now mean shorter lifespans, higher sensitivity to EMS/spikes, and all manner of issues we don't want. So go back to a highly efficient, optimized 22nm process, double the die size, glue 'em together, and let those puppies ROAR with effective cooling, high voltage, and solid-state-type reliability... or is that just too much simple common sense to ever happen... so let's shrink them, make them less reliable, harder to cool, more prone to issues of signal loss/migration, etc., etc., etc. ... how much do they pay Intel's CEO? My advice is free.
  • stdragon
    Large dies are expensive because you can only chop up a wafer (sliced from a monocrystal) so many ways. Besides, even if it were cheap, the laws of physics - the speed of light - are your limitation. Due to signal propagation, a signal can only travel so far before the next cycle starts. The higher the frequency, the less distance you can cover in one cycle.
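
    A quick back-of-the-envelope sketch in Python (assuming ideal propagation at the speed of light in a vacuum; real on-chip signals travel considerably slower, so these are optimistic upper bounds):

        # Upper bound on how far a signal can travel in one clock cycle,
        # assuming it moves at the speed of light in a vacuum.
        C = 299_792_458  # meters per second
        for freq_ghz in (1, 3, 5):
            cycle_time = 1 / (freq_ghz * 1e9)   # seconds per cycle
            reach_mm = C * cycle_time * 1_000   # distance per cycle, in millimeters
            print(f"{freq_ghz} GHz: ~{reach_mm:.0f} mm per cycle")

    Even at that generous limit, a 5 GHz clock leaves a signal only about 60 mm of travel per cycle, which is part of why enormous dies become hard to clock.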
  • kyotokid
    Anonymous said:
    Hmm... Intel's new plan is to kill AMD on the GPU and CPU fronts too, so that AMD will never come back and Intel can go back to 1% improvements after AMD is killed... Let's see if they can do that to Nvidia too...
    So far Intel's GPU department has been, well, not so stellar, but they now have much more muscle than before... I almost hope that they don't do too well. If Intel can kill the competition, it is bad for customers. Of course I hope to see good Intel products too, but this seems so ambitious that AMD may run out of money if Intel gets all its gears moving.
    Well, let's see. Hopefully AMD will get everything ready during this year, so that they can react to Intel's manufacturing power with even better products!


    ...as a CG artist, killing the dedicated GPU would be a major step backwards. Getting the rendering performance that a single GPU card delivers would require a multi-system render farm, since, for one, CPU core cost is higher than GPU core cost, and GPU cores along with dedicated GDDR memory are more efficient and faster than CPU cores/threads and system memory.

    I used to render in 3DL, Carrara, and Bryce on the CPU with an Intel integrated graphics chipset, and it was often glacially slow, taking hours or even days to complete while putting an excessive amount of heat strain on the CPU for lengthy periods of time. With render engines like Octane and Iray, which are GPU-based, I can get similar results in a fraction of the time - often minutes instead of hours, and at worst, a couple of hours instead of days. Render speed is particularly important for maintaining a decent production workflow, and going back to a very limited number of cores/threads instead of thousands would cripple the process.
  • bit_user
    Anonymous said:
    That means it will scale from teraflops of performance integrated into a standard processor up to petaflops of performance with discrete cards.

    No, not petaflops in a single GPU. Nvidia can deliver a couple of PFLOPS of deep-learning performance in an 8+ GPU chassis, however. But that's not exactly comparable to the ~1 TFLOPS of performance described for the Gen11 part with 64 EUs.
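
    For a rough sense of where that ~1 TFLOPS figure comes from, here is a back-of-the-envelope sketch in Python (the 16 FP32 FLOPS per EU per clock and the ~1 GHz clock are assumptions on my part, not Intel-confirmed numbers for this part):

        # Rough FP32 throughput estimate for a 64-EU Gen11 GPU, assuming each EU
        # retires 16 FP32 FLOPS per clock (two 4-wide ALUs doing fused multiply-adds).
        eus = 64
        flops_per_eu_per_clock = 2 * 4 * 2   # ALUs x SIMD width x (multiply + add)
        clock_hz = 1.0e9                     # assumed ~1 GHz GPU clock
        tflops = eus * flops_per_eu_per_clock * clock_hz / 1e12
        print(f"~{tflops:.1f} TFLOPS FP32")  # prints ~1.0 TFLOPS

    That's three orders of magnitude short of a petaflop, so petaflops only make sense when you're talking about many cards in a chassis or rack.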

    Anonymous said:
    Intel also tells us that it will select different nodes for different products based on the needs of the segment. That's similar to the approach taken by third-party fabs like TSMC and Global Foundries

    Huh? Those fabs just make whatever customers order, AFAIK. It's more equivalent to ARM, which offers each core on a few different nodes.

    Anonymous said:
    It can process seven operations simultaneously

    I think you mean 8. Skylake's execution ports are numbered 0 - 7, so that's a total of 8.
  • bit_user
    Leave it to the marketing geniuses at Intel to name their new graphics architecture Xenon, when they have a CPU product line that's branded as Xeon. That's almost up there with their "Core" architecture branding. The worst part is that I hate to imagine how much they get paid to have such bad ideas, but you can bet it's more than their engineers typically make.

    Also, while the idea of decoupling architecture from process node sounds good and uncontroversial, the decisions underlying an architecture have a lot to do with the expected performance, power, and cost metrics of the target manufacturing node. So, it doesn't feel to me like this development is without compromises.
  • bit_user
    Anonymous said:
    Make everything big again, please. Not everything needs to be shrunk. BIGGER CPUs and GPUs make sense... easier to cool, can take much higher voltage, have higher reliability for infrastructure-critical deployments, and are cheaper to produce

    Shrinking stuff is generally what makes it faster, cheaper, and more power-efficient.

    While you might not care if your PC burns a couple of kilowatts and sounds like a hair dryer, most of us do.
  • bit_user
    Anonymous said:
    ...as a CG artist, killing the dedicated GPU would be a major step backwards.

    Did you miss the part where Intel is also making dGPUs? They seem to recognize the advantages you cite, which is why they're getting into that market.
  • s1mon7
    Wow, so surprised to see Intel being humble and upfront about their roadmaps. That must be the first time it ever happened. It's probably because their competition is coming out with better products soon, so they're like "wait, wait for us! We're going to have good products not long after as well, don't go to them!".
  • kyotokid
    ...reading up on the Larrabee project, which was Intel's first attempt at a GPU around 10 years ago. In spite of the promise, it was eventually cancelled. From what I gather so far, it was being targeted towards high-speed, high-volume computational uses rather than games or graphics production (save for CAD). Getting late here (02:20) and I'm becoming a bit too punchy to put together anything highly detailed, so I'm going to sleep on it and get a better start tomorrow.

    From what little I have seen and read about the "reboot" of the concept, there is a lot of speculation flying around, some FUD, but not much in the way of concrete details except a targeted release date (2020).
  • bit_user
    Anonymous said:
    It's probably because their competition is coming out with better products soon, so they're like "wait, wait for us! We're going to have good products not long after as well, don't go to them!".

    Correct. They're trying to give their customers reasons not to switch to another vendor. It's sort of the flip-side of FUD.

    Also, I think their API story is partly a recognition of how well CUDA worked for Nvidia. If CUDA didn't stand as an example of the industry's apparent willingness to embrace a vendor-specific API, I wonder if they'd have tried to push their own.