Intel Unveils Sunny Cove, Gen11 Graphics, Xe Discrete GPU, 3D Stacking

3D Chip Stacking With Foveros


Foveros (Greek for "awesome") is a new 3D packaging technology that Intel plans to use to build processors with dies stacked atop one another. 3D chip stacking is a well-traveled concept that has been under development for decades, but the industry hasn't been able to overcome the power and thermal challenges, not to mention poor yields, well enough to bring the technology to high-volume manufacturing.

Intel says it built Foveros upon the lessons it learned with its innovative EMIB (Embedded Multi-Die Interconnect Bridge) technology, a complicated name for a technique that provides high-speed communication between several chips. EMIB allowed the company to connect multiple dies with a pathway that delivers nearly the same performance as a single large processor. Now Intel has expanded on the concept to allow stacking dies atop one another, thus improving density.

The key idea behind chip stacking is to mix and match different types of dies, such as CPUs, GPUs, and AI processors, to build custom SoCs (systems-on-chip). It also allows Intel to combine components built on different process nodes in the same package, letting the company use larger nodes for harder-to-shrink or purpose-built components. That's a key advantage as shrinking chips becomes more difficult.

Intel had a fully functioning Foveros chip on display at the event, which it built for an unnamed customer. The package consists of a 10nm CPU and an I/O chip. The two chips mate with TSVs (through-silicon vias) that connect the dies through vertical electrical connections in the center of the die; those channels then mate with microbumps on the underlying package. Intel also added a memory chip to the top of the stack using a conventional PoP (Package on Package) implementation. The company envisions even more complex implementations in the future that include radios, sensors, photonics, and memory chiplets.
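As a rough mental model of the stack described above (the structure below is purely illustrative and not an Intel specification), the package can be thought of as an ordered list of dies plus the links that join them:

    # Illustrative-only sketch of the demo package described above.
    from dataclasses import dataclass

    @dataclass
    class Die:
        name: str
        process: str
        role: str

    # Bottom-to-top order of the stack, per the description in the article.
    foveros_demo = [
        Die("base die", "22FFL", "I/O and other southbridge-style functions"),
        Die("compute die", "10nm", "hybrid x86 CPU"),
        Die("memory die", "n/a", "DRAM attached via package-on-package (PoP)"),
    ]

    # How adjacent layers connect: TSVs/microbumps below, PoP on top.
    links = [
        ("base die", "compute die", "TSVs + microbumps"),
        ("compute die", "memory die", "PoP"),
    ]

    for die in foveros_demo:
        print(f"{die.name:12} {die.process:6} {die.role}")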


The current design consists of two dies. The lower die houses all of the typical southbridge features, like I/O connections, and is fabbed on the 22FFL process. The upper die is a 10nm CPU that features one large compute core and four smaller 'efficiency' cores, similar to an ARM big.LITTLE processor. Intel calls this a "hybrid x86 architecture," and it could denote a fundamental shift in the company's strategy. The company later confirmed that it is building a new line of products based on the hybrid x86 architecture, which could be its response to the Qualcomm Snapdragon processors that power Always Connected laptops. Intel representatives did confirm that the first product draws less than 7 watts (2mW in standby) and is destined for fanless devices, but they wouldn't elaborate further.
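The appeal of the big.LITTLE-style split is straightforward: demanding, latency-sensitive work runs on the single big core, while light background tasks stay on the small, power-sipping cores. Here's a toy sketch of that idea (this placement policy is purely illustrative and is not Intel's or any OS scheduler):

    # Illustrative-only sketch of big/little task placement.
    BIG_CORES = ["big0"]
    SMALL_CORES = ["small0", "small1", "small2", "small3"]

    def place_task(name: str, demanding: bool) -> str:
        """Toy policy; real schedulers also weigh load, thermals, and battery."""
        pool = BIG_CORES if demanding else SMALL_CORES
        return pool[hash(name) % len(pool)]

    for task, heavy in [("web page render", True), ("mail sync", False),
                        ("background indexing", False), ("game physics", True)]:
        print(f"{task:20} -> {place_task(task, heavy)}")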

The package measures 12x12x1mm, but Intel isn't disclosing the measurements of the individual dies. Stacking small dies should be relatively simple compared to stacking larger ones, but Intel seems confident in its ability to bring the technology to larger processors. Ravishankar Kuppuswamy, Vice President & General Manager of Intel's Programmable Solutions Group, announced that the company is already developing a new FPGA using the Foveros technology. Kuppuswamy claims Foveros will enable up to two orders of magnitude (roughly 100x) performance improvement over the Falcon Mesa FPGAs.

Comments from the forums
  • R_1
    Three words jumped out at me: tile-based rendering. Kyro/PowerVR did tile-based rendering, didn't they? Is this that, or another implementation of the same idea, I wonder?
  • ervit
    Kevin Costner has a new job? O_O
  • hannibal
    Hmm... Intel's new plan is to kill AMD on the GPU and CPU fronts too, so that AMD never comes back and Intel can go back to 1% improvements once AMD is dead... Let's see if they can do that to Nvidia too...
    So far Intel's GPU department has been, well, not so stellar, but they have much more muscle now than before... I almost hope that they don't do too well. If Intel can kill the competition, it is bad for customers. Of course I also hope to see good Intel products, but this seems so ambitious that AMD may run out of money if Intel gets all its gears moving.
    Well, let's see. Hopefully AMD will get everything ready during this year, so that they can react to Intel's manufacturing power with even better products!
  • nufelevas
    Intel always crashes and burns at driver support. Its graphics driver support is just pathetic.
  • salgado18
    Every Intel fanboy should send a letter to AMD saying 'thank you'. If it weren't for Ryzen, there wouldn't be so much effort to get back to the game.
  • SkyBill40
    So... a refresh of a refresh of a refresh? Or did I miss something?
  • AgentLozen
    I found this article to be really interesting. It's a good look into future Intel technology, and I'm happy to see that they still have some tricks up their sleeves for the next few years.
  • stdragon
    Intel has a history of developing dedicated GPU hardware, then killing it partway through the cycle. Eventually, they pick up the pieces and adopt them for the iGPU. It's almost as though their dedicated GPU development is really just an R&D path to circle back around for iGPU incorporation. Of course, I'm joking... sorta.

    Can't recall where, but Intel's past efforts in dedicated GPU hardware were something between a traditional Nvidia/AMD approach and an iGPU/APU design. Meaning, they sacrificed some dedicated hardware for greater flexibility in GPU programmability. It's a sliding scale - you can go with a dedicated ASIC, which is really fast in terms of IPC performance but rigid in capability, or go with greater programmability, which offers more flexibility for future standards but lower IPC performance.

    Having something of a hybrid approach is far more useful in the data center market, where you need GPU performance for VM infrastructure. It would provide a longer lifespan in production while still allowing greater flexibility in changes to the hypervisor and the VMs that need dedicated video hardware acceleration.
  • JamesSneed
    I swear it looks like Raja and Jim are at Intel's funeral with the roses and greenery. :)

    I wonder when they changed the slide to move process from the outer, largest ring to the inner, smallest ring. Just saying, I thought it was cute of them to de-emphasize the main point that carried Intel for the last 20 years. I'll guess that when they get their 7nm EUV process going, they'll switch that back around as a selling point.
  • jimmysmitty
    60597 said:
    Hmm... Intel's new plan is to kill AMD on the GPU and CPU fronts too, so that AMD never comes back and Intel can go back to 1% improvements once AMD is dead... If Intel can kill the competition, it is bad for customers.


    Intel won't kill the competition. They never have. AMD was down pretty low, and Intel probably could have taken a kill shot if they wanted to. They don't and won't. It is better business to have AMD around.

    120171 said:
    Every Intel fanboy should send a letter to AMD saying 'thank you'. If it weren't for Ryzen, there wouldn't be so much effort to get back to the game.


    It wasn't just Ryzen. It was also the 10nm problems and delays. In fact, I would say those were more of the issue. If they hadn't run into those problems, they were planning on having 10nm in 2015, which would have made for a very different roadmap, especially with the density they were planning for it.

    1442759 said:
    So... a refresh of a refresh of a refresh? Or did I miss something?


    Sunny Cove is quite a bit more than a refresh; there are a lot of real changes to the way the CPU works. It will be interesting to see.
  • keeperofthestones01
    Make everything big again, please. Not everything needs to be shrunk. BIGGER CPUs and GPUs make sense... easier to cool, able to take much higher voltage, more reliable for infrastructure-critical deployments, and cheaper to produce... I don't care if my CPU takes up half a motherboard... good for all the reasons above. Shrinking stuff may now mean a shorter lifespan, higher sensitivity to EMS/spikes, and all manner of issues we don't want. So go back to a highly efficient, optimized 22nm process, double the die size, glue 'em together, and let those puppies ROAR with effective cooling, high voltage, and solid-state-type reliability... or is that just too much common sense to ever happen? So let's shrink them, make them less reliable, harder to cool, more prone to issues of signal loss/migration, etc., etc., etc. ... how much do they pay Intel's CEO? My advice is free.
  • stdragon
    Die size is expensive because you can only chop up a wafer (sliced from a mono-crystal) so many ways. Besides, even if it were cheap, the laws of physics - the speed of light - are your limitation. Due to signal propagation, a signal can only travel so far before the next cycle starts. The higher the frequency, the less distance you can cover in one cycle.
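    A rough back-of-the-envelope illustration of that limit (these figures assume vacuum light speed; real on-die signals propagate considerably slower, so the practical limit is tighter):

    # Back-of-the-envelope only: distance light travels in one clock cycle.
    C = 3.0e8  # speed of light in a vacuum, m/s

    for freq_ghz in (1, 3, 5):
        cycle_time_s = 1.0 / (freq_ghz * 1e9)
        print(f"{freq_ghz} GHz: ~{C * cycle_time_s * 100:.0f} cm per clock cycle")
    # Prints roughly 30 cm at 1 GHz, 10 cm at 3 GHz, and 6 cm at 5 GHz.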
  • kyotokid
    60597 said:
    Hmm... Intel's new plan is to kill AMD on the GPU and CPU fronts too, so that AMD never comes back and Intel can go back to 1% improvements once AMD is dead... If Intel can kill the competition, it is bad for customers.


    ...as a CG artist, killing the dedicated GPU would be a major step backwards. Matching the rendering performance of a single GPU card would require a multi-system render farm; for one, CPU cores cost more than GPU cores, and GPU cores paired with dedicated GDDR memory are faster and more efficient than CPU cores/threads and system memory.

    I used to render in 3DL, Carrara, and Bryce on the CPU with an Intel integrated graphics chipset, and it was often glacially slow, taking hours or even days to complete while putting an excessive amount of heat strain on the CPU for lengthy periods of time. With GPU-based render engines like Octane and Iray, I can get similar results in a fraction of the time: often minutes instead of hours, and at worst, a couple of hours instead of days. Render speed is particularly important for maintaining a decent production workflow, and going back to a very limited number of cores/threads instead of thousands would cripple the process.
  • bit_user
    1920539 said:
    That means it will scale from teraflops of performance integrated into a standard processor up to petaflops of performance with discrete cards.

    No, not petaflops in a single GPU. Nvidia can deliver a couple of PFLOPS of deep-learning performance in an 8+ GPU chassis, however. But that's not exactly comparable to the ~1 TFLOPS of performance described for the Gen11 design with 64 EUs.
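    For a sense of scale, here's the rough arithmetic (the clock and per-EU throughput below are assumed figures for illustration, not Intel's published specs):

    # Back-of-the-envelope only; per-EU throughput and clock are assumptions.
    eus = 64
    flops_per_eu_per_clock = 16      # assumes 2x 4-wide FP32 ALUs with FMA (2 ops)
    clock_ghz = 1.0                  # assumed clock for illustration
    tflops = eus * flops_per_eu_per_clock * clock_ghz / 1000.0
    print(f"64-EU iGPU: ~{tflops:.2f} TFLOPS FP32")     # ~1 TFLOPS
    print(f"1 PFLOPS is ~{1000.0 / tflops:.0f}x that")  # ~1000x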

    1920539 said:
    Intel also tells us that it will select different nodes for different products based on the needs of the segment. That's similar to the approach taken by third-party fabs like TSMC and Global Foundries

    Huh? Those fabs just make whatever customers order, AFAIK. It's more equivalent to ARM, though, which offers each core on a few different nodes.

    1920539 said:
    It can process seven operations simultaneously

    I think you mean 8. Skylake's execution ports are numbered 0 - 7, so that's a total of 8.
  • bit_user
    Leave it to the marketing geniuses at Intel to name their new graphics architecture Xenon, when they have a CPU product line that's branded as Xeon. That's almost up there with their "Core" architecture branding. The worst part is that I hate to imagine how much they get paid to have such bad ideas, but you can bet it's more than their engineers typically make.

    Also, while the idea of decoupling architecture from process node sounds good and uncontroversial, the decisions underlying an architecture have a lot to do with the expected performance, power, and cost metrics of the target manufacturing node. So, it doesn't feel to me like this development is without compromises.
  • bit_user
    2821755 said:
    Make everything big again, please. Not everything needs to be shrunk. BIGGER CPUs and GPUs make sense... easier to cool, able to take much higher voltage, more reliable for infrastructure-critical deployments, and cheaper to produce

    Shrinking stuff is generally what makes it faster, cheaper, and more power-efficient.

    While you might not care if your PC burns a couple of kilowatts and sounds like a hairdryer, most of us do.
  • bit_user
    332490 said:
    ...as a CG artist, killing the dedicated GPU would be a major step backwards.

    Did you miss the part where Intel is also making dGPUs? They seem to recognize the advantages you cite, which is why they're getting into that market.
  • s1mon7
    Wow, so surprised to see Intel being humble and upfront about their roadmaps. That must be the first time it ever happened. It's probably because their competition is coming out with better products soon, so they're like "wait, wait for us! We're going to have good products not long after as well, don't go to them!".
  • kyotokid
    ...reading up on the Larrabee project, which was Intel's first attempt at a GPU around 10 years ago. In spite of the promise, it was eventually cancelled. From what I gather so far, it was being targeted toward high-speed, high-volume computational uses rather than games or graphics production (save for CAD). Getting late here (02:20) and becoming a bit punchy to put together anything highly detailed, so I'm going to sleep on it and get a better start tomorrow.

    From what little I have seen and read about the "reboot" of the concept, there's a lot of speculation flying around, some FUD, but not much in the way of concrete details except a targeted release date (2020).
  • bit_user
    2809234 said:
    It's probably because their competition is coming out with better products soon, so they're like "wait, wait for us! We're going to have good products not long after as well, don't go to them!".

    Correct. They're trying to give their customers reasons not to switch to another vendor. It's sort of the flip-side of FUD.

    Also, I think their API story is partly a recognition of how well CUDA worked for Nvidia. If CUDA didn't stand as an example of the industry's apparent willingness to embrace a vendor-specific API, I wonder if they'd have tried to push their own.