parallel processing, hot carriers, reliability

iron8orn

Admirable
http://www0.egr.uh.edu/courses/ece/ECE6347/Course%20Notes/Chap9_Reliability.pdf

I am not an electrical engineer, but I do have a friend with a master's in electrical engineering.

We had a discussion about this that got cut short, and I would like to include others in it.

We came to the conclusion that, in some respects, a 22nm CPU was never a good idea and that we will most likely never see a 22nm GPU to match. Also that, in some respects, the advancement of silicon has gone a little wacky and has reached a point where we can no longer make transistors smaller while maintaining reliability.

What I did not get to discuss is the performance of the 28nm Kaveri APU with its on-die graphics disabled and paired with a high-performance discrete GPU, along with, in theory, how good a 28nm FX or a 28nm hyper-threaded Intel chip could be for high-performance computing.
 
Solution


We will see 14nm-ish GPUs, but they won't be fabricated by Intel (aside from Intel's IGPs). Other manufacturers are working on their own 14nm and 16nm FinFET processes (Intel's 22nm and 14nm 3D Tri-Gate transistors are a variation on FinFET, although Intel doesn't use the term in marketing).

Intel has a long history of always being one step ahead of...

iron8orn

Admirable
I do agree that time and technology will march on, and we may very well see a 22nm GPU, but the performance increase over current 28nm GPUs would be negligible.

I do believe that within the next five years silicon will be refined to its pinnacle for public use and we will begin to see graphene take the scene, maybe in a cell phone or tablet first, then a game system, followed by servers and high-performance personal computers.
 


Damn it, I had a really nice post typed out and lost it when I hit the back button. I hate Chrome sometimes.

Computer Engineer here,

Intel's 22nm node uses a long, narrow transistor channel with a high-k dielectric gate surrounding it on three sides. This is ridiculously hard to manufacture, but it results in near-ideal switching and conducting properties that simply cannot be achieved with older technologies.
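For a rough, textbook-level picture of why wrapping the gate around the channel gives such sharp switching (this is the standard subthreshold-swing relation, not anything specific to Intel's process or the linked notes):

    SS = ln(10) * (kT/q) * (1 + C_dep / C_ox)    [mV per decade of drain current]

The better the gate's electrostatic control over the channel, the smaller the effective C_dep/C_ox term, and SS approaches the ideal limit of roughly 60 mV/decade at room temperature. In plain terms, the transistor turns off more abruptly and leaks less, which is exactly the benefit of gating the channel on three sides instead of one.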

The 22nm node and its 3D Tri-Gate transistor technology are not just a good idea; they're one of the best ideas Intel has ever had. Many industry experts view it as being more than a full generation ahead of competitors such as Global Foundries and Samsung. In fact, Global Foundries, Samsung, TSMC, and several other companies are working together to try to leapfrog their next planned node step in an attempt to catch up to Intel.

We will never see a discrete 22nm GPU because Intel is not in the discrete GPU business, and they're also not in the business of providing their valuable fabrication services to their competitors. The 14nm shrink of the 22nm fabrication process will be used to fabricate a variety of CPUs and SoCs for Intel, as well as high-end Stratix FPGAs for Altera (which I am seriously looking forward to playing with).
 

iron8orn

Admirable
I have heard about the 14nm shrink, and I have my doubts, as do others, but will we see a 14nm GPU to match?

Above all, I put my money on parallel processing, and that currently puts me on the doorstep of AMD and the HSA Foundation.

I mean, what good is the shrink without shrinking GPUs? Not to mention that the true technological innovation lies beyond silicon, in graphene.
 


We will see 14nm-ish GPUs, but they won't be fabricated by Intel (aside from Intel's IGPs). Other manufacturers are working on their own 14nm and 16nm FinFET processes (Intel's 22nm and 14nm 3D Tri-Gate transistors are a variation on FinFET, although Intel doesn't use the term in marketing).

Intel has a long history of always being one step ahead of the competition as far as fabrication goes, so we'll see how the various processes stack up. For example, Intel's 32nm bulk silicon process generally performs better than Global Foundries' 32nm Silicon-on-Insulator process.

Parallel processing is great and all, but be careful not to put snake eggs and chicken eggs in the same basket. There are serious limitations to vector processing that many people aren't aware of. Intel has its own parallel processing platform (Xeon Phi), which may very well take the supercomputer market by storm when it comes out soon.
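To make the vector-processing caveat concrete, here's a minimal C sketch (my own illustration with made-up function names, not anything from Intel or AMD). The first loop has fully independent iterations, so an auto-vectorizer or a GPU-style architecture can spread it across lanes; the second has a loop-carried dependence, so extra vector width or extra cores don't help without restructuring the algorithm.

#include <stddef.h>

/* Independent iterations: no iteration reads another's result, so a
   compiler at -O3 can vectorize it and a GPU can run it across threads. */
void scale_add(float *out, const float *a, const float *b, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = 2.0f * a[i] + b[i];
}

/* Loop-carried dependence: out[i] needs out[i-1], so the work forms a
   serial chain. Throwing vector lanes or threads at it does nothing on
   its own; it has to be rewritten (e.g. as a parallel scan) first. */
void prefix_sum(float *out, const float *a, size_t n)
{
    if (n == 0) return;
    out[0] = a[0];
    for (size_t i = 1; i < n; i++)
        out[i] = out[i - 1] + a[i];
}

Plenty of real workloads look more like the second loop than the first, which is why simply counting cores or SIMD width tells you little about how much speedup a given program will actually see.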
 
Solution