
28nm vs 20nm.

Tags:
  • Temperature
  • Graphics
Last response: in Graphics & Displays
January 29, 2014 8:38:01 AM

How much will the Maxwell series benefit from 20nm (if they're even doing 20nm in their 800 series cards)? Will it blow away the current 28nm 780 Ti, or will it just be a little faster, if not the same? I heard it would have a lower TDP, but does that mean the cards will run at higher clocks? Wouldn't the temperature get higher that way and still be the limiting factor? I'm agonizing over whether to get a 780 Ti or wait for an 880, because in one place I hear they will be 60% faster and in another I hear they will just be rebranded 700 series cards. Would 20nm be a lot more expensive as well?


January 29, 2014 8:44:29 AM

Well, take a look at 40nm compared to what we have now: a 780 is twice as fast as a 580. It's probably going to be a pretty big deal.
January 29, 2014 8:50:27 AM

Hopefully that logic turns out to be the case. I'm considering a GPU upgrade by the end of this summer and I really need some better options. The high-end market right now consists of the inflated 290 and 290X, the aging (by this summer) 780, and the overpriced 780 Ti. Anything less than a die shrink and huge performance gains would not be worth the upgrade from my 670.

I do have some fears that the Nvidia 800 series are going to be expensive, based on the debut price of the 780 and the fact that Nvidia is about to release two $999+ cards, the 790 and the Titan Black. It really sounds like $649 is going to be the new debut standard for Nvidia flagships, especially with the 790 and Titan Black setting up the stage.
January 29, 2014 9:10:24 AM

I just want to point out that the die size does not directly affect the performance of a CPU or GPU.

At the most basic level, a die shrink allows for a lower cost of production per unit because more GPU dies can fit on a 300mm silicon wafer. The more that can be manufactured at a single time, the lower the cost. It also allows for lower power consumption (lower voltage), which should also mean less heat.

If the GTX 580 were shrunk down from a 40nm to a 20nm die process, the two versions would provide the exact same performance at the same clocks. However, die shrinks can allow CPUs / GPUs to be clocked higher for better performance, since as stated above, the die shrink should allow for lower power consumption and heat at the same clock speed.

The major factor in CPU and GPU performance is the architecture itself. If Nvidia should somehow totally mess up Maxwell to the point where it performs like a GeForce 2 card from yesteryear, then even if it were possible to shrink the die process to 1nm it would still perform like a P.O.S.
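The wafer-economics point above can be sketched with a quick back-of-the-envelope calculation. This is only an illustration: the ~520 mm² figure for the GTX 580's GF110 die is real, but the perfect (20/40)² = 0.25 area scaling is an idealized assumption, since pads and analog blocks don't shrink linearly and yields drop on a new node.

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """First-order estimate: wafer area divided by die area,
    minus a correction term for partial dies lost at the wafer edge."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# GTX 580 (GF110) is roughly 520 mm^2 on 40nm.
# An idealized straight shrink to 20nm scales area by (20/40)^2 = 0.25.
full_node = dies_per_wafer(300, 520)         # 40nm-sized die
shrunk    = dies_per_wafer(300, 520 * 0.25)  # idealized 20nm die
print(full_node, shrunk)
```

Roughly four times as many candidate dies per 300mm wafer, which is where the cost-per-unit saving comes from; in practice defect density on an immature node eats into that, which is the cost problem discussed further down the thread.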
January 29, 2014 9:18:40 AM

Very good point there, but I thought smaller die processes were more difficult to manufacture? As for the architecture, we'll find out soon enough with the 750 Ti, which is supposed to be the first Maxwell card, due out next month.
January 30, 2014 8:38:42 AM

thismafiaguy said:
Very good point there, but I thought smaller die processes were more difficult to manufacture? As for the architecture, we'll find out soon enough with the 750 Ti, which is supposed to be the first Maxwell card, due out next month.


Well, I think that is the current problem: going smaller is difficult, so it becomes expensive instead of saving cost the way the theory says it should. The process will mature over time, but if you wait for it to truly mature, you might get left behind by your competitors if they jump to the new process node early. As for the performance increase, I think it does not depend solely on the die shrink alone; it also depends on the architecture.
February 5, 2014 5:16:48 PM

So it's completely possible that the 20nm Maxwell cards will have the same performance as Kepler, and the only guaranteed improvement being lower power consumption?
February 5, 2014 7:48:11 PM

thismafiaguy said:
So it's completely possible that the 20nm Maxwell cards will have the same performance as Kepler, and the only guaranteed improvement being lower power consumption?


Usually they can get the same performance with lower power consumption. But I've heard rumors that 20nm might not be able to bring power consumption down much from the current 28nm node, so GPU manufacturers can't rely 100% on the die shrink alone for better efficiency. In other words, they also need to tweak their core architecture to get better performance per watt. This is what Nvidia did with Kepler.