Rumored Specs of Upcoming GeForce GTX 880 Appear Online
Tags: Graphics Cards, Components
Details have been leaked about the upcoming GTX 880 graphics card from Nvidia, though we're not sure how much of it we're supposed to believe.
Rumored Specs of Upcoming GeForce GTX 880 Appear Online : Read more
warezme
April 11, 2014 12:16:42 PM
Frank Tizzle
April 11, 2014 12:36:35 PM
Zombie615
April 11, 2014 12:41:59 PM
Bif Turkle
April 11, 2014 12:43:12 PM
JOSHSKORN
April 11, 2014 12:53:20 PM
hannibal
April 11, 2014 1:03:41 PM
leoscott
April 11, 2014 1:07:24 PM
WithoutWeakness
April 11, 2014 1:08:24 PM
Quote:
Why would they give it the GTX 880 designation if it isn't the full-featured upper-end model? So what if the performance cannibalizes the Titan series. Those are old architecture that needs to go away.

For the same reason that the GTX 660 used the GK104 chip instead of the full GK110 chip. Nvidia's mid-range GK104 was performance-competitive with AMD's high-end Tahiti chip found on the HD 7970. Nvidia was able to take their mid-range chip and sell it at high-end prices because it outperformed the competition and would sell at that price. Then, while AMD evolved their GCN architecture for the Hawaii chips in the R9 290 series, Nvidia was able to sell off their high-end GK110 chips for top dollar as Tesla compute cards and eventually roll those chips into GeForce cards for the 780, 780 Ti, Titan, and Titan Black.
My guess for this generation is that it's the same deal. Nvidia feels that their mid-range GM104 chip will be competitive with AMD's offering, so they will sell the GM104 as the GTX 880 and hold onto the larger GM110 chips for high-margin Tesla cards, rolling them out later as the GTX 900 series.
Score
7
The_One_and_Only
April 11, 2014 1:09:16 PM
hannibal
April 11, 2014 1:09:47 PM
warezme said:
Why would they give it the GTX 880 designation if it isn't the full-featured upper-end model? So what if the performance cannibalizes the Titan series. Those are old architecture that needs to go away.

Since it's nVidia, they'll probably have a "Titan 2" down the road to bleed fanboys out of their money later on, haha.
If the specs are actually true, they sound more like an updated revision than a higher-tier Maxwell GPU (was it Maxwell?). The 4GB of VRAM is actually hurting them in 4K territory, as the R9-295X showed, so they might be cooking something in between to justify the stupid price tags they've been asking as of late.
Cheers!
Score
1
soldier44
April 11, 2014 3:11:46 PM
Those that say it's a waste of money clearly can't afford one or two. These cards are for people like me that upgrade every year or 18 months because they can; it's a hobby. I game at 2560x1600 and have been for over 3 years now. Next on the list is a 4K display and maybe one or two of these bad boys.
Score
-13
nolarrow
April 11, 2014 4:00:13 PM
nolarrow
April 11, 2014 4:01:10 PM
Quote:
That card was a goddamn RAID BOSS. Surprisingly, my "aging" GTX 570 is right up there among my all-time favorite and longest-lasting gfx cards at my current 1920x1080 120Hz rez. In no particular order:
1. 3dfx voodoo 2s in SLI
2. geforce 256
3. 8800GT
4. GTX 570
I had to log in and copy-paste my message, and lost the first part.
It was supposed to start with "I hope it lasts as long as my original 8800 GT"
Sorry for the double post
Score
0
jrharbort
April 11, 2014 5:27:31 PM
One detail not mentioned is how the Maxwell architecture utilizes a much larger L2 cache, allowing it to do far more on-die instead of having to fetch data as often from the higher-latency GDDR5 memory. This lets them get away with a narrower memory bus while still offering even higher performance (read up on the details and benchmarks of the GTX 860M, which is already released).
Score
7
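A rough back-of-the-envelope sketch of the tradeoff jrharbort describes. The hit rates, request rate, and clocks below are invented for illustration; they are not measured Maxwell or Kepler figures:

```python
# Illustrative only: how a larger L2 cache can offset a narrower memory bus.
# Hit rates, request rate, and clocks are made up, not measured Maxwell data.

def dram_traffic(total_requests_gbps: float, l2_hit_rate: float) -> float:
    """GB/s of traffic that still reaches GDDR5 after the L2 filters hits."""
    return total_requests_gbps * (1.0 - l2_hit_rate)

def bus_bandwidth(bus_width_bits: int, effective_clock_ghz: float) -> float:
    """Peak GDDR5 bandwidth in GB/s: (bus width / 8 bits per byte) * clock."""
    return bus_width_bits / 8 * effective_clock_ghz

requests = 300.0  # GB/s of raw memory requests from the shader cores

kepler_like = dram_traffic(requests, l2_hit_rate=0.30)   # small L2
maxwell_like = dram_traffic(requests, l2_hit_rate=0.55)  # much larger L2

print(f"DRAM traffic with small L2: {kepler_like:.0f} GB/s")       # 210 GB/s
print(f"DRAM traffic with large L2: {maxwell_like:.0f} GB/s")      # 135 GB/s
print(f"256-bit @ 7 GHz peak: {bus_bandwidth(256, 7.0):.0f} GB/s") # 224 GB/s
print(f"192-bit @ 7 GHz peak: {bus_bandwidth(192, 7.0):.0f} GB/s") # 168 GB/s
```

With these made-up hit rates, a workload that needed a 256-bit bus now fits comfortably on a 192-bit one, which is the shape of the GTX 860M result jrharbort points to.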
Quote:
One detail not mentioned is how the Maxwell architecture utilizes a much larger L2 cache, allowing it to do far more on-die instead of having to fetch data as often from the higher-latency GDDR5 memory. This lets them get away with a narrower memory bus while still offering even higher performance (read up on the details and benchmarks of the GTX 860M, which is already released).

that's a bold strategy, Cotton. let's see if it pays off for 'em.
Score
2
MasterMace
April 11, 2014 8:53:55 PM
If it's not a successor to the 780 Ti (15 SMX), then it's likely a Titan successor (14 SMX) or a 780 successor (12 SMX). Using the Maxwell model, a 15 SMX successor would likely have 25 SMMs and 5 GPCs. What gets tricky is the ROPs, as we haven't seen a multi-GPC Maxwell yet. I believe they are scaling it directly with the GM107, which would give 16 ROPs, 2MB L2 cache, and 128-bit memory per GPC. Disable 1 SMM and you have:
5 GPCs
24 SMMs
10MB L2 Cache
3072 Shader Cores
192 Texture Units
80 ROPs
640-Bit Memory Bus
6GB GDDR5 RAM
If it's a GTX 780 successor, disable 1 GPC:
4 GPCs
20 SMMs
8MB L2 Cache
2560 Shader Cores
160 Texture Units
64 ROPs
512-Bit Memory Bus
3GB GDDR5 RAM
I personally would love this. But if we're throwing out rumors, then here's a rumor via logic.
Score
0
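MasterMace's arithmetic checks out if you treat GM107 as a single-GPC building block. A quick sketch, assuming the poster's per-GPC ratios (16 ROPs, 2MB L2, 128-bit per GPC; 128 shader cores and 8 texture units per SMM), which are speculation rather than confirmed GM2xx specs:

```python
# Sanity check of the scaling arithmetic above. GM107 is treated as one
# Maxwell GPC: 16 ROPs, 2MB L2, 128-bit memory, with 128 shader cores and
# 8 texture units per SMM. These ratios are the poster's speculation, not
# confirmed GM2xx specifications.

SHADERS_PER_SMM = 128
TMUS_PER_SMM = 8

def maxwell_config(gpcs: int, smms: int) -> dict:
    return {
        "GPCs": gpcs,
        "SMMs": smms,
        "L2 cache (MB)": gpcs * 2,
        "Shader cores": smms * SHADERS_PER_SMM,
        "Texture units": smms * TMUS_PER_SMM,
        "ROPs": gpcs * 16,
        "Memory bus (bits)": gpcs * 128,
    }

# 25-SMM die with 1 SMM disabled (the 780 Ti successor guess):
print(maxwell_config(gpcs=5, smms=24))
# Same die with one full GPC disabled (the GTX 780 successor guess):
print(maxwell_config(gpcs=4, smms=20))
```

The output reproduces both of the spec lists in the post (3072 cores / 80 ROPs / 640-bit, and 2560 cores / 64 ROPs / 512-bit).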
LORD_ORION
April 11, 2014 9:29:51 PM
danwat1234
April 12, 2014 12:12:08 AM
WithoutWeakness said:
For the same reason that the GTX 660 used the GK104 chip instead of the full GK110 chip. Nvidia's mid-range GK104 was performance-competitive with AMD's high-end Tahiti chip found on the HD 7970. Nvidia was able to take their mid-range chip and sell it at high-end prices because it outperformed the competition and would sell at that price. Then, while AMD evolved their GCN architecture for the Hawaii chips in the R9 290 series, Nvidia was able to sell off their high-end GK110 chips for top dollar as Tesla compute cards and eventually roll those chips into GeForce cards for the 780, 780 Ti, Titan, and Titan Black.

My guess for this generation is that it's the same deal. Nvidia feels that their mid-range GM104 chip will be competitive with AMD's offering, so they will sell the GM104 as the GTX 880 and hold onto the larger GM110 chips for high-margin Tesla cards, rolling them out later as the GTX 900 series.
Not this stupid myth again...
The GK104 was a high-end GPU. It's almost as big as AMD's Tahiti, and much bigger than AMD's midrange GPU at the time, Pitcairn.
If you want to get into the discussion about who got the most out of each square mm of die, then it's AMD: the R9 290X is only slightly slower than the GTX 780 Ti, even though Hawaii is much smaller than GK110.
The size difference between GK110 and Hawaii is 123 square mm. The difference between Tahiti and GK104 is only 58 square mm. And the difference from what you call Nvidia's high-end GPU, the GK110, to what you call AMD's high-end GPU, Tahiti, is a whopping 209 square mm. There's no way these GPUs are in the same league. It's like comparing a Humvee with a tank.
Score
1
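The differences quoted above line up with the die areas commonly reported at the time. A quick check, using approximate published figures that may be off by a few square mm either way:

```python
# Approximate published die areas (mm^2); figures commonly reported at the
# time and possibly off by a few mm^2 either way.
die_mm2 = {"GK110": 561, "Hawaii": 438, "Tahiti": 352, "GK104": 294}

print(die_mm2["GK110"] - die_mm2["Hawaii"])   # 123
print(die_mm2["Tahiti"] - die_mm2["GK104"])   # 58
print(die_mm2["GK110"] - die_mm2["Tahiti"])   # 209
```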
Anonymous
April 12, 2014 6:32:32 AM
There was some earlier mention by Nvidia of a built-in ARM processor in the top Maxwell chips, which was rumored to be for processing high-def audio for HDMI. I'm wondering if this proc could also be programmed with CUDA to do other tasks, like a compute boost, or at least partially off-loading physics overhead. I guess we'll have to wait a while after the review samples get shipped. No matter what, new hardware releases from the big 3 are always pretty exciting for me.
Score
0
kiniku
April 12, 2014 9:32:26 AM
Lessthannil
April 12, 2014 10:00:29 AM
WithoutWeakness said:
Quote:
Why would they give it the GTX 880 designation if it isn't the full-featured upper-end model? So what if the performance cannibalizes the Titan series. Those are old architecture that needs to go away.

For the same reason that the GTX 660 used the GK104 chip instead of the full GK110 chip. Nvidia's mid-range GK104 was performance-competitive with AMD's high-end Tahiti chip found on the HD 7970. Nvidia was able to take their mid-range chip and sell it at high-end prices because it outperformed the competition and would sell at that price. Then, while AMD evolved their GCN architecture for the Hawaii chips in the R9 290 series, Nvidia was able to sell off their high-end GK110 chips for top dollar as Tesla compute cards and eventually roll those chips into GeForce cards for the 780, 780 Ti, Titan, and Titan Black.

My guess for this generation is that it's the same deal. Nvidia feels that their mid-range GM104 chip will be competitive with AMD's offering, so they will sell the GM104 as the GTX 880 and hold onto the larger GM110 chips for high-margin Tesla cards, rolling them out later as the GTX 900 series.
GK110 had such poor yields that it couldn't be used for the GTX 680. GK100 was cancelled, too. Keep in mind that GK110 was around a 550mm^2 die with a new architecture put on a relatively new 28nm process. That is pretty much the perfect storm for getting bad yields.
Score
1
Quote:
WithoutWeakness said:
For the same reason that the GTX 660 used the GK104 chip instead of the full GK110 chip. Nvidia's mid-range GK104 was performance-competitive with AMD's high-end Tahiti chip found on the HD 7970. Nvidia was able to take their mid-range chip and sell it at high-end prices because it outperformed the competition and would sell at that price. Then, while AMD evolved their GCN architecture for the Hawaii chips in the R9 290 series, Nvidia was able to sell off their high-end GK110 chips for top dollar as Tesla compute cards and eventually roll those chips into GeForce cards for the 780, 780 Ti, Titan, and Titan Black.

My guess for this generation is that it's the same deal. Nvidia feels that their mid-range GM104 chip will be competitive with AMD's offering, so they will sell the GM104 as the GTX 880 and hold onto the larger GM110 chips for high-margin Tesla cards, rolling them out later as the GTX 900 series.
Not this stupid myth again...
The GK104 was a high-end GPU. It's almost as big as AMD's Tahiti, and much bigger than AMD's midrange GPU at the time, Pitcairn.
If you want to get into the discussion about who got the most out of each square mm of die, then it's AMD: the R9 290X is only slightly slower than the GTX 780 Ti, even though Hawaii is much smaller than GK110.
The size difference between GK110 and Hawaii is 123 square mm. The difference between Tahiti and GK104 is only 58 square mm. And the difference from what you call Nvidia's high-end GPU, the GK110, to what you call AMD's high-end GPU, Tahiti, is a whopping 209 square mm. There's no way these GPUs are in the same league. It's like comparing a Humvee with a tank.
from a consumer standpoint, die size doesn't matter at all. cost, tdp, cooling, performance. those are the bottom lines to be compared.
Score
1
darknate
April 12, 2014 1:28:55 PM
neon neophyte said:
from a consumer standpoint, die size doesn't matter at all. cost, tdp, cooling, performance. those are the bottom lines to be compared.

Cost tends to be proportional to die size (at least for fully enabled GPUs). So the bigger GK110 most likely represents more value for Nvidia than Hawaii or Tahiti does for AMD.
Score
0
chaospower
April 12, 2014 3:33:11 PM
WithoutWeakness said:
Quote:
Why would they give it the GTX 880 designation if it isn't the full-featured upper-end model? So what if the performance cannibalizes the Titan series. Those are old architecture that needs to go away.

For the same reason that the GTX 660 used the GK104 chip instead of the full GK110 chip. Nvidia's mid-range GK104 was performance-competitive with AMD's high-end Tahiti chip found on the HD 7970. Nvidia was able to take their mid-range chip and sell it at high-end prices because it outperformed the competition and would sell at that price. Then, while AMD evolved their GCN architecture for the Hawaii chips in the R9 290 series, Nvidia was able to sell off their high-end GK110 chips for top dollar as Tesla compute cards and eventually roll those chips into GeForce cards for the 780, 780 Ti, Titan, and Titan Black.

My guess for this generation is that it's the same deal. Nvidia feels that their mid-range GM104 chip will be competitive with AMD's offering, so they will sell the GM104 as the GTX 880 and hold onto the larger GM110 chips for high-margin Tesla cards, rolling them out later as the GTX 900 series.
You're absolutely right. It only makes sense for it to be that way. Nvidia is onto a good money-making scheme here.
And I'm sure you meant the GTX 680 and not the GTX 660, which isn't a GK104 chip, nor could it (or was it meant to) compete with the HD 7970.
Score
0
chaospower said:
You're absolutely right. It only makes sense for it to be that way. Nvidia is onto a good money-making scheme here. And I'm sure you meant the GTX 680 and not the GTX 660, which isn't a GK104 chip, nor could it (or was it meant to) compete with the HD 7970.
Dude, I just debunked that myth.
Score
0
Quote:
neon neophyte said:
from a consumer standpoint, die size doesn't matter at all. cost, tdp, cooling, performance. those are the bottom lines to be compared.

Cost tends to be proportional to die size (at least for fully enabled GPUs). So the bigger GK110 most likely represents more value for Nvidia than Hawaii or Tahiti does for AMD.
there are a lot of other factors in consumer price. die size doesn't matter to the consumer. cost does.
Score
0
neon neophyte said:
there are a lot of other factors in consumer price. die size doesn't matter to the consumer. cost does.

Do you even understand what we're talking about?
The claim is that Nvidia was selling a small, cheap (to produce) midrange GPU (GK104) at a premium because the performance matched a larger high-end GPU (Tahiti) from AMD. The point being that Nvidia would have a competitive advantage, because their costs would be lower and/or they could get more GPUs for the same money (lower cost per GPU). If they then set the same price as AMD, the consumer wouldn't see any difference, but Nvidia would be making a bigger profit per GPU than AMD. Alternatively, they could set a lower price and still make the same profit per GPU as AMD, but with consumers preferring them due to the lower price with equal performance.
It's a solid argument, but the basic assumption doesn't really hold. GK104 was only a bit smaller than Tahiti, so they were roughly equally matched GPUs, rather than a midrange GPU going up against a high-end GPU. Both AMD and Nvidia later released even higher-end GPUs, and in that case Nvidia went with a GPU that was much larger than AMD's. So if anything, AMD now has the kind of advantage people are busy crediting Nvidia with.
Score
2
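A rough wafer-economics sketch of the argument above. The wafer cost and yield are invented round numbers, and the dies-per-wafer estimate ignores edge loss, so treat the output as illustrative only:

```python
# Rough wafer-economics sketch behind the argument above. The wafer cost
# and yield are invented round numbers, and the dies-per-wafer estimate
# ignores edge loss, so treat the output as illustrative only.
import math

WAFER_DIAMETER_MM = 300
WAFER_COST = 5000.0  # hypothetical price of a processed 28nm wafer, USD
YIELD = 0.6          # hypothetical fraction of dies that work

def dies_per_wafer(die_mm2: float) -> int:
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2  # ~70,686 mm^2
    return int(wafer_area / die_mm2)  # crude upper bound, no edge loss

def cost_per_good_die(die_mm2: float) -> float:
    return WAFER_COST / (dies_per_wafer(die_mm2) * YIELD)

for name, area in [("GK104", 294), ("Tahiti", 352), ("GK110", 561)]:
    print(f"{name}: ~{dies_per_wafer(area)} dies/wafer, "
          f"~${cost_per_good_die(area):.0f} per good die")
```

On these invented numbers, GK104 and Tahiti land within a few dollars of each other per die, while GK110 costs nearly twice as much; the big cost gap is between GK110 and everything else, not between GK104 and Tahiti.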
jasonelmore
April 12, 2014 11:33:07 PM
Christopher Shaffer
April 14, 2014 11:51:04 AM
Quote:
neon neophyte said:
there are a lot of other factors in consumer price. die size doesn't matter to the consumer. cost does.

Do you even understand what we're talking about?
The claim is that Nvidia was selling a small, cheap (to produce) midrange GPU (GK104) at a premium because the performance matched a larger high-end GPU (Tahiti) from AMD. The point being that Nvidia would have a competitive advantage, because their costs would be lower and/or they could get more GPUs for the same money (lower cost per GPU). If they then set the same price as AMD, the consumer wouldn't see any difference, but Nvidia would be making a bigger profit per GPU than AMD. Alternatively, they could set a lower price and still make the same profit per GPU as AMD, but with consumers preferring them due to the lower price with equal performance.
It's a solid argument, but the basic assumption doesn't really hold. GK104 was only a bit smaller than Tahiti, so they were roughly equally matched GPUs, rather than a midrange GPU going up against a high-end GPU. Both AMD and Nvidia later released even higher-end GPUs, and in that case Nvidia went with a GPU that was much larger than AMD's. So if anything, AMD now has the kind of advantage people are busy crediting Nvidia with.
I don't understand why you guys are making ANY comparison with die size vs. price. Simply having a slightly smaller or larger GPU physical size might impact the material needed to produce it, but that is hardly accounting for the overall cost in production.
The transistor size and count, and the cost of the process needed to produce such a processor, have a LOT more effect on the price than the physical size of the resulting chip. This is why a 20nm process, for example, is such a big deal.
Anyone who thinks a marginal difference in die size is the #1 contributing factor, or that in some way the production of such a chip doesn't dictate its cost, doesn't understand processor manufacturing at all. We're talking about silicon. It's cheap to produce and readily available.
The cost of producing a chip has a lot more to do with the design process of actually creating the transistor arrangement and then customizing the fab to produce this arrangement on the die you want. Making a smaller die actually makes this process more DIFFICULT, not cheaper. Diameter is also not the only factor - transistors are stacked. Putting an extra vertical layer on top rather than spanning the width of the die makes the diameter shrink considerably. Have you ever looked at a Pitcairn and Kepler side by side? You can see the thickness is different.
All of this is bullshit, anyway. Even if their fab process gives them a competitive advantage, then it's AMD's bad for not improving theirs. It's in no way "cheating" or giving consumers less for their money.
This statement also makes zero sense:
Quote:
The claim is that Nvidia was selling a small, cheap (to produce) midrange GPU (GK104) at a premium because the performance matched a larger high-end GPU (Tahiti) from AMD.

It makes no sense because it propagates the myth that a more expensive GPU or CPU is expensive only because of the cost to produce. The fact is that you are paying for the cost to produce as an overall factor of production overhead with every product you buy, including processors. HOWEVER, what you are paying for in the price differences between GPUs and CPUs is PERFORMANCE.
Nvidia should not be faulted for designing a cheap-to-produce GPU that matched a more expensive-to-produce GPU from AMD and selling it at relatively the same price.
The goal in product development is always to produce the best product for the lowest cost to allow the selling price to be competitive and offer comparable or better performance to similar products on the market.
Score
0
Christopher Shaffer said:
I don't understand why you guys are making ANY comparison with die size vs. price. Simply having a slightly smaller or larger GPU physical size might impact the material needed to produce it, but that hardly accounts for the overall cost of production.

The transistor size and count, and the cost of the process needed to produce such a processor, have a LOT more effect on the price than the physical size of the resulting chip. This is why a 20nm process, for example, is such a big deal.
Research and development is the main cost driver. The way they earn back that huge investment is by producing wafers, each of which yields a limited number of dies. R&D cost should be split evenly across those dies to determine the cost per die. Now, if a GPU takes up a larger die area, it has cost more: it takes a larger chunk out of each wafer, so fewer dies come out of the limited number of wafers they can make. If you can get the same performance from a processor that takes up a smaller die area, that is a major advantage.
Score
0
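A toy model of that amortization point, with invented budget and volume figures, just to show how die area feeds into per-die cost twice (fewer dies per wafer, and fewer total dies to spread R&D over):

```python
# Toy model of the amortization point above: a fixed R&D budget spread over
# however many dies the production run yields. Every number is invented.
RND_BUDGET = 300e6         # hypothetical R&D spend for the GPU family, USD
WAFERS_PRODUCED = 100_000  # hypothetical size of the production run
WAFER_COST = 5000.0        # hypothetical cost per processed wafer, USD

def amortized_cost_per_die(dies_per_wafer: int) -> float:
    total_dies = dies_per_wafer * WAFERS_PRODUCED
    return WAFER_COST / dies_per_wafer + RND_BUDGET / total_dies

# Dies-per-wafer counts borrowed from the wafer sketch earlier in the
# thread: a large ~560mm^2 die vs a small ~290mm^2 one.
print(f"large die: ~${amortized_cost_per_die(126):.0f}")  # ~$63
print(f"small die: ~${amortized_cost_per_die(240):.0f}")  # ~$33
```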
yogalD
April 15, 2014 3:03:11 AM
Zombie615
April 15, 2014 6:24:53 AM
soldier44 said:
Those that say it's a waste of money clearly can't afford one or two. These cards are for people like me that upgrade every year or 18 months because they can; it's a hobby. I game at 2560x1600 and have been for over 3 years now. Next on the list is a 4K display and maybe one or two of these bad boys.

Oh, aren't you just so big and bad, lmao! Dude, some people have a brain and don't buy into the newest thing whether they can afford it or not. There are a million other things I'd rather spend several hundred dollars on. If you want to spend a ton of money on a 0.01% upgrade every year, then go ahead. Just don't sit there and assume someone can't afford something because they say it's a waste of money. The games I play wouldn't benefit from such a card, and even if they did, it wouldn't affect the gameplay, so it's pointless to me.
I spend my money on memorable things like family vacations or new parts for my car. Gaming is on the sidelines in my life. Yeah, I enjoy gaming when I have the time, but having the greatest gaming rig every year just because I've got a bank account full of money isn't at the top of my list. Get off your high horse, lol.
Score
0
Quote:
WithoutWeakness said:
For the same reason that the GTX 660 used the GK104 chip instead of the full GK110 chip. Nvidia's mid-range GK104 was performance-competitive with AMD's high-end Tahiti chip found on the HD 7970. Nvidia was able to take their mid-range chip and sell it at high-end prices because it outperformed the competition and would sell at that price. Then, while AMD evolved their GCN architecture for the Hawaii chips in the R9 290 series, Nvidia was able to sell off their high-end GK110 chips for top dollar as Tesla compute cards and eventually roll those chips into GeForce cards for the 780, 780 Ti, Titan, and Titan Black.

My guess for this generation is that it's the same deal. Nvidia feels that their mid-range GM104 chip will be competitive with AMD's offering, so they will sell the GM104 as the GTX 880 and hold onto the larger GM110 chips for high-margin Tesla cards, rolling them out later as the GTX 900 series.
Not this stupid myth again...
The GK104 was a high-end GPU. It's almost as big as AMD's Tahiti, and much bigger than AMD's midrange GPU at the time, Pitcairn.
If you want to get into the discussion about who got the most out of each square mm of die, then it's AMD: the R9 290X is only slightly slower than the GTX 780 Ti, even though Hawaii is much smaller than GK110.
The size difference between GK110 and Hawaii is 123 square mm. The difference between Tahiti and GK104 is only 58 square mm. And the difference from what you call Nvidia's high-end GPU, the GK110, to what you call AMD's high-end GPU, Tahiti, is a whopping 209 square mm. There's no way these GPUs are in the same league. It's like comparing a Humvee with a tank.
My god, thank you! I am so tired of these fanboys! Yes, it was called GK104, but they couldn't even make enough GK110s to get them to market! Hell, if you want to talk about competition, consider the fact that the 7970 basically held its crown for a full year before Nvidia finally beat it with a $1000 card.
Score
0
urbanman2004
June 27, 2014 10:52:35 PM
yogalD
June 28, 2014 12:35:21 AM