Closed

Rumored Specs of Upcoming GeForce GTX 880 Appear Online

Tags:
  • Graphics Cards
  • Components
April 11, 2014 12:16:42 PM

Why would they give it the GTX 880 designation if it isn't the full-featured upper-end model? So what if the performance cannibalizes the Titan series? Those are old architectures that need to go away.
Score
6
April 11, 2014 12:36:35 PM

I would hope that they researched this enough to find some legitimacy in some of the specs, rather than just re-posting it.
Score
6
April 11, 2014 12:41:59 PM

Sounds like a waste of money once again.....
Score
0
April 11, 2014 12:43:12 PM

I can't wait for more inflated performance on my 1920x1080 60Hz monitor. I think $825.99 would be a good price to start this card at, too.
Score
-7
April 11, 2014 12:53:20 PM

When can we see GDDR6?
Score
-5
April 11, 2014 1:03:41 PM

Maybe this will come in below Titan... and Titan would be the next uber model... But early rumors are always best served with salt...
The memory width is actually quite believable, because Kepler seems to be reasonably well fed even with narrower memory bandwidth.
Score
5
April 11, 2014 1:07:24 PM

Quote:
When can we see GDDR6?

I'd prefer a die shrink and less power consumption or more performance. When is the die shrink from 28nm coming?
Score
3
April 11, 2014 1:08:24 PM

Quote:
Why would they give it the GTX 880 designation if it isn't the full-featured upper-end model? So what if the performance cannibalizes the Titan series? Those are old architectures that need to go away.

For the same reason that the GTX 660 used the GK104 chip instead of the full GK110 chip. Nvidia's mid-range GK104 was performance-competitive with AMD's high-end Tahiti chip found on the HD 7970. Nvidia was able to take their mid-range chip and sell it at high-end prices because it outperformed the competition and would sell at that price. Then, while AMD evolved their GCN architecture into the Hawaii chips in the R9 290 series, Nvidia was able to sell off their high-end GK110 chips for top dollar as Tesla compute cards and eventually roll those chips into GeForce cards for the 780, 780 Ti, Titan, and Titan Black.

My guess for this generation is that it's the same deal. Nvidia feels that their mid-range GM104 chip will be competitive with AMD's offering, so they will sell the GM104 as the GTX 880 and hold onto the larger GM110 chips for high-margin Tesla cards, rolling them out later as the GTX 900 series.
Score
7
April 11, 2014 1:09:16 PM

IF this is not some teenager's wet dream and is somewhat credible, it could be that they plan on having Titans from here on out and want the 780 Ti guys to pony up more cash for the performance. Just a thought on probably fake info....
Score
0
April 11, 2014 1:09:47 PM

leoscott said:
Quote:
When can we see GDDR6?

I'd prefer a die shrink and less power consumption or more performance. When is the die shrink from 28nm coming?


At the end of this year...
Score
-1
April 11, 2014 1:21:49 PM

warezme said:
Why would they give it the GTX 880 designation if it isn't the full-featured upper-end model? So what if the performance cannibalizes the Titan series? Those are old architectures that need to go away.


Since it's Nvidia, they'll probably have a "Titan 2" down the road to bleed fanboys of their money later on, haha.

If the specs are actually true, they sound more like an updated revision than a higher-tier Maxwell GPU (was it Maxwell?). The 4GB of VRAM is actually hurting them in 4K territory, given what the R9 295X showed, so they might be cooking up something in between to justify the stupid price tags they've been asking as of late.

Cheers!
Score
1
April 11, 2014 3:11:46 PM

Those who say it's a waste of money clearly can't afford one or two. These cards are for people like me who upgrade every year or 18 months because they can; it's a hobby. I game at 2560x1600 and have been for over 3 years now. Next on the list is a 4K display and maybe one or two of these bad boys.
Score
-13
April 11, 2014 4:00:13 PM

That card was a goddamn RAID BOSS. Surprisingly, my "aging" GTX 570 is right up there among my all-time favorite and longest-lasting graphics cards at my current 1920x1080 120Hz res.

In no particular order:

1. 3dfx Voodoo 2s in SLI
2. GeForce 256
3. 8800 GT
4. GTX 570
Score
-2
April 11, 2014 4:01:10 PM

Quote:
That card was a goddamn RAID BOSS. Surprisingly, my "aging" GTX 570 is right up there among my all-time favorite and longest-lasting graphics cards at my current 1920x1080 120Hz res.

In no particular order:

1. 3dfx Voodoo 2s in SLI
2. GeForce 256
3. 8800 GT
4. GTX 570



I had to log in and copy-paste my message, and lost the first part.

It was supposed to start with "I hope it lasts as long as my original 8800 GT."

Sorry for the double post.
Score
0
April 11, 2014 5:27:31 PM

One detail not mentioned is how the Maxwell architecture uses a much larger L2 cache, allowing it to do far more on-die instead of having to fetch data as often from the higher-latency GDDR5 memory. This lets them get away with a narrower memory bus while still offering even higher performance (read up on the details and benchmarks of the GTX 860M, which is already released).
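The effect described here can be sketched with a toy calculation. To be clear, this is an illustrative model only: the hit rates below are made-up numbers chosen to show the mechanism, not measured Kepler or Maxwell figures.

```python
# Illustrative only: how a larger L2 cache can offset a narrower memory bus.
# Hit rates are invented for demonstration, not measured GPU figures.

def dram_traffic(total_requests_gb, l2_hit_rate):
    """Off-chip (GDDR5) traffic left over after the L2 cache absorbs its share."""
    return total_requests_gb * (1.0 - l2_hit_rate)

requests = 100.0  # GB of memory requests issued by the GPU cores (assumed)

kepler_like = dram_traffic(requests, l2_hit_rate=0.30)   # small L2
maxwell_like = dram_traffic(requests, l2_hit_rate=0.65)  # much larger L2

print(f"Kepler-like off-chip traffic:  {kepler_like:.0f} GB")
print(f"Maxwell-like off-chip traffic: {maxwell_like:.0f} GB")
# Roughly halving the off-chip traffic means a bus roughly half as wide can keep up.
```

The point is just that the bus width requirement scales with the traffic that misses the cache, not with the traffic the cores generate.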
Score
7
April 11, 2014 7:41:35 PM

Quote:
One detail not mentioned is how the Maxwell architecture uses a much larger L2 cache, allowing it to do far more on-die instead of having to fetch data as often from the higher-latency GDDR5 memory. This lets them get away with a narrower memory bus while still offering even higher performance (read up on the details and benchmarks of the GTX 860M, which is already released).


That's a bold strategy, Cotton. Let's see if it pays off for 'em.
Score
2
April 11, 2014 8:53:55 PM

If it's not a successor to the 780 Ti (15 SMX), then it's likely a Titan successor (14 SMX) or a 780 successor (12 SMX). Using the Maxwell model, a 15-SMX successor would likely have 25 SMMs and 5 GPCs. What gets tricky is the ROPs, as we haven't seen a multi-GPC Maxwell yet. I believe they are scaling it directly from the GM107. That would give 16 ROPs, 2MB of L2 cache, and 128-bit memory per GPC. Disable 1 SMM and you have:

5 GPCs
24 SMMs
10MB L2 Cache
3072 Shader Cores
192 Texture Units
80 ROPs
640-Bit Memory Bus
6GB GDDR5 RAM

If it's a GTX 780 successor, disable 1 GPC:

4 GPCs
20 SMMs
8MB L2 Cache
2560 Shader Cores
160 Texture Units
64 ROPs
512-Bit Memory Bus
3GB GDDR5 RAM

I personally would love this. But if we're throwing out rumors, then here's a rumor via logic.
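The per-GPC scaling above can be sketched in a few lines. Keep in mind the per-GPC and per-SMM figures (16 ROPs, 2MB L2, and 128-bit memory per GPC; 128 cores and 8 TMUs per SMM, as on GM107) are this extrapolation's assumptions, not confirmed specs:

```python
# Sketch of the per-GPC scaling used in the spec lists above. The per-GPC and
# per-SMM figures are extrapolated from GM107, not confirmed GM2xx specs.

def maxwell_config(gpcs, smms):
    """Scale assumed per-GPC and per-SMM resources up to a full-chip spec."""
    return {
        "SMMs": smms,
        "L2 cache (MB)": gpcs * 2,        # 2 MB L2 per GPC (assumed)
        "shader cores": smms * 128,       # 128 CUDA cores per SMM (as on GM107)
        "texture units": smms * 8,        # 8 TMUs per SMM (as on GM107)
        "ROPs": gpcs * 16,                # 16 ROPs per GPC (assumed)
        "memory bus (bits)": gpcs * 128,  # 128-bit memory per GPC (assumed)
    }

# 5 GPCs with 1 of 25 SMMs disabled -> the first spec list
print(maxwell_config(gpcs=5, smms=24))
# 4 GPCs, 20 SMMs -> the GTX 780-successor spec list
print(maxwell_config(gpcs=4, smms=20))
```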
Score
0
April 11, 2014 9:29:51 PM

So maybe this will be another 8800 GT.

They will release the 880 at $225, and it will be around 85% as fast as a 780 Ti.
Score
0
April 12, 2014 12:12:08 AM

The 870M/880M mobile GPUs are out, though they are Kepler, I guess.
Score
0
April 12, 2014 4:42:56 AM

WithoutWeakness said:
For the same reason that the GTX 660 used the GK104 chip instead of the full GK110 chip. Nvidia's mid-range GK104 was performance-competitive with AMD's high-end Tahiti chip found on the HD 7970. Nvidia was able to take their mid-range chip and sell it at high-end prices because it outperformed the competition and would sell at that price. Then, while AMD evolved their GCN architecture into the Hawaii chips in the R9 290 series, Nvidia was able to sell off their high-end GK110 chips for top dollar as Tesla compute cards and eventually roll those chips into GeForce cards for the 780, 780 Ti, Titan, and Titan Black.

My guess for this generation is that it's the same deal. Nvidia feels that their mid-range GM104 chip will be competitive with AMD's offering, so they will sell the GM104 as the GTX 880 and hold onto the larger GM110 chips for high-margin Tesla cards, rolling them out later as the GTX 900 series.

Not this stupid myth again...

The GK104 was a high-end GPU. It's almost as big as AMD's Tahiti, and much bigger than AMD's mid-range GPU at the time, Pitcairn.

If you want to get into the discussion of who got the most out of each square mm of die, then it's AMD: the R9 290X is only slightly slower than the GTX 780 Ti, even though Hawaii is much smaller than GK110.

The size difference between GK110 and Hawaii is 123 square mm. The difference between Tahiti and GK104 is only 58 square mm. And the difference from what you call Nvidia's high-end GPU, the GK110, to what you call AMD's high-end GPU, Tahiti, is a whopping 209 square mm. There's no way these GPUs are in the same league. It's like comparing a Humvee with a tank.
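For reference, the deltas quoted here follow from the commonly cited 28nm die areas:

```python
# Commonly cited 28nm die areas (mm^2); the deltas match the figures above.
die_mm2 = {
    "GK104": 294,   # GTX 680 / 770
    "Tahiti": 352,  # HD 7970
    "Hawaii": 438,  # R9 290 / 290X
    "GK110": 561,   # GTX 780 / 780 Ti / Titan
}

print(die_mm2["GK110"] - die_mm2["Hawaii"])   # 123
print(die_mm2["Tahiti"] - die_mm2["GK104"])   # 58
print(die_mm2["GK110"] - die_mm2["Tahiti"])   # 209
```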
Score
1
Anonymous
April 12, 2014 6:32:32 AM

There was some earlier mention by Nvidia of a built-in ARM processor in the top Maxwell chips, which was rumored to be for processing high-def audio for HDMI. I'm wondering if this processor could also be programmed with CUDA to do other tasks, like a compute boost, or at least partially offloading physics overhead. I guess we'll have to wait a while after the review samples get shipped. No matter what, new hardware releases from the big 3 are always pretty exciting for me :).
Score
0
April 12, 2014 8:15:50 AM

Interested to see more on this... but I wonder what they will price it at. Just south of $1k again?
Score
0
April 12, 2014 9:32:26 AM

I'm using a GTX 580. It's a wattage vampire for sure, but still a very fast card for my purposes... mostly MMOs, but some others. An "860" series card may be my next GPU upgrade, depending on how it pans out.
Score
0
April 12, 2014 10:00:29 AM

WithoutWeakness said:
Quote:
Why would they give it the GTX 880 designation if it isn't the full-featured upper-end model? So what if the performance cannibalizes the Titan series? Those are old architectures that need to go away.

For the same reason that the GTX 660 used the GK104 chip instead of the full GK110 chip. Nvidia's mid-range GK104 was performance-competitive with AMD's high-end Tahiti chip found on the HD 7970. Nvidia was able to take their mid-range chip and sell it at high-end prices because it outperformed the competition and would sell at that price. Then, while AMD evolved their GCN architecture into the Hawaii chips in the R9 290 series, Nvidia was able to sell off their high-end GK110 chips for top dollar as Tesla compute cards and eventually roll those chips into GeForce cards for the 780, 780 Ti, Titan, and Titan Black.

My guess for this generation is that it's the same deal. Nvidia feels that their mid-range GM104 chip will be competitive with AMD's offering, so they will sell the GM104 as the GTX 880 and hold onto the larger GM110 chips for high-margin Tesla cards, rolling them out later as the GTX 900 series.


GK110 had such poor yields that it couldn't be used for the GTX 680. GK100 was also cancelled. Keep in mind that GK110 was around a 550mm² die with a new architecture on a relatively new 28nm process. That is pretty much the perfect storm for getting bad yields.
Score
1
April 12, 2014 12:39:45 PM

Quote:
WithoutWeakness said:
For the same reason that the GTX 660 used the GK104 chip instead of the full GK110 chip. Nvidia's mid-range GK104 was performance-competitive with AMD's high-end Tahiti chip found on the HD 7970. Nvidia was able to take their mid-range chip and sell it at high-end prices because it outperformed the competition and would sell at that price. Then, while AMD evolved their GCN architecture into the Hawaii chips in the R9 290 series, Nvidia was able to sell off their high-end GK110 chips for top dollar as Tesla compute cards and eventually roll those chips into GeForce cards for the 780, 780 Ti, Titan, and Titan Black.

My guess for this generation is that it's the same deal. Nvidia feels that their mid-range GM104 chip will be competitive with AMD's offering, so they will sell the GM104 as the GTX 880 and hold onto the larger GM110 chips for high-margin Tesla cards, rolling them out later as the GTX 900 series.

Not this stupid myth again...

The GK104 was a high-end GPU. It's almost as big as AMD's Tahiti, and much bigger than AMD's mid-range GPU at the time, Pitcairn.

If you want to get into the discussion of who got the most out of each square mm of die, then it's AMD: the R9 290X is only slightly slower than the GTX 780 Ti, even though Hawaii is much smaller than GK110.

The size difference between GK110 and Hawaii is 123 square mm. The difference between Tahiti and GK104 is only 58 square mm. And the difference from what you call Nvidia's high-end GPU, the GK110, to what you call AMD's high-end GPU, Tahiti, is a whopping 209 square mm. There's no way these GPUs are in the same league. It's like comparing a Humvee with a tank.


From a consumer standpoint, die size doesn't matter at all. Cost, TDP, cooling, performance: those are the bottom lines to be compared.
Score
1
April 12, 2014 1:28:55 PM

Quote:
The 870M/880M mobile GPUs are out, though they are Kepler, I guess.


Actually, some are Fermi, some are Kepler, and some are Maxwell.
820M = 720M, Fermi-based
830M, 840M, 850M, and some 860M are Maxwell
870M = 775M, Kepler-based
880M = 780M, Kepler-based
Score
0
April 12, 2014 2:12:00 PM

neon neophyte said:
From a consumer standpoint, die size doesn't matter at all. Cost, TDP, cooling, performance: those are the bottom lines to be compared.

Cost tends to be proportional to die size (at least for fully enabled GPUs). So the bigger GK110 most likely costs Nvidia more than Hawaii or Tahiti costs AMD.
Score
0
April 12, 2014 3:33:11 PM

WithoutWeakness said:
Quote:
Why would they give it the GTX 880 designation if it isn't the full-featured upper-end model? So what if the performance cannibalizes the Titan series? Those are old architectures that need to go away.

For the same reason that the GTX 660 used the GK104 chip instead of the full GK110 chip. Nvidia's mid-range GK104 was performance-competitive with AMD's high-end Tahiti chip found on the HD 7970. Nvidia was able to take their mid-range chip and sell it at high-end prices because it outperformed the competition and would sell at that price. Then, while AMD evolved their GCN architecture into the Hawaii chips in the R9 290 series, Nvidia was able to sell off their high-end GK110 chips for top dollar as Tesla compute cards and eventually roll those chips into GeForce cards for the 780, 780 Ti, Titan, and Titan Black.

My guess for this generation is that it's the same deal. Nvidia feels that their mid-range GM104 chip will be competitive with AMD's offering, so they will sell the GM104 as the GTX 880 and hold onto the larger GM110 chips for high-margin Tesla cards, rolling them out later as the GTX 900 series.

You're absolutely right. It only makes sense for it to be that way. Nvidia is onto a good money-making scheme here.
And I'm sure you meant the GTX 680, not the GTX 660, which isn't a GK104 chip, nor could it compete, or was it meant to compete, with the HD 7970.

Score
0
April 12, 2014 3:47:16 PM

chaospower said:
You're absolutely right. It only makes sense for it to be that way. Nvidia is onto a good money-making scheme here.
And I'm sure you meant the GTX 680, not the GTX 660, which isn't a GK104 chip, nor could it compete, or was it meant to compete, with the HD 7970.


Dude, I just debunked that myth.
Score
0
April 12, 2014 4:45:41 PM

Quote:
neon neophyte said:
From a consumer standpoint, die size doesn't matter at all. Cost, TDP, cooling, performance: those are the bottom lines to be compared.

Cost tends to be proportional to die size (at least for fully enabled GPUs). So the bigger GK110 most likely costs Nvidia more than Hawaii or Tahiti costs AMD.


There are a lot of other factors in consumer price. Die size doesn't matter to the consumer; cost does.
Score
0
April 12, 2014 4:55:35 PM

neon neophyte said:
There are a lot of other factors in consumer price. Die size doesn't matter to the consumer; cost does.

Do you even understand what we're talking about?

The claim is that Nvidia was selling a small, cheap-to-produce mid-range GPU (GK104) at a premium because its performance matched a larger high-end GPU (Tahiti) from AMD. The point being that Nvidia would have a competitive advantage, because their costs would be lower and/or they could get more GPUs for the same money (lower cost per GPU). If they then set the same price as AMD, the consumer wouldn't see any difference, but Nvidia would be making a bigger profit per GPU than AMD. Alternatively, they could set a lower price and still make the same profit per GPU as AMD, with consumers preferring them due to the lower price at equal performance.

It's a solid argument, but the basic assumption doesn't really hold. GK104 was only a bit smaller than Tahiti, so they were roughly equally matched GPUs, rather than a mid-range GPU going up against a high-end GPU. Both AMD and Nvidia later released even higher-end GPUs, and in that case Nvidia went with a GPU that was much larger than AMD's. So if anything, AMD now has the kind of advantage people are busy crediting Nvidia with.
Score
2
April 12, 2014 9:46:25 PM

Anyone else see the rumored 390X specs?

- 4224 stream processors
- 7000 MHz VRAM on a 512-bit bus
- 20nm

Is it just me, or would that absolutely destroy this supposed "880"?
Score
0
April 12, 2014 11:33:07 PM

Where the hell are the ARM cores? That was supposed to be the flagship feature of Maxwell: Denver cores to offload some of the overhead from the CPU, and the SoC was to have unified memory with the GPU.
Score
0
April 14, 2014 11:51:04 AM

Quote:
neon neophyte said:
There are a lot of other factors in consumer price. Die size doesn't matter to the consumer; cost does.

Do you even understand what we're talking about?

The claim is that Nvidia was selling a small, cheap-to-produce mid-range GPU (GK104) at a premium because its performance matched a larger high-end GPU (Tahiti) from AMD. The point being that Nvidia would have a competitive advantage, because their costs would be lower and/or they could get more GPUs for the same money (lower cost per GPU). If they then set the same price as AMD, the consumer wouldn't see any difference, but Nvidia would be making a bigger profit per GPU than AMD. Alternatively, they could set a lower price and still make the same profit per GPU as AMD, with consumers preferring them due to the lower price at equal performance.

It's a solid argument, but the basic assumption doesn't really hold. GK104 was only a bit smaller than Tahiti, so they were roughly equally matched GPUs, rather than a mid-range GPU going up against a high-end GPU. Both AMD and Nvidia later released even higher-end GPUs, and in that case Nvidia went with a GPU that was much larger than AMD's. So if anything, AMD now has the kind of advantage people are busy crediting Nvidia with.


I don't understand why you guys are making ANY comparison of die size vs. price. Simply having a slightly smaller or larger physical GPU might impact the material needed to produce it, but that hardly accounts for the overall cost of production.

The transistor size and count, and the cost of the process to produce such a processor, have a LOT more effect on the price than the physical size of the resulting chip. This is why a 20nm process, for example, is such a big deal.

Anyone who thinks a marginal difference in die size is the #1 contributing factor, or that the production of such a chip doesn't somehow dictate its cost, doesn't understand processor manufacturing at all. We're talking about silicon. It's cheap to produce and readily available.

The cost of producing a chip has a lot more to do with the design process of actually creating the transistor arrangement and then customizing the fab to produce this arrangement on the die you want. Making a smaller die actually makes this process more DIFFICULT, not cheaper. Diameter is also not the only factor: transistors are stacked. Putting an extra vertical layer on top rather than spanning the width of the die makes the diameter shrink considerably. Have you ever looked at a Pitcairn and a Kepler side by side? You can see the thickness is different.

All of this is bullshit, anyway. Even if their fab process gives them a competitive advantage, then it's AMD's fault for not improving theirs. It's in no way "cheating" or giving consumers less for their money.

This statement also makes zero sense:
Quote:
The claim is that Nvidia was selling a small, cheap-to-produce mid-range GPU (GK104) at a premium because its performance matched a larger high-end GPU (Tahiti) from AMD.


It makes no sense because it propagates the myth that a more expensive GPU or CPU is expensive only because of the cost to produce it. The fact is that you are paying for the cost to produce as part of the overall production overhead with every product you buy, including processors. HOWEVER, what you are paying for in the price differences between GPUs and CPUs is PERFORMANCE.

Nvidia should not be faulted for designing a cheap-to-produce GPU that matched a more-expensive-to-produce GPU from AMD and selling it at relatively the same price.

The goal in product development is always to produce the best product at the lowest cost, to allow the selling price to be competitive and offer comparable or better performance than similar products on the market.
Score
0
April 14, 2014 12:18:33 PM

Christopher Shaffer said:
I don't understand why you guys are making ANY comparison of die size vs. price. Simply having a slightly smaller or larger physical GPU might impact the material needed to produce it, but that hardly accounts for the overall cost of production.

The transistor size and count, and the cost of the process to produce such a processor, have a LOT more effect on the price than the physical size of the resulting chip. This is why a 20nm process, for example, is such a big deal.

Research and development is the main cost driver. The way they earn back that huge investment is by making wafers with a number of processors on them. R&D cost should be split evenly across wafers to determine the cost per wafer. Now, if a GPU takes up a larger portion of the wafer, that means it has cost more: it has taken up a large chunk of one of the limited number of wafers they can make. If you can get the same performance from a processor that takes up a smaller die area, that is a major advantage.
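A toy model makes the point concrete. The wafer cost below is an assumed round number (not a real foundry quote), and edge loss and defect yield are ignored; only the proportionality between die area and cost per chip matters:

```python
import math

# Toy model of why die area drives per-chip cost: a fixed-cost 300mm wafer
# yields fewer large dies, so cost per die rises with area. The $5000 wafer
# cost is an illustrative assumption; edge loss and defect yield are ignored.

WAFER_DIAMETER_MM = 300
WAFER_COST = 5000.0  # assumed, not a real foundry quote

def cost_per_die(die_area_mm2):
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    dies_per_wafer = wafer_area // die_area_mm2  # crude: ignores rectangular packing
    return WAFER_COST / dies_per_wafer

for name, area in [("GK104", 294), ("Tahiti", 352), ("GK110", 561)]:
    print(f"{name}: ~${cost_per_die(area):.0f} per die")
```

Even this crude version shows the larger GK110 costing roughly twice as much per die as GK104 at equal wafer cost, which is the commenter's point about die area.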
Score
0
April 15, 2014 3:03:11 AM

Quote:
I can't wait for more inflated performance on my 1920x1080 60Hz monitor. I think $825.99 would be a good price to start this card at, too.

Then buy a 120Hz monitor, or shut up, because it's obviously not aimed at you.
Score
0
April 15, 2014 6:24:53 AM

soldier44 said:
Those who say it's a waste of money clearly can't afford one or two. These cards are for people like me who upgrade every year or 18 months because they can; it's a hobby. I game at 2560x1600 and have been for over 3 years now. Next on the list is a 4K display and maybe one or two of these bad boys.


Oh, aren't you just so big and bad, lmao! Dude, some people have a brain and don't buy into the newest thing whether they can afford it or not. There are a million other things I'd rather spend several hundred dollars on. If you want to spend a ton of money on a 0.01% upgrade every year, then go ahead. Just don't sit there and assume someone can't afford something because they say it's a waste of money. The games I play wouldn't benefit from such a card, and even if they did, it wouldn't affect the gameplay; therefore it's pointless to me.

I spend my money on memorable things like family vacations or new parts for my car. Gaming is on the sidelines in my life. Yeah, I enjoy gaming when I have the time, but it's not at the top of my list to have the greatest gaming rig every year just because I've got a bank account full of money. Get off your high horse, lol.
Score
0
June 6, 2014 11:47:43 PM

Quote:
WithoutWeakness said:
For the same reason that the GTX 660 used the GK104 chip instead of the full GK110 chip. Nvidia's mid-range GK104 was performance-competitive with AMD's high-end Tahiti chip found on the HD 7970. Nvidia was able to take their mid-range chip and sell it at high-end prices because it outperformed the competition and would sell at that price. Then, while AMD evolved their GCN architecture into the Hawaii chips in the R9 290 series, Nvidia was able to sell off their high-end GK110 chips for top dollar as Tesla compute cards and eventually roll those chips into GeForce cards for the 780, 780 Ti, Titan, and Titan Black.

My guess for this generation is that it's the same deal. Nvidia feels that their mid-range GM104 chip will be competitive with AMD's offering, so they will sell the GM104 as the GTX 880 and hold onto the larger GM110 chips for high-margin Tesla cards, rolling them out later as the GTX 900 series.

Not this stupid myth again...

The GK104 was a high-end GPU. It's almost as big as AMD's Tahiti, and much bigger than AMD's mid-range GPU at the time, Pitcairn.

If you want to get into the discussion of who got the most out of each square mm of die, then it's AMD: the R9 290X is only slightly slower than the GTX 780 Ti, even though Hawaii is much smaller than GK110.

The size difference between GK110 and Hawaii is 123 square mm. The difference between Tahiti and GK104 is only 58 square mm. And the difference from what you call Nvidia's high-end GPU, the GK110, to what you call AMD's high-end GPU, Tahiti, is a whopping 209 square mm. There's no way these GPUs are in the same league. It's like comparing a Humvee with a tank.


My god, thank you! I am so tired of these fanboys! Yes, it was called GK104, but they couldn't even make enough GK110s to get them to market! Hell, if you want to talk about competition, consider the fact that the 7970 basically held its crown for a full year before Nvidia finally beat it with a $1000 card.
Score
0
June 28, 2014 12:07:26 AM

urbanman2004 said:
Fake


After two and a half months or so, is that all you have to say? Really? :lol:
Score
1
June 28, 2014 12:35:21 AM

Mousemonkey said:
urbanman2004 said:
Fake


After two and a half months or so, is that all you have to say? Really? :lol:


It takes a while to think of these things, you know!
Score
0
June 28, 2014 2:01:03 AM

:lol: 
Score
0