Report: GTX 870 Put Through Its Paces
Are these benchmarks of the GTX 870? We would like to think so.
A member on the Coolaler forums known as soothepain has posted two screenshots that show an unknown graphics card running 3DMark11 benchmarks.
We can see that the graphics card's name is given as D17U-20, which GPU-Z doesn't recognize. The same goes for the GPU, which is listed as 13C2. The member writes that this graphics card is the GTX 870, and we're tempted to believe him. GPU-Z shows 1664 CUDA cores, which run at a 1051 MHz base clock with a boost clock of up to 1178 MHz. The card carries 4 GB of GDDR5 memory, which runs over a 256-bit memory interface at an effective speed of 7012 MHz.
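If those rumored memory figures are accurate, they work out to roughly 224 GB/s of bandwidth. Here's a minimal sketch of the standard bus-width-times-data-rate arithmetic, using the leaked numbers above as inputs:

```python
# Rough memory-bandwidth estimate from the rumored specs.
# bandwidth (GB/s) = effective data rate (MT/s) * bus width (bytes) / 1000

effective_rate_mts = 7012   # effective GDDR5 data rate (7012 MHz effective)
bus_width_bits = 256        # rumored memory interface width

bandwidth_gbs = effective_rate_mts * (bus_width_bits / 8) / 1000
print(f"Estimated bandwidth: {bandwidth_gbs:.1f} GB/s")  # ~224.4 GB/s
```

For comparison, the GTX 780's 384-bit bus at 6008 MT/s works out to about 288 GB/s, which is the gap several commenters below debate.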

The test scores are 4625 points in 3DMark11's Extreme preset and 11,919 points in the Performance preset, which puts the card right around GTX 780 performance levels. The tests were run on a system with an Intel Core i7-4820K Ivy Bridge-E processor.
Unfortunately, the device ID and BIOS version of the card are obscured in the images; those bits of information would have helped us confirm which card this is even though GPU-Z doesn't recognize it. Even so, given the performance level and the specifications, along with what we know about the Maxwell architecture, it is a reasonably safe assumption that this is, in fact, the GTX 870. Of course, this remains an unconfirmed rumor. Hopefully we will know more in September, when we expect Nvidia to launch its new graphics cards.
780 is 250W TDP, 870 is 190W. Captain Planet smiles.
Maxwell has more cache, which is supposed to allow less of a need for higher memory bandwidth. It's part of the energy efficiency improvements they were doing.
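As a rough illustration of that point, here's a toy model of how a higher cache hit rate cuts the traffic that actually reaches DRAM; the hit rates and demand figure below are made-up illustrative numbers, not measured Maxwell or Kepler values:

```python
# Toy model: DRAM traffic falls as the L2 hit rate rises, so a larger
# cache lets a narrower/slower memory bus sustain the same workload.
# All numbers here are illustrative assumptions, not measured values.

def dram_traffic_gbs(total_demand_gbs: float, l2_hit_rate: float) -> float:
    """Memory traffic that actually reaches DRAM after the L2 filters it."""
    return total_demand_gbs * (1.0 - l2_hit_rate)

demand = 300.0  # hypothetical total memory demand from the shaders, GB/s

for hit_rate in (0.40, 0.60):  # e.g. a smaller-cache vs larger-cache design
    print(f"L2 hit rate {hit_rate:.0%}: "
          f"{dram_traffic_gbs(demand, hit_rate):.0f} GB/s to DRAM")
# 40% -> 180 GB/s, 60% -> 120 GB/s: more cache, less bus bandwidth needed.
```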
Well, it may be a 256-bit bus but its also 7 GHz memory. That's really, really high; in fact I've heard its the intended limit for the baseline GDDR5 specification. Especially when you consider that this card doesn't sound beefy enough for 4K gaming anyway, I really don't think it will be starved for memory bandwidth.
So Maxwell is somehow a refinement of the GTX 700 series, adding to and sharpening some of the hardcore specs and functions of the Kepler (GTX 600) and Kepler Refresh (GTX 700) architectures.
All in all, this is a cheaper card to produce than the 780, so they can get better margins. (I think so, at least, because the chip is somewhat smaller.)
780 is 250W TDP, 870 is 190W. Captain Planet smiles.
According to another rumor, the GTX 880 would be $449, unlike the 780 Ti's MSRP of $699. Scrooge McDuck smiles too.
If they priced them where you want them (lower), they'd be making as much as AMD: just about nothing. They haven't made as much as they did in 2007 in seven years. When will people start to understand that R&D costs a lot more today? They are CLEARLY not charging as much as R&D has gone up, or they'd be making MORE money than in 2007, not less. Right? Simple math, people. If they were gouging us they'd be making more money, but that's not the case, because they now spend more on R&D than AMD. Their R&D has increased ~50% in the last four years, but profits have been completely stagnant. They are PAYING to give you better stuff, but NOT reaping any financial benefits on the bottom line.
Jarring to see such high prices, but without the few who DO pay those prices (and can afford them easily), the rest of us would be looking at $1000 midrange cards and $500 bottom-rung ones. You are not the target market for the $1500-3000 cards if you're complaining about them...LOL. I'm not someone who can afford those crazy cards, but I'm sure glad there are enough people who laugh at those prices and buy immediately upon release, so that NV can at least afford to give me a 780 Ti for $600-700. Without the truly rich buying the ridiculous stuff, that 780 Ti would surely be $1100+.
With R&D costs skyrocketing, the only way we'll keep pricing the same is by putting more GPUs in other devices and expanding the market for GPUs/CPUs (i.e., mobile etc., where lower-income people and developing nations can get on the CPU/GPU train with cheaper devices). AMD made $80 million in the last 12 months. Do you think they're charging enough? Not enough to make you say "RAISE YOUR PRICES, AMD!"??? OK, how about this: they lost $6 billion+ in the last 10 years. Do you think they charge enough, knowing that HUGE number?
While NV hasn't lost billions in the last 10 years, they are nowhere near their historical earnings either, and R&D is blowing up for everyone. At some point they will have to PRICE to make more money, and AMD should too. Immediately!
If you want to talk gouging, start with Intel, which makes $9 billion and is raising prices (Haswell is $350 now; the top-end Ivy Bridge was $320 last year). Even they are giving a pretty good deal, though, and their profits are down from $12.3B to $11B, and now to $9B over the last 12 months (mobile losses are killing Intel's profits, at a $1.1B/quarter loss on mobile). Same trend as NV, basically: not making as much as before.
So Maxwell is somehow a refinement of the GTX 700 series, adding to and sharpening some of the hardcore specs and functions of the Kepler (GTX 600) and Kepler Refresh (GTX 700) architectures.
This is wrong; you're judging Maxwell on the 28nm versions, which are just meant to reduce power while giving the same performance. The real cards come soon on 20nm, and that will be a huge jump in power AND performance (Q1?). AMD will get a good jump from 20nm also. And yes, Pascal will be impressive, but that doesn't mean 20nm Maxwell won't be impressive too, with most of the die shrink going toward jacking up performance. Right now they're just dropping power with Maxwell's new characteristics while giving basically the same performance, or maybe a tad better (and higher margins, I hope, if the chips are smaller due to the redesign).
A 256-bit memory interface makes a lot of sense; the Maxwell architecture is really efficient about that.
I have the 750 Ti and it outperforms the 6950 I had. Addressing 2 GB of RAM is faster than addressing 4 GB; the 1GB version of the 6950 was slightly faster than the 2GB version.
Speaking of resolutions higher than 1920x1080, you need 256-bit just to start, without filters; even 512-bit is not yet capable of 4K gaming, though.
There are lots of limits today: any monitor above 1920x1080 increases frame-time variance a lot (really a lot), and the monitor itself has much more latency. This goes far beyond pure gamer needs. The same goes for triple monitors and 4K (not to speak of screen bezels and pricing).
Maxwell's 128-bit seems as efficient as 192-bit to me; 256-bit should be like 384-bit.
The higher the bit width, the larger the chip, the more populated the PCB gets, and the higher the power requirements.
My personal hope is that they never again take a chip made for HPC and use it as a gaming video card.
They should simply build ad hoc gaming video cards. Not even megachips. To me the 290s are a bad choice. Audio resources on a GPU? I have yet to hear that working, and I'd rather go for a $150-170 sound card; it's cool on a budget APU or a mainstream video card, though. I hope the 870 and 880 brutalize the GK110 spawns in gaming. Maybe when a Phantom comes out I'll think about selling my 770 Phantom.
It has nothing to do with manufacturing complexity. Maxwell was designed not to need a wide bus, because a wider bus adds more power consumption. Besides, a wider bus may not add much performance anyway, simply because of the way the architecture is designed.
If they priced them where you want them (lower), they'd be making as much as AMD: just about nothing. They haven't made as much as they did in 2007 in seven years. When will people start to understand that R&D costs a lot more today? They are CLEARLY not charging as much as R&D has gone up, or they'd be making MORE money than in 2007, not less. Right? Simple math, people. If they were gouging us they'd be making more money, but that's not the case, because they now spend more on R&D than AMD. Their R&D has increased ~50% in the last four years, but profits have been completely stagnant. They are PAYING to give you better stuff, but NOT reaping any financial benefits on the bottom line.
Is this GPU R&D, or general R&D across all their products? Because they could be gouging us in the GPU segment to support the R&D for all the rest of their pet projects. As far as GPUs go, AMD spends enough; they clearly know how to do a lot with what could be a little.