Rumor: Nvidia Prepping to Launch Kepler in February

January 18, 2012 11:08:45 AM

This shall be interesting, can't wait to see the benchmarks!
Score
11
January 18, 2012 11:18:29 AM

Nvidia, please don't rush it...
I own a GTX 285; I already have a 512-bit bus!
Score
9
January 18, 2012 11:22:16 AM

I went to the Chinese forum where the rumour started, and users there are saying they have a reliable leak confirming the product. The packaging box for the GTX 680 was spotted at an Nvidia AIC partner's factory.
Score
2
January 18, 2012 11:29:25 AM

"The source says the GTX 680 should be competitive in performance with the HD7970."

Sounds a lot like what AMD said about Bulldozer and Sandy Bridge. Remember how that turned out?
Score
-7
January 18, 2012 11:35:08 AM

At the same time, the G107 chip is their budget chip. If the 680 is in fact a "budget" card and can match the performance of AMD's flagship, I can't imagine what the hell Nvidia is coming out with. Then again, this could all be completely false.
Score
9
January 18, 2012 11:46:08 AM

Hopefully Kepler kicks butt; I've been waiting to replace my two GTX 285s.
Score
2
January 18, 2012 11:48:00 AM

Nice to see Nvidia stepping up to the competition. I suspected that Kepler yields were bad enough to shut Nvidia up; I guess Nvidia wouldn't have pushed its Kepler schedule ahead if it didn't have enough GPUs to ship.
February will have 7950 availability (Jan 31) and 78xx cards debuting. This will get very interesting. XD
BTW, is the GPU called the 680? So the rumor about the 6xx series being OEM-only was wrong...
Score
-1
January 18, 2012 11:52:45 AM

This is a good move, as it coincides with a swell of new platform sales. Can Tom's leak some Kepler info to us soon?
Score
0
January 18, 2012 11:53:23 AM

God, I've been waiting for this a long time. Two GTX 285s in SLI, overclocked, and Battlefield 3 on Ultra settings gets me only 10 to 25 fps, 20 fps average at 1920x1200. Nvidia, don't let me down, man, don't taser me bro!!! :o )
Score
-3
January 18, 2012 12:08:54 PM

NVIDIA's 6-series is DOA. Anyone that's been following the production of AMD's or NVIDIA's graphics cards knows that the 6-series is no more than a higher-clocked version of the 5-series. In a few months the REAL Kepler will step forth (the 7-series)... THEN the competition will get interesting.
Score
4
January 18, 2012 12:20:21 PM

Way to go for a GTX 690 with a 2x512-bit bus and 4GB of memory.
Score
2
January 18, 2012 12:21:20 PM

A m@n does not live by Chinese rumors and gossip alone; he needs research and benchmarks.
Score
0
January 18, 2012 12:26:03 PM

Based on the naming scheme, it would be fair to say the 680 is supposed to be the high end for the "6" series.

If this is so and it is merely "competitive" with the AMD 7970, then I would say Nvidia is in for some hard times this generation, because on the day of release AMD will just undercut Nvidia on price and steal sales.

I'm not buying a thing until Nvidia releases this; then I'll wait for the AMD price drop, and hello, new 7970.
Score
3
January 18, 2012 12:30:02 PM

What about power consumption?
Score
6
January 18, 2012 1:00:07 PM

It's funny, because every other site on the net (Google for Nvidia Kepler and narrow the results to the past week) says March-April. But they will be doing a hard launch, so that's cool. I've also posted a few times on here that the actual specs of the thing will be almost identical to the 590's. I guess Tom's editors just don't read the comments, and God forbid there's a link on the homepage to submit news. Hell, I even sent the specs to their Facebook page!
Score
-2
January 18, 2012 1:53:36 PM

Competition at work; pushing each other forward.
As a customer I say THANK YOU !

:-)
Score
2
January 18, 2012 2:32:41 PM

I'm going to need to see something more substantial from Nvidia here. A rumor is not going to get me to back off from my intention to purchase a 7950 mere weeks from today. I'm not waiting for Q2 or Q3 later this year for Nvidia to come out with a competing part. I want to see benchmarks and prices before I click that button to order a 7950.
Score
0
January 18, 2012 2:39:05 PM

Phishy714"The source says the GTX 680 should be competitive in performance with the HD7970."Sounds alot like what AMD said about bulldozer and Sandy Bridge. Remember how that turned out?

that they were right and in many applications it is more than competitive for its price range, and only showed bad benchmarks in single core performance, and the occasional hic up that could be fixed in a service pack update, with the whole cpu being addressed in most likely windows 8? i mean thats just how i remember it, i wouldn't recommend one, od say phenom over bulldozer for most applications, but if you are mulit core driven, and want to gamble on a service pack update taking the cpu to higher performance than an i7, than i would say go for it...

just like i would never recomend nvidia due to some of the major driver issues they have had in the past, like killing their own card bad... i dont remember amd ever doing that, though they may be a bit slower on the driver side.

cknobmanBased on the naming scheme it would be fair to say the 680 is supposed to be the high end for the "6" series.If this is so and it is merely "competitive" with the AMD 7970 then I would say Nvidia is in for some hard times this generation cause the day of release AMD will just undercut Nvidia on price and steal sales.Im not buying a thing until Nvidia releases this then wait for the AMD price drop and hello new 7970.


i wouldn't buy any high end card unless you have an opencl/cuda application that can use it now... id wait for the 8 or 9000 series to get a high end card, for now, id look at the higher mid range area, more than enough to play most games maxed, and give you great performance on what cant be maxed... than once the wiiu and 720 come out and tessellation is put into most if not all games, you can find out what card works best for your games.
Score
-4
January 18, 2012 2:42:27 PM

I don't see why they need to rush. People that know the difference will wait for the Nvidia.
Score
3
January 18, 2012 3:07:54 PM

ubercake said:
I don't see why they need to rush. People that know the difference will wait for the Nvidia.

+1
And to further that, nVidia should just calm down and make sure it's right from the jump,
and not do what happened with the GTX 280 release; I was there for that too..
More testing and finishing touches will also weed out the ones who only think they are nVidia enthusiasts.
I'll be waiting.
Score
1
January 18, 2012 3:45:47 PM

So are we going to see a GTX 690 this summer?
Score
0
January 18, 2012 4:04:37 PM

It doesn't sound like the high-end model, but I hope it's a little while longer before those are released. I haven't amortized my GTX 590 yet. It has only been about a year since I bought it, and I wanted to get at least two years out of it before a new model trumped it definitively, not just kind of. We'll see.
Score
0
January 18, 2012 4:19:07 PM

Wonderful thing, that competition is ... : )
Score
1
January 18, 2012 6:33:23 PM

ukulele97 said:
In February you'll see Nvidia's CEO waving another mockup on a stage and telling you how great the card will be some day in a very distant future... You remember this? http://semiaccurate.com/2009/10/01 [...] oards-gtc/

The mockup was just to show you how Fermi would look; it's no problem at all.
Check out the PC Watch link and look at the demonstration. Do you think it's possible they used a mockup there too?
Score
1
January 18, 2012 6:36:30 PM

ukulele97 said:
In February you'll see Nvidia's CEO waving another mockup on a stage and telling you how great the card will be some day in a very distant future... You remember this? http://semiaccurate.com/2009/10/01 [...] oards-gtc/


Holy F that blew my mind. Some basement dwellers have far too much time on their hands.

OMGSS!!~11!!!!! The card is a fake!! The Cake is a lie, the world is ending.!!~!!!!
Score
-5
January 18, 2012 8:08:47 PM

phate said:
Holy F that blew my mind. Some basement dwellers have far too much time on their hands. OMGSS!!~11!!!!! The card is a fake!! The Cake is a lie, the world is ending.!!~!!!!


But... uh... the cake IS a lie.
Score
1
January 18, 2012 8:09:33 PM

Thanks Pryee!

I'm a bit of an AMD fan myself, so this is good to see.
Score
0
January 18, 2012 9:17:08 PM

From the rumors and leaks I've seen, Nvidia's first Kepler GPU will be positioned in the upper mid-range segment of their lineup, probably the equivalent of the GTX 560 Ti/GF114. It won't be their high-end 512-bit GDDR5 card, but unfortunately it'll probably be priced like a high-end card if the rumors of its performance competitiveness with the HD7970 turn out to be true.

The Kepler architecture is based on Fermi, so making predictions about its specs and performance probably isn't quite as pointless as it was for the transition from G70 to G80, or from GT200 to GF100. This is pure speculation, but based on the rumored memory capacity, performance, and release schedule, I'm just throwing out my best guess for the specs of Kepler's initial release card:

768 Cuda Cores
2GB 256-bit GDDR5 (high clocked, at least 160+ GBps bandwidth)... seems far more likely to me given the rumored memory capacity, especially if this isn't their high-end GPU.
or 2GB 512-bit GDDR5 (lower clocked, potentially much higher bandwidth... but, really Nvidia?)

Assuming the rumors are true, this seems reasonable, logical, and if clocked right, potentially very performance competitive with the HD7970. The big unknowns are transistor count/die size, power consumption, physical dimensions, and availability.
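
Just to put numbers on that "high clocked, at least 160+ GBps" line, here's a quick back-of-the-envelope sketch in Python (purely illustrative; the 160 GB/s target and the 256/512-bit widths are the rumored figures above, while the data rates in the loop are placeholder values of my own):

```python
# Bandwidth (GB/s) = bus width in bytes * effective data rate (Gbps per pin).
def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

# 256-bit option: effective GDDR5 data rate needed to reach 160 GB/s
target_gb_s = 160.0
print("256-bit needs %.1f Gbps effective GDDR5" % (target_gb_s / (256 / 8)))  # 5.0 Gbps

# 512-bit option: even conservative GDDR5 clocks overshoot that target
for rate in (4.0, 4.5, 5.0):
    print("512-bit @ %.1f Gbps -> %.0f GB/s" % (rate, bandwidth_gb_s(512, rate)))
```

So the 256-bit option needs roughly 5 Gbps effective GDDR5, a bit below the 5.5 Gbps the HD7970 ships with, while a 512-bit bus would sail past the target even at conservative clocks, which is part of why it looks like overkill here.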
Score
1
January 18, 2012 10:03:13 PM

I've been kinda worried about all this secrecy surrounding Kepler. I've been looking forward to seeing nVidia's answer to Southern Islands, and the lack of any talk may mean that nVidia's not all that confident about what their answer is. I'm not thinking that "Kepler will fail," but rather, as others have suggested, that nVidia may pull a bait-and-switch and give us simply yet another refreshed Fermi. Given how much of a leap SI is over the best Fermi has to offer, this would be a lose-lose proposition for all of us: nVidia's cards would fall way behind, and AMD's would have no reason to drop from their already-unprecedented (and exorbitant) prices; $549US is a steep price for a single-GPU card.

I'm hoping that this sudden move of the release date means that nVidia actually has something good on their hands. At the very least, it should mean that we'll get our answers on just what they have in store soon enough.

lord captivus said:
Nvidia, please don't rush it... I own a GTX 285; I already have a 512-bit bus!

In all honesty, there's no DIRECT significance to whether it has a 512-bit, 256-bit, or 384-bit interface. A 512-bit interface is no more advanced than a narrower one, just of a greater scale: kind of like how two GPUs aren't necessarily more advanced than one.

This is the same deal as the question of memory type versus interface width on the Radeon 7970: both are just means to an end. Making the interface wider and using more advanced (faster) memory chips are both ways of increasing a video card's memory bandwidth, and both increase the cost of production.

The reason why not everyone just makes arbitrarily wide memory interfaces is twofold. For one, as I've often said, it makes the PCB more complex, as each extra bit requires another pair of pins from the GPU and the appropriate traces on the board... This doesn't just make it harder to design a (larger, more expensive) board to fit it all, it also further raises the price by requiring more RAM chips to be installed: remember that every 32 bits of interface means another chip.

The other reason is more technical: because of those extra pins, a memory interface also requires a certain amount of space along the edge of a GPU's die for all the leads to connect to. That means that to get a memory interface of a certain width, the GPU's die area must pass a certain threshold. It's been pretty easy to correlate this relationship: across recent GeForce GPUs, there's been a fairly strict boundary in die area where the width of the interface changes:

- less than 110 mm²: 64-bit
- 115-196 mm²: 128-bit.
- 196-368 mm²: 256-bit.
- 484-529 mm²: 384-bit. (G80, GF100, GF110)
- 470 mm², 576 mm²: 512-bit. (GT200b and GT200, respectively)

I know that the GT200b was a die-shrink of an existing chip, so it likely means that a 512-bit memory interface is unlikely here unless the die is at least around 550 mm² in size.
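
To make the "every 32 bits of interface means another chip" rule concrete, here's a rough sketch (illustrative only; the 1 Gb and 2 Gb densities are just the common GDDR5 parts of the moment, nothing confirmed for Kepler):

```python
# Each GDDR5 chip drives a 32-bit slice of the interface (ignoring clamshell mode),
# so chip count, board complexity, and capacity all scale with bus width.
def chips_needed(bus_width_bits):
    return bus_width_bits // 32

def capacity_gb(bus_width_bits, chip_density_gbit):
    return chips_needed(bus_width_bits) * chip_density_gbit / 8  # gigabits -> gigabytes

for width in (256, 384, 512):
    print("%d-bit: %2d chips, %.1f GB with 1 Gb parts, %.1f GB with 2 Gb parts"
          % (width, chips_needed(width), capacity_gb(width, 1), capacity_gb(width, 2)))
```

That's also why a rumored 2GB card maps most naturally onto a 256-bit bus with 2 Gb chips; getting 2GB out of a 512-bit bus would mean sixteen of the older 1 Gb chips.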

sanadanosa said:
The mockup was just to show you how Fermi would look; it's no problem at all. Check out the PC Watch link and look at the demonstration. Do you think it's possible they used a mockup there too?

As I recall that event, the problem with the Fermi "fake" was that nVidia's own CEO declared that it was the real thing. Not once did nVidia back away from their obviously-incorrect claims. Showing a mock-up card as a demonstration of what the thing would look like is one thing, but it becomes problematic when the maker claims it's a REAL card.

dragonsqrrl said:
768 Cuda Cores
2GB 256-bit GDDR5 (high clocked, at least 160+ GBps bandwidth)... seems far more likely to me given the rumored memory capacity, especially if this isn't their high-end GPU.
or 2GB 512-bit GDDR5 (lower clocked, potentially much higher bandwidth... but, really Nvidia?)

I do agree that 2GB on a 512-bit interface is highly unlikely; while 1-gigabit GDDR5 cells exist, I don't think they would see much use on such a high-end card, in lieu of the 2-gigabit ones seen on the 7970.

As for that range of bandwidth on a 512-bit interface... the 2.0-3.0 GHz clock range for VRAM is kind of a wasteland: GDDR5 doesn't go that slow, and DDR3 doesn't go that high for video cards (at least not yet). GDDR3 has gotten as high as 2.48 GHz on the GTX 285, but I'd wager such cells cost considerably more than any DDR3 or slower GDDR3. Similarly, GDDR4 is the only thing that has filled the rest of that range, and the speed with which it was dropped indicates that it likely costs nearly as much as GDDR5.

dragonsqrrl said:
Assuming the rumors are true, this seems reasonable, logical, and if clocked right, potentially very performance competitive with the HD7970. The big unknowns are transistor count/die size, power consumption, physical dimensions, and availability.

Well, given the (assumed) switch to the latest 28nm fabrication process and a mere 50% increase in processing elements, I'd estimate a die size of around 400 mm², give or take (probably closer to 380 for a 256-bit interface; up to 420-450 for a 384-bit one).

As for performance, if nVidia goes with the lower memory bandwidth, I would honestly question how competitive it could be with the 7970; at that point the design, I think, may start becoming bottlenecked by its own memory bandwidth. nVidia's top-end cards have stood in the range of 160 GB/sec since the GTX 280. Given that 5.5 GHz appears to be the current ceiling for GDDR5, that means 176 GB/sec would be the ceiling for a 256-bit interface. That would likely be insufficient at this level of performance, as hinted at by AMD's unprecedented adoption of a 384-bit interface for Tahiti.
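
Here's the ceiling argument in numbers, for anyone who wants to check the arithmetic (this simply takes the 5.5 GHz effective figure above as the practical GDDR5 limit; the bus widths are the usual suspects, not leaked specs):

```python
# Peak memory bandwidth at an assumed 5.5 Gbps effective GDDR5 ceiling.
GDDR5_CEILING_GBPS = 5.5

for width_bits in (256, 384, 512):
    peak_gb_s = width_bits / 8 * GDDR5_CEILING_GBPS
    print("%d-bit @ %.1f Gbps -> %.0f GB/s peak" % (width_bits, GDDR5_CEILING_GBPS, peak_gb_s))
```

The 264 GB/s figure for a 384-bit bus is exactly what Tahiti gets at 5.5 Gbps, which is the comparison being drawn here.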
Score
3
January 18, 2012 11:36:39 PM

nottheking said:
Well, given the (assumed) switch to the latest 28nm fabrication process and a mere 50% increase in processing elements, I'd estimate a die size of around 400 mm², give or take (probably closer to 380 for a 256-bit interface; up to 420-450 for a 384-bit one). As for performance, if nVidia goes with the lower memory bandwidth, I would honestly question how competitive it could be with the 7970; at that point the design, I think, may start becoming bottlenecked by its own memory bandwidth. nVidia's top-end cards have stood in the range of 160 GB/sec since the GTX 280. Given that 5.5 GHz appears to be the current ceiling for GDDR5, that means 176 GB/sec would be the ceiling for a 256-bit interface. That would likely be insufficient at this level of performance, as hinted at by AMD's unprecedented adoption of a 384-bit interface for Tahiti.

A good assessment, and I basically agree with the concerns you raised about the specs I've posted. Something doesn't quite add up. The question is, is ~160-180 GBps enough bandwidth to feed a 768 CUDA Core GPU? It certainly seems like a potential bottleneck. If Nvidia goes with a 384-bit interface, that would clear up any concerns over memory bottlenecks, but it would also severely limit their options for a mobile version of the GPU. Nvidia traditionally uses desktop GPUs from this performance segment (upper mid-range) in their high-end mobile cards (GF114, GF104, G92...). They could always narrow the interface for the mobile high-end (like the GTX 480M, for example), but that's less than ideal. It would also mean the rumors regarding the 2GB memory capacity are completely wrong, but that wouldn't really surprise me.

Who knows... The only alternative I've thought of is a possible 512 CUDA Core GPU with 2GB of GDDR5 on a 256-bit interface. With the architectural enhancements in Kepler, it could potentially perform competitively with the HD7970; the core would just have to be clocked significantly higher than 780 MHz. Ultimately this approach does seem a little more balanced, given the potential bandwidth a high-clocked 256-bit memory interface could provide.
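
Some rough arithmetic on the "clocked significantly higher" point (pure speculation on my part; it assumes raw shader throughput scales linearly with cores times clock and ignores any per-core gains Kepler might bring):

```python
# How high would a 512-core part need to clock to match a 768-core part at 780 MHz,
# assuming throughput scales linearly with core count and clock?
baseline_cores, baseline_clock_mhz = 768, 780
alt_cores = 512

required_clock_mhz = baseline_clock_mhz * baseline_cores / alt_cores
print("%d cores need ~%.0f MHz to match %d cores at %d MHz"
      % (alt_cores, required_clock_mhz, baseline_cores, baseline_clock_mhz))  # ~1170 MHz
```

So naively something on the order of 1.17 GHz before any architectural improvements, which is why the core clock carries so much weight in that scenario.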
Score
1
January 19, 2012 4:44:23 AM

I don't care about any PC graphics cards until a new generation of consoles is released. In case you haven't noticed, basically all PC games have been built to easily fit PS360 hardware specifications.
Score
0
January 19, 2012 8:02:45 AM

680? wtf?
Score
1
Anonymous
January 20, 2012 1:54:16 AM

"and AMD's would have no reason to drop from their already-unprecedented (and exorbitant) prices; $549US is a steep price for a single-GPU card."

You didn't live in the time of the GeForce 2 Ultra, did you? ^^
Score
0
Anonymous
February 3, 2012 4:38:41 PM

So the 680 will be about 15-20% faster than a 580, as is the 7970; so much for all the bollox about it being 80% faster, then. Yet another bunch of retards are going to throw all their cash down the toilet again, and then again when the Nvidia 700 and ATI 8000 series come out in 2013. LMAO, makes me laugh!
Score
1
February 3, 2012 4:43:42 PM

No point upgrading now; going to wait until the new Xbox 720 and PS4 are out. Nearly all games are ported from those anyway. See what sort of performance they have, then build a PC that matches those specs. Or you could be a total moron and throw your cash down the toilet for no gain.
Score
-1