Report: Nvidia To Launch GK104-based GTX 660Ti in August
Source: WCCF Tech
The Internet rumor mill has turned up another new Nvidia card.
The GK104-based GTX 660 Ti is apparently slated to launch in the first half of August at a suggested price of $299.
According to WCCF Tech, the new card will arrive with seven active SMX units, 1,344 processing cores and 1.5 GB of memory. The clock speed will be below that of the GTX 670, but performance, according to the site, will be above the GTX 580 and will compete directly with AMD's Radeon HD 7800 cards.
WCCF Tech also noted that the GTX 660 Ti will be Nvidia's last GK104 card, and that the company will transition to the 700 series going forward. At $299, the 660 Ti makes a lot of sense below the $399 GTX 670 and the $499 GTX 680.
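The rumored core count is consistent with GK104's published layout of 192 CUDA cores per SMX unit; a quick sanity check (the per-SMX figure comes from Nvidia's Kepler architecture documentation, not from the article itself):

```python
# Kepler GK104 packs 192 CUDA cores into each SMX unit.
CORES_PER_SMX = 192
active_smx = 7  # rumored active SMX count for the GTX 660 Ti

total_cores = active_smx * CORES_PER_SMX
print(total_cores)  # 1344, matching the reported spec
```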
Discuss
That's not how it works. First off, these cores are not the same as the cores in the 560 Ti. They are optimized for single-precision math and aren't even capable of double-precision math. They are also only half as fast as the older cores (although much more power efficient, and not only because of the die shrink) due to the abandonment of the inefficient hot-clocking method used previously. The double-precision capability of the GK104 comes only from a small number of 64-bit Kepler CUDA cores that don't do single-precision math. Since games run on single-precision math, these were not prioritized. This is why the Kepler cards are somewhat more power efficient than AMD's GCN-based Radeon 7000 cards: they are purely designed for gaming performance, and that is what they excel at, as long as their VRAM's too-small bandwidth doesn't cause too severe a problem.
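To put rough numbers on the hot-clocking point, here is a back-of-the-envelope peak single-precision throughput comparison. The clock speeds are approximate published figures, and the 2 FLOPs per core per cycle assumes fused multiply-add; treat this as a sketch, not a benchmark:

```python
def peak_sp_gflops(cores, shader_clock_mhz):
    # Peak single-precision throughput: cores x 2 FLOPs/cycle (FMA) x clock
    return cores * 2 * shader_clock_mhz / 1000.0

# GTX 560 Ti (Fermi): 384 cores, shaders hot-clocked at ~1645 MHz
fermi = peak_sp_gflops(384, 1645)
# Rumored GTX 660 Ti (Kepler): 1344 cores at a ~915 MHz base clock, no hot clock
kepler = peak_sp_gflops(1344, 915)

print(round(fermi), round(kepler))  # ~1263 vs ~2460 GFLOPS: under 2x, not 3.5x
```

So the 3.5x raw core-count advantage over the 560 Ti translates to less than 2x in peak throughput once the lost hot clock is accounted for.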
Furthermore, there is 1.5 GB because it has a 192-bit bus instead of a 256-bit bus. GDDR5 chips have 32-bit interfaces, so do the math on how many chips a narrower bus can drive: that's right, six. Six chips times 256 MiB per chip means 1.5 GB of VRAM. 512 MiB chips are much more expensive than 256 MiB chips; for example, 8 GB DDR3 memory modules use 512 MiB chips, and although their prices have improved substantially in the last few months, they are still often much more expensive than a similar 2x4 GiB memory kit. Also, there is a 4 GiB GTX 670 at Newegg, not that it matters: in any gaming situation where you can actually use that much VRAM, the memory bandwidth holds the card back so badly that you'd hate to compare it to a multi-card 7950 OC or 7970 setup. There might be a 4 GB GTX 680 out by now, but I don't really care to check and, like I said, it doesn't really matter.
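The chip-count arithmetic above can be checked directly, using the 32-bit per-device interface and the 256 MiB density assumed in the comment:

```python
bus_width_bits = 192      # the GTX 660 Ti's rumored memory bus width
chip_interface_bits = 32  # each GDDR5 chip exposes a 32-bit interface
chip_density_mib = 256    # per-chip density assumed in the comment above

chips = bus_width_bits // chip_interface_bits
total_gib = chips * chip_density_mib / 1024

print(chips, total_gib)  # 6 chips -> 1.5 GiB of VRAM
```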
So, the problem is not that CUDA cores are being improperly accessed; the problem is that you don't know the situation. Beyond that, you ignore the other factors in performance. I guess you didn't know that increasing the core count does not give a linear increase in performance, and that there are other limits, such as memory bandwidth. And that's all ignoring any CPU bottlenecks and other bottlenecks not directly related to the graphics card that can hold back performance.
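One standard way to model why doubling execution units doesn't double performance is Amdahl's law; a minimal sketch, assuming a hypothetical workload that is 90% parallelizable (the fraction is illustrative, not measured):

```python
def amdahl_speedup(parallel_fraction, n_units):
    # Only the parallel portion of the workload scales with more units;
    # the serial portion (CPU work, fixed-cost steps) does not.
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_units)

# Doubling execution units on a 90%-parallel workload
print(round(amdahl_speedup(0.9, 2), 2))  # 1.82x, not 2x
```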
Wonder how the 700 series will match up against AMD's 8000 series. If the 700 series is indeed GK110-based, then the folks at AMD will have a much better handle on what its performance might be like; hopefully they use this to their advantage.
Also, isn't there constantly something new or updated that gets followed by a price cut somewhere else?
God, I wish the car companies would do that too. I'd get myself an 'older' Mercedes convertible from the bargain bin
Nvidia often does it up to three times, and they could cut a die down as far as they want. Heck, if they had made a full GK100 die, they could have cut it all the way down to the bottom Kepler cards. It would almost definitely be a very bad approach, but it can be done.
i7 ~= i5 in desktop gaming performance in all modern games, and that probably won't change any time soon. Even if it did, I don't see any way you could max out the i5 with even two GTX 660 Tis in SLI, so you probably wouldn't get any benefit out of it.
Now we hear that near the end of the year they are releasing their mid-range cards? IMO they are too late, as AMD is already talking about the 8000 series. Hopefully Nvidia follows the same strategy as AMD and releases its cards on a month-by-month basis.
If I were Nvidia, I'd just let the 500 series co-exist with the 600 series to serve the performance tiers under the 660 Ti, then focus on developing the 700 series for next year or the next iteration.
That's not a bad idea, but it means Nvidia would leave older, much more power-hungry cards as its mid/low-end lineup, competing against AMD's comparatively power-sipping cards that also tend to be cheaper.
Nvidia makes nice cards but they really do need to work on their power and heat efficiencies. AMD has them beat hands-down in that area at the moment. I'm not a fan of having an extra air conditioner to cool my room, just because of my video card.
1344~1536 cores should show a parallel-processing advantage of at least 4~5x over the 560 Ti, yet the scores on various sites are just pathetic. Hopefully someone comes out with a nice hack to enable or properly access the cores; otherwise, what is the point?
And this new 660 Ti with only 1.5 GB? What, they can't afford to put in 2 GB? ORLY? 4 GB for a great custom 680 (which I have read about but never seen IRL).
yawn
They are probably the same die as the 680.
They just had to wait long enough to collect enough defective chips to launch the 660 Ti model. All chip fabricators do this to sell defective chips that could not otherwise be sold.