Not like it was ever really widely available anyway, right? The GeForce GTX 670 offers most of GK104's on-chip resources, doesn't give up much performance, and costs $100 less. Now, let's see if Nvidia can make enough of them to satisfy demand.
How’s this for perspective? Last generation, Nvidia’s dual-GPU flagship sold for somewhere around $700. Before that, the GeForce GTX 295 was a $500 board. If you want the pinnacle of graphics performance today, the GeForce GTX 690 will cost you at least $1000—more if your vendor of choice is marking it up, as many are right now.
It’s hard to have a straight-faced discussion about value with the prices of high-end graphics cards shooting off into space. But when Nvidia told me that it planned to sell its GeForce GTX 670 for $400, I was ready to get serious about the V-word.
Hello Again, GK104
Of course, an attractive price is only one variable in the equation that determines whether you want to buy something. Performance is another, as is availability (a more problematic factor for Nvidia of late).
The natural question, then: how fast is the GeForce GTX 670?
Benchmarks tell that tale. But specifications give us a good measure of what to expect. Nvidia’s newest card centers on the same GK104 GPU as the company's GeForce GTX 680. But whereas the 680’s graphics processor employed all eight of the chip's SMX clusters, the 670 utilizes seven. The eighth is disabled. Hardware-wise, everything else is the same.
That means GeForce GTX 670 weighs in with 1344 CUDA cores (192 × 7) and 112 texture units (16 × 7). It also gives up one PolyMorph engine, four warp schedulers, and eight dispatch units, though shutting down an SMX is intended to scale in step with the GPU’s other resources. We ran our usual tessellation scaling numbers and found the GTX 670 only slightly worse off than the GTX 680 before it.
GK104’s back-end remains intact, consisting of four ROP clusters that output eight 32-bit integer pixels per clock each, totaling 32. Similarly, four 64-bit memory controllers create a 256-bit aggregate interface.
Nvidia populates that bus with the same 2 GB of GDDR5 memory found on its GeForce GTX 680, even setting the same 1502 MHz frequency, yielding a 6008 MT/s data rate. To further differentiate the GeForce GTX 670 from the more expensive 680, Nvidia does drop the lower-end card’s core to a 915 MHz base, with the typical GPU Boost setting landing at 980 MHz.
As we’ll see, though, those moves don’t have as much cumulative effect as we might have suspected.
| | GeForce GTX 670 | GeForce GTX 680 | Radeon HD 7950 | Radeon HD 7970 |
| --- | --- | --- | --- | --- |
| Full Color ROPs | 32 | 32 | 32 | 32 |
| Graphics Clock | 915 MHz | 1006 MHz | 800 MHz | 925 MHz |
| Texture Fillrate | 102.5 Gtex/s | 128.8 Gtex/s | 89.6 Gtex/s | 118.4 Gtex/s |
| Memory Clock | 1502 MHz | 1502 MHz | 1250 MHz | 1375 MHz |
| Memory Bandwidth | 192.3 GB/s | 192.3 GB/s | 240 GB/s | 264 GB/s |
| Graphics RAM | 2 GB GDDR5 | 2 GB GDDR5 | 3 GB GDDR5 | 3 GB GDDR5 |
| Die Size | 294 mm² | 294 mm² | 365 mm² | 365 mm² |
| Process Technology | 28 nm | 28 nm | 28 nm | 28 nm |
| Power Connectors | 2 x 6-pin | 2 x 6-pin | 2 x 6-pin | 1 x 8-pin, 1 x 6-pin |
| Maximum Power | 170 W | 195 W | 200 W | 250 W |
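The fillrate and bandwidth figures in the table fall straight out of the clocks and unit counts: one texel per texture unit per core clock, and four transfers per memory clock across the bus for GDDR5. A quick sketch (the helper names are ours, chosen for illustration) reproduces the numbers:

```python
# Derived-spec sanity check for the table above.

def texture_fillrate(core_mhz, tex_units):
    """Gtex/s: one texel filtered per texture unit per core clock."""
    return core_mhz * tex_units / 1000

def memory_bandwidth(mem_mhz, bus_bits):
    """GB/s: GDDR5 transfers four times per memory clock,
    across bus_bits / 8 bytes per transfer."""
    return mem_mhz * 4 * bus_bits / 8 / 1000

# GeForce GTX 670: 915 MHz core, 112 texture units,
# 1502 MHz memory on a 256-bit bus.
print(texture_fillrate(915, 112))    # ~102.5 Gtex/s
print(memory_bandwidth(1502, 256))   # ~192.3 GB/s
```

Plug in the Radeon HD 7970's 925 MHz core and 128 texture units and you get its 118.4 Gtex/s figure the same way.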
Oh My God, Becky. Look At Her PCB.
Nvidia’s GeForce GTX 680 is 10” long. Both its PCB and its cooling shroud are that long. A nice, sturdy aluminum frame encircles the whole card, adding rigidity for a beefy vapor chamber and acoustically-optimized centrifugal fan.
The reference GeForce GTX 670, on the other hand, is 9.5” long. But its PCB only accounts for 6.75” of that. Nvidia claims that the 670’s scaled-back power requirements allowed it to move voltage regulation circuitry to the other (left) side of the GPU, which itself is rotated to purportedly improve signal integrity.
Nvidia uses the same fan found on the GeForce GTX 680, which we like because it exhausts all of the card’s heated air out the rear I/O panel. But it employs a cost-reduced heat sink. You’ll see in the power and noise benchmarks that the result is slightly louder, slightly warmer operation under load. However, the GTX 670 is only marginally less attractive in that regard.
Deactivating a single SMX and turning down the GeForce GTX 670’s core clock results in a typical board power of 141 W, according to Nvidia. That’s around 30 W less than the GeForce GTX 680, which bears a 170 W typical and 195 W maximum power rating. Since a 16-lane PCI Express slot only delivers 75 W of power, you still need two six-pin auxiliary connectors to drive the GTX 670. In the event that you forget to attach those leads, Nvidia added a pre-boot warning message to its Kepler-based cards instructing the end-user to plug them in.
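The power budget above checks out arithmetically: the slot and both six-pin connectors together can deliver far more than the card draws, even at its rated maximum. A minimal sketch, using the 75 W per-connector figure from the PCI Express spec and the ratings quoted in the text:

```python
# Power-delivery budget for the GeForce GTX 670 (figures from the text).
PCIE_SLOT_W = 75      # a 16-lane PCIe slot delivers up to 75 W
SIX_PIN_W = 75        # each 6-pin auxiliary connector is rated for 75 W

available = PCIE_SLOT_W + 2 * SIX_PIN_W   # 225 W of deliverable power
typical_draw = 141    # Nvidia's typical board power for the GTX 670
max_rating = 170      # the card's maximum power rating

print(available - typical_draw)   # 84 W of headroom at typical load
print(available - max_rating)     # 55 W of headroom at the rated maximum
```

Note that a single six-pin lead plus the slot (150 W) would fall short of the 170 W maximum rating, which is why the card still requires both connectors.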
The 670 offers the same four display outputs as Nvidia’s GeForce GTX 680: two dual-link DVI connectors (one DVI-I and one DVI-D), one full-sized HDMI output, and one full-sized DisplayPort connector. All four can be active simultaneously, partly addressing AMD’s Eyefinity technology, which we’ve seen enable up to six screens on one card.
- Giving GK104 A Haircut
- EVGA GeForce GTX 670 Superclocked
- Test Setup And Benchmarks
- Benchmark Results: 3DMark 11
- Benchmark Results: Battlefield 3 (DX 11)
- Benchmark Results: Crysis 2 (DX 9 And 11)
- Benchmark Results: The Elder Scrolls V: Skyrim (DX 9)
- Benchmark Results: DiRT 3 (DX 11)
- Benchmark Results: World Of Warcraft: Cataclysm (DX 11)
- Benchmark Results: Metro 2033 (DX 11)
- Benchmark Results: Sandra 2012 And LuxMark 2.0
- Benchmark Results: MediaEspresso 6.5
- Temperature And Noise
- Power Consumption
- GeForce GTX 670 Versus GTX 680 And Radeon HD 7970
- Two GeForce GTX 670s In SLI
- Are We Still Taking These Launches Seriously?