We set off on a mission to strip the cooler off a very high-end graphics card and replace it with something claimed to be even better. The rebellious project was a success, and we had lots of fun in the process.
That's not to say there's anything wrong with Nvidia's reference cooler. It remains a quiet, effective way to ensure heat is exhausted from your chassis, whereas Gigabyte's solution blows all of that hot air around inside the case. So be sure you have the right enclosure before spending the extra cash on the GHz Edition board. With that said, the reference cooler's thermal capacity isn't particularly special; it costs the GK110 GPU quite a bit of its potential performance.
Not so for Gigabyte’s WindForce 600 Watt cooler. It scores major points with its new fans and an improved cooler design. This is probably about as good as a two-slot thermal solution can get. Hopefully, the company uses it on some of its other cards as well.
Naturally, Gigabyte's WindForce 600 Watt cooler handles the GeForce GTX Titan Black's 250 W board power with ease, which is what we set out to test. So long as you own a well-cooled case, modifying Nvidia's reference design is wholly worthwhile. It'll give you a few pleasant hours of tinkering to start, and a more enjoyable gaming experience after that. The knowledge that you own something unique should help ameliorate the pain of buying such a high-end board (provided you can find it; we're still not seeing it for sale here in the U.S.).
This raises another question: is there a point to the GeForce GTX Titan Black to begin with, particularly in a gaming system? The only added values come from unrestricted double-precision compute performance (which isn't relevant to a majority of desktop users) and 6 GB of graphics memory. At least at the resolutions and settings one GK110 can comfortably drive, the extra memory isn't really an advantage. It really takes a multi-GPU configuration to demonstrate the need for more than 3 GB.
Nvidia's Reference Cooler
Want to know more about how the engineers at Nvidia came up with the iconic windowed cooling solution we've seen on so many of the company's high-end graphics cards? Check out The Story Of How GeForce GTX 690 And Titan Came To Be.
We probably won’t see many Gigabyte GeForce GTX Titan Black GHz Edition graphics cards in the wild. The exorbitant price alone takes care of that. However, this story was worth writing, if only to demonstrate how much performance can be gained from Nvidia's GK110 GPU by matching it up to a better cooler.
The combination of Gigabyte's notable factory overclock and the improved cooler puts the modified GeForce GTX Titan Black GHz Edition’s performance on par with an even more aggressively overclocked Gigabyte GeForce GTX 780 Ti WindForce OC. That card only manages to achieve a slightly higher GPU Boost clock rate, despite its higher base frequency. It also runs hotter and louder than the modified Titan Black. The biggest difference is that you'll find the overclocked 780 Ti board selling for a lot less money.
More daring enthusiasts can use the included OC Guru software to increase the power target and core voltage above the stock settings. This allowed us to push the GPU Boost clock rate beyond 1300 MHz. That might seem a little risky though, given that we're talking about a $1000+ graphics card.
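If you want to watch what GPU Boost actually does while you experiment with those settings, a short script that polls the card's telemetry is handy. Below is a minimal sketch using Nvidia's NVML library through the pynvml Python bindings (our own illustration; OC Guru itself exposes no scripting interface) that logs core clock, temperature, and board power once per second:

    # Minimal GPU Boost monitor; assumes the pynvml bindings are installed
    # (pip install nvidia-ml-py). Polls the first GPU once per second.
    import time
    from pynvml import (nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
                        nvmlDeviceGetClockInfo, nvmlDeviceGetTemperature,
                        nvmlDeviceGetPowerUsage, NVML_CLOCK_GRAPHICS,
                        NVML_TEMPERATURE_GPU)

    nvmlInit()
    try:
        gpu = nvmlDeviceGetHandleByIndex(0)                  # first GPU in the system
        for _ in range(60):                                  # one minute of samples
            mhz = nvmlDeviceGetClockInfo(gpu, NVML_CLOCK_GRAPHICS)
            temp = nvmlDeviceGetTemperature(gpu, NVML_TEMPERATURE_GPU)
            watts = nvmlDeviceGetPowerUsage(gpu) / 1000.0    # NVML reports milliwatts
            print("%4d MHz  %3d C  %6.1f W" % (mhz, temp, watts))
            time.sleep(1)
    finally:
        nvmlShutdown()

Run it in one window while a game or stress test loads the GPU in another; the clock column shows where GPU Boost settles once the card runs into its thermal or power limit.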
To the folks at Gigabyte: pay attention to what we were able to do with your cooler and apply that to other (more affordable) models as well. We're sure the enthusiast market will thank you.
- A GeForce GTX Titan Black You Modify Yourself
- The Gigabyte WindForce 600 Graphics Card Cooler
- Upgrading The Gigabyte GeForce GTX Titan Black
- Dimensions And Pictures: The Upgraded Gigabyte GeForce GTX Titan Black
- Power Consumption: Test Methodology And Idle Measurements
- Power Consumption: Gaming And Full Load Measurements
- Temperatures And Noise
- Performance
- Gigabyte Gets Its WindForce Cooler Right

If you bothered reading the first page you'd know why.
"Nvidia doesn’t allow its partners to sell the GeForce GTX Titan Black with proprietary cooling. However, Gigabyte now offers a GHz Edition of the card that comes bundled with its WindForce solution, which you can install on the overclocked board yourself."
This one, right? What's the difference between you installing it yourself and Gigabyte pre-installing it at the factory? Or would that void the warranty?
Ok, for your better understanding:
Nvidia doesn’t allow its partners to sell the GeForce GTX Titan Black with factory-installed proprietary cooling.
Igor Wallossek, I wonder if you could put up a graph for 3D rendering? If you use Blender's BMW scene by Mike Pan (a popular benchmark scene), make sure you properly set the tile size!
> Ok, for your better understanding:
> Nvidia doesn’t allow its partners to sell the GeForce GTX Titan Black with factory-installed proprietary cooling.
Silly question probably, but why does Nvidia allow only EVGA to break this rule with the Hydro Copper Signature edition you mentioned? Is it just because it's a water-cooled model? Do you think Nvidia specially signs off on the design?
I'm genuinely curious.
that's what it looks like to me.
> Igor Wallossek, I wonder if you could put up a graph for 3D rendering? If you use
> Blender's BMW scene by Mike Pan (a popular benchmark scene), make sure you
> properly set the tile size!
Arion Bench 2.5.0 would be a better test, because it scales perfectly with
multiple GPUs.
Or the AE CUDA test my friend has created, but it's pretty intense, maybe
takes too long with just one card (about 20 minutes with a single 780Ti).
Ian.
I agree. The BMW scene is not the best CUDA benchmark. I just didn't want them to mess it up if they decided to use it. I heard some people complaining about this benchmark, although I don't know if they were right or wrong.
My Titan Black renders the BMW in just over 24 seconds!
For comparison, an Intel Core 2 Duo @ 2.33 GHz took 16 minutes!
Have you run a Titan Black on that AE CUDA test? If so, I am curious to see the results!
wolverine96 writes:
> I agree. The BMW scene is not the best CUDA benchmark. ...
I've tried it numerous times with various setups; it just seems to behave a
bit weird IMO.
> My Titan Black renders the BMW in just over 24 seconds!
Main problem I find is I can't work out how to make it use all available GPUs.
Is that possible? One of my 580s does it in about 43s, but my system has 4 of
them, so it's a bit moot really. Mind you, I'm using an older version of
Blender (2.61), stuck with it to ensure consistent CPU-based testing.
And as you say, it also involves some CPU stuff (scene setup doesn't use the
GPU).
> Have you run a Titan Black on that AE CUDA test? If so, I am curious to see
> the results!
Alas no, atm I don't have access to anything newer than some top-end 3GB GTX
580s (MSI LX, 832MHz); my system has 4 of them. Final version of the test file
takes 14m 48s to render in AE using 16bpc and 8 samples (ie. just average
quality), so on a Titan Black I'm guessing it would take maybe 25 mins? Hard
to say. Would certainly be interesting to find out. Note the 'max' quality
setting would be 32bpc and 10 samples (likewise, for the full animation, avg
quality is 1080p @ 25Hz, max quality is 50Hz).
I'll sort out the test readme, download archive, web page, etc., next week,
but need to talk to C.A. first about some things. Anyway, here's the rendered
image in original Targa format (just one frame, the default test frame 96, the
last frame in the main animation sequence):
http://www.sgidepot.co.uk/misc/cuda.101_Frame96.tga
Here's the file converted to BMP and SGI IRIS/RGB:
http://www.sgidepot.co.uk/misc/cuda.101_Frame96.bmp
http://www.sgidepot.co.uk/misc/cuda.101_Frame96.rgb
and for those who don't mind losing a bit of quality, here's a 100% JPEG:
http://www.sgidepot.co.uk/misc/cuda.101_Frame96.jpg
The full 4 second animation takes hours to compute even at average quality and
is thus intended more as a stress test for those interested in checking that
their system can handle long renders or other GPU tasks without falling over
(I've seen many people asking for a test like this on forums). I suspect at max
quality the whole sequence would take about a week to crunch on my system.
Also interesting for exploring power consumption & energy cost issues for
different GPU configs (load draw on my system during the render is around 920W).
Ian.
Your system with 4 GTX 580's is much faster than mine! (Two GTX 580's is about as fast as one GTX Titan Black.) I guess the only time mine would be faster is if the scene used more than 3GB of RAM. I actually was planning on getting 2 GTX 580's, but then I discovered the Titan Black.
Is that Cycles in those images you posted?
By the way, Blender 2.71 is coming out very soon. In the past 10 versions, there have been some major performance gains for Cycles. I think it's like 30-50% faster in some cases.
> Did you say you are having trouble getting multiple GPU's to work? I only use one
> GPU, but here's a very informative link. More specifically, see this section.
Thanks!! My goof, looks like V2.61 doesn't have the Compute Panel. Will try
the newer 2.70a in a moment... (downloading now)
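In case anyone else gets stuck on the same thing, this is the scripted equivalent of that Compute Device setting, runnable from Blender's Python console. Treat it as a sketch against the 2.7x API (the property names below are from that era; later releases moved these into the Cycles add-on preferences):

    # Blender 2.7x sketch: select CUDA and render this scene on the GPU.
    import bpy

    prefs = bpy.context.user_preferences.system
    prefs.compute_device_type = 'CUDA'   # expose the CUDA devices
    # With several cards installed, prefs.compute_device accepts entries
    # like 'CUDA_0' or 'CUDA_MULTI_2'; the MULTI entry uses all GPUs.

    scene = bpy.context.scene
    scene.cycles.device = 'GPU'          # render with Cycles on the GPU
    scene.render.tile_x = 240            # big tiles tend to suit GPUs
    scene.render.tile_y = 135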
> Your system with 4 GTX 580's is much faster than mine! ...
Yup, though I suspect your power bill is less likely to make your eyeballs explode.
> ... I guess the only time mine would be faster is if the scene used more than
> 3GB of RAM. ...
I had been hoping we'd see 6GB 780Tis, but seems like that's been killed off. Shame.
> I actually was planning on getting 2 GTX 580's, but then I discovered the Titan Black.
The real advantage of multiple 580s is just low upfront cost. Standard 580s are pretty
cheap (I have four 1.5GB 797MHz models which cost about 400 UKP total), if one's ok
with the VRAM limit. 3GB 580s cost a bit more, but not much more (I've bought/sold
nine Palit 3GB 580s in the past year). The MSI LXs though can be a tad pricey, depends
on luck really I guess. I got mine (five total) for good prices though overall, and they do
oc like crazy (1GHz+ is possible).
> Is that Cycles in those images you posted?
No, it's the RayTrace3D renderer within After Effects.
> By the way, Blender 2.71 is coming out very soon. In the past 10 versions,
> there have been some major performance gains for Cycles. I think it's like 30-50%
> faster in some cases.
Good that they keep boosting it, but a nightmare for benchmarking consistency.
Ok, download done, quick test...
Cycles does the BMW in 11.56s (blimey!), tile size 240x135. Just curious btw,
you mentioned using 512x512 tile size, but surely it'd be optimal to use an even
divisor of the image dimensions in both X and Y? What do you get if you try
a tile size of 240x135?
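For anyone else tinkering with tiles, here's a throwaway bit of plain Python (my own quick check, nothing official) that lists the tile dimensions dividing a 1920x1080 frame with no partial tiles at the edges:

    # List tile sizes that divide a frame dimension evenly (no edge slivers).
    def even_tiles(dim, minimum=32):
        return [d for d in range(minimum, dim + 1) if dim % d == 0]

    print(even_tiles(1920))   # ... 120, 128, 160, 192, 240, 320, ...
    print(even_tiles(1080))   # ... 108, 120, 135, 180, 216, 270, ...

240x135 shows up in both lists at the same 8x8 grid, which is why I picked it.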
Ian.