While I was testing Nvidia’s GeForce GTX Titan, before the company was ready to talk in depth about the card’s features, I noticed that double-precision performance was dismally low in diagnostic tools like SiSoftware Sandra. Although it should have been 1/3 of the FP32 rate, my results looked more like the 1/24 you’d expect from a GeForce GTX 680.
It turns out that, in order to maximize the card’s clock rate and minimize its thermal output, Nvidia purposely runs GK110’s FP64 units at 1/8 of the chip’s clock rate by default. Multiply that 1/8 clock rate by the 1:3 ratio of double- to single-precision CUDA cores (1/3 × 1/8 = 1/24), and the numbers I saw initially turn out to be correct.
But Nvidia claims this card is the real deal, capable of 4.5 TFLOPS single- and 1.5 TFLOPS double-precision throughput. So, what gives?
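Those claims check out on paper. Using Titan’s published specifications (2,688 single-precision CUDA cores, 896 double-precision units, an 837 MHz base clock, and one FMA, i.e. two floating-point operations, per core per clock), a quick back-of-envelope calculation gives:

2 × 2,688 cores × 0.837 GHz ≈ 4.5 TFLOPS FP32
2 × 896 cores × 0.837 GHz ≈ 1.5 TFLOPS FP64 at full speed
1.5 TFLOPS ÷ 8 ≈ 0.19 TFLOPS FP64 at the default 1/8 clock, or 1/24 of the FP32 rate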
It’s improbable that Tesla customers are going to cheap out on gaming cards that lack ECC memory protection, the bundled GPU management and monitoring software, support for GPUDirect, or support for Hyper-Q (Update, 3/5/2013: Nvidia just let us know that Titan supports Dynamic Parallelism and Hyper-Q for CUDA streams, but does not support ECC, the RDMA feature of GPUDirect, or Hyper-Q for MPI connections). However, developers can still get their hands on Titan cards to further promulgate GPU-accelerated apps (without spending close to eight grand on a Tesla K20X), so Nvidia does want to enable GK110’s full compute potential.

Tapping into the full-speed FP64 CUDA cores requires opening the driver control panel, clicking the Manage 3D Settings link, scrolling down to the CUDA – Double precision line item, and selecting your GeForce GTX Titan card. Doing so effectively disables GPU Boost, so you’d only want to toggle the option on when you specifically need the FP64 cores running at full speed.
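If you want to verify the change yourself, a chain of dependent double-precision FMAs makes a serviceable sanity check. The sketch below is our own illustration, not an Nvidia tool or the Sandra test; the kernel name, launch dimensions, and iteration count are arbitrary. Compile it with nvcc, run it with the toggle off and then on, and the reported figure should jump roughly eightfold on Titan.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical FP64 throughput check (our own sketch, not an Nvidia tool).
// Each thread chains dependent double-precision FMAs; with enough threads
// in flight, total throughput approaches the hardware's FP64 rate.
__global__ void fp64_fma(double *out, int iters)
{
    double a = 1.0 + threadIdx.x;
    const double b = 0.999999, c = 0.000001;
    for (int i = 0; i < iters; ++i)
        a = fma(a, b, c);                            // one FMA = two FLOPs
    out[blockIdx.x * blockDim.x + threadIdx.x] = a;  // defeat dead-code elimination
}

int main()
{
    const int blocks = 2048, threads = 256, iters = 1 << 16;
    double *out;
    cudaMalloc(&out, (size_t)blocks * threads * sizeof(double));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    fp64_fma<<<blocks, threads>>>(out, iters);  // warm-up launch to spin up clocks
    cudaEventRecord(start);
    fp64_fma<<<blocks, threads>>>(out, iters);  // timed launch
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    double flops = 2.0 * blocks * threads * (double)iters;  // 2 FLOPs per FMA
    printf("FP64 throughput: ~%.1f GFLOPS\n", flops / (ms * 1.0e6));

    cudaFree(out);
    return 0;
}
```

Because each iteration depends on the previous one, throughput here comes from running many threads at once rather than from instruction-level parallelism within a thread, which is what you want when estimating peak hardware rates.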
We can confirm the option unlocks GK110’s compute potential, but we cannot yet share our benchmark results. So, you’ll need to look out for those in a couple of days.
AMD really has a chance now to come strong in 1 month. We'll see.
Better idea, lower all of the prices on the current GTX 600 series by 20%+ and I'd be a happy camper!
Crysis 3 broke my SLI GTX 560's and I need new GPU's...
12x2 + 12x2 = 6? ...
"That card bears a 300 W TDP and consequently requires two eight-pin power leads."
Shows a picture of a 6pin and an 8pin...
I haven't even gotten past the first page but mistakes like this bug me
Never mind, the 2nd mistake wasn't a mistake. That was my own misreading.
My understanding from this is that Titan is just 40-50% faster than the HD 7970 GHz Edition, which doesn't justify the extra $1K.
What? Electricity is not cheap in the Philippines.
Titan is a luxury product. It's not supposed to offer a competitive price/performance ratio, just as a Ferrari's price isn't based on its horsepower or fuel efficiency. Titan is a statement more so than it is a bona fide moneymaker for Nvidia.
The idea of status-symbol computer components strikes me as a little silly, of course, but I'm not in the target market. Neither are most gamers, whether high end or not.
If you generally spend $1,600 on the graphics subsystem of your computer, then I'm not even sure you fit in the so-called high end. Super-high end, maybe. You are the 1%.
It's an engineering beauty, but what would make us want one? Most gamers are already set with a 7970 GHz or 670s, so... not a smart choice.
"12x2 + 12x2 = 6? ..."
The chips are Gb (gigabit), not GB (gigabyte), which is a difference of 8x.
So 12 x 2 Gb + 12 x 2 Gb = 48 Gb = 6 GB.
Chips are commonly referred to by their capacity in bits, not bytes.
Assuming proper notation is being observed (often it's not), "b" is a bit and "B" is a byte.
6 Gigabytes = 48 Gigabits as 1 Byte = 8 bits.
BTW, very interested in how far this 'beast' will overclock.
BL1NDS1DE13