Nvidia Titan RTX Review: Gaming, Training, Inferencing, Pro Viz, Oh My!

Temperatures and Fan Speeds

SPECviewperf 13

The SPECviewperf 13 benchmark doesn’t apply constant stress to Titan RTX, so fan speeds rise and fall throughout the run in response to thermal load. It’s clear, however, that Nvidia’s axial fans spin much slower than its blower-style coolers, even on a higher-power card.

In comparison, Titan V ramps up quickly to keep up with heat generated by this workstation-class benchmark. Titan Xp chases close behind.

Temperatures fluctuate as SPECviewperf spins each dataset up and down. But as a general trend, the massive GV100 processor runs hottest, followed by Titan Xp’s GP102. TU102, despite its size, is kept relatively cool by dual axial fans and a beefy vapor chamber cooler.

More so than Titan V or Titan Xp, Titan RTX spends a lot of time jumping between power states, depending on its load.
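If you want to log this kind of behavior on your own card, the short Python sketch below polls performance state, graphics clock, temperature, fan speed, and power draw once per second through NVML. It assumes the nvidia-ml-py package (imported as pynvml) is installed alongside Nvidia's driver; the one-second interval and 60-sample window are arbitrary choices rather than our logging setup, and note that NVML reports fan speed as a percentage of maximum rather than RPM.

    import time
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)              # first GPU in the system
    name = pynvml.nvmlDeviceGetName(handle)
    if isinstance(name, bytes):                                # older pynvml versions return bytes
        name = name.decode()

    print(f"logging {name}")
    print("time_s,pstate,clock_mhz,temp_c,fan_pct,power_w")
    start = time.time()
    for _ in range(60):                                        # 60 samples, one per second
        pstate = pynvml.nvmlDeviceGetPerformanceState(handle)  # P0 (max performance) .. P12 (idle)
        clock = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)    # MHz
        temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)  # degrees C
        fan = pynvml.nvmlDeviceGetFanSpeed(handle)             # percent of maximum, not RPM
        power = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000  # milliwatts -> watts
        print(f"{time.time() - start:6.1f},P{pstate},{clock},{temp},{fan},{power:.1f}")
        time.sleep(1)

    pynvml.nvmlShutdown()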

FurMark

FurMark’s intensity doesn’t faze Titan RTX. A smooth ramp up to about 2,250 RPM is much more graceful than Titan Xp’s sharper rise to 2,500 RPM. Titan V settles around 2,700 RPM as it struggles to keep GV100 cool enough.

Although Titan V never hits its 91°C thermal threshold, it still runs hotter than either GP102 or TU102. The former’s maximum GPU temperature is 94°C, while the latter is rated at up to 89°C. With that said, Titan RTX peaks at a mere 80°C.

Titan V maintains a steady 1,200 MHz at 0.712V, obeying its rated base frequency.

Meanwhile, Titan RTX oscillates between 1,365 and 1,380 MHz at roughly 0.737V, keeping its nose just above Nvidia’s 1,350 MHz base clock rate.

Titan Xp bounces around as well, but generally hovers in the 1,300 MHz range at 0.768V. That makes it the only Titan card to violate Nvidia’s base frequency specification (it shouldn’t drop below 1,404 MHz).
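To see how your own card behaves against its rated base clock during a stress test, the sketch below reuses the same pynvml polling and counts samples that dip under the base frequency. The base-clock table covers only the three Titans compared here, using the figures cited above; treat it as an illustrative sketch, not a validated tool, and run it alongside FurMark or SPECviewperf in another window to mirror the scenarios above.

    import time
    import pynvml

    # Rated base clocks (MHz) for the cards discussed in this article.
    BASE_CLOCK_MHZ = {
        "TITAN RTX": 1350,
        "TITAN V": 1200,
        "TITAN Xp": 1404,
    }

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    name = pynvml.nvmlDeviceGetName(handle)
    if isinstance(name, bytes):
        name = name.decode()

    base = next((mhz for key, mhz in BASE_CLOCK_MHZ.items() if key.lower() in name.lower()), None)
    if base is None:
        raise SystemExit(f"add a base-clock entry for {name} before running")

    # Sample the graphics clock once per second for five minutes.
    violations = 0
    samples = 300
    for _ in range(samples):
        clock = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)
        if clock < base:
            violations += 1
            print(f"{name}: {clock} MHz is below the {base} MHz base clock")
        time.sleep(1)

    print(f"{violations} of {samples} samples fell below base clock")
    pynvml.nvmlShutdown()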

MORE: Best Graphics Cards

MORE: Desktop GPU Performance Hierarchy Table

MORE: All Graphics Content

19 comments
  • AgentLozen
    This is a horrible video card for gaming at this price point but when you paint this card in the light of workstation graphics, the price starts to become more understandable.
    Nvidia should have given this a Quadro designation so that there is no confusion what this thing is meant for.
  • bloodroses
    AgentLozen said:
    This is a horrible video card for gaming at this price point but when you paint this card in the light of workstation graphics, the price starts to become more understandable. Nvidia should have given this a Quadro designation so that there is no confusion what this thing is meant for.


    True, but the 'Titan' designation was aimed more at supercomputing, not gaming. They just happen to game well. Quadro is designed for CAD use, with ECC VRAM and driver support being the big differences over a Titan. There is quite a bit of crossover each generation, though, to the point where you can sometimes 'hack' a Quadro driver onto a Titan.

    https://www.reddit.com/r/nvidia/comments/a2vxb9/differences_between_the_titan_rtx_and_quadro_rtx/
  • madks
    Is it possible to add more training benchmarks? Especially for recurrent neural networks (RNNs). There are many forecasting models for weather, the stock market, etc., and they usually fit in less than 4GB of VRAM.

    Inference is less important, because a model could be deployed on a machine without a GPU or even on an embedded device.
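For a sense of the workload class madks describes, here is a minimal PyTorch sketch of a small LSTM forecaster trained on a synthetic sine wave. The hidden size, window length, and data are arbitrary placeholders chosen for illustration; this is not one of the article's benchmarks, but a model of this size stays far below 4GB of VRAM.

    import torch
    import torch.nn as nn

    class TinyForecaster(nn.Module):
        def __init__(self, n_features=1, hidden=128, layers=2):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, layers, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):
            out, _ = self.lstm(x)
            return self.head(out[:, -1])      # next-step prediction from the last time step

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = TinyForecaster().to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # A synthetic sine wave stands in for weather or market data.
    t = torch.linspace(0, 100, 10_000)
    series = torch.sin(t) + 0.1 * torch.randn_like(t)
    window = 64
    n = len(series) - window
    x = torch.stack([series[i:i + window] for i in range(n)]).unsqueeze(-1)  # (n, window, 1)
    y = series[window:].unsqueeze(-1)                                        # (n, 1)

    for epoch in range(5):
        for i in range(0, n, 256):
            xb, yb = x[i:i + 256].to(device), y[i:i + 256].to(device)
            opt.zero_grad()
            loss = loss_fn(model(xb), yb)
            loss.backward()
            opt.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")

    if device == "cuda":
        print(f"peak VRAM: {torch.cuda.max_memory_allocated() / 2**20:.0f} MiB")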
  • ern88
    Just buy it!!!
  • truerock
    Really don't want obsolete ports on my next video card. USB-C only, please.
  • mdd1963
    I'm sure this card will be worth it to *somebody* out there.....somewhere...
  • mac_angel
    missing the Battlefield V for 4K?
  • littleleo
    Only available thru Nvidia website?
  • mdd1963
    ern88 said:
    Just buy it!!!


    Would not buy it at half of its cost either, so...
    :)

    The Tom's summary sounds like Nvidia paid for their trip to Bangkok and gave them 4 cards to review... plus gave $4k 'expense money' :)
  • alextheblue
    So the Titan RTX has roughly half the FP64 performance of the Vega VII. The same Vega VII that Tom's had a news article (that was NEVER CORRECTED) that bashed it for "shipping without double precision" and then later erroneously listed the FP64 rate at half the actual rate? Nice to know.

    https://www.tomshardware.com/news/amd-radeon-vii-double-precision-disabled,38437.html

    There's a link to the bad news article, for posterity.
  • kyotokid
    ...finally the 11/12 GB VRAM barrier is broken outside the Quadro line. This is important for rendering large-resolution, high-quality images. Yes, $2,500 is still a lot of scratch; however, having roughly the same resources as the Quadro RTX 6000 for less than half the cost ($500 less than the Titan V, which has half the VRAM) is a major benefit for the serious graphics enthusiast and content developer.

    Speed is dependent on keeping the workload in VRAM.

    However, my one concern is: why wasn't there a benchmark using out-of-core memory in the Octane test, which would have prevented the crashes on the Titan V and Titan Xp?
  • cangelini
    madks said:
    Is it possible to add more training benchmarks? Especially for recurrent neural networks (RNNs). There are many forecasting models for weather, the stock market, etc., and they usually fit in less than 4GB of VRAM. Inference is less important, because a model could be deployed on a machine without a GPU or even on an embedded device.


    Anything in particular you'd like to see?
  • cangelini
    kyotokid said:
    ...finally the 11/12 GB VRAM barrier is broken outside the Quadro line. This is important for rendering large-resolution, high-quality images. Yes, $2,500 is still a lot of scratch; however, having roughly the same resources as the Quadro RTX 6000 for less than half the cost ($500 less than the Titan V, which has half the VRAM) is a major benefit for the serious graphics enthusiast and content developer. Speed is dependent on keeping the workload in VRAM. However, my one concern is: why wasn't there a benchmark using out-of-core memory in the Octane test, which would have prevented the crashes on the Titan V and Titan Xp?


    The OctaneRender test was using out-of-core memory. My understanding is that the problem happens when those two cards lean too heavily on out-of-core memory after running out of on-board HBM2/GDDR5X, overrunning the pool set aside for that purpose and causing a failure. Successful runs on all cards meant using a less aggressive scene.
  • cangelini
    mac_angel said:
    missing the Battlefield V for 4K?


    Weird, it was there. Unpublished the album and re-published it--appears fixed.
  • tyr_antilles
    Nice article, but I was looking at the power consumption section and I really miss a comparison with other boards' power draw. The old graph, where you could see several cards' power consumption at a glance and get a better sense of efficiency, was a lot better. Please bring back the power consumption comparison.
  • redgarl
    Huh... RVII makes so much sense now after seeing this...

    You can buy two, put them in CF, and annihilate the compute prowess of this card.

    The worst part is that a single RVII is faster for compute than this card... hence the $2,400 price tag, and no HBM2.
  • njbmw5
    If you are buying it, only for gaming...

    You deserve a big ear rape Bruh
  • njbmw5
    Roses are red
    Violets are blue
    Just use your head
    You will get the clue
  • drmaddogs2
    The Octane render test isn't adequately described with respect to the RTX 2080 Ti and Titan RTX... different UIs make a difference between the Octane plugins and standalone Octane, and the file-size constraints for the two render setups aren't being applied evenly.
    I refuse to believe Titan versus 2080 Ti would show a discrepancy as extreme as in your chart... roughly 1 minute versus 4 minutes. Perhaps something wasn't enabled consistently between the two cards, such as a Vulkan API path on the Titan but not the 2080 Ti, but a 4x difference is impossible unless the file size outstrips the 2080 Ti's VRAM capacity and both cards aren't using the new kernel system, Vulkan.