OpenCL And CUDA Are Go: GeForce GTX Titan, Tested In Pro Apps
Page 1: Can GeForce GTX Titan Handle Professional Workloads?
Page 2: DirectX: AutoCAD 2013, 2D
Page 3: DirectX: AutoCAD 2013, 3D
Page 4: OpenGL: Maya 2013
Page 5: OpenGL: Maya 2013, Continued
Page 6: OpenGL: CATIA And EnSight
Page 7: OpenGL: LightWave And Maya
Page 8: OpenGL: Pro/ENGINEER And SolidWorks
Page 9: OpenGL: TcVis And NX
Page 10: OpenGL: Unigine Heaven
Page 11: OpenGL: Unigine Sanctuary
Page 12: OpenGL: Unigine Tropics
Page 13: OpenGL: PostFX And TessMark
Page 14: DirectX: Autodesk Inventor
Page 15: CUDA: 3ds Max + iray Renderer
Page 16: CUDA: Blender
Page 17: CUDA: Octane
Page 18: CUDA: FluidMark 1080p
Page 19: OpenCL: Bitmining, LuxMark, And ratGPU
Page 20: OpenCL: Computational Operations
Page 21: OpenCL: Image Processing
Page 22: OpenCL: Video Processing
Page 23: GeForce GTX Titan: Fast, But Not A Workstation Card
OpenGL: TcVis And NX
Siemens Teamcenter Visualization Mockup (tcvis-02)
As we saw in the Pro/ENGINEER benchmark, these numbers show why it's better to use professional-class hardware and drivers for workstation-oriented software. AMD's Radeon HD 7970 GHz Edition is the only card that even comes close to being usable, and when we say it comes close, we don’t mean it actually gets there. The GeForce GTX Titan’s performance is nowhere near acceptable for this type of work.
Siemens NX (snx-01)
The same picture emerges once again. AMD's Radeon HD 7970 GHz Edition manages a frame rate three times as high as the Titan's.
We know from the numbers we're running for our workstation story that Nvidia's Quadro cards are highly competitive in professional applications. The same cannot be said about the company's desktop-oriented boards, though. Apart from EnSight and Maya, even a $1,000 GeForce GTX Titan just isn't usable.
These results prove that the Titan is more than twice as fast as the rest of the cards tested here when it comes to rendering in CUDA-based apps like iray and Blender, and most likely V-Ray too. Even if the Titan had turned out to be merely on par with the older-generation GTX cards, which it didn't, the 6GB of onboard memory is an absolute must for me. My current 3GB GTX 580s are almost maxed out on VRAM because of the high level of detail I model at, and that's before texturing. The alternative is buying an older-generation card like the Quadro 6000 or Tesla C2075 at over twice the price of the Titan, or spending more than three times the price of the Titan on the newer Tesla K20/K20X.
* Good viewport performance.
* Great gaming performance.
* More than twice as fast as the older GTX cards in CUDA-based production rendering.
* 6GB of onboard memory for huge data sets, at less than half the price of older 6GB Quadro/Tesla cards.
This card is a win-win-win for the apps I use, so the “its price is just too high for the performance it offers in professional applications” remark is completely wrong.
Would you rather spend £7,600 on four older 6GB Tesla cards in your render node, or £3,300 on four Titans and get over twice the performance? Do the math.
I understand the advantages of Quadro/Tesla cards: optimised drivers, higher-yield chips, better stability, and durability. But take the GTX 480 vs. the Quadro 6000 as an example: up to 30% extra viewport performance for over 700% of the cost. From a business point of view, the math just doesn't add up.
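To put that back-of-envelope math in one place, here is a minimal sketch using only the figures quoted above; the performance multipliers are the commenter's rough estimates, not benchmark results:

```python
# Back-of-envelope price/performance comparison using the figures quoted above.
# The performance multipliers are rough estimates from the comment, not measured data.

def perf_per_pound_ratio(price_a, perf_a, price_b, perf_b):
    """How much performance per pound option B delivers relative to option A."""
    return (perf_b / price_b) / (perf_a / price_a)

# Render node: 4x older 6GB Tesla cards (£7,600) vs. 4x GTX Titan (£3,300, "over twice the performance").
print(f"Titan node vs. Tesla node: {perf_per_pound_ratio(7600, 1.0, 3300, 2.0):.1f}x performance per pound")

# Viewport: GTX 480 vs. Quadro 6000 ("up to 30% extra performance, over 700% of the cost").
print(f"Quadro 6000 vs. GTX 480: {perf_per_pound_ratio(1.0, 1.0, 7.0, 1.3):.2f}x performance per pound")
```

On those assumptions the Titan render node delivers roughly 4.6 times the performance per pound of the Tesla node, while the Quadro 6000 returns only about a fifth of the GTX 480's performance per pound.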
I have owned Quadro cards in the past and always ended up disappointed by the very slight viewport performance increase over the desktop equivalent, feeling I had just wasted a lot of cash for nothing. One of the mechanical models I am working on has over 30 million polygons so far, and the GTX 580 throws it around the viewports with ease.
For gaming, yes, this card is overpriced, and you are better off with a cheaper SLI/CrossFire configuration. But for professionals who need fast render times and work on large data sets, this card is a much cheaper and faster option than spending a lot more cash on Quadro/Tesla cards.
I already ordered two Titans this morning. I will order another two when my other kidney sells on eBay.
BTW, I'm hoping the OpenCL benchmarks all make it into the GPU Charts. I'd like to know how the HD 7870 stacks up, at least. As a new owner of one, I'm pleased by the showing the other Radeons made. I had expected the Titan to do better in OpenCL, based on all the hype.
Because it would be pointless. They use the same GPUs, but clocked lower and with ECC memory.
The whole point of Titan was to make a consumer card based on the Tesla GPU. I don't think AMD has a separate GPU for their workstation or "SKY" cards.
In pro applications:
1. The 7970 is generally quite bad.
2. The Titan has mixed performance.
3. Drivers make or break a card.
In more consumer-friendly 'general' apps:
1. The 7970 dominates. Completely.
2. The 680 is piss-poor (as expected).
3. The 580 may or may not compete.
4. The Titan is not worth having.
AMD needs to tie up more with pro app developers. That's the market that keeps expanding and will bring huge revenue.
It would have been interesting to see how the FirePro version of the 7970 performs compared to the HD 7970.
It isn't pointless, since it helps put into perspective where this non-pro video card stands in the professional world. It's like running a lot of gaming benchmarks on professional cards with no non-pro cards for comparison. You need perspective.
Other than that, it was an interesting read.
Cheers!
Show me one benchmark where AMD actually does well. The amount of fanboyism in your comment is unsettling; go back to your cave, troll.
I've been running Quadro drivers on my 9600GT & 560 Ti for years now.
You aren't quite settled on what a troll should be, are you? :trollface
You benchmarked iray but didn't mention how, with that 6GB, the Titan can handle scenes that most other cards can't touch and would simply reject. Then there is the issue of viewing massive scenes. To show what I am talking about, go down the page a little and click on the landscape/ocean scene, which kills a normal video card but would probably fit inside a Titan.
The Titan is faster than its pro origins, a fraction of the price, and has enough RAM on it to do serious work. I think it is a "no-brainer" as a workstation card: just buy it.
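As a rough illustration of why that 6GB matters, here is a minimal sketch estimating the geometry-only GPU memory footprint of a dense mesh such as the 30-million-polygon model mentioned earlier in the thread; the per-vertex and per-index byte sizes are assumptions for illustration, not figures from the article:

```python
# Rough estimate of GPU memory consumed by an indexed triangle mesh, geometry only.
# Byte sizes are illustrative assumptions (position + normal + UV, 32-bit indices);
# real applications add textures, modifiers, and framebuffer overhead on top.

def mesh_vram_gb(triangles, verts_per_tri=0.55, bytes_per_vertex=32, bytes_per_index=4):
    """Very rough VRAM footprint in GB for an indexed triangle mesh."""
    vertices = triangles * verts_per_tri            # shared vertices in a typical mesh
    vertex_bytes = vertices * bytes_per_vertex      # position, normal, UV
    index_bytes = triangles * 3 * bytes_per_index   # three 32-bit indices per triangle
    return (vertex_bytes + index_bytes) / (1024 ** 3)

print(f"{mesh_vram_gb(30_000_000):.2f} GB")   # roughly 0.8 GB before textures
```

Under those assumptions the raw geometry alone approaches a gigabyte; add high-resolution textures and working buffers and a 3GB card fills up quickly, which is exactly the scenario where the Titan's 6GB pays off.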
I must ask, though: was the Titan using all the compute resources available to it? There was that setting in Nvidia's Control Panel that lets you switch between more double-precision floating-point performance and more gaming performance. Was it enabled?
http://www.tomshardware.com/reviews/geforce-gtx-titan-gk110-review,3438-3.html
Thanks for the article. It is extremely thorough with the exception K1114 raises in the first comment.
Why not include some Quadros and FirePros? This would give a real frame of reference with regard to workstation performance.
Because the CUDA tests measure rendering speed, not viewport performance. And while it's interesting to see the CUDA numbers, they really need to be compared against pro cards for those interested in such things.
However, most 3ds Max studios doing architectural visualisation use V-Ray on the CPU across a render farm. They might use V-Ray RT locally, but CUDA is still distinctly secondary to using a graphics card for viewport acceleration.
It would also be nice to indicate any issues gaming cards have in applications like 3ds Max. Benchmarks are one thing, but trying to pick a non-existent vertex because the drivers haven't rendered the scene correctly quickly makes you realise why the Quadro cards exist. In my experience, the money saved isn't worth the frustration when you make your living from it.