New Titan Xs for Iray rendering in 3ds Max?


DoDidDont

Distinguished
I wasn't sure whether to post this in Apps or GPUs, so I did both.

I was thinking about upgrading my 4x GTX Titans (2688-core) to 4x GTX Titan Xs for rendering with Iray, until I saw these benchmarks on Tom's Hardware Germany.

http://www.tomshardware.de/geforce-quadro-workstation-g...

The benchmarks for Iray do not make sense.

The Quadro M6000 and the Titan X are essentially the same GPU, with the same estimated single precision performance of around 7 TFLOPS, so there is no obvious reason why the M6000 should be 3x faster than the Titan X in the Iray benchmark.
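For reference, the peak single precision figure is just cores x clock x 2 FLOPs (one fused multiply-add) per core per cycle, so both GM200 cards land in the same ballpark. A rough sketch of the arithmetic, using the published core counts and approximate base clocks (boost clocks push both cards toward the quoted ~7 TFLOPS):

# Peak FP32 estimate: 2 FLOPs (one fused multiply-add) per CUDA core per clock.
# Both GM200 cards have 3072 CUDA cores; clocks below are approximate base
# clocks, and boost clocks of roughly 1.1 GHz and up take both toward ~7 TFLOPS.
def peak_fp32_tflops(cuda_cores, clock_mhz):
    return 2 * cuda_cores * clock_mhz * 1e6 / 1e12

print("Titan X @ base:", round(peak_fp32_tflops(3072, 1000), 2), "TFLOPS")  # ~6.1
print("M6000   @ base:", round(peak_fp32_tflops(3072, 988), 2), "TFLOPS")   # ~6.1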

It cannot be about drivers, otherwise the Blender and Octane results would show the same pattern, with the M6000 3x faster. Instead, the results for Blender, Octane etc. are exactly as expected, with the Titan X a little faster than the M6000 because of its higher clock rates.

So why is Iray different?

The Maxwell patch must have been applied to 3ds Max, otherwise the Quadro M6000 and Titan X would not work in 3ds Max at all.

This leaves only two possibilities.

One, the test is a rotten egg and needs to be run again in 3ds Max 2016.

or

Two, Nvidia are purposefully crippling desktop Maxwell GPUs in Iray to promote their Quadro cards! Nvidia develop Iray, so they cannot manipulate the results in Blender or Octane; only in Iray can they do this, because they make both the GPUs and the software.

The Iray test should show very similar results to the Blender / Octane / RatGPU and Luxmark tests, with the Titan X slightly faster than the M6000 because of its higher clock rates.

If it is the case that Nvidia have decided to cripple Maxwell desktop cards in Iray to force people to buy their extortionate Quadro cards, then I for one will not be upgrading.

Do Nvidia think that freelancers like me are going to spend around $24,500 (£16,400) on Quadro cards instead of approximately $5,200 (£3,500) on Titan Xs? The majority of freelancers and small studios can't afford it, so if Nvidia are crippling the Titan X in Iray, I am sure they will end up losing tens of millions in potential sales and upgrades from people like me.

I am not going to spend an extra $19,300 (£12,900) on what are essentially the same GPUs just because Nvidia want to force people to buy Quadros by crippling performance in Iray. So if the benchmark is not a rotten egg, and Nvidia have crippled the desktop Maxwell cards in Iray, they have lost my money, and I'm sure once word gets around, tens of millions in other potential sales. Freelancers and studios will just stick to their older Titans.

I am not sure what the exact statistics are, but I am sure that the millions of freelancers and small studios out there outweigh the larger studios that can afford M6000s in their workstations and render nodes, so this would be a very bad business move by Nvidia.

Does anyone else know of any benchmarks in Iray comparing the Titan X to the M6000? Hopefully this benchmark is wrong, but I would not put it past Nvidia to do this.

Has anyone upgraded their old Titan to a new Titan X and run benchmarks in Iray against both old and new to see what the time improvements are?

If you are unhappy about Nvidia doing this, spread the word!

Update


Because the post went unsolved, I am guessing a moderator selected Shneiky's answer as the solution, thinking that Quadros are better optimised for 3ds Max performance. But we are talking about CUDA rendering, so Shneiky's answer is not the solution and is completely wrong.

The Titan X is, at the time of writing, the fastest card available for Iray rendering with 3ds Max. The problem was solved with a software patch and driver updates, and the original benchmark on Tom's Hardware DE was incorrect and misleading.

There is a big difference between viewport performance, which Quadro cards are usually best at because of their optimised drivers, and CUDA performance (ask a CUDA developer!), and unfortunately Shneiky is wrongly conflating the two. The M6000 and Titan X are identical in hardware specs and have the same single precision compute estimate of around 7 TFLOPS. Iray currently only uses single precision, and the Titan X has higher clocks than the M6000, so it renders fastest in Iray.

Check the benchmarks here:

http://www.migenius.com/products/nvidia-iray/iray-bench...

The Titan X has also proved to be slightly faster in 3ds Max viewport performance this time around, according to SPECviewperf benchmarks, so the answer that was marked as the solution (which I did not pick) is wrong on both counts!

If Tom's is letting people mark solutions on posts that have not actually been solved, at least make sure other people know the facts!
 

Shneiky

Distinguished
nVidia has been doing this for years. They have been decreasing the compute capabilities of GTX cards ever since Kepler. Some people even found that GTX drivers were not letting Kepler cards use all their CUDA cores in Premiere Pro.

People who upgraded from 570s to 680s (from Fermi to Kepler) witnessed a performance drop in professional software. While Kepler has a 1:8 or 1:16 (I believe) double-precision-to-single-precision ratio, that ratio for Maxwell has been brought down to 1:32. Older Kepler cards win in a lot of compute scenarios against the new architecture.

People around me who had Fermi Quadros saw improvements in some areas and a drop in other areas when they benched new Kepler Quadros. All of those people are reluctant to even upgrade to Maxwell Quadros, because of the performance drop in some areas. Most of them are waiting for Pascal.

So, here you have it. Maxwell is less compute-oriented than Kepler. nVidia wants you to buy Quadros for their Iray or whatever, so all GTX cards, including the Titan X, are far behind.
 

DoDidDont

Distinguished


Hi Shneiky,

This is not the case with the new Maxwells. The Titan X and M6000 have exactly the same single precision performance, and the drivers are not crippled. You can see from the Blender and Octane benchmarks that the Titan X is faster than the M6000 because of its higher clock rates; only in Iray is the M6000 3x faster. The Iray results should follow the same pattern as Blender and Octane, with the Titan X being slightly faster. There is no explanation for this other than the test being wrong, or Nvidia crippling the Titan X only in Iray, which they develop.

Unlike the GTX 680, which had terrible single and double precision compute, the old Titans and the Titan X are not crippled, and the Titan X has 7 TFLOPS of single precision, the same as the M6000.

The older Titans were also faster than the previous-generation Quadros in Iray and were not crippled.

So this can only be down to the Iray software.

If the Titan X were crippled by the drivers, then all benchmarks would show the same result, with the M6000 being 3x faster in the single precision tests.

 

Shneiky

Distinguished
You are making two wrong assumptions here:

1 - Octane and Iray are very different in nature.

GPU renderers claim to be "unbiased", or in other words to take a brute-force route to ray tracing. Even with that in mind, each piece of software has its own way of setting up the acceptable tolerances the rendering engine will operate within. Different algorithms and optimizations can lead to varied results in both quality and speed.

2 - Yes, this is also the case with Maxwell. The old Titan delivers more than twice the double precision compute of the Titan X.

http://www.anandtech.com/show/9059/the-nvidia-geforce-gtx-titan-x-review/15
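Putting rough numbers on it, peak FP64 is just the FP32 rate multiplied by the architecture's DP ratio. A back-of-envelope sketch (core counts are the published specs; clocks are approximate base clocks, and the 1/3 rate on the old Titan assumes the driver's double precision switch is enabled):

# Rough peak FP64 = 2 FLOPs per core per clock x the architecture's DP ratio.
# GK110 (original Titan) can run FP64 at 1/3 rate with the driver switch on;
# GM200 (Titan X / M6000) is fixed at 1/32. Clocks are approximate base clocks.
def peak_fp64_tflops(cuda_cores, clock_mhz, dp_ratio):
    return 2 * cuda_cores * clock_mhz * 1e6 * dp_ratio / 1e12

print("GTX Titan (GK110):", round(peak_fp64_tflops(2688, 837, 1 / 3), 2), "TFLOPS")   # ~1.5
print("Titan X   (GM200):", round(peak_fp64_tflops(3072, 1000, 1 / 32), 2), "TFLOPS") # ~0.19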

I am not an expert in GPU rendering; I use production CPU rendering most of the time, but here is my explanation, if you want my 2 cents. Octane is single precision. Single precision on the Titan X and the Quadro is rather similar. Iray is double precision. The Titan X suffers in double precision, whereas the Quadro does not.

-------
http://render.otoy.com/forum/viewtopic.php?f=9&t=45545

"Octane doesn't use double precision with the small exception of hair segments. Everything else is single precision only"

------

While I cannot find the exact link right now to point to the source, my research into Iray some time ago, when we had to pick a renderer for the office (we went with V-Ray), was that Iray does contain a lot of double precision calculation.
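If anyone wants to see the single vs. double precision gap for themselves, something like the rough sketch below should show FP32 pulling far ahead of FP64 on a GeForce card. It assumes a CUDA-capable GPU and the CuPy Python library, and it measures raw matrix-multiply throughput, not Iray itself:

# Rough FP32 vs FP64 throughput comparison (not an Iray benchmark).
# Assumes a CUDA GPU and the CuPy library; numbers are only illustrative.
import time
import cupy as cp

def matmul_tflops(dtype, n=4096, reps=10):
    a = cp.random.rand(n, n).astype(dtype)
    b = cp.random.rand(n, n).astype(dtype)
    cp.matmul(a, b)                    # warm-up
    cp.cuda.Device().synchronize()
    start = time.perf_counter()
    for _ in range(reps):
        cp.matmul(a, b)
    cp.cuda.Device().synchronize()
    elapsed = time.perf_counter() - start
    flops = 2 * n ** 3 * reps          # multiply-adds in an n x n matmul
    return flops / elapsed / 1e12

print("FP32:", round(matmul_tflops(cp.float32), 2), "TFLOPS")
print("FP64:", round(matmul_tflops(cp.float64), 2), "TFLOPS")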

 

DoDidDont

Distinguished


The link below shows older benchmarks with the Titan (2688-core) in Iray and Blender.

http://www.tomshardware.co.uk/best-workstation-graphics-card,review-32728-18.html

1. The results are as expected on all the single precision benchmarks in the link above, with the Titan being the fastest because of its higher clock rates compared to its Quadro equivalent. So single precision applications do behave as expected, and the new Titan X vs M6000 results should follow the same pattern.

2. The reason I only mention single precision in my posts is that Iray only uses single precision, so talking about double precision is irrelevant. I know double precision is bad on the new Maxwell, but it is bad on both the Maxwell Titan and Quadro cards.

It is obvious from the Titan X vs. M6000 benchmarks that the only single precision application with bad results for the Titan X is Iray. This can only be a software issue, and if you bother to look at the hardware specs of both the M6000 and Titan X, their single precision performance is identical.

The Iray, Blender, Octane, Lux and RatGPU results in the old comparison are as expected, with the Titan being faster. The new results are also exactly as expected, with the Titan X again faster in Blender, Octane, Lux and RatGPU, with the exception of Iray, which Nvidia develop. So this cannot be a driver or hardware issue, but crippling through the Iray software. It's just simple logic.

I've been using 3ds Max since version 4.2 and I am now using the latest 2016 version. In tests I have carried out, there is a slight loss in performance using the 4x Titans (2688-core) when double precision is enabled, because GPU Boost is disabled. Iray definitely does not use double precision. The Titans I am currently using have great double precision performance, but as I do not use apps that take advantage of DP, it's not important to me that the Maxwell GPUs suffer in this category.
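For anyone who wants to check whether the cards drop out of boost when the double precision switch is flipped, comparing the current and maximum SM clocks is enough. A rough sketch, assuming nvidia-smi is on the PATH (the bare command works just as well without Python):

# Print current vs. maximum SM clock for each GPU via nvidia-smi.
# Assumes the nvidia-smi tool that ships with the driver is on the PATH.
import subprocess

output = subprocess.check_output([
    "nvidia-smi",
    "--query-gpu=name,clocks.sm,clocks.max.sm",
    "--format=csv",
]).decode()
print(output)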
 

DDWill

Distinguished
I have to agree with DoDidDont, sorry Shneiky.

I remember the article on Tom's a few years ago that DoDidDont gave a link to. It's one of the reasons I upgraded my old GTX 580s to Titan Blacks.

I would say there's something wrong with that Iray test result on the German site. I also would have expected the new Titan X to have faster results than the M6000 in the Iray test, as its GPU is clocked higher.

I was thinking about the upgrade as well, but these benchmark results are a little confusing. I can see your logic, DoDidDont.

I hope this is just a test result gone wrong, but it does look like a software problem. As you said, the Titan X and M6000 should have the same SP performance, and the other benchmarks show that they do, with Iray being the exception.

Shneiky, Iray only uses single precision.

I will put a post on the Iray forum with that link from Tom's DE to see what the Iray/Nvidia team have to say.

Cheers for the link DoDidDont.
 

DoDidDont

Distinguished
I was having the same discussion over on Autodesk's 'The Area' forum. Migenius have updated their Iray benchmarks, and they tell a very different story, with the Titan X being the fastest card, slightly ahead of the M6000.

The tests were carried out in 3ds Max 2015 SP2, so I trust these results a lot more than those from the Tom's Hardware DE site, which used 3ds Max 2013. Maybe Tom's DE modified the DLL library but didn't do a very good job of it.

There are still people saying the Titan X doesn't work in 3ds Max 2015 and that the Maxwell patch does not currently include the Titan X in its library, but the guys over at Migenius must have got the Titan X to work somehow, so I think it's only a matter of time before there is an updated patch.

The link to the Migenius benchmark is below.

http://www.migenius.com/products/nvidia-iray/iray-benchmarks-2015
 