Feedback appreciated on a build for a 3D / CAD workstation

wyvernwood

Following on from my previous thread:
http://www.tomshardware.co.uk/forum/366835-13-custom-build-graphics-workstation-queries I've now managed to narrow the build down, and would really appreciate feedback and comment - particularly on the compatibility of the components:

The desktop will be used for approx 30% CAD, 40% 3ds Max, 20% Photo/image PP, 10% other (including occasional games)

I'm intending to purchase within the month, and to build it myself.

CPU: Intel i7 3930K 3.20 GHz (£450.00)
Motherboard: ASUS P9X79 PRO LGA 2011 (£277.00)
Memory: 32GB (8x4GB) DDR3 Quad Channel RipjawsZ 1866MHz (£155.00)
Graphics: EVGA GeForce GTX 680 Classified 4096MB GDDR5 (£500.00) <- For iRay rendering in 3dsMax (1536 CUDA cores)

HD1: Samsung SSD 256GB 830 (£135.00)
HD2: 2TB Western Digital SATA Caviar Black 64MB cache (£155.00)
DVD/CD RW: DVD/CD RW - SATA (£20.00)

Case: ATX Full Tower Fractal (USB3) (£75.00)
PSU: Corsair TX 650W V2 ATX2.31 80Bronze Power Supply (£75.00)
CPU fan: Noctua NH-D14 *2011* Dual Radiator CPU Cooler (£68.00)
Case Fan:

OS: Win 7 Pro 64bit (£115.00)

I still haven't got a full handle on the best cooling approach for this desktop, so the case / case cooling approach still needs tying down.

Overall this comes out at about £2,000, which is more than I initially wanted to spend. The only way of decreasing it is to drop each item down a notch or so (i7 3820, 16GB RAM, GTX 680 2GB, 1TB HD), which brings the overall cost to approximately £1,500. I'm trying to work out whether the extra £500 is worth spending - it is, after all, a significant chunk of a licence for After Effects etc.

Thanks for any feedback.
 

brandon402

It looks like the ultimate build for rendering! Everything looks compatible. You may want to get a 750 watt power supply; 650 watts will cover it, but they say it's best to have some "headroom".

If you wanted to save some money you could do an i7-3770K build. It will still be fast, but you would have 8 threads instead of the 12 you get with the 6-core. I render too, but I couldn't bring myself to pay almost $600 for the 3930K.
 
Since you are using pro software, I would go with a pro graphics card - the AMD FirePro V5900, as it's affordable and faster than what Nvidia offers in that price range. Sure, consumer cards will work, and the 400 and 500 series are currently faster at rendering than the 600 series because of software optimization, but pro cards will give you better performance, more stability and better support. If you plan to overclock, go with a 3770K, as it will save you some dough and be nearly as fast as the 3930K. If you don't overclock, go with an 1155 Xeon, as you can get 8 threads for the same price as a 3770K or cheaper, depending on clock speed.
 

wyvernwood



Sorry, I should probably have been a bit clearer on the rendering the system will be used for. I would like to use 3ds Max / iray, which is Nvidia-only and dependent upon the CUDA cores offered by their cards. So the choice is really between the GTX 680 and the pro-level Quadros.

If I'm right in my analysis:

For the same money, you get more CUDA cores and memory (essentially for holding polys/textures) with the GTX than you do with the Quadros. The Quadros will, I think, give more stability and be better suited to the constant demands of long renders.

On balance I decided to go with the GTX 680 (1536 CUDA cores / 4GB RAM) for £500 rather than the Quadro 4000 (256 CUDA cores / 2GB RAM) for about £725 - it will deliver better iray performance than the Quadro, although with a risk of overheating over long stretches.

My main productivity rendering will be Mental Ray (which won't use the GPU anyway, other than for viewport refresh); I aim to develop iray skills with the system - hence the inclusion of the GTX.
 
Okay, Nvidia it is. Like I said though, although the number of cores does matter, the 500 series has been faster than the 600 series because the software is optimized for its architecture - the 600 series is still relatively new. Eventually the 600 series will get optimized too, but you will probably see lower speeds with it to start. Although the Quadros tend to have lower specifications, they do have more stable drivers that are written specifically for the apps you use and are designed with long render runs in mind. The newest version of CUDA has just come out too. All of the benchmarks I have read show Quadros smoking GeForce cards in pro apps (those are older cards though). It would be nice to see Tom's do an updated article on this.
 
I've been scouring the web for benchmarks and see the GeForce leading in some software, which is great - more performance for less money. I just can't find new Quadro cards running benches against new GeForce cards. I am curious to see how the 680 works out for you. Please post on how it goes.
 

wyvernwood

Same here - there is very little that I have found doing a direct comparison with the later cards.

I did find some outdated info comparing the Quadro 4000 with the GTX 400 series, in which the 400s scored quite well on the iray side of things (although not as well on viewport refresh - and they were obviously running far hotter).

Since this is about accessing the iray technology, I may settle for the GTX 680 for the time being (I've been looking for a supplier of the 580s, and they seem to be getting rarer now, especially the 3GB versions).

Ultimately the Quadros would be the way to go, but that memory restriction could be a limiting factor for me (1.5GB on the Quadro within my budget compared to 4GB on the 680 for less money). If I take to iray - or other GPU renderers - I can always upgrade the card to a Quadro (or Quadro + Tesla).

Going for a GTX 580 instead of a 680 would only save about £100 and would mean 1.5GB instead of 4GB of RAM, so I think I will still opt for the 680.
 

wyvernwood

I did find this on Jeff Patton's site - on his FAQ about IRAY:

IRAY FAQ

His response on Q3 is interesting:

Which GPU should I buy?

As a rule of thumb here’s what I’ve been telling others that asked the same question: Start with something like a 2gb or 3gb GTX card (a 3gb 580 is a good choice) to get a feel for iray and/or GPU rendering in general to see if it even fits into your pipeline. If it doesn’t, well at least you have a nice gaming card to use or sell. If you’ve tried it and discovered that GPU rendering will work well for you and your GPU temps aren’t terribly bad, then you may want to get another GTX type card or two for faster renders.

After testing, if you discover your scenes require a bit more than 3gb then you have to decide whether or not it’s feasible for you to invest in the higher end Quadro and/or Tesla cards. Typically you’ll use a Quadro series GPU to drive your viewport and Tesla for rendering. You can render with the Quadro GPU’s, but do NOT buy a Quadro for rendering only. I say this because the Tesla computing GPUs are less expensive. For example, a 6GB Quadro 6000 cost around $4,000.00 while a 6GB Tesla c2075 will cost around half that at $2,000.00.
 
RAM - are those low-profile modules, so they will fit under ya cooler? I'd suggest these over the G.Skills:

http://www.newegg.com/Product/Product.aspx?Item=N82E16820226327

Nothing bad to say about the Samsung 830, but check prices against the other tier 1 SSDs:

http://www.tomshardware.com/reviews/ssd-recommendation-benchmark,3269-6.html

SandForce controller with Toggle DDR NAND (Mushkin Chronos Deluxe, Patriot Wildfire, OCZ Vertex 3 Max IOPS, OWC Mercury Extreme Pro 6G, Corsair Force GS) .... Samsung 830 SSD 256 GB, Plextor M3 Pro 128 GB/256 GB, OCZ Vertex 4 512/256 GB

HD - this is the one item where I'd say "take another look". I've been using the Barracuda XTs exclusively in CAD and video boxes.

http://benchmarkreviews.com/index.php?option=com_content&task=view&id=708&Itemid=60&limit=1&limitstart=10

For a hard drive, the Barracuda XT series offers the best performance available from a 7200 RPM mechanical storage device.

Given ya usage (little to no gaming) I wouldn't be sizing for SLI, so I'd go with a 650 watter.

Here is Guru3D's power supply recommendation:

GeForce GTX 680 - On your average system the card requires you to have a 550 Watt power supply unit.
GeForce GTX 680 SLI - On your average system the cards require you to have a 750 Watt power supply unit as minimum.
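
As a rough sanity check of my own (back-of-envelope TDP figures, not from Guru3D):

GTX 680: ~195 W TDP
i7-3930K: 130 W TDP
Board, RAM, drives, fans: ~50-75 W
Full-load total: roughly 375-400 W, so a decent 650 W unit leaves plenty of headroom for a single card.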

As for the cooler, here's a comparison between the Noc and the Phanteks:

Thermal: Phanteks 50.75C / Noctua NH-D14 51.25C - edge Phanteks by 0.5 degrees*
Warranty: Phanteks 5 years / Noctua NH-D14 1 year - edge Phanteks by 4 years
Aesthetics: Phanteks several color options / Noctua NH-D14 one ugly option :)

* other sites show different results and they oft trade wins.

The CUDA "optimizations" you read about are in part not so much about optimization as about locking CUDA performance out. Adobe, for example, locks CUDA and prevents you from using it unless it's on a card that has gone through their long and expensive testing program. You can unlock CUDA in CS on any modern CUDA card with a simple file edit, as described here:

http://www.studio1productions.com/Articles/PremiereCS5.htm
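
For reference, the trick that article describes is (working from memory of the write-up, so double-check the exact path and wording on your own install): run GPUSniffer.exe from the Premiere Pro folder to see the exact name it reports for your card, then add that name on its own line at the end of cuda_supported_cards.txt in the same folder, e.g. add:

GeForce GTX 680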

You can find CUDA capable cards here

http://developer.nvidia.com/cuda/cuda-gpus

As you can see, the 5xx series cards only do CUDA compute capability 2.0/2.1 (there's a quick way to check what your own card reports after the list):

GeForce GTX 690 3.0
GeForce GTX 680 3.0
GeForce GTX 670 3.0
GeForce GTX 660 Ti 3.0
GeForce GTX 660 3.0
GeForce GTX 650 TI 3.0
GeForce GTX 650 3.0
GeForce GTX 560 Ti 2.1
GeForce GTX 550 Ti 2.1
GeForce GTX 460 2.1
GeForce GTS 450 2.1
GeForce GTS 450* 2.1
GeForce GTX 590 2.0
GeForce GTX 580 2.0
GeForce GTX 570 2.0
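
If you want to see what a card in hand actually reports, here's a minimal query of my own using the CUDA runtime API (just a sketch, nothing official - compile with nvcc and run):

#include <stdio.h>
#include <cuda_runtime.h>

// Lists each CUDA device with its name, compute capability and memory.
int main(void) {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("%s: compute capability %d.%d, %lu MB\n",
               prop.name, prop.major, prop.minor,
               (unsigned long)(prop.totalGlobalMem >> 20));
    }
    return 0;
}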

The other part is that we haven't yet seen "big Kepler". It has been widely rumored that the 660, 670 and 680 we see today were originally scheduled to be the 640, 650 and 660, and that because of the 78xx / 79xx series performance numbers, Nvidia renumbered the line. It was explained at release that these cards were designed primarily as gaming cards, not for the dual gaming / GPGPU use we saw with the high-end cards in the 4xx and 5xx series. So yes, while not an optimization issue, they just don't have the oomph in CUDA that their predecessors did, simply because Nvidia was chasing a different market with what it originally intended to be its midrange cards.
 

grifFH

Just a quick note about iray. Max 2013 / iray doesn't support Kepler as of Product Update 5, so a 6xx series card or a Kxxxx Quadro will be useless for iray until Autodesk decides to update Max.
 

willyroc

Which is why I would get an 1155 Xeon, preferably the E3-1230 V2 since it's essentially the same as a 3770, coupled with a Z77 motherboard, and a GTX 670, since it has the same memory bandwidth as the 680.
 

wyvernwood



OK - with all that I have read, I hadn't picked up on this crucial point, and I've slammed the brakes on.

At present, then, Max 2012/2013 iray will not function with a GTX 680?

Is the GTX 670 an older version of Kepler that is compatible with iray in 2012/13?

Is support for a GTX 680 in Max simply that - i.e. the card is not certified, but still workable?


Thinking about it, it isn't necessarily a show-stopper for us. Our present licence is for 2010, so we will need to upgrade Max before we can access iray anyway. I could still go for the system (to drastically increase performance in our Mental Ray pipeline), put a lower-spec card in, and then upgrade to a 680 or a Quadro once we upgrade Max (and once iray is compatible with the newer cards).
 

grifFH

Kepler support hasn't been implemented in iray for Max 2012/13. With Kepler cards iray runs in CPU-only mode - it will not use any 6xx card to render.
Just to be clear, you can use 6xx series cards in Max for viewport display. Autodesk doesn't certify consumer cards, but they do some testing; their GPU list for Max is here. If you show all results, the 580/680 pass all the tests. We've been using the consumer cards without any problems.