
Rendering with 6 GTX 580 cards

July 25, 2012 12:23:54 PM

Hello,

I want to build a new rig from the ground up that supports 6 GTX 580 cards (this is key, since it will be dedicated to GPU rendering). What would be the best setup in terms of motherboard and PSU? Any overall setup suggestions for this configuration? I'm a total newbie and any help would be highly appreciated!
July 25, 2012 12:36:00 PM

This topic has been moved from the section Overclocking to section Systems by Pyree
July 25, 2012 12:51:31 PM

I don't think it's even possible to get more than 4 cards in CrossFire/SLI, and I doubt you could find a consumer-grade motherboard with six x16 PCI-E 3.0 slots on it. The closest I can think of is two separate PCs linked over a network, both working on the job.

I think you would get more rendering performance from a $1500 CPU than you would from 6 mainstream graphics cards. If you're willing to spend that much money, I would look into Quadro cards, as they're dedicated to rendering.

As for PSU requirements, it would be well above whatever consumer-grade stuff you could get. Maybe if you got two 800W supplies and connected one to the mobo and 2 cards, and the other to the leftover cards, then somehow rigged the 2nd PSU to turn on with the first. At that power draw, you'd better check how much you can actually pull from the wall, as 1600W would probably trip a circuit breaker, so you'll have to do something about that.

July 25, 2012 12:56:13 PM

SLI only does up to 4 GPUs, so kindly send your spares for me to use :D

Seriously though, rendering software will typically only use 1 GPU, with the possibility of using a 2nd. If you really think you need the horsepower of 6 GPUs, then you should sell them all off and invest in a high-quality Quadro or FirePro GPU, which will bring much better performance, 10-bit color, and error correction while requiring much less power and producing much less noise.

As far as the platform is concerned: you can do a lot with civilian LGA1155 i7 CPUs, but if this is for real work then you really need to move up to LGA 2011 with a 6-core (12-thread) CPU. Remember that core count is more important than core speed for high-end workloads (though having both is always better). This should be paired with 32GB of RAM (4x8GB if you want to move up to 64GB later, but 8x4GB may be fine).

If you have a lot of money to burn, then look into dual LGA 2011 Xeon E5 setups. They can offer dual 8-core HT configurations, which would give you 16 cores and 32 threads of processing power. Again, lower core speeds, but the massive parallel performance is unmatched. Such a system should be paired with 64 or 128GB of RAM (2-4GB per thread).

For more specific advice, please let us know what software you are using, the scope of the projects you are doing, and what kind of budget you have to work with.

In all likelihood a single 580 will do the trick for what you want, and dual 580s will let you game well too. Sell the rest and invest in high-quality monitors to work with (at least 2 and preferably 3) and other high-end components.
July 25, 2012 12:59:22 PM

Oh, for power supplies: only select a power supply after you have your build finalized.

Use a power calculator to figure out what you need, and then stick with something that is at or 100-200W above that figure. Never go below, and never get something massively overpowered for your system.

Stick with a quality power supply, such as PC Power and Cooling or Corsair, that is rated 80+ Bronze or better.
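
As a rough sketch of what such a power calculator is doing (every wattage below is a placeholder assumption for illustration, not a figure from this thread or a measurement), something like this:

```python
# Rough PSU sizing sketch. Every wattage here is a placeholder estimate for
# illustration only; run your actual parts through a real power calculator.
parts_watts = {
    "cpus_x2": 2 * 130,     # assumed ~130W per Xeon
    "motherboard": 60,      # assumed
    "ram_and_drives": 40,   # assumed
    "gpus": 6 * 250,        # 6 cards at an assumed ~250W each under load
    "fans_and_pumps": 30,   # assumed
}

estimated_load = sum(parts_watts.values())
headroom_low, headroom_high = 100, 200   # the 100-200W buffer suggested above

print(f"Estimated load: {estimated_load} W")
print(f"Look for a PSU rated roughly {estimated_load + headroom_low}"
      f"-{estimated_load + headroom_high} W")
```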
July 25, 2012 1:12:35 PM

What are your budget requirements?
July 25, 2012 1:36:24 PM

Thank you for all your suggestions! To clarify, I wanted to custom build something like this one that I came across in my research:

http://blog.renderstream.com/2010/11/renderstream-annou...

What do you guys think?
Regarding budget, it seems like this type of setup will run about a minimum of $8k to build yourself? (Since 8 x GTX 580 3GB cards will already cost about $650 each.)
Would it be better to custom build or buy pre-built like the one above?
July 25, 2012 2:31:40 PM

The ASUS Z9PE-D16 motherboard (~$400, SSI EEB) has 6 PCI-E 3.0 x16 slots (some of them run at x8). You will need single-slot graphics cards, as well as 2 Xeon LGA2011 processors.

The Cooler Master Cosmos II case fits this motherboard; however, 3 of the standoffs are not in the correct position for this board. It also has places for you to mount water cooling.
July 25, 2012 2:40:52 PM

He won't be using SLI, guys....

The biggest issue that I'm aware of is size. First you'll need a motherboard with the slots. I think this one is used often.

http://www.newegg.com/Product/Product.aspx?Item=N82E168...

After that you'll need the cards. The problem is that GTX 580s are all dual-slot, so you'll need to go to water cooling to get them down to a single slot. After that it's a matter of power. Each GTX 580 can use around 250W, so 250 * 7 = 1750W. You'll need the biggest of the USA-approved PSUs, and even then you'll probably need to downclock the cards. A better idea is to only run 6 cards or use two PSUs. I wonder how that box uses only one PSU....

Before anyone suggests it, you don't want to use GTX 680/670s. They don't have as good 64-bit FP compute abilities as the GTX 580.

http://www.anandtech.com/show/5699/nvidia-geforce-gtx-6...
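
To spell that arithmetic out (the per-card wattage is the same rough 250W estimate used above, not a measured figure), a quick Python sketch:

```python
# Total GPU draw for a few card counts, using the rough 250W-per-card
# figure quoted above (not a measured number).
WATTS_PER_CARD = 250

for cards in (6, 7, 8):
    total = cards * WATTS_PER_CARD
    print(f"{cards} cards -> ~{total} W for the GPUs alone, before CPU/board/drives")
```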
July 25, 2012 2:45:09 PM

GPGPU motherboards are not easily found, though.
July 25, 2012 2:46:19 PM

4745454b said:
He won't be using SLI, guys....

The biggest issue that I'm aware of is size. First you'll need a motherboard with the slots. I think this one is used often.

http://www.newegg.com/Product/Product.aspx?Item=N82E168...

After that you'll need the cards. The problem is that GTX 580s are all dual-slot, so you'll need to go to water cooling to get them down to a single slot. After that it's a matter of power. Each GTX 580 can use around 250W, so 250 * 7 = 1750W. You'll need the biggest of the USA-approved PSUs, and even then you'll probably need to downclock the cards. A better idea is to only run 6 cards or use two PSUs. I wonder how that box uses only one PSU....

Before anyone suggests it, you don't want to use GTX 680/670s. They don't have as good 64-bit FP compute abilities as the GTX 580.

http://www.anandtech.com/show/5699/nvidia-geforce-gtx-6...



My bet is that it's a custom-built box and motherboard, which explains its $14,000 price tag.

specs:
Quote:

Processor(s): 2 x Intel® Westmere Xeon® X5680 (3.33GHz)
Graphics Card: 8 x Video Cards ( Nvidia GTX 470, GTX 580, Quadro 4000, 5000, 6000, Tesla™ C2050, C2070, M2090 or AMD ATI HD6970)
Operating System: Linux 64-bit, Windows 7 64-bit
Memory: Maximum 144GB DDR3-1333 MHz ECC Registered Memory (18 x 8GB) 48GB Main Memory, 96GB RamDisk
Hard Drives: 2 x Intel 2.5" M25-e SSD or 2 x Intel 2.5" 510 SSD
Backplane Connection: InfiniBand (lose one of the GPUs) or Gigabit
July 25, 2012 5:51:31 PM

4745454b said:
He won't be using SLI, guys....

The biggest issue that I'm aware of is size. First you'll need a motherboard with the slots. I think this one is used often.

http://www.newegg.com/Product/Product.aspx?Item=N82E168...

After that you'll need the cards. The problem is that GTX 580s are all dual-slot, so you'll need to go to water cooling to get them down to a single slot. After that it's a matter of power. Each GTX 580 can use around 250W, so 250 * 7 = 1750W. You'll need the biggest of the USA-approved PSUs, and even then you'll probably need to downclock the cards. A better idea is to only run 6 cards or use two PSUs. I wonder how that box uses only one PSU....

Before anyone suggests it, you don't want to use GTX 680/670s. They don't have as good 64-bit FP compute abilities as the GTX 580.

http://www.anandtech.com/show/5699/nvidia-geforce-gtx-6...


Since the OP wants to do GPU rendering, the lane width on half of those slots (x1!) will cause a major performance hit. Server motherboards with multiple sockets, like the one I linked above, are a better option. In terms of processor price, if you don't spring for Xeon E5-2687W processors and instead go for entry-level or mid-range (after all, we're talking GPU performance here; the CPU is only there to keep them fed with instructions and shaders), it becomes more sensible.

Plus, with server motherboards, there are quad-channel memory slots for each processor. This translates to superior performance when transferring data from host (CPU) memory to device (GPU) memory. Since the transfer needs to happen quickly, an x8 or x16 link will provide that. However, as you said, size will be the biggest issue. It is more expensive to use single-slot graphics cards, but we're already talking about 6 high-end cards. The OP clearly has the money.

Not to mention that with server boards, the ethernet controller supports teaming the GbE LAN ports, which would be good if you want to cluster more than one of these beasts together in the future.

We can also discuss price. (Ballpark figures.)
ASUS Z9PE-D16 - $300
2x Xeon processors - Up to $4,000
8 x 4GB DDR3 1600 - $300 ish
6x GTX 580s - Assuming $700 per card, $4,200
PSU - Up to $500
Storage - Up to $200

This comes out to $6,000 to $9,500. Compared to the build you looked at, this is $1,000 to $5,000 cheaper.
July 25, 2012 6:45:40 PM

Shotzo said:
Since the OP wants to do GPU rendering, the lane width on half of those slots (x1!) will cause a major performance hit. Server motherboards with multiple sockets, like the one I linked above, are a better option. In terms of processor price, if you don't spring for Xeon E5-2687W processors and instead go for entry-level or mid-range (after all, we're talking GPU performance here; the CPU is only there to keep them fed with instructions and shaders), it becomes more sensible.

Plus, with server motherboards, there are quad-channel memory slots for each processor. This translates to superior performance when transferring data from host (CPU) memory to device (GPU) memory. Since the transfer needs to happen quickly, an x8 or x16 link will provide that. However, as you said, size will be the biggest issue. It is more expensive to use single-slot graphics cards, but we're already talking about 6 high-end cards. The OP clearly has the money.

Not to mention that with server boards, the ethernet controller supports teaming the GbE LAN ports, which would be good if you want to cluster more than one of these beasts together in the future.

We can also discuss price. (Ballpark figures.)
ASUS Z9PE-D16 - $300
2x Xeon processors - Up to $4,000
8 x 4GB DDR3 1600 - $300 ish
6x GTX 580s - Assuming $700 per card, $4,200
PSU - Up to $500
Storage - Up to $200

This comes out to $6,000 to $9,500. Compared to the build you looked at, this is $1,000 to $5,000 cheaper.


Thanks for the suggestions! We're looking at server board options based exactly on your points about data transfer and the ability to cluster them. A couple of questions: since there will be a minimum of 4-6 cards running for long periods of rendering time (could be days), is water cooling recommended even though there will be no overclocking, or is air cooling enough? If so, what type of cooling system do you suggest? Also, due to the power issues of running these cards, would it be more beneficial/manageable to run 4 cards and connect the 5th slot to an expansion case with its own power supply for the other 4 cards?

July 25, 2012 7:24:45 PM

Air cooling will be enough if you keep this beast in a 20 degree Celsius room, I guess.
July 25, 2012 8:01:39 PM

newbie00 said:
Thanks for the suggestions! We're looking at server board options based exactly on your points about data transfer and the ability to cluster them. A couple of questions: since there will be a minimum of 4-6 cards running for long periods of rendering time (could be days), is water cooling recommended even though there will be no overclocking, or is air cooling enough? If so, what type of cooling system do you suggest? Also, due to the power issues of running these cards, would it be more beneficial/manageable to run 4 cards and connect the 5th slot to an expansion case with its own power supply for the other 4 cards?


If you have found a way, or know of one, to run multiple dual-slot cards with the ASUS Z9PE-D16 server board, please let me know. I'd be interested.

I would strongly suggest water cooling, since this many GPUs placed next to each other with no slots in between makes it very difficult for dual-slot designs to intake cool air. Water-cooled cards are available in the single-slot form factor as well. Depending on the cost of your expansion case (the one I know of starts at $8,000 for several x8 slots using an active splitter), this may also be cheaper. It is not without risk, however, as a leak can cause a significant loss of capital. But to my knowledge, there is no other mainstream solution for cooling this many graphics cards in a compact space. If you decide to go the air-cooled route and the cards end up having thermal problems, they will throttle themselves down to reduce the heat, but this will also decrease performance significantly.

As for powering these cards, choose cards that are less hungry on maximum power usage (around 200W per card is enough to bring the power requirements down to a range where a $330 1600W PSU can handle it; research is necessary to determine whether your chosen GPU is efficient enough). For baseline load requirements, I suggest purchasing your desired motherboard and processors, but no GPUs, and measuring the power draw from the wall outlet at maximum processor load. This will give you an idea of whether the PSU you choose is within range. Then purchase or borrow a single GPU, and run it at maximum processor and GPU load. This will give you an idea of how much power each card uses. Since a PSU's output power is always less than its draw from the outlet, this gives you around a 10% margin of safety. Bear in mind that 1600W of power translates to nearly 15A of current at 115V.
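
For the wall-outlet math, here is a rough Python sketch (the efficiency figure is an assumption I've added for illustration, not a spec from this thread):

```python
# Rough wall-draw check for a fully loaded 1600W PSU. The efficiency figure
# is an assumption; check the 80 PLUS rating of the actual unit.
psu_output_w = 1600
assumed_efficiency = 0.88     # assumed ~88% efficiency at high load
mains_voltage = 115

wall_draw_w = psu_output_w / assumed_efficiency
amps = wall_draw_w / mains_voltage
print(f"~{wall_draw_w:.0f} W at the wall, ~{amps:.1f} A on a 115V circuit")
# A standard 15A US circuit is only meant for ~12A continuous (80% rule),
# so a sustained draw in this range realistically wants a 20A dedicated circuit.
```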

This PSU is rated for 1600W: http://www.newegg.com/Product/Product.aspx?Item=N82E168...

An alternative design would be to split the power draw evenly between two power supplies. Commercial solutions exist that allow this; they turn on the second supply when the motherboard commands the first one to turn on. This may be a cheaper alternative; however, it is not without risk. If one power supply fails, the other will not be able to cope with the sudden increase in demand for power (which will likely be well beyond its rated specification) and may fail as well. I have not researched the consequences of a failed PSU; however, I will say that if the failure mode of the PSU involves giving components more voltage than they were expecting, it will be costly.

That being said, when you install all of these cards into one box, you should set whatever program you are using to only use a small subset of the cards, and ramp up the load as you verify that the power draw from the outlet does not exceed the rating of the PSU.
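
If the renderer is CUDA-based, one possible way to do that ramp-up is the CUDA_VISIBLE_DEVICES environment variable, which limits which GPUs the CUDA runtime exposes to a process; whether your particular renderer honors it is something to verify, and the render command below is purely a placeholder:

```python
# Bring GPUs online a few at a time while watching the wall meter.
# "render_scene.py" is a placeholder; substitute whatever command actually
# launches your renderer. CUDA_VISIBLE_DEVICES limits which GPUs the CUDA
# runtime exposes to the process (verify that your renderer honors it).
import os
import subprocess

for visible in ("0", "0,1", "0,1,2", "0,1,2,3"):
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=visible)
    print(f"Rendering with GPUs {visible}; check the outlet draw before widening")
    subprocess.run(["python", "render_scene.py"], env=env, check=True)
```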
July 25, 2012 10:28:43 PM

Thank you all for your valuable suggestions - they are duly noted!

Regarding the expansion case options, has anyone heard of Cubix before - specifically the Cubix GPU-Xpander Desktop 4 (for 4 cards)? This would run about $6k and has the potential to be modular (it seems one could add 2 or even 3 of them to an existing desktop to create a small GPU render farm cluster).
July 26, 2012 12:02:23 AM

I have not heard of the Cubix before (I was searching for PCI-E active expanders), but from the look of it, it may be a viable way to continue increasing the computational power available. But at $6,000 or more for one, it may be cheaper to pursue this solution if the price difference between single- and dual-slot cards is significant.

Another method to explore for increasing the power budget in your system may be to simply look for other means of generating 12 volts at high amperage, cutting up some wires, attaching PCI-E power connectors, and plugging it into the graphics cards.

Or, if your GPU rendering software already allows renders to be done using networked computers (i.e. a cluster), it would be significantly cheaper to use dual-slot cards on consumer hardware and build two machines. But if it doesn't allow that, then the solution I outlined above would be cheaper than Cubix.
July 26, 2012 1:36:34 AM

I'm not sure x1 in this case would be bad. It would take longer to transfer data to the GPU, but the most time-consuming thing would be the work the GPU does. I only linked that board because it has the slots and I thought people had used it before. If there are better boards, then by all means use them.
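
To give a rough sense of that trade-off (the link throughputs are assumed effective PCIe 2.0 rates, and the 3GB scene is just the card's memory limit used as a worst case, not a figure from this thread):

```python
# One-off upload time for a full 3GB scene over different link widths.
# Throughputs are assumed effective PCIe 2.0 rates, not benchmarks.
scene_gb = 3.0
effective_mb_per_s = {
    "PCIe 2.0 x1": 400,     # ~500 MB/s theoretical per lane, assume ~400 usable
    "PCIe 2.0 x16": 6400,   # ~8 GB/s theoretical, assume ~6.4 GB/s usable
}

for link, rate in effective_mb_per_s.items():
    seconds = scene_gb * 1024 / rate
    print(f"{link}: ~{seconds:.1f} s to upload the scene once")
# A few extra seconds per upload is noise next to a render that runs for hours.
```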

I also don't think cooling should be that big of an issue. As long as your water loop has the ability to handle the heat, then you should be good. And no, I haven't heard of Cubix.
July 26, 2012 3:01:05 AM

Shotzo said:
I have not heard of the Cubix before (I was searching for PCI-E active expanders), but from the look of it, it may be a viable way to continue increasing the computational power available. But at $6,000 or more for one, it may be cheaper to pursue this solution if the price difference between single- and dual-slot cards is significant.

Another method to explore for increasing the power budget in your system may be to simply look for other means of generating 12 volts at high amperage, cutting up some wires, attaching PCI-E power connectors, and plugging it into the graphics cards.

Or, if your GPU rendering software already allows renders to be done using networked computers (i.e. a cluster), it would be significantly cheaper to use dual-slot cards on consumer hardware and build two machines. But if it doesn't allow that, then the solution I outlined above would be cheaper than Cubix.

Unfortunately there is no network render capability at this point - it would definitely be a huge plus!
I will also look into the PCI-E active expander option.
July 26, 2012 3:18:36 AM

4745454b said:
I'm not sure x1 in this case would be bad. It would take longer to transfer data to the GPU, but the most time-consuming thing would be the work the GPU does. I only linked that board because it has the slots and I thought people had used it before. If there are better boards, then by all means use them.

I also don't think cooling should be that big of an issue. As long as your water loop has the ability to handle the heat, then you should be good. And no, I haven't heard of Cubix.

I was trying to stick with air cooling only, since no OC work will be done on these cards or CPUs, but it sounds like I will need to water cool due to the number of cards (minimum 4 GTX 580 3GB)?
July 26, 2012 6:25:14 AM

You need water cooling to turn the cards into single-slot cards if you want to run that many.

Maybe run 4 GTX 690s? They have two GPUs per card, so it should be like running 8 GTX 680s. The only issue is the worse 64-bit FP abilities of the GTX 680 vs the 580. I'm not familiar enough with the programs being used and what you want to do to know if this is even an issue.
July 26, 2012 2:27:54 PM

4745454b said:
You need water cooling to turn the cards into single-slot cards if you want to run that many.

Maybe run 4 GTX 690s? They have two GPUs per card, so it should be like running 8 GTX 680s. The only issue is the worse 64-bit FP abilities of the GTX 680 vs the 580. I'm not familiar enough with the programs being used and what you want to do to know if this is even an issue.

I see - so it's more of a size issue, as mentioned earlier, and not so much a heat issue. (So 4 air-cooled dual-slot cards vs 8 water-cooled single-slot cards, for example.) But if I proceed with 8 water-cooled GTX 580s, I still don't quite get how to get around the power issue even with 1600W, since based on the card specs, 8 cards will definitely be over. (I also read up on using 2 PSU units and the consensus is don't do it.) This leads me back to the original post of this build:

http://blog.renderstream.com/2010/ [...] p-systems/

Based on the picture and the specs they released, it's 8 GTX 580s, 1 PSU, and no water cooling... Even if it's a custom mobo that can hold 8 dual-slot cards, I assume the power issue still exists?

Are there any AMD Opteron boards that have more slots than the ASUS KGPE-D16? Also, in terms of server boards, what are the pros/cons of Intel vs AMD? I'm looking at this option if I can confirm that the render engine will have network functionality over LAN, so we will have the possibility of clustering. If so, I will be leaning towards building 2 rigs using server boards with 4 GTX 580s and 24-32GB of RAM each - which provides 2 workstations during the day and 8 GTX 580s at night for computing.

Regarding 690s - unfortunately we won't be able to take advantage of them, since the rendering engine is optimized for Fermi for now and so far is not suitable for Kepler-based cards. That might change when it updates to a CUDA 5 build, but I don't know when that will happen. Also, we need to be able to fit the entire scene in 1 card's memory - the GTX 580 has 3GB, the GTX 690 has 2GB per GPU.
July 26, 2012 2:49:50 PM

newbie00 said:
I'm looking at this option if I can confirm that the render engine will have network functionality over LAN, so we will have the possibility of clustering. If so, I will be leaning towards building 2 rigs using server boards with 4 GTX 580s and 24-32GB of RAM each - which provides 2 workstations during the day and 8 GTX 580s at night for computing.


This would be your best, cheapest, and most flexible option.

As for air cooling, it's possible, but I would be concerned about the amount of heat. To put things into perspective, a conventional microwave oven uses 1000W of microwave energy to cook your food; 4 GTX 580s (at 350W per card at full load) is 1400W.

There was also an external PCI-E cabling standard released back in 2007. It might be worth looking into.
July 26, 2012 2:52:23 PM

Quote:
GTX 590s are still a worthy mention; they are Fermi, and Fermi does really well at this type of work!

Yes - double the CUDA cores of the 580 but at half the memory - 1.5GB per GPU.
It's great if you don't need anything more than 1.5GB :)
July 26, 2012 3:13:04 PM

Shotzo said:
This would be your best, cheapest, and most flexible option.

As for air cooling, it's possible, but I would be concerned about the amount of heat. To put things into perspective, a conventional microwave oven uses 1000W of microwave energy to cook your food; 4 GTX 580s (at 350W per card at full load) is 1400W.

There was also an external PCI-E cabling standard released back in 2007. It might be worth looking into.

I will definitely look deeper into PCI-E cabling, since there might be a bandwidth bottleneck while using LAN clusters - the scene files tend to be pretty heavy.

I'm definitely concerned about the heat, especially since these will be running for days on end at a minimum of 80% of full load... Seems like water cooling will be the way to go. I might be unconsciously trying to avoid it since I'm a total novice at building water cooling systems :p
But on the other hand, this might be the perfect scenario to finally give it a shot!
July 26, 2012 4:51:25 PM

That's the beauty of a custom build. Take a 1600W PSU. Set aside 200W for the board and CPUs. Take the remaining 1400W and divide by the number of cards, say 7. This gives you 200W per card. If you downclock the cards so they use only 200W max and not 250-ish, then you can do it on one PSU.
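
The same budget written out as a small Python sketch (PSU size, CPU/board reserve, and card count are just the figures from the paragraph above):

```python
# Per-card power budget on a single 1600W PSU, per the reasoning above.
psu_w = 1600
cpu_and_board_reserve_w = 200
cards = 7

per_card_budget_w = (psu_w - cpu_and_board_reserve_w) / cards
print(f"~{per_card_budget_w:.0f} W per card")   # ~200W, hence the downclock from ~250W
```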

Not a huge difference between Fermi and Kepler. I've said from the start to stay with the GTX 580, as the 64-bit FP compute is better on this card than on the others. If you want to do compute with GTX cards, they are the best. I don't remember how crippled the GTX 570 is; it might be worth considering as well.
July 27, 2012 12:52:22 AM

I did some searching... At one time there existed 2000W supplies (pulling >20A from the wall, too).

http://www.newegg.com/Product/Product.aspx?Item=N82E168...


As for bottlenecking over bandwidth: you'll need to be writing 100 MB/s worth of data to saturate a 1 GbE connection. Team 4 GbE ports together, and that's 400 MB/s. You'll be hard pressed to transfer that much data so constantly that it becomes an issue. If it does become an issue, look into 10GbE or even 40GbE NICs, or Fibre Channel.
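
To put numbers on it, a quick sketch (the 2GB scene size is a made-up example of mine, and the throughputs are typical effective rates, not benchmarks):

```python
# Time to move a scene file over different links. The 2GB scene size is a
# made-up example; throughputs are typical effective rates.
scene_mb = 2000
links_mb_per_s = {
    "1 GbE": 100,
    "4x GbE teamed": 400,
    "10 GbE": 1000,
}

for link, rate in links_mb_per_s.items():
    print(f"{link}: ~{scene_mb / rate:.0f} s per scene transfer")
# Even a single GbE link moves a 2GB scene in ~20 s, which only matters if
# the cluster finishes frames faster than that.
```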
July 27, 2012 1:01:00 AM

Going these routes, you might need a dedicated circuit - for sure if you max out the 2kW PSU, maybe even with the 1600W. It depends on what else is running.
July 28, 2012 2:20:50 PM

Thanks - I will definitely stick with the 1600W and play around with downclocking the cards to use less wattage. I'll post a question to the software company regarding this (card speed) and whether or not it impacts rendering speed, or if it's mostly the number of CUDA cores, and also regarding the possibility of networking over Cat6.

I did some digging and found some PCIe cables, but it seems like they are not widely available and come at a price. (You have to call and purchase directly from the manufacturer, and a set including 1 PCIe cable + 2 cards, one for each side, starts at about $1k....) The vendors are usually suppliers of PCIe expansion cases on the server side; I did not see many desktop cases, mostly rack mounts. So hopefully the network clustering will be supported.

I have been speccing out parts based on the recommendations and will start to post them over the weekend.
July 29, 2012 2:51:36 AM

Based on the previous recommendations, this is the list so far:

Case: Caselabs Magnum STH10
Motherboard: ASUS Z9PE-D16
CPU: Intel Xeon E5-2660 x 2
Graphic Card: EVGA GeForce GTX 580 (Fermi) Hydro Copper 2 3072M x 4
PSU: LEPA G Series G1600-MA x 1
SSD: OCZ Vertex 4 128G x 1

Memory (Server Memory): Kingston 32GB (4 x 8GB) 240-Pin DDR3 ECC Registered 1600
or
Memory (Desktop Memory): CORSAIR DOMINATOR 32GB (4 x 8GB) 240-Pin DDR3
Debating whether to use server RAM (ECC) or desktop RAM (water cooled)

Radiator: Black Ice SR1 480 x 2 (On the bottom)
Radiator: Black Ice SR1 360 x 1 (On the top)

Fan: Noctua NF-P14 FLX
or
Fan: Arctic F Pro PWM F14

CPU Water Block: EK Supremacy Universal CPU Liquid Cooling Block - Full Nick
RAM Block: EK Corsair Dominator Series X4 Ram Liquid Cooling Block - Nickel CSQ

Still playing around with the water loops and will post an initial diagram in the next couple of days. So far, it seems like I will do 2 separate loops: one for the CPUs and possibly the RAM block (connected to the top 1 x 360) and a second loop just for the 4 GPUs (connected to the bottom 2 x 480). Also, if I go with this setup, I'm thinking about putting the PSU at the top. What do you guys think of these choices?
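
As a rough sanity check on that loop split, here's a sketch; the per-component heat figures and the ~100W-per-120mm-of-radiator rule of thumb are assumptions I've added, not measurements, and real capacity varies a lot with fan speed and acceptable water temperatures:

```python
# Back-of-the-envelope check of heat load vs radiator area for the two loops.
# The ~100W-per-120mm figure is a common rule of thumb, not a measurement,
# and real capacity varies a lot with fan speed and acceptable water temps.
RAD_W_PER_120MM = 100

cpu_loop_heat_w = 2 * 95 + 40          # assumed ~95W per E5-2660 plus a RAM block
cpu_loop_capacity_w = 3 * RAD_W_PER_120MM          # one 360 radiator

gpu_loop_heat_w = 4 * 250              # assumed ~250W per GTX 580 under load
gpu_loop_capacity_w = (4 + 4) * RAD_W_PER_120MM    # two 480 radiators

print(f"CPU loop: ~{cpu_loop_heat_w} W load vs ~{cpu_loop_capacity_w} W rule-of-thumb capacity")
print(f"GPU loop: ~{gpu_loop_heat_w} W load vs ~{gpu_loop_capacity_w} W rule-of-thumb capacity")
```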

July 29, 2012 3:06:33 AM

If you're going with four cards, you shouldn't need water. Just make sure to get a motherboard that supports four dual-slot cards. You'll need a case that supports it as well, as most cases only support 7 slots. EATX?
July 29, 2012 4:04:38 AM

4745454b said:
If you're going with four cards, you shouldn't need water. Just make sure to get a motherboard that supports four dual-slot cards. You'll need a case that supports it as well, as most cases only support 7 slots. EATX?

Will air cooling be enough to cool 4 dual-slot cards, since they will not have any space in between and will be running at close to, if not at, full load continuously?
July 29, 2012 4:09:25 AM

In terms of cases, I'm also looking at the Lian Li PC-P80N aside from the Magnum STH10 - both of them support 10 slots.
July 29, 2012 6:28:01 AM

If you have enough airflow, then yes. You might even want to mount your HDD in a 5.25" bay so that the air from the front of the case goes right to the cards without anything blocking it.
July 29, 2012 11:54:33 AM

If we are looking at an ~$8K budget, then I would like to suggest that you look at 2-4 Tesla cards paired with a dual-Xeon mobo. There is a huge performance (and reliability) increase in moving up to Tesla or Quadro cards, which I think you would appreciate if this is going to be a heavily loaded system.

Much less power draw, deeper bit depth, much more memory per GPU (6GB of GDDR5), error correction, and specialized drivers that really unleash Fermi's compute potential.
July 29, 2012 1:43:55 PM

CaedenV said:
If we are looking at an ~$8K budget, then I would like to suggest that you look at 2-4 Tesla cards paired with a dual-Xeon mobo. There is a huge performance (and reliability) increase in moving up to Tesla or Quadro cards, which I think you would appreciate if this is going to be a heavily loaded system.

Much less power draw, deeper bit depth, much more memory per GPU (6GB of GDDR5), error correction, and specialized drivers that really unleash Fermi's compute potential.

I definitely took a long, hard look at Tesla builds based exactly on the points mentioned, concentrating on the 2 things that matter most to our GPU pipeline - card memory and CUDA cores. We did an in-house assessment which shows most of our scenes that require the GPU can fall between 2-3GB if we squeeze hard enough, with maybe 10%-20% around the 4.5GB range. Based on this and factoring in cost - where you can have 4 x GTX 580 3GB (2048 cores total) at $700 per card versus 4 x Tesla M2075 (1792 cores total) at about $2,400 per card, an almost 1:4 price ratio - we felt it would be more beneficial to go with the GTX setup, especially since there might be the possibility of networking these stations together in the future (still waiting for confirmation on this).
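
Spelling that comparison out as a quick sketch (the prices are the ones quoted above; the per-card core counts are the published figures for these cards):

```python
# Price-per-CUDA-core comparison behind the "almost 1:4" figure above.
# Prices are the ones quoted in this thread; core counts are per card.
cards = {
    "GTX 580 3GB": (700, 512),     # ($ per card, CUDA cores per card)
    "Tesla M2075": (2400, 448),
}

for name, (price, cores) in cards.items():
    print(f"{name}: ${price} per card, ${price / cores:.2f} per CUDA core")
# Roughly 3.4x the price per card for somewhat fewer cores on paper, which is
# where the roughly 1:4 cost ratio comes from.
```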
July 29, 2012 4:43:35 PM

newbie00 said:
I definitely took a long, hard look at Tesla builds based exactly on the points mentioned, concentrating on the 2 things that matter most to our GPU pipeline - card memory and CUDA cores. We did an in-house assessment which shows most of our scenes that require the GPU can fall between 2-3GB if we squeeze hard enough, with maybe 10%-20% around the 4.5GB range. Based on this and factoring in cost - where you can have 4 x GTX 580 3GB (2048 cores total) at $700 per card versus 4 x Tesla M2075 (1792 cores total) at about $2,400 per card, an almost 1:4 price ratio - we felt it would be more beneficial to go with the GTX setup, especially since there might be the possibility of networking these stations together in the future (still waiting for confirmation on this).

Purchase one and do some tests. I think you will find that those fewer CUDA cores will do more work than the extra cores on the 580, because the pipeline is better structured and the driver is more effective. If it does not work out, then simply return it, but I think you will be surprised that, in spite of the apparent specs, the card is simply faster for this kind of work.