
Any experience OC'ing or liquid cooling the NVIDIA Tesla C1060?

November 27, 2009 4:58:38 AM

Hi
I'm going to be writing and running some massively parallel CUDA code on an NVIDIA Tesla C1060 card. My present system (i7-975 CPU) is water cooled and runs at ~4.2 GHz now. I'm going to be adding 2 of the Tesla C1060 cards, or the new Fermi cards, for parallel computations. I already have 2 x BFG NVIDIA GeForce GTX 285 H2O cards (driving dual 30-inch high-resolution Dell monitors).

Does anybody have any experience with overclocking or cooling these new cards?

Thanks,
Particleman529
November 27, 2009 6:39:29 AM

Why does it have to be watercooled?

It will still stay within nice temps without water. But there's a guy on here, "xtc28" I think, who has some Teslas.

He's kinda into water cooling, so he might know something about it. It would be worth sending him a PM.
November 27, 2009 9:51:02 AM

Overshocked,
Thanks for your response.
I'm just planning ahead. I presently use liquid cooling with the 2 x BFG NVIDIA GeForce GTX 285 H2O cards, as it's necessary whenever I'm doing any sort of intensive graphics work (I don't do gaming).
With the type of processing that will be run on these cards, they will most certainly get hot, and I'd just like to include them on either a new/auxiliary loop or the one I'm using now.
Thanks,
Particleman529

November 27, 2009 6:42:55 PM

I know Teslas are flashed with a different BIOS and use different drivers, but that doesn't necessarily mean there are any physical differences. If you can find out whether a Tesla card has the same mounting holes as a standard desktop card, you should have no issues watercooling them. Keep the stock coolers intact and in good shape, though... if one of your Teslas dies, you can bet they won't cover it under warranty with a waterblock on it.
Also, if you've got an i7 and 2 285s in a loop already, you're going to need a new loop for your Teslas. Sweet rig. I want one.
December 4, 2009 2:05:29 PM

endorphines,

Thanks - I'm getting information about the Tesla units now. One of the sales reps is going to send me more information about the card's design; with this, I should be able to determine whether I can water cool it. I'll also ask the reps about the return policy if I water cool the thing. NVIDIA is running a 'Mad Scientist' promotion now through Jan 31, 2010. The promotion allows a sizeable cut in price and a 'first in line' option for getting the Fermi. The C1060 is available now, with the C2050 (Fermi) coming available 1st quarter next year. If I take part in the program, I'll be first in line for the Fermi, at a reduced price, while keeping the C1060 (Tesla unit). I think I'm going to do that.

The Tesla provides 933 / 78 GFLOPS (single/double precision), while the C2050 (Fermi) provides 1,260 / 630 GFLOPS (single/double precision) of processing power. Most of my work is double-precision applications, and I have enough memory onboard already - 16 GB in one of the systems. I'll probably run the Tesla in one system for now, then shift it to another system when I upgrade to Fermi.
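
(Side note for anyone sizing jobs to these cards: below is a minimal sketch, using only the standard CUDA runtime API, of checking double-precision support and on-board memory at runtime. The C1060 reports compute capability 1.3, the first generation with hardware double precision, while Fermi parts report 2.x. Build with nvcc.)

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // Compute capability 1.3 or higher means hardware double precision.
        bool hasDouble = (prop.major > 1) || (prop.major == 1 && prop.minor >= 3);
        printf("Device %d: %s, CC %d.%d, %.1f GB global memory, double precision: %s\n",
               dev, prop.name, prop.major, prop.minor,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
               hasDouble ? "yes" : "no");
    }
    return 0;
}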

PGI just released a Fortran-CUDA compiler that I can use for Fortran code; C/C++ support for CUDA is available with other compilers.
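
(For anyone who hasn't seen the C/C++ side, here's a minimal, illustrative sketch of what a double-precision CUDA kernel and launch look like - nothing specific to the Tesla, just the generic pattern, compiled with nvcc.)

#include <cuda_runtime.h>

// y[i] = a * x[i] + y[i], one GPU thread per element, in double precision.
__global__ void daxpy(int n, double a, const double *x, double *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

void run_daxpy(int n, double a, const double *hx, double *hy) {
    double *dx, *dy;
    cudaMalloc((void **)&dx, n * sizeof(double));
    cudaMalloc((void **)&dy, n * sizeof(double));
    cudaMemcpy(dx, hx, n * sizeof(double), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, n * sizeof(double), cudaMemcpyHostToDevice);

    int block = 256;                      // threads per block
    int grid = (n + block - 1) / block;   // enough blocks to cover n elements
    daxpy<<<grid, block>>>(n, a, dx, dy);

    cudaMemcpy(hy, dy, n * sizeof(double), cudaMemcpyDeviceToHost);
    cudaFree(dx);
    cudaFree(dy);
}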

Anyway, that's the latest, I'll post when I get more information.
Thanks,
Particleman529
January 5, 2010 8:10:41 AM

Thanks, Particleman, for your interesting and informative description.

Currently I am running my CUDA-FDTD (finite-difference time-domain) code on a $400 machine + a $400 GTX 285.

I am now designing a new Tesla system. My problem is memory-limited (4 GB is nice, but I would like much more). I will have to spend about US$5,500 before the end of January for the new machine. End of fiscal year and the budget needs to go...
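
(Just as a back-of-the-envelope check on that memory limit: assuming the usual six double-precision field components per Yee cell - your component and coefficient count may differ - 4 GB works out to roughly a 450^3 grid. A quick sketch of the arithmetic:)

#include <math.h>
#include <stdio.h>

int main() {
    // Assumes 6 field components (Ex, Ey, Ez, Hx, Hy, Hz) per Yee cell in
    // double precision; material/coefficient arrays would shrink this further.
    const double memBytes = 4.0 * 1024 * 1024 * 1024;   // 4 GB of GPU memory
    const double bytesPerCell = 6 * sizeof(double);      // 48 bytes per cell
    const double cells = memBytes / bytesPerCell;         // ~89 million cells
    printf("~%.0f million cells, roughly a %.0f^3 cubic grid\n",
           cells / 1e6, cbrt(cells));
    return 0;
}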

My thoughts so far:
Ubuntu OS
two or three C1060 Teslas (very sadly the Fermis are not due before 2nd or even 3rd quarter 2010; then a 6 GB GPU will be fantastic!)
Intel i7-975
ASUS P6T7 WS SuperComputer motherboard
Corsair Dominator RAM, 6 x 4 GB (if 6 modules fit in the tightly packed memory slots of the ASUS board)
Corsair 1000W or Silverstone 1500W PSU
Corsair Obsidian 800D chassis (I like the 4 included SATA hot-swap bays)

My question: You seem to have experience in building a CUDA box. What would you do differently in a new build? Any "don't do"s to pass on? How did you water cool your GTX 285 cards? Custom-made cooling? I would very much appreciate any suggestions from your side!

Cheers,
FDTD-freak
January 15, 2010 7:21:46 AM

Hi FDTD-freak,
Just a few suggestions based on my experience. I'm using a C1060 now - the footprint seems to follow some of the other NVIDIA cards, and I should be able to water cool it. I just haven't taken off the shroud yet.
To address some of your questions:
Operating system:
I use both Linux and Vista. I'm using Vista with the new Tesla/CUDA system, as I write a lot of Fortran code. Since I work with Fortran a lot, I'm using the PGI compiler with Visual Studio. I like/prefer the Visual Studio interface that comes with some of the Fortran compilers (such as the Intel compiler and the PGI compiler). Additionally, many of the optimization options for Intel chips are embedded in the selection options for the packaged compilers. That being said, if everything's already written and optimized, Linux runs with less baggage.
I presently use an i7-975 chip. Everything is water cooled. I lapped the chip after about 2 months of use. I use that funky gallium-based thermal paste - I forget the name - it takes some time to apply, but lapping combined with that paste dropped my temps quite a bit. I plan on using the new i9 with my new build when the Fermi cards arrive.
For a motherboard, ASUS reigns supreme insofar as overclocking options. I use the P6T motherboard with the Tesla card. Right now, I'm running two GTX 285s (water cooled) that I use for the two 30-inch high-resolution Dell monitors. I do not use the GTX 285s for CUDA computing - only graphics. For the GTX 285s, I use the pre-built BFG cards, pre-assembled with water cooling blocks, available on Newegg. My main problem early on with the ASUS board was the northbridge - it ran extremely hot. Initially, I air-cooled it with one of the Thermaltake heatsinks and an additional fan attached; this helped for a while. In time (about 3 weeks), I just added an additional loop for the northbridge and ancillary components.
For a newer system, I'm checking out some of the boards with 4-card capability. When the Fermis and i9s are released, I expect that ASUS will have motherboards capable of running 4 PCIe cards (2 Fermis, 2 GTX 295s).
For power, you'll need at least 1200 - 1500 watts for one Tesla card, and possibly 2 power supplies if you're going with 2 Fermis. I use a 1600-watt Silverstone power supply. I fried my 1000-watt supply some time ago.
For memory, I'm waiting for the Mushkin Redline series with 4 GB modules. I'm in contact with Mushkin reps, and they should be out - I just don't know when. Presently, I run 12 GB of the Mushkin Redline series (6 x 2 GB). I've been able to tune them, and they run nicely. Right now I use Thermalright HR-07 air-cooled heatsinks on the modules. In the future, I'll probably water-cool my memory, as the modules get warm.
For my next build, I'll probably be moving to SSDs - cost is the primary concern. For my HDDs, I run a 4 TB RAID 0/12 system now.
For my case, I'm using a TJ07. Everything's inside; I use a hybrid, 2-loop, 2-reservoir, 3-radiator system. The case is maxed out.

For my new build - which will follow the release of the Fermis and the i9 - I'll use:
Case - something butt ugly, but big - like the Lian Li 888 - heavily modified for cooling.
Cooling -
CPU - Single- or 2-stage phase-change cooling (a guy from one of the boards, who lives in Atlanta, has a good reputation for building these).
Northbridge, SB, MOSFETs - Water/liquid cooling.
Memory, GTX 295s - Water/liquid cooling.
2, possibly 3, radiators; 2 reservoirs; 2-loop hybrid cooling system.

Processor(s) - i9, 2 Fermi cards (4 GB) (I got the Mad Scientist discount and have them on order).
Other cooling things I'm thinking about: though I like to look at my computer, the time may soon come when I will need to move it to our server/refrigerated room. If I do that, I'm going to use those Delta fans (120 x 38) that move a lot of air and make a lot of noise. I put some of those in one of my servers some time ago, and they're crazy loud.
Just a quick question - what will be your price / costs for the Fermi or Tesla cards?
Take care,
Particleman529
Anonymous
May 11, 2010 4:55:54 PM

Hello Particleman, FDTD-

I am in the very early planning stages of a build that sounds like what you two have constructed. My aim is ultimately a four-way Tesla Fermi crunching machine for "extra-curricular" projects where I can't justify using my workplace's real supercomputer. I will be able to find a use for however much power I can cram into one box, but I don't have a lot of experience in high-performance computing from the hardware end. Therefore, my concerns are where I need to spring for server-grade components and whether I am bottlenecking the system somewhere or overbuilding in one area. I have a lot of questions on the subject to which I haven't been able to find adequate answers, and I realize you probably don't have all of the answers, either.

I was looking at building from the EVGA Classified SR-2 (1), since it's the only board I've seen that seems to have the full 64+ PCI Express lanes needed for the Teslas (four cards at x16 each) as well as SATA 6 Gb/s. If I mod the Teslas to fit in a single slot, would the in-between slots be able to use the four remaining PCIe lanes on each 5520 chipset?

Or should I take a server-grade board and just use SATA 3 Gb/s drives? I haven't been able to find any news on when 6 Gb/s will appear in that niche. The EVGA also has dual Gigabit Ethernet and USB 3.0, both of which are nice future-proofing. With that as a foundation, do I need to populate both Xeon sockets to be able to feed the Teslas properly? Any advice on choosing between the different 5600s? Is ECC memory necessary for that kind of workload? I was planning on using two of Crucial's new SATA 6 Gb/s SSDs (2) in RAID 0, but is RAID 1 more important? There are only two SATA 6 Gb/s ports, so higher-order RAIDs are not an option.

Finally, with so much invested in the hardware, I am considering OCing and possibly WCing. Will the 20-series Teslas be compatible with, e.g. the Swiftech waterblocks for the related GeForce cards? (3) Or should I stick to stock speeds for equipment longevity and stability? Thanks.

Andrew

(1) http://www.evga.com/articles/00537/
(2) http://www.newegg.com/Product/Product.aspx?Item=N82E168...
(3) http://hothardware.com/News/Wet-and-Wild-EVGA-Releases-...
May 18, 2010 1:06:18 AM

Hi Andrew,

Insofar as motherboards - I've been evaluating the EVGA Classified SR-2 as well. For this board, you're looking at 2 Xeon chips costing around $1,780 apiece (http://www.8anet.com/ShowProduct.aspx?pid=7887).
So you've got roughly $3,500 invested in your CPUs, with 2 x 6 physical cores. I'm a little concerned about maintaining parity with a 2-chip system when overclocked. EVGA indicates that the board has great overclocking features; I've just not seen anybody who's actually done this yet - at least not for the type of 24 x 7 processing that I need to maintain.

The Tesla cards run really hot - when they run. I'm concerned about being able to keep enough airflow through my system to keep the cards running. I received my C1060 card in December, and it immediately died. I RMA'd the card, and only today (nearly 5 months later) received a replacement. I've not installed the card yet. I ordered the new Fermi C2050, and am (supposedly) one of the first in line to get the card. I've not even gotten a serious estimate from the supplier as to when they'll ship the unit(s). The C2050 was supposed to be available early Q2... well, that's come and gone, and there's still no word.


As an alternative, I'm considering just using a single CPU (the i7-980), with either the new ASUS Rampage III motherboard or the SuperComputer motherboard. With this setup, I'd have 4 SLI slots for Fermi/Tesla cards. I do intense graphics work with 2 30-inch Dell monitors, so I need at least one slot for one of the new Fermi graphics cards (water cooled). The remaining 3 slots I can use for 3 C2050s. That way I'll have a single-CPU, 6-physical-core system that I'll be more confident about overclocking.

For the Tesla (C1060) and Fermi (C2050) cards, NVIDIA firmly states that the cards run 'cool enough' that they don't require any cooling modifications. My experience thus far does not support their contention. If you're running 3 or 4 Teslas or C2050s, you're sure to have heating problems.

For SATA 6 Gb/s availability, your options for on-board ports are limited. You can purchase a 6 Gb/s add-on card for your ASUS board that will work. What I've done to make use of 3 Gb/s SSDs is this: I keep one 500 GB SSD (6 Gb/s transfer) loaded with my OS and primary software. That SSD sits in front of my RAID 10 array, populated by 4 x 500 GB Raptor units. I'm reluctant to use 4 x SSDs in a RAID array at this time due to cost and possible dependability problems.

For your Teslas and the C2050s or above, you'll need 4 GB of system RAM per card. You will need and use this memory. I use Mushkin Blackline kits, 2 x (3 x 4 GB). Mushkin doesn't have any Redline kits with 4 GB modules yet. The Mushkin RAM is rock solid and can be tuned to accommodate your Tesla/Fermi system. Depending upon your applications, there's a strong possibility you'll be pounding your RAM.
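
(One concrete illustration of the RAM-pounding point - my own sketch, not anything NVIDIA specifies: fast host-to-device transfers generally want pinned, page-locked host staging buffers per card, and those come straight out of system RAM and can't be paged.)

#include <cstddef>
#include <cuda_runtime.h>

// Allocate a pinned (page-locked) host staging buffer for one GPU's transfers.
// Pinned memory speeds up host<->device copies but is not pageable, so a few
// GB of it per card eats directly into usable system RAM. Free with cudaFreeHost.
double *alloc_staging_buffer(size_t nElems) {
    double *hostBuf = 0;
    if (cudaHostAlloc((void **)&hostBuf, nElems * sizeof(double),
                      cudaHostAllocDefault) != cudaSuccess)
        return 0;  // allocation failed; caller falls back to pageable memory
    return hostBuf;
}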

I use a hybrid cooling system. The C1060s are not amenable to water cooling (NVIDIA will void your warranty; given my sketchy experience with the cards, that probably isn't a good idea). I water cool my CPU, motherboard, and video card. For my new system, I'll be using a phase-change system to cool my CPU (I won't mention the name here, but there's a person who builds a lot of these single/dual-stage phase-change cooling systems; e-mail me and I will send you his contact information). I've had some issues with overheating the motherboard, and will be using a water-cooled system for that. Water cooling seems to work just fine for the video card.

Stay in touch regarding your build, I'm interested in your results.
Particleman529

May 18, 2010 2:00:58 AM

This topic has been closed by Aford10