Xeon W5580 Build

Interesting bits of information... Just assembled a workstation consisting of:

* 2x Intel Xeon "Nehalem" W5580 (3.2 GHz)
* ASUS Z8PE-D12 Server Motherboard
* 12x 2GB DDR3-1333 Corsair Memory
* 2x Thermalright Ultra-120 Extreme Black
* 2x Thermalright LGA1366 Bolt-Thru Kit
* Corsair HX 1000W PSU
* Silverstone Temjin TJ10 with Side Window
* One 750GB Samsung Hard Drive (have to check the model)

I have uploaded a few pictures... First off, one of two Xeon W5580 Nehalem processors... :sol:

The case I chose for the build is the almost-all aluminum TJ10, which makes for a great workstation case. Here it is with the PSU already in place:

I proceeded to install the motherboard, but quickly found that I needed three extra standoffs, because three motherboard holes didn't line up with any of the holes in the motherboard tray. This was solved by drilling the appropriate screw holes, which didn't take long. I didn't drill them myself: a guy at a local workshop did, and it took him about 10 minutes.

Here's the motherboard already on the motherboard tray: (really, removable motherboard trays are great when working with parts like these!)

Well, I thought it would only take a few more minutes to get a first boot out of this thing. But as you can see in this last picture, this ASUS board has a lot of capacitors around the CPU area, and, surprise, surprise, the Thermalright retention bracket makes direct contact with these capacitors. There's no way I can actually screw the retention bracket in place unless I want to smash some capacitors.

Well, I got very frustrated, but there was a solution: screw motherboard standoffs into the backplate screw holes and raise the whole retention assembly by a few millimeters. This would require 8 standoffs, plus some sort of metal spacer between the retention bracket and the heatsink, of the same height as the standoffs used.

I did some research and found that I needed male-female standoffs with both threads being M3 standard. The CPU backplate accepts M3 screws, and the Thermalright spring screw is also an M3 screw. So where do I get such a standoff? No local shop had them in stock, and neither did a local electronics parts dealer. A really silly situation, until I found out that Lian Li uses M3-based standoffs in their cases. The store that provided me with the parts managed to get eight standoffs from a Lian Li case for this. The guy who works there was very helpful and didn't charge anything at all for his help, even though it took him some trouble: his store doesn't actually carry Lian Li cases.

(just as a side note, I'm from Brazil. If I lived in the US or in Europe, I could easily order these standoffs online, but it was surprisingly difficult to find a solution here)

I then needed a metal spacer of some sort that matched the 6.3mm standoff height. To sum it all up, I ended up asking a local workshop to machine such a spacer out of aluminum. It has a screw hole in the middle so I could use a screw head to keep the spacer from sliding out of place - much like Thermalright does. I also asked for a recess on the other side, so that Thermalright's own screw head would fit there. Check it out:

And here are the finished parts:

Once I had these parts, installing the coolers was easy - as easy as it should have been in the first place :ange:

Note the motherboard standoffs and the aluminum cylinder?... They did the trick. Also note that the retention bracket would have touched the capacitors in this last picture if it weren't ~6mm above its usual position. The system booted up just fine and recognized two 3.2 GHz processors and all 24GB of system memory. Here's how it looked at an earlier stage of cabling:

I finished the hardware installation and have already installed Ubuntu 9.04 on this system. Everything works beautifully. I'm about to get an internet connection for this thing. I will also do some HPC performance tests.

Too bad I don't actually own this system! I'm just assembling it. It felt great assembling it anyway! I'm even happy that I could find a solution for the heatsink clearance issue. Hey, I guess it's my first actually-good-for-something mod! :)
  1. Good work - very impressed with your HSF fitting.
  2. Four pictures of the assembled system:

    Now some closeups where the hardware inside can be seen more clearly:

    The system is all black, the TJ10 being black brushed aluminum and the heatsinks being TRUE Blacks. Rather appropriately, the owner of the system (I don't own this, I just assembled it :cry: ) has decided to call it "blackstar", as in the Radiohead song.

    I have done some thermal testing and will post the results soon. I also plan on comparing performance under Linux (Ubuntu) against some desktops I have here and against the previous-gen Harpertown rig we have here, which uses a 3.0 GHz, 1600 MHz FSB Xeon X5472 with 16GB of FB-DIMM memory and is the direct predecessor to this system. They are actually both built around the same Silverstone TJ10 black case and will sit right next to each other. (Check out this link for more info on the Harpertown-based rig.)

    I might also test against an old cluster of 1.5 GHz Itanium 2s we have here, just for the fun of it, but the cluster is already >3 years old and I really, really don't expect the Itaniums to put up much of a fight.
  3. Just uploaded a screenshot of the stress testing. For temperature readings, I used coretemp under Linux, doing the usual "modprobe coretemp" and using lm-sensors to collect the data. These are not the motherboard-measured temperature values, which are mostly lower because they're not on-die measurements.

    Idle temperatures for the W5580s hover around 35-37C or so. I then installed Prime95 (from www.mersenne.org) to do some heavy-duty stress testing. I asked for 16 testing threads with small-FFT tests, which are the ones that produce the most heat and power consumption and as such are the ultimate "thermal design" testers. BTW, I never found any combination of programs (even multithreaded ones) that could come close to reproducing the temps measured under Prime95's small-FFT testing.

    Here goes the screenshot for 100% (or, as Linux reports it in this case, ~1600%) CPU usage:

    So that's it: 70C absolute top temperature registered on a few cores, but most cores stay consistently under 70C - maybe 65-66C on average. I haven't measured a single degree beyond 70C on any core. I also hope that, as the thermal paste settles in, the temperature might drop a degree or two. I'd love to compare the temperature on these W5580s to the i7 965, but I've never seen an i7 965... Also, the W5580s have a more aggressive Turbo mode, AFAIK, as they can get to 3.6 GHz on single-threaded programs.

    I don't really know, anyway, about Turbo mode under Linux. It should be driver-independent and should therefore work automatically, but I still have to read up on the subject to find out how to measure the Turbo-enabled clock speed under Linux...

    Well, bottom line on these temperature measurements: these new Nehalem processors definitely run quite hot. I've never seen these kinds of coretemp readings from a C2D/C2Q, as far as I can remember...
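As a side note, the temperature-collection procedure described above (modprobe coretemp, then read the sensors) lends itself to a short summary script. This is just a sketch that parses lm-sensors-style output; the exact labels vary with the driver version, and the sample readings below are illustrative, not the actual screenshot values:

```python
import re

def summarize_coretemp(sensors_output):
    """Pull per-core temperatures out of `sensors` output and summarize them."""
    temps = [float(m) for m in re.findall(r"Core \d+:\s*\+?([\d.]+)", sensors_output)]
    if not temps:
        raise ValueError("no core temperatures found")
    return {
        "cores": len(temps),
        "min": min(temps),
        "max": max(temps),
        "avg": round(sum(temps) / len(temps), 1),
    }

# Sample readings in the same ballpark as those reported in the post:
sample = """
Core 0: +65.0 C (high = +80.0 C)
Core 1: +66.0 C (high = +80.0 C)
Core 2: +70.0 C (high = +80.0 C)
Core 3: +63.0 C (high = +80.0 C)
"""
print(summarize_coretemp(sample))
# -> {'cores': 4, 'min': 63.0, 'max': 70.0, 'avg': 66.0}
```

In practice you would feed it the real output, e.g. `sensors` piped into the script's stdin.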
  4. A couple of comments (awesome setup, by the way). I've found with my i7 that the max temperatures occur under Blend, not Small FFT (due to the on-die memory controller, no doubt), and even higher temps occur under Linpack testing. Arguably though, the Linpack testing isn't representative of any real-world loads.
  5. PsychoSaysDie: Sorry, I have already installed Ubuntu (Linux) and don't plan on going to WinXP anytime soon... So I'm afraid I won't be able to run 3DMark06. But I am planning on doing several comparisons - haven't started yet, though.

    cjl: Oh, OK. Forgot about the on-die memory controller; I'll try the Blend test and see what kind of temps I get. BTW, what temperatures are you getting for that i7 965? Do you also have the temps at stock speed? I was kind of curious to see if my results are OK.

    I did another test that I forgot to mention: a power consumption test using a watt-meter. I was actually surprised at the rather low measurements:

    Idle: 135-140W. Power factor is ~0.98-0.99.
    Load (small FFT): 415-420W. Power factor is still ~0.98-0.99.

    So I guess that Corsair PSU is quite efficient. Its active PFC is also apparent. I have another desktop i7 920 system here whose PSU doesn't have active PFC (I know - wasn't me!) and it consumes ~350VA (Vrms*Arms; actual power usage is 200-210W, PF ~0.6), which means it pulls only 10-15% less current from the wall than this Xeon-based system. I hate bad PSUs! :D
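The wall-meter arithmetic above is easy to sanity-check; here's a quick sketch with the numbers from the post plugged in:

```python
def power_factor(real_watts, volt_amps):
    """Power factor = real power / apparent power (Vrms * Arms)."""
    return real_watts / volt_amps

def apparent_power(real_watts, pf):
    """Apparent power (VA) drawn from the wall for a given power factor."""
    return real_watts / pf

# i7 920 desktop without active PFC, numbers from the post:
print(round(power_factor(205, 350), 2))   # -> 0.59
# The Xeon rig under load: ~417 W real at PF ~0.985:
print(round(apparent_power(417, 0.985)))  # -> 423 (VA)
```

So even though the desktop uses about half the real power, its poor power factor means it loads the wall circuit almost as heavily as the dual-Xeon rig.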
  6. With mine, full load temps are around 63-64 at stock clocks, and 75-80 (depending on ambient) at 4.2GHz. That's with a TRUE black, so the same exact heatsink setup as yours.
  7. Quote:
    With mine, full load temps are around 63-64 at stock clocks, and 75-80 (depending on ambient) at 4.2GHz. That's with a TRUE black, so the same exact heatsink setup as yours.
    Huuum interesting. Is that with prime95, test blend or linpack? Also, what thermal paste did you use? I actually used the thermalright one, instead of my old AS Ceramique I have lying around here, because the Ceramique is >4 years old and a strange liquid comes out before the actual paste. (Ceramique is probably a suspension of particles that got separated mechanically by gravity after long periods of storage in the same position - that's my theory).

    Was that measured using coretemp? If so, your 63-64 is probably an average of those 16 temperatures, right?... I calculated the average for my system and it's actually 66C, which is a little worse than yours but well within an acceptable range. It could be that the thermal paste still has to settle in (this was the very first stress test), which might help a degree or two, or it could simply be that your case has better airflow. Plus, there are two of these heating up the air inside this Silverstone TJ10... but only one in your case. Another more distant possibility is that the more aggressive Turbo mode heats things up a little more. But hey, it's only a 2-3C difference...

    I wish that I had a rig like yours or like this one... I'm still on a much older A64 X2 4200....

    BTW, what's up with coretemp showing 16 cores? And why is there only one 70C reading, and not two? I'd think there would be only one thermal diode per physical processor... There's also just one 62C reading in those coretemp temperatures... Well, I don't quite get it...
  8. I measured mine with RealTemp, and that's roughly an average during Prime95 Small FFT. Add about 5C for Linpack temps, and 2-3C for Blend temps. As for yours showing 16 temps: RealTemp only shows 4 temps for me - one per core, regardless of whether HT is on or off. I used Arctic Silver 5.
  9. Just ran Prime95 with the Blend test and actually, the temperatures are lower than with the other test I tried. I looked it up and what I said before was wrong: the highest power consumption and heat output come from the Large FFT test - something the ASCII interface even tells you. So all this time I was running Large FFTs but posting on this forum that I was running Small FFT. Sorry, I mixed them up!

    I'll test with more caution and see the temperatures I get.
  10. Do you recommend an HSF that doesn't have to be modded to fit?
  11. no video card ?
  12. rrob: I think I'd go with this HSF:


    Note how the mounting system is a drop-in replacement and the retention bracket is completely flat. It is in the same "supercooler" league as the TRUE, and it would have saved me a lot of trouble. Alas, it was launched the very day I installed my mod.

    h0devil: Yeah... no video card. It only has a poorly-performing onboard video chip - with dedicated memory, no less. Well, this is a headless number cruncher, after all!

    I tested performance on typical single-threaded workloads against a 2.66 GHz C2D and found that it is roughly 45% faster at transcendental function evaluation, 188% faster (yep!) at matrix multiplication/manipulation (which must be due to the on-die memory controller), and 25% faster at PRNGs like the Mersenne Twister. This was a simple, pedestrian compilation with g++, not the Intel compiler... Of course, the final production software that runs on this system will all be compiled with the Intel compiler.
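The comparisons above were g++-compiled C++; purely as a sketch of the methodology (best-of-N timing per workload), here is the same idea in Python, with made-up stand-in kernels for two of the workload classes mentioned:

```python
import math
import random
import time

def bench(fn, repeats=5):
    """Best-of-N wall-clock time in seconds; taking the minimum damps scheduler noise."""
    return min(timed(fn) for _ in range(repeats))

def timed(fn):
    t0 = time.perf_counter()
    fn()
    return time.perf_counter() - t0

def transcendentals():
    # Stand-in for the transcendental-function workload
    return sum(math.sin(i) * math.exp(-i / 10_000) for i in range(1, 50_000))

def prng():
    # CPython's random.Random is itself a Mersenne Twister
    rng = random.Random(42)
    return sum(rng.random() for _ in range(100_000))

for name, fn in [("transcendentals", transcendentals), ("PRNG", prng)]:
    print(f"{name}: {bench(fn) * 1000:.2f} ms")
```

Run the same harness on both machines and the speedup is simply t_old / t_new - 1, which is how figures like "45% faster" fall out.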
  13. Nice! Impressed with your modding to fit the coolers!! Well done - thumbs up. That was great eye candy for me - dual-socket Xeon systems have been in my dreams for a while :p!! Have fun with them!! :)
  14. WOW, well done, really big monster.

    I'd like to ask if anyone has benchmarks of the Intel X5580 or X5550 in video editing and/or 3ds Max.

    Hope you enjoy your system.
  15. Yeah, it really is a big monster. It was a joy assembling it.

    A little off-topic, maybe, but I'm also currently reading up on the Nehalem-EX launch. For us physicists, just imagining a 4S8C (32-core) or 8S8C (64-core) system is great!!!

    For example, I'm currently running a correlated-particle toy model in which one of the input variables reads "CPUs //Number of CPUs" - I can simply put any integer there and the program will automatically spawn the needed threads! I'm currently using CPUs=8 for Core i7 platforms... Just imagine setting CPUs=64 on a 32-core, HT-enabled system... I'd have results in 1/8th of the time!!!

    The Nehalem-EX platform made me wonder whether independent manufacturers like Tyan or Supermicro would actually create 8S motherboard combos. There is already one Opteron-based solution from Tyan, which is this one:


    QPI now makes this possible for the Nehalem-EX Xeons, but the question is: will only the big integrators be able to do this, or will we be able to assemble these rigs ourselves?

    Ideally, it could be as simple as "SLI-ing" two special Nehalem-EX mobos...
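The "CPUs = N" knob described above maps naturally onto a worker pool. Here's a minimal sketch in Python: `simulate_chunk` is a made-up stand-in for the real model's per-worker share, and processes stand in for the threads the post mentions (CPU-bound Python threads don't scale past one core):

```python
import multiprocessing as mp

def simulate_chunk(args):
    """Hypothetical stand-in for one worker's share of the toy model."""
    seed, steps = args
    acc = seed
    for _ in range(steps):
        acc = (acc * 1103515245 + 12345) % 2**31  # cheap LCG as filler work
    return acc

if __name__ == "__main__":
    CPUS = mp.cpu_count()  # the post's "CPUs" variable: 16 here, 64 on an 8S8C Nehalem-EX box
    with mp.Pool(CPUS) as pool:
        partials = pool.map(simulate_chunk, [(s, 100_000) for s in range(CPUS)])
    print(f"{CPUS} workers produced {len(partials)} partial results")
```

With this structure, scaling from 8 to 64 workers really is just changing one number, which is exactly the property the post is excited about.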