Dual Xeon Nehalem Workhorse Build

Lyuokdea

Hi All,

I'm building a workhorse computer for scientific computation. I have about $2,500 to spend, give or take. The complete list is on Newegg here, but I'll write up the main details too, for those too lazy to click:

http://secure.newegg.com/WishList/PublicWishDetail.aspx?WishListNumber=9576285

Processor - Intel Xeon E5520 (Nehalem, 2.26 GHz) (x2)
Heatsink - Dynatron G666 60mm Double Ball Bearing CPU Cooler (x2)
Motherboard - ASUS dual LGA 1366 motherboard
RAM - 12 GB (6x2GB) OCZ Platinum DDR3 1333
Graphics Card - NVIDIA GeForce 9500, 1 GB, 550 MHz
Hard Drives:
2x OCZ Vertex 30 GB SSD (RAID 0, all system files)
1x VelociRaptor 300 GB 10,000 RPM hard drive (/home, /root, swap; also for all frequent writes)
2x Western Digital Caviar Green WD10EADS 1TB SATA 3.0Gb/s hard drive (RAID 1, /data)
Case - COOLER MASTER HAF 932 RC-932-KKN1-GP Black Steel ATX Full Tower
Power Supply - CORSAIR CMPSU-750TX 750W ATX12V / EPS12V
Cords - 5x OKGEAR 18" SATA II Cable Model GC18ARM12

There's also a DVD drive, monitor, keyboard, and mouse, but those are pretty straightforward.


A couple questions for everybody:

1.) Is there an advantage to the ASUS motherboard over the roughly $100 cheaper Tyan boards, which I just saw come out today?

2.) Apparently the Xeon processors don't come with a heatsink; is that true?

3.) Is the SSI EEB server case necessary? Will this motherboard fit in an EATX case? Also, what type of fans are needed for this server case? I'm worried about the RAM getting very hot if I don't have massive airflow through this thing.

4.) Is there anything else I'm obviously missing (besides fans)? I'll need to buy some silver thermal compound for the heatsinks, I assume, unless that comes with the heatsink.

Lastly, I'd like to purchase this fairly soon. What do you think the odds of problems are if I don't wait for the components to get tested and sorted out by the (much smarter than me) community? I do have the advantage that in the next couple of days I can ship it outside California and have friends bring it here, avoiding $200 or so in sales tax.

Let me know what you think, and any suggestions you have. I've built computers before, but I've never built anything of this complexity (dual-CPU or server motherboards). Thanks for your help,

~Lyuokdea
 

Lyuokdea

Already made one catch... apparently the new boards are configured for EATX instead of SSI EEB (like previous server boards). The server case I had listed supported both, but since the board is EATX, I think I will probably use a Cooler Master desktop case, since I'm really worried about airflow for this thing. I've now added these components:

Coolermaster EATX Steel Case - http://www.newegg.com/Product/Product.aspx?Item=N82E16811119160
Corsair 12V 750W power supply - http://www.newegg.com/Product/Product.aspx?Item=N82E16817139006

~Lyuokdea
 
Lots of specialized questions that I can't answer, but I have two observations.

1/ I'd take an ASUS over a Tyan board any day of the week.

2/ Your CPU cooling fans are very small and obviously designed to fit in a rack-mount case. The 60 mm fans are tiny compared to the kind of gaming heatsinks we usually get to talk about, and they are going to be very noisy: at 51 dB each, you'll need this computer in another room or you'll need earplugs.

 

antiacid

Additionally,

4x Green drives might not be the best option. Consider getting an SSD or a 300 GB VelociRaptor for the system drive. If you don't care about data "safety" because you back up externally very often, then you could grab two to four VelociRaptors and get decent results. I wouldn't advise RAID 0 SSDs, because without a proper implementation you'll get worse results overall (and if you have to ask, chances are against you).
 

Lyuokdea

I think the change to an EATX gaming case has fixed the noise issue. Thanks for the help.

Hmm, the Green drives might be a problem then. There are two issues with the hard drives for this system. I need at least a terabyte of storage, because some of the codes I run output huge amounts of data. Also, I figured it would be best to have RAID do the data backup, so it happens in real time (some codes take a week to run, and one will always be going whenever the system fails).

So my thought was to use four hard drives in a RAID 10 configuration. But if that's actually going to be slower due to issues with RAID drivers, then maybe I should look at an SSD plus RAID 1 backup arrangement instead? I know SSDs aren't really supposed to write much faster than hard drives, but maybe it would still be a small improvement? Would getting the Black version of the drives instead of the Green version give me back most of the performance?
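
Once it's built, I suppose I can just measure the Green vs. Black difference myself; a quick sketch with hdparm (the device names are placeholders, not the real ones):

```bash
# Compare raw sequential read speed of two drives.
# /dev/sdX names are placeholders; check /proc/partitions for the real ones.
for dev in /dev/sdb /dev/sdc; do
    echo "== $dev =="
    sudo hdparm -t "$dev"   # timed buffered (sequential) reads
done
```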

Thanks,

~Lyuokdea
 

MRFS

See also:

http://it.anandtech.com/IT/showdoc.aspx?i=3536


[begin quote]

Conclusion

The Nehalem architecture only caused a small ripple in the desktop world, mostly due to high pricing and performance that only shines in high-end applications. However, it has created a giant tsunami in the server world. The Xeon 5570 doubles the performance of its predecessor in applications that matter to more than 80% of the server market. Pulling this off without any process technology or clock speed advantage, without any significant increase in power consumption, is nothing but a historic achievement for the ambitious and talented team of Ronak Singhal.

[end quote]



More launch news here:

http://www.pcper.com/#NewsID-6974

http://www.intel.com/pressroom/archive/releases/20090330corp_sm.htm

Internet: Meet Your New Processor

New Intel® Server Processors Boost Speed, Efficiency, Add More Intelligence, Adapt to Energy Usage Needs



MRFS
 

Lyuokdea

Thanks for the info... I'll have to see what I can fit into the price tag without the grant people getting mad at me. Do you see any components where I can cut the price without causing problems for the system as a whole?

Thanks,

~Lyuokdea
 

MRFS

You don't need a separate heatsink/fan for the CPU,
if you remove the stupid push-pins from Intel's stock HSF,
and use a proper backing plate e.g. Sidewinder's
"LGA1366 bolt-thru-kit":

http://www.sidewindercomputers.com/lga13sobbowi.html


Their salesman, Gary Stofer, really knows a lot about this problem
and the preferred solution. See also:

http://www.supremelaw.org/systems/heatsinks/warning.htm


Also, you can save a little money on the Cooler Master HAF-932:

http://www.newegg.com/Product/Product.aspx?Item=N82E16811119160&Tpk=N82E16811119160

Lots of room, and great ventilation options (we have one).


Going in the other direction, ASUS motherboards generally
work GREAT with Corsair SDRAM e.g.:

http://www.newegg.com/Product/Product.aspx?Item=N82E16820145220

... but this latter item costs a bit more than your OCZ DDR3 kit.


MRFS
 

Lyuokdea

Thanks for the tips on the Corsair RAM and the cheaper Cooler Master case...

The question I have about the heatsink: Newegg states that the current E5520 line doesn't include stock heatsinks... I find this very odd, given that this is a retail version, but that's what it says.

~Lyuokdea

 

MRFS

Thanks for the update!

Maybe Intel is finally admitting, in a roundabout kind of way,
that their stupid push-pins do have engineering defects.


Check here for a comprehensive list of HSF reviews:

http://www.frostytech.com/


Contact them for a recommendation.


MRFS
 

Lyuokdea

Another question for everybody...

My friend is recommending the following: I should scrap the four 1TB drives, and instead get a combination of a 60 GB SSD (for the operating system), a 74 GB Western Digital Raptor 10,000 RPM drive (for frequent writes), and two of the 1TB drives in a RAID 1 array.

Do you think that's a better idea for this sort of system?

~Lyuokdea
 
Currently the only SSDs worth buying are the ones using the Indilinx controller or the expensive Intel ones.

http://www.anandtech.com/storage/showdoc.aspx?i=3535

http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&Description=ocz%20vertex&bop=And&Order=PRICE

Whoever suggested getting an old, previous-generation Raptor has been out of the game for a while. The VelociRaptor took the place of the old Raptors quite a while ago, and current 7200 RPM drives are just as fast as the old Raptors. I would pick up a couple of OCZ Vertex drives and put them in RAID 0 for the system drive, and use a couple of 1TB drives in RAID 1 for storage.
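
Under Linux you can build both arrays in software with mdadm rather than the motherboard's fakeraid. A minimal sketch, assuming the two Vertex drives show up as /dev/sda and /dev/sdb and the 1TB drives as /dev/sdc and /dev/sdd (in practice you'd usually create the arrays from partitions rather than whole disks):

```bash
# RAID 0 (striped) system array across the two Vertex SSDs
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb

# RAID 1 (mirrored) data array across the two 1TB drives
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd

# Record the layout so the arrays assemble at boot
# (the file is /etc/mdadm/mdadm.conf on Debian-based distros)
mdadm --detail --scan >> /etc/mdadm.conf
```

A nice side effect of md RAID 1 is that either member is readable on its own in another machine if the box dies mid-run.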
 

Lyuokdea

Hi All,

Thanks for your help... this is really starting to come together! The good news is that the total amount I can spend has been finalized at about $2,830, which is about as high as I'd hoped. Also, I should be able to avoid sales tax and put nearly $2,750 into the machine itself, with some money left for shipping, etc.

With that in mind, I've made some major revisions to the hard drive setup; here is my current plan:

(2x) OCZ Vertex 30 GB SSD (for operating system and program installation)
(1x) 300 GB VelociRaptor (for /home, /root, swap)
(2x) Caviar Black 1TB (/data)

A couple of (Linux-related) questions about this:

1.) Is this a good configuration? I've heard that putting swap on an SSD is a bad idea due to the frequent writes, and I don't imagine I'll actually use swap much given the large supply of RAM. That's the main reason for the VelociRaptor drive, but I'm wondering whether a VelociRaptor really has any point in a system like this: if I want speed I go with the SSDs, and for space I go to the WD Caviars, so is it worth the investment?

2.) What filesystem should I use on the SSDs? I've heard journaled filesystems are a poor fit for SSDs because of the extra writes. Should I use ext2 instead of ext3, or is it better to use ext3 with the journal turned off? (I've put a rough sketch of the layout I'm picturing after question 3.)

3.) Not Linux-related: how much stability would I gain from ECC RAM? Is it worth the performance hit (non-triple-channel) and the cost (probably an extra hundred dollars, which probably equals some downgrade in the hard drive setup above), given that system stability is very important?
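
To make questions 1 and 2 concrete, here's roughly the layout I'm picturing; the device names are guesses, and I gather ext3's journal can also be removed later with tune2fs (-O ^has_journal) instead of formatting ext2 outright:

```bash
# Rough layout sketch -- device names are guesses, not the real ones.
mkfs.ext2 /dev/md0      # SSD RAID 0: no journal, fewer small writes
mkfs.ext3 /dev/sda1     # VelociRaptor partition for /home
mkswap /dev/sda2        # swap on the VelociRaptor, kept off the SSDs
mkfs.ext3 /dev/md1      # 1TB RAID 1 array for /data

# Mounting with noatime avoids a metadata write on every read (helps SSDs).
# /etc/fstab entries, roughly:
#   /dev/md0    /       ext2   noatime   0 1
#   /dev/sda1   /home   ext3   noatime   0 2
#   /dev/md1    /data   ext3   noatime   0 2
#   /dev/sda2   none    swap   sw        0 0
```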

Thanks again,

~Lyuokdea

 

atomiktoaster

A few questions: Is this machine dedicated to one piece of computational code, or shared among different users and applications? If it's single-purpose, is it for an existing application, or is the software being custom-developed? For a single-purpose machine, one piece of hardware will end up being the performance limit, depending on the code being run; once that limiting speed is reached, increasing performance in other areas is a waste of money. In that case, you want to make sure you know where your bottleneck will be before you choose the hardware, by consulting with the software developers. If it's a general-purpose machine, you have to make a best guess at what will be important to most of the users.

I'm a mechanical engineer working on a custom solid mechanics simulation, and our simulation needs to multiply and update many pairs of values in a few GB data space over and over. The whole data set fits in the RAM, so hard drive performance is not a big issue at all for us. Our current bottleneck is latency between the processor L1 cache and the RAM, when randomly accessing individual values in the data set. Your bottleneck could be memory bandwidth, memory size, processor speed, poor multithreading, HDD write speed or any number of things.

As far as ECC goes, the more ram you use, the more likely you are to have a bit error. Your code might or might not react badly to an error. ECC and RAID 1/5 are important in the server segment primarily because they help prevent downtime. If 24/7 availability is a key priority, then they both make sense. In your case, it might be that you can tolerate a crash once a month if the system runs significantly faster for the rest of the time (which triple-channel may or may not give you, depending on the application).
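
If you do go with ECC under Linux, the kernel's EDAC subsystem counts corrected and uncorrected errors per memory controller, so you can see whether bit flips are actually happening. A quick check, assuming an EDAC driver exists for your chipset and is loaded (the sysfs layout can vary by kernel):

```bash
# Per-memory-controller error counters from the EDAC subsystem
grep . /sys/devices/system/edac/mc/mc*/ce_count   # corrected errors
grep . /sys/devices/system/edac/mc/mc*/ue_count   # uncorrected errors
```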

Have you looked into CUDA or any of the other GPGPU technologies for this machine? For our application, we wanted more parallelism and speed than traditional CPUs provide, and we're currently testing ATI's GPGPU offering as well as a non-GPU-based computational accelerator. If there's going to be new software developed for this machine, a high-end GPU would give the programmers more options. I'm curious what other people's take on it is. Plus, with 8 cores and a 4870 X2, this machine might finally be able to run GTA4 maxed out (after hours, of course).
 

Lyuokdea

Thanks so much for the detailed response. The system will be running several different codes (and I'm not sure which new codes will be run in the near future)... unfortunately they are all very different: one tends to be bottlenecked by hard drive writes, one tends to run short on RAM, and the third is probably processor-limited (though I haven't paid enough attention to it to find the bottleneck there).

It will also be used as a user machine at the same time, which is why I'm thinking that keeping the system drives separate from wherever data is being written will be key.

I'm still starting off on the grad school endeavor, so I know very little about the codes I'll need to run in six months, much less in the next couple of years. One good thing is that I'll have money for upgrades, for instance if I start doing hardcore CUDA programming, or need a huge RAID array, etc.

~Lyuokdea
 

MRFS

> As far as ECC goes, the more ram you use, the more likely you are to have a bit error. Your code might or might not react badly to an error. ECC and RAID 1/5 are important in the server segment primarily because they help prevent downtime. If 24/7 availability is a key priority, then they both make sense.


The latest Nehalem EP dual-socket servers (read "Xeon Core i7")
are reported to use ECC Registered DDR3 e.g.:

http://it.anandtech.com/IT/showdoc.aspx?i=3536&p=4

Xeon Server 1: ASUS RS700-E6/RS4 barebone
CPU: Dual Xeon "Gainestown" X5570 2.93GHz
MB: ASUS Z8PS-D12-1U
RAM: 6x4GB (24GB) ECC Registered DDR3-1333
NIC: Intel 82574L PCI-E Gbit LAN


There's a summary chipset block diagram here:

http://forums.macnn.com/65/mac-pro-and-power-mac/381919/nehalem-xeon-benchmarks-ridiculously-fast/



MRFS
 

Lyuokdea

One new question that's popping up... the Supermicro boards are now available. Which manufacturer is thought to have higher quality? I know they're both pretty top-of-the-line for servers, correct?

A couple of differences I'm noticing: the Supermicro board is listed as EATX, while the ASUS board is SSI EEB; the Supermicro boards use the Intel 5520 chipset while the ASUS board uses the Intel 5500; the ASUS board has integrated video, while the Supermicro has integrated sound; and the Supermicro board has one more PCI Express x16 slot.

Thanks for your help,

~Lyuokdea
 

Lyuokdea

The last question I still haven't gotten answered clearly: are SSI EEB and EATX interchangeable? Is the difference only in the power supply, or in the case as well?

Thanks,

~Lyuokdea
 

Lyuokdea

Hi All,

I wanted to put the updated build on here and ask a couple of last questions before I go ahead and start buying components by the end of this week.

http://secure.newegg.com/WishList/PublicWishDetail.aspx?WishListNumber=9576285

Again, for those not interested in clicking, the components are as follows:

Processor - Intel Xeon E5520 (Nehalem, 2.26 GHz) (x2)
Heatsink - Dynatron G666 60mm Double Ball Bearing CPU Cooler (x2)
Motherboard - SUPERMICRO MBD-X8DA3-O Dual LGA 1366 Intel 5520 EATX Server Motherboard
RAM - 6 GB (3x2GB) Crucial DDR3 1333 ECC Unbuffered Triple Channel kit (x2 for 12 GB total)
Graphics Card - NVIDIA GeForce 9600 GT Superclocked, 512 MB GDDR3
Hard Drives:
2x OCZ Vertex 30 GB SSD (RAID 0, all system files)
1x Western Digital Caviar SE16 640 GB (/home, /root, swap, main user drive)
2x Western Digital Caviar Green WD10EADS 1TB SATA 3.0Gb/s hard drive (RAID 1 /data, high-volume writes)
Case - SILVERSTONE KUBLAI Series KL03-B Black
Power Supply - CORSAIR CMPSU-850TX 850W ATX12V 2.2 / EPS12V 2.91 SLI Ready CrossFire Ready Active PFC
Cords - (2x) OKGEAR 18" SATA II Cable Model GC18ARM12

So the remaining questions are:

1.) Are the motherboard, heatsink, case, and power supply all compatible? My concerns are whether the case has the correct holes for the heatsink mounting brackets, and whether the power supply meets the motherboard's needs. Specifically, Supermicro lists the following (http://www.supermicro.com/manuals/motherboard/X58/X8DA3.pdf):

"The X8DA3/X8DAi can accommodate 24-pin ATX power supplies. Although most power supplies generally meet the specifications required by the CPU, some are inadequate. In addition, the two onboard 12V 8-pin power connections are also required to ensure adequate power supply to the system."

I don't understand which power supplies have two 8-pin 12V power connectors. Does the one I've listed meet this specification? Do all power supplies have this, so that it isn't prominently listed on those that do?

Also, I've noticed a beastly heatsink from Zalman: http://www.newegg.com/Product/Product.aspx?Item=N82E16835118046 . I'm not planning on overclocking at all, so I think it would be a bit overboard, but is it worth it, given that the price isn't that much higher?

2.) Has anybody set up a dual-monitor display off a GeForce 9600 GT? Is that a reasonable setup without any screen hangups? I doubt I'll be doing anything graphically intensive, but I'm not sure whether 1 GB of graphics memory would help more than the very fast processors in my setup. (There's a sketch of the X config I have in mind after these questions.)

3.) Does the RAID setup look reasonable for the given uses? I want a system that is very snappy for the user even while there are almost constant reads and writes going to the 1TB Caviar array. I think this separation will make everything work.
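
For question 2, my understanding is that the nvidia driver runs two monitors off one card via its TwinView option; something like this fragment in the Screen section of /etc/X11/xorg.conf (the MetaModes resolutions are just example values for two 1280x1024 panels):

```
Section "Screen"
    Identifier "Screen0"
    Device     "Device0"
    # TwinView puts both monitors on one X screen (nvidia driver option)
    Option     "TwinView"  "True"
    Option     "MetaModes" "1280x1024,1280x1024"
EndSection
```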

Let me know what you think, as always, all comments are welcome.

~Lyuokdea

 

Lyuokdea

Thanks... I had never thought of looking for an adapter. It's not going to hurt the power supply long-term to split the 12V line like that, is it?

~Lyuokdea