Dual Xeon Nehalem Workhorse Build

Hi All,

I'm building a workhorse computer for scientific computation. I have about $2500 to spend, give or take. The complete list is on newegg here, but I'll write up the main details too, for those too lazy to click:


Processor - Xeon E5520 (Nehalem, 2.26 GHz) x2
Heatsink - Dynatron G666 60mm Double Ball Bearing CPU Cooler (x2)
Motherboard - Asus LGA Dual-1366 Motherboard
RAM - 12 GB (6x2GB) OCZ Platinum DDR3 1333
Graphics Card - Nvidia GeForce 9500 1GB 550 MHz
Hard Drives:
2x OCZ Vertex 30 GB SSD (RAID 0, all system files)
1x VelociRaptor 300 GB 10,000 RPM HD (/home, /root, /swap, also for all frequent writes)
2x Western Digital Caviar Green WD10EADS 1TB SATA 3.0Gb/s Hard Drive (RAID 1, /data/)
Case - COOLER MASTER HAF 932 RC-932-KKN1-GP Black Steel ATX Full Tower Computer Case - Retail
Power Supply - CORSAIR CMPSU-750TX 750W ATX12V / EPS12V
Cords - 5x OKGEAR 18" SATA II Cable Model GC18ARM12 - Retail

There's also a DVD player, monitor, keyboard, and mouse, but those are kind of straightforward.

A couple questions for everybody:

1.) Is there an advantage to the Asus motherboard over the Tyan boards, which are about $100 cheaper and which I just saw come out today?

2.) Apparently the Xeon processor doesn't come with a heatsink, is that true?

3.) Is the SSI EEB server case necessary? Will this motherboard fit in an EATX case? Also, what type of fans are needed for this server case? I'm worried about the RAM getting very hot if I don't have massive airflow through this thing.

4.) Is there anything else I'm obviously missing (besides fans)? I'll need to buy some silver thermal compound for the heatsink, I assume, unless that comes with the heatsink.

Lastly, I'd like to purchase this somewhat soon. But what do you think the probability of problems is if I don't wait for the components to get tested and sorted out by the (much smarter than me) community? I do have the advantage that in the next couple of days I can ship it outside California and have friends bring it here, and thus avoid $200 or so in taxes.

Let me know what you think, and any suggestions you have. I've built computers before, but I've never built anything of this complexity (dual CPU or server motherboards). Thanks for your help,

  1. Already made one catch... apparently the new boards are configured for EATX instead of SSI EEB (like previous server boards). The server case I had listed supported both, but since the board is EATX, I think I will probably use a Cooler Master desktop case, since I'm really worried about airflow for this thing. Thus I've now added these components:

    Coolermaster EATX Steel Case - http://www.newegg.com/Product/Product.aspx?Item=N82E16811119160
    Corsair 12V 750W power supply - http://www.newegg.com/Product/Product.aspx?Item=N82E16817139006

  2. Lots of specialized questions that I can't answer, but I have 2 observations.

    1/ I'd have an ASUS over a Tyan board any day of the week.

    2/ Your CPU cooling fans are very small and obviously designed to fit in a rackmount case. The 60 mm fans are tiny compared to the kind of gaming heatsinks we usually get to talk about. They are going to be very noisy: 51 dB each means you need this computer in another room, or you need earplugs.
  3. Additionally,

    4x green drives might not be the best option. Consider getting an SSD / VelociRaptor 300GB for a system drive. If you don't care about data "safety" because you back up externally very often, then you could grab 2-4 VelociRaptors and get decent results. I wouldn't advise going for RAID 0 SSDs, because without a proper implementation you'll get worse results overall (and if you ask, chances are against you).
  4. I think the change to an EATX gaming case has fixed the noise issue. Thanks for the help.

    Hmm, the green drives might be a problem then. The two issues with HDs for this system are these: I need at least a TB of storage, because some of the codes I run output huge amounts of data. Also, I figured it would be best to have RAID do the data backup, so it is done in real time (some codes take a week to run, and one will always be going whenever the system fails).

    So my thought was to use 4 HDs in a RAID 10 configuration. But if that's actually going to be slower due to issues with RAID drivers, then maybe I should look for an SSD + RAID 1 backup arrangement? I know SSDs aren't really supposed to write much faster than HDDs, but maybe it will still be a small improvement? Would getting the Black version of the HDs instead of the Green version give me back most of the performance?


  5. See also:



    The Nehalem architecture only caused a small ripple in the desktop world, mostly due to high pricing and performance that only shines in high-end applications. However, it has created a giant tsunami in the server world. The Xeon 5570 doubles the performance of its predecessor in applications that matter to more than 80% of the server market. Pulling this off without any process technology or clock speed advantage, without any significant increase in power consumption, is nothing but a historic achievement for the ambitious and talented team of Ronak Singhal.

    [end quote]

    More launch news here:



    Internet: Meet Your New Processor

    New Intel® Server Processors Boost Speed, Efficiency, Add More Intelligence, Adapt to Energy Usage Needs

  6. Thanks for the info... I'll have to see what I can fit into the price tag without the grant people getting mad at me. Do you see any components where I can cut down price-wise without causing problems for the system as a whole?


  7. You don't need a separate heatsink/fan for the CPU,
    if you remove the stupid push-pins from Intel's stock HSF,
    and use a proper backing plate e.g. Sidewinder's
    "LGA1366 bolt-thru-kit":


    Their salesman, Gary Stofer, really knows a lot about this problem
    and the preferred solution. See also:


    Also, you can save a little money on the Cooler Master HAF-932:


    Lots of room, and great ventilation options (we have one).

    Going in the other direction, ASUS motherboards generally
    work GREAT with Corsair SDRAM e.g.:


    ... but this latter item costs a bit more than your OCZ DDR3 kit.

  8. Thanks for the tips on the Corsair RAM and the cheaper Cooler Master case...

    The question I have about the heatsink is that Newegg states the current E5520 line doesn't include stock heatsinks... I find this very odd, given that this is a retail version, but that's what it says.

  9. Thanks for the update!

    Maybe Intel is finally admitting, in a round-about kind of way,
    that their stupid push-pins do have engineering defects.

    Check here for a comprehensive list of HSF reviews:


    Contact them for a recommendation.

  10. Another question for everybody...

    My friend is recommending the following: I should scrap the 4 1TB drives, and instead get a combination of a 60 GB SSD (for the operating system), a 74 GB Raptor 10,000 RPM drive (for frequent writes), and then 2 of the 1TB drives in a RAID 1 array...

    Do you think that's a better idea for this sort of system?

  11. Currently the only SSDs worth buying are the ones using the Indilinx controller or the expensive Intel ones.



    Whoever suggested getting an old, previous-generation Raptor has been out of the game for a while. The VelociRaptor took the place of the old Raptors quite a while ago, and the current 7200 RPM drives are just as fast as the old Raptors. I would pick up a couple of OCZ Vertex drives and put them in RAID 0 for the system drive, and use a couple of 1TB drives in RAID 1 for storage.
  12. For the 1TBs, get the Caviar Blacks if you don't mind the power usage, since they run about 20 MB/s faster than normal HDDs because they have double the cache.
  13. If you need to squeeze things a bit to get under your budget:
    ASUS 22" LCD Monitor $150 A/R - for $120 savings
    4x WD Caviar Black 500GB HDD $70 - for $176 savings
  14. Hi All,

    Thanks for your help... this is really starting to come together! The good news is that the total amount I can spend has been finalized, and it's about $2830, which is about as high as I'd hoped. Also, I should be able to avoid sales tax and push pretty near $2750 into the machine, leaving some money for shipping, etc.

    With that in mind, I've made some major revisions to the HD area: here is my current plan:

    (2x) OCZ Vertex 30 GB SSD (for operating system and program installation)
    (1x) 300 GB VelociRaptor (for /home/, /root/, /swp)
    (2x) Caviar Black 1TB (/data/)

    A couple (Linux Related) questions about this:

    1.) Is this a good configuration? I've heard that putting /swp/ on an SSD is a bad idea due to the frequent writes. I don't imagine I'll actually use swap that much, due to the large supply of RAM. That's the main reason for the VelociRaptor drive, but I'm wondering if VelociRaptors really have any point in a system such as this, because it seems that if I want speed I go with an SSD, and for space I go to WD Caviars, so is that worth the investment?

    2.) What filesystem should I use on the SSDs? I've heard that journaling filesystems are bad for SSDs, and that I don't need to defragment. Should I be using ext2 instead of ext3? Is it better to use ext3 and just turn off the journal?

    3.) Non-Linux related: How much stability would I get from adding ECC RAM? Is it worth the performance hit (non-triple-channel) and the cost (probably an extra hundred dollars, which probably equals some downgrade in the above HD implementation), given that system stability is very important?

    Thanks again,

  15. A few questions: Is this machine dedicated to one piece of computational code, or shared among different users and applications? If it's single-purpose, is it for an existing application, or is the software being custom-developed? For a single-purpose machine, one piece of hardware will end up being the performance limitation, depending on the code being run. Once the limiting speed is reached there, increasing performance in other areas is a waste of money. In that case, you want to make sure you know where your bottleneck will be before you choose the hardware, by consulting with the software developers. If it's a general-purpose machine, you have to make a best guess at what is going to be important for most of the users.

    I'm a mechanical engineer working on a custom solid mechanics simulation, and our simulation needs to multiply and update many pairs of values in a few GB data space over and over. The whole data set fits in the RAM, so hard drive performance is not a big issue at all for us. Our current bottleneck is latency between the processor L1 cache and the RAM, when randomly accessing individual values in the data set. Your bottleneck could be memory bandwidth, memory size, processor speed, poor multithreading, HDD write speed or any number of things.

    As far as ECC goes, the more ram you use, the more likely you are to have a bit error. Your code might or might not react badly to an error. ECC and RAID 1/5 are important in the server segment primarily because they help prevent downtime. If 24/7 availability is a key priority, then they both make sense. In your case, it might be that you can tolerate a crash once a month if the system runs significantly faster for the rest of the time (which triple-channel may or may not give you, depending on the application).
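    The "more RAM, more bit errors" point above can be put in rough numbers. This is a minimal sketch in which the per-bit soft-error rate is a loudly hypothetical placeholder (published field studies vary by orders of magnitude); the only takeaway is that the expected error count scales linearly with capacity:

```python
# Expected memory soft errors scale linearly with installed RAM.
# NOTE: fit_per_mbit is a made-up illustrative rate, not measured data.

HOURS_PER_MONTH = 730  # ~24 * 365 / 12

def expected_errors_per_month(ram_gib, fit_per_mbit=1.0):
    """Expected soft errors per month, assuming `fit_per_mbit` failures
    per billion device-hours per megabit (hypothetical)."""
    megabits = ram_gib * 1024 * 8
    return megabits * fit_per_mbit * HOURS_PER_MONTH / 1e9

e12 = expected_errors_per_month(12)
e24 = expected_errors_per_month(24)
print(f"12 GiB: {e12:.3f} errors/month, 24 GiB: {e24:.3f} errors/month")
# Whatever the true rate, doubling the RAM doubles the expected error count,
# which is why ECC matters more as memory grows.
```

    Whether that expected count is tolerable depends on how badly the code reacts to a flipped bit, exactly as described above.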

    Have you looked into CUDA or any of the other GPGPU technologies for this machine? For our application, we wanted more parallelism and speed than traditional CPUs provide. We're currently doing testing with ATI's GPGPU offering as well as a non-GPU-based computational accelerator solution. If there's going to be new software developed for this machine, a high-end GPU would provide more options for the programmers. I'm curious to see what other people's take on it is. Plus, with 8 cores and a 4870 X2, this machine might finally be able to run GTA4 maxed out (after hours, of course).
  16. Thanks so much for the detailed response. The system will be running several different codes (and I'm not sure which new codes will be run in the near future)... unfortunately they are all very different: one tends to have a bottleneck from HD writes, one tends to have a RAM shortage, and the other is probably processor-limited (though I haven't paid enough attention to find the bottleneck there)...

    It will also be used as a user machine at the same time, which is why I'm thinking that keeping the system drives apart from where any data is being written will be key.

    I'm still starting off the grad school endeavor, so I know very little about the codes I'll need to be running in 6 months, much less in the next couple of years. One good thing is that I'll have money for upgrades, for instance if I start doing hardcore CUDA programming, or need a huge RAID array, etc.

  17. > As far as ECC goes, the more ram you use, the more likely you are to have a bit error. Your code might or might not react badly to an error. ECC and RAID 1/5 are important in the server segment primarily because they help prevent downtime. If 24/7 availability is a key priority, then they both make sense.

    The latest Nehalem EP dual-socket servers (read "Xeon Core i7")
    are reported to use ECC Registered DDR3 e.g.:


    Xeon Server 1: ASUS RS700-E6/RS4 barebone
    CPU: Dual Xeon "Gainestown" X5570 2.93GHz
    MB: ASUS Z8PS-D12-1U
    RAM: 6x4GB (24GB) ECC Registered DDR3-1333
    NIC: Intel 82574L PCI-E Gbit LAN

    There's a summary chipset block diagram here:


  18. One new question that's now popping up... the Supermicro boards are now available. Which manufacturer is thought to have the higher quality? I know they're both pretty top-of-the-line for servers, correct?

    A couple of differences I'm noticing are that the Supermicro board is listed as an EATX board, while the Asus board is SSI EEB. The Supermicro boards use the Intel 5520 chipset, while the Asus board uses the Intel 5500. The Asus board has an integrated video card, while the Supermicro one has integrated sound. The Supermicro board also has one more PCI Express x16 slot.

    Thanks for your help,

  19. Last question I still haven't gotten answered clearly: are SSI EEB and EATX interchangeable? Is the problem only with the power supply, or with the case as well?


  20. Hi All,

    I wanted to put the updated build on here, and ask a couple last questions, before I go ahead and start buying components by the end of this week.


    Again for those not interested in clicking, the components are as listed:

    Processor - Xeon E5520 (Nehalem, 2.26 GHz) (x2)
    Heatsink - Dynatron G666 60mm Double Ball Bearing CPU Cooler (x2)
    Motherboard - SUPERMICRO MBD-X8DA3-O Dual LGA 1366 Intel 5520 EATX Server
    RAM - 6 GB (3x2GB) Crucial DDR3 1333 ECC Unbuffered Triple Channel kit (x2 for 12GB total)
    Graphics Card - Nvidia GeForce 9600 GT Superclocked 512 MB GDDR3
    Hard Drives:
    2x OCZ Vertex 30 GB SSD (RAID 0, all system files)
    1x Western Digital Caviar SE16 640 GB (/home, /root, /swap, main user drive)
    2x Western Digital Caviar Green WD10EADS 1TB SATA 3.0Gb/s Hard Drive (RAID 1 /data/, high-volume writes)
    Case - SILVERSTONE KUBLAI Series KL03-B Black
    Power Supply - CORSAIR CMPSU-850TX 850W ATX12V 2.2 / EPS12V 2.91 SLI Ready CrossFire Ready Active PF
    Cords - (2x) OKGEAR 18" SATA II Cable Model GC18ARM12 - Retail

    So the remaining questions are:

    1.) Are the motherboard, heatsink, case, and power supply all compatible? The concerns I have are correct holes in the case for the heatsink mounting brackets, as well as the power supply needs of the motherboard. Specifically, Supermicro lists the following (http://www.supermicro.com/manuals/motherboard/X58/X8DA3.pdf):

    "The X8DA3/X8DAi can accommodate 24-pin ATX power supplies. Although most power supplies generally meet the specifications required by the CPU, some are inadequate. In addition, the two onboard 12V 8-pin power connections are also required to ensure adequate power supply to the system."

    I don't understand which power supplies have two 12V 8-pin power connections. Does the one I'm listing meet this specification? Or do all power supplies have this, and thus it is not prominently listed among those that do?

    Also, I've noticed a beastly heatsink from Zalman: http://www.newegg.com/Product/Product.aspx?Item=N82E16835118046 I'm not planning on overclocking at all, so I think this would be a bit overboard, but is it worth it, given that the price isn't that much greater?

    2.) Has anybody set up a dual-monitor display off a GeForce 9600 GT? Is that a reasonable setup without any screen hangups? I doubt I'll be doing anything graphically intensive, but I'm not sure if 1GB of graphics memory would help more than having a very fast processor, as with my setup.

    3.) Does the RAID setup I have look reasonable for the given uses? I want a system that is very snappy for the user, but at the same time there will be almost constant reads and writes going to the 1TB Caviar array. I think this separation will make everything work.

    Let me know what you think, as always, all comments are welcome.

  21. No, the power supply you chose doesn't have 2 8-pin CPU power connectors. You would have to either buy a server PSU designed for dual CPU setups or buy an adapter like this one:

    LOGISYS AD202 12V Molex to 8pin P8 Adapter - Retail
    http://www.newegg.com/Product/Product.aspx?Item=N82E16812887002 $4.99

    I would choose the adapter, because all the server PSUs I found were very big and >$400.
  22. Thanks... I had never thought of looking for an adapter. It's not going to hurt the power supply long-term to split the 12V line like that, is it?

  23. No, it'll be fine.
  24. Also... is there a difference between 24-pin power supplies and 20+4-pin?


  25. If I'm not mistaken, the only real difference is that 20+4-pin PSUs will also work with older motherboards that had a 20-pin main connector instead of the newer standard 24-pin connector.
  26. Something I just realized after re-analyzing your setup: SAS. You should go with SAS instead of SATA for 24/7 constant use for one major reason: SAS has a far lower error rate than SATA, which is why it is used in servers. The mobo you have chosen includes 8 SAS ports, so that is a good starting point. Having used SAS myself and researched the benefits, I vividly remember the lower error rate as being the main benefit. Good news for you: Seagate has their ES.2 (server-class) drives in SAS, as well as all of their 10k and 15k drives in 2.5" and 3.5". I can see you have a disposition towards WD; however, the drives you have chosen are not designed for what you need.
    If you don't like Seagate, here is another option with a 15k Hitachi at Newegg.
    Get 2 and run them in RAID 1 to replace your SSDs and your system drive.

    I'm looking for more ideas but I'm posting this now so you can make a better informed decision.
  27. I think this power supply has the 2 8pin 12V lines that are required, so I've subbed it into the build


    Thanks for the info on HDs, the question will be price vs. performance for my system, so I'll need to research the details more.

  28. That PSU doesn't have 2 8-pin CPU power connectors either. It has one 4-pin CPU power connector and one 8-pin. The other 8-pin connectors you see listed are PCI-E connectors and will not work to power the CPU. You would still need an adapter like this one to convert the 4-pin to an 8-pin connector.

    APEVIA 9.5" 4 Pin P4 /12V to 8 Pin P8 /12V Power Supply Converter Cable Model CVT48 - Retail
    http://www.newegg.com/Product/Product.aspx?Item=N82E16812201005&Tpk=p8%20converter $3.50
  29. Quick question: what is your reasoning for using an SSD?
    I ask because SSDs are mainly beneficial if you need high IOPS / faster access times. Since you stated that you need faster write speeds on the 1TB drives, you shouldn't need faster access times.
    Also, I would be cautious about WD for RAID due to TLER, which stands for Time-Limited Error Recovery. This single feature has cost me over 200GB of data. When a read/write is being performed and the drive hits some sort of error, TLER can cause the drive to hang, which causes the RAID controller to 'drop' the drive from the array, which means a RAID rebuild is necessary. I have experienced this with both RAID 1 and RAID 0 using Raptors; however, I have never experienced it using RAID 10 with Raptors. The funny thing is that TLER is designed for RAID use; yet someone wrote a program to disable this feature because of its problems. And yes, the GP drives have TLER.

    One more point: whatever you do, use RAID 1/10/5 for all data; never use RAID 0 unless you don't care about losing data.

    Do you have a UPS? It would surely suck to have a week-long code execution lost due to power problems.
  30. How noisy are the 15,000 RPM SCSI drives compared to the 7200 and 10,000 RPM (VelociRaptor) drives? Are they too noisy to be in a common work environment?

    I'm looking specifically at:

  31. I can't personally attest to that drive, but the Seagate 15k.6 is close to the Raptor, and very similar to 4 Raptors in RAID (which is what I have), and the fans usually drown out most of the noise.

    Another reason to consider enterprise-class drives such as Seagate's ES.2 (7200 RPM) is that they are built for a high volume of transactions, which desktop drives aren't. This includes not only their build quality but also their firmware. Many sites like to benchmark drives, and many times they pit a 7200 RPM server drive against regular desktop drives and test them using desktop applications and the occasional IOPS test. Even though these tests are flawed, they do show how poorly desktop drives handle high transaction volumes.

    I honestly think that even 4 250-320GB drives plus 2 1TB drives will work. Check this out: for maximum IOPS/transactions per second you can use short-stroking (Tom's Hardware just did a great article). Say you get 4 320GB drives and set them up in RAID 10 for OS/apps and RAID 0 for swp, using Intel's Matrix RAID. Allocate the first 500GB to the RAID 10 and the rest, 280GB, to the RAID 0. Create a 50GB partition in the RAID 10; make it the first partition you create, so it uses the first 25GB of each drive. You can then use this partition for your high-IOPS work, thanks to short-stroking. You can even double the size if you need to. And the 2 1TB drives really don't need to be server-class drives, since they are for backup only.
    I don't know what 'swp' is, so I assumed it means 'swap', like a scratch disk of sorts. Read the article, because it will open your eyes to many possibilities.
    Sorry, but I don't have the time to calculate how much you have to spend on drives, so it would be cool if you could do that so I can get a better idea.
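    The split above checks out arithmetically. A small sketch, assuming the usual capacity rules (RAID 10 usable = raw / 2, RAID 0 usable = raw):

```python
# Verify the Matrix RAID split suggested above: 4 x 320 GB drives,
# a 500 GB usable RAID 10 for OS/apps, remainder striped as RAID 0.

n_drives = 4
drive_gb = 320

raid10_usable_gb = 500
raid10_per_drive = raid10_usable_gb * 2 / n_drives   # mirrored, so 2x raw needed
raid0_usable_gb = (drive_gb - raid10_per_drive) * n_drives

# The 50 GB short-stroke partition likewise occupies the first
# (fastest, outer-track) region of each drive.
short_stroke_per_drive = 50 * 2 / n_drives

print(f"RAID 10 consumes {raid10_per_drive:.0f} GB per drive")    # 250 GB
print(f"RAID 0 volume comes out to {raid0_usable_gb:.0f} GB")     # 280 GB, as stated
print(f"Short-stroke slice: first {short_stroke_per_drive:.0f} GB of each drive")
```

    The 250 + 70 GB per-drive split matches the 500 GB / 280 GB volumes in the post exactly.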

    Also, do you have a UPS?
  32. I just checked prices on CPUs and some other parts, and it does make more sense to build 2 PCs and either cluster them OR run 2 different codes at the same time. You can build a master PC with RAID and a 2nd PC with a single drive. You can overclock the CPUs to 3.0 GHz minimum, 100% stable, which already gives you a nice boost in speed over your current setup.

    Check this out. The only things I didn't add were some small parts, and I forgot to add a single drive for the 2nd PC, and it comes in within your budget.

    4x 74GB Raptors; 2x 1TB Seagate 7200.11 (very fast)
    2x PC Power & Cooling PSUs - 1x 750W and 1x 610W
    1x Lian-Li large case capable of holding 10 drives, with LOTS of airflow
    2x Zalman 9700 LED CPU fans (I have 2 of these; they work great and are rather quiet)
    2x Intel Q8300 quad-core @ 2.33GHz, easily OC'd to 3.0+
    Same 2 LCDs
    2x 2x2GB Kingston DDR2-800
    2x ASUS P5Q Pro LGA 775 Intel P45 ATX motherboards - designed for OC
  33. What power supplies do SAS drives require? I assume they use a different power connector than SATA drives?

  34. SAS connectors are a combo of the SATA power and data cables, so instead of two separate connectors there is just one.
    Just Google it; it is easy to find.
  35. Ah... so that means the power is drawn through the motherboard, instead of coming directly from the power supply?

    I was worried that desktop power supplies might not support them, but it sounds like as long as the motherboard does, I'm good.

  36. Nope, it doesn't come from the motherboard. The plugs that I've seen have a single connector to the drive, but it splits into two connectors on the other end. One is a data connector to the board, and the other connects to a standard power supply. All the ones I've seen should work fine with a normal desktop PSU though.
  37. Also... how much louder can I expect the SCSI 15k RPM drive to be? Is it going to be the dominant source of noise in my machine?

    I'm a little bit worried about noise. Are there any insulating cases, or external solutions, that can help keep my office quiet?

  38. If you want lower noise, look at insulated cases from Lian-Li or Antec. I have a Lian-Li server-class tower case which can hold 12 drives and either a 2nd PSU or 2 80mm fans, and it is fairly quiet for having 4 Raptors in it, especially since the Raptors are in a RAID 10, so they all move at the same time. I also have an Antec P180 with 4 Raptors, which is insulated as well and provides great noise suppression; I can barely hear the drives over the fans during heavy use. I can't attest to the 15k drive you pointed out, but I do know that the Seagate 15k.6 drives are almost identical to my 4 Raptors; however, the Seagate 15k.6 are the best 15k drives on the market. Also, at Dell and Buy.com, a Seagate Savvio 15k.6 146GB sells for $250-260. They have the highest IOPS, sustained read (145-160MB/s) and write (140MB/s), and also the highest MTBF. Only the Intel X25 SSD has a higher MTBF (2M hours vs. 1.6M).

    I think WD has released their Velociraptors in SAS, FYI.
  39. Sorry to keep asking random questions:

    What do people think about this Chieftec case?


    I know it doesn't have any fans. However, I can get it for less than $100 with shipping included, and can then spend the extra money on quiet 120 mm and 90 mm fans.

    The major concern that's holding me up is whether the holes are in the right places for the dual-processor heatsinks. Is this likely to be a major problem? Is it possible to create the correct holes if they don't exist?


  40. Unfortunately, that case does not support E-ATX boards, so it won't work for your setup. Also, it is not insulated, so it will be rather loud.
    And if I recall, cases supporting E-ATX with dual CPUs provide a backplate for the CPU cooler. I know for certain my Lian-Li case came with them.
  41. According to both Newegg and Chieftec, it supports EATX... I found a cheaper deal on that site and didn't notice that it didn't list EATX there... so I think that's OK. Noise might be an issue, though.


  42. Wow, after looking at the pictures on newegg, that is one very nice case. I really like the side vent for the HDDs with 2 80mm fan holes. For fans, I really like Antec's 3 speed fans which allow better cooling efficiency combined with better noise control.
  43. Yeah... I think it's a great deal, and by visual inspection (using the zoomed-in images on Newegg), I think the holes will work out OK.

    The only trick is that I'm going to dump an extra $40 or so into the fans, but since I get to pick them out, I can make sure all my fans are quiet (aiming for 20 dB or so per fan).

  44. One new question... I finally saw the price of the non-SAS Supermicro motherboard. I had figured it would be $20-30 cheaper, and thus not a big deal, but it's actually $100 off. So maybe I'll do that, which will require swapping in a VelociRaptor instead of the SAS 15k RPM drive.

    My question is... I know the SATA interface allows 3.0 Gbit/s. Is that a total among all SATA devices on the system, or is it per SATA device? That is, if I have 6 different SATA drives on the system, are they bound to a maximum of ~300 MB/s of transfer? I know the processors likely wouldn't support a ton more than that anyway. However, if having SAS and SATA simultaneously would allow the 15k drive to run without interference from any SATA communications, that might be worthwhile...


  45. Lyuokdea said:
    How noisy are the 15,000 RPM SCSI drives compared to the 7200 and 10,000 RPM (VelociRaptor) drives? Are they too noisy to be in a common work environment?
    I'm looking specifically at:
    Here is a head-to-head comparison between the WD VelociRaptor 300GB and the Fujitsu 15K 147GB SCSI (Ultra 320).
    Note the differences in SINGLE USER (your applications) and MULTI USER. Also look at the differences in environmental factors.
  46. Thanks for the information... obviously, the programs I will be writing are slightly different from the games used as single-user applications. However, that starts to make me think that the extra $100 for SAS support might not be worth it.

    But first I need to check on the total SATA bandwidth. Is it limited to 3 Gb/s for all HDs combined? In that case, there might be a problem, because my 6 hard drives could easily outburst that, if all HDs are running at the same time.


  47. Lyuokdea said:
    because my 6 hard drives could easily outburst that
    Really? What is the burst capacity of your 6 HDDs?
  48. Well, according to this: http://techreport.com/articles.x/15363/12

    The VelociRaptor alone will be like 2.0 Gbps, the two Caviar Black 1TBs are about 1.6 Gbps apiece, the Caviar SE16 is about the same, and then the SSDs will also be quite fast.
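    For what it's worth, the 3.0 Gb/s figure is per port: SATA is a point-to-point interface, so the drives don't share one 3 Gb/s pipe. What is shared is the controller's uplink to the rest of the system. A rough sketch of the arithmetic, where the 10 Gb/s uplink budget is an assumed, chipset-dependent figure for illustration rather than this board's spec:

```python
# Per-port vs. aggregate SATA bandwidth, using the burst figures quoted above.

SATA_PORT_GBPS = 3.0
ASSUMED_UPLINK_GBPS = 10.0  # hypothetical shared southbridge bandwidth

# Approximate burst rates from the techreport numbers above (Gb/s)
drives = {
    "VelociRaptor 300GB": 2.0,
    "Caviar Black 1TB (a)": 1.6,
    "Caviar Black 1TB (b)": 1.6,
    "Caviar SE16 640GB": 1.6,
}

# No single drive exceeds its own 3 Gb/s link,
assert all(rate <= SATA_PORT_GBPS for rate in drives.values())

# and even simultaneous bursts press on the shared uplink, not on any one port.
total = sum(drives.values())
print(f"Aggregate burst: {total:.1f} Gb/s vs. assumed uplink {ASSUMED_UPLINK_GBPS} Gb/s")
```

    So simultaneous bursts are a question about the chipset's aggregate bandwidth, not about the per-port SATA limit.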
