Recommend a case for 8 x 3.5" drives

I am tired of the poor network performance of my ReadyNAS NV NAS device, so I want to build my own fileserver.

The primary purpose of this box is to store my backup, music, picture, and video files, and to serve this content over Gb Ethernet to my Media Center PC, workstation, and wireless notebooks.

I want to use an 8-port SATA-II RAID-5 controller with 8 x 3.5" SATA-II drives.
I am considering the Adaptec 2820SA PCI-X card, but PCI-X does limit my motherboard options.

I do not need lots of CPU power, only enough to serve my media files over Gb ethernet.
I am considering an Intel Core 2 Duo.

I do not need fast graphics; once set up, I will run the server without a monitor and just remote-desktop in.
I am considering a motherboard with built-in video.

For the case I am considering the Lian-Li PC-V1100B Plus II V-Silent case.

What would your recommendations be for a case (or other components)?

  1. Have you considered a slower PCIe card like the HighPoint RocketRAID 2320 (PCI Express x4)? It is easily half the price, supports auto-rebuild with a hot spare, and motherboard compatibility will be painless - since it's a fileserver only, you probably won't notice the difference.

    A cheap case like the Spire Swordfin would be my choice over the Lian-Li, since my wallet usually dictates where to focus spending.
    The TT Armor is also a great choice: 10 external bays, in case you want to go hot-swap with all 8 array HDDs plus 1 OS HDD and a DVD drive.
    The CM Stacker 810 has 11 external bays, so you can have 1 DVD, 8 array HDDs, and 2 mirrored OS HDDs, all external.
  2. doolittle, thanks for the recommendations.

    I was considering putting all the drives internal to the case - that's why I was looking for cases with 8+ internal 3.5" bays - but I have seen some 5.25" cages that hold multiple 3.5" drives in removable brackets. This seems like a much better idea, especially since I won't have to open the case to replace a faulty drive.

    Is using these types of cages for removable drives a good idea, e.g. do they have sufficient ventilation, and can they really be removed without powering down?

    Can you recommend any particular units for SATA-II, maybe 2 x 4-bay to accommodate the 8 drives?

  3. How much free space do you need? The Intel ICH8 has 6 SATA ports; if you add 6 x 750GB disks in RAID 5, that's 5 x 750GB = 3.75TB usable - you are kidding me if you need more than that!
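
    For reference, RAID 5 gives you (n - 1) x disk size of usable space, since one disk's worth of capacity goes to parity. A quick arithmetic sketch (plain math, no vendor-specific assumptions):

    ```python
    def raid5_usable_gb(num_disks: int, disk_gb: float) -> float:
        """RAID 5 loses one disk's worth of space to distributed parity."""
        if num_disks < 3:
            raise ValueError("RAID 5 needs at least 3 disks")
        return (num_disks - 1) * disk_gb

    # 6 x 750GB on the ICH8's six ports:
    print(raid5_usable_gb(6, 750))  # 3750 GB, i.e. 3.75TB
    ```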
  4. Rather than dropping a couple hundred on a case, why not get a 3x5.25" to 4x3.5" Drive Bay converter? Something like one of these...

    -Wolf ponders...
  5. Hmm, you mean a couple of 4-in-3 units (four drives in three 5.25" bays), or maybe a 5-in-3 plus a 3-in-2...

    A pair of the former is ~$208 for 8 drives in 6 bays; the latter combination would be ~$192 for 8 drives in 5 bays. Well, I can't say I have any experience with add-on units of that type, only the integrated enterprise-level server types like the Dell PowerEdge. I'm not sure it would be worth the added complexity or potential for device failure, but it would be a great option when 5.25" bays are limited.

    I was thinking of a simpler trayless solution with a straight-through connector, like this MASSCOOL MRA200 Mobile Drive Rack at $20-$25 shipped - that's only ~$160 for an 8-pack, which seems to be a cheaper, less complex, and more reliable solution.
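
    Putting the three options side by side (prices as quoted above, rounded; a rough sketch, not current pricing):

    ```python
    # Rough cost/bay comparison of the three rack options quoted above.
    options = {
        "2 x (4-in-3) cages":        {"cost": 208, "drives": 8, "bays": 6},
        "(5-in-3) + (3-in-2) cages": {"cost": 192, "drives": 8, "bays": 5},
        "8 x MASSCOOL MRA200 racks": {"cost": 160, "drives": 8, "bays": 8},
    }
    for name, o in options.items():
        per_drive = o["cost"] / o["drives"]
        print(f"{name}: ${o['cost']} total, ${per_drive:.2f}/drive, {o['bays']} bays used")
    ```

    The MRA200 route is cheapest per drive but consumes the most 5.25" bays, so the right pick depends on the case.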
  6. Two of the 4 x 3.5" drives in 3 x 5.25" bays units seems like a good idea; maybe I can use two of them in a 7-bay case to support 8 disk drives and a DVD drive.

    But I'm not convinced that I really need the hot-swap bays; using them basically means I will not use the internal 3.5" brackets, and that is a waste of case space.

    I see Adaptec also has a PCI-E solution:

    But thinking about the network access only being Gb/s, maybe the RocketRAID at 50% of the cost of the Adaptec is not a problem.

    As for the number of drives, I already have 2TB and I need more. I may not start out with 8 x 750GB drives, but it would be great to grow to 8 ;)

    I found an Asus board that includes PCI-E and PCI-X, and two built-in Gb/s Ethernet adapters (one for the cable modem, one for the LAN). This seems like a good choice if I want to use both PCI-E and PCI-X, and I don't need separate network cards.

    Any comments on this board?
  7. Here are my plans:

    8 external 5.25" bays in the Xion Stacker. $44.99 after rebate

    2 Cooler Master STB-3T4 drive racks. Each holds 4 SATA drives in 3 external 5.25" spaces. $19.99 each.

    RocketRAID 2220. I specifically picked the 2xxx series because they have Online Capacity Expansion (OCE), so there is no need to buy all 8 drives at the start. Without OCE, you would have to rebuild the array every time you add a drive. Be sure the card you want has this feature so you can expand as your needs grow. I picked the 2220 because all the others are PCI-E and I am using an old board with PCI only; PCI-X is supposedly backward compatible with plain PCI. ~$250
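
    The appeal of OCE in numbers: with 750GB drives, usable RAID-5 space as the array grows one drive at a time (a back-of-the-envelope sketch; one disk's capacity is always lost to parity):

    ```python
    def raid5_usable_tb(num_disks: int, disk_gb: int = 750) -> float:
        # One disk's worth of capacity is consumed by distributed parity.
        return (num_disks - 1) * disk_gb / 1000

    for n in range(3, 9):
        print(f"{n} drives -> {raid5_usable_tb(n):.2f} TB usable")
    # grows from 1.50 TB at 3 drives to 5.25 TB at 8 drives
    ```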

    One option I am considering is a standard 4U rack-mount case with 6 external 5.25" bays, where I would use two STB-3T4 racks. The only drawback is that this leaves no room for a CD drive, which is OK because that will only be used during setup.

    More options with a few more bays:
  8. Also, to address cooling:

    I started out building a basic server with those POS drive kits you can get on eBay that include three 40mm fans on the front. DO NOT GET THEM. Once I had 3 of them in my case, it sounded like a jet engine.

    My new server system will have two of these 4-in-3 racks with big, quiet 120mm fans. I even went as far as to purchase replacement "silent" ball-bearing fans from Newegg, because the stock fans are sleeve-bearing.

    Don't get the variable-speed fans, though, because throttling down does nothing for intake air. Variable-speed fans should only be used on exhaust.

    I don't believe you want one of those cases with a lot of internal 3.5" spaces where the drives are stacked very tightly on top of each other. These 7200rpm hard drives run hot. I had 2 stacked on top of each other that ran 24/7 for a few years. I went to work on one of them one day and it was VERY hot to the touch. The reason I had to work on it was because it was failing - after the fact, I assumed this was due to the high heat levels it was experiencing.