Good Internal SATA Hot Swap Enclosure

kirkdickinson

Distinguished
Jan 18, 2001
I am thinking about rolling my own RAID File storage/server for our office.

I want to set up a RAID 5 or 6 system with at least 2TB of memory. Thinking about running four 750 GB drives, plus one hot standby.
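(Rough sizing, for what it's worth: RAID 5 across four 750 GB drives gives about 3 x 750 GB = 2.25 TB usable, since one drive's worth of space goes to parity; RAID 6 across the same four drives gives only about 2 x 750 GB = 1.5 TB, so for RAID 6 the fifth drive would have to join the array rather than sit as a hot spare to stay above 2 TB.)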

Here is the plan.
I have a huge old tower case.
I have a fairly recent Asus mobo with RAM and a dual-core CPU.
Probably going to install Linux or some variation of it.

Need to get:
Hardware RAID Card: based on Tom's Hardware reviews, I am going with an Areca RAID card.
Drives: probably WD 7200 RPM SATA.
Hot Swap Enclosures: ???

I have found several different ones and I don't know the quality. Here are some links:
ICY Dock
http://www.newegg.com/product/produc...82E16817994028
Super Micro:
http://www.newegg.com/product/produc...82E16817121404
Athena:
http://www.newegg.com/Product/Produc...82E16817995001

I know there are others. Any recommendations?

Thanks,

Kirk
 

sandmanwn

Distinguished
Dec 1, 2006

Yeah, don't do it. This is your company's livelihood on the line. The last thing you want to do is bet your company's future on off-the-shelf parts with zero service warranty backing them up. How many days can you really afford to wait for replacement parts and RMAs?

Go to a proper vendor. Buy the most popular model they carry. Get the big warranty. Most vendors offer 24/7 support with a 4-hour turnaround. They also stock replacement parts with the nearest local depot, based on where you register your product, for speedy service calls. Save the cookie-cutter builds for the workstations.

You won't regret it when it all fails and the only direction they can point the finger of blame is at you.
 

SomeJoe7777

Distinguished
Apr 14, 2006


The thing is, how important is the data to the business? The question you need to plant in the mind of your CEO/President/Owner is this: if all the data were lost, would the business keep running?

They always want it both ways: They want the data protected, but they don't want to pay for it. Make them understand that the two wishes are mutually exclusive.
 

rozar

Distinguished
Jun 7, 2007
It appears that Newegg is having problems at the time of this post, so I can't see what you listed, but I have used the Supermicro 5-bay and several ICY Dock products before, and the ICY Dock ones do tend to have a higher failure rate than the Supermicro. If it were me, I would use Supermicro. I have used their products in computer manufacturing for over 10 years; they build a really good, stable product. If you want to discuss more detailed specs, you can PM me.
 

BrainDeadHead

Distinguished
Aug 13, 2007
I've been using a 3-bay hot swap enclosure by Icy Dock for about 3 years now. The fans are starting to make a bit more noise than when it was new, but it is a solid product.

Supermicro's products are typically good.

Regarding Sandman's comments:

There is nothing wrong with using off-the-shelf components for these types of applications, as long as you do your homework. Also, take some of the money you save on overpriced warranties and buy a couple of hot spares: get an extra RAID card and at least one spare drive. Test the spares when you build the system, and see what happens when you take an existing array and try to bring it back online with a different card.
Since you mentioned using Linux, you should know that Linux's software RAID is superior in many ways to the lower-end "hardware RAID" cards out there. You just have to be careful and RTFM.
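For example, with Linux software RAID (md), the "different card" test is mostly a matter of letting mdadm re-read the superblocks. A rough sketch, with made-up device names, of building a small test array and then re-assembling it as if the disks had been moved:

  # create a 3-disk RAID 5 test array (device names here are examples only)
  mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
  cat /proc/mdstat                 # watch the initial sync finish

  # simulate moving the disks to another controller or machine
  mdadm --stop /dev/md0
  mdadm --assemble --scan          # re-detects the members from their superblocks
  mdadm --detail /dev/md0          # confirm all members came back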

$0.02
~B
 

andrewn2008

Distinguished
Apr 15, 2008
You don't state your requirements so I'm going to imagine them.

First, I'd drop the 4x750GB to 5x500GB, since you'll use RAID 5 and perhaps RAID 5EE (that's RAID 5 with the hot spare you mention rolled in). Make sure you use retail drives, because OEM units can potentially have problems when RAIDed together as an array. And I think you meant 2TB of storage, not memory; I think the largest amount of RAM you can get is 128GB.

Second, think about how you will configure the host RAID system. Are you going to have the host OS partition the drive array so you can use it as a NAS through a switch? Are you going to use it like VMware ESX and carve the array up for your VMs? Or are you going to have servers connect to it as their "data" drive, i.e. like a SAN? If you're going for the first option, then maybe all you really need is a JBOD case... and if you want backup, heck, buy two, stick in two drives and you're done. If you're going for something that will probably grow out of a NAS into a SAN, then you'd better plan it more carefully.
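If you do go the plain-NAS route, the sharing side can be as small as a single NFS export. A minimal sketch, assuming (purely as placeholders) that the array ends up mounted at /srv/array and the office LAN is 192.168.1.0/24:

  # /etc/exports -- share the array read/write with the office LAN
  /srv/array  192.168.1.0/24(rw,sync,no_subtree_check)

  exportfs -ra               # re-read /etc/exports
  showmount -e localhost     # verify the export is visible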

Third, if you're going for a NAS-into-a-SAN-like config, then you have to plan well. Chuck the m/b. Go on eBay and look for some cheap dual-socket, dual-core Xeon servers... you can get one with the case and all for under $300. (Heck, maybe you can even save on shipping and tell them to just send you the mobo/chip!) You can then load up on RAM, preferably over the 8GB mark, and get a nice Supermicro chassis to your liking. Buy a good RAID card (you'll have to research this, as most of these servers come with PCI-X slots... and that's NOT the same as PCI) and you're good to go.

Let me tell you what's going to happen. All of a sudden you paste/glue up your box into a nice big NAS... slow as heck, but who cares, it's up and running and you've got some redundancy. Then you find some cheap rack-mounted servers (or your company buys the next whiz-bang product) and it needs a database the size of Godzilla, or your 2TB of storage isn't enough... Now you're looking at this solution but can't do anything, since you've locked yourself into a single setup... and you KNOW this is going to happen, so you might as well plan BIG and then add your SATA/SAS drives as needed. By then they'll be even cheaper, and all you have to do is add drives and reconfigure your RAID. Do you really want to build a separate RAID config for every server need? Or do you want one nice big SAS/SATA pool that you can carve up and use however you like...

$.02 and change

Andy

 

quadb0y

Distinguished
Jun 12, 2008
I was in a similar situation as yourself, only for personal storage. I've had great success with a direct attached storage setup of SATA II drives and Linux software RAID.
You don't need to spend big $$ on traditional server gear to get a flexible, scalable array (up to around 20TB raw) - if you do your reading. All you need is commodity disks (all with at least a 3-year warranty these days), enough ports to attach them, and your favourite Linux distro!

Some history: after an LVM array failure, it was clear I needed redundancy. I also needed to add/remove disks in the array over time, preferably while online. I started with 500GB disks but recently migrated to 1TB disks, for a 7-disk RAID6 array (5TB usable) at present. My filesystem is reiserfs 3.6, so I can also resize the filesystem while it is mounted.

With Linux kernels >2.6.16 (IIRC), udev supports hotplug (no need for 3.3V SATA power), so attaching a new member disk to your system is limited only by your case and PSU plugs (I use several Molex-to-SATA Y splitters).

Once the disk is visible to the system (check with 'lsscsi'), you simply add it to the array with 'mdadm /dev/md0 -a /dev/sdX' and then grow the array onto the new device with 'mdadm -G /dev/md0 -n X'. The rebuild can take a long time - 20 hours to grow from six to seven 1TB disks in my case. Next, grow your filesystem on the array and you're done.
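To put the whole sequence in one place (the device name and member count are just examples; resize_reiserfs because reiserfs is what I use, other filesystems have their own grow tools):

  lsscsi                          # confirm the new disk shows up, e.g. as /dev/sdh
  mdadm /dev/md0 -a /dev/sdh      # add it to the array as a spare
  mdadm -G /dev/md0 -n 8          # grow from 7 to 8 members (the long reshape)
  cat /proc/mdstat                # watch the reshape progress
  resize_reiserfs /dev/md0        # grow the mounted filesystem to fill the bigger array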

With Linux software RAID you can even increase the member disk sizes, so future migration to (in my case) 2TB disks and above is possible. You can also migrate from one RAID level to another, online. Monitoring support is also built in (e.g. emails on events), and arrays can be members of other arrays (e.g. RAID10, RAID50, RAID60, etc.).
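The built-in monitoring is just mdadm running in monitor mode; roughly (the mail address is a placeholder):

  mdadm --monitor --scan --daemonise --mail=admin@example.com
  # sends mail on events such as Fail, DegradedArray and RebuildFinished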

Before choosing Linux software RAID, the best thing to do is read the mdadm manpage a few times and experiment with some loopback devices (check out 'losetup') in a dummy array. The Linux md driver is very mature, and software RAID performance is greater than that of most low-end to mid-range hardware RAID cards. For me, it was the clear choice for my storage needs, both now and for the foreseeable future.
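A dummy array needs nothing more than a few scratch files; a rough sketch (file names, sizes and the md device number are arbitrary, run as root):

  # create four 100MB backing files and attach them to loop devices
  for i in 0 1 2 3; do
    dd if=/dev/zero of=disk$i.img bs=1M count=100
    losetup /dev/loop$i disk$i.img
  done

  # build a throwaway RAID6 out of the loop devices and play with it
  mdadm --create /dev/md9 --level=6 --raid-devices=4 /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3
  cat /proc/mdstat
  mdadm /dev/md9 --fail /dev/loop2      # practise failing a member...
  mdadm /dev/md9 --remove /dev/loop2    # ...and removing it

  # tear it all down afterwards
  mdadm --stop /dev/md9
  for i in 0 1 2 3; do losetup -d /dev/loop$i; done
  rm disk?.img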

I strongly recommend watching your disks' health via SMART (I also graph my drive temps via MRTG). I also strongly recommend RAID6: although it's initially expensive in terms of usable space with a small number of disks, later on you'll feel much safer knowing that any two disks can drop before you are at risk of losing data to a further disk failure. Remember that with RAID5 your array is vulnerable as soon as you lose just one disk, and the rebuild after replacing a disk is the riskiest time!
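For the SMART side, smartmontools handles both one-off checks and ongoing monitoring; a minimal sketch (device name and mail address are placeholders):

  smartctl -H /dev/sda       # quick overall health verdict
  smartctl -a /dev/sda       # full attribute dump (watch Reallocated_Sector_Ct and friends)

  # /etc/smartd.conf -- monitor everything on this disk and mail on trouble
  /dev/sda -a -m admin@example.com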

One thing to be wary of: when you run out of onboard SATA ports and add a controller card, the card's ports may take precedence over the onboard ports for disk numbering, so the first disk cabled to the controller will become /dev/sda instead of the first mainboard port you were used to. Linux software RAID deals with this by reading the superblock information stored at the end of each member disk and rebuilding the array from that if it doesn't match the saved config.
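If the names do get shuffled, the superblocks are enough to sort it out; roughly (the config file path varies by distro):

  mdadm --examine /dev/sdb                     # shows which array this member belongs to
  mdadm --assemble --scan                      # assembles arrays from superblocks, regardless of device order
  mdadm --detail --scan >> /etc/mdadm.conf     # record the array so it assembles cleanly at boot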

Another piece of advice is to spread your disk purchases across multiple manufacturers and dates. There is no point in buying lots of disks from the same batch from the same factory, as odds are they will all fail in close proximity to one another. Linux software RAID checks for discrepancies in member disk size and allows (IIRC) a 1% variance. I use whole physical disks in my array, not partitions. In my reading I found that some people also recommend ECC memory for large direct-attached arrays, although I've had no issues with a decent brand of regular unbuffered, non-ECC DDR2 RAM.

I'm also looking now for some high-density SATA-II RAID backplanes to allow easy attachment & removal of disks in my array; I am pretty sure I will go for the Supermicro 5-in-3 modules (same positive experiences with Supermicro as others have stated).

Edit: Oh also I should mention I run my O/S (openSUSE 10.3) from a 4GB USB flash disk; it's a little slower than a regular hard disk but has great benefits such as complete separation from your array & no moving parts.