I'm new to RAID. We have a small office and recently got a new system with the Intel S3000AH motherboard, which supports built-in RAID. The system has 2x80GB drives (planned to be used as boot drives for the OS) and 2x120GB drives (planned to be used as data storage).
Can RAID 10 be implemented on this set of hard disks? If so, can you give me any links on how to set up RAID 10? I tried setting up RAID 10 in the BIOS but could not get through; I could only manage RAID 1 on the 2x80GB pair and RAID 1 on the 2x120GB pair.
I will be using this as a server in my office, where we develop web applications based on ASP.NET and PHP, as well as an internet gateway for my office users. There are only about 15 users. The processor is a 64-bit dual-core Intel Xeon on the Intel S3000AH server motherboard.
P.S. The short answer is NO: RAID 10 needs a minimum of four disks.
I think I was not clear on this one. Yes, there are 4 HDDs: 2 of them are 80GB and 2 of them are 250GB (corrected from my last post, where I said they were 2x120GB).
So, using these 4 drives, can we implement RAID 10? Or do I need to have 4 identical drives, like 4x250GB?
No, you don't need identical drives (but it's always recommended). A RAID 10 configuration on your kit would result in one 160GB volume. I don't think you'd be able to access the remaining 170GB on each of the 250s.
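For anyone wondering where the 160GB figure comes from, here's a back-of-the-envelope sketch (sizes in GB are the ones from this thread; the pairing behavior assumed is the typical one, where each mirror is limited by its smaller member and the stripe by the smaller mirror):

```python
# Why mixed drive sizes waste space in RAID 10 (sizes in GB).
# Assumption: the controller pairs like-sized drives into mirrors,
# and the RAID 0 stripe uses equal space from each mirror.

drives = [80, 80, 250, 250]
mirror_a = min(drives[0], drives[1])        # 80  -> 80GB mirror
mirror_b = min(drives[2], drives[3])        # 250 -> 250GB mirror
usable = 2 * min(mirror_a, mirror_b)        # stripe limited by smaller mirror
wasted_per_250 = drives[2] - min(mirror_a, mirror_b)

print(usable)          # 160
print(wasted_per_250)  # 170
```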
If you put your four disks in RAID 5, you should get more "bang for your buck":
4x250GB in RAID 10 = 500GB usable space
4x250GB in RAID 5 = 750GB usable space
Performance should be similar, with more usable space and the same redundancy.
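As a sanity check on those numbers, this is the arithmetic behind both layouts (a sketch assuming 4 identical drives, sizes in GB):

```python
# Usable space on 4 identical 250GB drives for the two proposed layouts.
n, size = 4, 250

raid10_usable = (n // 2) * size   # half the drives hold mirror copies
raid5_usable = (n - 1) * size     # one drive's worth of space holds parity

print(raid10_usable)  # 500
print(raid5_usable)   # 750
```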
Agreed... RAID 5 is a much better implementation for a server... you should also get faster read/write times with RAID 5, as the system can read/write three drives at the same time, compared to two drives in the proposed RAID 10.
Now, with my hard disk setup (2x80GB and 2x250GB), would a Matrix RAID be a good solution (first RAID 1 on each pair of 80GB and 250GB drives, and then a matrix RAID on those two RAID 1 volumes)? Or would RAID 5 on 4x250GB hard disks be a better solution?
As I said earlier, I'm a newbie, so if I made some silly statement, please excuse me. I'm a newbie but willing to learn.
Matrix RAID requires you to define your drive arrays first. So with the 2x80 and 2x250, you'd need to define an array with all four drives first, and then you could set up to two RAID volumes of RAID 10, RAID 0, or RAID 5 (if the southbridge supports it). I do not believe Matrix Storage will allow you to access the remaining space available on 2 out of the 4 disks in an array.
Given your situation (15 users in an office), I would recommend just sticking with two RAID 1 arrays: 1x80GB and 1x250GB. Keep the OS and programs on the 80GB, and user data and web sites on the 250GB. If you ever lose a drive, just replace it with a bigger one and rebuild the RAID 1 array. When you are ready to replace the drives (either because of failure or because you need more space), I would recommend drives in the "sweet spot" between size and cost (currently between 320 and 500GB), since, as you are painfully aware, there is a cost associated with the connection used by each additional drive. After you replace both drives in an array with larger ones, you can extend the partition and get more space. Also, buy the best-quality drives you can find (5-year warranty). WD offers RAID Edition (RE) drives, for example.
Drives are cheap: 500GB costs around $100. People are expensive. If it takes you even one hour to get everyone back online, your team is non-productive for that hour, and the average employee cost is ~$75/hour, then you lose about $1,200 (15 users + you) for each hour you spend recovering from a disk failure.
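The cost argument works out like this (a sketch using the post's own assumed figures, which are illustrative, not quotes):

```python
# Downtime-cost estimate: drives are cheap, people are expensive.
# All figures are the assumptions from the post above.
users = 15
admins = 1                 # "you"
hourly_cost = 75           # assumed average loaded cost per person, $/hour
drive_cost = 100           # roughly a 500GB drive at the time

cost_per_hour_down = (users + admins) * hourly_cost
print(cost_per_hour_down)               # 1200
print(cost_per_hour_down / drive_cost)  # 12.0 drives per hour of downtime
```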
Finally, I would not recommend RAID 5 unless you have a RAID controller with battery backup. Without that, you run the risk of having to rebuild the array after any power outage, hard reset, or other unexpected shutdown... or you have to turn off the write-back cache feature, and your write performance will be terrible. I have a RAID 5 array on an Intel mobo. I use that array as a write-rarely, read-often storage mechanism, so I don't need the write-back cache. When I had it enabled, I had to rebuild the array all the time, and the utility of the computer dropped precipitously.
"the system has 2x80GB drives (planning to be used as boot drives for the OS) and 2x[250]GB drives (planning to be used as data storage)"
To put it simply, it sounds like you want two partitions anyway, or at least two arrays. If that is your intent, two separate RAID 1 arrays would net 80GB and 250GB. Depending on the load, this may be faster than a RAID 5, as both arrays can be accessed concurrently without a noticeable penalty, while retaining decent redundancy. This is usually ideal. Technically, you could have one drive fail from each array and still retain your data, but not two drives in the same array. This setup would be better than RAID 5 overall.
Rather than purchasing 2x250GB drives to create one big array, buy a 500GB drive and a USB caddy and do daily backups.
For us, the question is not about cost. Since it is a new system, I can have the hard disks replaced so that all of them are identical; the cost difference won't be much. I just need the best solution for my setup. As praeses suggested, we do have a 500GB external USB drive from WD, which we use for all our backups.
So, if changing the HDDs is not a factor, do you still think the solution suggested by teramedia is the best for my situation, or would one of the other RAID solutions suit better?
My main goal is to do this right one time and not have to worry about the server for at least 3 or 4 years.
I would recommend against RAID 5 unless:
1) you have and will use a UPS that integrates with and will be configured to safely shut down the O/S,
2) you purchase high-quality, 5 yr warranty, RAID-ready disk drives,
3) you are diligent about daily backups to the USB drive (RAID is not a backup strategy),
4) you don't run anything on this server that can cause the O/S to crash,
5) you have adequate cooling in your rack / server tower to keep the four drives cool,
6) you test-config in advance and confirm that you can get adequate write performance out of the RAID 5 array (Otherwise, your developers will be waiting for Godot), and
7) you also create a small RAID 0 partition for temp storage (e.g. where your TEMP and TMP directory parameters point, where your pagefile sits, etc.)
If any of these statements aren't true, stick with 2x RAID 1 arrays, or even just 2 larger disks in a RAID 1 array; leave yourself some future expansion room.
The extra performance you can get from a RAID 5 array on large block reads can be substantial, but it sounds like your use case will involve looking up a lot of small files: increased latency might hurt more than increased throughput helps. The performance hit you'll incur on RAID 5 writes will be greater than on RAID 1 writes because of the software-based parity implementation.
If cost isn't a factor, get two (2) PCIe-x4 SATA RAID controllers with battery-backed write-back cache (e.g. Areca), put one in a safe place, put the other in your server, set up a RAID 5 array on that, and then configure the O/S to use the disk as two partitions: one small one (16GB?) for the O/S and programs, the other for data.
I'm going to follow teramedia's advice. Our hardware passes the test on all the points mentioned in teramedia's post. I will just have to look into the PCIe-x4 SATA RAID controllers; if we can manage that, then I think we shall go ahead with RAID 5. Otherwise, I think the 2x RAID 1 arrays are good enough.
Thank you all once again for the great advice. I don't think I would have gotten this anywhere else.