
Old RAID card or X58 software RAID?

October 14, 2009 8:06:07 AM

I'm getting an old HighPoint RocketRAID 2220 PCI SATA II card in a C2D system with 8x500GB HDDs.

I'm considering putting the RAID card into my current rig with the 8 drives.
I could also use my current Foxconn Bloodrage X58 to software raid them...

I know that a hardware RAID option is typically higher performance, but this RAID card is far from excellent if you only consider speed. Hardware cards are typically also more expensive, but I'm getting this one regardless of whether or not I use it.
The software solution is a lot more modern, but regardless, it's still a software solution.
I don't know about the reliability of either solution, though.

So, my question is...
Will the Foxconn Bloodrage X58's ICH10R software solution give me more or less performance/reliability (both are important) at 6x500GB RAID 5, compared to an aging expansion card that can do 8x500GB?


What I have:
i7 with 6GB RAM and a 1TB and a 250GB HDD.
C2D with 2GB RAM and 8x500GB.

The 500GB drives are all 2-year-old Seagates, hence RAID 5.

What I can do:
i7..............................................C2D
6x500GB software RAID 5..............1TB + 500GB + 500GB + 250GB
~~~~~~~~~~~~~~~~~~~or~~~~~~~~~~~~~~~~~~~~~
8x500GB hardware RAID 5.............1TB + 250GB
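For what it's worth, a quick back-of-the-envelope check of the usable space in each layout (RAID 5 spends one drive's worth of capacity on parity); the arithmetic is mine, not from any spec sheet:

```python
def raid5_usable_gb(num_drives, drive_gb):
    """RAID 5 usable capacity: one drive's worth of space goes to parity."""
    return (num_drives - 1) * drive_gb

print(raid5_usable_gb(6, 500))  # 6x500GB software RAID 5 -> 2500 GB usable
print(raid5_usable_gb(8, 500))  # 8x500GB hardware RAID 5 -> 3500 GB usable
```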


I'll probably do a fresh install of Windows 7 RTM on each.

The i7 is my main rig. Depending on what PSU the C2D system has, it'll either get a 4670 512MB or a 4870 1GB (assuming the PSU can handle it; if not, I may just get it a new PSU and get a 5870 2GB for my main rig). The C2D will also become my main rig's data backup (and the one I'll use if I have to sell my main rig because I lose my job).

Just curious, assuming a 8x500GB RAID 5, and 1 drive failed, how long would it take to rebuild the lost drive?
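Rough back-of-the-envelope on that: a rebuild has to rewrite the entire replacement drive, so the time is roughly drive capacity divided by the sustained rebuild rate. The rates below are my guesses for hardware of this era, not measurements:

```python
def rebuild_hours(drive_gb, rate_mb_s):
    """Estimate RAID 5 rebuild time: the entire replacement drive is
    rewritten at the controller's sustained rebuild rate."""
    return drive_gb * 1000 / rate_mb_s / 3600  # GB -> MB, seconds -> hours

for rate in (30, 50, 80):  # assumed sustained rebuild rates in MB/s
    print(f"{rate} MB/s -> {rebuild_hours(500, rate):.1f} h")
```

So anywhere from roughly 2 to 5 hours under those assumptions, and longer if the array is being used while it rebuilds.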


...
On a side note, knowing my brother, if I give him the second computer, it'll have more porn on it than an adult DVD store within a week...
October 14, 2009 3:32:11 PM

Well, besides the speed issue, the hardware solution will give you a completely portable RAID setup. The motherboard/software solution will more than likely work only on that particular motherboard.
October 14, 2009 3:38:09 PM

You have the best onboard RAID there is, and you want to switch to an old PCI-X-based hardware-assisted controller? The 22xx is not true hardware RAID when it comes to parity RAID; it's partly software, and the CPU-heavy work is offloaded to the host CPU. That's not too bad, but the PCI-X bus is. And if you're going to run it in a plain PCI slot, then I guess performance doesn't matter to you at all; only reliability does?

Do you need to boot from it? Why is software RAID a lesser option in your eyes?

Ever heard of ZFS? It might be for you.
October 15, 2009 6:39:29 AM

PCI-X has 1GB/s of bandwidth, so I doubt the bus itself is a bottleneck.
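That 1GB/s figure checks out for 64-bit PCI-X at 133 MHz; a plain 32-bit/33MHz PCI slot is a different story. Simple width-times-clock arithmetic, assuming one transfer per clock:

```python
def bus_peak_mb_s(width_bits, clock_mhz):
    """Theoretical peak of a parallel bus: bytes per transfer x clock rate."""
    return width_bits / 8 * clock_mhz  # MB/s, one transfer per clock

print(bus_peak_mb_s(64, 133))  # PCI-X 133: 1064.0 MB/s peak
print(bus_peak_mb_s(32, 33))   # plain PCI: 132.0 MB/s peak
```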

And no, I have never heard of ZFS. Thanks for bringing it up though, RAID-Z seems quite interesting.

I'll look more into RAID-Z, but if I don't go that route, I guess I'll keep the old RAID card in the older PC in RAID 5, and use the ICH10R in RAID 5 for my newer rig.
October 15, 2009 1:47:59 PM

Well generally you have two options:

- get a RAID solution in your workstation PC
- get a NAS solution and connect to the NAS over the gigabit network

If you want to use ZFS, you need a dedicated OS and machine for it. It would give you a lot more protection than RAID 5. Generally, RAID arrays are not reliable enough to let you skip backups. You always need a backup. For example, I have more or less identical fileservers that sync at night.

Using ZFS would mean you don't need that "ugly" PCI-X card. Besides, does your mobo even have PCI-X? The Foxconn X58 doesn't appear to.
Basically, PCI-X is thrown-away money. For ZFS, you just need SATA ports: the onboard ones, plus maybe some from a card like this:
http://www.supermicro.com/products/accessories/addon/AO...

That would be much faster. Bandwidth is not what matters here; latency is. Without the right latencies, you'll never reach the max bandwidth. For example, 8 disks on PCIe reach 480MB/s sequential speeds. With one disk on a PCI bus, the speed drops to 220MB/s, even though one disk doesn't saturate the PCI bus. But because the latency of each disk goes up, they all drag each other into a waiting cycle. So please, do yourself a favor and leave the old PCI bus to die quietly. We have something better now, and it's called PCI Express (3GIO).
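A toy model of that latency argument (all numbers invented for illustration): if every request pays a fixed bus latency before its payload moves, achieved throughput collapses as latency grows, even when the wire rate is nowhere near saturated:

```python
def effective_mb_s(req_kb, latency_ms, wire_mb_s):
    """Achieved throughput when each request pays a fixed latency
    before its payload moves at the wire rate."""
    payload_mb = req_kb / 1024
    xfer_ms = payload_mb / wire_mb_s * 1000  # time to move the payload
    return payload_mb / ((latency_ms + xfer_ms) / 1000)

# Same 64KB requests, same wire rate, different per-request latency:
print(f"{effective_mb_s(64, 0.1, 1000):.0f} MB/s")  # low-latency bus
print(f"{effective_mb_s(64, 1.0, 1000):.0f} MB/s")  # high-latency shared bus
```

With the made-up numbers above, a 10x increase in per-request latency cuts achieved throughput by more than 6x, which is the "waiting cycle" effect in a nutshell.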
October 16, 2009 6:34:51 PM

If you want genuine help, you'll have to give some feedback on our ideas and suggestions.