SOME (Personal) HISTORY AND DEFINITION OF HARDWARE RAID
A long time ago, on an old dual-CPU machine (Pentium III, 1GHz), I played with two 30GB drives using the onboard RAID-0. It was fun and you could really see a difference. At the time I thought I was using "hardware" RAID. I had no idea what the difference was. (Most Windows users never do.) However, being an avid FreeBSD and sometimes Linux user, I quickly discovered that even though you set it up in the BIOS, it was not "really" a hardware RAID system. It is "software" RAID in that your system's CPU must do all the work via drivers. A true hardware RAID is always on an add-in card and has an onboard CPU called an IOP (I/O processor). (NOTE: Some really expensive server motherboards DO have a "true" RAID system built into the motherboard. But I have never seen a user/enthusiast motherboard with a "real" RAID system.) The reason you notice the difference with FreeBSD and Linux is that the developers did not want to support these software RAID schemes; they had an aversion to them. (I think they do support the popular ones nowadays; they didn't back in 2001.)
Anyway, fast forward. At one point I bought 4x34GB Raptors and used them on a Netcell REVO64 PCI card in RAID-0. VERY nice. But after a while one drive died, so I ran the 3 remaining drives in RAID-3. Nice to have redundancy, but it is not needed on my gaming machine.
When I upgraded to Vista and an 8800GTS, I went ahead and bought a RaptorX (150GB). It was almost as fast as 2x34GB of the older Raptors in RAID-0 (or 3x in RAID-3). In addition, Netcell went out of business... so there would never be a REVO64 on a PCIe card. So I stopped using RAID-0 for a year.
Recently I decided to play with RAID-0 again, so I bought another RaptorX. The 2x150GB Raptors bottlenecked BADLY on the REVO64 card. In addition, that card doesn't play well with NCQ. Actually, for some unknown reason the two 150GB Raptors ended up being slower than the 2x34GB "old" Raptors. I didn't bother investigating, since it was bottlenecked at about 108MB/s even though 32-bit/33MHz PCI should be able to do 133MB/s. I ripped that card out and went with the onboard NVIDIA software RAID (also known as "fakeraid").
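For reference, the bus numbers I keep throwing around come out of simple arithmetic. A quick sketch (the 108MB/s figure is just my measured ceiling from above, not a spec number):

```python
# Theoretical bus bandwidth vs. what the REVO64 actually delivered.

PCI_BUS_WIDTH_BYTES = 4        # conventional PCI is 32 bits wide
PCI_CLOCK_HZ = 33_000_000      # 33 MHz

# Peak PCI throughput: 4 bytes transferred per clock.
pci_peak = PCI_BUS_WIDTH_BYTES * PCI_CLOCK_HZ / 1e6   # in MB/s
print(f"PCI 32-bit/33MHz peak: {pci_peak:.0f} MB/s")  # ~132 MB/s

# PCIe 1.0 x1: 2.5 Gb/s per lane, 8b/10b encoding (80% efficient).
pcie_x1_peak = 2.5e9 * 8 / 10 / 8 / 1e6               # in MB/s
print(f"PCIe 1.0 x1 peak: {pcie_x1_peak:.0f} MB/s")   # 250 MB/s

# Observed ceiling on the REVO64 (shared PCI bus + protocol overhead).
observed = 108
print(f"Observed: {observed} MB/s ({observed / pci_peak:.0%} of PCI peak)")
```

So even a perfect PCI card tops out around 132MB/s shared across everything on that bus, which is why two modern Raptors choke on it.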
I got bad "breathing" patterns during benchmarking, due to the CPU doing the RAID work and the drives waiting on it.
So I searched around and decided on a HighPoint RocketRAID 3120, a "true" hardware RAID card with an onboard IOP and 2x SATA connections. It is about 1/3 of the price of a 4x SATA card from Areca. This card also supports port multipliers, in case I ever decide to use an external cage of 5x SATA in the future. The card also understands NCQ and other advanced features.
I hooked up the 2x150GB Raptors and tested it out.
Here are some benchmarks I created. I thought I'd present them here for anybody interested in this card, OR anybody interested in hardware VERSUS software RAID-0 (i.e. fakeraid). Please note this is under Vista32, so the CPU utilization numbers are very questionable. (64%? Ick. There is another thread about that subject.)
HDTach, both with 64K stripes: notice the obvious "breathing" pattern due to using the system's CPU instead of an onboard IOP... it's VERY obvious.
HD Tune: (64k Stripes) and the "breathing" gets really heavy here....
And a few using ATTO. I forgot to save the NVIDIA RAID results for this test, but I did both a 64K and a 16K stripe on the RocketRAID 3120. I see many people telling others to use 16K stripes; on a true hardware card, that was actually slower:
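To see why a smaller stripe can hurt, here's a toy sketch of the standard RAID-0 address mapping (NOT the card's actual firmware logic, just the textbook layout). The request sizes are made up for illustration:

```python
# Toy RAID-0 mapping: which drive(s) does one request touch?

def drives_touched(offset, length, stripe_size, num_drives=2):
    """Return the set of drive indices a [offset, offset+length) request hits."""
    first_stripe = offset // stripe_size
    last_stripe = (offset + length - 1) // stripe_size
    # Stripes are dealt round-robin across the drives.
    return {s % num_drives for s in range(first_stripe, last_stripe + 1)}

# A single 64KB read at offset 0, under each stripe size:
for stripe_kb in (16, 64):
    touched = drives_touched(0, 64 * 1024, stripe_kb * 1024)
    print(f"{stripe_kb}K stripes: 64KB read hits drive(s) {sorted(touched)}")

# With 16K stripes, every 64KB read gets chopped into 4 pieces across
# both drives (more per-request overhead); with 64K stripes it stays on
# one drive, and consecutive reads still alternate between the drives.
```

That extra splitting is one plausible reason the 16K stripe benchmarked slower here, though the real answer depends on the card's firmware and queueing.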
HOW does it work in "subjective" terms? It's great. There is a noticeable improvement over using the NVIDIA software RAID-0. (I knew there would be. After you use a "true" hardware RAID system... it is hard to go back to software RAID... because you "know" better.)
There is one software RAID-0 test I didn't do but want to try out, just for giggles: run 2 copies of Prime95, one for each of my cores, then run the disk benchmark and see what speed you get. I may actually rip out the card and <sigh> reload again to try that, after doing the same thing with the hardware card. (Unless someone with an NVIDIA software RAID-0 wants to run it so I don't have to...)
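If you'd rather not juggle Prime95 windows, the same "benchmark under full CPU load" idea can be sketched in a few lines. This is a rough stand-in, not a real disk benchmark, and "testfile.bin" is a made-up path you'd point at a large file on the RAID volume:

```python
# Rough stand-in for "Prime95 on every core + disk benchmark":
# saturate all cores with busy-loop workers, then time sequential reads.
import multiprocessing
import os
import time

def burn_cpu(stop):
    # Spin until told to stop (our poor man's Prime95).
    x = 0
    while not stop.is_set():
        x += 1

def timed_sequential_read(path, block_size=1024 * 1024):
    # Read the whole file in 1MB chunks and report MB/s.
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while chunk := f.read(block_size):
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / elapsed / 1e6  # MB/s

if __name__ == "__main__":
    stop = multiprocessing.Event()
    workers = [multiprocessing.Process(target=burn_cpu, args=(stop,))
               for _ in range(multiprocessing.cpu_count())]
    for w in workers:
        w.start()
    try:
        path = "testfile.bin"  # hypothetical: a big file on the RAID volume
        if os.path.exists(path):
            print(f"Read under load: {timed_sequential_read(path):.1f} MB/s")
    finally:
        stop.set()
        for w in workers:
            w.join()
```

On fakeraid the busy cores should steal cycles from the RAID driver and drag the number down; on a card with its own IOP it should barely move. (Reads hitting the OS file cache will inflate the result, so use a file bigger than RAM.)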
But anyway... mostly... the RocketRaid 3120 does exactly what it is supposed to do: be INVISIBLE.
(It also works in FreeBSD and Linux, but I've not had time yet to load either.)
Would I rather have SAS or a new FLASH drive? YOU BET I WOULD. How about 2x flash drives in RAID-0 on this card... er... they would probably bottleneck on the 250MB/s PCIe x1 bus... darn it. I'd be in the same boat that made me buy this card... and I'd have to get the Areca 4-port card and pay a lot of money.