
Hardware RAID or software RAID? [RAID 0+1]

January 7, 2011 9:12:48 PM

I am currently learning that my software RAID 0 offers no safety if a drive or other hardware dies.
So I am looking at setting up a new array, and I am not sure what to go for here: software or hardware RAID?
My motherboard is the Asus P5N-E.

What are the pros and cons of hardware RAID vs. software RAID for this motherboard and its onboard controllers?

What would give me the best combination of redundancy and performance, hardware or software RAID?
I am thinking about using RAID 0+1, as it seems to give me the speed of RAID 0 with fault tolerance comparable to RAID 5. As I understand it, if one drive fails the array degrades from RAID 0+1 to RAID 0 until a new HDD is added and the mirror rebuilds.
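Since RAID 0+1 fault tolerance is the crux here, a tiny sketch may help. It assumes the standard layout (two 2-drive RAID 0 stripe sets, mirrored; the drive numbering is made up for illustration) and checks which failures the array survives:

```python
from itertools import combinations

def raid01_survives(failed, stripe_sets=((0, 1), (2, 3))):
    """RAID 0+1: two RAID 0 stripe sets, mirrored. The array survives
    as long as at least one stripe set contains no failed drive."""
    return any(all(d not in failed for d in s) for s in stripe_sets)

# Any single-drive failure is survivable; the array then runs degraded
# on the surviving RAID 0 set until the mirror is rebuilt:
assert all(raid01_survives({d}) for d in range(4))

# A second failure is only survivable if it hits the already-broken set:
fatal = [p for p in combinations(range(4), 2) if not raid01_survives(set(p))]
print(fatal)  # [(0, 2), (0, 3), (1, 2), (1, 3)]
```

So "at least one drive failure" is exactly right: one failure is always fine, a second one is a coin toss depending on which set it lands in.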

Does Linux support software RAID 0+1, or do I need to go with BSD?

Will either software or hardware RAID give me the option to move the array to a new PC with data intact if, say, the motherboard fails?

On Wikipedia I found this information; unfortunately, it is not very specific.
Quote:
Software RAID has advantages and disadvantages compared to hardware RAID. The software must run on a host server attached to storage, and the server's processor must dedicate processing time to run the RAID software. The additional processing capacity required for RAID 0 and RAID 1 is low, but parity-based arrays require more complex data processing during write or integrity-checking operations. As the rate of data processing increases with the number of disks in the array, so does the processing requirement. Furthermore, all the buses between the processor and the disk controller must carry the extra data required by RAID, which may cause congestion.
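For concreteness: the "more complex data processing" for parity RAID is essentially byte-wise XOR. A minimal illustration (not tied to any real RAID implementation) of computing a parity block and rebuilding a lost one:

```python
def xor_blocks(*blocks):
    """Byte-wise XOR of equal-length blocks; this is RAID parity math."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"  # data blocks on three drives
parity = xor_blocks(d0, d1, d2)         # parity block on a fourth drive

# If the drive holding d1 dies, XOR of the survivors rebuilds it.
# Doing this per byte on every write is the CPU cost the quote describes:
assert xor_blocks(d0, d2, parity) == d1
```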


Quote:
Hardware RAID controllers use different, proprietary disk layouts, so it is not usually possible to span controllers from different manufacturers. They do not require processor resources, the BIOS can boot from them, and tighter integration with the device driver may offer better error handling.

A hardware implementation of RAID requires at least a special-purpose RAID controller. On a desktop system this may be a PCI expansion card, PCI-e expansion card or built into the motherboard. Controllers supporting most types of drive may be used – IDE/ATA, SATA, SCSI, SSA, Fibre Channel, sometimes even a combination. The controller and disks may be in a stand-alone disk enclosure, rather than inside a computer. The enclosure may be directly attached to a computer, or connected via SAN. The controller hardware handles the management of the drives, and performs any parity calculations required by the chosen RAID level.

Most hardware implementations provide a read/write cache, which, depending on the I/O workload, will improve performance. In most systems the write cache is non-volatile (i.e. battery-protected), so pending writes are not lost on a power failure.

Hardware implementations provide guaranteed performance, add no overhead to the local CPU complex and can support many operating systems, as the controller simply presents a logical disk to the operating system.

Hardware implementations also typically support hot swapping, allowing failed drives to be replaced while the system is running.

However, inexpensive hardware RAID controllers can be slower than software RAID, because the dedicated CPU on the controller card is not as fast as the CPU in the computer/server. More expensive RAID controllers have faster CPUs, capable of higher throughput, and do not exhibit this slowness.


So what should I choose for my motherboard, hardware or software RAID? I am looking for maximum speed and redundancy. What will be fastest for me?


January 8, 2011 1:39:16 AM

Understand that redundancy means a way of keeping your PC going in case of a hardware failure; it is not in any way, shape, or form a backup. Period. If you get a virus, if you delete files, if you break the array, if you need to move the drives to a different controller, your data is all at risk. In fact, much data loss is caused directly by a person who mistakenly disables or undoes their array while messing around in the BIOS, doing updates, making hardware changes, etc. Most of the time, the user breaks the array and, before they understand what they have done, damages it past any chance of recovery.

Redundancy means a drive can fail and the PC does not need to be shut down, but it does nothing to guarantee data integrity. Nothing at all. Anyone who has much experience with RAID will tell you the same thing.

Software RAID has no performance benefit at all over a single drive; in fact, a single fast drive will deliver better performance than software RAID in almost all instances.

Lastly, why exactly do you need a RAID solution? Most people have no need whatsoever. Speed? Most single drives today are faster than RAID arrays of only a few years ago; RAID was popular when drives were slow and small compared to what you can buy today. And as a backup, well, RAID is not a backup by any stretch.
January 8, 2011 5:34:43 PM

I agree with the backup part. RAID is no substitute for backups.
But even so, there is a performance boost from RAID; I am sure of that. Everywhere they write about RAID, the performance boost is listed as one of the pros.
I will use a script or third-party software for backups to an external NAS once I get my network issues sorted.

So do you mean I should definitely drop software RAID?
If I use hardware RAID 0+1, do you actually think the array will be slower than a single drive? Mind you, I work with large video files, high-res print-quality photos, etc.
Also, even though I don't consider RAID a backup solution, bad things have a way of happening when you need them least. Say I am in the final stage of editing a large video or high-res photos and one drive fails; I would still be able to rebuild the array, or at least transfer the newest data onto the NAS or an external drive. Surely you cannot mean that those benefits are too small to bother with RAID 0+1?
January 9, 2011 4:48:53 AM

Or maybe you are saying RAID is slow without a dedicated RAID controller card?
I know the hardware is not top of the line, but one has to do the best with what one has got.
And I believe RAID will give me a performance boost. But do I really need a dedicated controller card for that? Will hardware RAID on the motherboard's onboard controller really be that slow?
January 9, 2011 1:50:00 PM

Blauer,

If you are looking for ultimate speed, then hardware RAID 0 is your answer. Jit is correct, though, in that RAID is in no way a backup; it is for redundancy in the event of a hard drive failure. Without knowing your exact setup it is hard to give advice, but if it were me, looking at your post about large video and photo files, I would opt for RAID 5 or even 0+1 with 4 drives. With 4 drives you would have two RAID 0 sets which are then mirrored to each other. That is far greater redundancy than just having two drives raided together.

Be aware that most of your video rendering is not limited by drive speed, as all the information must pass through the processor. You would see greater benefit from additional RAM and a higher-speed processor, as these are what video encoding needs. If you are looking for speedy boot times, then two or more SSDs in a RAID 0 will help with boot times and program launches.

As I do not do video editing, I currently have twin 128 GB SSDs in a RAID 0 for my boot volume, two 500 GB drives for gaming storage and program installations, and two 2 TB drives for music and video storage, all of which get backed up to a Drobo AND my server. This gives me data redundancy for all my important documents and music, and double redundancy since everything is backed up in two locations. I have also created, using Hiren's boot disk, an image of my boot drive, so in the event of a hardware failure I can restore the boot drive, with all updates and applications installed, in short order.
January 9, 2011 1:50:58 PM

One more thing. It is always best to use hardware RAID when possible, especially when encoding video or photos. You do not want your processor or other resources dedicated to supporting your RAID when they could be used to process your video.
January 9, 2011 2:15:32 PM

Thx beamj. I have a collection of 4 SSDs that I planned to run in RAID 0 via a Promistek controller card. I have not implemented that yet and probably never will. Thing is, I had not done my homework on SSDs when I bought that bundle, and it turns out the OCZ SSDs are worthless by today's standards. Those SSDs are first-generation "prototype" drives with tons of problems affecting them, and as far as I know even the manufacturer, OCZ, seems to think it is better to put distance between themselves and that SSD, since there are no firmware fixes that deal with the issues. According to people with a lot of SSD experience, one of the major problems with the OCZ Core Series V2 is that the drive will hang and freeze up for several seconds now and then.

So I believe I have wasted hard-earned cash on a rotten deal here. That means I must make the best of what I have, and those SSDs still sit in their boxes.
January 9, 2011 2:23:32 PM

Blauer,

Use the drives as one big RAID set in RAID 5. Even if one stalls out for a second or two, the entire array won't be down at that point. You get the redundancy plus a speed boost, and if one drive fails you can always swap it out while still working.

Try it out: make an image of your boot drive, and if worst comes to worst and you don't like it, then eBay the drives. Someone will buy them. Not everyone had problems with first-gen drives, and like all things technical, we only hear about the bad and rarely hear about the good. So while there may be hundreds of reports on this issue or that issue, it is really minuscule in comparison to the number of drives actually sold. Think about it: when you hear complaints from 100 people even though 10k or 100k items were sold, is that really significant? While many first-gen drives had performance issues, especially when they get full, not everyone had that problem or there would have been some form of recall.
January 9, 2011 5:14:29 PM

Ok, that might work. But why choose RAID 5? RAID 5 is slower than RAID 0 or even 0+1, right?

Edited:
Quote:
Number of disks: 4
Single disk size: 30 GB
RAID type: 5

Results
Capacity: 90 GB
Speed gain: 3x read speed, no write speed gain
Fault tolerance: 1-drive failure


Quote:
Number of disks: 4
Single disk size: 30 GB
RAID type: 0+1

Results
Capacity: 60 GB
Speed gain: 4x read and 2x write speed gain
Fault tolerance: at least 1-drive failure


Quote:
Number of disks: 4
Single disk size: 30 GB
RAID type: 0

Results
Capacity: 120 GB
Speed gain: 4x read and write speed gain
Fault tolerance: none
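The quoted calculator numbers follow from simple rules of thumb. A small sketch (idealized, best-case scaling only; real throughput varies with controller and workload) that reproduces them:

```python
def raid_summary(disks, size_gb, level):
    """Return (usable capacity in GB, read scaling, write scaling)
    for an array of `disks` drives of `size_gb` each.
    Nominal back-of-the-envelope figures only."""
    if level == "0":                      # pure striping
        return disks * size_gb, disks, disks
    if level == "1":                      # pure mirroring
        return size_gb, disks, 1
    if level == "5":                      # striping + distributed parity
        return (disks - 1) * size_gb, disks - 1, 1
    if level == "0+1":                    # mirrored stripe sets
        return disks // 2 * size_gb, disks, disks // 2
    raise ValueError(f"unknown RAID level: {level}")

assert raid_summary(4, 30, "5") == (90, 3, 1)     # matches the first quote
assert raid_summary(4, 30, "0+1") == (60, 4, 2)   # matches the second quote
assert raid_summary(4, 30, "0") == (120, 4, 4)    # matches the third quote
```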





Another thing is that the drives are only 30 GB each, and as far as I know they handle garbage collection poorly. With complicated or nested RAID setups I will get less storage, and the drives will likely slow down faster as cells are written to, right?

While the boxes of the drives clearly stated that the firmware is upgradeable, no such fix seems to exist, and other, more serious vendors have done several firmware revisions for their SSD lines. One cannot help but wonder why OCZ abandoned their Core V2 drives.

I must think about this before I execute the plan. I do run some virtual OSes, and that has worked well for me in the past, so I need to figure out how to set things up, as capacity might be an issue here as well. If I store the VMware files on an HDD array, do you think that will slow things down if VMware itself is installed on a small, fast host OS? I mean, will all the important files VMware needs be loaded into the SSD or RAM even if I keep the capacity-consuming VMware files on a traditional HDD?

Not sure if this makes any sense, so please let me know if you need me to elaborate and I will try to explain better.
January 9, 2011 5:50:13 PM

RAID 5 is slower than RAID 0 but faster than RAID 1, and faster than 0+1 as well, because it only writes parity instead of a full mirrored copy of the data.

Raid 0 = combines the space of the HDDs
Raid 1 = the space of one drive per mirrored pair (in the case of your four 30s in 0+1, 60 GB total: two 60 GB RAID 0 sets mirrored)
Raid 5 = loses the space of one HDD, so you would get 90 GB

Regardless of RAID or no RAID, SSDs of that era get progressively slower as you write more files to them. Newer SSDs handle garbage collection more effectively and suffer smaller performance losses in comparison.

I would not run virtual OSes on the setup you describe, due to space limitations. Even if you run all 4 drives in a RAID 0 you will end up with only 120 GB. Personal experience tells me that by the time you install Windows, Windows updates, a few programs that must be installed on the boot volume, antivirus software, and a few pictures, you are at around 70 GB of used space. A 120 GB drive (closer to 112 GB after partitioning) leaves you with about 50 GB to run your virtual OSes. Even if you have lots of memory or move your swap file (you should do this regardless with the older SSDs), it will soon leave your machine very slow.
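That space budget is plain arithmetic; spelled out with the post's own (estimated) figures:

```python
raw_gb = 4 * 30          # four 30 GB SSDs striped in RAID 0
usable_gb = 112          # roughly what remains after partitioning/formatting
os_and_apps_gb = 70      # Windows + updates + programs + antivirus (estimate)
vm_space_gb = usable_gb - os_and_apps_gb
print(vm_space_gb)       # 42 -- the "about 50 GB" ballpark above
```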
January 9, 2011 5:58:25 PM

Btw, it seems there are people who have eliminated their hangs and freezes by using a dedicated controller card for those Core V2 SSDs. Might be worth trying after all.
So I guess the reason for using RAID 5 over RAID 0 is that RAID 5 might do even more for performance if a single SSD freezes?

But what about write performance? In real-world computing, is that maybe not a big issue? I mean, most of the time files are read, right? But wouldn't it be sort of a waste not to increase write performance as well when one could?
January 9, 2011 7:15:13 PM

beamj said:

Regardless of RAID or no RAID, SSDs of that era get progressively slower as you write more files to them. Newer SSDs handle garbage collection more effectively and suffer smaller performance losses in comparison.

I would not run virtual OSes on the setup you describe, due to space limitations. Even if you run all 4 drives in a RAID 0 you will end up with only 120 GB. Personal experience tells me that by the time you install Windows, Windows updates, a few programs that must be installed on the boot volume, antivirus software, and a few pictures, you are at around 70 GB of used space. A 120 GB drive (closer to 112 GB after partitioning) leaves you with about 50 GB to run your virtual OSes. Even if you have lots of memory or move your swap file (you should do this regardless with the older SSDs), it will soon leave your machine very slow.


There is a tool people use to wipe SSDs once they have become slow, so I think the performance degradation might be fixable manually.

Hmm, if I drop Win 7 and run Win XP instead, I guess I can trim an install down to about 3-6 GB, certainly less than 10 GB.
Also, I can run a really small and lean host OS, then run an everyday GUI Linux guest OS for browsing etc. All in all, that should not end up at more than 20 GB of disk use.

Maybe, if this does not seem doable, my best bet is to sell the OCZs on eBay and save up for newer SSDs, even if that might take a while. As long as I remember my prayers each night, the HDDs might be just fine the way they are for some months until funding is in place for better hardware.
January 10, 2011 8:55:52 AM

Seems that a dedicated controller will take care of the stuttering on the Core V2 SSDs.
Think it might be possible to run them in RAID 0 to get the most out of the capacity?

I guess the main problem with the Core V2, and even more so the V1, was the onboard controller on the SSD itself.

I do see many sites recommend controller cards with 128 MB of RAM, and mine only has 64 MB. I guess there is only one way to figure this out, and that is to test it all. I will clear my schedule for next weekend and try to find time to test both RAID 5 and RAID 0. The proof will be in the pudding, and the tests will show what to expect. I can also do real-life tests, like opening lots of tabs in Firefox while unpacking large RAR files or something like that.
May 22, 2013 6:17:12 AM

jitpublisher said:

...
Software RAID has no performance benefit at all over a single drive; in fact, a single fast drive will deliver better performance than software RAID in almost all instances.
...


I have to disagree with you here. In my experience, at least with newer CPUs, software RAID outperforms a single drive, on sheer reads at least.

I have 4 identical drives (Seagate 1 TB, 7200 RPM: ST1000528AS).
I am running a RAID 5 with 3 of them under Linux (MD-RAID), and here is what I get:

Single drive read: ~120 MB/s
RAID-5 read: ~280 MB/s

The CPU is a four-core i5 750.
I also have 14 GB of RAM.

I guess one of the performance advantages of software RAID is that you can easily upgrade it: it scales with CPU speed and the number of cores, and adding RAM is like adding cache to a RAID controller card.

Note that these results were under Linux; I have no experience with Windows software RAID.
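As a sanity check on those numbers: for large sequential reads, an n-disk RAID 5 streams from all members, but one block per stripe is parity, so idealized scaling sits between (n-1)x and nx a single drive. A rough back-of-the-envelope sketch using the figures above (estimates only, not a benchmark):

```python
single_mb_s = 120                 # measured single-drive sequential read
disks = 3                         # members in the MD-RAID array
low = (disks - 1) * single_mb_s   # if parity blocks are skipped entirely
high = disks * single_mb_s        # if parity is read along with the data
print(f"idealized RAID-5 read: {low}-{high} MB/s")  # 240-360 MB/s
measured = 280
assert low <= measured <= high    # the ~280 MB/s result fits the band
```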
May 22, 2013 3:56:11 PM

alecz20 said:
...
Single drive read: ~120 MB/s
RAID-5 read: ~280 MB/s
...
Note that these results were under Linux; I have no experience with Windows software RAID.


Similar results for me on Windows, but using Intel RAID 0 (which is basically fakeraid, so still dependent on the CPU): ~500 MB/s with one drive, and ~1000 MB/s with both drives in RAID.
May 23, 2013 8:43:35 AM

_deXter_ said:

Similar results for me on Windows, but using Intel RAID 0 (which is basically fakeraid, so still dependent on the CPU): ~500 MB/s with one drive, and ~1000 MB/s with both drives in RAID.


500 MB/s with one drive? What drive is this? Can you provide the make and model number?
With RAID 0, double the read performance is expected...
May 24, 2013 1:06:52 AM

alecz20 said:
...
500 MB/s with one drive? What drive is this? Can you provide the make and model number?
With RAID 0, double the read performance is expected...


Samsung 830. The speed is, of course, sequential read. Benchmarked with CrystalDiskMark.