Weird RAID-5 write speeds

May 12, 2010 5:24:25 PM

I recently set up a RAID-5 array using ICH10R software RAID on my Gigabyte GA-H57M-USB3 motherboard with five Western Digital 2TB EADS drives.

I know hardware RAID is preferred, but I'm limited by my gigabit network here anyway, as this will be used as a network file server.

After setting everything up, I'm getting 150MB/s read speeds, which is fine since I'm limited to around 120MB/s through gigabit anyway (1Gbit/s is 125MB/s raw, minus protocol overhead).

The interesting thing, however, is the write speeds. From what I understand I should be expecting write speeds around 50-80MB/s with this setup. When I have write caching disabled I get write speeds of 5MB/s (!!!), which is obviously horrible. With write caching enabled, I get write speeds of around 190MB/s, which seems too high to be correct.

So what's going on here? With write caching off the speeds seem way too low, and with it on they seem higher than I should even be able to come close to getting.

Also, if I understand this right, write caching is going to put my data at risk in a power failure. Is that all the data on the array, or just the data that has been cached but not yet written?
May 12, 2010 8:15:08 PM

You should not use HDTune to test RAID performance; HDTune was not designed for that and does not represent actual performance.

Please re-test and post screenshots of the following benchmarks:
- ATTO (set size to at least 256MB)
- AS SSD (set size to at least 1000MB)
- CrystalDiskMark (set size to at least 1000MB)
- HDTune Pro "file" benchmark (optional)
- HDTune random access benchmark

Those benchmarks are suitable for RAID arrays because, except for the HDTune random access benchmark, they all test on the NTFS filesystem with the Windows optimizations in effect. Without those optimizations, RAID performance can measure much lower than it actually is.
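
It is no substitute for the tools above, but if you want a quick sanity check from a script, here is a rough ATTO-style sketch in Python (the path and sizes are my own assumptions): it times sequential writes through the filesystem at growing block sizes.

# Rough stand-in for an ATTO-style run: time sequential writes through
# the filesystem at growing block sizes. Small blocks expose per-request
# overhead; large blocks show the streaming rate. PATH and TOTAL are
# assumptions -- point PATH at a file on the RAID volume.
import os, time

PATH = "bench.tmp"
TOTAL = 256 * 1024 * 1024              # 256MB per pass, as suggested above

for block in (4 * 1024, 64 * 1024, 1024 * 1024):
    buf = os.urandom(block)
    with open(PATH, "wb", buffering=0) as f:
        start = time.time()
        for _ in range(TOTAL // block):
            f.write(buf)
        os.fsync(f.fileno())           # flush the OS cache to the array
        elapsed = time.time() - start
    print(f"{block // 1024:5d}KB blocks: {TOTAL / 1e6 / elapsed:7.1f} MB/s")

os.remove(PATH)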

A few words:
- Intel RAID-5 is not a safe storage method; expect broken arrays, and do not use it for anything really important
- use FreeNAS for good, safe RAID-5 performance, but you'll still be capped by gigabit Ethernet
- write-back caching in RAM puts your entire filesystem at risk. A RAM error may cause NTFS metadata corruption, lost buffers may be synced out of order, and other dirty things can happen that destroy data that wasn't touched for months. So it is not just the data you are writing at that moment that is at risk; corrupt the NTFS file table and a disk check may remove a lot of files.

Quote:
When I have write caching disabled I get write speeds of 5MB/s (!!!), which is obviously horrible.

Only logical, because you force the disks to seek that way. 5MB/s is not so bad for an array of HDDs that have to seek heavily. In fact, you can construct workloads where HDDs don't reach 1MB/s or even 0.1MB/s.

HDDs are only fast if you can make them read or write in a sequential pattern: 1 2 3 4 5 6, not 1 6 4 2 3 5. The latter can be 100 times slower for the HDD.
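
If you want to see the effect on your own array, here is a minimal sketch (my own illustration, not one of the benchmarks above; the file path is an assumption) that writes the same 4KB blocks first in order and then shuffled:

# Write the same set of 4KB blocks sequentially and then in shuffled
# order, and compare throughput. Expect the shuffled pass to be
# drastically slower, since every write forces a seek.
import os, random, time

PATH = "seek-test.bin"                 # hypothetical file on the RAID volume
BLOCK = 4096
COUNT = 25_000                         # ~100MB total
buf = os.urandom(BLOCK)

def write_pass(offsets):
    with open(PATH, "wb", buffering=0) as f:
        start = time.time()
        for off in offsets:
            f.seek(off)
            f.write(buf)
        os.fsync(f.fileno())           # make sure it actually hit the disks
        return BLOCK * COUNT / 1e6 / (time.time() - start)

sequential = [i * BLOCK for i in range(COUNT)]
shuffled = random.sample(sequential, len(sequential))
print(f"sequential: {write_pass(sequential):6.1f} MB/s")
print(f"shuffled:   {write_pass(shuffled):6.1f} MB/s")
os.remove(PATH)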
May 14, 2010 12:48:55 AM

Here are the ATTO results.

[ATTO screenshot: write-back caching off]

[ATTO screenshot: write-back caching on]

It's unfortunate that ICH10R arrays are so unreliable. I did a lot of research before buying and didn't see much about that, but now that I've posted this question in a few places I'm getting lots of responses saying the same thing.

I guess at this point I have to decide whether to drop another $450 on an Areca card so I can keep running Windows (I have several Windows-based applications that I wanted to run on the server), or to give RAID-Z a shot with FreeNAS/ZFS, even though I have no experience outside of Windows and will have to find alternatives to those applications.
May 14, 2010 6:43:40 AM

freebagel said:
From what I understand I should be expecting write speeds around 50-80MB/s with this setup.
That may be high, depending on the access pattern. You need to understand that RAID-5 write speeds are pretty poor compared to other RAID organizations because, for a small write, the controller needs to READ the old parity and data, update the parity, and then WRITE the new parity and data. The READs cannot be overlapped with the WRITEs; they have to happen one after the other, so you pay a rotational delay between them.

Buffering in the RAID controller can improve things, but the basic fact remains that RAID-5 has lousy write performance.
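
To make that read-modify-write sequence concrete, here is a toy sketch of the parity arithmetic (my own illustration; single bytes stand in for real stripe blocks):

# Toy model of the RAID-5 small-write penalty: updating one data block
# requires two READs (old data, old parity) and two WRITEs (new data,
# new parity). Parity is just the XOR of the data blocks.

def update_parity(old_parity, old_data, new_data):
    # XOR the old data out of the parity and the new data in.
    return bytes(p ^ od ^ nd for p, od, nd in zip(old_parity, old_data, new_data))

# Three toy data "blocks" and their parity.
d0, d1, d2 = b"\x0f", b"\xf0", b"\x33"
parity = bytes(a ^ b ^ c for a, b, c in zip(d0, d1, d2))

# Overwrite d1: READ d1 and the parity, recompute, WRITE both back.
new_d1 = b"\xaa"
parity = update_parity(parity, d1, new_d1)
d1 = new_d1

# The updated parity still reconstructs a lost block, e.g. d0:
assert bytes(p ^ b ^ c for p, b, c in zip(parity, d1, d2)) == d0

The XOR itself is cheap; the cost is the two reads that must complete before the two writes can start. That is also why write-back caching helps so much: if the cache can collect full stripes, the controller can compute parity from the buffered data alone and skip the reads entirely.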

You also have to be very careful about drive reliability with very large RAID-5 arrays. If you use standard consumer hard drives with unrecoverable read error rates of around 1 per 10^14 bits read, you are likely to lose data in an array with a capacity of several TB. The redundancy of RAID-5 won't help, because recovering from a failed drive depends on successfully reading every sector of every non-failed drive. The WD20EADS (Green) drives are better in this regard because their unrecoverable read error rate is only 1 per 10^15 bits read.
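
A quick back-of-the-envelope check of that math (a sketch under my own assumptions: a five-drive array, so a rebuild must read the four surviving 2TB drives end to end):

# Chance of finishing a RAID-5 rebuild without hitting an unrecoverable
# read error (URE), for the two error rates mentioned above. Assumes the
# four surviving 2TB drives must each be read in full.
bits = 4 * 2e12 * 8                  # four 2TB drives, in bits

for rate in (1e-14, 1e-15):
    p_clean = (1.0 - rate) ** bits   # probability every bit reads back fine
    print(f"URE 1 per {1/rate:.0e} bits: "
          f"{1 - p_clean:.0%} chance of an error during rebuild")

At 1 per 10^14 that works out to roughly a coin flip (~47%); at 1 per 10^15 it drops to around 6%.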