Quick question folks-
I have an Intel i7-950 with 6 gigs of ram running on an Asus P6X58D-E board.
I have a 300 gig WD Raptor for my OS drive on an Intel SATA port. I have 3 Samsung Spinpoint 1 terabyte drives on the Intel SATA ports as well. I created a RAID 5 out of the Samsung drives in the BIOS, installed Windows 7 64-bit on the WD (NON-RAID) Drive.
Disk Management sees the RAID as around 1.8 Terabytes which is correct. I initialized and started a format with GPT under disk management.
It's been over 6 hours and it's only at 3% formatting...
Does that seem right?
You want write-back mode, which you enable via the "Write caching" option in the Intel RAID driver. Do note that once you activate it, crashes, blue screens, and power outages can corrupt your filesystem.
I guess this will be done formatting in like 6 more days.
I can't believe it takes 4hrs for 1% though. That just doesn't seem right.
(Update: I turned write caching back on (it was on originally; I turned it off before starting the format) and it is moving MUCH faster, up to 18% now. It should be done in a few hours.)
Does write caching really make that insane of a difference for all operations? It's now at 19% as I typed that sentence, lol.
If I don't enable Write Caching, what kind of performance hit am I likely to take during normal operation? Will it be worse than just a single drive by a lot?
Without write caching -> Write-through performance
With write caching -> Write-back performance
The difference between the two is especially huge in parity RAID (RAID5 and RAID6) when doing sequential writes. Without write-back, RAID5/RAID6 writes very slowly. (RAID3 is the exception here; it isn't common anymore, but it can still be useful in some circumstances.)
Assume a 4-disk RAID5 with a 128 KiB stripe size. Writes to this array can be extremely fast if we write exactly 384 KiB at a time; anything else is terribly slow. This is because RAID5 can only write efficiently in a single 'magic' quantity of data that depends on the number of data drives (total drives - parity drives = data drives) and the stripe size: 3 data drives * 128 KiB = 384 KiB.
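To make that arithmetic concrete, here's a minimal sketch (the function name is mine for illustration, not part of any RAID driver API):

```python
def full_stripe_size(total_drives, parity_drives, stripe_kib):
    """Full-stripe ('magic') write size in KiB: data drives * stripe size."""
    data_drives = total_drives - parity_drives
    return data_drives * stripe_kib

# 4-disk RAID5 (1 parity drive's worth of parity), 128 KiB stripe:
print(full_stripe_size(4, 1, 128))  # -> 384
# For comparison, a 6-disk RAID6 (2 parity) with the same stripe:
print(full_stripe_size(6, 2, 128))  # -> 512
```

Anything smaller than that full-stripe size forces the array into a slower partial-stripe path.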
The 'write caching' option, which activates write-back, saves up a number of I/O requests until it can form a write request of exactly that 'magic' size, technically known as the full stripe block or aggregated write block. Long story short: write caching means the driver saves up I/O so it can write more efficiently, with huge performance gains. Without it, writes are very slow.
This is no different from a real hardware controller: if you select write-through, it will write at maybe 5 MB/s, no faster. In write-back mode it does the aggregation described above; combining and splitting requests like that is what makes RAID5/6 perform well.
Intel onboard RAID is the only fakeRAID implementation that supports write-back mode. But it uses your system RAM to do this, and that RAM is likely not ECC. That means your RAM can corrupt your disks, and on a power failure you can end up with a corrupted filesystem. So given these constraints, if you store data on a RAID5 with write caching on and no backup, you run a considerable risk of data loss or corruption that you may not even know about.
Do not count on RAID5 to protect your data; in fact, you could argue that a RAID5 on the Windows platform is less safe than a single disk without any RAID protection. Huh? Yes, that's right: while RAID was meant to make storage more reliable, on Windows it can actually degrade reliability and protection for your data. The RAID layer adds another piece to your storage setup that can fail, and when it does, the repercussions can be far-reaching.
Thanks again for all the excellent information sub mesa. I do appreciate it.
I know RAID 5 isn't an excuse not to have a backup (I'm an IT manager by profession, actually), but I was hoping for some performance increase plus redundancy in case a disk failed.
I am not sure if I will keep this in RAID 5. I will forgo putting much data on it until I decide I suppose.
Thanks again; this gives me some things to consider. I'm used to my RAID arrays at work being generally trouble-free, but those are obviously on much better RAID controllers than onboard desktop-grade motherboards.