I've put together a 4×750GB RAID 0 array on my Core i7 system's ICH10R, but it seems to be underperforming. Each drive gets about 110MB/s peak transfer rate in HD Tach or HD Tune, yet in a 4-drive RAID 0 they only give about 220MB/s total. I tested several configurations and found that a 3-drive RAID 0 gives the exact same numbers, and a 2-drive RAID 0 starts out the same but then declines like a normal hard drive would. The 3- and 4-drive configs don't decline; they just stay at 220 the whole way through. Is anyone else seeing this behavior?
Now, I don't really need the storage; I wanted a 4-drive array for the performance. Currently I have an OCZ Vertex 120GB (which seems to hit the same 220MB/s ceiling as the RAID array when hooked to the ICH10R) as my main OS drive, but I'm running into performance problems with it. Its write speeds have dropped off considerably recently, and I think it has run out of completely empty cells and needs to be cleaned up. I'd rather just set up the RAID as the main drive (partitioned with the first 300GB as the OS drive and the rest for storage), but I can't, for two reasons:
1. The ICH10R doesn't support 64-bit LBA, so volumes larger than 2TB aren't bootable. This ticks me off somewhat, but I can stand to lose a little storage if I gain performance.
2. The odd speed restriction is annoying. Being limited to 220MB/s when I know the drives can go faster really gets to me.
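A rough sanity check on the numbers above: RAID 0 sequential throughput should scale close to linearly with drive count, so a hard ceiling at 220MB/s whether you stripe 3 or 4 drives points at a controller or link bottleneck rather than the disks themselves. The 110MB/s per-drive figure is the HD Tach/HD Tune measurement; the rest is just arithmetic.

```python
# Ideal vs. observed RAID 0 scaling for the drives described above.
# The only measured inputs are 110MB/s per drive and the 220MB/s cap;
# everything else is simple multiplication.

per_drive_mbps = 110   # measured peak per drive (HD Tach / HD Tune)
observed_mbps = 220    # the cap seen on the ICH10R in every striped config

for drives in (2, 3, 4):
    ideal = per_drive_mbps * drives
    print(f"{drives} drives: ideal ~{ideal} MB/s, observed ~{observed_mbps} MB/s "
          f"({observed_mbps / ideal:.0%} of ideal)")
```

At 4 drives that's only 50% of the ideal 440MB/s, which is why the flat ceiling looks like a bottleneck and not normal RAID overhead.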
So, should I: A: Get a PCIe X4 controller and hook all the drives to that? If so, which controller (under $300) is the most stable and best supported?
B: Switch to the 3-drive array and do something to boost the performance of the ICH10R? If so, how would I boost it?
Well, I tried something different and it seems to have fixed my bandwidth problem. Instead of using the regular RAID setup on the ICH10R, I configured the drives as separate disks and set up a striped set in Windows 7. It performs a lot better now, although I can't measure it through HD Tach or HD Tune. Sandra measures it at 378MB/s.
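Since HD Tach/HD Tune won't benchmark the Windows striped volume, here's a minimal sequential-read timer you can point at a large file sitting on the array. It's a rough sketch, not a proper benchmark: Windows file caching will inflate the numbers unless the file is much larger than RAM, and the path in the usage comment is just a placeholder.

```python
import time

def measure_seq_read(path, block_size=1024 * 1024):
    """Read `path` start to finish, unbuffered, and return throughput in MB/s."""
    total = 0
    start = time.perf_counter()
    # buffering=0 skips Python's own buffering; the OS cache still applies.
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / elapsed / 1_000_000

# Usage (hypothetical path on the striped volume):
# print(f"{measure_seq_read(r'D:\bigfile.bin'):.0f} MB/s")
```

For a number comparable to Sandra's, use a test file of several times your RAM size so the cache can't hide the disks.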
I don't know what was causing the bandwidth restriction, but it seems to be gone now.
Even though your array is bigger than 2TiB, you should still be able to boot from it if you create a boot partition smaller than 2TiB, so the BIOS never needs to access anything beyond the 2TiB mark. The Windows RAID drivers need to support >2TiB volumes, though, and so does the operating system.
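For anyone wondering where that 2TiB figure comes from: the classic MBR partition table stores sector addresses as 32-bit values, and with the usual 512-byte sectors that tops out at exactly 2TiB. Anything the BIOS has to reach at boot time therefore has to sit below that line, which is why a small boot partition works.

```python
# The 2TiB boot limit, derived from the MBR's 32-bit LBA field and
# traditional 512-byte sectors.

sector_size = 512        # bytes per sector (traditional value)
max_sectors = 2 ** 32    # largest sector count a 32-bit field can hold

limit_bytes = sector_size * max_sectors
print(limit_bytes == 2 * 2 ** 40)        # exactly 2 TiB
print(f"{limit_bytes / 10**12:.2f} TB")  # ~2.20 TB in decimal drive-maker units
```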
Also, may I suggest creating a smaller partition on your SSD? If you leave a portion of, say, 10-15GB unused, it will help keep performance degradation over time to a minimum. For this to work, the drive has to start out clean: if it's been used before, do a Secure Erase with HDDErase to reset the SSD to factory condition.
After that, install Windows 7 to it, creating a partition smaller than the full capacity so the last 10-15GB stays unallocated. Never touch that space, and the SSD can use it internally as spare area.
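Quick arithmetic for sizing that partition on the 120GB Vertex, since drive vendors quote decimal gigabytes while partitioning tools usually want binary MiB. The 15GB reserve is just the upper end of the suggestion above.

```python
# Partition size for the 120GB Vertex, leaving 15GB unallocated as spare area.
# Vendors advertise decimal GB (10^9 bytes); installers count in MiB (2^20).

capacity_gb = 120   # advertised (decimal) capacity
reserve_gb = 15     # space to leave unallocated for the SSD

partition_bytes = (capacity_gb - reserve_gb) * 10 ** 9
partition_mib = partition_bytes // 2 ** 20  # the figure to type into the installer

print(f"Create the partition as ~{partition_mib} MiB "
      f"({partition_bytes / 10**9:.0f} GB), leaving {reserve_gb} GB unused.")
```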