I've put together a 4x750 GB RAID 0 array on my Core i7 system's ICH10R. However, the array seems to be underperforming. Each drive gets about 110 MB/s peak transfer rate in HD Tach or HD Tune, yet in the 4-drive RAID 0 they only deliver about 220 MB/s combined. I tested several configurations and found that a 3-drive RAID 0 gives the exact same numbers, and a 2-drive RAID 0 starts out the same but then tapers off like a normal hard drive. The 3- and 4-drive configs don't decline; they stay flat at 220 MB/s through the whole test. Is anyone else seeing this behavior?
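For anyone who wants to reproduce the numbers above outside of HD Tach/HD Tune, here's a rough dd-based sequential-read check for Linux. It's only a sketch: the default target path is a hypothetical demo file (pass your actual array device, e.g. whatever /dev node your RAID shows up as, as the first argument), and cached reads will inflate the result unless you read a raw device with `iflag=direct` where supported.

```shell
#!/bin/sh
# Rough sequential-read throughput check -- a sketch, not a replacement for
# HD Tach/HD Tune. Pass the device or file to measure as $1; the default
# below is just a demo file so the script runs as-is (hypothetical path).
TARGET="${1:-/tmp/seqread_sample.bin}"

# If the target doesn't exist, create a small 64 MiB demo file so the
# command below has something to read.
[ -e "$TARGET" ] || dd if=/dev/zero of="$TARGET" bs=1M count=64 2>/dev/null

# Read up to 4 GiB in 1 MiB blocks; dd's final status line reports the
# aggregate throughput. For a real block device, add iflag=direct (where
# supported) or drop the page cache first to avoid measuring cached reads.
dd if="$TARGET" of=/dev/null bs=1M count=4096 2>&1 | tail -n 1
```

If the whole array tops out at the same figure a single drive pair reaches, that points at a controller/DMI bottleneck rather than the drives themselves.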
Now, I don't really need the storage; I wanted it set up as a 4-drive array for the performance. Currently, I have an OCZ Vertex 120 GB drive (which seems to hit the same 220 MB/s ceiling as the RAID array when hooked to the ICH10R) as my main OS drive, but I'm running into some performance problems. Its write speeds have dropped off considerably recently, and I think it's because the drive has run out of totally empty cells and needs to be cleaned up. I'd rather just set up the RAID as the main drive (partitioned with the first 300 GB as the OS drive and the rest for storage), but I can't, for two reasons:
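On the Vertex write-speed drop-off: that symptom matches a drive with no pre-erased cells left, which TRIM (or, on Vertex-era drives, OCZ's cleanup/firmware tools on Windows) is meant to fix. As a hedged sketch, here's how you could at least check on Linux whether the drive advertises TRIM at all; the device path is an assumption, and hdparm may not be installed on every system.

```shell
#!/bin/sh
# Hypothetical check: does the SSD report TRIM support? (Linux, hdparm
# assumed available; /dev/sda is an assumed device path -- substitute
# whatever node your Vertex appears as.)
DEV="${1:-/dev/sda}"
if command -v hdparm >/dev/null 2>&1; then
    # hdparm -I dumps the drive's identify data; grep for the TRIM line.
    hdparm -I "$DEV" 2>/dev/null | grep -i "TRIM supported" \
        || echo "no TRIM reported for $DEV"
else
    echo "hdparm not installed"
fi
```

Note that even if the drive supports TRIM, the ICH10R's RAID driver of that era may not pass TRIM through, which is another argument against putting the SSD behind the RAID controller.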
1. The ICH10R doesn't support 64-bit LBA, so volumes larger than 2 TB aren't bootable. This ticks me off somewhat, but I can stand to lose a little storage if I gain performance.
2. This odd speed cap is annoying. Being limited to 220 MB/s when I know the drives can go faster really gets to me.
Should I:
A: Get a PCIe x4 controller and hook all the drives to that? If so, which controller (under $300) is the most stable and best supported?
B: Switch to the 3-drive array and do something to boost the performance of the ICH10R? If so, how would I boost it?