terminus

Distinguished
Apr 9, 2009
For GPUs it is common knowledge that they do not scale perfectly in SLI or CrossFireX. However, I was wondering if HDDs in RAID0 can scale perfectly. For example, is a RAID0 array (2x500GB HDDs, 7200rpm) exactly double the speed of a single 1TB 7200rpm HDD?
 

boonality

Distinguished
Mar 8, 2008
Not even close. RAID0 doesn't really increase your random I/O at all; in fact, it may even be slower depending on your controller. Large-file (100MB+) reads and writes will be faster, but certainly not twice as fast.
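
If you want to sanity-check the sequential part yourself, a rough test looks like this (a sketch, assuming Linux software RAID and that the array shows up as /dev/md0; the device names are just examples):

dd if=/dev/md0 of=/dev/null bs=1M count=1024 iflag=direct    # sequential read from the array
dd if=/dev/sda of=/dev/null bs=1M count=1024 iflag=direct    # same read from a single member disk

Random I/O is the part dd can't show you, and that is exactly where RAID0 disappoints on a desktop.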
 

terminus

Distinguished
Apr 9, 2009
Ahhh... I see. I'm using an Asus P6T X58 motherboard.

So large files such as games and other applications will load faster, correct?
 
From what I understand, onboard RAID controllers don't scale well beyond 2 drives. To scale with 3 or more you need a GOOD dedicated RAID controller. To be fair, I've never used onboard RAID beyond 2 drives and have never used a dedicated controller.
 

Large single files will load faster. Lots of small files, though, will not.
 

sub mesa

Distinguished

That is a myth. RAID0 increases both sequential I/O and random I/O to the same degree, provided you make sure:

- you fix the stripe misalignment that virtually all RAIDs are set up with (blame Windows; or don't use Windows)
- you use proper drivers (not proprietary onboard Windows drivers, but proper open source drivers like those in BSD)
- you use fast SATA ports on the chipset and not some PCI controller. Once PCI is used for storage you're thrown back to the stone age no matter what your benchmarks say; the shared access latencies of PCI ruin your RAID's performance.

So any *GOOD* RAID0 will increase both MB/s and IOps: if one drive does 100MB/s and 500 IOps, two drives in RAID0 should be able to do 200MB/s and 1000 IOps. If you can't realise that, you're using bad software or have set up your array improperly.
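
On the alignment point, the arithmetic is easy to check yourself (a hypothetical example, assuming a 64KiB stripe size and 512-byte sectors; your stripe size may differ):

fdisk -lu /dev/sda    # print partition start sectors in 512-byte units
# a partition starting at the old default of sector 63 is misaligned:
#   63 * 512 = 32256 bytes, not a multiple of 65536 (64KiB)
# a partition starting at sector 128 (or 2048) is aligned:
#   128 * 512 = 65536 bytes, exactly one full stripe

A misaligned partition means many I/Os straddle a stripe boundary and hit two disks instead of one, which is where the "RAID0 doesn't help random I/O" experience often comes from.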
 

It only increases IOPS at a queue depth greater than 1. While that is essentially always the case on a server or similar large setup, most desktops operate at a low queue depth almost 100% of the time.
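
You can measure that effect directly with a tool like fio (a sketch; the device name is an example, and it assumes fio with the libaio engine is installed):

fio --name=qd1 --filename=/dev/md0 --direct=1 --rw=randread --bs=4k --ioengine=libaio --iodepth=1 --runtime=30 --time_based
fio --name=qd32 --filename=/dev/md0 --direct=1 --rw=randread --bs=4k --ioengine=libaio --iodepth=32 --runtime=30 --time_based

At iodepth=1 a two-disk RAID0 reports roughly single-disk IOPS, because each request must finish before the next is issued; at iodepth=32 both disks stay busy and IOPS can scale.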
 

sub mesa

Distinguished
True, but a queue depth of 1 means serial operation by definition, I would argue. It's also true that a lot of software uses blocking I/O, and parallel I/O is not always possible, generally due to a lack of advanced programming skill or a lack of interest in the area. Games could store information in such a way that they can read sequentially instead of in a random-like pattern, and they could use a design that increases the number of queued I/Os so RAIDs can take advantage of this.

But of course, as a customer considering RAID you look at what performance you can actually get, not at what is theoretically possible. If you use Windows, all you can do is fix the stripe misalignment and hope applications allow the RAID to work in parallel. If that is not possible, then an SSD might be a better option: because of its low latency, any serial operation is going to be much faster than on any RAID.

One strange thing is that in my benchmarks done on BSD, I did get a higher IOps rating when using RAID0 and testing with a queue depth of 1. I have no good explanation for that. Of course the values increased as the queue depth increased, but with 4 disks in RAID0 I got about 70% additional performance at a queue depth of just 1. I tested this with software RAID.

I was thinking: perhaps a virtual I/O (issued by an application) can be split into multiple physical I/Os going to the storage volume, which is something the Windows APIs do and also the ATA driver. I know FreeBSD does this too; if you write 1 megabyte to a raw device, the ATA driver will write chunks of 64KiB. I can also see the queued I/Os rise when benchmarking this way (using dd if=/dev/zero of=/dev/sdX bs=1m). So it seems a queue depth higher than 1 can be caused by the filesystem and/or operating system, not just the application.
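
If anyone wants to reproduce that on FreeBSD, something like this should show it (a sketch; /dev/ad0 is just an example device, and this WRITES to the raw disk, so only use a disk you can wipe):

dd if=/dev/zero of=/dev/ad0 bs=1m count=4096    # write 1MiB blocks to the raw device
gstat -I 500ms                                  # in another terminal: watch the L(q) column

If the driver really splits each 1MiB write into 64KiB chunks, the queue length reported by gstat climbs above 1 even though dd itself issues one blocking write at a time.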