What stripe size to use for RAID5?

Anonymous
April 16, 2004 6:25:07 PM

Archived from groups: comp.sys.ibm.pc.hardware.storage

I have a Promise SX6000 PCI RAID5 card w/ 128MB cache.
I have 6 drives hooked up in a 5x75GB RAID5 with 1x75GB
set aside as a hot-spare drive. Using the default
stripe size of 64KB and NTFS formatted as 4KB cluster
size.

Performance is varied... sometimes the array seems to
bog out at 2-3MB/s, other times it's able to read/write
13-14MB/s. Under load (copying from one partition to
another on the same array) it usually ends up around 5-
6MB/s.

What should the relationship be between stripe size and
cluster size? 1:1? 5:1?


Anonymous
April 17, 2004 5:35:10 AM

Toshi1873 wrote:
> I have a Promise SX6000 PCI RAID5 card w/ 128Mb cache.
> I have 6 drives hooked up in a 5x75GB RAID5 with 1x75GB
> set aside as a hot-spare drive. Using the default
> stripe size of 64KB and NTFS formatted as 4KB cluster
> size.
>
> Performance is varied... sometimes the array seems to
> bog out at 2-3MB/s, other times it's able to read/write
> 13-14MB/s. Under load (copying from one partition to
> another on the same array) it usually ends up around 5-
> 6MB/s.
>
> What should the relationship be between stripe size and
> cluster size?


Depends on the size of the files you'll be working with and the frequency of
I/O operations. The following articles may help:
http://www.pcguide.com/ref/hdd/perf/raid/concepts/perfS...
http://www.adriansrojakpot.com/Speed_Demonz/IDE_RAID/RA...
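For a concrete sense of why the stripe/cluster relationship matters for writes, here is a minimal sketch (assuming a textbook RAID5 layout; the SX6000's actual firmware behavior may differ). With N drives, a full stripe holds (N-1) stripe units of data plus one of parity, so a write must cover stripe_size x (N-1) bytes to avoid a read-modify-write of the parity:

```python
def full_stripe_bytes(stripe_kb: int, num_drives: int) -> int:
    """RAID5 stores one stripe unit of parity per stripe, so a full
    stripe holds (num_drives - 1) data units."""
    return stripe_kb * 1024 * (num_drives - 1)

def write_path(io_bytes: int, stripe_kb: int, num_drives: int) -> str:
    """Classify a write: full-stripe writes can compute parity from the
    new data alone; smaller writes need read-modify-write."""
    full = full_stripe_bytes(stripe_kb, num_drives)
    if io_bytes >= full and io_bytes % full == 0:
        return "full-stripe write"
    return "read-modify-write"

# The array in this thread: 5 active drives, 64KB stripe units.
print(full_stripe_bytes(64, 5) // 1024)    # 256 (KB per full stripe)
print(write_path(256 * 1024, 64, 5))       # full-stripe write
print(write_path(32 * 1024, 64, 5))        # read-modify-write
```

On this arithmetic, the poster's 32KB development files always land on the read-modify-write path, which is consistent with the slowdown described below.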
Anonymous
April 17, 2004 6:07:01 AM

In article <iR%fc.147062$oR5.117584@pd7tw3no>,
sheenan@wahs.ac says...
> Depends on the size of the files you'll be working with and the frequency of
> I/O operations. The following articles may help:
> http://www.pcguide.com/ref/hdd/perf/raid/concepts/perfS...
> http://www.adriansrojakpot.com/Speed_Demonz/IDE_RAID/RA...
>

Nice links... I did not turn those up in my searches.

The fun is that half the drive is filled with files that
average around 32KB (lots and lots and lots of
development files). This is usually where performance
bogs down, given the 64KB stripe size and the 4KB
cluster size.

The other half is MP3 or A/V files of a few GB each.
These are easily handled by the 64KB stripe size and the
4KB cluster size, although different settings might be
better.

Empirical testing is, of course, a real bear on a system
that I don't want to tear down, where I can't change the
stripe size on the fly, and worse... it takes 20-30
hours to initialize the drive array if I rebuild it with
a different stripe size.

-----

Doing some down-n-dirty benchmarking today on the
Promise SuperTrak SX6000... the test file-set size is
8GB; each read is 4KB, with 256KB being written in a row
before deciding whether to jump to another section of
the file or even another file altogether (if testing
random seeking). It's just a simple VB app that I made
up to see what real-world numbers look like (it matches
very closely what a 30-second sampling PerfMon graph
reports as the data rates). Numbers are the average
transfer rate over the previous 300 seconds, taken at
least 600 seconds after the start of the test run to
allow the O/S and array to "settle down" after the
initial creation of the test files.

(In other words... these numbers are not comparable to
anything else, but are at least not pie-in-the-sky
numbers.)
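The access pattern described above can be sketched roughly as follows (a hypothetical Python rework of the VB app, not the original code; it exercises the "write 4KB blocks in 256KB runs, then seek somewhere random" pattern against a single small demo file rather than an 8GB file set):

```python
import os
import random
import tempfile
import time

def bench_random_writes(path: str, file_size: int, chunk: int = 4096,
                        run_len: int = 256 * 1024,
                        duration: float = 1.0) -> float:
    """Write `chunk`-byte blocks back-to-back until `run_len` bytes have
    gone out, then seek to a random offset and repeat. Returns MB/s."""
    buf = os.urandom(chunk)
    written = 0
    start = time.monotonic()
    with open(path, "r+b") as f:
        while time.monotonic() - start < duration:
            f.seek(random.randrange(0, file_size - run_len + 1))
            for _ in range(run_len // chunk):
                f.write(buf)
                written += chunk
            f.flush()
            os.fsync(f.fileno())  # force the run out to the device
    return written / (time.monotonic() - start) / 1e6

# Tiny 4MB demo file; the original test used an 8GB file set.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.truncate(4 * 1024 * 1024)
    demo = tmp.name
print(f"{bench_random_writes(demo, 4 * 1024 * 1024, duration=0.5):.1f} MB/s")
os.unlink(demo)
```

The numbers from such a sketch depend entirely on the machine it runs on; like the poster's figures, they are only comparable against themselves.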

Since I can't rebuild the array at the moment (so the
stripe size is fixed at 64KB), I cleared off the last
partition (100GB of the 275GB), and formatted it using
various cluster sizes. I then did sequential
reads/writes and random reads/writes.

Cluster  SeqRead  SeqWrite  RndRead  RndWrite   (MB/s)
 4KB     18.635   15.606     5.321    4.743
16KB     18.903   15.864     7.566    5.863
64KB     21.053   18.018     7.512    7.089

Performance in this case was definitely better at the
larger cluster sizes, dramatically so when it comes to
working with random writes to my array. Since this
particular volume is the one that holds my A/V files
(usually 1GB+ each), I'm going to leave the cluster size
at 64KB.
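Reading the random-write column of the table above as ratios makes the improvement explicit (simple arithmetic, nothing beyond the posted numbers):

```python
# Random-write throughput from the table above (MB/s).
rnd_write = {"4KB": 4.743, "16KB": 5.863, "64KB": 7.089}
base = rnd_write["4KB"]
for cluster, mbps in rnd_write.items():
    print(f"{cluster} clusters: {mbps / base:.2f}x the 4KB rate")
# 64KB clusters give roughly a 1.49x (~49%) random-write improvement.
```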

I may also try switching the 2nd volume to a 16KB
cluster size (that's the volume that is filled with lots
of tiny files). I'll have to re-run the tests with a
lower chunk size (where it writes 2KB at a time, and
then decides where to seek after it writes 32KB worth of
data).