What's reasonable RAID 5 performance?

Anonymous
September 6, 2004 8:41:29 PM

Archived from groups: comp.arch.storage,comp.sys.ibm.pc.hardware.storage,microsoft.public.storage

I have a nice new server with 15,000RPM SCSI drives in a hardware RAID 5
configuration. I may be wrong, but I don't think it's performing properly
at all. The question is, what's reasonable?

When using large (>500MB) files to swamp out cache effects, I'm getting
roughly 12MB/sec (it varies quite a bit) write performance and maybe 200
MB/sec read, when measured with IOzone.

Measuring with batch 'copy' commands, I'm getting 40MB/sec read-only (copy
to NUL:)  and about 14MB/sec copying back to the same array. One of the
challenges has been getting consistent results; not sure why.
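The copy-to-NUL style read test can be sketched as a short script, assuming a suitably large test file already exists on the array (the function name and block size here are illustrative, not from the thread):

```python
import time

def sequential_read_mb_s(path, block_size=1 << 20):
    """Time a full sequential read of `path` and return MB/s.
    Rough equivalent of `copy bigfile NUL:` -- data is read and
    discarded, so only read throughput is measured."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return (total / (1024 * 1024)) / elapsed
```

As in the thread, the file must be much larger than the controller and OS caches for the number to mean anything.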

These numbers strike me as being OTL (out to lunch) for such
high-performance drives and array controller.

The array consists of five Seagate Cheetah ST336753LC 15,000RPM drives
connected via a 320MB/sec SCSI interface. The controller is an Intel
SRCU42X as provided by Intel, i.e., the standard 128MB cache, and no
on-board battery (the server has a UPS and redundant power supplies
connected to separate power sources) which means the controller will do
write-through, but not write-back, caching. No override is available.

Intel claims that this is the optimum configuration for the controller, and
that more RAM or the battery pack will not help performance significantly.
The motherboard, BTW, is an Intel SE7501HG2 with dual 2.8GHz Xeons and 2GB
of RAM.

There must be thousands of RAID 5 arrays out there very similar to this
one. _Somebody_ must know. Are these performance figures reasonable, or
not?
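For context, a rough back-of-envelope of what a five-drive RAID 5 of this class could stream; the per-drive rate below is an assumption for illustration, not a measured or quoted figure:

```python
# Back-of-envelope RAID 5 sequential throughput, 5 drives.
PER_DRIVE_MB_S = 75   # assumed outer-zone streaming rate of a 15K drive
DRIVES = 5

# Large sequential reads stripe across the data portions, so the
# ceiling is roughly (N-1) times one drive:
est_seq_read = (DRIVES - 1) * PER_DRIVE_MB_S   # ~300 MB/s ceiling

# Full-stripe sequential writes pay parity only once per stripe, so
# they should land in the same ballpark; small random writes instead
# pay the classic read-modify-write penalty (read old data + old
# parity, write new data + new parity) of 4 disk I/Os per write.
RMW_IOS_PER_WRITE = 4

print(est_seq_read, RMW_IOS_PER_WRITE)
```

Even with generous allowances for write-through caching and partial stripes, 12MB/sec sequential writes sit far below any such estimate, which is what makes the question worth asking.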
Ken Wallewein
K&M Systems Integration
Phone (403)274-7848
Fax (403)275-4535
kenw@kmsi.net
www.kmsi.net
Anonymous
September 6, 2004 8:41:30 PM

Archived from groups: comp.arch.storage,comp.sys.ibm.pc.hardware.storage,microsoft.public.storage

<kenw@kmsi.net> wrote in message
news:re4pj0hj19blha05h2283n93jf7civi3kv@4ax.com...
> I have a nice new server with 15,000RPM SCSI drives in a hardware RAID 5
> configuration. I may be wrong, but I don't think it's performing properly
> at all. The question is, what's reasonable?
>
> When using large (>500MB) files to swamp out cache effects, I'm getting
> roughly 12MB/sec (it varies quite a bit) write performance and maybe 200
> MB/sec, when measured with IOzone.
>
> Measuring with batch 'copy' commands, I'm getting 40MB/sec read-only (copy
> to NUL:)  and about 14MB/sec copying back to the same array. One of the
> challenges has been getting consistent results; not sure why.
>
The destination file has to be contiguous to get proper results. That's not
likely with cmd's copy or with Sandra. Xcopy can be, if it manages to
preallocate contiguous free space, but it will be seek-bound if you have a
single array.

Create a 10MB temp file (so it stays in cache), and do "copy/b big+big+(18
more) bigger". Or xcopy from a server if you have GB ethernet.
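The append trick can be mimicked in a script: one in-memory chunk stands in for the cached 10MB file and is written out repeatedly, so only the write path is timed (the function name and sizes are illustrative):

```python
import os, time

def sequential_write_mb_s(path, chunk_mb=10, repeats=20):
    """Mimic `copy /b big+big+... bigger`: a single cached chunk
    written out repeatedly, measuring only write throughput."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(repeats):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())   # make sure data actually hits the array
    elapsed = time.perf_counter() - start
    return (chunk_mb * repeats) / elapsed
```

The fsync matters: without it the OS write cache would flatter the result, which is exactly the effect the 10MB-in-cache trick is trying to isolate on the read side.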
Anonymous
September 7, 2004 2:44:02 AM

Archived from groups: comp.arch.storage,comp.sys.ibm.pc.hardware.storage,microsoft.public.storage

In comp.sys.ibm.pc.hardware.storage kenw@kmsi.net wrote:
> I have a nice new server with 15,000RPM SCSI drives in a hardware RAID 5
> configuration. I may be wrong, but I don't think it's performing properly
> at all. The question is, what's reasonable?

> When using large (>500MB) files to swamp out cache effects, I'm getting
> roughly 12MB/sec (it varies quite a bit) write performance and maybe 200
> MB/sec, when measured with IOzone.

One data-point: On a software RAID5 with 2.6.7 and 5 * Maxtor
200GB DiamondMax 9 plus I get 22MB/s large file write performance
(measured with 1GB data file) and 65MB/s read performance. That is with
ext3 journalling file system, which also journals data, not only
metadata.

> Measuring with batch 'copy' commands, I'm getting 40MB/sec read-only (copy
> to NUL:)  and about 14MB/sec copying back to the same array. One of the
> challenges has been getting consistent results; not sure why.

> These numbers strike me as being OTL (out to lunch) for such
> high-performance drives and array controller.

I would say the performance is rather embarrassing when a software
RAID on half-as-fast disks performs massively better. However, I
recently made the mistake of buying an Adaptec SATA RAID controller.
Also slower than software RAID. I also recently talked to some
guy running huge usenet servers: They also have noted that hardware
RAID is now slower than software RAID. As soon as Linux supports
ATA/SATA hotplugging the last advantage of hardware RAID will
be gone.

Arno
--
For email address: lastname AT tik DOT ee DOT ethz DOT ch
GnuPG: ID:1E25338F FP:0C30 5782 9D93 F785 E79C 0296 797F 6B50 1E25 338F
"The more corrupt the state, the more numerous the laws" - Tacitus
Anonymous
September 7, 2004 6:57:14 AM

Archived from groups: comp.arch.storage,comp.sys.ibm.pc.hardware.storage,microsoft.public.storage

"Eric Gisin" <ericgisin@graffiti.net> wrote in message
> Create a 10MB temp file (so it stays in cache), and do "copy/b big+big+(18
> more) bigger". Or xcopy from a server if you have GB ethernet.

Gigabit isn't fast enough here.
Anonymous
September 7, 2004 6:59:39 AM

Archived from groups: comp.arch.storage,comp.sys.ibm.pc.hardware.storage,microsoft.public.storage

"Arno Wagner" <me@privacy.net> wrote in message news:2q47hiFqn6sfU1@uni-

> Also slower than software RAID. I also recently talked to some
> guy running huge usenet servers: They also have noted that hardware
> RAID is now slower than software RAID.

Some HW RAID may be slower, but not the right stuff configured properly. SW
RAID is moving in on most of the territories, though.
Anonymous
September 7, 2004 9:43:32 AM

Archived from groups: comp.arch.storage,comp.sys.ibm.pc.hardware.storage,microsoft.public.storage

<kenw@kmsi.net> wrote in message
news:re4pj0hj19blha05h2283n93jf7civi3kv@4ax.com...
[SNIP]
>The controller is an Intel
> SRCU42X as provided by Intel, i.e., the standard 128MB cache, and no
> on-board battery (the server has a UPS and redundant power supplies
> connected to separate power sources) which means the controller will do
> write-through, but not write-back, caching. No override is available.
>

If your 12MB/s is really what you get, then flushing your 128MB cache takes
10 seconds. If some idiot decides to push the power button on the server, it
will switch off before your cache is flushed. Maybe not such a bad idea to
get the battery option anyway??
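Rob's 10-second figure is just cache size divided by the observed write rate:

```python
# Time to flush a full 128MB controller cache at the observed 12MB/s.
cache_mb = 128
observed_write_mb_s = 12
flush_seconds = cache_mb / observed_write_mb_s
print(round(flush_seconds, 1))   # -> 10.7
```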

Rob
Anonymous
September 7, 2004 5:07:14 PM

Archived from groups: comp.arch.storage,comp.sys.ibm.pc.hardware.storage,microsoft.public.storage

"Ron Reaugh" <rondashreaugh@att.net> wrote:

>"Eric Gisin" <ericgisin@graffiti.net> wrote in message
>> Create a 10MB temp file (so it stays in cache), and do "copy/b big+big+(18
>> more) bigger". Or xcopy from a server if you have GB ethernet.
>
>Gigabit isn't fast enough here.

Sure it is. It's far faster than the throughput I'm getting from the array
right now, and faster than a 32-bit PCI bus (the RAID server's is 64-bit). As
it happens, the only system I currently have to trade files with has 32-bit
PCI, and when I watch network utilization, the bottleneck is obvious.

A gigabit network should be able to approach 100MB/sec -- say, at least 80.
If I was getting that from my RAID array, I'd be happy.
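The 100MB/sec figure follows from the raw line rate less typical protocol and host overhead:

```python
# Raw gigabit Ethernet line rate vs. practical throughput.
line_rate_mb = 1_000_000_000 / 8 / 1_000_000   # 125.0 MB/s on the wire
# Frame, TCP/IP and interrupt overhead typically shave this to the
# 80-100 MB/s range on hardware of this era.
print(line_rate_mb)   # -> 125.0
```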

BTW, the copy append idea in Eric's message is a great idea. It
effectively lets me do write-only write performance testing, almost the
reverse of my copy-to-NUL read test. Cool!

Unfortunately, none of this either confirms or denies whether my current
RAID 5 performance is reasonable.

/kenw
Ken Wallewein
K&M Systems Integration
Phone (403)274-7848
Fax (403)275-4535
kenw@kmsi.net
www.kmsi.net
Anonymous
September 7, 2004 10:48:24 PM

Archived from groups: comp.arch.storage,comp.sys.ibm.pc.hardware.storage,microsoft.public.storage

<kenw@kmsi.net> wrote in message
news:8ubrj0l2h2d2r32a7ueg18cfsajcb7gfg5@4ax.com...
> "Ron Reaugh" <rondashreaugh@att.net> wrote:
>
> >"Eric Gisin" <ericgisin@graffiti.net> wrote in message
> >> Create a 10MB temp file (so it stays in cache), and do "copy/b big+big+(18
> >> more) bigger". Or xcopy from a server if you have GB ethernet.
> >
> >Gigabit isn't fast enough here.
>
> Sure it is. It's far faster than the throughput I'm getting from the
> array right now,

You just contradicted yourself overall. Your stated goal is "should be
getting". For that gigabit is NOT fast enough. What about the 200?

> and faster than a 32-bit PCI bus (the RAID server's 64).

No, gigabit is about the same speed as the peak of 32-bit 33 MHz PCI.

> As it
> happens, the only system I currently have to trade files with has a 32bit
> PCI, and when I watch network utilization, the bottleneck is obvious.

Rethink what you are watching.

> A gigabit network should be able to approach 100MB/sec -- say, at least 80.

That's what I've said, and 32-bit 33.3 MHz PCI does 133.3 MB/sec.
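That PCI figure is bus width times clock:

```python
# Peak burst bandwidth of classic 32-bit, 33.3 MHz PCI.
bytes_per_transfer = 32 // 8      # 4-byte wide bus
clock_mhz = 33.3
peak_mb_s = bytes_per_transfer * clock_mhz   # ~133.2 MB/s burst peak
print(peak_mb_s)
```

Sustained throughput is lower once arbitration and other devices on the shared bus are accounted for, which is why gigabit and 32-bit PCI end up "about the same speed" in practice.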

> If I was getting that from my RAID array, I'd be happy.

What about the 200?

> BTW, the copy append idea in Eric's message is a great idea. It
> effectively lets me do write-only write performance testing, almost the
> reverse of my copy-to-NUL read test. Cool!
>
> Unfortunately, none of this either confirms or denies whether my current
> RAID 5 performance is reasonable.
Anonymous
September 8, 2004 1:39:09 AM

Archived from groups: comp.arch.storage,comp.sys.ibm.pc.hardware.storage,microsoft.public.storage

"Rob Turk" <_wipe_me_r.turk@chello.nl> wrote in message news:8Ub%c.65750$C7.51368@amsnews05.chello.com
> <kenw@kmsi.net> wrote in message news:re4pj0hj19blha05h2283n93jf7civi3kv@4ax.com...
> [SNIP]
> > The controller is an Intel
> > SRCU42X as provided by Intel, i.e., the standard 128MB cache, and no
> > on-board battery (the server has a UPS and redundant power supplies
> > connected to separate power sources) which means the controller will do
> > write-through, but not write-back, caching. No override is available.
> >
>
> If your 12MB/s is really what you get, then flushing your 128MB cache
> takes 10 seconds.

Oh?
What about the "the controller will do write-through, but not write-back, caching"?

> If some idiot decides to push the power button on the server, it
> will switch off before your cache is flushed.
> Maybe not such a bad idea to get the battery option anyway??
>
> Rob
Anonymous
September 8, 2004 1:39:10 AM

Archived from groups: comp.arch.storage,comp.sys.ibm.pc.hardware.storage,microsoft.public.storage

"Folkert Rienstra" <see_reply-to@myweb.nl> wrote in message
news:2q6hdjFredu7U1@uni-berlin.de...
>
> "Rob Turk" <_wipe_me_r.turk@chello.nl> wrote in message
news:8Ub%c.65750$C7.51368@amsnews05.chello.com
> > <kenw@kmsi.net> wrote in message
news:re4pj0hj19blha05h2283n93jf7civi3kv@4ax.com...
> > [SNIP]
> > > The controller is an Intel
> > > SRCU42X as provided by Intel, i.e., the standard 128MB cache, and no
> > > on-board battery (the server has a UPS and redundant power supplies
> > > connected to separate power sources) which means the controller will do
> > > write-through, but not write-back, caching. No override is available.
> > >
> >
> > If your 12MB/s is really what you get, then flushing your 128MB cache
> > takes 10 seconds.
>
> Oh?
> What about the "the controller will do write-through, but not write-back, caching"?

Read the thread before blathering. The controller WILL do write-back when
it has a battery.

> > If some idiot decides to push the power button on the server, it
> > will switch off before your cache is flushed.
> > Maybe not such a bad idea to get the battery option anyway??
> >
> > Rob