SATA Write Throughput

Anonymous
May 24, 2005 6:33:57 PM

Archived from groups: comp.sys.ibm.pc.hardware.storage (More info?)

I've got an Adaptec Serial ATA RAID 21610SA controller card with 6 pairs of
RAID1 disks attached. The card is rated at 1.5Gbps. Writing to one pair, I
get about 30MB/s, which I would expect. Writing to 2 pairs at the same
time, I get a total of 40MB/s. I get 40MB/s when I write to 3, 4, 5, or 6
pairs at the same time. It seems 40MB/s is the hard limit for this card. I
was hoping to get something close to 6*30MB/s = 180MB/s for the system. Any
ideas?


May 24, 2005 10:44:10 PM


What motherboard is this, and which PCI slot is the RAID card plugged into?
How exactly did you perform the "writing"? Can you elaborate?
Anonymous
May 24, 2005 10:44:11 PM


The motherboard is an Intel SE7501BR server board. The RAID card is plugged into
the first 64-bit slot. The system SCSI disk is on the integrated
Ultra320 SCSI controller. We've also got a Fibre Channel card plugged into
the 3rd 64-bit slot. Windows is reporting:
RAID card - slot 1, PCI bus 4, device 2, function 0.
SCSI - PCI bus 3, device 3, function 0.
Fibre - slot 3, PCI bus 3, device 2, function 0.

I was using the IOMeter app, but thought it was overly complicated and
wasn't exactly sure what it was doing, so I wrote my own. It basically
creates a thread for each RAID1 pair. Each thread opens a file and writes
1MB blocks into it until the file is 1GB, and keeps doing that. I've
varied the block size, changed the file size, and whether I create a new
file or just keep reusing the old filename, but nothing really made a
significant difference.
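Lars's tester itself isn't shown, so here is a minimal sketch of the same idea (a Python stand-in for what was presumably a C/Win32 program; the file names and sizes are scaled-down placeholders, not his actual values): one writer thread per mirrored pair, each streaming 1MB blocks and reporting throughput.

```python
import os
import tempfile
import threading
import time

BLOCK = 1 << 20          # 1MB blocks, as in the original tester
FILE_SIZE = 8 * BLOCK    # scaled well down from 1GB for illustration

def writer(path, results, idx):
    """Write BLOCK-sized chunks until the file reaches FILE_SIZE."""
    buf = b"\0" * BLOCK
    start = time.perf_counter()
    with open(path, "wb") as f:
        written = 0
        while written < FILE_SIZE:
            f.write(buf)
            written += BLOCK
        f.flush()
        os.fsync(f.fileno())  # force data to the device, not just the OS cache
    elapsed = time.perf_counter() - start
    results[idx] = written / elapsed / (1 << 20)  # MB/s for this "pair"

def run_benchmark(paths):
    """One thread per target file, mimicking one thread per RAID1 pair."""
    results = [0.0] * len(paths)
    threads = [threading.Thread(target=writer, args=(p, results, i))
               for i, p in enumerate(paths)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(results)  # aggregate MB/s across all targets

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        targets = [os.path.join(d, f"pair{i}.bin") for i in range(2)]
        print(f"aggregate throughput: {run_benchmark(targets):.1f} MB/s")
```

Note this is the blocking, one-write-in-flight-per-thread pattern; that detail turns out to matter later in the thread.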

A side note: we're using Windows 2003. We were using Windows XP before -
got the same throughput, but were seeing RAID pairs occasionally go offline.
We would be writing to a RAID pair, and then it would disappear. I can't
remember what the errors were in the event viewer. Adaptec said they
didn't support XP, and we haven't seen the errors since going to Win2003.

-Lars


"Peter" <peterfoxghost@yahoo.ca> wrote in message
news:07Oke.5942$dZ5.521541@news20.bellglobal.com...
> > I've got an Adaptec Serial ATA RAID 21610SA Controller card with 6 pairs
> of
> > RAID1 disks attached. The card is rated at 1.5Gbps. Writing to one
pair,
> I
> > get about 30MB/s, which I would expect. Writing to 2 pairs at the same
> > time, I get a total of 40MB/s. I get 40MB/s when I right to 3,4,5, or 6
> > pairs at the same time. It seems 40MB/s is the hard limit for this card.
> I
> > was hoping to get something close to 6*30MB/s = 180MB/s for the system.
> Any
> > ideas?
>
> What motherboard and which PCI slot has this RAID card plugged in?
> How exactly did you performed "writing", can you elaborate?
>
>
May 25, 2005 1:46:48 AM


I'm assuming the motherboard is an SE7501BR2?
Your setup seems fine. You may move the RAID card to PCI slot 2,
or swap it with the fibre card in slot 3.

Is your BIOS revision "Build P20-0079"?
If not, you may need to upgrade.

Try to review your custom benchmark. Did you try running it
against the SCSI drive(s)? They are still on the PCI-X/100 bus.
Or try benchmarking read operations for comparison.

You may also want to verify/tweak the BIOS settings.

What performance do you get from fibre?
Anonymous
May 25, 2005 2:52:40 AM


I have seen a similar limit on an Adaptec 8-disk SATA controller.
The disks are now on a pair of Promise 150TX4s with Linux software RAID.
Writing is not much faster, but reading is faster than with the
Adaptec. IMO Adaptec SATA controllers are best used as paperweights.

Arno
Anonymous
May 25, 2005 2:52:41 AM


That's kind of sad, that software RAID is faster than the hardware RAID for
SATA. I wonder how Promise got their quoted 150MB/s transfer rate for your card.

Also, we've tried creating a single RAID10 disk group of 12 disks
through our Adaptec RAID card. We still got miserable write performance.

-Lars


"Arno Wagner" <me@privacy.net> wrote in message
news:3fhphoF7sounU1@individual.net...
> Previously nospam <nospam@nospam.com> wrote:
> > I've got an Adaptec Serial ATA RAID 21610SA Controller card with 6 pairs
of
> > RAID1 disks attached. The card is rated at 1.5Gbps. Writing to one
pair, I
> > get about 30MB/s, which I would expect. Writing to 2 pairs at the same
> > time, I get a total of 40MB/s. I get 40MB/s when I right to 3,4,5, or 6
> > pairs at the same time. It seems 40MB/s is the hard limit for this card.
I
> > was hoping to get something close to 6*30MB/s = 180MB/s for the system.
Any
> > ideas?
>
> I have seen a similar limit on an adaptec 8 disk SATA controller.
> The disks are now on a pair of promise 150TX4 with Linux software RAID.
> Writing is not much fater, but reading is faster than with the
> Adaptec. IMO Sdaptec SATA controllers are best used as paperweights.
>
> Arno
>
Anonymous
May 25, 2005 7:25:38 PM


Previously nospam <nospam@nospam.com> wrote:
> That's kindof sad that software raid is faster than the hardware raid for
> sata. I wonder how Promise got their quoted 150MB/s xfer rate for your card.

That is just the interface rate. You don't get that unless you do
striping with several disks.

Arno
Anonymous
May 26, 2005 4:32:26 AM


"Arno Wagner" <me@privacy.net> wrote in message news:3fjjniF84u32U1@individual.net
> Previously nospam <nospam@nospam.com> wrote:
> > That's kindof sad that software raid is faster than the hardware raid for
> > sata. I wonder how Promise got their quoted 150MB/s xfer rate for your card.
>
> That is just the interface rate.

Which is not a (user)data rate.

> You don't get that unless you do striping with several disks.

Which won't be on the same channel unless connected through a port multiplier.
In which case you still don't get 150MB/s, as bus protocol and command
overhead have to be accounted for.

>
> Arno
Anonymous
May 26, 2005 1:11:00 PM


I've been mucking with IOMeter and have had some better success. By
increasing the number of outstanding I/Os to 16, I'm able to get 78MB/s.
Although this is still far from 150MB/s, it's much better than the 40-45MB/s
that I was getting with it set to 1, as well as in my throughput tester.
Looking at the IOMeter source, it appears they use asynchronous writes via
WriteFile - actually having multiple writers for 1 file. I'm just using
fwrite.
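The distinction Lars found - several writes in flight versus one blocking fwrite - can be sketched like this (an illustration only, using POSIX pwrite and a thread pool rather than Win32 overlapped WriteFile; the sizes are placeholders):

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

BLOCK = 1 << 20        # 1MB transfer size, matching the tests above
NUM_BLOCKS = 32        # scaled well down from 1GB for illustration

def write_with_depth(path, depth):
    """Keep up to `depth` writes in flight against a single file.

    Each job writes one block at its own offset via pwrite, loosely
    mimicking overlapped WriteFile with `depth` outstanding I/Os.
    A blocking fwrite loop is simply the depth=1 case.
    """
    buf = b"\0" * BLOCK
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        with ThreadPoolExecutor(max_workers=depth) as pool:
            futures = [pool.submit(os.pwrite, fd, buf, i * BLOCK)
                       for i in range(NUM_BLOCKS)]
            for f in futures:
                f.result()          # propagate any write error
        os.fsync(fd)                # make sure data actually hit the device
    finally:
        os.close(fd)
    return os.path.getsize(path)

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        path = os.path.join(d, "qd16.bin")
        size = write_with_depth(path, depth=16)
        print(f"wrote {size / (1 << 20):.0f} MB with 16 outstanding I/Os")
```

With a deeper queue the controller can coalesce and overlap transfers instead of waiting for each block to complete, which is consistent with 78MB/s at depth 16 versus 40-45MB/s at depth 1.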

I also tried reading with 16 outstanding I/Os and I'm getting huge
throughput - 256MB/s. I thought the max for this card was around 150MB/s.
Again, my setup is 6 pairs of RAID1.

-Lars

"Folkert Rienstra" <see_reply-to@myweb.nl> wrote in message
news:42952c99$0$90881$892e7fe2@authen.white.readfreenews.net...
> "Arno Wagner" <me@privacy.net> wrote in message
news:3fjjniF84u32U1@individual.net
> > Previously nospam <nospam@nospam.com> wrote:
> > > That's kindof sad that software raid is faster than the hardware raid
for
> > > sata. I wonder how Promise got their quoted 150MB/s xfer rate for your
card.
> >
> > That is just the interface rate.
>
> Which is not a (user)data rate.
>
> > You don't get that unless you do striping with several disks.
>
> Which won't be on the same channel unless connected through a port
multiplier.
> In which case you still don't get 150MB/s as bus protocol and command
over-
> head have to be accounted for.
>
> >
> > Arno
May 26, 2005 5:05:35 PM


The card spec says:
"Data Transfer Rate - Up to 1.5 Gbits/sec"
That is per single SATA port, not for the whole card.
The card is "64-bit/66 MHz PCI".

So what is your write performance with concurrent writes to all
6 RAID1s?

You may compare that with:
http://www.pcpro.co.uk/reviews/61847/adaptec-serial-ata...

and read some interesting info:
http://www.tweakers.net/reviews/557/1
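Putting the spec numbers side by side makes a rough bandwidth budget (the 8b/10b encoding factor is a property of SATA signalling, not something in the card's datasheet; the per-pair figure is the one Lars measured):

```python
# Theoretical ceilings for this setup, in MB/s (decimal, roughly).
sata_line_rate_gbps = 1.5                        # per SATA port
sata_payload = sata_line_rate_gbps * 1000 / 10   # 8b/10b: 10 line bits per data byte -> ~150 MB/s/port

pci_bus = 8 * 66                                 # 64-bit @ 66MHz -> 528 MB/s shared by the card

pairs = 6
per_pair_write = 30                              # observed single-pair write speed, MB/s
hoped_for = pairs * per_pair_write               # 180 MB/s

print(f"per-port SATA payload ceiling: {sata_payload:.0f} MB/s")
print(f"64-bit/66MHz PCI ceiling:      {pci_bus} MB/s")
print(f"hoped-for aggregate:           {hoped_for} MB/s")
# 180 MB/s fits comfortably under both the per-port and the PCI limits,
# so a 40MB/s (later 78MB/s) wall points at the card or driver, not the buses.
```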
Anonymous
May 26, 2005 8:43:45 PM


Write performance to all 6 RAID1s:
Using IOMeter - 6 workers, each worker having its own disk pair, 16
outstanding I/Os, 100% writes, 100% sequential, 1MB transfer size
= 78MB/s total

The same setup, but 100% reads, was 256MB/s.

-Lars




"Peter" <peterfoxghost@yahoo.ca> wrote in message
news:Blnle.9118$dZ5.754366@news20.bellglobal.com...
> > I've been mucking w/ IOMeter and have had some better success. By
> > increasing the number of outstanding I/Os to 16, I'm able to get 78MB/s.
> > Although this is still far from 150MB/s, it's much better than the
> 40-45MB/s
> > that I was getting w/ it set to 1, as well as in my throughput tester.
> > Looking at the iometer source, it appears they use asynchronous writes
> using
> > WriteFile - actually having muliple writers for 1 file. I'm just using
> > fwrite.
> >
> > I also tried reading w/ 16 outstanding I/Os and I'm getting huge
> > throughputs - 256MB/s. I thought the max for this card was around
> 150MB/s.
> > Again, my setup is 6 pairs of raid1.
>
> Card spec says:
> "Data Transfer Rate - Up to 1.5 Gbits/sec"
> that is per single SATA port, not for the whole card.
> Card is "64-bit/66 MHz PCI"
>
> So what is your write performance with concurrent write to all
> 6 RAID1's ?
>
> You may compare that with:
> http://www.pcpro.co.uk/reviews/61847/adaptec-serial-ata...
>
> and read some interesting info:
> http://www.tweakers.net/reviews/557/1
>
>
May 27, 2005 2:24:15 AM


What disks do you use?

So what is your goal: trying to figure out why write performance is not
higher than 78MB/s, or getting it higher than 78MB/s?

You may try the things I suggested before:
"I'm assuming motherboard is SE7501BR2 ?
Your setup seems fine. You may move RAID card to PCI slot 2.
Or swap with fibre card in slot 3.
Is your BIOS revision "Build P20-0079" ?
If not, you may need to upgrade.
Try to review your custom benchmark. Did you try to run it
against SCSI drive(s). They are still on PCI-X/100 bus.
Or try to benchmark on read operation for comparison.
May also verify/tweak BIOS settings.
What performance you get from fibre?"

If you need better write performance, you may also try
a RAID10 config.
Anonymous
May 27, 2005 4:25:23 AM


"Peter" <peterfoxghost@yahoo.ca> wrote in message news:Blnle.9118$dZ5.754366@news20.bellglobal.com
> > I've been mucking w/ IOMeter and have had some better success. By
> > increasing the number of outstanding I/Os to 16, I'm able to get 78MB/s.
> > Although this is still far from 150MB/s, it's much better than the 40-45MB/s
> > that I was getting w/ it set to 1, as well as in my throughput tester.
> > Looking at the iometer source, it appears they use asynchronous writes using
> > WriteFile - actually having muliple writers for 1 file. I'm just using fwrite.
> >
> > I also tried reading w/ 16 outstanding I/Os and I'm getting huge throughputs -
> > 256MB/s. I thought the max for this card was around 150MB/s.
> > Again, my setup is 6 pairs of raid1.
>
> Card spec says:
> "Data Transfer Rate - Up to 1.5 Gbits/sec"
> that is per single SATA port, not for the whole card.
> Card is "64-bit/66 MHz PCI"
>
> So what is your write performance with concurrent write to all 6 RAID1's ?

Would "I'm able to get 78MB/s" ring a bell?

Anonymous
May 27, 2005 4:25:29 AM


"nospam" <nospam@nospam.com> wrote in message news:p ymle.4$M91.0@dfw-service2.ext.ray.com
> I've been mucking w/ IOMeter and have had some better success. By
> increasing the number of outstanding I/Os to 16, I'm able to get 78MB/s.

> Although this is still far from 150MB/s,

You won't ever see 150MB/s for a single drive; the 150MB/s is the channel
clock rate. Did you bother to read my and Arno's posts?
The 150MB/s only comes into play when more than one drive (or, in your case,
all your drives) is connected to a single SATA port by means of a port multiplier.
In that case you are limited to 150MB/s, minus overhead.

> it's much better than the 40-45MB/s that I was getting w/ it set to 1,
> as well as in my throughput tester.
> Looking at the iometer source, it appears they use asynchronous writes using
> WriteFile - actually having muliple writers for 1 file. I'm just using fwrite.
>
> I also tried reading w/ 16 outstanding I/Os and I'm getting huge throughputs -

> 256MB/s.

That's ~42MB/s per drive.
Still not very fast for a modern-day drive, when you'd expect more like the 50s.

> I thought the max for this card was around 150MB/s.

What exactly did you not understand in our posts?
Are you even listening, or are you just the compulsive-habitual top
poster that doesn't actually read but paints pictures in his head
and starts rambling when the pictures don't make sense to him?

There is only 1 drive per channel, and a drive is by definition always slower
than the channel that it is connected to, as controllers are designed to last
a few years and not be outdated as soon as a newer, faster drive comes out.

So the 1.5Gb/s / 150MB/s rates won't figure anywhere in your calculations.
The STR of the drives does - the aggregated STR of 6 drives, in your case.
The bottleneck -if any- will be your system bus, not the channel(s).
> Again, my setup is 6 pairs of raid1.

Yes, we got that.

Anonymous
May 27, 2005 2:04:33 PM


"Folkert Rienstra" <see_reply-to@myweb.nl> wrote in message
news:42969387$0$46948$892e7fe2@authen.white.readfreenews.net...
> "nospam" <nospam@nospam.com> wrote in message
news:p ymle.4$M91.0@dfw-service2.ext.ray.com
> > I've been mucking w/ IOMeter and have had some better success. By
> > increasing the number of outstanding I/Os to 16, I'm able to get 78MB/s.
>
> > Although this is still far from 150MB/s,
>
> You won't ever see 150MB/s for a single drive, the 150MB/s is the channel
> clock rate. Did you bother to read my and Arnie's post?
> The 150MB/s only comes into play when more than one drive (or in your case
> all your drives) are connected to a single SATA port, by means of a port
multiplier.
> In that case you are limited to 150MB/s, minus overhead.
>
> > it's much better than the 40-45MB/s that I was getting w/ it set to 1,
> > as well as in my throughput tester.
> > Looking at the iometer source, it appears they use asynchronous writes
using
> > WriteFile - actually having muliple writers for 1 file. I'm just using
fwrite.
> >
> > I also tried reading w/ 16 outstanding I/Os and I'm getting huge
throughputs -
>
> > 256MB/s.
>
> That's ~42MB/s per drive.
> Still not very fast for a modern day drive, when expecting more like in
the 50s.

Not exactly: ~42MB/s per RAID1 pair. With a RAID1 pair, you should get close
to double the read speed.
Therefore, that's around 21MB/s per drive.
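As a quick check of the arithmetic in this exchange (a sketch only; the "reads split evenly across the mirror" assumption is exactly what is in dispute here):

```python
# Per-pair and per-drive arithmetic behind the 256MB/s read figure.
total_read_mb_s = 256      # aggregate read throughput reported by IOMeter
pairs = 6                  # six RAID1 mirrored pairs

per_pair = total_read_mb_s / pairs
print(f"per pair:  {per_pair:.1f} MB/s")            # 42.7 MB/s

# Lars's assumption: the controller alternates reads between the two
# mirror halves, so each physical drive serves about half the pair's load.
per_drive_if_split = per_pair / 2
print(f"per drive: {per_drive_if_split:.1f} MB/s")  # 21.3 MB/s
```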

>
> > I thought the max for this card was around 150MB/s.
>
> What exactly did you not understand in our posts?
> Are you even listening or are you just the compulsive-habitual top
> poster that doesn't actually read but paints pictures in his head
> and starts rambling when the pictures don't make sense to him?
No. I didn't understand that it was per disk port. So now my throughput is
looking comparatively worse.
>
> There is only 1 drive per channel, and a drive is by definition always slower
> than the channel that it is connected to, as controllers are designed to last
> a few years and not be outdated as soon as a newer, faster drive comes out.
>
> So the 1.5Gb/s / 150MB/s rates won't figure anywhere in your calculations.
> The STR of the drives does - the aggregated STR of 6 drives, in your case.
> The bottleneck -if any- will be your system bus, not the channel(s).
With a Fibre Channel card in the same slot that the RAID card was in, I can
get 190MB/s read and write speed. So the system bus is not the bottleneck.

>
> > Again, my setup is 6 pairs of raid1.
>
> Yes, we got that.
Peter asked again, so I responded.
Anonymous
May 27, 2005 2:16:54 PM


"Peter" <peterfoxghost@yahoo.ca> wrote in message
news:kxvle.9758$dZ5.832757@news20.bellglobal.com...
> > Write performance to all 6 RAID1s:
> > Using IOMeter - 6 workers, each worker having its own disk pair, 16
> > outstanding I/Os, 100% writes, 100% sequential, 1MB transfer size
> > = 78MB/s total
> >
> > same setup, but 100% reads was 256 MB/s
>
> What disks do you use?
Hitachi Deskstar, 400GB.
>
> So what is your goal, trying to figure out why write performance is not
> higher than 78MB/s or getting it higher than 78MB/s?
My goal is to get a total of 6*30 = 180MB/s write speed.

>
> You may try things I have suggested before:
> "I'm assuming motherboard is SE7501BR2 ?
> Your setup seems fine. You may move RAID card to PCI slot 2.
> Or swap with fibre card in slot 3.
> Is your BIOS revision "Build P20-0079" ?
> If not, you may need to upgrade.
> Try to review your custom benchmark. Did you try to run it
> against SCSI drive(s). They are still on PCI-X/100 bus.
> Or try to benchmark on read operation for comparison.
> May also verify/tweak BIOS settings.
Tried all that stuff. I'm thinking this SATA controller card is a POS.

> What performance you get from fibre?"
see other post.
>
> If you need a better write performance, you may also try
> RAID10 config.
Also tried that. See previous post.
May 27, 2005 5:25:03 PM


> Tried all that stuff. I'm thinking this SATA controller card is a POS.

Then you might be right. I haven't seen any reference to good write
performance with this card.

>
> > What performance you get from fibre?"
> see other post.

Which post?
Anonymous
May 28, 2005 4:21:29 AM


"nospam" <nospam@nospam.com> wrote in message news:BqIle.6$mH1.2@dfw-service2.ext.ray.com
> "Folkert Rienstra" <see_reply-to@myweb.nl> wrote in message news:42969387$0$46948$892e7fe2@authen.white.readfreenews.net...
> > "nospam" <nospam@nospam.com> wrote in message news:p ymle.4$M91.0@dfw-service2.ext.ray.com
> > > I've been mucking w/ IOMeter and have had some better success. By
> > > increasing the number of outstanding I/Os to 16, I'm able to get 78MB/s.
> >
> > > Although this is still far from 150MB/s,
> >
> > You won't ever see 150MB/s for a single drive, the 150MB/s is the channel
> > clock rate. Did you bother to read my and Arnie's post?
> > The 150MB/s only comes into play when more than one drive (or in your case
> > all your drives) are connected to a single SATA port, by means of a port multiplier.
> > In that case you are limited to 150MB/s, minus overhead.
> >
> > > it's much better than the 40-45MB/s that I was getting w/ it set to 1,
> > > as well as in my throughput tester.
> > > Looking at the iometer source, it appears they use asynchronous writes using
> > > WriteFile - actually having muliple writers for 1 file. I'm just using fwrite.
> > >
> > > I also tried reading w/ 16 outstanding I/Os and I'm getting huge throughputs -
> >
> > > 256MB/s.
> >
> > That's ~42MB/s per drive.
> > Still not very fast for a modern day drive, when expecting more like in the 50s.
>
> Not exactly. ~42MB/s per raid1 pair.

Doubtful.

> w/ a raid1 pair, you should get close to double the read speed.

Nope. That is RAID0.
Only if the RAID driver alternates consecutive IOs between the
drive pair can you get some RAID0 type performance on big files.
Whether your RAID controller does that remains to be seen.

> Therefore, that's around 21MB/s per drive.
>
> >
> > > I thought the max for this card was around 150MB/s.
> >
> > What exactly did you not understand in our posts?
> > Are you even listening or are you just the compulsive-habitual top
> > poster that doesn't actually read but paints pictures in his head
> > and starts rambling when the pictures don't make sense to him?
>
> No. I didn't understand that it was per disk port.

That is downright silly - 120MB/s for a multichannel RAID card?
You're joking.

So how come you then expected 180MB/s from a '1.5Gb/s' (~120MB/s data) card?

> So now my throughputs are looking comparatively worse.
> >
> > There is only 1 drive per channel and a drive is per definition always slower
> > than the channel that it is connected to, as controllers are designed to last
> > a few years, to not be outdated as soon as a newer, faster drive comes out.
> >
> > So the 1.5Gb/s 150MB/s rates won't figure anywhere in your calculations.
> > The STR of the drives do. The aggregated STR of 6 drives, in your case.
> > The bottleneck -if any- will be your system bus, not the channel(s).

> With a fibre channel card in the same slot that the raid card was in, I can
> get 190 MB/s read and write speed. So system bus is not the bottleneck.

That remains to be seen, with 12 drives @ ~50MB/s each.
(Unless it does the RAID1 internally on the card.)
