
RAID performance

Tags:
  • AMD
  • Overclocking
  • Performance
  • NAS / RAID
Anonymous
July 31, 2004 11:23:12 PM

Archived from groups: alt.comp.hardware.overclocking.amd

Not exactly an overclocking question, but I'm thinking this is a good place
for performance issues.

How much performance improvement should I expect if Win XP is installed on a
RAID 0 [or 0+1] array vs using a drive on the standard IDE controller?

Thanks


Anonymous
August 1, 2004 4:20:35 AM

Frank Jelenko wrote:
> Not exactly an overclocking question, but I'm thinking this is a good
> place for performance issues.
>
> How much performance improvement should I expect if Win XP is installed
> on a RAID 0 [or 0+1] array vs using a drive on the standard IDE
> controller?


Depends what you want to do.

But probably less than installing Windows on one drive and having it swap to
a different one. If you have just two drives, you can usually make more
effective use of two separate drives than two that are tied together with
RAID. Again, it depends on your usage patterns.
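
For what it's worth, here is a minimal sketch of how RAID 0 lays data out
across two drives compared with leaving them independent. The 64 KiB stripe
size, the two-drive count and the raid0_location helper are made up for
illustration only; real controllers differ:

# Minimal sketch of RAID 0 block placement (assumed 64 KiB stripes, 2 drives).
STRIPE_SIZE = 64 * 1024   # bytes per stripe (assumption for illustration)
NUM_DRIVES = 2

def raid0_location(logical_byte: int) -> tuple[int, int]:
    """Map a logical byte offset to (drive index, offset on that drive)."""
    stripe = logical_byte // STRIPE_SIZE
    drive = stripe % NUM_DRIVES            # stripes alternate between drives
    offset = (stripe // NUM_DRIVES) * STRIPE_SIZE + logical_byte % STRIPE_SIZE
    return drive, offset

# A big sequential read touches both drives (good for throughput), but any
# two concurrent requests also tend to keep both drives busy, so the OS and
# the swap file can no longer seek independently.
for logical in (0, 64 * 1024, 128 * 1024, 192 * 1024):
    print(logical, raid0_location(logical))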

Ben
--
A7N8X FAQ: www.ben.pope.name/a7n8x_faq.html
Questions by email will likely be ignored, please use the newsgroups.
I'm not just a number. To many, I'm known as a String...
Anonymous
August 1, 2004 4:20:36 AM

"Ben Pope" <spam@hotmail.com> wrote in message
news:2n2np9FsrjfeU1@uni-berlin.de...
> Frank Jelenko wrote:
>> Not exactly an overclocking question, but I'm thinking this is a good
>> place for performance issues.
>>
>> How much performance improvement should I expect if Win XP is installed
>> on a RAID 0 [or 0+1] array vs using a drive on the standard IDE
>> controller?
>
>
> Depends what you want to do.
>
> But probably less than installing Windows on one drive and having it swap
> to
> a different one. If you have just two drives, you can usually make more
> effective use of two separate drives than two that are tied together with
> RAID.
Actually, I have several drives.

How about this: config 1 is two drives, each one on a different IDE controller,
SWAP file on the second drive.
Config 2 is system on RAID 0 [two identical drives, each the same as in
config 1], swap file on a third drive [this third drive is the same as the
second drive in config 1].

Would you expect to see a noticeable/significant increase in performance of
config 2 over config 1?

Thanks
> Ben
> --
> A7N8X FAQ: www.ben.pope.name/a7n8x_faq.html
> Questions by email will likely be ignored, please use the newsgroups.
> I'm not just a number. To many, I'm known as a String...
>
>
Anonymous
August 1, 2004 4:54:55 AM

Frank Jelenko wrote:
> Actually, have several drives.
>
> How about config 1 is two drives, each one on different IDE controllers.
> SWAP file on second drive.
> Config 2 is system on RAID 0 [two identical drives, each the same as in
> config 1], swap file on third drive [this third drive is the same as the
> second drive in config 1]
>
> Would you expect to see noticeable/significant increase in performance of
> config 2 over config 1?

RAID can be a little faster than a comparable rig without it, but only in
certain circumstances. Go find some benchmarks; there was a discussion here
in the last few days about RAID that had links to benchmarks and further
discussion of the pros and cons.
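
If you would rather measure than hunt for links, a rough sequential-read
test is easy to script. This is only a sketch: it assumes a large existing
file (the name testfile.bin is hypothetical) sitting on the drive or array
under test and not already cached in RAM, and it says nothing about random
I/O, which is where RAID 0 tends to help least:

# Rough sequential-read throughput check (sketch, not a rigorous benchmark).
# Assumes 'testfile.bin' is a large file on the disk/array under test and is
# not already cached in memory (reboot first, or use a file bigger than RAM).
import time

CHUNK = 1024 * 1024      # read 1 MiB at a time
path = "testfile.bin"    # hypothetical test file on the drive being measured

total = 0
start = time.time()
with open(path, "rb", buffering=0) as f:
    while True:
        block = f.read(CHUNK)
        if not block:
            break
        total += len(block)
elapsed = time.time() - start
print(f"{total / 2**20:.0f} MiB in {elapsed:.1f} s = "
      f"{total / 2**20 / elapsed:.1f} MiB/s")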

Ben
--
A7N8X FAQ: www.ben.pope.name/a7n8x_faq.html
Questions by email will likely be ignored, please use the newsgroups.
I'm not just a number. To many, I'm known as a String...
Anonymous
August 1, 2004 6:07:53 AM

"Ben Pope" <spam@hotmail.com> wrote in message
news:2n2preFs35rqU1@uni-berlin.de...
> Frank Jelenko wrote:
>> Actually, have several drives.
>>
>> How about config 1 is two drives, each one on different IDE controllers.
>> SWAP file on second drive.
>> Config 2 is system on RAID 0 [two identical drives, each the same as in
>> config 1], swap file on third drive [this third drive is the same as the
>> second drive in config 1]
>>
>> Would you expect to see noticeable/significant increase in performance of
>> config 2 over config 1?
>
> RAID can be a little faster than a comparable rig without. But only in
> certain circumstances. Go find some benchmarks, there was a discussion
> here
> in the last few days about RAID and had links to benchmarks and further
> discussion of the pros and cons.
>
Thanks. Didn't find any posts here, but Google provided a couple. Several
comments said that IDE RAID doesn't do much for the OS itself or for office apps
[which I run]. Comments did say it should help with large files.

> Ben
> --
> A7N8X FAQ: www.ben.pope.name/a7n8x_faq.html
> Questions by email will likely be ignored, please use the newsgroups.
> I'm not just a number. To many, I'm known as a String...
>
>
Anonymous
August 1, 2004 10:34:55 PM

If you have onboard RAID, you are wasting your time.
If you buy a separate RAID card you would notice the difference.

"Frank Jelenko" <jelenko2@hotmail.com> wrote in message
news:ZfYOc.363203$Gx4.37249@bgtnsc04-news.ops.worldnet.att.net...
>
> "Ben Pope" <spam@hotmail.com> wrote in message
> news:2n2preFs35rqU1@uni-berlin.de...
> > Frank Jelenko wrote:
> >> Actually, have several drives.
> >>
> >> How about config 1 is two drives, each one on different IDE controllers.
> >> SWAP file on second drive.
> >> Config 2 is system on RAID 0 [two identical drives, each the same as in
> >> config 1], swap file on third drive [this third drive is the same as the
> >> second drive in config 1]
> >>
> >> Would you expect to see noticeable/significant increase in performance of
> >> config 2 over config 1?
> >
> > RAID can be a little faster than a comparable rig without. But only in
> > certain circumstances. Go find some benchmarks, there was a discussion
> > here
> > in the last few days about RAID and had links to benchmarks and further
> > discussion of the pros and cons.
> >
> Thanks. Didn't find any posts here, but google provided a couple. Several
> comments that IDE RAID doesn't do much for the OS itself or for office apps
> [which I run]. Comments did say it should help with large files.
>
> > Ben
> > --
> > A7N8X FAQ: www.ben.pope.name/a7n8x_faq.html
> > Questions by email will likely be ignored, please use the newsgroups.
> > I'm not just a number. To many, I'm known as a String...
> >
> >
>
>
Anonymous
August 1, 2004 10:34:56 PM

Achim Weissegger wrote:
> If you have onboard RAID, you are wasting your time.
> If you buy a separate RAID card you would notice the difference.

That statement is far too broad and sweeping to be correct. It depends a
lot on the RAID implementation. Many newer RAID implementations that are
built into the chipset are extremely good, and can have better performance
than a PCI RAID card due to lower latency, greater bandwidth, etc. This is
more apparent with 3+ drives rather than 2.

Ben
--
A7N8X FAQ: www.ben.pope.name/a7n8x_faq.html
Questions by email will likely be ignored, please use the newsgroups.
I'm not just a number. To many, I'm known as a String...
August 2, 2004 1:16:05 PM


"Ben Pope" <spam@hotmail.com> wrote in message
news:2n3vq8Fp91apU1@uni-berlin.de...
> Achim Weissegger wrote:
> > If you have onboard RAID, you are wasting your time.
> > If you buy a separate RAID card you would notice the difference.
>
> That statement is far too broad and sweeping to be correct. It depends a
> lot on the RAID implementation. Many newer RAID implementations that are
> built into the chipset are extremely good, and can have better performance
> than a PCI RAID card due to lower latency, greater bandwidth, etc. This is
> more apparent with 3+ drives rather than 2.
>
> Ben

Ben, I think you are being very polite and charitable in your response.

I would simply say that Achim's post was a complete load of bollocks.
Ironically, the reality is the total opposite of what Achim has said.

The very best raid performance can ONLY come from onboard controllers, since
only the ones integrated into a southbridge can bypass the limited PCI
bandwidth. Therefore, you will find that the best controller is the Intel
ICH5-R, closely followed by the new nVidia southbridge (forget the name) and
the VIA VT8237. No add-in board can come close to these. Not until
PCI Express is established, at least.

But as if that wasn't enough, if you look at the Silicon Image SIL3112
controller (a very popular onboard controller), it outperforms ALL other
add-in controllers with 2-disk RAID 0 setups. So even if we ignore the
southbridge-integrated controllers, Achim's statement is STILL complete
bollocks.

Chip
Anonymous
August 3, 2004 3:18:31 AM

Looking at the original post, Frank talked about standard IDE drives, not
SCSI or SATA.
When I bought the ASUS A7V motherboard with an integrated RAID port I was very
disappointed with the very average RAID port performance.
I searched for months to find a fix and tried patched BIOSes, drivers, etc. I
kept on reading about people having the same problem with the built-in Promise
FastTrak 100 RAID controller, whereas the PCI version performs much better.
When my motherboard packed it in a month ago, I thought I could save my kids'
10 GB of not-backed-up MP3s by getting another motherboard with a built-in
RAID port.
I bought the Gigabyte GA7V400Pro2 and the IDE RAID performance is not much
better, whereas the SATA RAID port is meant to be very good.
Read the reviews and compare the benchmark results of the built-in and PCI
card IDE RAID controllers and you will see what I mean.

"Chip" <anneonymouse@virgin.net> wrote in message
news:2n6bgnFtfsbjU1@uni-berlin.de...
> "Ben Pope" <spam@hotmail.com> wrote in message
> news:2n3vq8Fp91apU1@uni-berlin.de...
> > Achim Weissegger wrote:
> > > If you have onboard RAID, you are wasting your time.
> > > If you buy a separate RAID card you would notice the difference.
> >
> > That statement is far too broad and sweeping to be correct. It depends a
> > lot on the RAID implementation. Many newer RAID implementations that are
> > built into the chipset are extremely good, and can have better performance
> > than a PCI RAID card due to lower latency, greater bandwidth, etc. This is
> > more apparent with 3+ drives rather than 2.
> >
> > Ben
>
> Ben, I think you are being very polite and charitable in your response.
>
> I would simply say that Achim's post was a complete load of bollocks.
> Ironically the reality is the total opposite from what Achim has said.
>
> The very best raid performance can ONLY come from onboard controllers, since
> only the ones integrated into a southbridge can bypass the limited PCI
> bandwidth. Therefore, you will find that the best controller is the Intel
> ICH5-R, closely followed by the new nVidia southbridge (forget the name) and
> the VIA VT8237. No add-in board can come close to these. Not until
> PCI Express is established, at least.
>
> But as if that wasn't enough, if you look at the Silicon Image SIL3112
> controller (a very popular onboard controller), it outperforms ALL other
> add-in controllers with 2-disk RAID 0 setups. So even if we ignore the
> southbridge-integrated controllers, Achim's statement is STILL complete
> bollocks.
>
> Chip
>
>
Anonymous
August 3, 2004 3:18:32 AM

Looking at the original post, Frank talked about standard IDE drives, not
SCSI or SATA.
When I bought the ASUS A7V motherboard with an integrated RAID port I was very
disappointed with the very average RAID port performance.
I searched for months to find a fix and tried patched BIOSes, drivers, etc. I
kept on reading about people having the same problem with the built-in Promise
FastTrak 100 RAID controller, whereas the PCI version performs much better.
When my motherboard packed it in a month ago, I thought I could save my kids'
10 GB of not-backed-up MP3s by getting another motherboard with a built-in
RAID port.
I bought the Gigabyte GA7V400Pro2 and the IDE RAID performance is not much
better, whereas the SATA RAID port is meant to be very good.
Read the reviews and compare the benchmark results of the built-in and PCI
card IDE RAID controllers and you will see what I mean.

--------------------------------------------------------------------------
Since the SATA ports run through the PCI bus, they won't perform any better
than a built-in or PCI add-in RAID controller. If you want good RAID
performance you need the Intel ICH5R chipset, which bypasses the PCI bus.
A friend of mine has 2 Maxtor SATA drives on this ICH5R chipset that he uses
for video editing and rendering, and he says it is a big improvement over the
IDE RAID he used to use. DOUG
Anonymous
August 3, 2004 12:03:13 PM

Chip wrote:
[...]
> The very best raid performance can ONLY come from onboard
> controllers, since only the ones integrated into a southbridge can
> bypass the limited PCI bandwidth. Therefore, you will find that the
> best controller is the Intel ICH5-R, closely followed by the new
> nVidia southbridge (forget the name) and the VIA VT8237. No add-in
> board can come close to these.

An add-in board can easily come close to these. That's why there are so many
high-end SCSI RAID cards on the market. Of course, all the decent ones are
64-bit 66MHz PCI cards (which in most decent implementations effectively
hang off the northbridge), which easily gets rid of the bandwidth problem.
You just have to have a motherboard that supports it :)  Also, these add-in
boards are often much more powerful than onboard solutions (bigger
buffers, more modes, more drives, etc etc).

[...]

--
Michael Brown
www.emboss.co.nz : OOS/RSI software and more :) 
Add michael@ to emboss.co.nz - My inbox is always open
August 3, 2004 12:03:14 PM


"Michael Brown" <see@signature.below> wrote in message
news:D5xPc.8219$N77.409032@news.xtra.co.nz...
> Chip wrote:
> [...]
> > The very best raid performance can ONLY come from onboard
> > controllers, since only the ones integrated into a southbridge can
> > bypass the limited PCI bandwidth. Therefore, you will find that the
> > best controller is the Intel ICH5-R, closely followed by the new
> > nVidia southbridge (forget the name) and the VIA VT8237. No add-in
> > board can come close to these.
>
> An add-in board can easily come close to these. That's why there are so many
> high-end SCSI RAID cards on the market. Of course, all the decent ones are
> 64-bit 66MHz PCI cards (which in most decent implementations effectively
> hang off the northbridge), which easily gets rid of the bandwidth problem.
> You just have to have a motherboard that supports it :)  Also, these add-in
> boards are often much more powerful than onboard solutions (bigger
> buffers, more modes, more drives, etc etc).

There's some sense in what you say, Michael, but there are parts of the above
that I must comment on.

1. Yes, lots of SCSI RAID cards exist. Probably because SCSI was the only
sensible option for servers etc. until SATA came along. And yes, you can
build a more powerful/efficient SCSI controller card by throwing more
silicon at it. It can perhaps buffer better and get more I/Os per second
out of the disks. But that doesn't escape the basic fact that you can only
get around 110~115MB/s through the 33MHz PCI bus. So it doesn't matter how
clever the controller is, PCI is a complete bottleneck if you want to run 2
or more fast disks (rough numbers are sketched below).

2. Of course the 66MHz PCI standard alleviates the problem. But we are not
talking about normal "PCs" here, are we? The normal XP / Intel / AMD home PC
is stuck with 33 MHz.

3. "effectively hang off the northbridge"?????!!!? I am afraid that's
complete nonsense. Northbridges communicate with the CPU and memory and
link up to southbridges. Everything PCI hangs off the southbridge. That's
true of normal 33MHz PCI buses and 66MHz PCI servers. PCI SCSI controllers
(or any other PCI devices for that matter) don't go anywhere near the
northbridge and are all limited by whatever PCI bandwidth is on offer.

Contrast that with southbridges that have integrated disk controllers. Here the
controllers *do not* go through PCI and can communicate (via the
southbridge) with the CPU at typically 1000MB/s (or whatever the
northbridge - southbridge speed is for that particular kind of chipset).
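
To put rough numbers on the bus argument above: the per-disk figure (a
sustained ~55 MB/s, roughly what a fast IDE drive of the time manages) and
the 85% bus efficiency below are assumptions for illustration, not
measurements; the bus figures are the usual theoretical numbers.

# Back-of-envelope bus bandwidth vs. RAID 0 appetite (assumed figures).
PCI_33 = 33_000_000 * 4          # 33 MHz x 32-bit PCI = ~133 MB/s theoretical
PCI_33_USABLE = PCI_33 * 0.85    # ~110-115 MB/s in practice (assumed efficiency)
SB_LINK = 1_000_000_000          # typical northbridge-southbridge link, ~1000 MB/s
DISK_RATE = 55_000_000           # assumed sustained rate of one fast IDE disk

for disks in (1, 2, 3):
    wanted = disks * DISK_RATE
    print(f"{disks} disk(s): want ~{wanted / 1e6:.0f} MB/s; "
          f"33MHz PCI gives ~{PCI_33_USABLE / 1e6:.0f} MB/s, "
          f"southbridge link ~{SB_LINK / 1e6:.0f} MB/s")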

But anyway, we lose track of the original point of all of this:

Achim said that you will never get decent RAID 0 performance from an
integrated controller, only from an add-in card. Plainly that is wrong.

Cheers,

Chip.
Anonymous
August 3, 2004 2:31:33 PM

Just to clarify, I said that all normal motherboards with integrated ATA IDE
ports lag behind the PCI card version in performance; for example, the
integrated Promise FastTrak 100 does not perform as well as the PCI card
version does.
I never talked about SCSI or SATA. I know that most servers have integrated
SCSI that performs very well, or use a 64-bit PCI bus.


"Chip" <anneonymouse@virgin.net> wrote in message
news:2n7ot3Ftk3naU1@uni-berlin.de...
>
> "Michael Brown" <see@signature.below> wrote in message
> news:D5xPc.8219$N77.409032@news.xtra.co.nz...
> > Chip wrote:
> > [...]
> > > The very best raid performance can ONLY come from onboard
> > > controllers, since only the ones integrated into a southbridge can
> > > bypass the limited PCI bandwidth. Therefore, you will find that the
> > > best controller is the Intel ICH5-R, closely followed by the new
> > > nVidia southbridge (forget the name) and the VIA VT8237. No add-in
> > > board can come close to these.
> >
> > An add-in board can easily come close to these. That's why there are so
> > many high-end SCSI RAID cards on the market. Of course, all the decent
> > ones are 64-bit 66MHz PCI cards (which in most decent implementations
> > effectively hang off the northbridge), which easily gets rid of the
> > bandwidth problem. You just have to have a motherboard that supports it :)
> > Also, these add-in boards are often much more powerful than onboard
> > solutions (bigger buffers, more modes, more drives, etc etc).
>
> There's some sense in what you say, Michael, but there are parts of the above
> that I must comment on.
>
> 1. Yes, lots of SCSI RAID cards exist. Probably because SCSI was the only
> sensible option for servers etc. until SATA came along. And yes, you can
> build a more powerful/efficient SCSI controller card by throwing more
> silicon at it. It can perhaps buffer better and get more I/Os per second
> out of the disks. But that doesn't escape the basic fact that you can only
> get around 110~115MB/s through the 33MHz PCI bus. So it doesn't matter how
> clever the controller is, PCI is a complete bottleneck if you want to run 2
> or more fast disks.
>
> 2. Of course the 66MHz PCI standard alleviates the problem. But we are not
> talking about normal "PCs" here, are we? The normal XP / Intel / AMD home PC
> is stuck with 33 MHz.
>
> 3. "effectively hang off the northbridge"?????!!!? I am afraid that's
> complete nonsense. Northbridges communicate with the CPU and memory and
> link up to southbridges. Everything PCI hangs off the southbridge. That's
> true of normal 33MHz PCI buses and 66MHz PCI servers. PCI SCSI controllers
> (or any other PCI devices for that matter) don't go anywhere near the
> northbridge and are all limited by whatever PCI bandwidth is on offer.
>
> Contrast that with southbridges that have integrated disk controllers. Here
> the controllers *do not* go through PCI and can communicate (via the
> southbridge) with the CPU at typically 1000MB/s (or whatever the
> northbridge - southbridge speed is for that particular kind of chipset).
>
> But anyway, we lose track of the original point of all of this:
>
> Achim said that you will never get decent RAID 0 performance from an
> integrated controller, only from an add-in card. Plainly that is wrong.
>
> Cheers,
>
> Chip.
>
>
Anonymous
August 3, 2004 2:31:34 PM

Just to clarify, I said that all normal motherboards with integrated ATA IDE
ports lag behind the PCI card version in performance; for example, the
integrated Promise FastTrak 100 does not perform as well as the PCI card
version does.
I never talked about SCSI or SATA. I know that most servers have integrated
SCSI that performs very well, or use a 64-bit PCI bus.

--------------------------------------------------------------------------
Do you have some reviews or benchmarks showing where the PCI add-in card
outperforms a built-in Promise RAID controller with a fast CPU like a 3.0 GHz
P4? I find that a little hard to believe, because a built-in controller with a
fast CPU would perform much better than the add-in card with its cheap, slow
built-in processor. I have built quite a few Gigabyte motherboards with
built-in Promise RAID controllers that performed very well.
Anonymous
August 3, 2004 5:22:41 PM

I am talking about IDE RAID...
When I used to have the built-in Promise FastTrak controller, I would reach
benchmarks of 28 MB per second, compared to 36 to 45 MB per second from PCI
FastTrak controller cards with the same chipset in tests performed by
www.tomshardware.com with similar drives.
I would get 24 MB per second running a single drive on the same controller.
I have only read bad reviews about the onboard IDE RAID on my G7N400Pro2
motherboard.
I have an XP3200+ CPU and I am lucky to reach 32 MB per second.
I have used lots of different benchmarking programs, and on all of them the
performance is the same.
Have a look at http://forums.pcper.com/forumdisplay.php?f=40 and you will
read that a lot of people are very disappointed with their onboard IDE RAID.
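
Taking the figures quoted above at face value, the relative gaps work out
roughly as follows (this is just arithmetic on the numbers in this post, not
a new measurement):

# Relative throughput from the FastTrak figures quoted above (MB/s).
single_drive = 24
onboard_raid0 = 28
pci_card_low, pci_card_high = 36, 45

print(f"onboard RAID 0 vs single drive: +{(onboard_raid0 / single_drive - 1) * 100:.0f}%")
print(f"PCI card RAID 0 vs single drive: +{(pci_card_low / single_drive - 1) * 100:.0f}% "
      f"to +{(pci_card_high / single_drive - 1) * 100:.0f}%")
print(f"PCI card vs onboard: +{(pci_card_low / onboard_raid0 - 1) * 100:.0f}% "
      f"to +{(pci_card_high / onboard_raid0 - 1) * 100:.0f}%")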

All I am saying is: if you want IDE RAID, instead of relying on a motherboard
with it onboard, spend an extra couple of dollars and buy a separate card.



"Courseyauto" <courseyauto@aol.com> wrote in message
news:20040802215050.23168.00003024@mb-m07.aol.com...
> Just to clarify, I said that all normal motherboards with integrated ATA
> IDE ports lag behind the PCI card version in performance; for example, the
> integrated Promise FastTrak 100 does not perform as well as the PCI card
> version does.
> I never talked about SCSI or SATA. I know that most servers have integrated
> SCSI that performs very well, or use a 64-bit PCI bus.
>
> --------------------------------------------------------------------------
> Do you have some reviews or benchmarks showing where the PCI add-in card
> outperforms a built-in Promise RAID controller with a fast CPU like a
> 3.0 GHz P4? I find that a little hard to believe, because a built-in
> controller with a fast CPU would perform much better than the add-in card
> with its cheap, slow built-in processor. I have built quite a few Gigabyte
> motherboards with built-in Promise RAID controllers that performed very
> well.
Anonymous
August 3, 2004 8:14:48 PM

Please look at the original post.
You are all talking about a different setup.
Frank is using a standard IDE drive. He never asked about a high-end setup.
If he were talking about a top-of-the-range motherboard he would not use
standard IDE; he would upgrade his drives as well.
The Promise FastTrak RAID on ASUS motherboards and the ITE RAID on Gigabyte
motherboards do not perform as well as they should.
"Frank Jelenko" <jelenko2@hotmail.com> wrote in message
news:AkSOc.361351$Gx4.237014@bgtnsc04-news.ops.worldnet.att.net...
> Not exactly an overclocking question, but I'm thinking this is a good place
> for performance issues.
>
> How much performance improvement should I expect if Win XP is installed on a
> RAID 0 [or 0+1] array vs using a drive on the standard IDE controller?
>
> Thanks
>
>
August 3, 2004 8:35:18 PM


"Achim Weissegger" <aweissegger@hotmail.com> wrote in message
news:410edce7$1@duster.adelaide.on.net...
> Just to clarify, I said that all normal motherboards with integrated ATA
> IDE ports lag behind the PCI card version in performance

With respect, Achim, you did not.

You said, "If you have onboard RAID, you are wasting your time.
If you buy a separate RAID card you would notice the difference."

That was the full extent of your original post. It was - and it remains -
completely incorrect.

Chip.
Anonymous
August 3, 2004 8:35:19 PM

"Achim Weissegger" <aweissegger@hotmail.com> wrote in message
news:410edce7$1@duster.adelaide.on.net...
> Just to clarify, I said that all normal motherboards with integrated ATA
> IDE ports lag behind the PCI card version in performance

With respect, Achim, you did not.

You said, "If you have onboard RAID, you are wasting your time.
If you buy a separate RAID card you would notice the difference."

That was the full extent of your original post. It was - and it remains -
completely incorrect.

Chip.


--------------------------------------------------------------------------
I agree; you would have to buy an add-in card that costs more than the
motherboard to get better performance than a built-in RAID controller. DOUG
Anonymous
August 3, 2004 9:36:01 PM

Chip wrote:
> "Michael Brown" <see@signature.below> wrote in message
> news:D5xPc.8219$N77.409032@news.xtra.co.nz...
>> Chip wrote:
>> [...]
>>> The very best raid performance can ONLY come from onboard
>>> controllers, since only the ones integrated into a southbridge can
>>> bypass the limited PCI bandwidth. Therefore, you will find that the
>>> best controller is the Intel ICH5-R, closely followed by the new
>>> nVidia southbridge (forget the name) and the VIA VT8237. No add-in
>>> board can come close to these.
>>
>> An add-in board can easily come close to these. That's why there's
>> so many high-end SCSI RAID cards on the market. Of course, all the
>> decent ones are 64-bit 66MHz PCI cards (which in most decent
>> implementations effectively hang off the northbridge), which easily
>> gets rid of the bandwidth problem. You just have to have a
>> motherboard that supports it :)  Also, these add-in boards are often
>> much more powerful than onboard solutions (bigger buffers, more
>> modes, more drives, etc etc).
>
> There's some sense in what you say Michael, but there are parts of
> the above that I must comment on.
>
> 1. Yes lots of SCSI raid cards exist.

I was more pointing out the "high end" part as opposed to SCSI, but it
doesn't really matter :) 

> Probably because SCSI was the
> only sensible option of servers etc until sata came along. And yes,
> you can build a more powerful/efficient SCSI controller card by
> throwing more silicon at it. It can perhaps buffer better and get
> more i/o's per second out of the disks. But that doesn't escape the
> basic fact that you can only get around 110~115MB/s through the 33MHz
> PCI bus. So it doesn't matter how clever the controller is, PCI is a
> complete bottle neck if you want to run 2 or more fast disks.
>
> 2. Of course the 66MHz PCI standard alleviates the problem. But we
> are not talking normal "PC's" here are we. The normal XP / Intel /
> AMD home PC is stuck with 33 MHz.

Are we talking about the "very best raid performance" or the "very best raid
performance that can be obtained with a reasonably cheap motherboard"? :)  I
suppose I was more agreeing with Ben Pope, in that saying "addin > onboard"
(Achim) or "onboard > addin" (you) is a huge generalisation, and as such is
not correct.

Also, given that there are quite a few sub-US$400 boards with 66MHz 64-bit PCI
slots (or alternatively PCI-X), I wouldn't call it an exotic feature by any
means. There are a lot under $200 as well if you want to go dual-XPs. My
system (MSI K7D) has got two of them, and the entire system cost much less
than an FX-5x. Further up the food chain are the 100MHz and 133MHz PCI-X
busses, but this time you are getting into the serious hardware realm.

In any case, pretty much the only people who will notice a significant
difference between non-RAID and RAID (regardless of whether it's onboard or
add-in) are those running servers, which usually have faster busses precisely
for this purpose.

> 3. "effectively hang off the northbridge"?????!!!? I am afraid that's
> complete nonsense.

It depends on the chipset, of course :)  The 64-bit PCI bus in the 760MPX
chipset comes from (ie: is arbitrated by) the northbridge. The southbridge
is in fact just a PCI-PCI bridge that hangs off the 64-bit PCI bus from the
northbridge. This is not an uncommon layout in midrange chipset designs, as
it allows much simpler design of the southbridge. In the high-end chipsets
such as Serverworks, there's usually another whole chip that's controlling
the fast PCI bus(ses).

[...]
> Achim said that you will never get decent raid0 performance from an
> integrated controller and only from an add-in card. Plainly that is
> wrong.

Oh, certainly. I was just taking issue with your assertion that the very
best RAID performance can only come from an onboard controller. The best
performance today comes from those insane add-in cards with 2 or more U320
ports and 128MB of RAM (which admittedly often like to run on 133MHz PCI,
which is nowhere near as common as the 66MHz ones). Fully loaded up, I don't
think there's an onboard controller anywhere that can compete with them. In
the future, PCI-e will provide more than enough bandwidth, especially if
motherboard manufacturers start implementing multiple 16x slots to cater
for SLI.

--
Michael Brown
www.emboss.co.nz : OOS/RSI software and more :) 
Add michael@ to emboss.co.nz - My inbox is always open