
The Southbridge Battle: nforce 6 MCP vs. ICH7 vs. ICH8

January 3, 2007 10:43:54 AM

The nForce 680i may be the enthusiast's darling, but what about the chipset's interface capabilities? We scrutinize its RAID and USB 2.0 performance and compare it to other chipsets from Intel as well as Nvidia.
January 3, 2007 11:56:40 AM

Even though the ICH8 does not offer dual Gigabit Ethernet in its specification, most high-end vendors (such as Asus with the P5B series, 965 plus ICH8) bolt on a second Gig-E port to get the same dual-port operation the nForce has. While the second port actually sits on the PCI bus, that bus has enough bandwidth to support the 30-50 MB/sec that Gig-E achieves in Windows.

Also, I am happy to report that the transfer-rate diagrams and RAID performance shown in this report actually do bear out in real-world (not just lab) use. I bought a P5B specifically because the ICH8 is a newer chipset than the one on the "flagship" 975 boards, suspecting that Intel had even more time to work out the kinks and improve performance in the latest iteration of its silicon.

There has been talk in the forums here and elsewhere that the 975 is a better overall chipset, but my real-world experience does not bear that out. Also, overclocking the Core 2 Duo with the 965 is a breeze and a pleasure, as the chipset and CPU both scream for more load, take the load, and love every minute of it.

I don't believe anybody who opts for a 965 with the ICH8 will suffer any degradation in performance compared with any other C2D-compatible chipset.

Rig: Asus P5B-Deluxe/Wi-Fi, Core 2 Duo E6600 under Tt Blue Orb II, 2x1GB OCZ Platinum PC2-6400, eVGA 7600GT, 3x WD 3200KS in RAID 5. Chipset @ 360 MHz, CPU @ 3.24 GHz, RAM @ 900 MHz. Burned in stable for 48 hours each with ORTHOS 8K and Gromacs. Temps: MB 37, CPU 40 idle / 49 load. RAID 5 performance: 198 MB/sec reads, 129 MB/sec writes.
January 3, 2007 12:15:27 PM

I know this will depend on drive size, but does anyone have any stats on RAID 5 rebuild time? I attempted the Win XP RAID 5 tweak, and four 250GB drives took something like 9 hours to build the array. Testing to see whether the array worked took forever in my case. I'm pretty sure these chipsets still use the CPU for the parity calculations, so results should be similar to my experience.

Anyone have any data to share?
January 3, 2007 12:24:20 PM

Wow. What is the nVidia RAID bottleneck? Judging by their performance below the 120MB/s wall, if they could fix whatever is creating that wall they might outperform the Intel RAID setups.

It is pretty amazing how poor degraded-mode performance is with the Intel setups. Degraded RAID 5 should still beat a single drive, but this is designed for home use and not enterprise, so being able to limp along until the drive is replaced is acceptable functionality.
January 3, 2007 12:32:43 PM

There is a problem with the nForce results for RAID 0.

There must be a driver issue. I previously had four HDDs doing a minimum of 240 MB/s with a slope down. I thought this was an nForce 4 issue from using a SATA-to-single-UDMA-133 bridge, so that all ports are limited to 133 MB/s (which seems to come out to 115 MB/s after overhead?).

My setup before was 4x 80GB Maxtor 10 HDDs, and I was getting the proper performance of 4x 60 MB/s = 240-ish, with the slope you get as HDDs slow down further in toward the center of the disks (I will post the results I had before I replaced them).

I sold them for 4x Seagates, and now I seem to be limited to 115 MB/s again, a flat line like Tom's Hardware has shown.

This is based on the spec in my sig.
I'm going to Ghost my HDD again and mess with different RAID 0 chunk sizes.

My setup was beating 4x Raptors in tests, but the cluster size may have been set to 64K or even 128KB.
January 3, 2007 12:33:17 PM

Am I reading this right???

Quote:
ICH7 is superior to ICH8 in all of our I/O benchmarks and in most of the SATA throughput benchmarks.


Quote:
ICH8 is the real surprise, though, as its technical specifications do not read very different from what you will find for ICH7; there are 10 instead of eight USB 2.0 ports, and the Serial ATA connectivity has been expanded from four to six ports. Yet, its performance increased across the board. Its transfer performance slightly exceeds the benchmark results of ICH7 in almost all disciplines, it wins most of the I/O performance benchmarks and it does substantially better in our USB 2.0 bandwidth test.


"its" as in ICH7 or ICH8???

Which is it???

Also, are they saying that I can take two 160 GB Seagate HDs and put them in two different RAIDs?

So, are they saying that I can RAID them as RAID 0 at first and then create another partition and put that in RAID 1?

I'm so confused. Can someone explain this to me? Look at sig for computer specs.
January 3, 2007 1:04:31 PM

I thought that too... I think they made a mistake. It would appear so, looking at the benchmarks. The ICH8 wins nearly everything.

With my imminent purchase of a DS4 (with the ICH8R), I almost got a boner reading that review, especially the RAID 0 result! :p 
January 3, 2007 1:17:02 PM

I must have skipped over the Matrix RAID page. Just ignore this post.
January 3, 2007 1:52:21 PM

What utterly baffles me is the total lack of notice of the issue all nForce chipsets have had for quite some time with USB-based KVM switches, of which you can see some examples here:

http://forums.nvidia.com/index.php?showtopic=9269&hl=us...

http://www.homepcnetwork.com/feedbackf.htm

http://www.pricegrabber.com/rating_getprodrev.php/produ...

http://bc.whirlpool.net.au/forum-replies-archive.cfm/48...

I have two nForce-based systems in our lab, and none of the USB-based KVM switches have worked with them at all, forcing us to use PS/2-based switches, which is quickly becoming a non-option on newer motherboards. And while I've not seen anyone post about issues with the 680i series, just the fact that I've experienced this first-hand has soured me on ever buying an Nvidia nForce-based motherboard.
January 3, 2007 2:00:51 PM

Quote:
Even though the ICH8 does not offer dual Gigabit Ethernet in its specification, most high-end vendors (such as Asus with the P5B series, 965 plus ICH8) bolt on a second Gig-E port to get the same dual-port operation the nForce has. While the second port actually sits on the PCI bus, that bus has enough bandwidth to support the 30-50 MB/sec that Gig-E achieves in Windows.


Wrong; most of the boards I have with dual Gigabit use PCI Express x1, which is faster than "Gig-E".
January 3, 2007 2:07:02 PM

Quote:

Also, are they saying that I can take two 160 GB Seagate HDs and put them in two different RAIDs?


Read the part about Matrix RAID.
January 3, 2007 2:26:40 PM

I found the article a little contradictory at times as well. Near the end they say the 680i loses in almost all cases, yet their charts show the 680i mostly coming in between the two other chipsets, not losing to both.

It's almost as if several people piecemealed the article together without discussing each other's results before jamming it all together. Shrug.
January 3, 2007 3:06:39 PM

Clearly the results show the ICH8 is the hands-down performance winner.

On page 6 the article shows RAID 1 and RAID 0 on the same set of drives, but as separate partitions; this of course works only with Intel Matrix RAID.

I do not agree with the conclusion. It would seem to me that unless you are using SLI (and even then the 650i would perform better in games) there is no need to buy the 680i, and the same goes for the 975X (good for Crossfire support). So the conclusion should state that the 965 is the performance choice unless you use SLI or Crossfire (Xfire).

So one would think that sooner or later, if possible, someone would pair the 975 with the ICH8, and then you would have the best of both worlds for Crossfire.

As to the 680i being the best feature chipset, I do not agree, as it all depends on your needs. For example, if you like to play games and you want to watch TV on your machine, you will have a small issue with the lack of PCI slots on 680i boards. Though the features from a gaming perspective are the best (well, in theory), that comes at the cost of overall expandability. Once you put in two double-wide PCIe x16 cards and your physics GPU (which could be double-wide as well), you are left with either one useful slot or none. I do not count PCIe x1 as useful, since there are no x1 cards out that do not duplicate something already on the board.

So: 680i (extreme gamer, SLI)
650i (smart extreme gamers who know x8 is enough in SLI)
975X / RD600 (extreme gamers with Crossfire)
P965 (everyone else who wants to do more than game, and pay less too)
January 3, 2007 3:12:25 PM

Wow - your article fails to mention that in 1-4 weeks your RAID 5 will melt down :cry:  as has been posted many times in the Asus forums. I have built properly installed RAID systems (clean install, no overclocking, fresh drivers) for years, and all my RAID 5 systems have melted down or disintegrated :twisted: . All RAID 5-based systems failed within 1 month; all were based on Intel Matrix. In fact, a RAID 5/0 melted down within minutes of testing. The funny thing is Asus even recommends RAID 5, but their forums are filled with failed attempts. Please post :?:  if you have successfully built RAID 5 or RAID 5/0 systems.

Nvidia: I have built RAID 0/1 dual-RAID setups and they melted down within weeks too. These systems worked for a while, but then they just disappear or become unbootable and unrepairable.

I'd be interested to hear whether others have had this problem or solved it. I am currently running a 4-drive RAID 10 with XP plus a RAID 0 with Vista - dual RAID. I crashed the RAID 10 under Vista by simply defragging it with the XP program. This system seems fairly stable as long as you do not access one RAID from the other - which is not acceptable for resale systems.

As I stated, the consensus at the Asus forums is that Matrix RAID 5 is unstable and there is no solution - maybe THG knows something :idea: , or were these systems never tested for more than a few days or hours?
January 3, 2007 3:37:47 PM

I have had a stable RAID 1 array since January. It survived a power failure that fried the data on my other two drives. My array is built on a PCI Promise solution, though, not on Intel or Nvidia RAID.

My next PC will run RAID 1 off the Intel controller, so I will let you know in a month or so if it dies.

It would seem to me unwise to defrag a RAID array from one OS and then try to use it from another, if either OS is installed on that array. Playing with fire there, I would think.

RAIDing your OS drive would also seem to me to be a big risk (other than RAID 1). OSes are very picky about files suddenly not being where they expect them (physical location on disk), and even though RAID 5 might rebuild a file, if the OS cannot find it where it wants it in the first place, I see a crash coming very quickly.
January 3, 2007 3:57:28 PM

Responding to pshmid above: a RAID 1 is really just a drive with a copy. The only issue is that with some controllers, if you unplug one drive or one breaks, the controller gets confused about which is the first drive and which is the second.

RAID 0, 5 and 10 are totally different; they are all based on RAID 0 - that is, you are writing different information to each drive, which is a lot different from RAID 1. RAID 5 and 10 are variations of RAID 0. RAID 0 never melts down; the array may need to be rebuilt if the power is shut off wrong, but it always comes back. I have been building RAID 0 systems for years with 8212s and so on. I had one system that was constantly crashing (video card upgrade plus lack of power) which was a RAID 0. Eventually it did need a new boot sector - fixboot in the DOS repair utility. I ran sfc /scannow and defragged it, and it's running like new.

So these issues, or non-issues, are different from a complete meltdown and a crashed, non-repairable RAID. In one case the system was over a month old and had over 500 gigs of data added before it totally melted down.

I think my problem may be due to very small errors - it's really the only thing I can think of.
January 3, 2007 5:12:13 PM

The latest BIOS revision used for the test set-up was P20.

The current latest BIOS revision is at least P23... maybe P24 (I haven't checked over the holidays).

Given that these BIOS revisions targeted RAID arrays in general (yes, to fix specific problems) on the Nvidia chipsets, would it not be possible for the 115 MB/s figure to improve with fixed I/O instructions?

I am just a little hopeful here since my system was based off the eVGA 680i mobo.
January 3, 2007 5:23:58 PM

Seeing your sig... I wish I had money burning through my pocket like that! 8O
January 3, 2007 5:30:17 PM

Quote:

On page 6 the article shows RAID 1 and RAID 0 on the same set of drives, but as separate partitions; this of course works only with Intel Matrix RAID.


Is that safe??? 8O

I still don't quite understand why RAID 1 isn't faster than RAID 0. Or is it when it comes to reads and not writes? Say, gaming-wise?
January 3, 2007 5:56:23 PM

I have to be missing something.

I have gone over the graphs again and again. To me it looks like the ICH8 leads the pack in almost every case. The best the Nvidia can do is occasionally land between the ICH8 and ICH7. In the RAID 0 transfer diagram and I/O performance tests, it appears Nvidia is down by 60-70%.

I was all set to pull the trigger on a 680i, but with numbers like these, no way. I do a lot of database work and cannot afford to take this sort of a hit. I don't see how Nvidia can be "recommended". Is this April Fools' a bit early?

Like I said, what am I missing here? I need to go clean my glasses.
January 3, 2007 6:42:32 PM

'Tis wise to use more than one source when looking for hardware to buy :wink:
January 3, 2007 6:49:13 PM

RAID 0 writes data to two (or more) different drives by breaking each file up into pieces. The size of the pieces is determined when you set up the array. This setup is faster than RAID 1 because you are only writing part of a file to each drive. In RAID 1 you are writing the complete file to both drives, so RAID 1 is no faster at writing, and sometimes slower, than a single drive. RAID 0 is faster at reading a file because each drive only has to read part of the file. In RAID 1 reading can be faster than a single drive, as each drive could read a part of the file, but it is still not faster than RAID 0.

With RAID 0 you get performance. With RAID 1 you get security.

In games the execution of the game is not faster, but loading data from disk is faster in RAID 0 than with a single drive or RAID 1.
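
To make the striping-versus-mirroring difference concrete, here is a minimal Python sketch. It is purely an illustration (no real controller works at this level), and the 4-byte stripe size is an arbitrary assumption:

Code:
# Illustration only: how RAID 0 striping and RAID 1 mirroring
# distribute a 16-byte "file" across two drives.

def raid0_write(data, drives, stripe=4):
    # Alternate fixed-size stripes across the drives.
    for i in range(0, len(data), stripe):
        drives[(i // stripe) % len(drives)] += data[i:i + stripe]

def raid1_write(data, drives):
    # Write the complete data to every drive.
    for d in drives:
        d += data

file = b"ABCDEFGHIJKLMNOP"

striped = [bytearray(), bytearray()]
raid0_write(file, striped)
print([bytes(d) for d in striped])   # each drive wrote only 8 bytes

mirrored = [bytearray(), bytearray()]
raid1_write(file, mirrored)
print([bytes(d) for d in mirrored])  # each drive wrote all 16 bytes

Each RAID 0 drive does half the write work, while each RAID 1 drive does all of it - which is exactly why mirrored writes can never beat a single disk.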
January 3, 2007 6:57:04 PM

You missed nothing. It is a POS, except for uber-gaming with SLI, and even then look to the 650i instead.
January 3, 2007 6:57:13 PM

*sighs* I want to do RAID with just two hard drives, but I want it to be fast and secure. The only time I had a bad hard drive was because I was an idiot and tried to plug in a hard drive while the computer was on. I ended up shorting the drive's power circuit, thus giving me a 60GB paper weight.

Hmm...

Say, this just came to my mind... why didn't Intel make the 965 first, before the 975??? If they had, like someone else stated in this forum, a 975 chipset with the ICH8R would be superb!!!

*sighs* I'll never get companies like Intel, ATI/AMD, and nVidia. (screw VIA and SIS).
January 3, 2007 7:03:47 PM

Well, you can get speed and security: use RAID 1+0. This requires setting up two RAID 1 arrays, then using those two arrays as a two-disk RAID 0 array. This is done in our datacenter and gives the best mix of performance and security.

Given the post by DragonSlayer, there may be issues with stability over time. I would suggest, if you want speed, putting the OS on a Raptor and the rest of the data on a RAID 1+0 set (four additional drives required).

Personally, load time is not a major factor for me once on SATA 2. So I would stick with RAID 1 and do regular backups of the super-important stuff.
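
For anyone unfamiliar with how the 1+0 nesting works out, here is a back-of-envelope Python sketch; the four 320GB drives are an assumed example, not a recommendation:

Code:
# RAID 1+0 as described above: pair the drives into mirrors,
# then stripe across the mirrored pairs. Sizes are hypothetical.
drives_gb = [320, 320, 320, 320]

mirror_a = min(drives_gb[0], drives_gb[1])  # a mirror holds one drive's worth
mirror_b = min(drives_gb[2], drives_gb[3])

usable = mirror_a + mirror_b                # RAID 0 across the two mirrors
print(f"raw {sum(drives_gb)} GB -> usable {usable} GB")
# Any one drive (even two, if they are in different mirrors) can fail
# without taking down the array - the "security" half of the trade.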
January 3, 2007 7:07:17 PM

Cool beans, we have the exact same rig, except for the CPU, RAM, GPU(s), and case! I am not sure about the PSU though; I have the Silencer 750 Quad.
January 3, 2007 7:10:25 PM

So are you saying that with SATA 2 and RAID 1, that is equivalent to running RAID 0 with SATA 1?
January 3, 2007 7:21:04 PM

"I still don't quite understand why RAID 1 isn't faster than RAID 0. Or is it when it comes to read and not write. Say for gaming wise?"

Data is scattered on the drive - your system must constantly seek for data before it can read it, and since you have many operations going on at once, the drive is constantly alternating. RAID streamlines the seek/read process: RAID 0 writes blocks of data to different drives, so while one drive is seeking data, another drive is reading data. The blocks can be made in different sizes - a retailer whose server needs small chunks of data might use a 4-16K size, while a gamer with lots of music would use 64-128K chunks. The best size changes depending on the controller and the number of drives.
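
If it helps, here is a small Python sketch of how a striped layout maps logical offsets to drives for different chunk sizes; the function is hypothetical illustration logic, not any controller's actual algorithm:

Code:
def locate(offset, chunk_size, n_drives):
    # Map a logical byte offset to (drive index, offset within that drive).
    chunk = offset // chunk_size        # which chunk the byte falls into
    drive = chunk % n_drives            # chunks rotate across the drives
    row = chunk // n_drives             # stripe row: depth on each drive
    return drive, row * chunk_size + offset % chunk_size

# A 256 KB file: 4 KB chunks scatter it over all four drives,
# while 128 KB chunks leave each drive one long sequential run.
for chunk_kb in (4, 128):
    touched = {locate(off, chunk_kb * 1024, 4)[0]
               for off in range(0, 256 * 1024, 4096)}
    print(f"{chunk_kb:>3} KB chunks -> drives used: {sorted(touched)}")

Small chunks split even modest requests across all spindles; large chunks keep each request mostly sequential on one drive - which is why the sweet spot depends on your workload, controller, and drive count.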

RAID 1 is one drive; the second drive is a copy that does nothing but back up the first - so if anything it's slower than one drive, since it must also run the RAID.

If you're looking for fast reads/writes, keep in mind:

If you want a really fast system you need to set up 3-4 drives in RAID 0; if those drives are 10K drives, speed is increased even more since the seek time is lower. I used a 3-Raptor RAID and it is very fast.

Two 7.2K SATA drives in RAID seem only slightly faster, but if you RAID 3 Raptors you notice a huge decrease in boot and load times.

If you use a dedicated card you get better results regardless - search around the net; there is a really good article somewhere comparing RAID cards to chipset RAID - cards are a lot faster.

warpedsystems "the need4speed"
January 3, 2007 7:21:22 PM

No, not at all. If you run RAID 0 on SATA1 and switch to RAID 1 on SATA2, it will be slower. The bandwidth difference only counts if your PC can handle the throughput, and my current machine cannot, so I would not see a benefit. I went from IDE drives to SATA1 and found no reason to go for RAID 0, as my load times were fine for me - I could get a drink while I waited. I will switch to SATA2 shortly, but I still see no reason to switch to RAID 0. Then again, I am not fixated on load times. Also, as with all new things, it seems fast at first, but once you use it for a month or so you will again not be happy with load times, and the overall performance will no longer seem fast.
January 3, 2007 7:38:33 PM

As I said above, I do a lot of database work. When I run queries that have to chew through a couple of million records, I need all the speed I can get. That is the only reason I use RAID 0. As for load/boot times, I couldn't care less. Trust me, that is the least part of my day.

I understand the security issue with RAID 0. I have only had an array crap out once (my stupid fault). But it was painful enough that I now back everything up on two different computers and a network drive. Reconstruct 50+ GB of data sometime and you learn real fast!

If indeed the ICH8 is 60%+ faster than the Nvidia offering, this becomes a no-brainer!
January 3, 2007 8:56:20 PM

Quote:
So are you saying that with SATA 2 and RAID 1, that is equivalent to running RAID 0 with SATA 1?

Just to be sure we're both on the same page, let me explain what RAID 0 and RAID 1 both are. Say you're downloading some pictures of... whatever you download pictures of. Let's use Ms. Alessandra Ambrosio as a nice example. You download a picture of her, and where does that data go? Why, to your hard drive(s), of course! You already knew that, but now let's show how the different RAIDs store the picture of Alessandra. Let's assume you download a pic of her that takes up four blocks of data space on your hard drive...

[diagram: the picture striped half-and-half across the two drives in RAID 0]
Here you see Alessandra split across the two hard drives in RAID 0. How is this fast? Well, each drive is only holding half of Alessandra's picture, which means both disks are writing their share of the photo data at the same time. So your write speed (theoretically) doubles because you have two disks to split the data across. Your read speed also increases because the computer can grab data from both disks at once. Now on to RAID 1!

[diagram: the same picture mirrored in full on each drive in RAID 1]
Notice that there are two copies of the same picture - one on each drive. This is why RAID 1 is more secure: if one hard drive fails, the other still contains Ms. Ambrosio's photo completely intact. However, this security comes at a cost. Even though you have two hard drives, you only get the capacity of one, and because you have to write the entire picture onto both drives, write speed has not increased at all. Read speed can still improve, because you can, say, read her eyes from disk 0 and her hair from disk 1 at the same time.



And SATA1 ~= 150 MB/s interface speed (bandwidth) per drive, SATA2 ~= 300 MB/s. Since hard drives can't supply data at anywhere near these speeds anyway, it doesn't make much difference which is used.
January 3, 2007 9:14:56 PM

I only run RAID 10/0 as a test rig - so I can build them for other people. The first RAID is 10, the second is 0. I do not think that is the best way to go.

RAID 10 has crashed for other people even though mine is still up. Both Nvidia and Intel chipset RAID 10 setups have crashed - nForce5 and ICH7. Again, I think it's small memory errors - I was hoping someone would read my posts, explain why, and tell me.

I think the best system is a normal non-RAID drive for your OS (even better, a Raptor), then a second RAID array for data. You back up your primary (single) drive onto the RAID or, better yet, an external drive.

I used to build gaming systems with an IDE OS drive and then a secondary RAID for gaming. That's 3 drives: 1 OS and 2 in gaming RAID 0.

A new system I am going to test is as follows (I have not built it yet, only because I am lazy!): a 5-drive system with 3 RAID 0 drives and 2 RAID 1 drives. On new boards like the Asus P5W-DH, the primary SATA controller (ICH8) has a boot toggle on/off in the BIOS. The secondary RAID controller, JMicron (I think), also has a boot on/off. You can set up a fast RAID with 3 drives in RAID 0 and turn off boot for the ICH8; the JMicron controller should then boot up fine. You can even set up a RAID 1 on the JMicron controller. Five low-cost Seagate SATA drives ($60-70 at Newegg for 160-250 gig) get you a 3-drive RAID 0 and a 2-drive RAID 1.

The problem with the RAID 10/0 is that the RAID 0 is the secondary RAID, so it sits on the inside of the drive platter. The outside is the fastest since it has more surface area per rotation, so your RAID 0 uses the slower part of the drive. And if you reverse it, the RAID 10 uses the slower part of the drive. That's why I like the 5-drive setup above.

WarpedSystems "the need4speed"
January 3, 2007 9:16:32 PM

Your RAID example rocks! But I am afraid some people will think that's how photos really get stored on the drive!
January 4, 2007 1:33:58 AM

So the question regarding RAID 0 on the 680i has not been answered yet.

How does this affect real-world performance? Considering its sequential read/write speeds, I would imagine any type of data transfer would suffer from this.

Is this a problem with NCQ on the motherboard conflicting with NCQ on the RaptorX hard drives?

Anybody? Bueller?
January 4, 2007 12:22:54 PM

Thank you. I knew the difference between the RAIDs and the difference between SATA 1 and SATA 2. I was just wondering if you could take two hard drives, partition each so that you have two different partitions, and then make one partition RAID 0 and the other partition RAID 1.

As of now, one Seagate 160 is partitioned into 3 parts: Windows XP, Programs, and Movies. The second Seagate 160 is partitioned into 2 parts: Windows Vista 64-bit and SUSE 10.1 64-bit.
January 4, 2007 2:57:39 PM

You can do it on the Intel controller, but the controller sets that up for you - it's the Matrix RAID feature.
January 4, 2007 2:58:21 PM

Quote:
I was just wondering if you could take two hard drives, partition each so that you have two different partitions, and then make one partition RAID 0 and the other partition RAID 1.

As of now, one Seagate 160 is partitioned into 3 parts: Windows XP, Programs, and Movies. The second Seagate 160 is partitioned into 2 parts: Windows Vista 64-bit and SUSE 10.1 64-bit.

I'd reckon you could do this by following their "Intel Matrix RAID technology" plan on page six of this article. Here's another illustration (yay pictures!):

[diagram: two drives, each split into partitions P1 and P2; the P1s mirrored in RAID 1, the P2s striped in RAID 0]
You'll notice that you have two drives split into two partitions each. Drive 1 has partitions "P1" and "P2" (as does drive 2). Since the computer sees each partition as a different physical drive, I don't see any problem RAID1ing the P1 partitions together and RAID0ing the P2 partitions together.

What would happen if a hard drive fails, though? Here's an illustration of what you would have if, say, hard drive 2 fails:

[diagram: hard drive 2 has failed - the striped P2 data is lost, the mirrored P1 data survives on drive 1]
Since RAID 0 doesn't have any backup/redundancy, you'll end up losing 3/4 of your storage space if a hard drive dies. Which is fine, if that's what you have in mind. The stuff you keep in your RAID 1 partition will still be safely stashed away. :p 

Now, seeing as you have five partitions among two drives, it would be a bigger juggling act for you, because it is best to keep all the partitions in the RAIDs the same size (otherwise you'll waste hard drive space). I don't know if you would benefit from this at all, seeing as you have three different OSes in different partitions - it might be more hassle than it's worth to set up your partitions to RAID 0 some of them and RAID 1 others. :|
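
To put rough numbers on that juggling act, here is a quick Python sketch using assumed sizes (a 40 GB mirrored slice on each 160 GB drive - pick your own split):

Code:
drive_gb = 160        # e.g. the two Seagate 160s discussed above
raid1_slice = 40      # GB per drive given to the mirrored (RAID 1) volume
raid0_slice = drive_gb - raid1_slice  # the remainder is striped (RAID 0)

usable_safe = raid1_slice             # mirror: two copies count once
usable_fast = 2 * raid0_slice         # stripe: both slices add up

print(f"{usable_safe} GB safe + {usable_fast} GB fast = "
      f"{usable_safe + usable_fast} GB usable of {2 * drive_gb} GB raw")
# If either drive dies, the striped volume is gone; only the 40 GB mirror survives.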
January 4, 2007 3:07:50 PM

Regarding the nVidia RAID bottleneck:

2 drives in simple RAID 0 should be only slightly affected: the drives have a maximum output of about 50-70 MB/s, so two drives can only put out 100-140 MB/s max or so, and the controller's 120 MB/s wall sits barely below that.

The 4 drives used in the test are capable of blowing well past the 120 MB/s wall that the nVidia setup showed, maxing out at close to 280 MB/s. That setup would obviously be very limited by the nVidia wall.

Three things to consider:

Other users have posted that they did not encounter this 120MB/s wall. The out of date BIOS used in benchmarking could be a factor, as could incompatibilities with this specific HD.

Most users will not be putting 4 HDs into a RAID array for home use, so they will not encounter this bottleneck.

Even if you do have 2 Raptors in RAID 0 benchmarking at 145 MB/s and being bottlenecked to 120 MB/s, very few tasks on a computer cause a sustained read at max throughput for any length of time. To sustain those throughputs you would have to have another RAID array in the same box and be copying large files between the two arrays, or be running very large queries that require table scans in a DB, or a few other specialized tasks, to notice the ~20% difference in max throughput.
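
Rough arithmetic on how much the wall actually costs, with an assumed 70 MB/s per drive (real per-drive rates vary):

Code:
WALL = 120  # MB/s ceiling reported for the nVidia southbridge

for n_drives, per_drive in ((2, 70), (4, 70)):
    ideal = n_drives * per_drive     # sum of the drives' raw rates
    capped = min(ideal, WALL)        # what the capped controller passes
    lost = 100 * (ideal - capped) / ideal
    print(f"{n_drives} drives: ideal {ideal} MB/s, "
          f"capped {capped} MB/s, {lost:.0f}% lost")
# 2 drives: 140 -> 120, about 14% lost; 4 drives: 280 -> 120, about 57% lost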
January 4, 2007 3:46:34 PM

Sweet! Thanks for the great info *bookmarks page* I'll try it later when I'm not so nervous about it.

I think I'll need bigger hard drives to do what I want, though. Like a 500 gig. That way, I can have 3 partitions with an OS on each one, 1 partition for XP programs (probably not smart to store XP and Vista programs on the same partition :tongue:) and then one more partition for DVD encoding. *sighs* I wish I had another 250 GB hard drive right now so that I could go back to FAT32, the only reason being Linux use. Then again, I haven't read up on whether FAT32 is supported by Vista or not, but I will.

Once again, thanks for all the info =)
January 4, 2007 3:47:37 PM

Quote:
Regarding the nVidia RAID bottleneck:

Even if you do have 2 Raptors in RAID 0 benchmarking at 145 MB/s and being bottlenecked to 120 MB/s, very few tasks on a computer cause a sustained read at max throughput for any length of time. To sustain those throughputs you would have to have another RAID array in the same box and be copying large files between the two arrays, or be running very large queries that require table scans in a DB, or a few other specialized tasks, to notice the ~20% difference in max throughput.


The problem with this assertion is that even smaller reads of only a few MB will be slower on a 120 MB/sec-limited setup. Encoding a 20 gig HD movie or something similar will eventually have to read all 20 gigs of data, then write 4 or 5 gigs or more back to the drive. If it takes 100,000 reads to get through that entire 20 gigs, even a tiny time saving per read adds up.
January 4, 2007 3:50:57 PM

So would doing a RAID 0 with two Raptors (SATA 1) be equivalent to RAID 0 with two SATA 2 hard drives, since the Raptors have lower average latency?
January 4, 2007 5:22:15 PM

Quote:
The problem with this assertion is that even smaller reads of only a few MB will be slower on a 120 MB/sec-limited setup. Encoding a 20 gig HD movie or something similar will eventually have to read all 20 gigs of data, then write 4 or 5 gigs or more back to the drive. If it takes 100,000 reads to get through that entire 20 gigs, even a tiny time saving per read adds up.


True, but now you have additional factors which mean you will not see the full ~20% performance increase. With a seek time of 4.6 ms (Raptor) factored in for the 100,000 reads, you're going to have:

ICH8:
20 GB / 145 MB/s = 138 secs of reading
100,000 × 4.6 ms = 460 secs of seeking
598 seconds total HD activity

nVidia:
20 GB / 115 MB/s = 174 secs of reading
100,000 × 4.6 ms = 460 secs of seeking
634 seconds total HD activity

That is a 36-second difference, but only about 6% faster. If the data is in the middle of the platter, or in the inner area, this 6% shrinks or disappears. And this isn't perfect math either, because well-implemented RAID systems can decrease perceived seek times and poorly implemented ones can increase them.
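
For anyone who wants to rerun that arithmetic with their own drive specs or workload, here is the same back-of-envelope model in a few lines of Python (the 20 GB / 100,000-read workload is the assumption from the post above):

Code:
def hd_busy_seconds(size_mb, rate_mb_s, reads, seek_ms):
    # Sequential transfer time plus per-read seek time.
    return size_mb / rate_mb_s + reads * seek_ms / 1000.0

ich8   = hd_busy_seconds(20000, 145, 100000, 4.6)  # ~598 s
nvidia = hd_busy_seconds(20000, 115, 100000, 4.6)  # ~634 s
print(f"ICH8 {ich8:.0f} s, nVidia {nvidia:.0f} s, "
      f"delta {(nvidia - ich8) / nvidia:.1%}")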

It's kinda like RAM timings. Some people brag about the extra $300 they spent getting timings that give them a few percent in memory benchmarks, but I'd rather have that $300 for something more useful. If the nVidia chipset has features you want or need and you intend to get two Raptors in RAID 0, you should know that you might be losing a few percent of your HD throughput; but when you post a question about your system on the forums and some troll replies, "THAT NvIDIA BOARD IS KILLING YOUR HD PERF!!! I GOT A 975 WIT ICH8 AND TWO 60GB SAMSUNG 7200S THAT PWN IT!!!", you'll know the truth.
January 4, 2007 5:44:04 PM

Quote:
So would doing a RAID 0 with two raptors (SATA 1) be equivalent to RAID 0 with two SATA 2 hard drives since the raptors have lower average latency?


SATA vs SATA2 will only show an increase in performance when emptying the drive's cache, since no single drive can even sustain 100 MB/s yet. Raptors have crazy-fast seek times and the highest throughput of any common drive, so they will definitely be faster on uncached large reads. On small, cached, and partially cached reads it is a lot harder to tell.

Of course, a 4x 7200 RPM RAID setup will generally beat a 2x Raptor RAID setup, so bang for the buck (not including electricity bills), that would probably be the best bet.
January 4, 2007 6:12:24 PM

Quote:
Sweet! Thanks for the great info *bookmarks page* I'll try it later when I'm not so nervous about it.

I think I'll need bigger hard drives to do what I want, though. Like a 500 gig. That way, I can have 3 partitions with an OS on each one, 1 partition for XP programs (probably not smart to store XP and Vista programs on the same partition :tongue:) and then one more partition for DVD encoding. *sighs* I wish I had another 250 GB hard drive right now so that I could go back to FAT32, the only reason being Linux use. Then again, I haven't read up on whether FAT32 is supported by Vista or not, but I will.

Once again, thanks for all the info =)

I'd say get more hard drives instead of bigger ones if you're gonna have various RAIDs. :D  If you have 4 drives then you can have RAID 1 on two of them and RAID 0 on the other two; then you won't have to worry about partitioning any of them (unless you really want to).

From what I've read, Vista isn't able to run on a FAT32 filesystem, but it has "support" for it, whatever that means. I'll have to read up on this subject as well. :wink:


(100th post FTW!!!!!!!!!!!!! )
January 4, 2007 6:52:19 PM

I also noticed the outdated BIOS used on the NVIDIA board. I'd really like to see some results with the current (P23) version. And soon... I'm getting down to decision time on my next purchase!
January 4, 2007 7:06:36 PM

Thanks mr_fnord. I guess I'll just stick with my setup.

db101, I don't know if you know, but the ASUS P5W DH Deluxe has an odd configuration when it comes to SATA ports. That's why I'll probably never use more than 5.

Possibilities in the future: 2 Hard Drives for XP/Vista in RAID 0, 2 Hard Drives for Data in RAID 1, and then 1 Hard Drive for Linux.

That's assuming the two orange ports on the bottom will work right, since they're using JMicron or whatever instead of the Intel Matrix Storage :?

NOTE: I know the RAIDs have to be on the same chip and can't be spread across controllers - e.g., having hard drive 1 on the Intel chipset RAIDed with hard drive 2 on the JMicron chipset will not work.

I've read some forums where people couldn't have 2 RAIDs because of issues between the JMicron and the Intel Matrix Storage. *shudders*

I guess I should just be glad that the computer runs right now without any problems *knocks on wood*
January 4, 2007 7:27:14 PM

The use of an older BIOS for the testing has been mentioned previously. Does anyone have any good information on whether the "wall" for the Nvidia southbridge is a problem with the southbridge itself, or possibly a BIOS problem that "may" be resolved in future releases?
January 5, 2007 1:47:07 AM

Am I reading something wrong?

The RAID 5 numbers are way better than the RAID 0+1 transfers on all the chips in the summaries.

I want Matrix RAID so I can do RAID 0+1 for the OS and swap, and RAID 5 on another partition.

probably 4x400 or 4x320GB, maybe even 4x500GB.

But if RAID 0+1 can't beat the RAID 5, I might as well do the whole thing in RAID 5.

I'd like to avoid RAID0 across four drives.

RAID 0+1 should be equal to RAID 0 on reads, and half as fast on writes. There are no parity computations, so it should be faster than RAID 5.
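
That expectation pencils out like this - a naive, first-order Python sketch with an assumed 60 MB/s per drive and four drives, ignoring the controller quirks that dominate the article's measured results:

Code:
R, n = 60, 4   # assumed per-drive sequential rate (MB/s) and drive count

expected = {
    "RAID 0   read/write": n * R,        # every spindle carries unique data
    "RAID 0+1 read":       n * R,        # either half of a mirror can serve reads
    "RAID 0+1 write":      (n // 2) * R, # every byte is written twice
    "RAID 5   read":       (n - 1) * R,  # parity blocks carry no user data
    "RAID 5   write":      (n - 1) * R,  # best case, full-stripe writes only,
                                         # and only if the parity math keeps up
}
for level, rate in expected.items():
    print(f"{level:<20} ~{rate} MB/s")

If the measured RAID 5 beats the measured 0+1, that points at the implementation rather than the math.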
January 5, 2007 7:28:43 AM

Quote:
So the question regarding RAID 0 on the 680i has not been answered yet.

How does this affect real-world performance? Considering its sequential read/write speeds, I would imagine any type of data transfer would suffer from this.

Is this a problem with NCQ on the motherboard conflicting with NCQ on the RaptorX hard drives?

Anybody? Bueller?


Hello, I'm French.
I have an EVGA 680i motherboard and I have 3 Raptor 74 GB disks in RAID 0.
I found the solution to get past the 110 MB/s limit:
you must disable "enable read caching" for each disk of your RAID 0 in the Serial ATA controller settings.
Sorry for my limited English.
January 5, 2007 9:45:34 PM

I'm currently looking into building a couple of new systems for Vista; any information on how well these chipset-based RAID setups work under it?