The Southbridge Battle: nforce 6 MCP vs. ICH7 vs. ICH8

pschmid

Distinguished
Dec 7, 2005
333
0
18,780
The nforce 680i may be the enthusiast's darling, but what about the chipset's interface capabilities? We scrutinize its RAID and USB 2.0 performance and compare it to other chipsets from Intel as well as Nvidia.
 

sucre

Distinguished
Nov 12, 2005
11
0
18,510
Even though the ICH8 does not offer dual Gigabit Ethernet in its specification, most high-end vendors (such as Asus with the P5B series, 965 plus ICH8) bolt on a second Gig-E port for the same dual-port operation the nForce has. While the second port actually sits on the PCI bus, that bus has enough bandwidth to support the 30-50 MB/s that Gig-E typically delivers in Windows.
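
A quick back-of-the-envelope check on that claim (a sketch in Python; the PCI and wire-speed figures are my own assumptions, the 30-50 MB/s range is the real-world figure mentioned above):

```python
# Rough check: can a Gigabit Ethernet NIC on the classic PCI bus keep up?
# Assumed figures: 32-bit/33 MHz PCI peaks at ~133 MB/s in theory,
# with maybe ~100 MB/s usable after arbitration and protocol overhead.

PCI_USABLE_MBPS = 100               # assumption, not a measured value
GIGE_WIRE_SPEED_MBPS = 125          # 1000 Mbit/s divided by 8
GIGE_REALWORLD_MBPS = (30, 50)      # typical Windows transfer rates cited above

def pci_headroom(consumer_mbps, bus_mbps=PCI_USABLE_MBPS):
    """Fraction of usable PCI bandwidth left for everything else on the shared bus."""
    return max(0.0, 1.0 - consumer_mbps / bus_mbps)

for rate in GIGE_REALWORLD_MBPS:
    print(f"Gig-E at {rate} MB/s leaves ~{pci_headroom(rate):.0%} of the PCI bus free")

# At full wire speed a single NIC would eat essentially the whole shared bus:
print(f"Gig-E at wire speed leaves ~{pci_headroom(GIGE_WIRE_SPEED_MBPS):.0%} free")
```

So the quoted 30-50 MB/s fits comfortably, but a NIC pushing near wire speed would be fighting every other PCI device for the same shared bus.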

Also, I am happy to report that the transfer-rate diagrams and RAID performance shown in this report do bear out in real-world (not just lab) use. I bought a P5B specifically because the ICH8 is a newer chipset than the one found on the "flagship" 975 boards, suspecting that Intel has had even more time to work out the kinks and improve performance with the latest iteration of its silicon.

There has been talk in the forums here and elsewhere that the 975 is a better overall chipset, but my real-world experience does not bear that out. Also, overclocking the Core 2 Duo with the 965 is a breeze and a pleasure, as the chipset and CPU both scream for more load, take the load, and love every minute of it.

I don't believe anybody who opts for a 965 with the ICH8 will suffer any degradation in performance compared with any other C2D-compatible chipset.

Rig: Asus P5B-Deluxe/Wi-Fi, Core 2 Duo E6600 under a Tt Blue Orb II, 2x 1 GB OCZ Platinum PC2-6400, eVGA 7600GT, 3x WD 3200KS in RAID 5. Chipset @ 360 MHz, CPU @ 3.24 GHz, RAM @ 900 MHz. Stable, burned in for 48 hours each with ORTHOS 8K and GROMACS. Temps: MB 37°C, CPU 40°C idle / 49°C load. RAID 5 performance: 198 MB/s reads, 129 MB/s writes.
 

icepop456

Distinguished
Nov 27, 2006
19
0
18,510
I know this will depend on drive size, but does anyone have any stats on RAID 5 rebuild time? I attempted the Win XP RAID 5 tweak and four 250 GB drives took something like 9 hours to build the array, so testing whether the array even worked took forever in my case. I'm pretty sure these chipsets still use the CPU for the parity calculations, so they should be similar to my experience.
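
For reference, the parity work that host-based (chipset) RAID 5 pushes onto the CPU is just a block-wise XOR across the stripe; a simplified sketch, not Intel's or Nvidia's actual driver code:

```python
# Simplified RAID 5 math: the parity chunk is the XOR of the data chunks in a stripe.
# A rebuild must read every surviving drive and XOR the chunks back together, which is
# why array size (and CPU speed, on host-based RAID) dominates rebuild time.

def xor_chunks(chunks):
    """XOR a list of equally sized byte chunks together."""
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, byte in enumerate(chunk):
            out[i] ^= byte
    return bytes(out)

stripe = [b"\x11" * 4, b"\x22" * 4, b"\x33" * 4]   # three data drives, toy 4-byte chunks
parity = xor_chunks(stripe)                        # what the controller writes to drive 4

# Lose drive 2: its chunk comes back by XOR-ing everything that survived.
rebuilt = xor_chunks([stripe[0], stripe[2], parity])
assert rebuilt == stripe[1]
print("rebuilt chunk:", rebuilt.hex())
```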

Anyone have any data to share?
 

mr_fnord

Distinguished
Dec 20, 2005
207
0
18,680
Wow. What is the nVidia RAID bottleneck? Judging by their performance below the 120MB/s wall, if they could fix whatever is creating that wall they might outperform the Intel RAID setups.

It is pretty amazing how poor degraded-mode performance is with the Intel setups. Degraded RAID 5 should still beat a single drive, but this is designed for home use and not the enterprise, so being able to limp along until the drive is replaced is acceptable functionality.
 

leexgx

Distinguished
Feb 26, 2006
134
0
18,680
There is a problem with the nForce results for RAID 0.

There must be a driver issue, as I have had four HDDs doing a minimum of 240 MB/s with the usual downward slope. I thought this was an nForce 4 issue from using a SATA-to-single-UDMA/133 bridge, so that all ports are limited to 133 MB/s (which comes out to about 115 MB/s with overhead?).

My setup before was 4x 80 GB Maxtor DiamondMax 10 HDDs, and I was getting the proper performance of 4x 60 MB/s = roughly 240 MB/s, with the slope you get as the drives slow down toward the inner tracks (I will post the results I had before I replaced them).

I sold them for 4x Seagates, and now I seem to be limited to 115 MB/s again: a flat line like Tom's Hardware has shown.
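
The arithmetic behind that observation, as a tiny sketch (per-drive and cap figures are the ones from this post, not measured by the article):

```python
# Expected RAID 0 sequential throughput vs. a shared upstream bottleneck.
# ~60 MB/s per drive on the outer tracks, ~115 MB/s usable if everything funnels
# through a UDMA/133-class link after overhead (the suspicion described above).

def raid0_rate(drives, per_drive_mbps, shared_cap_mbps=None):
    """Sum of the member drives, clipped by any shared link in the path."""
    total = drives * per_drive_mbps
    return total if shared_cap_mbps is None else min(total, shared_cap_mbps)

print(raid0_rate(4, 60))         # 240 MB/s: the sloped curve you would expect
print(raid0_rate(4, 60, 115))    # 115 MB/s: the flat line in the charts
```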

This is based on the spec in my sig.
I am going to Ghost my HDD again and mess with different RAID 0 chunk sizes.

My setup was beating 4x Raptors in tests, but the cluster size may have been set to 64 KB or even 128 KB.
 

Eurasianman

Distinguished
Jul 20, 2006
883
0
19,010
Am I reading this right???

ICH7 is superior to ICH8 in all of our I/O benchmarks and in most of the SATA throughput benchmarks.

ICH8 is the real surprise, though, as its technical specifications do not read very different from what you will find for ICH7; there are 10 instead of eight USB 2.0 ports, and the Serial ATA connectivity has been expanded from four to six ports. Yet, its performance increased across the board. Its transfer performance slightly exceeds the benchmark results of ICH7 in almost all disciplines, it wins most of the I/O performance benchmarks and it does substantially better in our USB 2.0 bandwidth test.

"its" as in ICH7 or ICH8???

Which is it???

Also, are they saying that I can take two 160 GB Seagate HDs and put them in two different RAID arrays?

So, are they saying that I can RAID them as RAID 0 at first and then create another partition and put that in RAID 1?

I'm so confused. Can someone explain this to me? Look at sig for computer specs.
 

plankmeister

Distinguished
Sep 7, 2006
232
0
18,680
I thought that too... I think they made a mistake. It would appear so, looking at the benchmarks. The ICH8 wins nearly everything.

With my imminent purchase of a DS4 (with the ICH8R), I almost got a boner reading that review, especially the RAID 0 result! :p
 

pjladyfox

Distinguished
Jan 3, 2007
2
0
18,510
What utterly baffles me is the complete lack of any mention of the issue all nForce chipsets have had for quite some time with USB-based KVM switches, some examples of which you can see here:

http://forums.nvidia.com/index.php?showtopic=9269&hl=usb+kvm

http://www.homepcnetwork.com/feedbackf.htm

http://www.pricegrabber.com/rating_getprodrev.php/product_id=8336492/id_type=M

http://bc.whirlpool.net.au/forum-replies-archive.cfm/481926.html

I have two nForce-based systems in our lab and none of the USB-based KVM switches have worked with them at all, forcing us to use PS/2-based switches, which is quickly ceasing to be an option on newer motherboards. And while I've not seen anyone post about issues with the 680i series, just the fact that I've experienced this first-hand has soured me on ever buying an Nvidia nForce-based motherboard.
 

Crashman

Polypheme
Former Staff
Even though the ICH8 does not offer dual Gigabit Ethernet in its specification, most high-end vendors (such as Asus with the P5B series, 965 plus ICH8) bolt on a second Gig-E port for the same dual-port operation the nForce has. While the second port actually sits on the PCI bus, that bus has enough bandwidth to support the 30-50 MB/s that Gig-E typically delivers in Windows.

Wrong: most of the boards I have with dual Gigabit use PCI Express x1, which is faster than Gig-E.
 

Talon

Distinguished
Apr 13, 2004
531
0
18,980
I found the article a little contradictory at times as well. Near the end they say the 680i loses in almost all cases, yet their charts show the 680i mostly coming in between the two other chipsets, not losing to both.

It's almost as if several people pieced the article together without discussing each other's results before jamming it all together. Shrug.
 

MarcelJV

Distinguished
May 30, 2006
26
0
18,530
Clearly the results show the ICH8 is the hands-down performance winner.

The article shows on page 6 RAID 1 and RAID 0 on the same set of drives but as separate partitions; this, of course, is only possible with Intel Matrix RAID.
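
For anyone puzzled by that page: Matrix RAID carves each disk into slices and builds a different RAID level over each slice. In capacity terms it works out roughly like this (a sketch with made-up slice sizes, not Intel's actual metadata layout):

```python
# Intel Matrix RAID idea in capacity terms: two physical disks, two logical volumes.
# Hypothetical example: a pair of 160 GB drives split into a fast scratch volume
# (RAID 0) and a protected data volume (RAID 1).

DISK_GB = 160
STRIPED_SLICE_GB = 60                      # per-disk slice given to RAID 0 (assumption)
MIRRORED_SLICE_GB = DISK_GB - STRIPED_SLICE_GB

raid0_volume_gb = 2 * STRIPED_SLICE_GB     # striping adds the two slices together
raid1_volume_gb = MIRRORED_SLICE_GB        # mirroring keeps only one copy's worth

print(f"RAID 0 volume: {raid0_volume_gb} GB (fast, no redundancy)")
print(f"RAID 1 volume: {raid1_volume_gb} GB (survives one disk failure)")
# Losing one disk kills the RAID 0 volume, but the RAID 1 volume carries on.
```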

I do not agree with the conclusion: it seems to me that unless you are using SLI (and even then the 650i would perform better in games) there is no need to buy the 680i, and the same goes for the 975X (good for CrossFire support). So the conclusion should state that the 965 is the performance choice unless you use SLI or CrossFire.

So one would think that sooner or later, if possible, someone would pair the 975 with the ICH8, and then you would have the best of both worlds for CrossFire.

As for the 680i being the best feature chipset, I do not agree, as it all depends on your needs. For example, if you like to play games and you also want to watch TV on your machine, you will have a small issue with the lack of PCI slots on 680i boards. Though the features from a gaming perspective are the best (in theory, at least), it comes at the cost of overall expandability. Once you put in two double-wide PCIe x16 cards and your physics GPU (which could be double-wide as well), you are left with either one useful slot or none. I do not count PCIe x1 as useful, since there are no x1 cards whose functions are not already on the board.

So: 680i (extreme gamers using SLI);
650i (smart extreme gamers who know x8 is enough for SLI);
975X / RD600 (extreme gamers using CrossFire);
P965 (everyone else, who wants to do more than game and pay less, too).
 

dragonsprayer

Splendid
Jan 3, 2007
3,809
0
22,780
Wow - your article fails to mention that in 1-4 weeks your RAID 5 will melt down :cry: , as I have posted many times in the Asus forums. I have been building properly installed RAID systems (clean install, no overclocking, fresh drivers) for years, and all the RAID 5 systems have melted down or disintegrated :twisted: . Every RAID 5 based system has failed within a month, and all were based on Intel Matrix RAID. In fact, a RAID 5/0 setup melted down within minutes of testing. The funny thing is that Asus even recommends RAID 5, yet their forums are filled with failed attempts. Please post :?: if you have successfully built a RAID 5 or RAID 5/0 system.

On Nvidia I have built RAID 0/1 dual-RAID setups and they melted down within weeks too - these systems worked for a while, but then they just disappear or become unbootable and unrepairable.

I would be interested to hear whether others have had this problem or solved it. I am currently running a four-drive RAID 10 with XP plus a RAID 0 with Vista - dual RAID. I crashed the RAID 10 under Vista simply by defragging it with the XP tool. This system seems fairly stable as long as you do not access one RAID from the other, but that is not acceptable for resale systems.

As I said, the consensus on the Asus forums is that Matrix RAID 5 is unstable and there is no solution - maybe THG knows something :idea: , or were these systems never tested for more than a few days or hours?
 

MarcelJV

Distinguished
May 30, 2006
26
0
18,530
I have had a stable RAID 1 array since January. It survived a power problem that fried the data on my other two drives. My RAID array is built on a PCI Promise solution, though, not on Intel or Nvidia RAID.

My next PC will run the RAID 1 off Intel, so I will let you know in a month or so if it dies.

It would seem unwise to me to defrag a RAID array with one OS and then try to use it from another OS if either OS is installed on that array. Playing with fire there, I would think.

RAIDing your OS drive would also seem to me to be a big risk (other than RAID 1). OSes are very picky about files suddenly not being where they expect them (physical location on disk), and even though RAID 5 might rebuild the data, if the OS cannot find it where it wants it to begin with, I see a crash coming very quickly.
 

dragonsprayer

Splendid
Jan 3, 2007
3,809
0
22,780
Responding to pschmid above: a RAID 1 is really just a drive with a copy. The only issue with some controllers is that if you unplug one drive, or one breaks, the controller gets confused about which is the first drive and which is the second.

RAID 0, 5, and 10 are totally different: they are all striped, that is, you are writing different data to each drive, which is a lot different from RAID 1. RAID 5 and 10 are variations on RAID 0. RAID 0 never melts down; the array may need to be rebuilt if the power is shut off the wrong way, but it always comes back. I have been building RAID 0 systems for years with 8212s and so on. I had one RAID 0 system that was constantly crashing (a video card upgrade and a lack of power). Eventually it did need a new boot sector - fixboot in the DOS repair utility. I ran sfc /scannow, defragged it, and it is running like new.

So these issues, or non-issues, are different from a complete meltdown and crash and a non-repairable RAID. In one case the system was over a month old and had had over 500 GB of data added before it totally melted down.

I think my problem may be due to very small errors - it is really the only thing I can think of.
 

PCAnalyst

Distinguished
Nov 7, 2006
467
0
18,780
The latest BIOS revision used for the test set-up was P20.

The current latest BIOS revision is at least P23... maybe P24(haven't checked over the holidays).

Given that these BIOS revisions targeted RAID arrays in general (yes, to fix specific problems) on the Nvidia chipsets, would it not be possible that the 115 MB/s figure improves with fixed I/O instructions?

I am just a little hopeful here, since my system is based on the eVGA 680i mobo.
 

Eurasianman

Distinguished
Jul 20, 2006
883
0
19,010
The article shows on page 6 RAID 1 and RAID 0 on the same set of drives but as separate partitions; this, of course, is only possible with Intel Matrix RAID.

Is that safe??? 8O

I still don't quite understand why RAID 1 isn't faster than RAID 0. Or is it only when it comes to reads and not writes? Say, gaming-wise?
 

Monk0101

Distinguished
Jan 3, 2007
3
0
18,510
I have to be missing something.

I have gone over the graphs again and again. To me it looks like the ICH8 leads the pack in almost every case. The best the Nvidia can do is occasionally land between the ICH8 and ICH7. In the RAID 0 transfer diagram and the I/O performance tests it appears that Nvidia is down by 60-70%.

I was all set to pull the trigger on a 680i, but with numbers like these, no way. I do a lot of database work and cannot afford to take this sort of a hit. I don't see how Nvidia can be "recommended". Is this April Fools a bit early?

Like I said, what am I missing here? I need to go clean my glasses.
 

MarcelJV

Distinguished
May 30, 2006
26
0
18,530
RAID 0 writes data to two (or more) different drives by breaking a file up into pieces. The size of the pieces is determined when you set up the array. This setup is faster than RAID 1 because you are only writing part of each file to any one drive. In RAID 1 you are writing the complete file to both drives, so writing in RAID 1 is no faster, and sometimes slower, than a single drive. RAID 0 is faster at reading a file because each drive only has to read part of it. In RAID 1 reading can be faster than a single drive, since each drive can read part of the file, but still not faster than RAID 0.

With RAID 0 you get performance. With RAID 1 you get security.

In games the execution of the game itself is not faster, but loading data from disk is faster with RAID 0 than with a single drive or RAID 1.
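
To make the difference concrete, here is a toy model of how the two levels place data (a sketch in Python, not any controller's real on-disk format; the chunk size stands in for the stripe size you pick when creating the array):

```python
# Toy layout model: RAID 0 deals chunks of a file round-robin across the drives,
# while RAID 1 writes the whole file to every drive.

CHUNK = 4  # bytes; absurdly small just to keep the demo readable

def raid0_write(data, drives):
    """Each drive ends up holding only every Nth chunk, so each has less to write/read."""
    layout = [[] for _ in range(drives)]
    for i in range(0, len(data), CHUNK):
        layout[(i // CHUNK) % drives].append(data[i:i + CHUNK])
    return layout

def raid1_write(data, drives):
    """Every drive holds a full copy, so writes go no faster than a single drive."""
    return [[data] for _ in range(drives)]

file = b"ABCDEFGHIJKLMNOP"      # 16 bytes = 4 chunks
print(raid0_write(file, 2))     # drive 0: ABCD, IJKL / drive 1: EFGH, MNOP
print(raid1_write(file, 2))     # both drives hold the entire file
```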
 

Eurasianman

Distinguished
Jul 20, 2006
883
0
19,010
*sighs* I want to do RAID with just two hard drives, but I want it to be fast and secure. The only time I had a bad hard drive was because I was an idiot and tried to plug in a hard drive while the computer was on. I ended up shorting out the drive's power circuit, giving me a 60 GB hard drive paperweight.

Hmm...

Say, this just came to mind... why didn't Intel make the 965 before the 975??? If it had, then, like someone else said in this thread, a 975 chipset with the ICH8R would be superb!!!

*sighs* I'll never get companies like Intel, ATI/AMD, and nVidia. (screw VIA and SIS).
 

MarcelJV

Distinguished
May 30, 2006
26
0
18,530
Well, you can get both speed and security: use RAID 1+0. This means setting up two RAID 1 arrays and then using those two arrays as a two-disk RAID 0 array. This is done in our datacenter and gives the best mix of performance and security.
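
In other words, the stripe sits on top of the mirrors. Something like this (a sketch of the layout logic only, with four hypothetical drive names):

```python
# RAID 1+0 as nesting: stripe (RAID 0) across mirrored pairs (RAID 1).
# Four drives -> two mirrors -> one striped volume with half the raw capacity.

def raid10_write(data, pairs, chunk=4):
    """Alternate chunks between the mirror pairs, duplicating each chunk within its pair."""
    placement = {drive: b"" for pair in pairs for drive in pair}
    for i in range(0, len(data), chunk):
        pair = pairs[(i // chunk) % len(pairs)]
        for drive in pair:                      # RAID 1: both drives get the chunk
            placement[drive] += data[i:i + chunk]
    return placement

mirrors = [("sda", "sdb"), ("sdc", "sdd")]      # hypothetical drive names
print(raid10_write(b"ABCDEFGHIJKLMNOP", mirrors))
# sda/sdb each hold ABCD+IJKL, sdc/sdd each hold EFGH+MNOP, so any single
# drive (and even one drive per pair) can fail without losing the volume.
```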

Given the post by dragonsprayer, there may be issues with stability over time. If you want speed, I would suggest putting the OS on a Raptor and the rest of the data on a RAID 1+0 set (four additional drives required).

Personally, load time is not a major factor once you are on SATA II. I would stick with RAID 1 and do regular backups of the super-important stuff.