
ICH10R RAID Slower than Single Drive

June 22, 2010 8:10:02 PM

Hi, I've been looking around for an answer on this, but all I can find is people with the same problem and no resolution.

I have 4x Samsung F3 1TB drives in RAID 10 on the ICH10R controller of an ASUS P6X58D Premium.

My write speed tops out at 280MB/s, which is pretty good. The read speed, however, is very odd:
16MB block = 60MB/s
32MB block = 53MB/s
64MB block = 215MB/s
128MB block = 200MB/s
256MB block = 220MB/s
512MB block = 220MB/s

These results are from ATTO.

I was expecting > 350MB/s, considering a single Samsung F3 gives about 180MB/s.
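For reference, the naive scaling works out like this (a rough sketch, treating each F3 as a flat 180MB/s and ignoring all controller overhead):

```shell
# Idealized RAID10 scaling from a single-drive rate; real controllers
# rarely reach these numbers. 180 MB/s is the approx. F3 sequential rate.
SINGLE=180
DRIVES=4

WRITE=$(( SINGLE * DRIVES / 2 ))   # every write lands on both mirrors
READ=$(( SINGLE * DRIVES ))        # reads could hit all four spindles

echo "ideal RAID10 write: ${WRITE} MB/s"   # 360 MB/s
echo "ideal RAID10 read:  ${READ} MB/s"    # 720 MB/s
```

So even the conservative 350MB/s figure is well under what perfect striping would allow.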

Write-back is enabled. I'm running 12GB of DDR3 and a Core i7 930... This array is empty. I'm running Windows 7 Pro off an Intel X25-M SSD.

cheers for any advice.
Jon


June 23, 2010 3:02:55 AM

Did you download the latest driver from Intel, dated 3/23/2010?

Here it is: Intel Rapid Storage Technology

Here are my benchmarks:
[ATTO screenshot]

Although my RAID is 2x 500GB Seagate 7200.11 drives in RAID 0, it should be close to a 2x 1TB RAID 10; yours should be faster, though.

The RAID 1 half of RAID 10 might be slowing you down? But you still beat me!
June 23, 2010 5:56:31 AM

Thanks for the reply, Foscooter. Yeah, I've got the newest drivers and utility installed (that actually bumped performance up by about 10MB/s). The numbers you see are the best I'm getting.

It just seems really strange that Write is significantly faster than Read in most cases...

From the various RAID calculators I've tried, I was quoted a theoretical 4x read boost and 2x write boost with RAID 10... the write is spot on; the read is way low.

A guy here uses 8x Samsung F3 drives on a hardware RAID card to get 600MB/s: http://forums.overclockersclub.com/index.php?showtopic=...

I was hoping the ICH10R would perform similarly; there are some benchmarks that show it wiping the floor with some HighPoint and Antec cards.

I'm wondering if I'm hitting a bottleneck somewhere... do I need to change my stripe size, do disk alignment, or set a jumper somewhere? Is it possible that I'm somehow running in SATA I mode?

It just really seems odd that the read is capping out so low, compared to what I thought it would be.

Any suggestions?
June 23, 2010 8:10:54 PM

I think I'm out of my realm now.

Try contacting the user "sub mesa" as he is great with RAID! He can tell you about stripe size (mine is 128K), and disk alignment (?).

Good Luck.
June 23, 2010 10:11:29 PM

Cheers Foscooter, appreciate the push in the right direction. I'll comment back if I get somewhere. To be honest, 250MB/s is pretty decent, so I'm not desperate, but hey, I want the most out of this rig.
June 24, 2010 12:01:28 AM

When you read a small block, it is likely to come from just one drive. As the blocks get larger, you have a better chance of reading from several drives simultaneously.

What is your stripe size?
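To put rough numbers on that (a sketch; the 64KB stripe and the read size are assumed values, and a 4-drive RAID 10 stripes across only two mirror pairs):

```shell
# How many stripe members a single read can engage. Hypothetical
# geometry: 64KB chunks, 2 data drives (= 2 mirror pairs in RAID10).
STRIPE_KB=64
DATA_DRIVES=2
READ_KB=16

# number of consecutive chunks the read spans (rounded up)
SPAN=$(( (READ_KB + STRIPE_KB - 1) / STRIPE_KB ))
if [ "$SPAN" -lt "$DATA_DRIVES" ]; then BUSY=$SPAN; else BUSY=$DATA_DRIVES; fi

echo "a ${READ_KB}KB read keeps ${BUSY} of ${DATA_DRIVES} drives busy"
```

A 16KB read fits inside one chunk, so only one spindle works; only reads spanning several full chunks can keep every data drive busy at once.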
June 24, 2010 1:57:41 PM

I see, I knew it'd be slower with smaller sizes, but wasn't entirely sure why. Cheers.

Currently the array stripe size is 64KB. The array will mostly be storing files > 1MB... it's very unlikely there will be small files on it.

I had an interesting thought yesterday: the max bandwidth of SATA II is 3Gb/s, which is 281MB/s according to WikiP, dangerously similar to what I'm getting as max write (281,xxx from ATTO). Is it possible that my ASUS P6X58D Premium's ICH10R controller has a maximum throughput of 3Gb/s overall, rather than per port? I know this can't be the norm, as I'm sure we've all seen other people's 600MB/s RAID arrays.
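For what it's worth, the SATA II per-port arithmetic looks like this (a sketch; note that each ICH10R port has its own 3Gb/s link, and the shared ceiling is the chipset's DMI uplink at roughly 1GB/s, not a single SATA link):

```shell
# SATA II: 3Gb/s line rate, 8b/10b encoded (10 bits on the wire per
# 8 bits of data), so the usable payload rate per port is:
LINE_MBIT=3000
PAYLOAD_MB=$(( LINE_MBIT * 8 / 10 / 8 ))   # megabits -> data megabytes

echo "${PAYLOAD_MB} MB/s per port"   # 300 MB/s
```

So a single port tops out near 300MB/s of payload, but four drives on four ports are not squeezed through one 3Gb/s link.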

Any thoughts?
June 24, 2010 4:28:52 PM

Thanks, that was a great read. I see what you're saying:
Quote:
If you need a RAID array to deliver maximum read throughput, then you might want to go for a fast system using the ICH10R, since Intel’s controller delivers more than 650 MB/s in RAID 5 and 600+ MB/s in RAID 0.


So I guess I don't have to worry that it's the controller limiting my performance. I wonder if it's just poor RAID 10 performance... I may try a RAID 0 array to see if performance changes. If it doesn't, I'm guessing the issue is somewhere in my setup (BIOS, HDD, or jumpers).
June 24, 2010 10:02:58 PM

Meh, I just wrote a long report but tomshardware failed to post it, so I guess I'll keep this short and simple.

RAID 10 is the limiting factor on this ICH10R controller.
In RAID 0 I reach ~600MB/s write; read is still slightly slower at ~550MB/s...

Odd that read is still slower than write (perhaps a Samsung F3-specific issue/feature).

Disabling Write-Back just lowers write speed (as expected) in both RAID 0 and RAID 10. It still reaches the same maximums... eventually.

Will post ATTO screenshots in a minute for those interested in my bench results.

Gonna test RAID 5 too, as the review mentioned above shows the ICH10R performs pretty well.
June 24, 2010 11:08:18 PM

OK, so here's a summary of results with ATTO:
June 24, 2010 11:23:10 PM

Benchmark Results from ASUS P6X58D Premium motherboard running Intel ICH10R RAID Controller.

Hope this is useful for others who stumble across the same problem I did.

Given the Results, I've opted to run RAID 5 with Write-Back Enabled, instead of RAID 10.


Single Drives (as a Base Comparison):
[ATTO screenshot]

Samsung F3 HD103SJ x4 in RAID 0:
[ATTO screenshot]

Samsung F3 HD103SJ x4 in RAID 5:
[ATTO screenshot]

Samsung F3 HD103SJ x4 in RAID 10:
[ATTO screenshot]

March 12, 2011 7:27:36 PM


I'm encountering similar issues. My reads top out around 275MB/s with RAID 10, which is slightly slower than my writes at 280MB/s, on a RAID 10 of 4x 7K3000 2TB drives. If I build a RAID 0, both read and write jump to close to 600MB/s. My prior experience with RAID 10 led me to expect something like 280MB/s write / 600MB/s read (writes x2, reads x4). I also tried a set of four Seagate 7200.11 drives and saw similar behavior, albeit at lower read/write speeds.

My motherboard is a GA-X58A-UD3R (rev 1.0). I'm using the latest Intel chipset drivers and the RST driver and application. Why isn't RAID 10 benefiting from 4x reads?

March 13, 2011 1:54:07 AM

bluetip said:
I'm encountering similar issues. My reads top out around 275MB/s with RAID 10, which is slightly slower than my writes at 280MB/s, on a RAID 10 of 4x 7K3000 2TB drives. If I build a RAID 0, both read and write jump to close to 600MB/s. My prior experience with RAID 10 led me to expect something like 280MB/s write / 600MB/s read (writes x2, reads x4). I also tried a set of four Seagate 7200.11 drives and saw similar behavior, albeit at lower read/write speeds.

My motherboard is a GA-X58A-UD3R (rev 1.0). I'm using the latest Intel chipset drivers and the RST driver and application. Why isn't RAID 10 benefiting from 4x reads?


I came to the conclusion a while back, after doing loads of testing, that the Intel chipset just doesn't do RAID 10 like a premium-grade hardware RAID device.

My findings show that RAID 0 gives the best speed (obviously), and RAID 5 is your best compromise for gaining speed over RAID 10 while keeping some sort of redundancy. I ended up going with RAID 5 myself. If you really want RAID 10, you'll have to abandon the Intel chipset and get a good PCI-E hardware RAID card (which will probably run $300-600). Not worth it, IMHO.
March 13, 2011 3:09:29 AM

May I suggest that you test the various configurations using your own application as the workload.

What you are doing now is optimizing for the benchmark, which is most unlikely to correspond to what you will actually be doing.
It is convenient, and OK as an indicator, but almost certainly wrong.
March 17, 2011 4:13:49 PM

I just wanted to provide a follow-up on my earlier post. I may not have stated my concern clearly: it's that the ICH10R (or Intel RST) didn't appear to take advantage of reads from all four disks in RAID 10, and instead was reading from only two of the four.

I booted up Knoppix 6.4.4 and created a RAID 10 with far copies... well, the results speak for themselves:

Here's my "untuned" RAID10 built on 10GB partitions of four 7K3000 2TB disks...
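For anyone wanting to reproduce this: the creation step would have been something along these lines (a sketch, not something to paste blindly; the device names and 512K chunk are taken from the mdstat output below, and `--layout=f2` selects two far copies — this command destroys data on those partitions):

```shell
# Sketch: 4-member md RAID10 with two "far" copies and 512K chunks.
mdadm --create /dev/md0 --level=10 --layout=f2 --chunk=512 \
      --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
```

The far layout is what lets sequential reads hit all four spindles at once.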

root@Microknoppix:~# cat /proc/mdstat
Personalities : [raid10]
md0 : active raid10 sdd1[3] sdc1[2] sdb1[1] sda1[0]
20475904 blocks super 1.2 512K chunks 2 far-copies [4/4] [UUUU]

unused devices: <none>

I also created an "untuned" ext4 filesystem on the /dev/md0 device...

root@Microknoppix:~# !mkfs
mkfs.ext4 /dev/md0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=512 blocks
1281120 inodes, 5118976 blocks
255948 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
157 block groups
32768 blocks per group, 32768 fragments per group
8160 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 30 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
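Side note: the Stride/Stripe width values mke2fs picked are consistent with the md geometry above. Stride is the chunk size expressed in filesystem blocks, and the stripe width here spans all four members (a quick sanity check, using the 512K chunk and 4K blocks from the output):

```shell
# mke2fs geometry check: 512K md chunk, 4K ext4 blocks, 4 members
CHUNK_KB=512
BLOCK_KB=4
MEMBERS=4

STRIDE=$(( CHUNK_KB / BLOCK_KB ))   # blocks per chunk
WIDTH=$(( STRIDE * MEMBERS ))       # blocks per full stripe

echo "stride=${STRIDE} stripe_width=${WIDTH}"   # stride=128 stripe_width=512
```

Matching the "Stride=128 blocks, Stripe width=512 blocks" line, so the filesystem is aligned to the array.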

Sequential Write Tests (5GB file)...

root@Microknoppix:~# dd if=/dev/zero of=/raid/test2 count=5000000 bs=1024
5000000+0 records in
5000000+0 records out
5120000000 bytes (5.1 GB) copied, 19.2518 s, 266 MB/s

I executed the write test several times and it varied between 266 MB/s and 297 MB/s. Sorry, I didn't capture the text from those runs.

Sequential Read Tests (5GB file)...

root@Microknoppix:~# dd if=/raid/test2 of=/dev/null count=5000000 bs=1024
5000000+0 records in
5000000+0 records out
5120000000 bytes (5.1 GB) copied, 8.79608 s, 582 MB/s

root@Microknoppix:~# dd if=/raid/test2 of=/dev/null count=5000000 bs=1024
5000000+0 records in
5000000+0 records out
5120000000 bytes (5.1 GB) copied, 8.65911 s, 591 MB/s

root@Microknoppix:~# dd if=/raid/test2 of=/dev/null count=5000000 bs=1024
5000000+0 records in
5000000+0 records out
5120000000 bytes (5.1 GB) copied, 8.59371 s, 596 MB/s

I performed these same tests on some older 500GB Seagate 7200.11 disks and had similar results (about half the expected read speed under Intel RST).

In Summary:

Windows7 (ICH10R, Intel RST, RAID10)
seq read speed: 275 MB/s
seq write speed: 280 MB/s

Linux (mdadm, RAID10)
seq read speed: 596MB/s
seq write speed: 297MB/s

It seems pretty clear that Intel RST (or perhaps the ICH10R) does not read from all disks (it probably reads from only one of each mirrored set), which effectively reduces your potential RAID 10 read speed by 50%.
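The arithmetic behind that 50% (a rough sketch; the ~148MB/s per-spindle figure is back-computed from the mdadm measurement, not a spec number):

```shell
# Per-spindle sequential rate, back-computed from the mdadm result
SPINDLE=148
PAIRS=2                                   # mirror pairs in the 4-disk array

RST_READ=$(( SPINDLE * PAIRS ))           # one disk per pair reads
MDADM_READ=$(( SPINDLE * PAIRS * 2 ))     # every disk contributes

echo "one-per-pair read:  ~${RST_READ} MB/s"
echo "all-spindles read:  ~${MDADM_READ} MB/s"
```

~296 vs ~592 MB/s, which lands right in the neighborhood of the measured 275 and 596 MB/s.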

Now, I would love to think there might be a way to achieve reads from all four disks in RAID10 under Windows 7, but I'm not that optimistic. If anyone knows of a way, please, let me know.

BTW: The BIOS was in AHCI (NOT RAID) mode for the Linux tests.

UPDATE: Intel support confirmed that Intel RST does not support reading from both disks in each mirrored set, which explains why the read speeds are half those of the Linux mdadm RAID 10.
April 18, 2011 10:13:04 AM

Very interesting results. I have posted on the Intel RST forum, and it seems people are getting quite varying levels of performance. Myself, I have tried RAID 5 with 6x 2TB Hitachi 7K3000 drives, using a 64KB stripe on one volume and a 128KB stripe on the other. Read speeds are excellent (450-490MB/s), but my write speed is around 30-40MB/s.
There should be no alignment issues, since the drives natively use 512-byte sectors.

On the Intel forums people are getting around 2-60MB/s write speed with RAID 5. However, and this is the weird part: one user increased the default cluster size in Windows to 32KB (altering nothing else, though he had to reformat the volume for this to take effect) and his write speed went up from 7-9MB/s to 260MB/s. :ouch:

http://communities.intel.com/message/121027#121027

I am at a loss what to do with my RAID. Converting it to RAID 10 seems inferior due to Intel RST's inability to read from both drives in a mirror, as the poster above found out (this is also verified by other users talking to support on the Intel community forum).
The write performance of RAID 5 is absolutely unusable. Experimenting with different cluster sizes might work, but every trial-and-error attempt takes days due to the time spent initializing the array. I have never seen anything on the internet suggesting that cluster size should have this big an impact.

Intel seems to be having major issues with the RST driver. What irks me is the sheer variability of the results. Maybe we can all pool our settings and discover some optimal configuration, or at least a pattern in the results?

Is it possible to get alignment issues other than the ones caused by the 512-byte/4096-byte sector emulation on some hard drives?

The things that affect performance should then be:

- RAID volume stripe size
- Windows cluster size
- Type of hard drive (the 512/4096-byte sector issue)

Any other ideas how to crack this? :) 

My setup:
------------------------------
Win7 64-bit
P67 chipset with the latest Intel RST drivers
16GB RAM
i7 2600K
2 RAID 5 volumes with 64KB and 128KB stripe sizes
Default 4096-byte cluster size
6x 2TB Hitachi 7K3000 (native 512-byte sectors)
Read speed: 450-490MB/s, write speed: 30-40MB/s, using CrystalDiskMark 3 and AS SSD Benchmark
April 18, 2011 1:58:20 PM

You might look into getting a discrete raid card.
April 20, 2011 9:53:59 AM

Please... I have worked with discrete RAID cards extensively, and this is not about discrete RAID cards vs software RAID but about Intel Rapid Storage per se.
I could derail this discussion with the pros and cons of discrete RAID cards, but that is not the point of this thread.
Suffice to say that they also come with their own share of problems and limitations (ask anyone setting up or managing corporate or educational servers with RAID arrays), but of course they have many advantages too.

I am very curious about getting RST to work, since a software-based solution has the advantage of being virtually free (a good RAID card that can support 8 drives costs about $500-600), software is easier to upgrade than hardware, and if the motherboard dies I can connect the drives to another board with a similar controller, boot from an external drive, install the RST driver, and get all my data back. With a fried hardware card I have to get the exact same model of card that died, which, believe me, can be a bit of a hassle (especially if the card is no longer manufactured). :p
The drawback is of course performance, but with the computing power of a multi-core CPU, a good software RAID can reasonably compete with a hardware solution (mileage may vary :) ). Look at the Linux implementation mentioned above.

However, Intel's software RAID implementation seems to be great in some respects (installation and manageability) but for some reason shows extreme fluctuations in performance, varying between stellar and unusable. This seems to depend on the configuration (or sometimes seems totally random?). This thread is trying to elucidate which factors impact that performance, and maybe to distill some guidelines for configuring it. :)
April 21, 2011 9:30:36 PM

I have a theory about the issue with cluster size affecting performance.

In a write operation, the entire stripe is always read and then rewritten. Maybe that is the source of the bad performance? I.e., you are writing a series of small clusters, but for every cluster the entire stripe is read, parity is calculated, and the stripe is rewritten?

So, for instance, if you have a 4KB cluster size and a 64KB stripe size, you are reading and rewriting the same stripe 64/4 = 16 times instead of pooling the writes and doing it once. This would imply a sensationally stupid write strategy for the RST driver, but the performance hit is definitely in the ballpark of the numbers we are seeing.

The solution would then be to set cluster size equal to stripe size, regardless of what the stripe size is.
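Putting numbers on the hypothesis (a sketch of the suspected read-modify-write amplification, not confirmed RST behaviour; the 4KB/64KB sizes are the defaults discussed above):

```shell
# Suspected worst case: each cluster-sized write triggers a full
# stripe read + parity calculation + rewrite.
STRIPE_KB=64
CLUSTER_KB=4

REWRITES=$(( STRIPE_KB / CLUSTER_KB ))
echo "write amplification: ${REWRITES}x per stripe"   # 16x
```

A 16x penalty on a ~500MB/s array lands right around the 30-40MB/s writes being reported, which is what makes the theory tempting.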


Anyone want to help me test this idea? Is it totally bonkers, or what do you all think?
October 22, 2011 12:49:16 PM

Thank you guys, this is the only thread truly serious about an Intel integrated-controller setup with 1TB HD103SJ SATA disks, with benchmarks and comments. This discussion deserves to be in a good book, out of forum chit-chat (I don't mean this forum, but generally speaking of course).

greetings from Italy

Paolo
August 17, 2012 6:54:37 PM

Nice thread; nothing like digging up an old but valuable thread after some time ;)

I've got an idea: how about mixing Windows dynamic disks and Intel RAID:
- take 4 disks, exactly the same model, say 1TB
- make two mirror sets from those disks, so we get Volume1 and Volume2, 1TB each
- on that pair, create a Windows dynamic volume and make it RAID 0

With this solution we get something like RAID 10.

I'm curious about the performance of this setup.

Another question: does anyone have tests of RAID 10 with and without GPT?

Another thing: I think tests with a 256MB file on the array are not sufficient; the best would be at least 1GB, with 4GB desired.
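On the test-size point: with 12-16GB of RAM in these machines, a small test file mostly measures the page cache, not the array. One way to take the cache out of a quick dd run (assuming GNU dd, which supports `conv=fdatasync`) is to force a flush before dd reports its rate:

```shell
# Write 64MB and fsync before dd reports, so the quoted rate reflects
# the disk rather than the cache; bump count up for a real array test.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync
SIZE=$(wc -c < /tmp/ddtest)
echo "wrote ${SIZE} bytes"
rm -f /tmp/ddtest
```

Alternatively, `oflag=direct` bypasses the page cache entirely where the filesystem supports it.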
March 29, 2014 5:25:23 AM

Sorry for posting to an old thread, but since Google likes this page, I have some useful information:

Here's how to make a 6-drive RAID 10 with the Intel ICH10R, without the "slower than a single drive" problem:
- Set up two RAID 0 arrays (2 or 3 drives each) in Intel RAID
- Set up a RAID 1 in Windows across those two RAID 0 arrays
- ...
- Profit!

With this kind of setup I was able to get 342MB/s read and 319MB/s write with slow drives (HD203WI).

It's not the "perfect" configuration for RAID 10, but it's much better than RAID 5, for example.

March 29, 2014 6:21:03 AM

Yes, but that requires setting up dynamic disks, and you cannot install the system on dynamic volumes.

You could also try another setup: multiple mirrors in Intel RAID, then striped dynamic disks under Windows. I wonder what the result would be.
March 29, 2014 11:39:05 AM

_KaszpiR_ said:
Yes, but that requires setting up dynamic disks, and you cannot install the system on dynamic volumes.

You could also try another setup: multiple mirrors in Intel RAID, then striped dynamic disks under Windows. I wonder what the result would be.


I have an SSD for the system, so that's not a problem for me. :)

Three Intel RAID 1 arrays + a Windows RAID 0 was better than Intel RAID 10, but still much slower than the 0+1 config I posted:

3x Intel RAID 1 (2 disks per array), RAID 0 from Windows:
[ATTO screenshot]
Here are the Intel RAID 1 arrays separately:
http://i.imgur.com/xPA0cUs.png
http://i.imgur.com/x7StAR2.png
http://i.imgur.com/fLajWpf.png