
More Raid 5 Problems

February 13, 2007 4:09:45 PM

Ok, still pretty new here, anyways,

First, I have been running a RAID 0+1 setup for almost a year now with no problems. Recently, I upgraded my graphics card from a 6800XT to an X1950 Pro. The only game I seriously play is Battlefield 2. Now, with the settings cranked up (all high at 1280x1024, no AA), it takes forever to load a map. Before this, I used to be the first one on a map.

Then I changed the RAID from 0+1 to RAID 5 (looking for performance). Now BF2 is unplayable. When I play, I can hear my drives constantly seeking, and BF2 stutters so much that I can't play it.

I want to go back to my RAID 0+1 setup. How do I do this??
Is there another way to optimize the HDDs I already have?? (I don't want to buy anything.)


My computer has an Intel 945GNTLKR mobo, an Intel Pentium D 945 (3.4GHz), 1GB (2x512MB) of Crucial Ballistix DDR2-667, and four Seagate Barracuda 7200.9 drives (80GB each), and I'm running Win XP Pro x64. My drives are currently in the Raid 5 setup and my stripe size= 32KB.

Thanks for any help in advance!!


February 13, 2007 7:33:32 PM

Offhand, I can't think of a way to switch RAID array types 'on the fly', so to speak... going from RAID 5 back to RAID 0+1 'may' involve a reformat of the disks used in the array... you 'could' try deleting the array in the controller's BIOS (but not erasing the data on it), and that might allow you to make the HDDs a RAID 0+1 array again... but I'm skeptical that would work so easily... so, to me it just sounds like you would need to back up the data you want (possibly use a disk imaging program), and then recreate the RAID array as 0+1.
February 13, 2007 9:49:21 PM

Thanks for the reply!
Quote:
going from RAID 5 back to RAID 0+1 'may' involve a reformat of the disks used in the array

I was hoping to avoid this. But if it's my only option.... :cry: 
Quote:
you 'could' try deleting the array in the controller's BIOS (but not erasing the data on it), and that might allow you to make the HDDs a RAID 0+1 array again

I thought about that. But I figured that the actual "data" on the HD isn't what I see in Windows. (I never see the parity data, etc.) So I don't want to risk my data for it.
Quote:
possibly use a disk imaging program

Good idea! Sure beats burning My Documents to a DVD! :D  Do you have any suggestions?? (I don't know of any. :oops:  Free would be nice also)
February 13, 2007 9:51:40 PM

I haven't used any disk imaging programs myself, but Norton Ghost frequently comes up as a recommendation from forum members... I can't imagine it's free, but you may be able to find one through Googling :) 

Edit: just looked on Symantec's site, they have a trial of Norton Ghost for download:

http://www.symantecstore.com/dr/sat2/ec_Main.Entry17C?S...
February 13, 2007 10:25:17 PM

Just so that you know...
a friend of mine has a RAID 5 system on his EVGA 680i MB, and it performs horribly.
It even stutters on playing back an AVI.

I have a HighPoint 8-channel RAID controller card and my RAID 5 screams...
I have 2 RAID 5 drives and I can get 188MB/sec copying from one drive to another.
February 13, 2007 10:43:37 PM

Quote:


My drives are currently in the Raid 5 setup and my stripe size= 32KB.


I'm set up with 128KB. The higher the number, the faster the performance. The caveat is that your space isn't economically used.

My opinion... RAID 5 sucks. I tried my current setup with RAID 5 when I first built it. It's supposed to be one of the better chipsets (965). I still didn't like how slow it was. I've gotten spoiled with my Raptors in RAID 0.
February 13, 2007 11:10:11 PM

RAID 5 is not as fast as 0+1 or RAID 10... RAID 5 is best used for storage efficiency... RAID 50 is better, but that's 6 drives and a separate RAID card.
February 13, 2007 11:37:37 PM

Quote:
RAID 5 is best used for storage efficiency

Yeah, that's why I changed. Now I'm no longer stuck with only 50% of the total storage space.
Quote:
I'm set up with 128KB. The higher the number, the faster the performance. The caveat is that your space isn't economically used.

I did a lot of research about this. My computer isn't a dedicated server. All my files are completely different sizes and I have a lot of small files. So I thought a smaller stripe size would be better. Could this be a potential problem?
Quote:
I have a HighPoint 8-channel RAID controller card and my RAID 5 screams...
I have 2 RAID 5 drives and I can get 188MB/sec copying from one drive to another.

8O 8O :x :evil:  :evil:  :(  :(  :) 
I envy you....

So I've confirmed that using a dedicated card is faster than the mobo's built-in RAID. (But dedicated cards are so expensive! :(  ) And thanks choirbass for the link and info. First I'm going to try changing my stripe size to 128KB, and if that doesn't work, I'm going to try Norton Ghost, and if that doesn't work.... :roll:

Thanks guys for your help! Without buying anything, is RAID 0+1 the best way to go?
February 13, 2007 11:42:18 PM

It's certainly the easiest when it comes to automatic data backup, especially since it's widely supported (referring to RAID 1, not RAID 0+1).

Another way (and possibly even better, because you're not dealing with the hassles of RAID at all, and because you get greater data redundancy) would be to set your HDDs up as JBOD and manually copy the data you want to keep to the other HDDs (that way you're not stuck with a bunch of OS data occupying space that could otherwise be used for essential stuff). Or you could schedule backups inside Windows for the same thing, having specific directories and files backed up to the other HDDs however often you like.

But unless you're dealing with very large files (transferring or editing them), RAID 0 in itself isn't going to offer much benefit (optimally you may see up to around a ~15% boost in sustained transfer rate when moving large amounts of data on a 2-HDD RAID 0 array, compared to a single HDD)... synthetic benchmarks and Windows boot times are the major exceptions to that; you may see up to double the performance in either of those situations... game loading times, though, are a common misconception of what RAID 0 helps a lot with: you may see an average of ~1 second improvement in load times for most games (unless the game maps consist mostly of large bitmaps and such, in which case you'll probably see a larger benefit)... Google for 'raid 0 game load time' or something similar and you should find quite a few reviews on RAID 0 and gaming... a faster single HDD, more system memory, or even a faster CPU will benefit you much more in this case (game map/level loading).
February 14, 2007 1:52:01 AM

The NVIDIA Nforce MCPs (southbridge for Intel or single chips on AMD K8s) have poor RAID 5 performance, even if you don't use the BIOS to set up NVRAID "fake" RAID. My NForce 4 board stinks at RAID 5 while using the onboard SATA-300 ports and using Linux to control the RAID. Friends who use Windows report the same. I also got a HighPoint card, the 4-channel 2130 PCIe x4 unit. Now the RAID 5 performs excellently- 120-130 MB/sec peak reads from 3 250GB drives and peak writes in the 70 MB/sec range. On the NVIDIA SATA ports, that was about 80 MB/sec peak reads and 19-20 MB/sec peak writes. Yuck. The HighPoint card was worth the $140.

OP: An external card is the way to go if you have an NVIDIA board. You've got an Intel 945, which I am not familiar with, as Intel never got around to releasing that chipset for the Athlon 64 X2s for some reason :D  RAID 0+1 is faster than RAID 5, but it's also less efficient at using the disk space. I find RAID 5 on my card to be more than sufficient for performance.
February 14, 2007 3:20:41 AM

Quote:


My drives are currently in the Raid 5 setup and my stripe size= 32KB.


I'm set up with 128KB. The higher the number, the faster the performance. The caveat is that your space isn't economically used.

Stripe size has nothing to do with space efficiency. Higher stripe sizes do not waste any space.

Higher stripe sizes affect performance for small files. Files that are smaller than the stripe size do not get as much speed benefit as larger files do because the entire file is not spread out amongst all disks in the array. Only files equal to or larger than the stripe size use all disks in the array and therefore get full speed benefit.
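To put rough numbers on that idea, here is a small Python sketch (added purely for illustration, not from the posts above; the three-data-drive figure is an assumption that roughly matches a 4-drive RAID 5, where one block per stripe holds parity):

[code]
# Estimate how many drives a single sequential file keeps busy for a given
# stripe size. Assumes 3 data drives per stripe (e.g. a 4-drive RAID 5).
def drives_touched(file_size_kb, stripe_kb=32, data_drives=3):
    blocks = -(-file_size_kb // stripe_kb)   # ceiling division: stripe blocks used
    return min(blocks, data_drives)          # a file can't touch more drives than exist

for size_kb in (8, 32, 64, 128, 512):
    print(f"{size_kb:>4} KB file -> {drives_touched(size_kb)} drive(s) with a 32 KB stripe")
[/code]

Bumping stripe_kb to 128 shows the flip side: anything of 128KB or less now sits on a single drive.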
February 14, 2007 4:52:10 AM

Quote:
The NVIDIA Nforce MCPs (southbridge for Intel or single chips on AMD K8s) have poor RAID 5 performance, even if you don't use the BIOS to set up NVRAID "fake" RAID. My NForce 4 board stinks at RAID 5 while using the onboard SATA-300 ports and using Linux to control the RAID. Friends who use Windows report the same. I also got a HighPoint card, the 4-channel 2130 PCIe x4 unit. Now the RAID 5 performs excellently- 120-130 MB/sec peak reads from 3 250GB drives and peak writes in the 70 MB/sec range. On the NVIDIA SATA ports, that was about 80 MB/sec peak reads and 19-20 MB/sec peak writes. Yuck. The HighPoint card was worth the $140.


My 3-drive nForce 430 RAID 5 has done 140 MB/s read/write. Here's a write:

Quote:

D:\tools>xxcopy /y f:\test\test0\10.gb n:\test\test9

XXCOPY == Freeware == Ver 2.93.1 (c)1995-2006 Pixelab, Inc.
...
-------------------------------------------------------------------------------
F:\test\test0\10.gb 10,000,000,000
-------------------------------------------------------------------------------
Directories processed = 1
Total data in bytes = 10,000,000,000
Elapsed time in sec. = 71.19
Action speed (MB/min) = 8429
Files copied = 1
February 14, 2007 7:56:15 PM

:evil:  :evil: 

OK, thanks for the great responses.

Now, I cannot adjust the stripe size. I cannot install Norton Ghost. (Because I was being stupid and bought WinXP x64, which registers as Windows Server 2003! :x ) Now I'm stuck.... and have another problem:

Near the end of March, I'm going to upgrade my system. I'm getting a GIGABYTE GA-965P-DS3. Newegg says the board only supports Raid 0 and Raid 1. I'm also getting another gigabyte of memory, and plan to get a C2D in June. So now my questions are:

1. If I run a Raid 0 setup, will it be recognized on the Gigabyte board?
2. Since the Gigabyte board only supports RAID 0 and RAID 1, would the best solution (with my same drives) be two RAID 0 arrays?
3. How do I move my data? Norton Ghost will not install on WinXP Pro x64.
4. Until March, what do you recommend that I do?

I'm thinking my best move (for now) would be to back up my data (somehow) and rebuild a RAID 0+1 array until I upgrade. Then I should back up again and build two RAID 0 arrays.

Sorry for asking so many questions and sounding so helpless :oops:  ....
February 14, 2007 8:39:13 PM

How much data is on your RAID 5 array currently? If it's less than 80GB, I might be able to come up with a procedure that will let you get back to RAID 0+1. If it's more than 80GB and you can't uninstall/delete enough to get under 80GB, then you will HAVE to have another hard drive to temporarily hold your data to transition back to RAID 0+1.

If you have to have another drive, that's not horrendous. The Western Digital 160GB SATA I've seen as low as $60 online.

Let us know how much data we're dealing with and we'll go from there.
February 14, 2007 9:10:56 PM

Well, being that quite a few people here have already beaten me to this I'll tell you something: RAID 5 runs BEST with a DEDICATED controller. Now some of those nifty integrated ones that Intel provides can be pretty good, I'll admit that, but a dedicated controller is much better for RAID 5. Also, 32KB? Dude, I'm using 128KB, it runs MUCH SMOOTHER (that, and I'm running U160 SCSI, got a few drives and a free controller from my CAD instructor). Also, have you thought about maybe bumping up your RAM? I mean, why do you even have a 64-bit OS if you're not even using enough RAM to take advantage of it?
February 14, 2007 9:40:23 PM

Quote:
How much data is on your RAID 5 array currently? If it's less than 80GB, I might be able to come up with a procedure that will let you get back to RAID 0+1.

Really?? OK then, I have two partitions. There is 8GB used on a 10GB C drive and 38.7GB used out of 100GB on my D drive. (The rest is not formatted.) So that's under 80 gigs. I'm very interested in your plans.

Quote:
have you thought about maybe bumping up your RAM?

Quote:
I'm also getting another gigabyte of memory,

Yea, I have.
Quote:
I mean, why do you even have a 64-bit OS if you're not even using enough RAM to take advantage of it?

I told you,
Quote:
Because I was being stupid and bought WinXP x64

I didn't know the difference between the two when I was buying components. Turns out this was the biggest mistake I made in building my system. (4 HDDs is the second biggest. :oops:  )
February 14, 2007 9:44:53 PM

If you need RAID for performance, spend the money to get a dedicated controller.
February 14, 2007 10:08:45 PM

RAID 5 uses a lot of processing power; proper hardware RAID cards have dedicated XOR engines of 400MHz or more, which is probably equivalent to 1.2GHz or so of general-purpose CPU power.

RAID 5 on anything other than a fully hardware RAID card is a bad idea, unless it is for occasional access, you don't care about speed, and you need both capacity and data security.

Anyway, you won't be able to perform a RAID level migration on that array. Only serious high-end controllers can do that.

You won't find an application to do it, as the controller will be obfuscating the drives themselves and will show only one large drive to the application. All controllers implement RAID slightly differently in terms of data structure, so any such application would have to be written for that specific controller and firmware revision. RAID 5 to RAID 1/0 would involve recognising and discarding all the parity data, resizing the NTFS partition, and moving a heck of a lot of stuff around. Short of it - no way.

Stripe size is NOT cluster size. Large cluster sizes lead to inefficient storage; large stripe sizes do not. Clusters belong to the filesystem and are the smallest block that may be allocated to a file. Every file is allocated a whole number of clusters, so every file occupies a multiple of the cluster size on disk.

The OS doesn't even SEE the stripe size; the controller deals with all that and presents the OS with a single drive. One file may end, and another may start, in a single stripe block.

Small stripe sizes mean that even small files are striped, which improves sustained transfer rates at the expense of random seek times. With high queue depths and multiple files, a larger stripe size will be better.
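To put numbers on the cluster-versus-stripe point, here is a quick Python sketch (an illustration with made-up file sizes, not taken from any poster): cluster size rounds every file up, while stripe size never changes on-disk usage.

[code]
# Cluster size vs. stripe size: only the cluster size affects space efficiency.
def on_disk_size(file_bytes, cluster_bytes):
    clusters = -(-file_bytes // cluster_bytes)   # filesystems allocate whole clusters
    return clusters * cluster_bytes

files = [700, 5_000, 37_000]                     # a few small files, in bytes
for cluster in (4_096, 65_536):
    used = sum(on_disk_size(f, cluster) for f in files)
    print(f"{cluster // 1024:>2} KB clusters: {used:,} bytes on disk for {sum(files):,} bytes of data")
[/code]

Re-running the same files against any stripe size changes nothing, because the controller packs stripe blocks back to back regardless of where files begin and end.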

Personally, I'd go RAID 0 for my C: drive, and have a "storage drive" that is RAID 5, RAID 0/1, RAID 3, a single drive, or whatever for your important stuff. It has always seemed silly to me to put stuff that is easily replaced (massive game installs that don't need to be backed up because they can be reinstalled from the discs, etc.) onto inefficient storage, that is, anything that is losing space for redundancy.
February 14, 2007 10:26:27 PM

Meh, that's acceptable I guess. As long as you've learned from these experiences.
February 14, 2007 10:44:23 PM

Quote:
Really?? OK then, I have two partitions. There is 8GB used on a 10GB C drive and 38.7GB used out of 100GB on my D drive. (The rest is not formatted.) So that's under 80 gigs. I'm very interested in your plans.


OK, I'm going to need some time to come up with the procedure and verify it works. I happen to have a 945G Intel motherboard at work that I can try this on.

The basic idea is that we're going to pull one drive out of your RAID 5 (making it a degraded array), reformat that drive as a non-RAID drive, copy your partitions to it, remove all the RAID 5 drives, insert the new non-RAID drive and make sure it boots your system, re-insert the 3 other drives as blank drives, and then perform a "RAID Ready" to RAID 10 migration through Intel Matrix Storage Manager.

I have to test whether this is even possible to do, and what freeware tools we can use that will work. You also have Norton Ghost (even though it won't install on XP64), but we might be able to get around that as well.

Some other questions:

1. Do you have a Windows XP (32-bit) CD-ROM? Even if you don't have an activation key for it?
2. Do you have a floppy drive?
3. Do you have a USB flash drive?

I'll have to do my testing this weekend at work when I have time, then I'll post back with the procedure if I can successfully do it.
February 14, 2007 10:45:32 PM

Thanks for the info, darkstar. Since I wasn't being smart, I ended up with four HDDs (80GB each). I thought that putting them in a RAID array would give me storage and performance gains. I knew that a hardware solution was the best for performance, but I thought the software route would give me gains also, just not as much. But I had no idea that RAID 5 used so much processing power!! So now I'm stuck with 4 80GB drives in a slow RAID 5. I would personally try to avoid reformatting everything unless it is my only choice. I also plan to upgrade to a mobo that only supports RAID 0 and RAID 1. My largest concerns were in my previous post.

What do you suggest I do in my situation??
SomeJoe7777 sounded like he had a good plan, but I'm stuck until I hear from him again.
February 15, 2007 1:55:47 PM

Quote:
SomeJoe7777 sounded like he had a good plan, but I'm stuck until I hear from him again.


I did reply, you need to answer my other questions above. ^^
February 15, 2007 2:55:52 PM

Quote:
1. Do you have a Windows XP (32-bit) CD-ROM? Even if you don't have an activation key for it?
2. Do you have a floppy drive?
3. Do you have a USB flash drive?

1.) Yes, Win XP Home
2.) Yes, I can pull one from another system.
3.) Yes, a 2GB Corsair Voyager.

Sorry I missed your post the first time. :oops:  I like your plan, and I really hope it works. Thanks so much!!
February 18, 2007 5:15:27 AM

OK, this procedure took me way longer than I thought to come up with and test. I hope this works for you.

Please read through the entire procedure before proceeding. Make sure you understand what you're doing at every step.

RAID 5 to RAID 10 Migration Procedure

Initial Conditions

- Intel D945 system board w/ ICH7R south bridge
- 4-drive RAID 5 on the ICH7R
- Working XP64 installation on a 10GB partition on the RAID
- 100GB partition (D: drive) on RAID holding ~40GB of data

Final conditions

- Same installation of XP64
- RAID 10

Restrictions

- No additional money can be spent
- No additional hard drive space available
- No commercial utilities available

Assets

- Floppy drive
- USB Flash drive
- Internet connection
- CD-R recorder with blank CDs


Caveats and Warnings

1. I was incorrect; my motherboard at work is a DG965WH, not a DG945NT. The 945 (your board) has an ICH7R south bridge, while the 965 (my board) has an ICH8 south bridge. They each use a slightly different version of the Intel Matrix Storage Manager, so the instructions I'm giving here may not exactly correspond to your versions, and I can't say for sure that this procedure will work on your machine. I believe it will, as the ICH7R and the ICH8 are very similar, the main differences being USB support (8 ports vs. 10) and the number of SATA ports (4 vs. 6).
2. This procedure is risky. Data loss can result. You may end up needing to rebuild your array & reinstall Windows from scratch if something goes wrong. If you have any data that you cannot afford to lose (financial files/records, pictures, etc.) burn them to CD-R or DVD-R before continuing.
3. I did try this exact procedure as written, step by step, on the hardware I indicated, and it worked. If it doesn't work for you, sorry, but I can't take any responsibility for that. There's a million things that I don't know about your system that could affect this procedure. So, in short, caveat emptor/buyer beware.
4. On my motherboard, the SATA ports were labeled on the board (in white lettering) with one set of numbers, but the Intel Matrix Storage Manager (both the ROM utility and the Windows management console) labeled them with a different set of numbers. SATA port 0 was consistent between the two, but the other ports' numbering was off. Be aware of this when looking at lists of drives in the Intel Matrix Storage Manager. If Intel Matrix Storage Manager is telling you that port 3 is a non-RAID drive, that may not be the port that is labeled "port 3" on the motherboard. For this reason, the important single drive that's going to be our pivot point for this migration is ALWAYS going to be in port 0.

Preliminary

1. Shut down your system, pull the floppy drive from another system and install it in yours. If necessary, go into your BIOS and make sure the floppy is enabled. Start your system back up, boot into Windows.
2. Make sure you have the correct version of Intel Matrix Storage Manager installed (go to Intel's web site, go to the drivers section for your motherboard, and download it and install it if you haven't already). This procedure depends highly on having the Intel Matrix Storage Manager installed. (I mean the full Windows installation of the Intel Matrix Storage Manager, with the console & everything. Not just the driver.)
3. Download Active@ Killdisk from http://download2.lsoft.net/killdiskfloppysetup.exe. Double-click on the downloaded .exe file to create a boot floppy for Active@ Killdisk. Set this aside.
4. Download Terabyte Unlimited's CopyWipe for DOS utility from http://www.terabyteunlimited.com/downloads/copywipe.zip. Unzip this file into a folder on your hard drive. Inside the folder is a utility called makedisk.exe. Double-click that to run it, select all defaults, select drive A: to make a floppy disk, insert a blank floppy, and create it. Once the floppy is made, set this aside.
5. Run Windows Explorer. On your data drive (the 100GB partition you have that's holding ~40GB of data), rename it to "Data 1". (Right-click the drive, select Rename).
6. Run Computer Management (right-click My Computer and select Manage). Go to Disk Management.
7. Create a new partition on your unpartitioned space by right-clicking the unpartitioned space and choosing New Partition. Create a 55GB partition (it wants the partition size in megabytes, so enter 55000), make it a primary partition, accept the drive letter default, and name it "Data 2". Make sure to check "quick format". Wait for the partition to appear in the top section of Disk Management and its status to change from "Formatting" to "Healthy".
8. Exit Computer Management. Run Windows Explorer. You will see your 100GB partition that has about 40GB of data on it (named Data 1), and the new 55GB partition (named Data 2) which is empty. Copy all 40GB of your files from Data 1 to Data 2.
9. When the copy is finished, close Windows Explorer, open Computer Management, go to Disk Management. You will see 4 sections of your hard disk: A 10GB partition (C: ), a 100GB partition (Data 1), a 55GB partition (Data 2), and some free space. Right-click Data 1, and select Delete Partition, select yes.
10. You will now see 4 sections of your hard disk: A 10GB partition (C: ), 100GB of free space, a 55GB partition (Data 2), and some free space. Right-click the 100GB of free space, and select New Partition. Create a 55GB partition (it wants the partition size in megabytes, so enter 55000), make it a primary partition, accept the drive letter default, and name it "Data 3". Make sure to check "quick format". Wait for the partition to appear in the top section of Disk Management and its status to change from "Formatting" to "Healthy".
11. Exit Computer Management. Run Windows Explorer. You will see your 55GB partition that has about 40GB of data on it (named Data 2), and the new 55GB partition (named Data 3) which is empty. Copy all 40GB of your files from Data 2 to Data 3.
12. When the copy is finished, close Windows Explorer, open Computer Management, go to Disk Management. You will see 5 sections of your hard disk: A 10GB partition (C: ), a 55GB partition (Data 3), some free space, a 55GB partition (Data 2), and some free space. Right-click Data 2, and select Delete Partition, select yes. The purpose of this whole exercise was to reduce the size of your data partition such that the combined partition size of the boot partition and the data partition is less than 80GB (the size of one of your drives).
13. You should now see 3 sections of your hard disk in Disk Management: A 10GB partition (C: ), a 55GB partition (Data 3), and some free space. Close Computer Management.


Procedure

1. Shut down your machine, disconnect any external drives or USB flash drives from the machine.
2. Open your case, you will see your 4 hard drives.
3. Disconnect 3 of them from the SATA connectors on your motherboard, mark which port each SATA cable goes in. We have to put them back the same way later. Leave one drive connected in the port marked "SATA 0".
4. Restart the machine, go into the Intel Matrix Storage Manager ROM by hitting Ctrl-I during the system POST.
5. Intel Matrix Storage Manager ROM will show one drive connected, and the array as "Failed". Delete the array using the menu option. Once the array is deleted, the disk will now show as a "Non-RAID" disk.
6. Restart the machine, start up from the Active@ Killdisk floppy. (Go into your BIOS and change the boot order if you need to so that the floppy is the first device in the boot order).
7. Once Active@ Killdisk is running, select the one lone hard drive and hit F10. (It will be listed as "Hard Disk Drive (80h)". Don't select any partitions that appear under it -- select the entire drive).
8. Once you hit F10, it will present you with a menu of options. Use the down arrow to go to the last option (leave all the rest at the defaults) and select "Confirm and Erase".
9. Type "erase-all-data" and hit enter. Killdisk will begin to erase the drive. This is necessary to wipe out all the RAID structure and partition information.
10. You do not have to wait for the process to complete. Allow killdisk to wipe the drive for about 3 minutes, then hit Escape to stop it. Once it stops and goes back to the main menu, hit Escape again to exit.
11. Once you're back to a DOS prompt, power off the machine.
12. Reconnect your other 3 hard drives to the same SATA ports they were connected to in step 3.
13. Power on the machine, go into the Intel Matrix Storage Manager by hitting Ctrl-I during the system POST.
14. When the Intel Matrix Storage Manager ROM starts up, it will show the array as "Degraded", and will also detect the now blank hard drive, and automatically ask you if you want to rebuild the array. Hit Escape to cancel the rebuild, we do NOT want to do this.
15. You should see 4 drives connected, 3 that are array members and one non-RAID disk. The array should show as "Degraded".
16. Hit Escape to exit the Intel Matrix Storage Manager ROM, and boot the Terabyte Unlimited Copywipe for DOS floppy. (Go into your BIOS and change the boot order if you need to so that the floppy is the first device in the boot order).
17. When Copywipe starts up, you'll be at the main menu. Select Copy a Hard Drive.
18. The next menu is the source drive selection. Select the first option, BIOS HD.
19. The next selection should show 2 hard drives in your system, hard drive 0 and hard drive 1. While hard drive 0 is highlighted, press F6. This will show you the partitions on that hard drive, which is how you will properly identify it. Your degraded RAID array should show a 10000 MB partition (boot), a 55000 MB partition (Data 3), and free space. If the partitions look correct, hit Escape, and then select Hard Drive 0. If the partitions don't look correct, hit Escape, and then highlight Hard Drive 1 and hit F6 and see if they look correct. Whichever hard drive is your degraded RAID array, that's the one you want to select.
20. The next selection that comes up asks for the target drive selection. Select the first option, BIOS HD, and from there this is the same process. You'll select the lone non-RAID 80GB drive, which should be blank and not have any partitions on it, since we erased it with Killdisk. If perchance there happens to be any residual partition information on it, you can delete the partition by hitting F4. Do that if you see any residual partitions, but be careful and DON'T erase any partitions on your array. (Look at the free space listed in the partition list to tell which drive is which. The array should have >100GB of free space, the 80GB will have at maximum, 80GB).
21. After you select the correct target drive (the non-RAID 80GB), you'll then get the Options menu. Select Straight Copy. Copywipe will then proceed to copy your partitions from the degraded RAID array to the non-RAID 80GB.
22. When the copy finishes, hit Escape to exit the program. Power off the machine.
23. Unplug the 3 drives that comprise the degraded RAID 5 array (the same 3 you disconnected back in step 3 and reconnected in step 12). Leave only the non-RAID drive connected to SATA port 0.
24. Power on the machine, boot to Windows to make sure it works. Everything should be exactly the same as it was when it was booting off the RAID array.

This is your abort point. Up to this point, we can back out of this procedure and return to a working RAID 5 by erasing the lone 80GB, connecting all 4 drives back up, booting the degraded array, and telling Intel Matrix Storage Manager to do a rebuild. So make sure everything is cool at this point. Once you continue past here, you're committed.

25. Shut down the machine. Unplug the non-RAID 80GB from SATA port 0. Reconnect the other 3 drives as they were connected before.
26. Power up the machine, go into the Intel Matrix Storage Manager ROM by hitting Ctrl-I during the system POST.
27. Delete the degraded RAID volume using the menu. After the deletion, all 3 drives should show as non-RAID disks.
28. Exit the Intel Matrix Storage Manager ROM by hitting Escape, then power down the machine.
29. Connect the non-RAID 80GB back to SATA port 0. Power up the machine, and see if it boots into Windows.
30. If it doesn't boot into Windows, it's because one of the now-blank drives is set to be the boot drive instead of the 80GB drive that has your partitions on it. You'll need to go into the BIOS to make sure the 80GB you just reconnected is set as the boot drive. However, since all the drives are the same model (Seagate 80GB), they all look alike in the BIOS, so you may have to repeatedly try to change the boot order one drive at a time until the correct drive is at the top. Fortunately, since the drive that we want to be the boot drive is in SATA port 0, this shouldn't be necessary, the machine should put that drive at the top of the boot order by default.
31. Once back in Windows, run Intel Matrix Storage Manager from the Start Menu.
32. Select the Intel RAID Controllers node at the top, and then on the Actions Menu, do Create RAID Volume from Existing Hard Drives.
33. Click next, give the volume a name (I chose "RAID10"), select RAID 10 as the RAID level, select 64K as your stripe size, click next.
34. It then asks you which hard drive is the one we're preserving. Select port 0 and then click next. (I recommend you verify the given serial number of the drive that it says is in port 0 with the physical drive you know is the one holding your partitions - the one plugged into port 0. It is imperative that you select the correct drive here).
35. It then asks you to select member hard drives that will become part of the new RAID array. Select the other 3 drives and click Next.
36. It then asks you the volume size, keep the default of 100% and click next.
37. Click Finish, and it will then say a RAID migration is in progress. The migration will take a few hours.
38. After the migration finishes, you'll need to reboot. At that point, go into the BIOS and make sure the new array is set as the boot device.

That should do it. You're back to a RAID 10.

Optional:

You can extend the size of your data partition back to 100GB (or more) at this point if you want, using the Windows DISKPART command-line utility. If you want to do this, post back and I'll post the procedure.
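(For reference only, here is a rough sketch of how DISKPART's extend is normally scripted, not the exact procedure promised above. The volume number 2 is an assumption; confirm it with "list volume" first, because extending the wrong volume can destroy data, and XP's extend only works on non-system NTFS volumes with contiguous free space directly after them.)

[code]
# Sketch: grow the data partition into the adjacent free space via DISKPART.
# The volume number below is assumed -- verify it with "list volume" first.
import subprocess

DISKPART_SCRIPT = "list volume\nselect volume 2\nextend\n"

def extend_data_volume(script_path="extend_data.txt"):
    with open(script_path, "w") as f:
        f.write(DISKPART_SCRIPT)
    # /s tells diskpart to read its commands from the script file
    subprocess.run(["diskpart", "/s", script_path], check=True)

if __name__ == "__main__":
    extend_data_volume()
[/code]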

As far as your other question about moving this RAID when you get a new motherboard: if the new motherboard has a different chipset, then it won't work at all. If the new motherboard has an ICH7R or ICH8, it might actually come up.
February 18, 2007 6:24:39 AM

Your problem with Battlefield 2 is probably your RAM. Look at Task Manager during gameplay and you will probably find that you are using over 1GB of memory, and that is why your drives are cranking away. It may not have been a problem with your old video card because your video settings were turned down and the game used less RAM.
February 18, 2007 9:37:55 AM

Quote:
OK, this procedure took me way longer than I thought to come up with and test. I hope this works for you.


Very ingenious solution, well done!

While a single disk to RAID 1 migration is a simple thing, a single disk to RAID 0 migration is a lot more complex (and migrating to RAID 1/0 is basically the same thing, tbh).

I'm impressed the Matrix Storage Manager can do this; no mean feat for a low-end RAID controller on a southbridge.
February 19, 2007 1:31:57 AM

Hmm, he is right, RAM could be the problem. 2GB of RAM would help anyone's system out.

Heck, I recommend 2GB of RAM to everyone getting a computer.
February 19, 2007 9:57:29 PM

8O 8O 8O 8O 8O 8O 8O 8O

Thanks SOOOOOOOOOOOOOOO Much!!!

I can't believe you would write something so detailed and precise for someone you don't even know. Thanks so so so so much!!

BUT: I'm having some problems. I did everything you said, and everything worked exactly how you said it would until step 24. When I booted my system back up, Matrix Storage Manager included the previously "non-RAID" disk back as a "member disk". Previously it did say "non-RAID disk". Why it reverted back I cannot say. Anyway, the lone drive will not boot. :evil:  :evil:  Now what??

The system reports a "failed" array when the three drives are connected, and "degraded" when all four are connected. It also says "failed" when the lone drive is plugged in.

My guess would be to re-wipe the drive and try again (to make sure I didn't make a mistake). If that fails, should I just try to boot into Windows again??

Once again, I cannot thank you enough for your time. Thanks!!
February 19, 2007 11:47:32 PM

Quote:
8O 8O 8O 8O 8O 8O 8O 8O

Thanks SOOOOOOOOOOOOOOO Much!!!

I can't believe you would write something so detailed and precise for someone you don't even know. Thanks so so so so much!!

BUT: I'm having some problems. I did everything you said, and everything worked exactly how you said it would until step 24. When I booted my system back up, Matrix Storage Manager included the previously "non-RAID" disk back as a "member disk". Previously it did say "non-RAID disk". Why it reverted back I cannot say. Anyway, the lone drive will not boot. :evil:  :evil:  Now what??

The system reports a "failed" array when the three drives are connected, and "degraded" when all four are connected. It also says "failed" when the lone drive is plugged in.

My guess would be to re-wipe the drive and try again (to make sure I didn't make a mistake). If that fails, should I just try to boot into Windows again??

Once again, I cannot thank you enough for your time. Thanks!!


Times like this I wish I had that kind of knowledge to help others (about RAID).
February 20, 2007 12:27:14 AM

Quote:
The system reports a "failed" array when the three drives are connected, and "degraded" when all four are connected. It also says "failed" when the lone drive is plugged in.


Oooo... this kind of bothers me. Are you SURE you have the correct drive connected to SATA port 0 when you're attempting to boot up the computer?

The thing that tells me that you may have switched drives somewhere is that the 3 drives that are supposed to comprise the degraded array show "failed" when you plug them in by themselves (they should show "degraded"), and the 1 lone drive shows a "failed" array (it should show "non-raid disk"). The results you're seeing are consistent with what would happen if you have a member of the degraded array plugged into SATA 0 instead of the lone drive.

Do this: plug in all 4 drives, go to the Intel Matrix Storage Manager ROM (Ctrl-I), and see if one of the drives is a "Non-RAID disk". If it is, you need to find that drive and connect it by itself to SATA 0, because that is your lone drive. (Try one at a time if you need to.)

Once you have the correct lone drive hooked to SATA 0 the machine should boot.

If none of this works, STOP and post back. The array is in a degraded state right now and we're unsure of the status of the lone drive. We'll need to rebuild the RAID 5 array and let you start over if we can't figure out what's going on.
February 20, 2007 12:53:03 AM

Thanks for the quick response. I'm fairly sure I did everything right. I could be wrong though. I'm looking at Matrix Storage Manager now. All 4 drives are plugged in and listed as "member disk".

I probably messed something up somewhere. Should I simply rebuild the array and start over? I'm waiting on your command.
February 20, 2007 1:00:36 AM

If all 4 drives are listed as a member disk, then we have no choice but to start over.

Go into Intel Matrix Storage Manager ROM and rebuild the array. (That will take it a while, but the machine should still boot and operate during the rebuild).

After the rebuild is completed, you can start again with step 1 of the "Procedure" part (no need to do the "Preliminary" part again).

One modification I want you to do the 2nd time around: Allow the Killdisk to run all the way through. This way we'll be positive that no array information exists on the lone drive.
February 20, 2007 1:05:08 AM

Thanks for the support. The IMSM ROM says that the array will be rebuilt in the OS. (just letting you know) How long does it take for killdisk to fully run through? (I'm thinking about running Killdisk while I'm at school)
February 20, 2007 1:35:40 AM

Quote:
Thanks for the support. The IMSM ROM says that the array will be rebuilt in the OS. (just letting you know) How long does it take for killdisk to fully run through? (I'm thinking about running Killdisk while I'm at school)


Killdisk? Erasing a drive fully? Basically the same thing as a full format, right? If so, then it would usually take 1-2 hours...
February 20, 2007 1:38:53 AM

Yes, Killdisk freeware version makes one pass, should take the same time as a full format - 1-2 hours.
February 20, 2007 10:37:50 PM

Quote:
Shocked Shocked Shocked Shocked Shocked Shocked Shocked Shocked

Thanks SOOOOOOOOOOOOOOO Much!!!


OK, I did everything right, and I have my single drive working again!! Now I'm scared to go back to RAID 0+1 because the Gigabyte DS3 I'm getting (at the end of March) may not recognize the array. (Then I'm in trouble.... again :lol:  ). I'll also be getting another gig of RAM, probably the same kind (Ballistix DDR2-667), and I'll OC it to DDR2-800.

As for my other three drives.... I dunno for now. Can I just wipe them, since they contain the same info as my single drive does?

OK, I'm good now. Thanks so much SomeJoe7777. I really don't know where I'd be now if it weren't for you.

I haven't tried any gaming yet. (getting ready to) And thanks again!! I really can't thank you enough :!: :!: :!:
February 20, 2007 11:40:08 PM

Sweet. As long as you're at a point where you feel comfortable, you can of course stop here.

If your single drive is working fine (test your gaming performance, make sure it's OK), then you can use the other 3 drives as storage until your new MB comes in. Disconnect the single drive, plug in the other 3, power up, go into Intel Matrix Storage Manager ROM, delete the degraded array so that all the disks become Non-RAID disks. Power off, connect single drive back to SATA 0, boot back up, go into Disk Management & partition and format the 3 blank drives however you want.

Or, you can just disconnect them and put them aside until you get your new MB, and then make a RAID array there.

You're welcome, glad to help. 8)
February 21, 2007 12:42:31 AM

A FEW more questions, and I'll stop buggin' you. :oops: 

If I reformat the other three drives and use them for storage, can I later migrate the drives to a RAID 0 or 0+1 setup (without a complicated procedure)?
February 21, 2007 1:56:03 AM

No, probably not. The Gigabyte DS3 doesn't use the ICH8's RAID capability, they have their own chip for RAID, and it's only a 2-port. I doubt it supports any kind of migration like the Intel ICH7/8 does.

If you want to set up a new RAID on the Gigabyte board, I'd just wait until you get it.
February 21, 2007 11:42:18 AM

Sorry, more questions keep coming to me. (I don't want this to happen again!) :oops:  :oops: 

OK, will a RAID 0 setup on the Gigabyte board be worth it? (Will it be faster than a single drive?) (Especially loading levels on games (BF2))

Trying to learn from my mistakes, I now know a dedicated card is necessary for "real" Raid performance. However, is Raid 5 on a southbridge really "that" bad? Or was my fatal flaw setting the stripe size to 32KB?

Thanks! :oops: 
February 21, 2007 1:59:58 PM

There's endless debate about whether RAID 0 is worth it for game loading times. The general consensus is that it may net you a few seconds of faster level loads (around a 5-10% improvement). I don't play any online games, so to me that's not worth it. To a hard-core gamer, maybe it is. That would be up to you.

The RAID 5 implementation on the Intel ICH7R/8 is not bad, but it still uses the CPU for parity calculations. This will slow down depending on how many other processes are using the CPU, etc. It's much worse on a degraded array as well. I'm sure BF2 uses the CPU heavily during gameplay and level loads, and this squeezes the CPU such that disk reads/writes slow down.

On top of it, if your memory was insufficient and the system began paging to the pagefile, this would massively slow things down as well, since every write to the pagefile involves parity calculations because your boot drive was the RAID 5.

General rules for RAID:

1. RAID 0 is purely for speed where big data transfers matter. Video editing, Photoshop work, DVD authoring, audio editing.

2. Other levels of RAID are for 2 purposes: Protection against hard drive failure, and keeping the system online when a hard drive fails.

3. RAID is no substitute for backup - RAID does not protect against spyware, viruses, accidental deletion, file/directory corruption, OS errors.

4. RAID 5 & 6 require a hardware-dedicated RAID card for acceptable performance. RAID 0, 1, 0+1, and 10 can usually be handled by an inexpensive or motherboard-based controller reasonably well, but not optimally. A high-end RAID card will outperform a motherboard controller on every RAID level, not just 5 & 6.

5. Boot drives should be on a separate array from data drives. (Not necessarily a separate card/controller, but boot drives & data drives shouldn't share physical disks).

6. In general, boot drives and the drive where the pagefile resides should not be on RAID 5.

7. Data drives should be protected with RAID 5 (for capacity), RAID 10 (for speed), or RAID 1 (neither is a large consideration - only protection matters).

8. RAID 6 should only be used for mission critical data, and only with a high-end RAID card.

9. Do not confuse RAID 10 with RAID 0+1. RAID 10 is a superior implementation of 4-drive nested RAID.
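To see the difference in point 9 concretely, here is a small Python sketch (added for illustration, assuming the usual layouts: RAID 10 as two mirrored pairs striped together, RAID 0+1 as two stripe sets mirrored against each other):

[code]
# Count which of the 6 possible two-drive failures each 4-drive layout survives.
from itertools import combinations

DRIVES = ["A", "B", "C", "D"]

def raid10_survives(failed):
    # RAID 10: mirror pairs (A,B) and (C,D), striped together. Data survives
    # as long as neither mirror pair loses both of its members.
    return not ({"A", "B"} <= failed or {"C", "D"} <= failed)

def raid01_survives(failed):
    # RAID 0+1: stripe sets (A,B) and (C,D), mirrored. Data survives only if
    # at least one stripe set is completely intact.
    return not ({"A", "B"} & failed) or not ({"C", "D"} & failed)

for name, survives in (("RAID 10", raid10_survives), ("RAID 0+1", raid01_survives)):
    ok = sum(survives(set(pair)) for pair in combinations(DRIVES, 2))
    print(f"{name}: survives {ok} of 6 two-drive failure combinations")
[/code]

Both survive any single drive failure, but RAID 10 rides out 4 of the 6 double failures versus 2 for RAID 0+1, and a degraded RAID 10 typically only has to resync one mirror rather than re-mirror an entire stripe set.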
February 24, 2007 9:55:38 PM

Sorry I haven't responded; I've been really busy lately. Just as an update, I'm still using a single HD and everything works fine. I probably will switch to RAID 0 when I get the Gigabyte board.

Can you explain how RAID 10 works? I already know that RAID 0+1 is either striping across two mirrors or mirroring two striped arrays. (Not sure, can you explain?)

And also, I've heard of something called Intel Matrix Raid. Using this, you could have two physical drives, and run a Raid 1 and a Raid 0 array. Do you know anything about this?

Sorry for the steady stream of questions, and I really appreciate you answering them.
February 26, 2007 1:20:08 AM

For a previous post of mine where I discuss the differences between RAID 0+1 and RAID 10, see this thread. I made two posts in it, both on the first page.

Intel Matrix RAID allows you to create two separate logical volumes on one array. Each logical volume can have its own RAID level. Thus you can take two physical drives and use portions of each to make different RAID levels. For example, use 2x 320GB drives, use 50GB on each to make a 100GB RAID 0, and use the remaining 270GB on each to make a 270GB RAID 1. You can also do other RAID levels with 3 or 4 drives, including a RAID 0/RAID 5 combination with 3 drives, or a RAID 0/RAID 10 or RAID 10/RAID 5 combination with 4 drives.
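Capacity-wise, the example works out like this (a back-of-the-envelope sketch using the assumed 2x 320GB figures above):

[code]
# Intel Matrix RAID capacity for the two-drive example above (assumed numbers).
def matrix_volumes(drive_gb=320, raid0_slice_gb=50, drives=2):
    raid0_volume = raid0_slice_gb * drives        # striping adds the slices together
    raid1_volume = drive_gb - raid0_slice_gb      # mirroring keeps a single copy
    return raid0_volume, raid1_volume

r0, r1 = matrix_volumes()
print(f"RAID 0 volume: {r0} GB, RAID 1 volume: {r1} GB")   # 100 GB and 270 GB
[/code]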

Your current motherboard which uses the ICH7R can already do this. The Gigabyte you're planning on buying cannot, even though it has an ICH8, because Gigabyte has not used that feature.
March 9, 2007 11:02:20 PM

Quote:

The RAID 5 implementation on the Intel ICH7R/8 is not bad, but it still uses the CPU for parity calculations. This will slow down depending on how many other processes are using the CPU, etc. It's much worse on a degraded array as well. I'm sure BF2 uses the CPU heavily during gameplay and level loads, and this squeezes the CPU such that disk reads/writes slow down.

On top of it, if your memory was insufficient and the system began paging to the pagefile, this would massively slow things down as well, since every write to the pagefile involves parity calculations because your boot drive was the RAID 5.

This may come as a shock to you but even $35 processors can do about 4GB/s of parity calculations. Parity calculation, or XOR, is an extremely simple operation.

The biggest performance catch is the combining of I/O requests to circumvent the 'RAID 5 write hole'. Normally, when you write to a RAID 5 volume, the remainder of the stripe has to be read in order to calculate the parity. So in order to write you must first read. This is catastrophic to performance -- the disks are actually the bottleneck, since they have to seek a lot of the time. I/O request combining or dynamic stripe sizes are ways to prevent this 'write hole' from happening.
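To make the read-modify-write concrete, here is a tiny Python sketch (an illustration using 4-byte strings as stand-ins for stripe blocks, not code from any real RAID implementation):

[code]
# RAID 5 parity is the XOR of the data blocks in a stripe. Overwriting one block
# means reading the old block and the old parity first, then folding in the change.
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"         # data blocks of one stripe
parity = xor(xor(d0, d1), d2)                  # full-stripe write: no reads needed

new_d1 = b"XXXX"                               # small write touching only d1
old_d1, old_parity = d1, parity                # these must be READ from disk first
new_parity = xor(xor(old_parity, old_d1), new_d1)

assert new_parity == xor(xor(d0, new_d1), d2)  # matches a full recompute
print("read-modify-write parity checks out")
[/code]

The XOR itself is trivial; it's the extra reads and seeks, or the buffering needed to avoid them, that cost the performance.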
Quote:
1. RAID 0 is purely for speed where big data transfers matter. Video editing, Photoshop work, DVD authoring, audio editing.

RAID0 improves non-sequential performance as well, like writing small files.
Quote:
2. Other levels of RAID are for 2 purposes: Protection against hard drive failure, and keeping the system online when a hard drive fails.

And making one big volume out of multiple drives.
Quote:
3. RAID is no substitute for backup - RAID does not protect against spyware, viruses, accidental deletion, file/directory corruption, OS errors.

A live backup won't protect you either from these risks, although it does decrease them. You need offline backups at a different physical location to provide optimal data security.
Quote:
4. RAID 5 & 6 require a hardware-dedicated RAID card for acceptable performance.

Untrue. Software RAID5 can just as well reach ~400MB/s of write performance, beating even the very renowned Areca ARC-1210 hardware RAID with Intel IOP processor.

If you use Windows however, your options are severely limited. Too bad for you. :wink:
Quote:
5. Boot drives should be on a separate array from data drives. (Not necessarily a separate card/controller, but boot drives & data drives shouldn't share physical disks).

Why?
Quote:
6. In general, boot drives and the drive where the pagefile resides should not be on RAID 5.

I would argue that your operating system should not actually swap and cause I/O drain. And yes, I know about virtual memory; but actually swapping (causing I/O) and VM are not the same thing.
Quote:
7. Data drives should be protected with RAID 5 (for capacity), RAID 10 (for speed), or RAID 1 (neither is a large consideration - only protection matters).

With the right tools, RAID 5 is faster than RAID 10. RAID 10 also carries a bigger risk of drive failure since you need more disks to reach the same capacity -- and more drives mean more risk. Though both have redundancy, so it's not really a big deal.

Let's round up those RAID myths!
March 10, 2007 12:34:04 AM

You really felt it was necessary to resurrect a 2-week old thread just to argue my points one-by-one? Wow.

Quote:
This may come as a shock to you but even $35 processors can do about 4GB/s of parity calculations. Parity calculation, or XOR, is an extremely simple operation.


I'm very well aware of those points. Are you aware that the overhead from interrupts and context switches between user-mode and kernel-mode on the processor accounts for far more of the CPU usage than the actual XOR calculations? That's why a dedicated application-specific processor like the IOP can frequently attain greater performance with less power and less heat.

Quote:
The biggest performance catch is the combining of I/O requests to circumvent the 'RAID 5 write hole'. Normally, when you write to a RAID 5 volume, the remainder of the stripe has to be read in order to calculate the parity. So in order to write you must first read. This is catastrophic to performance -- the disks are actually the bottleneck, since they have to seek a lot of the time. I/O request combining or dynamic stripe sizes are ways to prevent this 'write hole' from happening.


Yes, and the largest thing that can prevent the read-before-write is high-speed dedicated cache, which is available only in hardware implementations.

Quote:
RAID0 improves non-sequential performance as well, like writing small files.


Generally only when cache is available and the controller can re-order the I/O requests. Many cheap motherboard implementations cannot and will not do this.

Quote:
And making one big volume out of multiple drives.


All levels of RAID do this (even JBOD). Since I was trying to highlight the difference between RAID 0 and other RAID levels, this was a non-relevant point.

Quote:
A live backup won't protect you either from these risks, although it does decrease them. You need offline backups at a different physical location to provide optimal data security.


Who said anything about a "live backup"? (What exactly is that, anyway?) A "backup" to me is always offline. That's what makes it a backup.

Quote:
Untrue. Software RAID5 can just as well reach ~400MB/s of write performance, beating even the very renowned Areca ARC-1210 hardware RAID with Intel IOP processor.


I've seen you quote this in numerous threads, but I haven't seen any benchmarks that show that. Nevertheless, even if I give you the benefit of the doubt and assume that's true, you have conceded that it wasn't on a Windows machine. Since I was speaking to the original poster in this thread and his machine is Windows, my statement to him is true. Further, I know that a hardware RAID card can achieve its stated performance in any operating system and on many system implementations, not just a specific setup. Thus my generalized recommendation is more suitable for a list of rules of thumb like this.

Quote:
Why?


Because the purposes of a boot drive and a data drive are different, and because recovering the system from disaster is far easier. The first time you have to perform disaster recovery on a Windows server where one large partition held both the OS and the data will be your last; you'll separate your boot and data drives after that.

It's much easier to purpose the arrays properly when they're separate. You can tune the boot drive for protection and minimal rack space (RAID 1), while tuning the data drives for performance (such as a SQL server using RAID 10), or for capacity (RAID 5 or 6). Further, if you want to subsequently move the data drives to another device (external unit, SAN, or another server), that is trivial since it doesn't involve the boot partition on that server.

Quote:
I would argue that your operating system should not actually swap and cause I/O drain. And yes, I know about virtual memory; but actually swapping (causing I/O) and VM are not the same thing.


Of course. But it doesn't always work that way. Your 1GB of memory that was perfectly adequate in the SQL server 6 months ago when you first started using it isn't adequate anymore because the database is now getting used more. (Murphy's law - work expands to fill and then exceed available resources). If you keep the boot drive & pagefile off RAID 5, the resultant performance impact is less when this happens. Ideally, sure, you'd order up more memory and put it in. But if you don't control the budget, you better plan ahead more carefully.

Quote:
With the right tools, RAID 5 is faster than RAID 10. RAID 10 also carries a bigger risk of drive failure since you need more disks to reach the same capacity -- and more drives mean more risk. Though both have redundancy, so it's not really a big deal.


Yes, you can make RAID 5 fast with a $1000 RAID card or your claimed non-Windows implementation. But that may not be available. Maybe if speed is what you have to have and budget is not available then you use RAID 10.

Quote:
Let's round up those RAID myths!


If that's your goal it might be better to work on the large myths first, like game loading times or RAID for RAID's sake. Nit-picking my perfectly valid points undermines that goal.
March 10, 2007 9:35:39 AM

Quote:
You really felt it was necessary to resurrect a 2-week old thread just to argue my points one-by-one? Wow.

Hehe, oops, I searched for "raid" with lots of tabs open but forgot to look at the date :oops: 

Quote:
This may come as a shock to you but even $35 processors can do about 4GB/s of parity calculations. Parity calculation, or XOR, is an extremely simple operation.


I'm very well aware of those points. Are you aware that the overhead from interrupts and context switches between user-mode and kernel-mode on the processor accounts for far more of the CPU usage than the actual XOR calculations? That's why a dedicated application-specific processor like the IOP can frequently attain greater performance with less power and less heat.

Yes, XOR is only a fraction of the final CPU usage. But still, I hear this many times. The calculation of parity is really not that fancy; even the IOP used in the Areca ARC-12xx runs at 500MHz, and your CPU is many times more powerful even though it is general purpose. It is the reordering and combining of I/O requests that consume by far the most CPU power. So: I don't want to hear that (software) RAID 5 is slow because it has to calculate parity; that's a myth.

Quote:
The biggest performance catch is the combining of I/O requests to circumvent the 'RAID 5 write hole'. Normally, when you write to a RAID 5 volume, the remainder of the stripe has to be read in order to calculate the parity. So in order to write you must first read. This is catastrophic to performance -- the disks are actually the bottleneck, since they have to seek a lot of the time. I/O request combining or dynamic stripe sizes are ways to prevent this 'write hole' from happening.


Yes, and the largest thing that can prevent the read-before-write is high-speed dedicated cache, which is available only in hardware implementations.

geom_raid5 uses request combining and needs about 8MB of RAM to do the job, although with big arrays it feels best with 20MB. Assuming the performance levels I claim for geom_raid5 are correct, this myth is also busted. Myth: for decent RAID 5 performance you need a hardware controller.

Quote:
RAID0 improves non-sequential performance as well, like writing small files.


Generally only when cache is available and the controller can re-order the I/O requests. Many cheap motherboard implementations cannot and will not do this.

No, parallelisation does not require buffer cache other than a simple FIFO buffer. Otherwise, how do you explain the numbers I get with geom_stripe (simple RAID 0 without a fancy reordering engine) under heavy random I/O, where performance increases by 63% to 300%? If you do not believe me:

[code:1:972a115f89]Single drive (ad8)
concurrency Performance in I/O's per sec. average
1 106 106 107 106
4 106 106 106 106
16 116 116 116 116
32 127 125 126 126
128 151 151 150 150
256 156 156 157 156

gstripe 4xad - 128KB stripe - FM off
concurrency Performance in I/O's per sec. average
1 173 173 173 173
4 270 270 270 270
16 338 338 338 338
32 370 370 370 370
128 444 434 434 437
256 465 465 465 465

Analysis: performance increase ranging from 63% to 300%.[/code:1:972a115f89]

This is 100% software RAID0 without any dedicated buffer cache (apart from maybe a small FIFO buffer).
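The scaling comes from plain address math: each outstanding request lands on whichever disk its offset maps to, so independent random requests keep several spindles busy at once. A minimal sketch of that mapping (assuming a simple striping layout, nothing gstripe-specific):

[code:1:972a115f89]# Minimal sketch of RAID0 address mapping (simple striping layout assumed).
# Independent random requests tend to hit different disks, which is where the
# concurrency scaling in the numbers above comes from -- no cache required.

STRIPE_SIZE = 128 * 1024   # 128KB stripe, as in the gstripe test above
N_DISKS = 4

def locate(offset):
    stripe_no = offset // STRIPE_SIZE
    disk = stripe_no % N_DISKS
    disk_offset = (stripe_no // N_DISKS) * STRIPE_SIZE + offset % STRIPE_SIZE
    return disk, disk_offset

# Four random offsets -> most likely four different disks working in parallel.
for off in (0, 5000000, 23000000, 300000000):
    print(off, "->", locate(off))[/code:1:972a115f89]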

Quote:
And making one big volume out of multiple drives.


All levels of RAID do this (even JBOD). Since I was trying to highlight the difference between RAID 0 and other RAID levels, this was an irrelevant point.

Ok, pardon :) 

Quote:
A live backup won't protect you either from these risks, although it does decrease them. You need offline backups at a different physical location to provide optimal data security.


Who said anything about a "live backup"? (What exactly is that, anyway?) A "backup" to me is always offline. That's what makes it a backup.

Well, with a 'live backup' I mean something like a second drive running in the same system. While it may decrease the risk of accidental deletion, viruses/spyware etc., it does not eliminate those risks, since viruses etc. can also target the backup drive. Proper backups are done offline and preferably offsite (a different physical location).

By the way, I would say a power spike killing multiple disks, or a fire, is the bigger risk; so a backup should at least not be connected to the same system. An easy solution is to dump the backup on another networked computer's hard drive. That can be cheap (free) and quite easy to set up with automated tools, even for Windows.
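As a trivial example of what such an automated dump could look like (just a sketch -- the paths are made up, and a real backup tool with scheduling and incremental copies would do this better):

[code:1:972a115f89]# Trivial nightly-backup sketch: copy a folder to another machine's share.
# Paths below are made-up examples; schedule it with cron or Task Scheduler.
import os
import shutil
import time

SOURCE = r"C:\Users\me\Documents"       # hypothetical folder to protect
DEST = r"\\othermachine\backups"        # hypothetical share on another box

stamp = time.strftime("%Y-%m-%d")
shutil.copytree(SOURCE, os.path.join(DEST, "documents-" + stamp))[/code:1:972a115f89]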


Quote:
Untrue. Software RAID5 can just as well reach ~400MB/s of write performance, beating even the very renowned Areca ARC-1210 hardware RAID with Intel IOP processor.


I've seen you quote this in numerous threads, but I haven't seen any benchmarks that show that.

Sorry -- let me post my benchmarks:

[code:1:972a115f89]graid5-tng (8xad, 128KB stripe, 20MB buffer, wqp10)
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
DD benchmark(1GB) Results in MB/s avg
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
4k READ 58.1 58.7 59.0 58.6
WRITE 155.5 155.8 154.3 155.2
16k READ 130.0 125.6 129.5 128.3
WRITE 308.5 306.3 306.9 307.2
64k READ 183.8 183.9 188.9 185.5
WRITE 416.9 416.7 415.8 416.4
128k READ 197.3 194.4 197.6 196.4
WRITE 421.0 426.2 399.7 415.6
1m READ 193.0 196.8 198.1 195.9
WRITE 327.6 330.3 331.0 329.6[/code:1:972a115f89]

This is raw disk performance without any filesystem buffer cache interference. As a result, reads are slow due to the lack of read-ahead; on the UFS partition read performance is >400MB/s. Write performance is somewhat lowered because the higher queue depth actually makes the combining algorithm less efficient, creating more work for the CPU.

One weakness I do not like, though, is that graid5 is mostly single-threaded. During the benchmarks on an Athlon 64 X2 3800+ the CPU usage was 55%: one full core (50%), plus another 5% on the second core (interrupts, geom I/O handling, etc.). I would get even higher results with a CPU like a Core 2 Duo -- though the physical limit of write performance in my setup was 441MB/s, so certainly not bad.


Nevertheless, even if I give you the benefit of the doubt and assume that's true, you have conceded that it wasn't on a Windows machine. Since I was speaking to the original poster in this thread, and his machine is Windows, my statement to him is true.

Well... a half truth. I think you should rephrase it as "Windows' software RAID5 implementation is slow"; as it stood, it appeared (very much) like you were speaking about software RAID5 in general. And I guess you're one of the more knowledgeable people in here, so people take what you say for granted; you have to be more nuanced.

Quote:
Why?


Because the purposes of a boot drive and a data drive are different. And because recovering the system from disaster is far easier. The first time you need to perform disaster recovery on a Windows server where one large partition held both the OS and the data will be your last, and you'll separate your boot and data drives after that.

For Windows, I guess that is easier, or for people who have just one system. For me, I just insert a live CD and I can do all the fixing I want. For Windows that does not really work, unfortunately.

It's much easier to provision the arrays for their purpose when they're separate. You can tune the boot drive for protection and minimal rack space (RAID 1), while tuning the data drives for performance (such as a SQL server using RAID 10) or for capacity (RAID 5 or 6). Further, if you want to subsequently move the data drives to another device (external unit, SAN, or another server), that is trivial since it doesn't involve the boot partition on that server.

Ok, nice points. Though I guess it depends on the user whether these are actually benefits to him/her.

Quote:
I would argue that your operating system should not actually swap and cause I/O drain. And yes, I know about virtual memory; but actually swapping (causing I/O) and VM are not the same thing.


Of course. But it doesn't always work that way. Your 1GB of memory that was perfectly adequate in the SQL server 6 months ago when you first started using it isn't adequate anymore because the database is now getting used more.

So the system uses the HDD as RAM very actively, and is basically... a Pentium 1 system with extremely high response times. A memory shortage basically means the system is severely degraded in performance. Proactive swapping probably means many seeks to the HDDs themselves -- then the memory speed drops from some 6GB/s to a mere 10MB/s at best. Oh boy, that will suck big time. :lol: 

No, really, swapping is bad. Though in the situation you describe it would help if some other parts (non-active applications) are swapped out. By the way, I heard some positive things about using video card memory as swap. And of course a 1GB flash pen drive might be nice for swap, since flash has a really low access time (no mechanical 'seeks').


If you keep the boot drive & pagefile off RAID 5, the resultant performance impact is smaller when this happens. Ideally, sure, you'd order up more memory and put it in. But if you don't control the budget, you'd better plan ahead more carefully.

Swap should not be on RAID5, yes. Except maybe on ZFS, which uses a variable stripe size, so all writes are actually 1-phase. I have not benchmarked ZFS yet, though. But I'm eager to. :) 

Quote:
With the right tools, RAID5 is faster than RAID10. RAID10 also carries a bigger risk of drive failure, since you need more disks to reach the same capacity -- and more drives mean more risk. Though both have redundancy, so it's not really a big deal.


Yes, you can make RAID 5 fast with a $1000 RAID card or your claimed non-Windows implementation. But that may not be available. If speed is what you must have and the budget isn't there, then you use RAID 10.

Well, geom_raid5 provides both more speed than RAID10 and is also cheaper, since fewer disks are needed for the same capacity. It is probably also safer, since fewer disks means less risk of a failure. The biggest drawback is that many users run Windows, which limits their options; for them RAID10 is a good alternative.

Quote:
Let's round up those RAID myths!


If that's your goal, it might be better to work on the large myths first, like game loading times or RAID for RAID's sake. Nit-picking my perfectly valid points undermines that goal.

Well, you did say some things which simply aren't true. But I agree this is not very widely known, and that's why I'm here: I'd like to correct some misconceptions and false facts that people take for granted. In fairness, they were true for the most part in the past -- it's only now that software RAID5 is demonstrating its potential.

Oh, and I'm also interested in the effect on game loading times, measured with proper benchmarking -- that is, without any misalignment, which kills the opportunity for parallelisation. I'm determined to do *proper* benchmarks. Unfortunately it takes a lot of time; I've been busy with it since September 2006... :( 
March 10, 2007 6:10:43 PM

All right, so let's sum up.

Your main point, which you want to use to dispel "myths", is that graid5 (a software implementation of RAID 5 on FreeBSD) can achieve high performance.

I will grant you that point, assuming your benchmarks are valid, since I haven't checked them myself.

However, my advice to most people in these forums is not going to change. If a person comes into these forums and says "my RAID 5 is slow", I cannot in good faith recommend he switch to FreeBSD and graid5 to solve his problem. There would be no practicality in that recommendation, even though it may indeed result in good performance.

The vast majority of users here run Windows. If there is a pervasive belief that software RAID 5 is slow and a hardware card is necessary for good performance, I wouldn't call that a myth, because it's true for any Windows user. Currently, there is no Windows software RAID 5 implementation that can compete with even a two-generation-old hardware RAID 5 implementation. If that changes in the future, I will immediately change my recommendations. But for now, to anyone here running Windows who wants a redundant array with maximum capacity and maximum performance, I will continue to recommend hardware RAID cards.

If you feel you want to participate in those threads and point out that there is a software FreeBSD RAID 5 alternative, please do so. But do not undermine, refute, or discount my recommendation of a hardware RAID card as invalid. It is not.

As to the other points, we seem to have agreed on most of them.

Thus, discussion is over.
March 10, 2007 9:06:31 PM

Here are a few example numbers, just for consideration.

I've got a couple of RAID arrays here.

One's a RAID5 in a webserver, using 6 Fujitsu MAH3091MC U160 SCSI drives (one is the hot spare). These are hooked up to a Dell PERC3 U160 hardware RAID controller with an Intel RISC RAID I/O chip and 128MB cache, in a 64-bit PCI slot.

With iostat on this array I get 42MB/s read (dd if=/dev/rsd0d of=/dev/null bs=1024k) and 35MB/s write (cat /dev/prandom > /home/somefile). The individual drives can hit 33MB/s read, so I'm not too impressed with their performance in RAID5.


The other's a RAID0 in my file server, using 8 Seagate ST3320620AS SATA2 drives, hooked up to a Highpoint RocketRaid 2220 in the 64 bit PCI slot of a PowerMac G4.

I get 145MB/s read and 101MB/s write (xbench sequential with 256K blocks). Tom's shows a similar single 7200.10 as having 63MB/s read.



RAID5 is not usually a performance choice; it's used as a compromise between redundancy and storage space, because RAID1 uses too many disks to be economical. If you want to add performance you go up to RAID50.

Using a software RAID5 in anything but a dedicated file server will use CPU cycles needed elsewhere to calculate XOR for the parity. Having swap space on the array will end up slowing the system down to write this, making swapping even more of a drain on resources. Putting swap on any kind of RAID is discouraged (I use a fixed-size swap file in its own FAT32 partition, at the start of the disk where the transfer rates are highest).

For reading, RAID5 is slightly slower than RAID0, as it needs to skip over parity blocks, which adds processing overhead.

The write speed of RAID5 is influenced a lot by the number of disks in the array, which should be a power of 2 plus one for the parity, like 3, 5 or 9 drives. This is because block sizes in powers of 2, like 32k or 64k, fit better onto that many data disks, and the parity processes better with no leftover bytes. Most SCSI hot-swap bays in Dell/HP/Intel/Compaq servers that have RAID5 controllers have 6 slots: 5 for a RAID5 array, and one for the hot spare.


If you need any more proof that disk speed has nothing to do with loading times, I'm usually the first into maps on the BF2 server I play, and that's because I've got 2GB of RAM and a Core Duo. The hard drive is a 5400RPM laptop drive. The loading time is much longer than it would take just to read all the map data, because it also has to be checksummed to make sure it hasn't been modified/corrupted. Being able to put all the map data into memory at once makes the checksumming quicker.


I'd still like to get a bunch of 133X CF cards and stick them into an ATA RAID controller though :)  (didn't work with my 8 port Escalade)
March 11, 2007 3:07:43 AM

Quote:
All right, so let's sum up.

Your main point, which you want to use to dispel "myths", is that graid5 (a software implementation of RAID 5 on FreeBSD) can achieve high performance.

My main point is: Software RAID5 is not slow. It's blazingly fast, unless a 'dumb' implementation is used.

On Windows your options are very limited; Windows has RAID layers that are 'better than nothing' but extremely weak. Basically there is no serious software RAID5 solution on Windows. But don't imply that Windows' quick-and-dirty implementation of RAID5 is representative of RAID5 as a concept. That's basically my point.

Quote:
However, my advice to most people in these forums is not going to change. If a person comes into these forums and says "my RAID 5 is slow", I cannot in good faith recommend he switch to FreeBSD and graid5 to solve his problem. There would be no practicality in that recommendation, even though it may indeed result in good performance. The vast majority of users here run Windows.

I would never recommend that users quit their favorite OS just because I think it's inferior. So for these users, indeed, Windows software RAID5 is not an option, and since the user probably does not want to switch to another OS, there's no other option than buying a RAID card.

That still does not mean the user *needs* true hardware RAID. The user might get away with a good hardware-assisted (semi-software) solution like the ones Highpoint offers (their 23xx line). It cannot catch up with geom_raid5 in terms of STR performance, but its write performance is higher than a single drive's, which means it's probably good enough.

Quote:
If there is a pervasive belief that software RAID 5 is slow and a hardware card is necessary for good performance, I wouldn't call that a myth, because it's true for any Windows user.

Hm, well, there's still a problem. Suppose Intel processors in my country cost twice what they do in yours. If I yelled "hey, you should buy AMD because Intel is too expensive to justify", you would (rightfully) complain that this is not true in general. It might be true in my country, in my situation, but then I should say so. In your case, I think you should say "Windows software RAID5 is slow"; then I have no problem with that, because it's absolutely true.

Second, there may be RAID controllers whose drivers implement the RAID in an intelligent way. They still use your processor for the XOR and the request combining, but they can be very efficient at it and achieve high performance. No true hardware RAID is needed -- though of course this is not the same as pure software RAID either.

Quote:
If you feel you want to participate in those threads and point out that there is a software FreeBSD RAID 5 alternative, please do so. But do not undermine, refute, or discount my recommendation of a hardware RAID card as invalid. It is not.

I agree: the advice you have given is valid and you should not change it. Perhaps Windows users do not need true hardware RAID with its own processor, though, as I explained. A semi-software solution might be sufficient to get fair to very good performance.
March 11, 2007 3:34:17 AM

Quote:
With iostat on this array I get 42MB/s read (dd if=/dev/rsd0d of=/dev/null bs=1024k) and 35MB/s write (cat /dev/prandom > /home/somefile). The individual drives can hit 33MB/s read, so I'm not too impressed with their performance in RAID5.

Your benchmark is flawed. You should use:
dd if=/dev/zero of=/dev/rsd0d bs=128k count=10000

Using pseudo-random data might be a bottleneck in itself, and I would suggest using dd for the task instead of cat, simply because I do not know whether cat is a limiting factor when done that way. dd is pretty much optimized for moving and processing data.
Quote:
The other's a RAID0 in my file server, using 8 Seagate ST3320620AS SATA2 drives, hooked up to a Highpoint RocketRaid 2220 in the 64 bit PCI slot of a PowerMac G4.

I get 145MB/s read and 101MB/s write (xbench sequential with 256K blocks). Tom's shows a similar single 7200.10 as having 63MB/s read.

That's a pathetic performance score. But you are still using a legacy PCI bus for the task (even if it's 64-bit). I get 177MB/s of STR with just 2 drives (Samsung T166 500GB) in RAID0 (geom_stripe).

Quote:
RAID5 is not usually a performance choice; it's used as a compromise between redundancy and storage space, because RAID1 uses too many disks to be economical. If you want to add performance you go up to RAID50.

I have tried this, but found that it actually degrades performance, because now you might have a double 2-phase write going on, plus more overhead in the combining algorithms. I would vote against RAID50, unless benchmarks show it performs better on your system and configuration.

Quote:
Using a software RAID5 in anything but a dedicated file server will use CPU cycles needed elsewhere to calculate XOR for the parity.

Just a few of them, yeah. But the major task for the CPU is combining I/O requests; the XOR itself is hardly worth mentioning.

Provided your CPU can do 400MB/s at 65% utilization (as mine does, an Athlon 64 X2 3800+), writing at 20MB/s will consume only about 20/400 * 65% = 3.25% of your CPU. And if your system really wants to transfer extremely large amounts of data, it's logical to assume you're waiting for that to finish, so I'd rather spend as much CPU time on it as needed so it finishes faster. Do you not agree?
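Spelled out as a quick sanity check (same assumed numbers as above, nothing more):

[code:1:972a115f89]# Quick sanity check of the CPU-cost estimate above (assumed numbers only).
max_write_mb_s = 400.0   # throughput at which the CPU sits at 65%
cpu_at_max = 65.0        # percent
actual_mb_s = 20.0       # the workload in question

cpu_cost = actual_mb_s / max_write_mb_s * cpu_at_max
print("%.2f%% CPU" % cpu_cost)   # -> 3.25% CPU[/code:1:972a115f89]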

Quote:
Having swap space on the array will end up slowing the system down to write this, making swapping even more of a drain on resources. Putting swap on any kind of RAID is discouraged

1) I would vote against swapping at any time, except in emergency situations. Either way, performance is bad.
2) Why would putting swap on RAID0 be a bad thing? It will certainly speed up the swapping, although swapping remains very slow.

Quote:
For reading, RAID5 is slightly slower than RAID0, as it needs to skip over parity blocks, which adds processing overhead.

Yeah, true, though it does not *have* to be slower; it could very well be just as fast and only cost a bit of extra processing overhead.

Quote:
The write speed of RAID5 is influenced a lot by the number of disks in the array, which should be a power of 2 plus one for the parity, like 3, 5 or 9 drives. This is because block sizes in powers of 2, like 32k or 64k, fit better onto that many data disks, and the parity processes better with no leftover bytes.

Nice theory. Though, in my benchmarks there was no real *relative* difference between 4-disk and 5-disk RAID5 arrays. But I will investigate further.

It's certainly better when it gets a full stripe (with 5 disks and a 64KB stripe size that is (5-1) * 64KB = 256KB) right away. But since MAXPHYS (the largest I/O request size) is 128KB by default, it would never get such a large request, so requests end up being buffered in the combining thread of the RAID5 algorithm anyway.
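To make that arithmetic concrete (plain numbers only, nothing geom_raid5-specific): the full-stripe size is (disks - 1) * stripe size, which can be compared against a 128KB MAXPHYS-sized request:

[code:1:972a115f89]# Full-stripe size vs. the largest single I/O request (MAXPHYS, 128KB default).
# Plain arithmetic only -- nothing here is geom_raid5-specific.

MAXPHYS = 128 * 1024

def full_stripe_bytes(n_disks, stripe_kb):
    return (n_disks - 1) * stripe_kb * 1024

for disks in (3, 4, 5, 8):
    fs = full_stripe_bytes(disks, 64)
    note = "fits in one request" if fs <= MAXPHYS else "needs combining"
    print("%d disks, 64KB stripe -> full stripe = %dKB (%s)" % (disks, fs // 1024, note))[/code:1:972a115f89]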

Quote:
If you need any more proof that disk speed has nothing to do with loading times, I'm usually the first into maps on the BF2 server I play, and that's because I've got 2GB of RAM and a Core Duo. The hard drive is a 5400RPM laptop drive. The loading time is much longer than it would take just to read all the map data, because it also has to be checksummed to make sure it hasn't been modified/corrupted. Being able to put all the map data into memory at once makes the checksumming quicker.

You mean the filesystem buffer cache. Yeah, that will speed things up significantly, so a good portion of RAM really helps -- but only for data that has been read at least once before. It has to reach your RAM at some point.

This is the same as starting Firefox for the first time after bootup (which takes some time), then closing it and starting it again after 5 seconds: the second start is much faster, mainly due to the filesystem buffer cache. But if you're playing different maps which have not been 'fetched' before, lots of RAM won't help you -- only when you play the same map again afterwards.

Quote:
I'd still like to get a bunch of 133X CF cards and stick them into an ATA RAID controller though :)  (didn't work with my 8 port Escalade)

Hehe yeah that would be very cool. :) 
Flash is extremely fast at random I/O and other non-sequential access because it has no seek times; its access time is basically 0.0ms, just like RAM.
Too bad it can only be written a limited number of times, though; if they solve that and offer 64GB flash disks for a reasonable price, I'll buy them for sure. :D 