A question about RAID 1 on 2 drives.

:pfff: [Rephrased due to PUI (Posting Under the Influence of Alcohol) last night] :non:

Motherboard: Gigabyte GA-Z77X-UD5H
HDDs: 2x 1TB Seagate Barracudas, 1x 500GB Western Digital Caviar

With this motherboard, I can set up mirroring (RAID 1) using the on-board RAID controller (I don't have the board yet, but the specs on Newegg say which SATA ports to use for RAID). Or I can set it up in Windows as a Dynamic Disk mirror.

I'd like to know what the community recommends as far as setting up mirroring (RAID 1) for storage redundancy.

1 - With controller-based RAID 1, can I disconnect a drive while the system is on (via a hot-swap bay on my case) and plug it into another PC to copy files over? I didn't see any information on this in the motherboard specs or on Gigabyte's web site, so I figured I'd ask.

2 - If something happens to the motherboard, do I lose the array, and with it all the data on the disks? Or can I connect both drives to another board and rebuild the array?

3 - If the answer to #2 is that the array can't be moved, would it be better to just set up the mirror in Windows 7, so that I can move the drives between Windows installations freely, as long as I tell Windows to use each disk as a dynamic disk in the array?

4 - Which RAID option would you recommend for 5x 1TB hard drives? RAID 3? RAID 5?

I read the sticky, but didn't see the answers in there. I figured I would ask now, BEFORE the equipment from Newegg arrives, so that I can go out and purchase more drives from Best Buy. My equipment arrives tomorrow, and I'm super excited!


Neb :hello:
  1. Shameless bump. Parts arrive today, but I guess I'll just go with a 5-disk RAID 5.
  2. I would recommend using hardware RAID 1. Normally the individual drives should be readable on a different controller if it is simply RAID 1, in the event that your motherboard goes bad, etc. As for 5x 1TB disks, definitely go RAID 5 over RAID 3, but it really depends on your requirements. RAID 10 across 4x 1TB disks will actually be faster for writes, but will give you only 2 disks' worth of space. I would then keep the 5th disk as a spare for the array.
  3. Best answer
    When using any RAID1 system, you should NOT plan to disconnect any RAID member during operations. Doing so just MIGHT cause some data corruption that could be difficult to recover from. RAID1 is very good at recovering from faults, but why tempt fate?

    Even if you remove a HDD from a RAID system while the unit is shut down, then re-start, the system will understand that to mean that the RAID array is broken because one member is not operating properly. The RAID management system then will handle this by reverting to using the remaining HDD as a stand-alone unit, just as if you had not removed a drive, but it had failed for some other reason. It will then expect you to replace the faulty HDD and will assist you in restoring the RAID1 array when this has been done. "Restoring" means duplicating the still-good HDD back to the new replacement HDD.

    So, if you remove one HDD from the RAID1 array while it is shut down, and then copy to the removed unit some files from another machine, and then re-install it before starting up, what will happen? The system will detect that the two units in the RAID array are no longer identical, and will try to resolve that by making the "faulty" unit exactly match the "good" unit. Likely that would mean the stuff you copied to the removed-and-replaced drive would be wiped out or corrupted as its contents were adjusted. Do not do this.

    Basically, RAID1 and "Hot Swapping" do NOT mix! Do not try this at home.

    If you need to copy files from another machine to your RAID1-equipped machine, do it some other way. If necessary, use a third HDD you can move between machines as a stand-alone drive in each case. Or, consider connecting machines on a small network and setting up sharing access to drives so you can copy from one machine's drives to the other's.

    Item 2: Generally, RAID1 is the easiest to migrate to another machine and RAID control system. More than that, I'd say it's a VERY good bet that, if you lost the mobo and had to replace it, you should replace with a new mobo that also uses an Intel chipset. Yours uses the Z77 chipset. I would expect that, in future, most Intel chipsets deployed on new mobos will use virtually the same RAID management systems, so that they would have no trouble working with HDD's written by your Intel Z77 chipset. The result could well be that you would not even know there was an issue - it would just work. But even if it did not work right away, usually there are some relatively easy tools to use to gain access to ALL of the files on a RAID1 pair. Basically, all that any new RAID1 management system has to do in the worst case is treat ONE of the disks like the surviving good disk after a failure of the other, act like it is working with a "broken" RAID1 array, and continue on that way to use the "good" disk and maybe "restore" the other. So, with RAID1, it is very UNlikely that you would lose your data, especially if you moved to another mobo with an Intel chipset.

    Item 3: Many people find the RAID systems built into Windows not so good, and probably slower than a system built into your mobo and BIOS. The only advantage is the one you cite - a future version of Windows is not likely to forget how to deal with a RAID1 array written by an earlier Windows.

    Item 4: tokencode's comments are good. One thing you should understand. In all of the RAID3, RAID4 and RAID5 systems, the RAID system can automatically recover completely from the failure of ONE of the HDD's in the array. BUT after you have replaced the faulty HDD and during the time that array rebuilding is happening, two effects emerge. One is that the system is doing a LOT of work, so its performance is slowed down. And this can take from hours to several days, depending on what else is being done. The other is that EVERY disk is being used heavily for the rebuilding operation, and all of them are just as old and worn as the one that failed. So there is a possibility (small) that the rebuilding operation itself could cause the failure of a second HDD, and that is catastrophic. The system cannot recover itself from the failure of two HDD's. SO: you MUST still have a backup system. I actually saw this happen in a professionally-managed server system. Two drives in a 5-drive RAID5 system failed, and once they were replaced the system had to be rebuilt from backup tapes. They were not today's current huge drives and they were in a busy server in constant use, but it took over 3 days to restore. RAID10, by the way, can recover from a single HDD failure and MAY be able to recover from 2 drive failures - it all depends on which two.

    The only RAID system that can recover from simultaneous failure of TWO drives is RAID6, which your machine can't do within its own RAID tools. But VERY few people would use such a system for a home unit - it tends to be used in large enterprise server systems.
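    To put rough numbers on these trade-offs, here's a quick illustrative sketch (plain Python of my own, not output from any RAID tool) of usable capacity and the worst-case number of drive failures each common level is guaranteed to survive:

```python
def raid_summary(level, n_drives, size_tb):
    """Usable capacity (TB) and guaranteed-survivable drive failures."""
    if level == 0:
        return n_drives * size_tb, 0          # striping only: no redundancy
    if level == 1:
        return size_tb, n_drives - 1          # every drive holds a full copy
    if level == 5:
        return (n_drives - 1) * size_tb, 1    # one drive's worth of parity
    if level == 6:
        return (n_drives - 2) * size_tb, 2    # two drives' worth of parity
    if level == 10:
        # Striped mirrors: half the raw space; guaranteed to survive only
        # ONE failure (a second failure is survivable only if it hits the
        # other mirrored pair).
        return (n_drives // 2) * size_tb, 1
    raise ValueError(f"unknown RAID level {level}")

for level, n in [(5, 5), (10, 4), (6, 5)]:
    cap, tol = raid_summary(level, n, 1)
    print(f"RAID {level} on {n}x 1TB: {cap} TB usable, survives {tol} failure(s)")
```

    Note how RAID5 on five 1TB drives gives 4 TB but still only survives one failure - which is exactly why the rebuild-window risk above matters.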

    As a final note, you may hear the term "hardware RAID". To be correct, such a system means that all of the RAID operations are being carried out by dedicated hardware (a processor, HDD controllers, BIOS and cache RAM) on a separate card usually plugged into a PCIe slot. This moves virtually all of the work off the mobo and its resources, thus speeding up overall operations. But such add-on cards are expensive. (Moreover, some cheap add-on cards actually are not fully hardware RAID systems - they are more like the systems "built into" your mobo, described next.) The opposite, "software RAID", comes in two sub-versions. One is what Windows itself makes available - code contained in an app that runs on the mobo and its CPU and resources. The other is what is in your mobo's BIOS (and most others these days). The code is in the BIOS, and many of the operations necessary can be done in the support chipset (not in the CPU), so it is faster than a purely software-only RAID. But it still uses mobo and CPU time and resources, and is not as fast as a full dedicated RAID controller board.
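    The core of what any RAID1 layer does - whether on a dedicated card, in the BIOS, or in Windows - is duplicate every write to both members. A toy in-memory sketch of that principle (my own illustration, nothing like a real driver):

```python
class ToyMirror:
    """Toy RAID1: each write is duplicated to both members, so either
    copy alone can serve reads if the other is lost. Real controllers
    add caching, rebuild logic, and failure detection on top of this."""

    def __init__(self, size):
        self.members = [bytearray(size), bytearray(size)]

    def write(self, offset, data):
        # This duplicated work is what software RAID spends CPU time on.
        for member in self.members:
            member[offset:offset + len(data)] = data

    def read(self, offset, length, member=0):
        return bytes(self.members[member][offset:offset + length])

m = ToyMirror(16)
m.write(0, b"hello")
assert m.read(0, 5, member=0) == m.read(0, 5, member=1) == b"hello"
```

    A hardware card does this duplication (plus caching) on its own processor; the BIOS/chipset and Windows versions do it with your CPU, which is where the speed difference comes from.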
  4. Paperdoc,

    Thank you very much for your response! Wow, an astounding amount of information!

    I, too, have seen multiple drives fail in RAID arrays, although I was not the technician working on them. The vendor who supplied the drives flew out from their company building to where we were, and had new drives installed within a few hours - something that I found very admirable, considering. But I guess that is why corporations pay large sums of money for the service, eh?

    This is my first time actually setting up RAID - before now, I haven't really paid much attention to it. Of course, I can also blame this on me being lax due to the advent of virtual computing and point-in-time "snapshots", etc. It's my own fault for being in the dark on the issue!

    Something I was wondering about: all of the new motherboards and PC cases often say "comes with xx hot-swap bays", etc. Is that just for normal SATA drives? The last time I "accidentally" disconnected a SATA cable from a drive that was spun up in a running Windows 7 installation, it rebooted the system.

    I think the plan now will be to go with RAID 10 - I get the benefits of RAID 0 and 1, and while I understand that scalability is an issue, I am going to purchase 2 more 1TB drives on my way home from work today (for a total of 4x 1TB). This way, I'll have 2TB of total storage space, with decent performance for read/write and they're mirrored.

    I would go with a simple RAID 1; however, the read/write times for the amount of data that I'll be transferring once a month or so are horrendous as it is with 1 drive... I can't imagine that it would be faster with mirroring only. Unless I'm misunderstanding something, at any rate!
  5. Best answer selected by Nebulocity.
  6. Hot Swap bays are a bit different from regular ways of mounting HDD's. Certainly these days such bays will be designed for SATA HDD's - I don't know if anyone has made any for older IDE drives. The drives that plug into such bays are not special - they are standard SATA units. The differences are in both hardware and software.

    On the hardware side, the connectors in the bays are constructed so that certain power and data lines are connected BEFORE others as you insert a HDD unit, and these same are then disconnected last as you remove. On the software side (the OS and BIOS), the drives MUST be set up as AHCI devices (the normal way to use SATA drives). They cannot be set in BIOS as IDE Emulating devices, and the OS must have the AHCI device drivers installed. (For the device drivers, Vista and Win 7 take care of all that themselves, whereas Win XP must have the driver(s) installed.)

    Moreover, support for Hot Swapping was not part of the original SATA specs. It IS part of the eSATA specs, and many chipmakers now include that feature in their SATA internal HDD controllers on the mobo even though it's not required for internals. So obviously the ones that promote Hot Swap bays in the machine are using a chipset and BIOS that adds that feature. Hot swap support, among other things, means that certain disk write operations are carried out quickly rather than delayed, thus ensuring that all the files on the HDD are completely up to date. And that, in itself, requires a certain ability to keep on working briefly if a power interruption happens during a write.

    Hot Swapping HDD units MIGHT make some sense in a RAID1 or RAID5 or RAID6 system, but certainly NOT in a RAID0. So I think it a bad idea for RAID10. But even then, that can only be done IF the RAID system is set up to handle that kind of event. This ability is a common feature of more advanced RAID controllers used in enterprise servers (so the system can keep running when a failed unit must be replaced and the array rebuilt), but is not always part of a mobo's built-in RAID abilities.

    A RAID10 array will give you the mirroring peace of mind of RAID1 (plus its small speed improvement) and the speed advantage of RAID0 (not huge, but still there), without the huge DISadvantage of RAID0 - NO tolerance for disk failure. I'm no RAID0 fan. To me, it is worse than a stand-alone HDD on failure tolerance. It is possible for a RAID10 array to lose everything if two specific units fail simultaneously, but that event is rare.

    When you do your big monthly data transfers, how is that done? If it is between internal HDD units, there's not much you can do to speed it up. If it involves an external drive connected to your computer via eSATA or Firewire, those two are almost as fast as internal. BUT if it involves using an external unit connected by USB2, that system's data transfer rate is often only half of what you can get with internal or eSATA drives. And don't forget, the RAID system you are setting up uses the CPU and mobo resources to some extent, so try not to have the computer doing other heavy jobs at the same time.
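    To see why the interface matters for those monthly transfers, here's the simple arithmetic (the throughput figures are rough ballpark numbers I'm assuming, not measurements of your hardware):

```python
def transfer_hours(gigabytes, mb_per_s):
    """Hours to move a given number of GB at a sustained MB/s rate."""
    return gigabytes * 1024 / mb_per_s / 3600

# Rough sustained sequential rates (illustrative assumptions only):
rates = {"USB 2.0": 35, "eSATA / internal SATA HDD": 110}
for name, rate in rates.items():
    print(f"500 GB over {name}: {transfer_hours(500, rate):.1f} h")
```

    At these assumed rates, 500 GB takes roughly 4 hours over USB2 but well under an hour and a half over eSATA or an internal drive - the same ratio the paragraph above describes.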

    Let me repeat a common reminder: no RAID system can replace a true backup system. If your data are important to you, plan a good backup system, and then USE IT!

    Thanks for the BA designation!
  7. Jeez...I have long been perplexed by the whole RAID thing.
    I stumbled on this thread by accident. paperdoc's explanation here is, to say the least... awesome.
    I have a useable understanding now of RAID vs backup etc... thanks man.. a million!

    BTW I came here looking for an answer to a question I googled (with no luck).
    I want to set up a RAID 1 on my new WD MyBook.
    How do I know or how can I see or be certain there are actually files on the 2nd drive of the RAID?
    Sounds silly but I just want to make sure they are there :)

    I have a 2012 15" MacBook Pro.

    thanks again paperdoc.
  8. This is a follow-up question.

    Z77X-UD3H motherboard
    2x 1TB RAID mirror hard drives
    Windows 7 Home Premium installed on an SSD

    Had SSD issues - chkdsk would not complete. So I decided to buy a new Vertex 3 drive and reinstall Windows 7. Windows 7 would not install from disk while my RAID disks were listed in the boot loader, so I went into the BIOS boot settings and disabled them. They were listed as UEFI disks.

    Was able to install windows.
    Windows boots up and does not see my RAID array.
    I look at my disks. One disk has some really old files on it; the other disk is listed as not formatted. Many files are missing.

    I have had RAID mirror arrays break on me before (due to the motherboard battery dying) and I did not lose files. I did not take out either of the RAID drives.

    I went back into the BIOS and I don't see the option for the UEFI RAID disk. They are both listed separately. How do I repair this array without losing data? And what did I do wrong? I had to disable it in order to reinstall Windows. I thought RAID mirroring was not that fragile - shouldn't I still be able to see all my files?

    Another really odd thing: I booted off my Windows 7 CD and tried to restore an image file I had on my RAID disk. It said I couldn't because the backup was in UEFI format. So I take it my RAID disks were configured as UEFI (it's the default on my motherboard; I don't need it). This is why I had to reinstall. However, when I disabled the RAID and reinstalled, I was then able to restore my image file. This image is very old - I really need the Windows restore points and an image I took just yesterday so I can get my current data.