Advice for Converting an Intel ICH8R RAID 5 to RAID 10

I'm looking to convert my 3-HDD RAID 5 system to a 4-HDD RAID 10 system. I have Intel Matrix Storage Manager 7.0.0.1020 installed, but there doesn't seem to be an easy way to convert within the software. I wanted to see if anyone has suggestions on the best way to make this conversion.

The option I see right now:

Remove one of the RAID 5 drives and use it plus the new HDD to create a RAID 0 volume. Ghost over the existing data, and use the last two RAID 5 HDDs for the mirror.

I just don't know if this will work. Has anyone done this before who might offer suggestions?

System is on a P5B Deluxe with 2 GB of RAM running an E6400 (2.13 GHz Core 2 Duo). It currently has three 400 GB HDDs, and I'm looking to go to four 400 GB HDDs.
  1. Why convert to a RAID 10? Why not a 4-disk RAID 5? Your read performance will be better, and I'd bet write performance too. Plus, your array will end up with 1.2TB of total storage, while the RAID 10 will only be 800GB.
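
    To make the capacity math concrete, here's a quick back-of-envelope sketch (plain Python, purely illustrative):

    ```python
    # Usable capacity for n drives of a given size (GB) -- illustrative only.
    def raid5_capacity(n_drives, size_gb):
        return (n_drives - 1) * size_gb   # one drive's worth of space goes to parity

    def raid10_capacity(n_drives, size_gb):
        return (n_drives // 2) * size_gb  # half the drives just mirror the other half

    print(raid5_capacity(4, 400))   # 1200 GB (~1.2 TB)
    print(raid10_capacity(4, 400))  # 800 GB
    ```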

    Before you do anything, BACK UP ALL OF YOUR DATA. I cannot stress this enough. Migrating arrays can be tricky; don't risk your data in the process.

    Ryan
  2. I'm looking for the added punch of the read/write rates of a RAID 0 setup. Does anyone have performance numbers for a three-disk vs. four-disk RAID 5 array? I don't need the space (yet), but I'd really welcome any suggestions on how to convert between the different RAID types.
  3. There's an Intel whitepaper that lists the supported migration possibilities for Matrix RAID.
    http://www.intel.com/design/chipsets/applnots/310855.htm
    Appendix C on the last page.
    The user's manual is also available here...
    http://www.intel.com/support/chipsets/imst/sb/CS-012543.htm

    Looks like migrating from RAID 5 to any other mode isn't supported, so you're right, you'll have to do it the hard way.

    Your plan seemed sound to me but as rninneman pointed out, Murphy's law always applies.

    Looking again at the migration matrix, migrating from a 2-disk RAID 0 to RAID 10 isn't possible either, so it won't be possible to ghost to a RAID 0 and then convert to RAID 10.
    It looks as if the only way to get to RAID 10 is from a single drive or from scratch.

    That means you have to find a way to back up or ghost to another disk that won't be part of the RAID 10, and then ghost or copy back to the RAID 10 once it's created.

    rninneman's suggestion of adding a disk to your RAID 5 doesn't look possible either, since there's no migration shown for 3-disk RAID 5 to 4-disk RAID 5. Perhaps adding a disk isn't the same as migration, but I didn't see anything in the user's manual about adding a disk either.
    These manuals could be (probably are) outdated, though, so maybe it's possible with recent versions of the Matrix drivers & firmware.
  4. RAID 5 gives you the same advantages as RAID 0, plus it adds redundancy. RAID 0 is just striping; RAID 5 is striping with distributed parity. RAID 5 performance scales with each additional disk just like RAID 0. The 4-disk RAID 5 will almost certainly read faster than the 4-disk RAID 10, because the RAID 10 essentially has only 2 striped members, while the 4-disk RAID 5 essentially has 3 striped members to pull data from. Since you have an ICH8R southbridge, your write performance should be better with RAID 5 too.
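
    As a crude sketch of that striped-member argument (plain Python; the per-disk rate is an assumed figure for drives of this class, not a measurement):

    ```python
    # Naive sequential-read model: throughput scales with the number of members
    # reads are striped across. Per the reasoning above, a 4-disk RAID 10
    # stripes across 2 members and a 4-disk RAID 5 across 3.
    PER_DISK_MBPS = 60  # assumed single-drive rate -- a guess, not a spec

    def est_read_mbps(striped_members, per_disk=PER_DISK_MBPS):
        return striped_members * per_disk

    print(est_read_mbps(2))  # 4-disk RAID 10: ~120 MB/s
    print(est_read_mbps(3))  # 4-disk RAID 5:  ~180 MB/s
    ```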

    You say you don't need the storage just yet, but if you're going through the trouble, why migrate twice?

    Ryan
  5. Thanks. Looks like the latest one with southbridge info can be found here:

    http://www.intel.com/support/chipsets/imsm/sb/CS-020674.htm

    So while it looks like it should be easier to convert from a RAID 10 to RAID 5, there doesn't seem to be any way to convert to RAID 10.

    Does anyone know if I HAVE to have 4 drives connected at the time of a RAID 10 creation?

    It looks like I might also have an option of creating a 'RAID Ready' system.

    Thoughts?
  6. Quote:
    RAID 5 gives you the same advantages as RAID 0, plus it adds redundancy. RAID 0 is just striping; RAID 5 is striping with distributed parity. RAID 5 performance scales with each additional disk just like RAID 0. The 4-disk RAID 5 will almost certainly read faster than the 4-disk RAID 10, because the RAID 10 essentially has only 2 striped members, while the 4-disk RAID 5 essentially has 3 striped members to pull data from. Since you have an ICH8R southbridge, your write performance should be better with RAID 5 too.


    You will not get the write performance in RAID 5, as the array controller must generate the parity data in case of a drive failure. In RAID 0, 1, and 10, you don't have parity data to maintain, so writes are faster.
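
    To show where that extra work comes from, here's a toy sketch of the parity math (plain Python; real controllers do this per stripe unit, not per byte):

    ```python
    # RAID 5 parity is the XOR of the data blocks in a stripe.
    data_blocks = [0b10110100, 0b01101001, 0b11100010]  # toy blocks, one per drive
    parity = 0
    for block in data_blocks:
        parity ^= block

    # A small write is a read-modify-write: read the old data and old parity,
    # XOR the old data out and the new data in, then write both back. Those
    # extra reads and the extra parity write are work RAID 0/1/10 never do.
    old_data, new_data = data_blocks[0], 0b00011111
    new_parity = parity ^ old_data ^ new_data
    print(bin(parity), bin(new_parity))
    ```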

    I have everything from 3 to 54 drives in RAID 5 arrays, and the write performance only really improves with a write cache and Fibre Channel disks.

    In this scenario, you can test it, but I don't think you'll get better write performance from RAID 5.
  7. Quote:
    Does anyone know if I HAVE to have 4 drives connected at the time of a RAID 10 creation?


    Yes, as you need a minimum of 4 disks for RAID 10 or 01. From the above text, it appears you can't migrate from RAID 0 to RAID 10, so you'll need all drives together at creation.
  8. Quote:
    RAID 5 gives you the same advantages as RAID 0, plus it adds redundancy. RAID 0 is just striping; RAID 5 is striping with distributed parity. RAID 5 performance scales with each additional disk just like RAID 0. The 4-disk RAID 5 will almost certainly read faster than the 4-disk RAID 10, because the RAID 10 essentially has only 2 striped members, while the 4-disk RAID 5 essentially has 3 striped members to pull data from. Since you have an ICH8R southbridge, your write performance should be better with RAID 5 too.


    You will not get the write performance in RAID 5, as the array controller must generate the parity data in case of a drive failure. In RAID 0, 1, and 10, you don't have parity data to maintain, so writes are faster.

    I have everything from 3 to 54 drives in RAID 5 arrays, and the write performance only really improves with a write cache and Fibre Channel disks.

    In this scenario, you can test it, but I don't think you'll get better write performance from RAID 5.

    I agree with you that in enterprise environments, RAID 5 writes can only be accelerated by a controller with a dedicated XOR engine and write-back cache. In a desktop environment with 3 or 4 drives, though, the ICH8R coupled with a C2D has proven to be a highly competent RAID setup.

    Fibre Channel only helps when the total drive throughput can saturate the interface. Four Raptors have a hard time saturating a SATA2 controller, and if his drives are 400GB, they are not Raptors.

    Tom's did a review back in January of the nForce 6 MCP vs. the ICH7 vs. the ICH8. (Link) The ICH8 was able to write faster in RAID 5 than in RAID 10. RAID 0 could write faster than RAID 5, and they were roughly the same for read performance; that's moot, though, because RAID 0 provides no redundancy.

    I stick with my previous recommendation.

    Ryan
  9. Ryan, thanks for the link to the review. Looks like there's no reason to have a RAID 10 if you have an ICH8R. So, does anyone have suggestions on adding a 4th drive to my RAID 5 setup?
  10. No problem. Unfortunately, it's virtually impossible to create a method for a controller to migrate an additional drive into a RAID 5 array. The best thing for you to do is back up all of your data and start from scratch again. I know that's a time-consuming process, but trust me, it's worth it. Plus, you can change your stripe size again; depending on what you mostly work with, 64k or 128k will provide the best performance. For example, if you work with a lot of pictures, video, or music files, or just files larger than 192-384k, you will get better performance with the largest available stripe size for a 4-drive RAID 5 array.

    Ryan
  11. Quote:
    No problem. Unfortunately, it's virtually impossible to create a method for a controller to migrate an additional drive into a RAID 5 array.


    Not true at all. Virtually all enterprise-level controllers (and a few prosumer models now, like some of the Highpoints) can easily do this. Intel just chose not to implement this functionality on the ICH7/8/9.

    Quote:
    For example, if you work with a lot of pictures, video, or music files, or just files larger than 192-384k, you will get better performance with the largest available stripe size for a 4-drive RAID 5 array.


    Not necessarily true. That's highly dependent on the controller. Unless you're working with a library of gigabytes of video/media files, the controller's default stripe size of 32K or 64K will work best. Certainly for Windows boot times, anything larger than 64K will likely reduce performance.

    As for the original poster, the only way to perform this migration is to back up the data and recreate the RAID 10 (for RAID 5 -> RAID 10 migration), or back up the data and recreate a 4-drive RAID 5 (for 3-drive RAID 5 -> 4-drive RAID 5).
  12. Quote:
    No problem. Unfortunately, it's virtually impossible to create a method for a controller to migrate an additional drive into a RAID 5 array.


    Not true at all. Virtually all enterprise-level controllers (and a few prosumer models now, like some of the Highpoints) can easily do this. Intel just chose not to implement this functionality on the ICH7/8/9.

    Sorry, I was in a hurry to reply. I should have been clear that without a dedicated controller, the load on a system migrating in an additional drive would be immense. There is no easy way around that: in a desktop environment, a time-consuming process like that can easily be interrupted, whereas in an enterprise environment the machines are usually running 24/7 anyway and the controller can work on the migration in the background.

    Quote:
    For example, if you work with a lot of pictures, video, or music files, or just files larger than 192-384k, you will get better performance with the largest available stripe size for a 4-drive RAID 5 array.


    Not necessarily true. That's highly dependent on the controller. Unless you're working with a library of gigabytes of video/media files, the controller's default stripe size of 32K or 64K will work best. Certainly for Windows boot times, anything larger than 64K will likely reduce performance.

    Show me benchmarks that show this. From my own testing and benchmarks I've seen in the past, as long as the file size exceeds the number of drives times the stripe size, performance improves. (The same goes for cluster size when formatting, although with cluster size other factors come into play when choosing an optimal size.)

    Ryan
  13. Quote:
    Show me benchmarks that show this. From my own testing and benchmarks I've seen in the past, as long as the file size exceeds the number of drives times the stripe size, performance improves. (The same goes for cluster size when formatting, although with cluster size other factors come into play when choosing an optimal size.)


    Well, the stripe size is the amount of disk space that is distributed across all drives in the array for one "chunk," for lack of a better term. Thus, files that are significantly smaller than the stripe size don't get the benefit of striping, because fewer physical disks get to participate in retrieving them.

    For example, if the stripe size is 256K on a 4-drive RAID 0, that's a 64K block on each drive within the stripe. A 96K file will likely span only 2 disks of the RAID 0, and thus can't get the 4x transfer-rate benefit that a 256K+ file would get.
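
    A tiny sketch of that spanning math (plain Python; "stripe size" here means the full-stripe width, as defined above, and the file is assumed to start on a chunk boundary):

    ```python
    import math

    # How many members of the array a file actually touches, best case.
    def disks_touched(file_kb, stripe_kb, n_drives):
        chunk_kb = stripe_kb / n_drives  # per-disk block within the stripe
        return min(n_drives, math.ceil(file_kb / chunk_kb))

    print(disks_touched(96, 256, 4))   # 2 -- only half the members help
    print(disks_touched(256, 256, 4))  # 4 -- the full 4x striping benefit
    ```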

    If the controller is a high-end, hardware-based controller with onboard cache, then it may not matter: the read-ahead caching and command-reordering algorithms will minimize the delay this would cause. But on an onboard controller like the ICH6/7/8/9 with no cache, I believe the penalty would be greater.

    Windows uses many small files during startup. This is why you see Raptors win over RAID 0 for Windows startup - their access time advantage is more important than RAID 0's transfer rate advantage in that case.

    Nevertheless, you've now got me curious about real results, so I'm thinking of doing my own benchmarks and posting them here. I believe I have a motherboard available with ICH RAID on it, and I'll set up a RAID 0, build a Windows installation, Ghost it, and time some startups using different stripe sizes. It may take me a while to compile that data, but I will work on it.
  14. I'm looking forward to seeing your results.
    I'm getting ready to build a new system and planning to make a RAID 0 boot drive with 2x Raptors.

    I found a couple of threads on OCForums that claim 128K is the best stripe size for Intel Matrix controllers, even on the Windows startup partition. Benchmarks included.
    http://www.ocforums.com/showthread.php?t=476776
    http://www.ocforums.com/showthread.php?t=467848

    I'm not totally convinced, so I'm still trying to decide between 128k and 64k.
    I'm looking not just for minimum startup times but also for the lowest latency loading OS files after startup.
  15. I think we are somewhat in agreement. My point is that it depends on what you are doing with the machine; obviously a desktop machine will have different RAID needs than a database server. I wouldn't base my stripe size on Windows startup time, though. Once the machine is booted, that's what matters (unless you reboot a lot, I guess). Since large media files generally mean large sequential reads and writes, larger stripes perform better on a less sophisticated controller, because it cannot reorder commands as efficiently as a dedicated controller.

    Everyone's mileage will vary because everyone uses their system differently. Generally, if you are not sure, 64k is a safe bet. It will be interesting to see what happens in the near future when hard drive manufacturers shift from 512-byte sectors to 8k sectors; that will affect the minimum stripe size, depending on the number of drives in the array.

    In all fairness to the OP (sorry for kind of hijacking this thread): what do you do with your computer? Gaming, digital photography, video editing, etc.? I would recommend RAID 5 no matter what, but the stripe size should be adjusted according to what you do.

    Ryan
  16. I use it for just about everything. A little bit of FPS games, more just storing pictures and video. One of the things I want to prevent is dropping frames when doing a capture from the camcorder.
  17. Quote:
    I use it for just about everything. A little bit of FPS games, more just storing pictures and video. One of the things I want to prevent is dropping frames when doing a capture from the camcorder.


    I would suggest RAID 5 with a 64k stripe. That should be a stripe large enough that your video transfers will be silky smooth, but small enough that your game load times will be quite fast. Unfortunately, you cannot migrate to this configuration, so you must back up your data and reinstall from scratch. I hope this has helped; let me know if there is anything else I can do.

    Ryan
  18. With RAID 5, you need to be careful about write performance, and from some simple tests I've done recently (ICH8R (ICH8DO) under Vista with 7.5 drivers), a 128k stripe size is a particularly bad choice for RAID 5 -- write performance is much worse than with other stripe sizes.

    When in doubt, the default is worth trying; the default is 64k, and it seems to give good performance.

    RAID 5 also outperforms RAID 10 here in a simple sequential read/write throughput test.

    RAID 10 has a relative advantage in CPU utilization, but this should not be material in most cases. RAID 5 CPU utilization appears to increase with smaller stripe sizes.

    You should have a backup of at least the important data even without considering the risk of migration -- "RAID alone is not a backup." A backup/restore is also likely to be faster than a migration in this case.
  19. Quote:

    Remove one of the RAID 5 drives and use it plus the new HDD to create a RAID 0 volume. Ghost over the existing data, and use the last two RAID 5 HDDs for the mirror.

    I just don't know if this will work. Has anyone done this before who might offer suggestions?


    Again, I'd strongly recommend a backup.

    Moreover, I haven't done anything like this with ICHxR myself. If you're still crazy enough, you could try the following:

    Remove a drive from RAID 5, degrading it.

    Note that at this point, you're running without any redundancy.

    Reset that drive to non-RAID (wipe it / re-partition it on another SATA controller or machine) so that when you plug it back in, it won't be recognized as part of that array.

    Use that drive and the new one to create a new RAID 0 array.

    Copy the degraded RAID 5 array's data to the RAID 0 array (see the sketch after this list). Note that copying from a degraded array is going to take a long time.

    Delete the RAID 5 array. Well, you had redundancy for a second; now it's gone again. [This step gets more complicated if you're booting off the array -- boot off the new array before doing anything like this.]

    Use the migration feature to convert the RAID 0 to whatever you want. Note that this step will also take a long time.

    Note again that you still have no redundancy at this point until the RAID is fully rebuilt, and you've been running without any redundancy for a long time.

    And you still have to expand the file system. If that fails badly, you'll lose your data.
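
    If you do attempt the copy step above, verifying every file before deleting the RAID 5 array is cheap insurance. A rough sketch (plain Python; the drive letters are hypothetical):

    ```python
    import hashlib
    import shutil
    from pathlib import Path

    SRC = Path("D:/")  # degraded RAID 5 (hypothetical drive letter)
    DST = Path("E:/")  # new RAID 0 (hypothetical drive letter)

    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Copy the whole tree, then compare hashes file by file.
    for src in SRC.rglob("*"):
        if src.is_file():
            dst = DST / src.relative_to(SRC)
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)
            if sha256(src) != sha256(dst):
                raise RuntimeError(f"verify failed: {src}")
    ```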
  20. Quote:
    With RAID 5, you need to be careful about write performance, and from some simple tests I've done recently (ICH8R (ICH8DO) under Vista with 7.5 drivers), a 128k stripe size is a particularly bad choice for RAID 5 -- write performance is much worse than with other stripe sizes.


    You cannot make a generalization like that. It depends on what the machine is used for. For large sequential reads and writes, like working with large video files, larger stripes will perform better; for smaller files with lots of random access, smaller stripes will perform better.

    Quote:
    When in doubt, the default is worth trying; the default is 64k, and it seems to give good performance.

    RAID 5 also outperforms RAID 10 here in a simple sequential read/write throughput test.

    RAID 10 has a relative advantage in CPU utilization, but this should not be material in most cases. RAID 5 CPU utilization appears to increase with smaller stripe sizes.

    You should have a backup of at least the important data even without considering the risk of migration -- "RAID alone is not a backup." A backup/restore is also likely to be faster than a migration in this case.


    The small advantage RAID 10 has in CPU utilization over RAID 5 does not come close to the difference in actual array performance. A C2D system should see no appreciable difference in CPU utilization between RAID 5 and RAID 10 with an ICH8R controller. Have a look at the overall performance between RAID 5 and RAID 10; RAID 5 clearly wins. Link

    Ryan
  21. Quote:
    With RAID 5, you need to be careful about write performance, and from some simple tests I've done recently (ICH8R (ICH8DO) under Vista with 7.5 drivers), a 128k stripe size is a particularly bad choice for RAID 5 -- write performance is much worse than with other stripe sizes.


    You cannot make a generalization like that. It depends on what the machine is used for. For large sequential reads and writes, like working with large video files, larger stripes will perform better; for smaller files with lots of random access, smaller stripes will perform better.


    I'm afraid that you're the one who's generalizing based on theory here; I'm reporting specific behaviour that I've measured, with a number of different access sizes.

    How one applies that behaviour in their application is up to them. If you're OK with really poor write performance for no appreciable benefit in read performance, then it's a choice you can make. Dumb, IMO, but yours to make.

    Granted, my tests were only synthetic benchmarks in a specific setup. Consider it a "heads up." I don't accept random internet posts as facts either -- I'd check it for myself with my applications of choice.
  22. Quote:
    With RAID 5, you need to be careful about write performance, and from some simple tests I've done recently (ICH8R (ICH8DO) under Vista with 7.5 drivers), a 128k stripe size is a particularly bad choice for RAID 5 -- write performance is much worse than with other stripe sizes.


    You cannot make a generalization like that. It depends on what the machine is used for. For large sequential reads and writes, like working with large video files, larger stripes will perform better; for smaller files with lots of random access, smaller stripes will perform better.


    I'm afraid that you're the one who's generalizing based on theory here; I'm reporting specific behaviour that I've measured, with a number of different access sizes.

    How one applies that behaviour in their application is up to them. If you're OK with really poor write performance for no appreciable benefit in read performance, then it's a choice you can make. Dumb, IMO, but yours to make.

    Granted, my tests were only synthetic benchmarks in a specific setup. Consider it a "heads up." I don't accept random internet posts as facts either -- I'd check it for myself with my applications of choice.

    Your post is the random internet post. I backed mine up with real tests by real reviewers. If you had even bothered to look at the links I provided, you would have seen that RAID 5 is faster than RAID 10 in both read and write performance.

    You are contradicting yourself. You say the behavior depends on how someone uses the machine, but you just said that 64k is best. Best for what? For you? For whom? I guess for everyone; sounds pretty general to me. Plus, let's not forget this is based on your testing, which is admittedly synthetic and in no way applies to the real world. Well, you must have at least some reliable source of data to back you up.

    Where is your data anyway? Oh, you don't have any. All you're doing is trolling the forum. You're doing a disservice to the people who have real questions here. Everyone else on this thread is trying to help the OP; you are just showing your ignorance. It's OK if you don't know about RAID, or anything else for that matter; just don't post as if you do. Maybe you should read what else has been posted and ask some questions of your own; you could learn something.

    It's good you don't accept random posts with no data to back up their claims. I'm sure everyone else here feels the same way about your random posts.

    Ryan
  23. Take a pill, Ryan. Read what I wrote about RAID 10 again, and notice that the energy you're spending is a waste.

    I fully realize that my posts are also random internet posts, and I don't even ask, let alone insist with such hostility, that anyone take them at face value.

    It's a beautiful day here, I'm going to try to enjoy it. I suggest you try to lighten up too.

    As for the OP, I hope that if he's still around, he won't have as much disgust for this exchange as it probably deserves, and will note whatever information may be of use.
  24. The only reason I got defensive is that you called the information and advice in my posts dumb, and your tone was sarcastic. If I misunderstood you, I apologize.

    It's a beautiful day here too. Too bad I can't enjoy it because I'm stuck at work with nothing to do.

    Ryan
  25. I quote: "Arguing in internet forums is like running in the special olympics; even if you win, you are still retarded."

    As I understand it, write performance of RAID 5 is HIGHLY dependent on the RAID controller. It can be very fast if you have a good controller that buffers writes and calculates the parity data for you; it can be very slow if your slow CPU has to calculate the parity data itself.
    So, depending on the controller, RAID 10 can be faster and use less CPU, since it's all I/O with no magic calculations needed.
  26. You are absolutely right. The OP said he is using an ICH8R controller, so I provided him with benchmarks showing ICH8 performance in both RAID 5 and RAID 10 modes. His particular controller will be faster with RAID 5. Just because the controller doesn't buffer writes or offload the parity calculation doesn't mean RAID 5 will automatically be slower than RAID 10. Yes, the ICH8 is slower than a dedicated controller, but those controllers probably cost more than his motherboard. At the end of the day, what matters is that with the OP's setup, RAID 5 will be faster than RAID 10.

    BTW, I made no claims that I wasn't retarded. :D

    Ryan
  27. To the OP:

    If you run your RAID 5 array with write-back cache enabled, you'll get good write performance - more than adequate for use with your camcorder - but you will not have any protection against a power outage or an abnormal shutdown/restart. In those situations, your RAID array will need to rebuild itself, which will take 8-12 hours or so with 4 x 400 GB drives; that's how long mine takes. And it will happen even if the hard disks were doing nothing at the time of shutdown.
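
    For a rough sense of where those hours go, here's the arithmetic (plain Python; the rebuild rates are guesses consistent with the figure above, not Intel specs):

    ```python
    # Time to reconstruct one member's worth of data at an assumed
    # background rebuild rate.
    SIZE_GB = 400
    for mbps in (10, 15):
        hours = SIZE_GB * 1024 / mbps / 3600
        print(f"{mbps} MB/s -> {hours:.1f} h")
    # 10 MB/s -> 11.4 h, 15 MB/s -> 7.6 h -- right around the 8-12 hour range
    ```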

    If you run with write caching disabled, you will be far less likely to have to rebuild your RAID 5 array on reboot, but you will also have pretty poor write performance - on the order of 10-20 MB/sec.

    Since you have Matrix RAID, you might consider creating a small 4-drive RAID 0 array for high-speed writes, and putting your page file, your temp directories, and your camcorder buffer there. You might lose temporary work in the event of a crash, but you won't lose the data on your RAID 5 array.
  28. You can avoid having to do rebuilds by doing two simple things. First, a decent UPS with connectivity to the machine will avoid unintended shutdowns due to power outages. Second, use good-quality hardware with good drivers, and keep the drivers for all of your hardware current, especially the Intel Matrix Storage Manager. This will eliminate 99% of the situations that require a rebuild. (BTW, rebuilds are done in the background; they slow the machine slightly, but you can continue working as normal.)

    Matrix RAID with the Intel controller is a tricky thing, because should a drive fail, it becomes difficult to recover your data once the OS partition (presumably on the RAID 0 array) is destroyed. Very few utilities can read the RAID 5 array until it is rebuilt with the Matrix Storage Manager, which requires setting up the RAID 0 array again and reinstalling Windows. Putting the RAID 5 array first and the RAID 0 array second helps this situation somewhat.

    Ryan
  29. It's true you can avoid most abnormal shutdowns with the approaches rninneman described. You cannot avoid all of them, however. I had shutdowns caused by antivirus software (the machine locked up), an ATi HDTV Wonder, and an earthquake (don't ask...) before I got tired of the rebuilds.

    I would definitely recommend keeping the O/S on the RAID 5 array if you use Matrix RAID. Boot performance is very good, and it makes it a lot easier to recover from a disk failure than having to rebuild the whole thing. Generally, anything write-once/read-many (such as the O/S, programs, and permanent data) should go on the RAID 5 array, and anything write-once/read-once (such as scratch space, temp folders, and the page file) should go on the RAID 0 array. Just make sure you have enough RAM to boot without your pagefile, if you ever need to because of a disk failure.

    The rebuild won't slow your CPU down much, but you won't want to do much video recording / editing while it's happening.
  30. Quote:
    The only reason I got defensive is that you called the information and advice in my posts dumb, and your tone was sarcastic. If I misunderstood you, I apologize.


    I accept your apology, and apologize in turn for using, let's say, unpeaceful language at times, although it wasn't intended as badly as you seem to have taken it. I believe you've misunderstood me at times, particularly in seeing disagreement where there isn't any, but I have no taste for going over these debates point by point, correcting every little unimportant detail and making more mistakes and misunderstandings along the way, so I'll try to be brief and get back to the subject with more data.
  31. Quote:
    With RAID 5, you need to be careful about write performance, and from some simple tests I've done recently (ICH8R (ICH8DO) under Vista with 7.5 drivers), a 128k stripe size is a particularly bad choice for RAID 5 -- write performance is much worse than with other stripe sizes.


    I did some more tests, now with XP-64 and 6.2 drivers, and got similar results.

    Motherboard: Asus P5B-VM DO (an ICH8R variant - ICH8DO). The THG article used the ICH8DH.
    Drives: 4 Maxtor
    Matrix RAID volumes in the initial part of the disks. Tests were run on formatted but otherwise empty arrays. Write caching was enabled.

    Benchmark: ATTO Disk Benchmark, used largely for convenience and the speed of its read+write test. There is run-to-run variance, so don't read too much into individual results; look at the order of magnitude.

    Similar results were obtained with IOMeter simple sequential access and with simple file copies. (I didn't test all access sizes.)

    128k stripe size: [ATTO screenshot not preserved]

    64k stripe size: [ATTO screenshot not preserved]
    I can't guarantee that the OP will see this sort of behavior on his machine, but I think there is a case for a "heads up".

    Don't look at read-only results when picking stripe sizes for RAID 5 if RAID 5 write performance matters at all to you. Ideally, test write performance for yourself in your own environment.
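
    In that spirit, here's a minimal DIY sequential-write probe (plain Python; the file path and sizes are arbitrary, and OS caching can still flatter the numbers):

    ```python
    import os
    import time

    PATH, TOTAL_MB = "testfile.bin", 256  # arbitrary test file on the array

    for block_kb in (64, 128, 256, 1024):
        block = os.urandom(block_kb * 1024)
        t0 = time.perf_counter()
        with open(PATH, "wb", buffering=0) as f:
            for _ in range(TOTAL_MB * 1024 // block_kb):
                f.write(block)
            os.fsync(f.fileno())  # flush to disk so the timing means something
        dt = time.perf_counter() - t0
        print(f"{block_kb:>5} KB blocks: {TOTAL_MB / dt:6.1f} MB/s")
    os.remove(PATH)
    ```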

    It'd be interesting to see if anyone gets significantly different write results with ICHxR RAID 5 128k stripe size.
  32. Has anyone moved a system using Ghost and RAID? I had one guy suggest just changing it to a dynamic disk and adding the new drive.
  33. Don't convert to a dynamic disk and add the drive. You are essentially at the same risk as a RAID 0 array by using dynamic disk spanning, and you don't even get the performance benefit, because there is no striping.

    Ghost can copy a RAID drive because the RAID array is transparent to the software. If you are going to restore from a boot disc, just make sure it has drivers for your RAID controller.

    Ryan