I have an ASRock-based motherboard on which I created a RAID 1 array. About five months ago, during a planned restart, I noticed the array was degraded, but the system still booted fine into VMware vSphere Hypervisor (ESXi). When I rebooted the server last night, the array status was the same, but now the system just hangs at POST with a flashing cursor.
I pulled both drives from the array and tried each one separately. On the RAID status screen, one drive came up as type MEMBER disk with a status of DEGRADED; the other came up as type FAILED disk with a status of FAILED. Based on this, I assumed the MEMBER disk was the good disk of the array. I purchased a new drive and installed it alongside the good drive. The RAID BIOS detected the new drive and allowed me to add it as a new member of the array; the array status then became REBUILD, and a message appeared at the bottom of the screen: "Volumes with 'Rebuild' status will be rebuilt within the operating system." The computer, however, does the same thing: it will not boot, just a blinking cursor. So I concluded that I cannot get this array rebuilt until I can get the system booted into VMware vSphere Hypervisor (ESXi), and I am not sure it would actually rebuild in that environment anyway.

Next, I shuffled the drives around and used an older version of GParted to try to gather more information on ways to handle this. At one point I saw several mount points on the good drive of the array, but I am not seeing them anymore. A couple of times while shuffling the drives, the RAID BIOS detected a new drive and prompted me to add it to the array, which I did, but I noticed it created an additional array with the same name, with ":1" appended. After several attempts I deleted the new arrays it had created and got things back to roughly where I started.

My first question: I believe my original volume name was just "SSARAID", but now (see picture)
http://i1137.photobucket.com/albums/n518/mcgendraft/IMG_0487.jpg
it is called "SSARAID:1". For anyone experienced with this RAID setup: is it normal for a ":1" to be appended to the first created RAID volume? If the RAID name does not match exactly, can that cause issues?

My second question is about the best way to move forward. I tried booting a live version of Ubuntu 12.10 (Quantal Quetzal) to use the tool mdadm, but I had problems installing mdadm due to missing dependencies and broken links. Another thought I had was to put a separate drive in the machine, install Windows, run Intel Matrix Storage Manager, and attempt to rebuild the RAID there. I rarely post on forums because I do a lot of research on my own, but this one I am just struggling with, and any help or guidance would be much appreciated!
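For reference, this is roughly what I was attempting from the Ubuntu live session. It is only a sketch: the device names /dev/sda and /dev/sdb are placeholders for whatever the live environment actually assigns to the drives, and everything here is read-only inspection rather than a rebuild.

```shell
# Refresh the package lists first -- a stale live-CD repository index is a
# common cause of the missing-dependency / bad-link errors I hit.
sudo apt-get update
sudo apt-get install -y mdadm

# Inspect the Intel firmware-RAID (IMSM) metadata on each disk without
# modifying anything. /dev/sda and /dev/sdb are placeholder device names.
sudo mdadm --examine /dev/sda
sudo mdadm --examine /dev/sdb

# Try to assemble the IMSM container and its RAID 1 volume from the on-disk
# metadata, then check the resulting array state.
sudo mdadm --assemble --scan
cat /proc/mdstat
```

If assembly succeeds, the volume should appear under /proc/mdstat and could then be mounted read-only to verify the data before attempting any rebuild.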