Accidentally selected a drive to rebuild to

Guest

Guest
Hi,

Recently my 4-disk RAID 5 array (software RAID, with Intel Matrix Storage Manager) was degraded because a disk failed. While trying to figure out which disk failed, I activated the option ROM by pressing Ctrl-I, and I accidentally selected a used (full) non-RAID disk in the BIOS to rebuild to. I recognized the error and did not boot into the OS, as I know it would perform the rebuild.

I did not mark the disk as non-RAID, because the utility said that would cause data loss.

Is there a way to recover this disk? If yes, how?
 

sub mesa

Distinguished
So if I understand correctly, all you did was set rebuild on your non-RAID disk, but you did not boot into your operating system. That would mean the setup utility only wrote 512 bytes to the end of your drive, at least I suspect. If so, your data is still on the drive, and you're right that you should not boot into your OS.

You should put the disk in another system and copy off the data, without the RAID disks being present in that system - just to be on the safe side. You can do the same on your current system by disconnecting all the RAID disks so that only the non-RAID disk you want to recover is connected, then booting an Ubuntu live CD and copying the data over the network to another computer.
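The network-copy step could look like this from the live session. A minimal sketch, assuming the disk's data partition is /dev/sda1 and that a second machine reachable at 192.168.1.10 runs an SSH server - both names are placeholders for your own setup:

```shell
# Mount the partition read-only so nothing on the disk can be modified.
sudo mkdir -p /mnt/rescue
sudo mount -o ro /dev/sda1 /mnt/rescue

# Copy everything to the other machine over SSH. rsync -a preserves
# permissions/timestamps and can be re-run if the transfer is interrupted.
rsync -a /mnt/rescue/ user@192.168.1.10:/backup/rescue/
```

Mounting read-only is the important part: it guarantees the rescue session itself cannot write to the drive you are trying to save.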
 
Guest

Guest
Yeah, this is what happened, thanks for the reply. Currently I don't have enough space to do what you propose. Is there a way to identify and erase that block, e.g. with a live Linux distro?
 

sub mesa

Distinguished
Yes, but it's kind of dangerous. You see, you want to zero-write one sector on a drive that already has data on it you do NOT want to lose. So bear in mind that picking the wrong sector, or the wrong drive, will destroy data.

First, burn an Ubuntu live CD (download it from ubuntu.com).

Then, disconnect your RAID disks, so that only the non-RAID disk is connected.

Then, boot the Ubuntu live CD, open the partition editor (System->Administration->GParted) and determine which device is your non-RAID disk. I'll assume it's called /dev/sda.

Then, open a terminal and execute:
$ sudo hdparm -i /dev/sda | grep LBAsects

In the output, you will see LBAsects=<number> - that number is the total number of sectors. You need to subtract 1 from this number, because sectors are numbered starting at zero. So if the output shows "7800500" sectors, you subtract 1 and get "7800499". Be sure not to make mistakes with this!
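To avoid a manual arithmetic slip, you can also let the shell do the subtraction. A sketch, assuming your non-RAID disk really is /dev/sda (verify in GParted first!) and that `blockdev` from util-linux is available on the live CD; `blockdev --getsz` reports the total number of 512-byte sectors:

```shell
DEV=/dev/sda                          # assumption: your non-RAID disk
TOTAL=$(sudo blockdev --getsz "$DEV") # total number of 512-byte sectors
LAST=$((TOTAL - 1))                   # last sector = the seek value for dd
echo "total=$TOTAL last=$LAST"
```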

If you know the drive and the sector count, you can then perform zero-write on the last sector to wipe the RAID config:

sudo dd if=/dev/zero of=/dev/sda bs=512 seek=7800499 count=1

# warning: this command is dangerous; double-check everything, and make sure seek=<number> is the right number: one sector less than the total sector count.

# warning 2: do not use a partition like /dev/sda1; use the raw device /dev/sda. The sudo is necessary or you will get a permission denied error.
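Before zeroing anything, it's worth reading the last sector out to a file first, both as a backup and to check that it really holds RAID metadata. A sketch, reusing the assumed device name and the example seek value from above; as far as I know, Intel's metadata block begins with a human-readable signature ("Intel Raid ISM Cfg Sig."), so hexdump should show recognizable text:

```shell
DEV=/dev/sda     # assumption: your non-RAID disk
SEEK=7800499     # assumption: total sector count minus one (see above)

# Save a copy of the sector before touching it (skip= reads, seek= writes).
sudo dd if="$DEV" bs=512 skip="$SEEK" count=1 of=last-sector.bin

# Inspect it; if it is RAID metadata you should see a readable signature.
hexdump -C last-sector.bin | head
```

If last-sector.bin turns out to contain something else entirely, stop and re-check the sector arithmetic before running the zero-write.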
 

frank_41

Distinguished
Mar 18, 2010
Well, what I did was delete my RAID 10 array and then redefine it with the exact same parameters as before. And voila, the RAID array was up and running again.