I have set up RAID 1 for data on my Fedora 10 box, using a Silicon Image SiI 3112A PCI card. I boot from a separate RAID 1 array on another PCI card.
I also have an LVM volume group on the RAID 1 data array. All partitions are formatted as ext3.
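Since the volume group sits on top of the array, one thing worth checking (a guess on my part, not a diagnosis) is whether the data you're looking for actually lives inside a logical volume rather than directly on the partition you mounted. A sketch for verifying that, assuming root and the lvm2 tools; it only scans and lists, so it's harmless to re-run:

```shell
# Sketch: check whether the volume group from the old array is still
# visible, and activate its logical volumes so they can be mounted.
if command -v vgscan >/dev/null 2>&1 && [ "$(id -u)" -eq 0 ]; then
  vgscan          # scan all block devices for volume-group metadata
  vgchange -ay    # activate any logical volumes that were found
  lvs             # list them; mount an LV here, not the raw partition
  lvm_checked=yes
else
  lvm_checked=no
  echo "skipping: needs root and the lvm2 tools"
fi
```

If `lvs` shows your volumes, mounting one of those (via `/dev/mapper/...`) instead of the bare partition may show the newer filesystem.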
Recently one of my RAID 1 data drives failed (each drive is 320 GB). I removed the failed drive, booted back into my system, and mounted the remaining drive as a plain SATA drive. To my surprise, the data I saw was around 5 months old. For example, the directory structure was old (a number of folders I deleted ages ago have reappeared), and my accounting data file shows a last-modified date 5 months back, even though I was using it last week.
Is there any explanation for this? Could the RAID card be using some different file-addressing structure that isn't available when the drive is mounted outside the RAID 1?
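If the array was Linux software RAID (md) rather than the card's own fakeraid, the per-member metadata can be inspected directly; the member with the higher event count and newer update time holds the most recent copy of the mirror. A read-only sketch, with hypothetical device names you'd replace with your actual array members (for fakeraid, `dmraid -r` lists what the card's metadata claims instead):

```shell
# Hypothetical member names -- substitute the partitions that belonged
# to the data array. This only reads metadata, it changes nothing.
examined=0
for dev in /dev/sdb1 /dev/sdc1; do
  if [ -b "$dev" ] && command -v mdadm >/dev/null 2>&1; then
    # The member with the higher Events count / newer Update Time
    # carries the most recent state of the mirror.
    mdadm --examine "$dev" | grep -E 'Update Time|Events'
  else
    echo "cannot examine $dev on this machine"
  fi
  examined=$((examined + 1))
done
```

If the two members report very different event counts, the mirror had silently stopped syncing long before the drive physically failed, which would match seeing 5-month-old data.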
This is very scary, as I would sorely miss the last 5 months of data.
Update:
I managed to mount the broken disk long enough to back up the data I wanted, but it seems to hold the same data as the non-broken disk in the array.
It's almost as if a separate file index was used while the RAID 1 was intact, and when the array broke it reverted to the original index. That would explain why I saw little amiss when I copied the information to a backup disk after the failure: most of those files wouldn't have moved from where they were originally saved, back when the disks were installed 5 months ago.
I just need to get access to the files saved since then.
Maybe an intensive file scan could locate these files?
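On the file-scan idea: carving tools like PhotoRec or foremost can recover whole files from the raw device, and even plain `grep` can at least confirm whether the newer data is still physically present on the disk. A tiny self-contained sketch of the raw-scan idea, using a throwaway file in place of the real device node (against the actual disk you'd point it at something like `/dev/sdb1`, and the search string would be text you know appears in a missing file, such as an invoice number):

```shell
# Demonstrate the raw-scan idea on a scratch file standing in for the disk.
img=$(mktemp)
printf 'junk junk INVOICE-2009-042 more junk' > "$img"
# -a treats binary data as text, -b prints the byte offset of the match,
# -o prints only the matched string.
hit=$(grep -a -b -o 'INVOICE-2009-042' "$img")
echo "$hit"    # byte-offset:matched-text
rm -f "$img"
```

A hit proves the blocks are still there even if no filesystem index points at them, which is exactly the situation where a carving tool is worth running.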