RAID ARRAY NOT BOOTABLE

esquire68

Distinguished
Feb 6, 2010
2
0
18,510
Asus Rampage II Extreme
Windows 7
i7 920
12GB RAM
4 x Seagate SATA 1.5TB drives
I can create an array and it seems to build properly, but when I try to install Windows it says it cannot install on that disk and to make sure the controller is enabled. The controller is enabled and the drives show up in the BIOS, but the array shows as not bootable. I've tried building a RAID 0 and a RAID 10; both configs behave the same.
 

grafixmonkey

Distinguished
Feb 2, 2004
435
0
18,790
Not familiar with the specific controller, unless it happens to be the Intel MegaRAID...

You often have to explicitly mark an array as bootable in the RAID configuration utility.

Need a bit of clarification though. Which Windows are you installing? It makes a big difference in whether the RAID drivers are already there. Windows 7 recognizes pretty much everything; Vista I haven't messed with much; XP doesn't recognize ANYTHING, and you'll need a driver disk.

If the Windows installer can see the array but refuses to install to it, the drivers are fine and the problem lies elsewhere.


I always recommend that people not boot from their RAID arrays. In the case of a RAID 10 I'll loosen that recommendation a little, because the failure scenario is fairly unlikely. My recommendation comes down to how the RAID controller behaves when a bad block is encountered:

Single drive: Windows hiccups for a moment, logs the bad block in the Event Viewer, and carries on.

RAID array: If a RAID disk times out, that disk drops from the array. If there is no redundancy left at that point, the controller drops the entire array immediately. That causes an instant blue screen and reboot in Windows, and since the array has been dropped you don't get any events in the Event Viewer. The next time you boot, Windows has no idea what happened; for all it knows, it lost power.

A disk can drop from the array because of a bad block, or because the disk is not meant for RAID and goes into a longer-than-normal error-recovery cycle. Depending on the controller and the type of disk, it may remap the bad block and continue. If you are using WD Caviar Black disks, make sure you have used the TLER utility to set the Time Limited Error Recovery bit on them; that keeps the disks from timing out as easily. (It does not make them 100% as good as buying RAID Edition disks, though.) If you are using Seagate, I don't know whether the error-recovery timing can be adjusted. I had a RAID 0 array of five 1TB Seagate drives, two months old, and it would time out constantly. (I think one of the disks was legitimately bad, though.)

On top of that, installing to a single disk is easier, and you don't have to worry about a driver update or a RAID card BIOS flash preventing you from booting.


In your case, with RAID 10, I think you're pretty safe as long as you have either RAID Edition disks or WD Caviar Black disks with the TLER bit set. If you do get Windows installed, though, I highly recommend making a single-drive-sized (500GB to 1TB) partition for the OS. That way, if you ever have trouble booting, you can use a Linux boot CD to image the RAID boot partition onto a single drive and get your OS back.
 
sminlal

Your problem is that you've exceeded the 2TB limit for booting from an MBR-partitioned disk. Booting from a volume larger than 2TB requires a GPT partition table plus EFI firmware support, and from the sounds of it that's not available in your BIOS (GPT boot support is still very uncommon in desktop computers).
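
A quick back-of-the-envelope check, as an illustrative Python sketch (the 2TB figure comes from MBR's 32-bit sector addressing at 512 bytes per sector; the drive sizes are the ones in the original post):

    # MBR stores sector counts in 32-bit fields, and these drives use
    # 512-byte sectors, so the largest bootable MBR volume is:
    MBR_LIMIT = 2**32 * 512          # 2,199,023,255,552 bytes (2 TiB)

    DRIVE = 1.5e12                   # 1.5TB drive, decimal as marketed

    raid10 = 2 * DRIVE               # 4 drives in RAID 10 -> 3TB usable
    raid0 = 4 * DRIVE                # 4 drives in RAID 0  -> 6TB usable

    for name, size in [("RAID 10", raid10), ("RAID 0", raid0)]:
        print(name, "bootable from MBR:", size <= MBR_LIMIT)
    # Both print False: either layout exceeds the MBR boot limit.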
 

grafixmonkey

Distinguished
Feb 2, 2004
435
0
18,790
Thanks sminlal, I learned something new today!

esquire68, that means all you have to do is partition off a 500GB or 1TB chunk for Windows, which is a good idea anyway. For one thing, it lets you install a new OS without having to worry about backing up 3TB of data; for another, you would probably be able to image your OS from the RAID onto a single disk if you ever needed to.

(I've imaged an OS from single disk to single disk before and it works great. I haven't tried it from a RAID array, but it shouldn't be any different.)

I suspect you may have problems with those 1.5TB Seagate drives in RAID. For one thing, Seagate has a really terrible track record with their 7200.12-series drives. (Six months after they came out, they had about a 60% approval rating in Newegg reviews, with 30% of reviewers saying the drives had failed early and many reporting multiple failures.) I personally had five 1TB Seagate drives from the 7200.12 series, and one of them failed within two months. (The company's choice, not mine.)

Even if they don't fail outright, you still have to worry about drives dropping out due to timeouts during self-maintenance or error-recovery cycles. Watch the array closely to make sure it never drops a drive and goes into rebuild mode. With a four-drive RAID 10 the speed reduction would be hard to notice, so check your RAID status and logs every now and then until you're satisfied that your controller is tolerant enough that the drives don't drop out.

For some reason, reading is more critical than writing. The Seagate drives I mentioned would write data quite happily, and then ten minutes later, when I tried to read it back, the array would drop mid-transfer. (RAID 0.) I had to manually locate the last file copied, delete it (corrupted by the partial transfer), re-scan the array, and resume where it left off; essentially the loop sketched below. I had to do that twelve times to get 2.5TB of data off that array and onto a more solid one. The Seagate drives are now holding down my shelf for me.

(Of course, it was my fault for putting the only copy of the data on a RAID 0, but I figured I could trust it for a few hours and avoid buying new drives. Wrong. Thanks, Seagate!)
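
For what it's worth, here is a minimal sketch of that resume-where-you-left-off copy loop in Python. The paths are made up, and the size comparison is only a crude completeness check, not a real integrity check:

    import os
    import shutil

    def resumable_copy(src_root, dst_root):
        """Copy a tree, skipping files that already copied completely."""
        for dirpath, _dirnames, filenames in os.walk(src_root):
            rel = os.path.relpath(dirpath, src_root)
            dst_dir = os.path.join(dst_root, rel)
            os.makedirs(dst_dir, exist_ok=True)
            for name in filenames:
                src = os.path.join(dirpath, name)
                dst = os.path.join(dst_dir, name)
                # A file left over from an aborted run will usually have
                # the wrong size; re-copy it. Matching size => assume done.
                if (os.path.exists(dst)
                        and os.path.getsize(dst) == os.path.getsize(src)):
                    continue
                shutil.copy2(src, dst)

    # Hypothetical drive letters; re-run after each array drop.
    resumable_copy(r"E:\data", r"F:\rescue")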


If your array ever drops into critical or rebuild status, you should probably image your OS onto one of the disks and use the rest as individual drives.
 
sminlal

The problem is that a disk using MBR-style partitions cannot have a partition that is bigger than 2TB or that starts beyond the 2TB boundary, so there's really no point in creating a volume larger than 2TB in the first place. The best bet would probably be to create a pair of RAID 1 mirror sets, which would give you two volumes of 1.5TB each.
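
To spell out the arithmetic (illustrative only, same assumptions as the sketch above):

    DRIVE = 1.5e12              # 1.5TB per drive, decimal as marketed
    MBR_LIMIT = 2**32 * 512     # 2 TiB MBR addressing limit

    # Each RAID 1 pair exposes the capacity of a single drive.
    mirror_volume = DRIVE
    print(mirror_volume <= MBR_LIMIT)   # True: each 1.5TB volume can boot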

Or, if the OP hasn't got a backup strategy yet, just use two drives without RAID in the system and put the other two into external enclosures for use as alternating backups. Offline backups protect your data better than a pair of RAID 1 volumes, since they guard against risks that RAID can't, especially if you keep one of them offsite.