Large (8+ disk) RAID5 Array Solution - for PC?

Last response: in Storage
August 9, 2007 6:09:03 AM

This is something I haven't been able to find any material on, but am wanting to do in the short term (before the new year, at the latest).

Basically, what I want to do is build a large RAID5 array to attach to my current PC (it was bleeding edge at the start of 2006 - so dual core, 4 gigs of DDR, etc.) as a media dump, to be hooked up to my new large screen LCD TV. The idea being that I'm lazy and don't like changing DVDs (or going through 6 DVDs per season of TV), and would much rather be able to play them on my TV without getting off the couch and wading through mounds of DVDs. With the computer attached, I'm hoping I can even use DivX/XviD to compress my DVDs and fit my whole media library on there. At least, I think a large external drive enclosure with a big RAID5 array is what I'm after.

I'd like to do this for less than $1500, which is my problem. All the 8+ disk drive arrays I've been able to find online are about $3000 and up (even for SATA, never mind SCSI), drives sold separately - clearly aimed at the enterprise segment of the market. The 4-disk arrays aimed at consumers don't look like they'll cut it for me: four 750GB drives in RAID5 lose 750GB of storage space to parity, which means 2.25TB costs me $800 just for disk space, and then I'm *really* limited on expansion options if I outgrow it. Which I get the feeling I might.
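Just to double-check the parity math, a quick shell sketch (the drive count and size are the numbers from above; nothing else is assumed):

```shell
#!/bin/sh
# RAID5 usable capacity: one drive's worth of space goes to parity,
# so usable = (N - 1) * drive_size.
drives=4
size_gb=750
usable=$(( (drives - 1) * size_gb ))
echo "Usable: ${usable}GB, lost to parity: ${size_gb}GB"
```

With 8 drives the parity overhead drops to 1/8 of the raw capacity instead of 1/4, which is the whole appeal of the bigger array.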

Does anyone know of anything out there that might be a fit for what I want to do, that won't cost more than a cheap new car? Obviously, I don't need a high performance solution, because it's just going to be one user streaming large video files, I just need the multiple TB storage with low overhead redundancy.

I bumped into this tempting array, but I don't know if I can make one 12 disk RAID5 array with it, or if the three channels on the back restrict me to three 4 drive arrays or not. Also, I couldn't find an affordable RAID controller card that supported 8 or more SATA drives and didn't suck or only had internal connections, etc. $650 for the drive array is great, but if I need to buy an $800 card, and then four figures worth of drives to put in it, that's a little too expensive. :( 

Any advice, comments or product recommendations would be greatly appreciated.
August 9, 2007 8:04:01 AM

Why don't you just build a NAS instead? Just get a cheap mobo/CPU/RAM and stick them into a full-size ATX tower. Then add any additional SATA cables and probably a cheap SATA controller card or two (since most motherboards only come with 4 or 6 SATA ports), and all you have left to decide is how much space to get.

So, an example of a system like this could be...

Thermaltake Armor Series (8 x 3.5" & 10 x 5.25" drive bays) $150
http://www.newegg.com/Product/Product.aspx?Item=N82E16811133154

AMD Sempron 64 3000+ $26
http://www.newegg.com/Product/Product.aspx?Item=N82E16819104305

ABIT NF-M2S Socket AM2 NVIDIA GeForce 6100 Micro ATX AMD Motherboard $60
http://www.newegg.com/Product/Product.aspx?Item=N82E16813127016

WINTEC AMPO 512MB 240-Pin DDR2 SDRAM DDR2 533 (PC2 4200) Desktop Memory $19
http://www.newegg.com/Product/Product.aspx?Item=N82E16820161636

Antec earthwatts EA500 ATX12V v2.0 500W Power Supply $80
http://www.newegg.com/Product/Product.aspx?Item=N82E16817371007

PROMISE SATA300 TX4 PCI SATA II Controller Card $60 x 2
http://www.newegg.com/Product/Product.aspx?Item=N82E16816102062

Which comes to a total of $455. And this is without any MIRs or special deals. Next, add in the price of the hard drives you want and you're set.


You don't need a special RAID controller card to do RAID 5. I think Vista can do RAID 5 (but don't quote me on that), and I know that Linux handles it just fine. Since Linux is free, I think it's the better choice.

If you do go this route and do the Linux thing, you will need one drive separate from the array for the OS - a cheap 40GB or 80GB will do fine. Then you just need to pick a distribution and you're set. You can use something like FreeNAS, which would turn it into a NAS and be simple and easy, or something like Ubuntu, which takes a lot more work but lets you do whatever you want with it (i.e. web hosting, automated backups, etc...).

If you do go the Ubuntu route (which I have done myself), the steps for setting up the OS and software are:
1. Download and burn an .iso of Ubuntu from ubuntu.com. Just choose the 32-bit desktop version.
2. Boot your new file server from that CD and install the OS.
3. Prepare the RAID (create the array, partition, format, create the file system, mount). Try searching for mdadm.
4. Share the RAID over your network with Samba.
5. Plug the file server into one of your home router's ethernet ports (assuming it's set up for DHCP - if you don't know what DHCP is, assume it is).
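A rough sketch of what steps 3 and 4 look like on the command line. The device names (/dev/sdb through /dev/sde), the 4-drive count, and the share name are all placeholders - check a proper mdadm/Samba tutorial and substitute your own drives before running anything:

```shell
#!/bin/sh
# Step 3: create a 4-drive RAID5 array, put a filesystem on it, and mount it.
# Device names below are examples only - substitute your actual data drives.
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.ext3 /dev/md0
mkdir -p /mnt/media
mount /dev/md0 /mnt/media

# Step 4: share it with Samba by appending a share definition
# to /etc/samba/smb.conf, then restarting the Samba daemon:
#   [media]
#       path = /mnt/media
#       read only = no
```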


That list above is not very in-depth, but it should point you in the right direction if you choose to go that way. There are tutorials all over the web for each one of those topics, so if you have a question, google it or ask in a forum. I have built two Ubuntu boxes myself for the sole purpose of file sharing and backup, so if you have any questions I don't mind helping.

Finally, the parts I listed above were just ones that I quickly looked up at Newegg to get an estimate of how much it would cost. I didn't check for 100% compatibility or put too much thought into any of them.
August 9, 2007 2:48:46 PM

I just tackled this exact problem for my home use. Needed a high-capacity media server, wanted redundancy for the array, needed at least 8 drive capacity. I ran into the same problems as you - external drive arrays are too expensive, NAS units are too small/limited, and initial cost outlay for the drives is painful in any case.

I ended up building a computer to be a file server, and purchased components that allow me to expand the array later:

Case: Thermaltake Armor $150
PSU: Thermaltake Purepower W0100RU 500W $60
MB: Intel D945GNTLKR, on-board Video, Sound, Network, Socket 775 $115
Proc: Intel Celeron D 347 3.06GHz/533MHz $50
RAM: Corsair ValueSelect 1GB PC2-5300 $35
Optical: Pioneer DVR-112D $30
System HD: Seagate Barracuda 7200.10 80GB SATA $43
Data HD: Seagate Barracuda 7200.10 750GB SATA $210
RAID Controller: 3Ware 9650SE-8LPML PCIe x4 $515

Total about $1200. Add 2 Thermaltake iCage units ($17 each) to make the case hold the 8 drives.

Yes, the RAID controller I selected is overkill. The reason I chose it is because I've owned 5 other 3Ware cards over the last 5 years, and they operate flawlessly, and they never have problems when it comes time to rebuild a degraded array (something that I've seen other RAID cards and NAS's have problems with. :pfff:  How's that for a nice surprise? Your RAID-5 NAS works flawlessly until the day a drive goes down, then you find out that some obscure firmware bug hoses the array during the rebuild. Glad I paid for that "RAID-5 protection".) The other thing that the 3Ware controller can do is Online Capacity Expansion and Online Migration, so you can start with the single 750GB drive, later buy another one and migrate to RAID 1, later buy a 3rd one and migrate to RAID 5 and expand the array, and keep adding drives later until you get to 8 drives, expanding the array each time you add a drive.

I used Windows Server 2003 on this machine (I have a volume license for it) because it can use GPT disks, thus it will handle >2TB arrays (another thing that many RAID controllers and NAS's won't do). But you could also use any flavor of Linux, especially since 3Ware provides nice robust Linux drivers.

The only thing about this setup that's a bit non-standard is that the 3Ware card is a PCI Express x4 card. The motherboard I selected has only one slot that this will work in, that's the PCIe x16 slot intended for a video card. The 3Ware card works fine in this slot, but the motherboard BIOS gets confused when the machine goes through the boot process. The BIOS is evidently programmed that if it sees any card in the PCIe x16 slot, it assumes it's a video card, and attempts to initialize VGA output. The 3Ware card obviously doesn't respond to these requests from the BIOS, so the BIOS has to time out these initialization attempts before it continues the boot process. The delay is about 120 seconds on each bootup.

Nevertheless, I got the machine set up, and it works great. I can copy media to it at gigabit Ethernet speeds (getting around 31-33 MB/sec transfer rates from my desktop computers, which is far faster than any NAS out there), plus I have a full server OS to run FTP & HTTP for other purposes.

Later, I'll expand the array as necessary. 750GB drives keep coming down in price, so in the end when I'm putting those last few drives in, I'll end up saving a decent amount of money (drives are $209 right now, it's conceivable that before I need to get the last 2-3 drives of the array they'll be down to $149 or lower, and the cost savings there will offset some of the expense of the 3Ware card).
August 9, 2007 5:34:51 PM

One more thing...

Remember that the point of RAID isn't so much to protect your data as to remove downtime. There are many things RAID won't protect you from, which is why you should still have a backup outside of the main storage, on another machine.

Also, having a second copy of all the data ensures that if you do have any problems growing your array, or problems with the OS, you'll have the luxury of saying the heck with it and starting your array from scratch without losing any data.

If you do build your own, make sure you get an energy-efficient power supply, since these machines are the type you leave running 24/7. It may save you a few bucks in the long run.
August 10, 2007 12:05:32 AM

Ok, thanks for the input, guys. Looks like I can do what I want for less than $1500, so I guess I'll go ahead and do it. I think I'm going to go with that drive array I posted. I think. It's more appealing to me than a cheap computer setup as a fileserver/NAS, and for only a couple hundred dollars more. Seems a reasonable price to pay for the LED indicators, hot swappability and so on. I think I need some clarification, though.

1) If I do RAID through my OS, the major drawback is that if the OS is destroyed, so is the container information. (That and cross-platform or system migration is painful.) Correct? If I use a proper hardware RAID controller, the container information gets stored on the controller card (as well as the drives), so it's very easy to transfer the container between systems and/or controller cards. Right? So, in theory, with hardware RAID, I could hook this up to an XP desktop one day, a 2k3 server the next, a Linux box after that and then a <shudder> Mac OS X server - just by swinging the cables between platforms. (And rebooting first, naturally, since SATA isn't hot swappable).

2) This bay is actually set up as a 5/5/1/1. That means I could make an up to 10 drive RAID container if I had a card with 2 eSATA ports, or up to 12 drive RAID container with 4 eSATA ports. Correct? I'm not limited to two 5 drive containers, because of the port multiplier setup. The issue with this config (as opposed to a 3/3/3/3 or 4/4/4) is going to be raw throughput performance - because I'll be limited by having a single 3.0 Gbps SATAII service 5 physical drives. Right?
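On the throughput question, the back-of-the-envelope numbers look like this (300 MB/s is the usable SATA II rate after 8b/10b line encoding; the 5-drive split is from the 5/5/1/1 layout above):

```shell
#!/bin/sh
# One SATA II link: 3.0 Gb/s line rate, 8b/10b encoding -> ~300 MB/s usable.
# A port multiplier shares that single link across every drive on the channel.
link_mbs=300
drives=5
echo "~$(( link_mbs / drives )) MB/s per drive when all 5 stream at once"
```

Still plenty for streaming video to one user, but it would bottleneck a rebuild or a big parallel copy.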

zyberwoof said:
One more thing...

Remember that the point of RAID isn't so much to protect your data as to remove downtime. There are many things RAID won't protect you from, which is why you should still have a backup outside of the main storage, on another machine.

Also, having a second copy of all the data ensures that if you do have any problems growing your array, or problems with the OS, you'll have the luxury of saying the heck with it and starting your array from scratch without losing any data.

If you do build your own, make sure you get an energy-efficient power supply, since these machines are the type you leave running 24/7. It may save you a few bucks in the long run.


Given that this is just a media center for all my optical data, I'm not terribly worried about data protection. But the idea that RAID5 can allow for a drive failure with no loss of data is very appealing to me. No need to worry about going through all the effort of re-transferring the data to that drive (after I figure out what, exactly, I had on it in the first place), just drop another disk into the array.
August 10, 2007 12:43:42 AM

You must have a lot of movies, because when I compress my DVDs to H.264 for my Xbox 360 I get files in the 500MB-an-hour range, so my 600GB RAID 0 stores 1200 hours, which is about 800 movies.
August 10, 2007 2:10:46 AM

darkangelism said:
You must have a lot of movies, because when I compress my DVDs to H.264 for my Xbox 360 I get files in the 500MB-an-hour range, so my 600GB RAID 0 stores 1200 hours, which is about 800 movies.


TV shows. Consider that a 30 minute show with 25 episodes per season chews up 9-10 hours every season. Heck, if I had all the Simpsons episodes ever aired, that's 160 hours, right there, on one show.

Besides, 500MB/hour just isn't going to cut it for me - that works out to less than 150KB/s, which will look like crap on a high res big screen.
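The conversion behind that figure, for anyone checking:

```shell
#!/bin/sh
# 500 MB per hour of video, expressed as a sustained bitrate.
mb_per_hour=500
kb_per_s=$(( mb_per_hour * 1024 / 3600 ))
echo "${kb_per_s} KB/s sustained"
```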
August 13, 2007 4:53:12 AM

Hmm, you might want to look at geom_raid5 - a software RAID5 solution for FreeNAS / FreeBSD. I've been running it for over a year now, as have many other FreeNAS users. I've written capacity expansion support for it, though that still needs to be tested properly; in a few weeks you can expect the first beta, which probably means it'll be ready in a few months. Nice if you want that feature.
geom_raid5 has many protections and cool features. It can detect an unclean shutdown, for example, and does automatic rebuilding, request combining, write buffering, etc. I consider it safe, as long as you stick to the stable branch (there are stable, TNG and PP branches).

As to your first question, the metadata will be stored on the disks instead. So if you take your disks and insert them into a clean FreeBSD system with geom_raid5 installed, it will detect and activate your RAID5 array flawlessly. Virtually all RAID implementations use on-disk metadata storage; an exception is the vinum volume manager.

Please also look at the option of using the new Samsung F1 1TB disks, starting with 4 of them (2.6TiB of RAID5 storage) and gradually extending that using capacity expansion. It's just an alternative, though; if you feel more comfortable with a hardware solution then you might want to consider that instead. Just know that those, too, can contain bugs. Hardware controllers do as well - even controllers like Areca have or have had data corruption bugs. It's life, live with it. :) 
August 15, 2007 1:28:13 AM

enlightenment said:
It's just an alternative, though; if you feel more comfortable with a hardware solution then you might want to consider that instead. Just know that those, too, can contain bugs. Hardware controllers do as well - even controllers like Areca have or have had data corruption bugs. It's life, live with it. :) 


This is something I've been wary of for years. I've seen far more RAID controllers and hardware that have problems when something goes wrong than I've seen ones that can adequately protect your data. Tom's has tested NAS units where merely removing and re-inserting a drive caused data loss.

A RAID controller's firmware and driver have to be rock solid, otherwise the data is at risk. In fact, sometimes you can consider it at more risk because the RAID controller lures you into a false sense of security, when all the while some firmware bug is ready to corrupt the array when it comes time to do a rebuild due to a failed drive.

I stick with hardware RAID controllers that have a track record of flawless implementations, like LSI and 3Ware.

On a related note, my home media server that I mentioned earlier in this thread has a 3Ware 9650SE-8LPML. This weekend, I expanded the array using the array migration feature, migrating from a single drive to a 3-drive RAID 5. I actually had a drive fail during the migration. The 3Ware card amazingly completed the migration, and ended up in a degraded (but operational) 2-drive RAID 5. I then replaced the failed drive, the card executed a rebuild, and a perfectly good 3-drive RAID 5 was the result. No data loss, the volume never went offline, the data was never unavailable. That's some solid drivers and firmware, and that's why the card is $514. :) 
August 15, 2007 1:46:50 AM

InfidelPimp said:
This is something I haven't been able to find any material on, but am wanting to do in the short term (before the new year, at the latest).

Basically, what I want to do is build a large RAID5 array to attach to my current PC (it was bleeding edge at the start of 2006 - so dual core, 4 gigs of DDR, etc.) as a media dump, to be hooked up to my new large screen LCD TV. The idea being that I'm lazy and don't like changing DVDs (or going through 6 DVDs per season of TV), and would much rather be able to play them on my TV without getting off the couch and wading through mounds of DVDs. With the computer attached, I'm hoping I can even use DivX/XviD to compress my DVDs and fit my whole media library on there. At least, I think a large external drive enclosure with a big RAID5 array is what I'm after.

I'd like to do this for less than $1500, which is my problem. All the 8+ disk drive arrays I've been able to find online are about $3000 and up (even for SATA, never mind SCSI), drives sold separately - clearly aimed at the enterprise segment of the market. The 4-disk arrays aimed at consumers don't look like they'll cut it for me: four 750GB drives in RAID5 lose 750GB of storage space to parity, which means 2.25TB costs me $800 just for disk space, and then I'm *really* limited on expansion options if I outgrow it. Which I get the feeling I might.

Does anyone know of anything out there that might be a fit for what I want to do, that won't cost more than a cheap new car? Obviously, I don't need a high performance solution, because it's just going to be one user streaming large video files, I just need the multiple TB storage with low overhead redundancy.

I bumped into this tempting array, but I don't know if I can make one 12 disk RAID5 array with it, or if the three channels on the back restrict me to three 4 drive arrays or not. Also, I couldn't find an affordable RAID controller card that supported 8 or more SATA drives and didn't suck or only had internal connections, etc. $650 for the drive array is great, but if I need to buy an $800 card, and then four figures worth of drives to put in it, that's a little too expensive. :( 

Any advice, comments or product recommendations would be greatly appreciated.

You can get a bigger case and use a RAID card with internal ports. Do you have any free x4 or higher PCIe slots?
August 15, 2007 1:52:18 AM

SomeJoe7777 said:
I just tackled this exact problem for my home use. Needed a high-capacity media server, wanted redundancy for the array, needed at least 8 drive capacity. I ran into the same problems as you - external drive arrays are too expensive, NAS units are too small/limited, and initial cost outlay for the drives is painful in any case.

I ended up building a computer to be a file server, and purchased components that allow me to expand the array later:

Case: Thermaltake Armor $150
PSU: Thermaltake Purepower W0100RU 500W $60
MB: Intel D945GNTLKR, on-board Video, Sound, Network, Socket 775 $115
Proc: Intel Celeron D 347 3.06GHz/533MHz $50
RAM: Corsair ValueSelect 1GB PC2-5300 $35
Optical: Pioneer DVR-112D $30
System HD: Seagate Barracuda 7200.10 80GB SATA $43
Data HD: Seagate Barracuda 7200.10 750GB SATA $210
RAID Controller: 3Ware 9650SE-8LPML PCIe x4 $515

Total about $1200. Add 2 Thermaltake iCage units ($17 each) to make the case hold the 8 drives.

For the same price you could get an nForce 7025 board with a low-end dual core AMD CPU.
August 15, 2007 1:55:24 AM

InfidelPimp said:
Ok, thanks for the input, guys. Looks like I can do what I want for less than $1500, so I guess I'll go ahead and do it. I think I'm going to go with that drive array I posted. I think. It's more appealing to me than a cheap computer setup as a fileserver/NAS, and for only a couple hundred dollars more. Seems a reasonable price to pay for the LED indicators, hot swappability and so on. I think I need some clarification, though.

1) If I do RAID through my OS, the major drawback is that if the OS is destroyed, so is the container information. (That and cross-platform or system migration is painful.) Correct? If I use a proper hardware RAID controller, the container information gets stored on the controller card (as well as the drives), so it's very easy to transfer the container between systems and/or controller cards. Right? So, in theory, with hardware RAID, I could hook this up to an XP desktop one day, a 2k3 server the next, a Linux box after that and then a <shudder> Mac OS X server - just by swinging the cables between platforms. (And rebooting first, naturally, since SATA isn't hot swappable).

You want hardware RAID for things like RAID 5 and higher. OS RAID has more overhead and limits than on-board software RAID.
August 15, 2007 3:45:52 AM

Quote:
Remember: at 2TB and above, you'll need a 64-bit OS and device driver.


Windows Server 2003 32-bit with SP1 or higher can handle >2TB volumes. Vista 32-bit can do so as well.
August 15, 2007 8:36:57 AM

Correct me if I'm wrong... isn't the 2GB limit for 32-bit CPUs for RAM? Maybe there's some secret NTFS limit that I don't know about, but I remember reading that NT4 Server's NTFS format could do something like 16 exabytes of data, so I can't imagine what could possibly limit it to 2GB now. I have 1.75GB on my setup for this exact purpose, but my controller claims to be able to do more than 2GB, and nowhere does it ever mention that you'd need 64-bit.

Am I overlooking something?
August 15, 2007 3:35:34 PM

cyberjock said:
Correct me if I'm wrong... isn't the 2GB limit for 32-bit CPUs for RAM? Maybe there's some secret NTFS limit that I don't know about, but I remember reading that NT4 Server's NTFS format could do something like 16 exabytes of data, so I can't imagine what could possibly limit it to 2GB now. I have 1.75GB on my setup for this exact purpose, but my controller claims to be able to do more than 2GB, and nowhere does it ever mention that you'd need 64-bit.

Am I overlooking something?


First, we're talking about Terabytes (TB), not Gigabytes (GB). The 2TB limit for disk storage is what we're referring to.

The limitation is not the NTFS file system. Theoretically, NTFS can address up to 16EB of data, although current Windows implementations actually limit it to 256TB. The limit is how many blocks of disk storage the underlying storage system transport protocol can address.

Disks that are in the MBR format (which is what the PC BIOS addresses and what almost all standard disk storage drivers are built around) can address a maximum of 2^32 blocks. At 512 bytes per sector, this is 2TB. The underlying storage system drivers in Windows XP 32-bit, and all previous versions of Windows are also built around 32-bit block addressing, and cannot access blocks on a disk that are beyond 2TB.
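The arithmetic behind that 2TB figure, spelled out:

```shell
#!/bin/sh
# 32-bit LBA / MBR limit: 2^32 addressable sectors, 512 bytes each.
sectors=4294967296        # 2^32
bytes=$(( sectors * 512 ))
echo "${bytes} bytes = $(( bytes / 1024 / 1024 / 1024 / 1024 )) TiB"
```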

To access blocks beyond 2TB, several things need to be in place. First, the storage transport system must support addressing blocks beyond 2^32, which means the protocol must support a larger number of bits for the LBA address. LBA48 for IDE/SATA does this, as does CDB16 for SCSI. You must have a disk controller that supports one of these standards. If your controller card claims to be able to do >2TB volumes, it probably implements one of these.

Second, the operating system must be able to address blocks past 2^32, so the entire disk subsystem of the OS needs the capability to use >32 bit LBA addressing. Windows XP x64, Windows Server 2003 SP1 or higher, and all versions of Windows Vista now have this capability.

Third, the disk must be in a partition format that supports addressing blocks past 2^32, which the MBR partitioning scheme doesn't support. The above listed versions of Windows have a new partitioning scheme called GPT that allows this. See the Windows and GPT FAQ.

Meeting those 3 requirements will enable you to use data disks larger than 2TB. To be able to boot from a volume that's larger than 2TB, a 4th requirement is necessary:

Fourth, since the PC BIOS is built around the MBR partitioning scheme, the PC BIOS inherently cannot address a disk larger than 2TB. A new BIOS standard, the Extensible Firmware Interface (EFI) is able to use plug-ins to address the storage subsystem. A system with an EFI BIOS and a GPT plug-in would be able to boot from a >2TB Windows disk. There are no such systems on the market right now. The Intel CPU-based Macintoshes have an EFI BIOS, but to my knowledge do not have a GPT plug-in. The Intel server systems that used to be built around the Itanium (IA64) processor had an EFI BIOS and a GPT plug-in, but all such systems are no longer on the market.
August 15, 2007 3:43:12 PM

Quote:
> I'd like to do this for less than $1500

1 x RocketRAID 2340 @ $460 + tax (www.newegg.com)
8 x WD1600YS @ $60 = $480 + tax

Subtotal: $940 + tax & shipping


8 x WD2500YS @ $75 = $600 + tax

Subtotal: $1,060 + tax & shipping

(keep scaling the HDD size upwards
until you exceed your budget)

Remember: at 2TB and above,
you'll need a 64-bit OS and device driver.


Sincerely yours,
/s/ Paul Andrew Mitchell
Webmaster, Supreme Law Library
http://www.supremelaw.org/

8-port SAS/SATA2 card with 128MB of RAM, PCIe x4
http://www.newegg.com/Product/Product.aspx?Item=N82E168...
$469.99

http://www.newegg.com/Product/Product.aspx?Item=N82E1681611604...
8-port SATA1/2 card with 256MB of RAM, PCIe x4, and RAID 6
$514.99

http://www.newegg.com/Product/Product.aspx?Item=N82E168...
8-port SATA1/SATA2 card with 256MB of RAM, PCIe x8, and RAID 6
$489.99

The $489 one is better than the $460 one, which has 0MB of RAM.
August 15, 2007 5:22:20 PM

SomeJoe7777 said:
First, we're talking about Terabytes (TB), not Gigabytes (GB). The 2TB limit for disk storage is what we're referring to.

The limitation is not the NTFS file system. Theoretically, NTFS can address up to 16EB of data, although current Windows implementations actually limit it to 256TB. The limit is how many blocks of disk storage the underlying storage system transport protocol can address.

Disks that are in the MBR format (which is what the PC BIOS addresses and what almost all standard disk storage drivers are built around) can address a maximum of 2^32 blocks. At 512 bytes per sector, this is 2TB. The underlying storage system drivers in Windows XP 32-bit, and all previous versions of Windows are also built around 32-bit block addressing, and cannot access blocks on a disk that are beyond 2TB.

To access blocks beyond 2TB, several things need to be in place. First, the storage transport system must support addressing blocks beyond 2^32, which means the protocol must support a larger number of bits for the LBA address. LBA48 for IDE/SATA does this, as does CDB16 for SCSI. You must have a disk controller that supports one of these standards. If your controller card claims to be able to do >2TB volumes, it probably implements one of these.

Second, the operating system must be able to address blocks past 2^32, so the entire disk subsystem of the OS needs the capability to use >32 bit LBA addressing. Windows XP x64, Windows Server 2003 SP1 or higher, and all versions of Windows Vista now have this capability.

Third, the disk must be in a partition format that supports addressing blocks past 2^32, which the MBR partitioning scheme doesn't support. The above listed versions of Windows have a new partitioning scheme called GPT that allows this. See the Windows and GPT FAQ.

Meeting those 3 requirements will enable you to use data disks larger than 2TB. To be able to boot from a volume that's larger than 2TB, a 4th requirement is necessary:

Fourth, since the PC BIOS is built around the MBR partitioning scheme, the PC BIOS inherently cannot address a disk larger than 2TB. A new BIOS standard, the Extensible Firmware Interface (EFI) is able to use plug-ins to address the storage subsystem. A system with an EFI BIOS and a GPT plug-in would be able to boot from a >2TB Windows disk. There are no such systems on the market right now. The Intel CPU-based Macintoshes have an EFI BIOS, but to my knowledge do not have a GPT plug-in. The Intel server systems that used to be built around the Itanium (IA64) processor had an EFI BIOS and a GPT plug-in, but all such systems are no longer on the market.

If the PC BIOS cannot address disks larger than 2TB, but you have a RAID card with its own BIOS that can, will you be able to boot from a >2TB RAID setup?
March 27, 2009 8:08:52 AM

Where does the EFI BIOS live?

Is it the BIOS for the disk controller (e.g. 3Ware RAID card), or the BIOS for the PC the controller is in (e.g. HP ProLiant server)?