Raid 5 storage for Dell

jimruns919

Honorable
Aug 26, 2013
I had an old Dell PowerEdge 700 which was installed with a SCSI card. It only had 2 SATA ports onboard, so I installed a 4-port SATA II PCI card and three 1TB hard drives. It took over 2 days to format the RAID 5 array, but when it finished, it only sees 1.81TB. I do have a 4th drive I will be adding, but I would expect to see more than the equivalent of a 2TB drive when there are 3TB formatted as NTFS. Is this a limitation? I cannot locate any documentation about storage capacities, and I know the PCI card can handle the amount of space. Any help would be appreciated.
 


Congratulations! You've discovered the famous 2.2TB limit! This is an old limitation resulting from the use of 32-bit LBA. LBA stands for Logical Block Addressing. It's a method of addressing storage space regardless of the storage mechanism. Floppy disks, SSDs, hard disks, CD drives, DVD drives, Blu-ray drives, tape drives, and more can all be accessed in a consistent fashion using LBA. Prior to LBA, it was necessary to perform access in a device-specific manner, such as the old-fashioned Cylinder/Head/Sector method used for hard drives.

The maximum amount of addressable space on a device can be derived from the following formula:

Max = 2^<LBA address width> * <addressable block size>

32 bit LBA has an address width of 32 bits, and most hard drives use 512 byte sectors. This yields:

2^32 * 512 bytes = ~2.2 Terabytes
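The arithmetic above can be checked with a few lines of Python (an illustrative helper, not from the original post):

```python
def max_addressable_bytes(lba_bits: int, sector_bytes: int = 512) -> int:
    """Maximum capacity addressable with a given LBA width and sector size."""
    return 2 ** lba_bits * sector_bytes

print(max_addressable_bytes(32))  # 2199023255552 bytes, i.e. ~2.2 TB
print(max_addressable_bytes(48))  # ~144 PB with 48-bit LBA
```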

Hard disk manufacturers switched to 48-bit LBA in 2003, long before 2TB hard drives were brought to market. However, motherboard manufacturers continued to use the BIOS firmware standard through at least 2010. Most have now switched to the new and shiny UEFI standard, but some motherboards can still be found with old BIOS-based firmware.

The BIOS standard is traditionally accompanied by the MBR partitioning scheme, which continued to use 32-bit LBA for partition sizing (only 4 bytes are allocated for this value) long after drive manufacturers had switched to 48-bit LBA. This means that any drive initialized with an MBR partitioning scheme cannot expose more than 2.2TB of storage space.

The UEFI firmware standard is accompanied by a new partitioning scheme known as GPT, which allocates 8 bytes for the LBA, allowing up to 64-bit LBA. Unfortunately, the GPT partitioning scheme is entirely foreign to the BIOS firmware standard, and any drive initialized with it will not be bootable from BIOS. However, certain operating systems can read GPT-initialized drives provided that the operating system itself is installed on a drive with an MBR partitioning scheme.
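To illustrate where that 4-byte limit lives, here is a small Python sketch that packs and unpacks the starting-LBA and sector-count fields of an MBR partition entry (the values are made up for illustration). Both fields are 32-bit, so the sector count tops out at 2^32 - 1:

```python
import struct

# A 16-byte MBR partition entry: status (1 byte), CHS start (3), type (1),
# CHS end (3), starting LBA (4), sector count (4). Values here are made up.
entry = struct.pack(
    "<B3sB3sII",
    0x80,              # bootable flag
    b"\x00\x00\x00",   # CHS start (unused when LBA is in effect)
    0x07,              # partition type byte (NTFS)
    b"\x00\x00\x00",   # CHS end
    2048,              # starting LBA
    0xFFFFFFFF,        # sector count: the 32-bit maximum
)

start_lba, sector_count = struct.unpack_from("<II", entry, 8)
print(sector_count * 512)  # (2**32 - 1) * 512 = 2199023255040 bytes, ~2.2 TB
```

GPT widens both of these to 8-byte fields, which is why it has no problem describing a 3TB array.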

If you convert the RAID disk to a GPT partitioning scheme, you should be able to use all of the space on it. How you do this will depend on the operating system that you are running.

http://en.wikipedia.org/wiki/GUID_Partition_Table#Windows:_32-bit_versions

EDIT: I totally misread your post. RAID 5 has the combined capacity of n-1 disks. If you have three 1TB hard drives in RAID 5, you will have (3-1) x 1TB = 2TB of addressable space. Adding a fourth 1TB drive will expand this to 3TB, at which point you will run into the limitation I mentioned above.

Microsoft exposes drive capacity in tebibytes (TiB), though it labels them "TB", whereas hard disk manufacturers market capacity in decimal terabytes.

1 TiB = 2^40 bytes
1 TB = 10^12 bytes

Since a tebibyte is slightly larger than a terabyte, the same capacity expressed in TiB comes out as a slightly smaller number.

2TB ≈ 1.81TiB
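That conversion is easy to verify (a quick illustrative calculation):

```python
TB = 10 ** 12   # decimal terabyte, as marketed by drive makers
TIB = 2 ** 40   # binary tebibyte, as Windows computes capacity

usable_bytes = 2 * TB      # 2TB of usable RAID 5 space
print(usable_bytes / TIB)  # ~1.819, which Windows displays as 1.81 TB
```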
 

Skeefers

Honorable
Aug 7, 2013
The way RAID 5 works, the amount of storage space you get equals the number of disks minus one, times the capacity of one disk:

(# of disks - 1) x capacity = available storage

The reason for this is that you lose one drive's worth of capacity to the RAID configuration's redundancy. The other .19TB you lost is due to how capacity is reported after formatting. Once you add the 4th 1TB drive to the RAID array, you will have ~3TB of storage space (again, minus the reporting difference).

http://en.wikipedia.org/wiki/Standard_RAID_levels#Usable_capacity
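The n-1 rule above can be sketched in a few lines of Python (an illustrative helper, not from the original post):

```python
def raid5_usable(num_disks: int, disk_capacity_tb: float) -> float:
    """Usable RAID 5 capacity: one disk's worth of space goes to parity."""
    if num_disks < 3:
        raise ValueError("RAID 5 requires at least 3 disks")
    return (num_disks - 1) * disk_capacity_tb

print(raid5_usable(3, 1.0))  # three 1TB drives -> 2.0 TB usable
print(raid5_usable(4, 1.0))  # four 1TB drives  -> 3.0 TB usable
```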
 
Solution