Hi all,
I have been using a Buffalo Linkstation Pro Duo LS-WVLBF4 with two 1TB drives running in RAID1. Last week I bought two 3TB WD Reds to replace the 1TB pair. I used mdadm to grow the RAID to around 3TB, apparently successfully, and the info shown in /proc/mdstat and mdadm --examine both look correct:
/dev/sdb6:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 6e4dc691:2cc8406b:3936562c:b0e820cc
Name : LS-WVLBF4:2 (local to host LS-WVLBF4)
Creation Time : Thu Nov 1 22:44:10 2012
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 5829747952 (2779.84 GiB 2984.83 GB)
Array Size : 5829747952 (2779.84 GiB 2984.83 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : bb12925a:89a5f710:150d811b:6ea874bd
Update Time : Mon Oct 21 23:34:16 2013
Checksum : d9f990ec - correct
Events : 7910
Device Role : Active device 0
Array State : AA ('A' == active, '.' == missing)
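For reference, the steps I ran were roughly the following (typing this from memory, so the exact device names and ordering here are my best guess rather than an exact transcript):

# replaced one disk at a time: failed/removed the old 1TB member,
# swapped in the 3TB drive, partitioned it, then re-added and let it resync
mdadm /dev/md2 --fail /dev/sda6 --remove /dev/sda6
mdadm /dev/md2 --add /dev/sda6
cat /proc/mdstat            # waited here for the rebuild to finish
# ... then the same again for the second disk (/dev/sdb6) ...
# finally grew the array to use all the space on the new partitions
mdadm --grow /dev/md2 --size=max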
but the actual usable space has not changed, and df -h still shows the array (md2) at roughly 1TB:
Filesystem Size Used Available Use% Mounted on
/dev/md1 4.7G 837.8M 3.6G 18% /
udev 10.0M 128.0k 9.9M 1% /dev
/dev/ram1 15.0M 188.0k 14.8M 1% /mnt/ram
/dev/md0 969.2M 33.1M 936.1M 3% /boot
/dev/md2 917.1G 724.2G 192.9G 79% /mnt/array1
tmpfs 8.0M 1.8M 6.2M 22% /mnt/ram/com.kernel.org
/dev/md2 917.1G 724.2G 192.9G 79% /opt
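From what I understand, mdadm --grow only enlarges the md device itself, not the filesystem sitting on it, so I am wondering whether a filesystem resize is the step I am missing. Something like the following, depending on whether the data partition is XFS or ext3/ext4 (I have not confirmed which one the Linkstation firmware uses):

# if /dev/md2 is XFS, it can be grown while mounted (pass the mount point)
xfs_growfs /mnt/array1
# if it is ext3/ext4 instead, resize offline to be safe
umount /mnt/array1
e2fsck -f /dev/md2
resize2fs /dev/md2
mount /dev/md2 /mnt/array1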
Can anyone tell me what I should do, or what I have missed in the process? Thanks!