Expanding a RAID

A quick overview, more for my own reference than anything else:

Run lsblk to see the disks in the system:

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 9.1T 0 disk
└─md3 9:3 0 27.3T 0 raid5
└─md3p1 259:0 0 27.3T 0 part /storage
sdb 8:16 0 9.1T 0 disk
└─md3 9:3 0 27.3T 0 raid5
└─md3p1 259:0 0 27.3T 0 part /storage
sdc 8:32 0 3.6T 0 disk
├─sdc1 8:33 0 1G 0 part
├─sdc2 8:34 0 1G 0 part
│ └─md0 9:0 0 1022M 0 raid1
│ └─md0p1 259:2 0 1020M 0 part /boot
├─sdc3 8:35 0 128G 0 part
│ └─md1 9:1 0 127.9G 0 raid1
│ └─md1p1 259:1 0 127.9G 0 part [SWAP]
└─sdc4 8:36 0 3.5T 0 part
└─md2 9:2 0 3.5T 0 raid1
└─md2p1 259:3 0 3.5T 0 part /
sdd 8:48 0 3.6T 0 disk
├─sdd1 8:49 0 1G 0 part /boot/efi
├─sdd2 8:50 0 1G 0 part
│ └─md0 9:0 0 1022M 0 raid1
│ └─md0p1 259:2 0 1020M 0 part /boot
├─sdd3 8:51 0 128G 0 part
│ └─md1 9:1 0 127.9G 0 raid1
│ └─md1p1 259:1 0 127.9G 0 part [SWAP]
└─sdd4 8:52 0 3.5T 0 part
└─md2 9:2 0 3.5T 0 raid1
└─md2p1 259:3 0 3.5T 0 part /
sde 8:64 0 9.1T 0 disk
└─md3 9:3 0 27.3T 0 raid5
└─md3p1 259:0 0 27.3T 0 part /storage
sdf 8:80 0 9.1T 0 disk
└─md3 9:3 0 27.3T 0 raid5
└─md3p1 259:0 0 27.3T 0 part /storage

Add the new disk and run lsblk again:

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 9.1T 0 disk
└─md3 9:3 0 27.3T 0 raid5
└─md3p1 259:0 0 27.3T 0 part /storage
sdb 8:16 0 9.1T 0 disk
└─md3 9:3 0 27.3T 0 raid5
└─md3p1 259:0 0 27.3T 0 part /storage
sdc 8:32 0 3.6T 0 disk
├─sdc1 8:33 0 1G 0 part
├─sdc2 8:34 0 1G 0 part
│ └─md0 9:0 0 1022M 0 raid1
│ └─md0p1 259:2 0 1020M 0 part /boot
├─sdc3 8:35 0 128G 0 part
│ └─md1 9:1 0 127.9G 0 raid1
│ └─md1p1 259:1 0 127.9G 0 part [SWAP]
└─sdc4 8:36 0 3.5T 0 part
└─md2 9:2 0 3.5T 0 raid1
└─md2p1 259:3 0 3.5T 0 part /
sdd 8:48 0 3.6T 0 disk
├─sdd1 8:49 0 1G 0 part /boot/efi
├─sdd2 8:50 0 1G 0 part
│ └─md0 9:0 0 1022M 0 raid1
│ └─md0p1 259:2 0 1020M 0 part /boot
├─sdd3 8:51 0 128G 0 part
│ └─md1 9:1 0 127.9G 0 raid1
│ └─md1p1 259:1 0 127.9G 0 part [SWAP]
└─sdd4 8:52 0 3.5T 0 part
└─md2 9:2 0 3.5T 0 raid1
└─md2p1 259:3 0 3.5T 0 part /
sde 8:64 0 9.1T 0 disk
└─md3 9:3 0 27.3T 0 raid5
└─md3p1 259:0 0 27.3T 0 part /storage
sdf 8:80 0 9.1T 0 disk
└─md3 9:3 0 27.3T 0 raid5
└─md3p1 259:0 0 27.3T 0 part /storage
sdg 8:96 0 9.1T 0 disk
├─sdg1 8:97 0 2G 0 part
│ └─md127 9:127 0 0B 0 md
├─sdg2 8:98 0 9.1T 0 part
│ └─md126 9:126 0 0B 0 md
├─sdg3 8:99 0 1G 0 part
└─sdg4 8:100 0 1G 0 part

/dev/sdg has been added
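
If it isn't obvious which sdX name a freshly inserted drive received, the stable symlinks under /dev/disk/by-id (which embed the model and serial number) map back to the kernel names, so you can match the new drive by its serial:

ls -l /dev/disk/by-id/ | grep sdg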

However, the new disk carries stale md metadata: lsblk shows its partitions auto-assembled as md126 and md127, so stop and remove both arrays before reusing the disk:

mdadm --stop /dev/md127
mdadm --remove /dev/md127
mdadm --stop /dev/md126
mdadm --remove /dev/md126
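
As an extra precaution (not something the original run did), you can also wipe the old md superblocks from the partitions so the stale arrays can never be auto-assembled again at boot:

mdadm --zero-superblock /dev/sdg1
mdadm --zero-superblock /dev/sdg2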

Now we can prepare the disk. Use parted to wipe the old partition table and write a fresh GPT label:

parted -s -a optimal /dev/sdg mklabel gpt
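
If parted complains about existing signatures on the disk, wipefs (part of util-linux) can clear them first. It removes every filesystem and raid signature it finds, so triple-check the device name:

wipefs --all /dev/sdg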

Use fdisk to check the new GPT partition table. Don't bother creating any partitions; the whole disk goes to mdadm.

Run fdisk and print (p) the table to check out the unpartitioned disk:

$ sudo fdisk /dev/sdg
Welcome to fdisk (util-linux 2.39.3).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): p
Disk /dev/sdg: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk model: WDC WD100EFAX-68
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 56FF5A1F-FE90-4AB1-ABD6-DBB1E1D22042

Command (m for help): q
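
The same check also works non-interactively with parted, if you'd rather not enter the fdisk prompt:

parted -s /dev/sdg print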

Check /proc/mdstat to see which array you're adding the disk to:

$ cat /proc/mdstat

Personalities : [raid6] [raid5] [raid4] [raid1] [raid0] [raid10]
md1 : active raid1 sdd3[1] sde3[0]
134085632 blocks super 1.2 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md2 : active raid1 sdd4[1] sde4[0]
3770516480 blocks super 1.2 [2/2] [UU]
bitmap: 1/29 pages [4KB], 65536KB chunk

md0 : active raid1 sdd2[1] sde2[0]
1046528 blocks super 1.2 [2/2] [UU]

md3 : active raid5 sdf[2] sdb[1] sdg[4] sda[0]
29298914304 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
bitmap: 0/73 pages [0KB], 65536KB chunk

unused devices: <none>

md3 in this case. Add the disk into the array:

mdadm --add /dev/md3 /dev/sdg

Run it, then check the array:

$ sudo mdadm --add /dev/md3 /dev/sdg
mdadm: added /dev/sdg

sysadmin@tnas:~$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1] [raid0] [raid10]
md1 : active raid1 sdd3[1] sde3[0]
134085632 blocks super 1.2 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md2 : active raid1 sdd4[1] sde4[0]
3770516480 blocks super 1.2 [2/2] [UU]
bitmap: 2/29 pages [8KB], 65536KB chunk

md0 : active raid1 sdd2[1] sde2[0]
1046528 blocks super 1.2 [2/2] [UU]

md3 : active raid5 sdc[5](S) sdf[2] sdb[1] sdg[4] sda[0]
29298914304 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
bitmap: 0/73 pages [0KB], 65536KB chunk

The new disk is now visible in the array as sdc[5], but it's marked as a spare (S). (Note that the drive lettering in these mdstat captures differs from the lsblk output above; here the new disk enumerates as sdc.)

Now we need to grow the array:

mdadm --grow /dev/md3 --raid-devices=5

where --raid-devices is the total number of active disks we want in the array: the original four plus the new one.
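
You can also confirm the new geometry with mdadm --detail; once the grow kicks off it should report five raid devices and a reshaping state:

$ sudo mdadm --detail /dev/md3 | grep -E 'Raid Devices|State'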

Watch /proc/mdstat to see the progress:

$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1] [raid0] [raid10]
md1 : active raid1 sdd3[1] sde3[0]
134085632 blocks super 1.2 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md2 : active raid1 sdd4[1] sde4[0]
3770516480 blocks super 1.2 [2/2] [UU]
bitmap: 1/29 pages [4KB], 65536KB chunk

md0 : active raid1 sdd2[1] sde2[0]
1046528 blocks super 1.2 [2/2] [UU]

md3 : active raid5 sdc[5] sdf[2] sdb[1] sdg[4] sda[0]
29298914304 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
[>....................] reshape = 0.0% (337408/9766304768) finish=3376.6min speed=48201K/sec
bitmap: 0/73 pages [0KB], 65536KB chunk

unused devices: <none>
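
Rather than re-running cat by hand, watch will refresh it for you. And if the reshape crawls, raising the md rebuild speed floor (a sysctl knob, in KB/s, reset at reboot; run as root) can help:

watch -n 60 cat /proc/mdstat
echo 100000 > /proc/sys/dev/raid/speed_limit_min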

Yep, this is going to take a while. Time to put the kettle on. One thing left for after the reshape completes: the new capacity isn't usable until the partition (md3p1) and the filesystem on /storage are grown too, see the sketch below.
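
A minimal sketch of that last step, assuming the /storage filesystem on md3p1 is ext4 (XFS would use xfs_growfs /storage instead); depending on your parted version it may ask you to confirm resizing a mounted partition:

parted /dev/md3 resizepart 1 100%
resize2fs /dev/md3p1

It's also worth updating the ARRAY line for md3 in /etc/mdadm/mdadm.conf with the output of mdadm --detail --scan so the recorded device count matches the grown array.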

Helpful guides:

https://wpguru.co.uk/2021/01/expand-software-raid-mdadm/
https://www.itsfullofstars.de/2019/03/how-to-add-a-new-disk-to-raid5/
https://tomlankhorst.nl/setup-lvm-raid-array-mdadm-linux