One of my backup procedures involves mounting a RAID1 array built from two external drives. Today I ran the backup script and it failed: mdadm was refusing to start the array with both disks (in the following examples, bpc ~ # is the command prompt):
bpc ~ # mdadm --assemble /dev/md3 /dev/sdd1 /dev/sde1
mdadm: /dev/md3 has been started with 1 drive (out of 2).
Looking in /var/log/messages revealed that the kernel was refusing to add one of the disks to the array because it was "non-fresh" (I believe this was down to a silly mistake I made the other day: I started my backups without assembling the array properly, so data was written to one of the disks but not the other):
Jan 4 12:22:03 localhost kernel: [ 840.242175] md: bind<sdd1>
Jan 4 12:22:03 localhost kernel: [ 840.242949] md: bind<sde1>
Jan 4 12:22:03 localhost kernel: [ 840.242977] md: kicking non-fresh sdd1 from array!
Jan 4 12:22:03 localhost kernel: [ 840.242983] md: unbind<sdd1>
Jan 4 12:22:03 localhost kernel: [ 840.246293] md: export_rdev(sdd1)
Jan 4 12:22:03 localhost kernel: [ 840.247354] raid1: raid set md3 active with 1 out of 2 mirrors
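If it isn't obvious from the log which member was kicked, the device name can be pulled out of the "kicking non-fresh" message. This is just a sketch: the kicked_device function name and its optional file argument are my own additions for illustration, and the default log path may differ on your distro; the message format is the one shown above.

```shell
# kicked_device [LOGFILE]
# Print the device that md kicked as "non-fresh", as recorded in
# the kernel log. The function name and optional argument are my
# own additions; the default path and the message format match
# the log excerpt above.
kicked_device() {
    log=${1:-/var/log/messages}
    grep 'kicking non-fresh' "$log" | sed 's/.*non-fresh \([a-z0-9]*\) from array.*/\1/'
}

# e.g.: kicked_device    # prints sdd1 for the log excerpt above
```

Another way to confirm which disk is stale is to run mdadm --examine on each member: the non-fresh one reports a lower Events count than its partner.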
To fix this problem you'll need to assemble the array, manually re-add the missing drive, then wait while the array rebuilds itself.
1. Assemble the array:
bpc ~ # mdadm --assemble /dev/md3 /dev/sdd1 /dev/sde1
mdadm: /dev/md3 has been started with 1 drive (out of 2).
2. Add the missing drive:
bpc ~ # mdadm /dev/md3 --add /dev/sdd1
mdadm: re-added /dev/sdd1
3. Wait while it fixes (rebuilds) itself:
bpc ~ # watch cat /proc/mdstat
Every 2.0s: cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 sde1[1] sdd1[2]
244195904 blocks [2/1] [_U]
[>....................] recovery = 0.1% (246656/244195904) finish=197.7min speed=20554K/sec
md1 : active raid1 sda1[0] sdb1[1] sdc1[2](S)
192640 blocks [2/2] [UU]
md2 : active raid1 sdc2[2](S) sda2[0] sdb2[1]
488191168 blocks [2/2] [UU]
unused devices: <none>
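If you'd rather script the wait than sit watching /proc/mdstat, something like the following works. It's a sketch, not a polished tool: the wait_rebuild function name and its optional file argument are my own additions, and it simply polls until the array's status line shows both mirrors up ([UU]), as in the healthy md1 and md2 arrays above.

```shell
#!/bin/sh
# wait_rebuild ARRAY [MDSTAT_FILE]
# Poll an mdstat-style file until ARRAY's status line shows all
# mirrors present ([UU]). The function name and the optional file
# argument are illustrative additions; the mdstat format is the
# one shown in the watch output above.
wait_rebuild() {
    array=$1
    mdstat=${2:-/proc/mdstat}
    # The [UU]/[_U] flags appear on the "blocks" line immediately
    # after the array's own line, hence grep -A 1.
    until grep -A 1 "^$array " "$mdstat" | grep -q '\[UU\]'; do
        sleep 60
    done
    echo "$array rebuild complete"
}

# e.g.: wait_rebuild md3
```

Once the recovery line disappears and md3 shows [2/2] [UU], the mirror is consistent again and the backup can be run as normal.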