
mdadm and whole disk array members



I've spent a few days experimenting with using whole disks as RAID 5 array members and have concluded that it simply doesn't work reliably enough to be used.

The main problem I had was that mdadm seems to have trouble assembling the array when it uses entire disks instead of partitions. Each time I restarted my computer, I had to recreate the array from scratch. This halts the boot process, because /etc/mdadm/mdadm.conf and /etc/fstab both identify an array that should be started and mounted. Fortunately the original create command was still in my bash history, so I could re-run it with the right parameters.
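For what it's worth, the usual way to make assembly survive a reboot is to record the array by UUID in mdadm.conf (typically via "mdadm --detail --scan") rather than relying on device names. A sketch of what such an entry looks like, with an illustrative UUID, not one from my system:

```
# /etc/mdadm/mdadm.conf -- appended with: mdadm --detail --scan
# The UUID below is a placeholder; yours will differ.
ARRAY /dev/md0 metadata=1.2 name=host:0 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd
```

On Debian-based systems the initramfs also has to be regenerated (update-initramfs -u) for the entry to take effect at boot. In my case even this didn't help with whole-disk members, which is the point of this report.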

However, adding another disk to the array made that original create command obsolete. Worse, the kernel assigned different device names to the drives once the new one was plugged in, so I couldn't simply append the new drive to the old create command.

Fortunately I still had a decade-old script that cycles through all device orderings until one produces a mountable array (I wrote it for similar problems back in 2010). Unfortunately, this time it found no mountable combination, no matter what order the drives were given in (including with one slot set to "missing").
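In case it's useful to anyone hitting the same wall, here is a minimal sketch of that kind of brute-force helper. It is not my original script; the device names, array name, and RAID parameters are assumptions for illustration. It only prints candidate mdadm invocations, since actually running "mdadm --create --assume-clean" against live disks is destructive and should be done by hand, one candidate at a time, followed by a read-only mount attempt.

```python
# Sketch of a brute-force order-recovery helper for a RAID 5 re-create.
# Assumptions (hypothetical): 4 members, array /dev/md0, metadata/chunk
# defaults match the original array. It PRINTS commands; it does not run them.
from itertools import permutations


def candidate_orders(devices):
    """Yield every ordering of the member list, with each member in turn
    replaced by the literal 'missing' (a degraded RAID 5 slot)."""
    seen = set()
    for i in range(len(devices)):
        trial = devices[:i] + ["missing"] + devices[i + 1:]
        for perm in permutations(trial):
            if perm not in seen:
                seen.add(perm)
                yield perm


def format_create(perm, md="/dev/md0", level=5):
    """Render one candidate 'mdadm --create' command line."""
    return ("mdadm --create {md} --assume-clean --level={level} "
            "--raid-devices={n} {members}").format(
                md=md, level=level, n=len(perm), members=" ".join(perm))


if __name__ == "__main__":
    disks = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # assumed names
    for perm in candidate_orders(disks):
        print(format_create(list(perm)))
        # After each candidate, one would try e.g.:
        #   mount -o ro /dev/md0 /mnt   (stop and move on if it fails)
```

With four members this yields 96 distinct candidates (4 choices of the "missing" slot times 24 orderings each), which is why automating the loop matters.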

I've found many other people reporting similar problems when using whole disks to create mdadm RAID arrays. Some of those reports go back many years, so this isn't new.

I suggest that, since the developers apparently can't get this to work reliably, the option to use whole disks be removed and mdadm be made to insist on partitions. At the very least, mdadm --create should warn that using a whole device instead of a partition may cause problems.
