
Re: Raid 1 borked



	This might be better handled on linux-raid@vger.kernel.org

On 10/26/2020 10:35 AM, Dan Ritter wrote:
Bill wrote:
So we're setting up a small server with a pair of 1 TB hard disks sectioned
into 5x100GB RAID 1 partition pairs for data, with 400GB+ reserved for
future use on each disk.

That's weird, but I expect you have a reason for it.

It does seem odd, and I am curious what the reasons might be. To be clear: do you mean that, rather than RAID 1 pairs within each disk, each partition is mirrored with the corresponding partition on the other drive?

	Also, why so small and so many?

I'm not sure what happened; we had the five pairs of disk partitions set up
properly through the installer without problems. However, now the RAID 1
pairs are not mounted as separate partitions but do show up as
subdirectories under /, i.e. /datab, and they do seem to work as part of the
regular / filesystem. df -h does not show any md devices or sda/b devices,
and neither does mount. (The system partitions are on an NVMe SSD.)

Mounts have to happen at mount points, and mount points are
directories. What you have is five mount points and nothing
mounted on them.
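
You can confirm that directly. A quick check, assuming mount points named /dataa through /datae (only /dataa and /datab actually appear in this thread, so adjust as needed):

# mountpoint(1) reports whether anything is mounted on a directory;
# "is not a mountpoint" means it's just a plain directory on /.
for d in /dataa /datab /datac /datad /datae; do mountpoint "$d"; done

# df likewise shows the root filesystem backing the directory when
# nothing is mounted there.
df -h /datab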


lsblk reveals sda and sdb with sda[1-5] and sdb[1-5] but no md[0-5]. blkid
reveals that sda[1-5] and sdb[1-5] are still listed as
TYPE="linux_raid_member".

So first of all, I'd like to be able to diagnose what's going on. What
commands should I use for that? And secondly, I'd like to get the RAID
arrays mounted as separate partitions again. How do I do that?

Well, you need to get them assembled and mounted. I'm assuming
you used mdadm.

Start by inspecting /proc/mdstat. Does it show 5 assembled MD
devices? If not:

mdadm -A /dev/md0
mdadm -A /dev/md1
mdadm -A /dev/md2
mdadm -A /dev/md3
mdadm -A /dev/md4

And tell us any errors.
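
One note on those: a bare mdadm -A /dev/mdX needs either ARRAY lines in mdadm.conf or usable names in the superblocks to know which members belong together. If those calls fail because mdadm can't identify the arrays, the usual fallback is to let it work things out by scanning:

# Assemble every array whose member devices mdadm can locate.
mdadm --assemble --scan

# Then confirm all five arrays came up.
cat /proc/mdstat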

Perhaps before that (or after), what are the contents of /etc/mdadm/mdadm.conf? Try:

grep -v "#" /etc/mdadm/mdadm.conf

Once they are assembled, mount them:

mount -a

if that doesn't work -- did you remember to list them in
/etc/fstab? Put them in there, something like:

/dev/md0    /dataa  ext4    defaults    0   0

and try again.
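
One refinement worth considering: md device numbering can change between boots, so keying fstab entries to the filesystem UUID is a bit more robust than /dev/md0. A sketch, assuming the ext4 example above:

# Get the filesystem UUID of the assembled array.
blkid /dev/md0

# Then use it in /etc/fstab in place of the device name:
# UUID=<uuid-from-blkid>  /dataa  ext4  defaults  0  0

# Test one entry without rebooting.
mount /dataa

(If mount complains that there is no filesystem, the arrays may simply never have been formatted; since you say there's no data on them yet, mkfs.ext4 on each md device would be harmless.)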

-dsr-



Fortunately, there is no data to worry about. However, I'd rather not
reinstall, as we've put in a bit of work installing and configuring things.
I'd prefer not to lose that. Can someone help us out?

Don't fret. There is rarely, if ever, any need to reinstall a system to sort out RAID. Even when / or /boot live on RAID arrays - which does not seem to be the case here - RAID can ordinarily be managed without resorting to a reinstall, and I cannot think of any reason a reinstall would be required just to manage a mounted file system. The root user can make any sort of change to mounted file systems, and that applies all the more in your case, where the arrays aren't even mounted yet. Even in the worst case - and yours is far from that - one can ordinarily boot from a DVD or a USB drive and repair the system from there.
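
For completeness, that rescue-media route is just as routine. A rough sketch of what it looks like from a live USB or DVD, assuming the array superblocks are intact, as yours appear to be:

# From the live environment, assemble whatever arrays mdadm can find.
mdadm --assemble --scan

# Mount one array somewhere temporary and inspect or repair it.
mkdir -p /mnt/md0
mount /dev/md0 /mnt/md0

# When finished, unmount cleanly before rebooting.
umount /mnt/md0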

