
Re: Where do I find the definitive man page for mdadm?



On Saturday 13 November 2021 22:37:19 Tom Dial wrote:

> On 11/13/21 14:57, Gene Heskett wrote:
[...]
> >>> It happened when I moved a drive from sda to sdd several years
> >>> ago.
> >>
> >> Barring some strange bug that only you have ever seen, it is not
> >> possible, so I believe you are mistaken. This is going to be
> >> another one of those things where you swear software behaved in
> >> incredibly improbable ways, you are asked to reproduce it and
> >> can't. I will gladly eat humble pie if you can reproduce this one
> >> and show us. I will be excited for the bug report we can make
> >> together, because that would be a real doozy.
> >>
> >> Like that time you said that having an IPv6 address configured
> >> prevented you from compiling some software, a claim you kept
> >> repeating in multiple unrelated threads any time IPv6 was
> >> mentioned, until you were asked to reproduce the issue and
> >> couldn't.

That one I found. Getting rid of avahi and its git fixed that right up.
I think in the intervening years it's gotten some TLC, because it's not
been a problem since wheezy. The problem was that it was assigning an
IPv6 route to a system 200 miles from the nearest IPv6 socket.

> >> We all make mistakes from time to time but filling the archives
> >> with bold assertions like "filesystem UUIDs are volatile" I think
> >> would come under the category of an extraordinary claim that would
> >> require extraordinary proof.
> >>
> >>> Getting ready to switch to the next version of Debian: because I
> >>> always install to a new drive, I installed wheezy on a new drive,
> >>> then put the old drive back in on a different SATA port to get my
> >>> data copied to the new drive. It would not boot except to
> >>> single-user mode. It took me 3 days to build an fstab that mounted
> >>> everything by labels. When I finally had a working system again, I
> >>> ran blkid again, and with the same drives except the boot drive
> >>> re-arranged, every UUID blkid reported was different from what it
> >>> was in the now commented-out lines in fstab.
> >>
> >> "blkid" also reports things called PARTUUIDs, so I think this is
> >> explained by it doing that, and you being confused. Nothing you
> >> have described could cause a filesystem UUID to change.
> >>
> >>> The downside of now using mkfs to install a label (I didn't use
> >>> mkfs back then, but something else) is that mkfs also wipes the
> >>> drive. In this case I hadn't moved anything to it yet, so I lost
> >>> nothing reformatting to install the label. The utility I used back
> >>> then, journal-something-or-other if I recall, could label a
> >>> partition that already had content, without losing that data.
> >>
> >> You have not once in this thread asked how to label an existing
> >> filesystem without re-creating it. Although you don't even need to
> >> ask us, because:
> >>
> >> https://lmgtfy.app/?q=how+do+I+label+an+ext4+filesystem
> >>
> >> So instead of doing a trivial search, or even asking, you just
> >> assume that it can't be done and have a nice old rant. Weird flex,
> >> but OK.
> >>
> >> The above is for ext* filesystems; other filesystems have their
> >> own tools for changing the label. A similar search will find them,
> >> too.
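> >>
> >> For example, a minimal sketch for ext4 (the device name is just an
> >> illustration; point it at your own partition):
> >>
> >>   e2label /dev/sdb1             # print the current label
> >>   e2label /dev/sdb1 adumps      # set a label; the data is untouched
> >>   tune2fs -L adumps /dev/sdb1   # tune2fs -L does the same job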
> >>
> >> People have been putting and changing labels on filesystems in
> >> Linux for decades. It's well understood and well documented. If you
> >> look. First you complain that fs UUIDs are volatile, now you
> >> complain that fs labelling is hard without even doing the most
> >> basic research. At least these topics have been adequately covered,
> >> so unwary searchers are unlikely to stumble upon this thread in
> >> future and be led down a very long garden path by the bizarre
> >> claims within.
> >>
> >>> It's simply too big a risk to do UUID mounts with something that
> >>> important.
> >>
> >> For you, maybe, but I guarantee this is down to some confusion on
> >> your part. Confusion is still a valid reason to shy away from
> >> something, especially when there is an alternate approach (mount
> >> by label) that works much better for you, but blaming it on
> >> mysteriously changing UUIDs and/or the mdadm man pages is not
> >> helping.
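> >>
> >> For the record, a label-based fstab entry is just, e.g. (the mount
> >> point is illustrative; "home2" is one of the labels in the blkid
> >> output below):
> >>
> >>   LABEL=home2  /home  ext4  defaults  0  2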
> >>
> >> Andy
> >
> > Wordwrap off. So which of these various UUIDs is actually valid in
> > an fstab?
> >
> > root@coyote:etc$ blkid
> > /dev/sda1: LABEL="stretchboot" UUID="06aa3215-a6a6-4fbb-86ca-186c47e1334c" TYPE="ext4" PARTUUID="6603f591-01"
> > /dev/sda2: UUID="8b675a91-5aa5-401b-bf50-d8afc3e8115a" TYPE="swap" PARTUUID="6603f591-02"
> > /dev/sda3: LABEL="stretchvar" UUID="ee491e5c-7394-434f-b50a-f4354f6c9869" TYPE="ext4" PARTUUID="6603f591-03"
> > /dev/sda5: UUID="0e698024-1cf3-4dbc-812d-10552c01caab" TYPE="ext4" PARTUUID="6603f591-05"
> > /dev/sdb1: LABEL="adumps" UUID="4982ee4c-58c4-4d2b-b9d5-69344c3cb090" TYPE="ext4" PARTUUID="3bb7fc74-01"
> > /dev/sdc1: LABEL="amandatapes-2T" UUID="3b6848c1-7b09-43be-a7aa-ae63d82f5f26" TYPE="ext4" PARTUUID="5997197d-01"
> > /dev/sde1: UUID="3d5a3621-c0e3-2c8a-e3f7-ebb3318edbfb" UUID_SUB="9cd6d3b5-6d13-8d46-a7e6-6f9847846d24" LABEL="coyote:0" TYPE="linux_raid_member"
> > /dev/sde2: UUID="ddb6ffa2-e068-b701-f316-cc5f83938a13" UUID_SUB="64609477-3041-8169-feab-73809dd337c6" LABEL="coyote:1" TYPE="linux_raid_member"
> > /dev/sdg1: UUID="3d5a3621-c0e3-2c8a-e3f7-ebb3318edbfb" UUID_SUB="38030389-42bc-f933-3945-8b22db9de87e" LABEL="coyote:0" TYPE="linux_raid_member"
> > /dev/sdg2: UUID="ddb6ffa2-e068-b701-f316-cc5f83938a13" UUID_SUB="dfd980a3-a155-4f50-f82d-02cbbe289891" LABEL="coyote:1" TYPE="linux_raid_member"
> > /dev/sdh1: UUID="3d5a3621-c0e3-2c8a-e3f7-ebb3318edbfb" UUID_SUB="ef0ffd69-5ce4-9629-ccd4-81b1f6431571" LABEL="coyote:0" TYPE="linux_raid_member"
> > /dev/sdh2: UUID="ddb6ffa2-e068-b701-f316-cc5f83938a13" UUID_SUB="b4d25ae6-fc68-1ce5-a08b-92df22c9030b" LABEL="coyote:1" TYPE="linux_raid_member"
> > /dev/sdf1: UUID="3d5a3621-c0e3-2c8a-e3f7-ebb3318edbfb" UUID_SUB="baca3a30-e9a5-f5e1-57e1-c197252c3500" LABEL="coyote:0" TYPE="linux_raid_member"
> > /dev/sdf2: UUID="ddb6ffa2-e068-b701-f316-cc5f83938a13" UUID_SUB="ff02585f-cafc-2d7a-3780-6cba4b48b0cb" LABEL="coyote:1" TYPE="linux_raid_member"
> > /dev/md1: LABEL="snapshot" UUID="733718b2-e7f8-4b00-a390-264e5c73c453" TYPE="ext4"
> > /dev/md0: LABEL="home2" UUID="708320b3-10af-4c15-b5b1-a9ff7be06d99" TYPE="ext4"
>
> From my experience, those identified by 'UUID=' and 'TYPE="ext4"' are
> the likely candidates. My presumption is that if something is reported
> to have a recognizable file system specifier, you probably can mount
> it. And you would use the value associated with 'UUID=', not
> 'PARTUUID=' or 'UUID_SUB=', the meaning of which I do not know.
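>
> As an illustrative sketch, taking the md0 entry from the blkid output
> above, the matching fstab line would be (the mount point here is made
> up):
>
>   UUID=708320b3-10af-4c15-b5b1-a9ff7be06d99  /home  ext4  defaults  0  2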
>
That's easy: PARTUUIDs are identifiers for the individual partitions
of a drive that has a partition table. And I believe for unpartitioned
drives they simply don't exist.
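
A quick way to see the two side by side is to ask blkid for one tag at
a time (a sketch; the device name is just an example):

  blkid -s UUID /dev/sda1      # filesystem UUID, written by mkfs
  blkid -s PARTUUID /dev/sda1  # partition UUID, from the partition table

And for the linux_raid_member lines above, the UUID is shared by every
member of one array while UUID_SUB identifies the individual member
device; mdadm --examine on a member reports them as "Array UUID" and
"Device UUID".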

> As supporting evidence I present the fstab from a GNU/Linux image that
> began life sometime earlier than 2002 on a dual Pentium Pro running
> Woody (3.0), was upgraded successively to Lenny (5.0), during which it
> was moved from 20 GiB disks to 120 GiB. I do not remember when I
> switched to UUID identifiers for the non-lvm devices, but someone else
> may recall when the contents of the /dev file system became dynamic
> and /dev/hda sometimes became /dev/hdb when both were present; it
> would be a bit later than that.
>
> # /etc/fstab: static file system information.
> #
> #<file system>	<mount>	<type>	<options>		<dump>	<pass>
> # /dev/hdb3	/	ext2	errors=remount-ro	0	1
> UUID=4b009940-2e38-4562-b135-b5e40b5f2546	/	ext2	errors=remount-ro	0	1
> # /dev/hda2	none	swap	sw			0	0
> UUID=b6f89718-730c-4d93-ba4d-8f001bcc300d	none	swap	sw			0	0
> # /dev/hdb2	none	swap	sw			0	0
> UUID=46cba318-7ecb-40f8-9815-2d2b5a3aff43	none	swap	sw			0	0
> proc		/proc	proc	defaults		0	0
> /dev/fd0	/floppy	auto	user,noauto		0	0
> /dev/cdrom	/cdrom	iso9660	ro,user,noauto		0	0
> # /dev/hdb1	/boot	ext2	defaults		0	2
> UUID=9d43af93-395f-4a6e-a65c-790efb755fac	/boot	ext2	defaults		0	2
> /dev/vg00/lvol0 /usr    jfs     defaults		0       2
> /dev/vg00/lvol1	/var	jfs	defaults		0	2
> /dev/vg00/lvol2	/tmp	jfs	defaults		0	2
> /dev/vg00/lvol3 /opt    jfs     defaults		0       2
> /dev/vg01/lvol1	/home	jfs	defaults		0	2
> #
> /dev/vg01/lvol0 /u01    jfs     defaults		0       2
> #
> /dev/vgmedia/lvol1 /backup jfs  noauto                  0       0
>
> (Apologies for the ugly line wrapping.)
>
> The system was retired for a while when superseded by a larger system
> built around a Q6600 four-core. I resurrected the disks a few months
> ago and copied them to new "disks" built from iSCSI exports from a NAS.
> The 120 GiB disks were copied using dd, and booted without significant
> issues in a VM, with all file systems mounted rw. The UUIDs, at least
> those used in the fstab, were copied without error, as Andy said.
>
> The (now virtual) system has since been upgraded successively to
> Jessie (Debian 8), and will, if things go well, be upgraded to
> Bullseye and beyond, after being moved again to larger disks. I expect
> the UUIDs will be copied correctly again.
>
> Device UUIDs are kind of ugly, but in my experience they are stable
> across both OS updates and physical movement of the file systems they
> contain, as long as what is copied is the disk partition. I do not
> think copying a file system (e.g., with cp) to a new (partitioned)
> disk would retain the block device UUID. My practice has been to copy
> raw partitions to a new disk, then expand the file system as
> appropriate.
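>
> A minimal sketch of that workflow for an ext2/3/4 partition (device
> names are illustrative, and dd will cheerfully destroy the wrong disk,
> so triple-check them):
>
>   dd if=/dev/sdX1 of=/dev/sdY1 bs=4M conv=fsync status=progress
>   e2fsck -f /dev/sdY1   # a clean fsck is required before resizing
>   resize2fs /dev/sdY1   # grow the fs into the larger partition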
>
> Best regards,
> Tom
>
> > there are UUIDs, PARTUUIDs and UUID_SUBs in the above blkid
> > output.
> >
> > Thanks Andy.
> >
> > Cheers, Gene Heskett.


Cheers, Gene Heskett.
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author, 1940)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page <http://geneslinuxbox.net:6309/gene>

