
Re: Disks renamed after update to 'testing'...?



David Christensen <dpchrist@holgerdanske.com> writes:

> Thanks for the explanation.  It seems that pvcreate(8) places an LVM
> disk label and an LVM metadata area onto disks or partitions when
> creating a PV; including a unique UUID:
> 
> https://www.man7.org/linux/man-pages/man8/pvcreate.8.html

Yes, correct.  You can see the UUID with pvdisplay(8) or blkid(8):

# pvdisplay /dev/md0
  --- Physical volume ---
  PV Name               /dev/md0
  VG Name               vg0
  PV Size               1.82 TiB / not usable 3.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              476899
  Free PE               96653
  Allocated PE          380246
  PV UUID               uFHSzs-QpCa-GVIX-LKRZ-rIRV-KgfE-taQXQV
   
# blkid /dev/md0
/dev/md0: UUID="uFHSzs-QpCa-GVIX-LKRZ-rIRV-KgfE-taQXQV" TYPE="LVM2_member"

> When using a drive as backup media, are there likely use-cases that
> benefit from configuring the drive with no partition, a single PV,
> single VG, single LV, and single filesystem vs. configuring the drive
> with a single partition, single UUID fstab entry, and single
> filesystem?

You can use a partition or the whole disk for a physical volume, as
you can for a file system.  That is, you can

        mkfs /dev/sda    or    mkfs /dev/sda1

and likewise with LVM you can

        pvcreate /dev/sda    or    pvcreate /dev/sda1
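
To make your first option concrete, the single PV / VG / LV /
filesystem stack might be built like this (a sketch only -- the device
name /dev/sdb1, the VG and LV names, and the choice of ext4 are my
assumptions, not anything from your setup; run as root):

```shell
pvcreate /dev/sdb1                       # LVM label + metadata on the partition
vgcreate vgbackup /dev/sdb1              # one VG containing the single PV
lvcreate -n backup -l 100%FREE vgbackup  # one LV covering the whole VG
mkfs.ext4 /dev/vgbackup/backup           # one filesystem on the LV
```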

Long ago I actually created PVs on whole disks, with no partition
table and therefore no partitions, on many of my drives.  Today I
prefer having a partition table with a single partition covering the
whole disk.  The partition table entry includes a type, so there is
less guessing about what the disk contains:

# fdisk -l /dev/sda | grep /dev
Disk /dev/sda: 1.8 TiB, 2000397852160 bytes, 3907027055 sectors
/dev/sda1        2048 3907026943 3907024896  1.8T fd Linux raid autodetect
# fdisk -l /dev/sdf | grep /dev
Disk /dev/sdf: 3.7 TiB, 4000787030016 bytes, 976754646 sectors
/dev/sdf1         256 976754645 976754390  3.7T 8e Linux LVM
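
One non-interactive way to set up such a table (an assumption on my
part: a blank disk /dev/sdb and a reasonably recent util-linux sfdisk;
run as root) is to feed sfdisk a one-line script -- with no start or
size given it creates a single partition covering the whole disk, here
typed as "Linux LVM" via its GPT type GUID:

```shell
echo 'type=E6D6D379-F507-44C2-A23C-238F2A3DF928' | sfdisk --label gpt /dev/sdb
```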

If you then put a single LV into the VG, covering the whole VG, you
don't benefit much from LVM's functionality, except that you can
easily change the allocation later if you decide to; re-partitioning
is more complicated.  But even then you get nice, stable device
names.  You could even add or remove drives in the volume group to
extend it, spreading logical volumes across the drives, and still no
LV name would change.
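
The add-a-drive case might look like this (all device, VG, and LV
names here are assumptions for illustration, and the filesystem is
assumed to be ext4; run as root):

```shell
pvcreate /dev/sdc1                 # prepare the new drive as a PV
vgextend vg0 /dev/sdc1             # the VG grows; all LV names stay the same
lvextend -L +500G /dev/vg0/home    # hand some of the new space to one LV
resize2fs /dev/vg0/home            # grow the ext4 filesystem to match
```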

I like having nice device names like /dev/vg0/root, /dev/vg0/usr,
/dev/vg0/var, /dev/vg0/home, /dev/vg0/swap, and /dev/vg0/<host> for
each of my (currently 4) virtual machines.  And I use it a lot,
because it is so easy to add/delete/change:

# ls -l /dev/mapper | wc -l
27

For example if I want to test something with btrfs, I can run

        lvcreate -n btrfs-test -L 4G vg0

and I have a /dev/vg0/btrfs-test to work with.  No re-partitioning, no
problem with re-reading partition tables which are in use, etc.
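
And when the experiment is over, getting rid of it is just as quick
(assuming the LV is no longer mounted; run as root):

```shell
lvremove vg0/btrfs-test    # asks for confirmation; add -y to skip the prompt
```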

urs
