
zfs data partition, crypt loop mounts and newbie tutorials -- was Re: Suggestion for systemd and /usr on separate partition



Since I have to re-send this anyway, here's the laptop setup I have been
running for some time:

 - there is one internal drive, ~750 GiB

 - Root partition, ~30 GiB, Debian default Ext4

 - (there's also a default-sized EFI partition, maybe ~1 GiB from memory)

 - the remainder, ~700 GiB, is a single data partition, which is
   assigned to a single-partition zpool (ZFS disk pool)

 - inside the ZFS "internal" data pool, I create a number of ZFS
   "filesystems", see tutorial below

 - one of these holds various crypt volumes (virtual/loop mounted FSes)

 - inside each crypt vol is another, nested, ZFS filesystem - snapshots
   are just SO nice, I could not resist this...
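
Roughly, the commands involved look like this.  It's a minimal sketch
only - the pool, dataset and device names (internal, /dev/sda3,
vault.img, etc.) are made up for illustration, and the real steps, with
the options I actually use, are in the tutorials linked below:

   # single-partition pool on the ~700 GiB data partition
   # (the device name /dev/sda3 is illustrative only)
   zpool create internal /dev/sda3

   # a few ZFS "filesystems" (datasets) inside the pool
   zfs create internal/home
   zfs create internal/crypt

   # one file-backed crypt volume inside the crypt dataset,
   # loop-mounted via cryptsetup, with a nested pool inside it
   truncate -s 20G /internal/crypt/vault.img
   losetup /dev/loop7 /internal/crypt/vault.img
   cryptsetup luksFormat /dev/loop7
   cryptsetup open /dev/loop7 vault
   zpool create vaultpool /dev/mapper/vault
   zfs create vaultpool/secrets   # snapshot-able like any other dataset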

There's an earlier tute, with example code (a bash script) and commands
for setting up this whole combination; it's a little more technical than
the tutorials below, which are designed for absolute ZFS beginners.  See
here:

   https://github.com/zenaan/quick-fixes-ftfw/tree/master/zfs

(If new to ZFS, perhaps read the zfs tutorial below first though.)

If I had a second internal drive, this is how I would use it:

 - as a single, full-drive ZFS pool

 - inside would be at least a "primary user" filesystem,

 - as well as a " 'primary drive data partition' backup filisystem", to
   which I would make regular backups of my primary drive data partition
   (in my case, my primary data backups are made to an external USB drive,
   but same diff...)
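
A rough sketch of that second-drive idea - the pool name "second", the
device /dev/sdb, and the existing pool name "internal" are all
hypothetical, and the snapshot/send/receive pair is the usual ZFS way of
doing those regular backups:

   # whole second drive as a single pool (names are illustrative)
   zpool create second /dev/sdb

   zfs create second/user              # the "primary user" filesystem
   zfs create second/internal-backup   # backup target for the primary data pool

   # a regular backup: snapshot the primary pool's dataset, then send it over
   zfs snapshot internal/home@2020-07-09
   zfs send internal/home@2020-07-09 | zfs receive -F second/internal-backup/home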

Good luck,



----- Forwarded message from Zenaan Harkness <zen@freedbms.net> -----

From: Zenaan Harkness <zen@freedbms.net>
To: debian-user@lists.debian.org
Date: Thu, 9 Jul 2020 19:00:06 +1000
Subject: Re: Suggestion for systemd and /usr on seperate partition

On Thu, Jul 09, 2020 at 10:56:26AM +0300, Andrei POPESCU wrote:
> On Mi, 08 iul 20, 10:20:45, tomas@tuxteam.de wrote:
> > On Wed, Jul 08, 2020 at 08:35:35AM +0300, Andrei POPESCU wrote:
> > 
> > [...]
> > 
> > > I was under the impression that LVM is used in particular for its 
> > > flexibility in adjusting your partitions.
> > 
> > But it won't make disappear a separate /usr partition "by magic".
> > 
> > > What prevents you from merging '/' and '/usr'?
> > 
> > This thread is talking about upgrades. Do you suggest an upgrade
> > copying the contents of the /usr partition over to the / partition
> > and dropping the separate /usr (perhaps recovering the space somehow)?
>  
> Or the other way around ('/usr' could be bigger than '/'). 
> 
> > Sounds pretty risky.
> 
> Sure. On the other hand, what is the point of using LVM if one is not 
> going to use it to adjust partitions when required?


A very good question.  I have been under the impression that expanding
is what LVM makes easy - for example by adding an extra drive as a
physical volume, extending the volume group, and then growing the
appropriate logical volume - and that shrinking volumes has always been
lower in priority: possibly do-able if you get the incantations
precisely right, but back everything up first as it's definitely not
guaranteed.

Expanding a full existing LVM volume is a useful progression of the
"easy and supported" status quo, to be sure.
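
For reference, the "easy" direction looks something like the following.
This is a hedged sketch only - the volume group and logical volume names
(vg0, home) are made up, and it assumes an ext4 filesystem on the LV:

   # add a new drive (or partition) to the existing volume group
   pvcreate /dev/sdb1
   vgextend vg0 /dev/sdb1

   # grow the logical volume, then the filesystem sitting on it
   lvextend -L +100G /dev/vg0/home
   resize2fs /dev/vg0/home    # ext4 grows online; shrinking is the risky direction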

But for maximum flexibility, something like ZFS is required, where
pools (volumes), disks (and partitions), snapshots and so on are all
primary "objects", so to speak, each integrated at the filesystem level.
Even in ZFS, though, the "default supported" reshaping is limited to
removing and/or replacing mirror or RAID drives and/or partitions
(adding and replacing mirrors, I can attest, is really trivial to do).
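
For example, turning a single-disk pool into a mirror, or swapping out a
failing disk, is a one-liner each - pool and device names below are
illustrative only:

   # attach a second device to make a two-way mirror
   zpool attach internal /dev/sda3 /dev/sdb3

   # replace a failing device in place (ZFS resilvers automatically)
   zpool replace internal /dev/sdb3 /dev/sdc3

   # watch the resilver/mirror state
   zpool status internal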

Due to license incompatibility though (ZFS's CDDL vs the kernel's GPL),
ZFS is not distributable as pre-built Linux kernel modules, so until
someone convinces Oracle to relicense it, ZFS won't be in default
initramfs images.
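
On Debian the practical consequence is building the module yourself via
DKMS; the packages live in the contrib section (package names below are
from memory, so double-check them):

   # with contrib enabled in sources.list:
   apt install linux-headers-amd64 zfs-dkms zfsutils-linux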

Having used ZFS on all data partitions and drives for about a year now,
I consider this the end game - it is beyond nice to use, it's off the
charts-ski ... seriously: snapshots (and clones), backups, adding a
mirror drive "just because", scrubs, and every sector checksummed.
Nothing compares, except BTRFS, which still has a ways to go on the
deployment-stability front.
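
The scrub-and-checksum combination is a big part of that: a scrub
re-reads every block and verifies it against its checksum, and any
damage shows up in the status output (pool name illustrative):

   zpool scrub internal       # verify every block against its checksum
   zpool status -v internal   # scrub progress, plus any READ/WRITE/CKSUM errors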

Very recently I wrote 3 short tutorials for those who might otherwise be timid in dipping their dainty toes in the clear crispy waters of ZFS :)

   https://github.com/zenaan/quick-fixes-ftfw/blob/master/zfs/zfs.md

If you use ultra-cheap USB controllers that consistently fall over (I
have 4 which are now retired), you'll find they cannot cope with
single-session transfers of more than around ~580 MiB.  I've yet to
update that step in the backup tutorial, but a better USB-SATA adapter
solved the problem, and that particular data pool is now a humming
mirror of two USB-attached drives; my other data drive is about to get
the same mirror treatment.  Knowing that such issues are put squarely
"in your face" by ZFS is such a relief - it's like looking out over the
plains from a mountain top.

----- End forwarded message -----

