
Two questions on update-initramfs (on Debian 9 amd64 with systemd)



Dear everyone,

I'm playing with NFSroot and overlayfs, which appears to work fine.
Based on other people's work, I've put together a script
(see it attached, including credits of my key "inspirations")
that goes into

   /etc/initramfs-tools/scripts/init-bottom/

Obviously this gets embedded in the initial ramdisk if I run

   update-initramfs -u -k my.kernel.version

and the point is that the script then runs at an early stage of boot,
where it can insert an overlayfs layer while the NFS mountpoint
stays read-only the whole time (if so desired).
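For the curious, the general shape of what happens at init-bottom time is roughly this (paths and helper name are illustrative only; the attached overlay.sh is the real thing, and $rootmnt is the initramfs-tools variable holding the future root):

```shell
# Build the -o option string for an overlay whose read-only lower layer
# is the NFS root and whose writable layers live somewhere else, e.g. a
# tmpfs (all of this is a sketch, not the attached script verbatim).
overlay_opts() {
    printf 'lowerdir=%s,upperdir=%s,workdir=%s' "$1" "$2" "$3"
}

# At boot this would be followed by something along the lines of:
#   mount -t overlay overlay \
#       -o "$(overlay_opts "$rootmnt" /overlay/rw /overlay/work)" "$rootmnt"
```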
There are a number of tangential notes that are off topic here; I may 
eventually write a blog entry or something later on, when I'm happy 
with the result :-)

The NFS-booted system is a 64bit x86 Debian 9. The "source export" of 
the diskless NFS root lives in a subdirectory on a server (same 
distro). Say /var/NFSboot/stretch/. The NFS-booted Debian has been 
installed in the subdir using a pretty standard method based on 
debootstrap and some final touches in a chrooted shell.
No special sauce here.

I have two questions to ask:

Question #1:
It seems that my script, called overlay.sh, which I placed in
  /etc/initramfs-tools/scripts/init-bottom/
gets executed by update-initramfs *while building the initrd image*!
I.e., when I run update-initramfs in a chroot on my server, I end up 
with some erratic instances of overlayfs mounted on the server :-O
Does this have a rational explanation?
Apologies for being too lazy to read the guts of update-initramfs.
If you've read the script, you already know my workaround for this:
I test for the presence of an environment variable that is only 
present at build time, in which case the interesting 
overlayfs-related stuff gets skipped.
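For reference, the script follows the usual initramfs-tools boot-script skeleton (every boot script is expected to answer the "prereqs" argument and exit), with my guard bolted on after it. The variable I test is illustrative here; I just picked one that happened to be set only in the build environment:

```shell
#!/bin/sh
# Standard initramfs-tools boot-script header: a boot script must
# answer the "prereqs" argument (used to compute execution order)
# and exit without doing any real work.
PREREQ=""
prereqs() { echo "$PREREQ"; }
case "${1:-}" in
    prereqs)
        prereqs
        exit 0
        ;;
esac

# My extra belt-and-braces guard: bail out if we appear to be running
# in the image-build environment rather than at boot time (the variable
# name here is illustrative, not gospel).
at_build_time() { [ -n "${DESTDIR:-}" ]; }
if at_build_time; then
    exit 0
fi

# ... the overlayfs-related stuff follows here, at real boot time ...
```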

Question #2:
Anyone who's played with debootstrap, or even just cloned an 
installed system to a different disk, has probably reached a point
(based on numerous "Debian migration" howtos)
where you need to chroot into the target directory; but before it 
makes sense to take that plunge, you first need to bind-mount a 
handful of system directories served by the kernel via special 
filesystems. I have a tiny script to do the bind-mounts (and 
umounts!) for me, which goes like this:

   mount --rbind /proc "$DESTDIR/proc"
   mount --make-rslave "$DESTDIR/proc"
   mount --rbind /dev "$DESTDIR/dev"
   mount --make-rslave "$DESTDIR/dev"
   mount --rbind /sys "$DESTDIR/sys"
   mount --make-rslave "$DESTDIR/sys"
   mount --rbind /run "$DESTDIR/run"
   mount --make-rslave "$DESTDIR/run"

   chroot "$DESTDIR" /bin/bash

   umount -R "$DESTDIR/run"
   umount -R "$DESTDIR/sys"
   umount -R "$DESTDIR/dev"
   umount -R "$DESTDIR/proc"

The older howtos use just --bind (or the equivalent "-o bind").
There's a Debian wiki page explaining that the "rslave" option is now 
needed, since the arrival of systemd:
  https://wiki.debian.org/systemd#Shared_bind_mounts
The "rslave" option prevents an unmount of the bind target inside 
the chroot from propagating back to the original mountpoint on the 
host. And yes, I have learned the hard way why the rslave option is 
needed, on my server, when chrooting for update-initramfs in the 
NFSroot ;-)

So I now have the "rslave" option, and still, after I run 
update-initramfs, I get side effects: the trailing part of my 
chroot wrapper script cannot unmount some of the mountpoints 
because they are busy. The error message from umount suggests 
fuser and one other command to reveal the culprits, but in my case 
those didn't help...

=> so the second question really is: any clues as to what gets 
launched by update-initramfs that keeps some of the devices in the 
chroot directory open?
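Not an answer, but the way I poke at it: listing what is still mounted below the chroot target straight from the kernel's own mount table, deepest paths first, so each can be unmounted in a sane order (the helper name is made up by me):

```shell
# Print every mountpoint at or below the directory $1, deepest first,
# by scanning /proc/self/mounts (field 2 is the mountpoint).
list_submounts() {
    awk -v d="$1" '$2 == d || index($2, d "/") == 1 { print $2 }' \
        /proc/self/mounts | sort -r
}

# e.g.:  list_submounts "$DESTDIR" | xargs -r -n 1 umount
```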

It's not a major problem; at least with --make-rslave my host server 
no longer collapses when unmounting the bind-mounted slave 
directories. And instead of the server-side chroot, I can 
alternatively maintain the diskless distro by mounting the NFSroot 
"rw" on a single diskless station... 
(a direct RW mount of the NFS export, without an intermediate 
overlayfs layer).

Any clues to my questions would be welcome :-)

Frank Rysanek


   ---- File information -----------
     File:  overlay.sh
     Date:  2 Sep 2019, 23:21
     Size:  2019 bytes.
     Type:  Unknown

Attachment: overlay.sh
Description: Binary data

