
Re: Best practice for fresh install on UEFI with multiple disks?



Hello folks,

Just to follow up on this thread, here's how it played out:

-- I gave up on EFI and use plain BIOS boot
-- each drive has three partitions: a BIOS boot partition for
grub-install, an mdadm md0 for /boot, and a btrfs raid1c3 root partition
-- I ran grub-install on each device manually; I probably still need an
apt grub hook here (sketch below)
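
For the hook, here's an untested sketch of what I have in mind -- the
device names are just my four NVMe drives, and the script name is made
up, so adjust to taste:

#!/bin/sh
# /usr/local/sbin/grub-install-all -- reinstall GRUB to every member
# disk so the machine stays bootable if any one of them dies.
set -e
for dev in /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1; do
    grub-install "$dev"
done

That said, I gather grub-pc will already reinstall to every disk you
select in 'dpkg-reconfigure grub-pc', which may make a separate hook
unnecessary.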

This setup was tested almost immediately, since one of the NVMe drives
died very soon after the install, and I was able to boot off each of
the three remaining devices.
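
For the record, confirming the degraded state was straightforward;
something along these lines (exact output will vary):

mdadm --detail /dev/md0   # shows the /boot array with one member missing
btrfs device stats /      # per-device error counters for the raid1c3 root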

About the only thing I think could be different (besides the grub apt
hook) is that I could probably drop the mdadm /boot and keep /boot
directly on the btrfs root.
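
Before doing that, I'd want to check that GRUB can actually read the
filesystem, e.g.:

grub-probe --target=fs /boot   # should print 'btrfs'

and, if I recall correctly, the btrfs raid1c3/raid1c4 profiles need
GRUB 2.06 or newer.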

Either way, thanks to all the wiki contributors over the years, and cheers!

On Mon, Sep 30, 2024 at 10:14 AM Boyan Penkov <boyan.penkov@gmail.com> wrote:
>
> Hello folks,
>
> Thanks kindly -- and my apologies for picking this up after a while;
> fell sick here...
>
> A few questions:
>
> -- If I have multiple drives, do I modify the script to have multiple
> efi2, efi3, ..., efiX ?
>
> -- it seems that the script above privileges /boot/efi over /boot/efi2
> -- in this case, if /boot/efi becomes corrupted, won't this just copy
> the errors to /boot/efi2 and thus destroy it as well, on the next run?
>
> Cheers!
>
> On Fri, Sep 20, 2024 at 2:12 PM Tim Woodall <debianuser@woodall.me.uk> wrote:
> >
> > On Fri, 20 Sep 2024, Florent Rougon wrote:
> >
> > > Le 20/09/2024, Tim Woodall <debianuser@woodall.me.uk> a écrit:
> > >
> > >> Because the script will abort after the mount fails.
> > >>
> > >> root@dirac:~# cat test.sh
> > >> #!/bin/bash
> > >>
> > >> set -e
> > >>
> > >> mount /boot/efi2
> > >>
> > >> echo "do important stuff"
> > >>
> > >> root@dirac:~# ./test.sh
> > >> mount: /boot/efi2: /dev/sda2 already mounted on /boot/efi2.
> > >>        dmesg(1) may have more information after failed mount system call.
> > >>
> > >>
> > >> Note that "do important stuff" is never reached.
> > >
> > > That's interesting because my system doesn't behave the same. I had of
> > > course checked, before writing my first message, that 'mount /boot/efi2'
> > > returns exit status 0 even when /boot/efi2 is already mounted. With your
> > > script (called foo.sh here), here is what I get:
> > >
> > > # mount | grep efi2
> > > /dev/sda1 on /boot/efi2 type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro)
> > > # /tmp/foo.sh
> > > do important stuff
> > > # mount | grep efi2
> > > /dev/sda1 on /boot/efi2 type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro)
> > > /dev/sda1 on /boot/efi2 type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro)
> > > #
> > >
> > > Every invocation adds a new, duplicate entry in the output of 'mount'.
> > >
> > > This is Debian sid amd64; /usr/bin/mount is from 'mount' package version
> > > 2.40.2-8.
> > >
> >
> > That's very interesting and looks like it's probably a kernel change.
> >
> > Tim.
>
>
>
> --
> Boyan Penkov



-- 
Boyan Penkov

