
Re: installing Debian 10 to 3 hdds as one big system



Hi Pavel,

On Sat, Aug 10, 2019 at 01:03:10PM +0200, Pavel Vlček wrote:
> I have computer with 3 hdds. One is ssd, 2 others are hdd. I want to install
> Debian 10 to all 3 disks as one big system. What to use, raid or lvm?

Personally I would use the three devices as a RAID-10, which would
give you half of the total capacity (768G) and let you withstand
the loss of any one device.

You could instead do RAID-5, but I do not like parity RAIDs. In
this case that would give you two devices' worth of capacity and
again any one device could fail, but it won't perform as well as
RAID-10.

Other options that include redundancy would be btrfs or zfs.

I would not do the redundancy in LVM.

If not using btrfs or zfs, I would use LVM afterwards on the RAID
device for management purposes.
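
For example (assuming the array came out as /dev/md0; the VG and LV
names here are just placeholders):

# Put LVM on top of the RAID device
# pvcreate /dev/md0
# vgcreate vg0 /dev/md0
# lvcreate -L 30G -n root vg0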

You can do all of this (except zfs) in the Debian installer.

There is an MD feature called "write-mostly" which you can set on
devices to tell the kernel that no reads should go to these devices
unless absolutely necessary. The usual use of this is in mixed
rotational and flash setups to try to encourage reads to come from
the much faster flash devices. This could be of real benefit to you
but sadly it doesn't work with anything except RAID-1.
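
On an existing RAID-1 you can flag a member write-mostly at runtime
through sysfs, or set the flag when adding it with mdadm (array and
member names here are just examples):

# Mark an existing member write-mostly
# echo writemostly > /sys/block/md0/md/dev-sda1/state

# Or set the flag while adding a new member
# mdadm /dev/md0 --add --write-mostly /dev/sdb1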

Other interesting approaches could be:

- RAID-1 of the rotational devices then use the SSD in lvmcache
  (rough sketch after this list). In writethrough mode it is safe
  to lose the (non-redundant) cache device:

    https://manpages.debian.org/buster/lvm2/lvmcache.7.en.html

- RAID-1 of the rotational devices then bcache on the SSD (also
  sketched below):

    https://bcache.evilpiepirate.org/
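
Rough sketches of both, assuming the RAID-1 is /dev/md0, the SSD is
/dev/sdc and the VG is called vg0 (all example names):

# lvmcache: add the SSD to the VG, make a cache pool on it, then
# attach it to an existing LV in writethrough mode
# pvcreate /dev/sdc
# vgextend vg0 /dev/sdc
# lvcreate --type cache-pool -L 100G -n cpool vg0 /dev/sdc
# lvconvert --type cache --cachepool vg0/cpool \
      --cachemode writethrough vg0/root

# bcache: format backing and cache devices, then attach the cache
# set (filesystems then go on /dev/bcache0, not /dev/md0)
# make-bcache -B /dev/md0
# make-bcache -C /dev/sdc
# echo <uuid-printed-by-make-bcache> > /sys/block/bcache0/bcache/attach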

Personally I don't find bcache mature enough, and while I did find
lvmcache to be safe, it didn't improve performance that much for
me; probably not enough to justify dedicating a third of my total
capacity to it.

If performance were my overriding concern I might actually do a
3-way RAID-1 with the two HDDs set to write-mostly. Only 512G of
capacity, but you can lose any two devices and reads stay fast.
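
A rough sketch of that, again assuming the SSD is /dev/sdc and the
HDDs are /dev/sda and /dev/sdb (partition names are examples):

# Devices listed after --write-mostly get the write-mostly flag
# mdadm --create /dev/md0 --level=1 --raid-devices=3 \
      /dev/sdc1 --write-mostly /dev/sda1 /dev/sdb1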

> I know, how to create the lvm with textual installer,
> but I have problem expanding it to next two hdds, /dev/sda and sdb

I would not do this, but…

# Mark the new device as an LVM PV
# pvcreate /dev/sda

# Extend your current volume group to use the new PV
# vgextend /dev/your_vg_name /dev/sda

At this point you have added the capacity of /dev/sda to the
existing volume group, but all your existing logical volumes still
reside on the original PV alone. You can now convert them to have
their extents mirrored:

# lvconvert -m1 /dev/your_vg_name/some_lv

The mirrored extents will be on /dev/sda because that is the only
PV with free extents. If you already added /dev/sdb then the extents
could be mirrored there instead. If you want to specify where the
mirrored extents should go then you can do so by appending the PV
device path to the above lvconvert command.
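
For example, using /dev/sdb as in the scenario above:

# Specify the PV for the mirrored extents explicitly
# lvconvert -m1 /dev/your_vg_name/some_lv /dev/sdb

# Check which PVs each LV's extents ended up on
# lvs -a -o +devices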

You can instead/also stripe, using the --stripes option to
lvconvert (or lvcreate, for new LVs). In this setup there would be
no redundancy though, which rules it out for me.
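
For example, a new non-redundant LV striped across all three PVs
might look like this (size and name are placeholders):

# Stripe a new LV across 3 PVs with a 64k stripe size
# lvcreate --stripes 3 --stripesize 64k -L 100G -n scratch your_vg_name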

Not doing anything special would leave your LVs allocated linearly
from whichever PV has free extents, which in this setup means no
redundancy and at best the performance of a single device, so that
would be the worst option.

Cheers,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting

