
Re: Raid 1



Andy Smith writes:

Hi Pankaj,

Not wishing to put words in Linux-Fan's mouth, but my own views
are…

On Mon, Jan 25, 2021 at 11:04:09AM +0530, Pankaj Jangid wrote:
> Linux-Fan <Ma_Sys.ma@web.de> writes:
>
> > * OS data bitrot is not covered, but OS single HDD failure is.
> >   I achieve this by having OS and Swap on MDADM RAID 1
> >   i.e. mirrored but without ZFS.
>
> I am still learning.
>
> 1. By "by having OS and Swap on MDADM", did you mean the /boot partition
>    and swap.

When people say "I put OS and Swap on MDADM", they typically mean
the entire installed system before user/service data is put on it.
So that's / and all its usual sub-directories, and swap, possibly
with things later split off after install.

Yes, that is exactly how I meant it :)

My current setup has two disks each partitioned as follows:

* first   partition ESP          for /boot/efi (does not support RAID)
* second  partition MDADM RAID 1 for / (including /boot and /home)
* third   partition MDADM RAID 1 for swap
* fourth  partition ZFS mirror   for virtual machines and containers
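
For illustration, here is a rough Python sketch of how such a layout
could be assembled after partitioning. The device names (/dev/sda*,
/dev/sdb*), the md numbers and the pool name "vmpool" are made-up
examples and not necessarily what I actually used; treat it as a
sketch rather than a recipe, and only run something like it as root
against empty partitions:

#!/usr/bin/env python3
# Sketch only: create the RAID 1 arrays and the ZFS mirror described
# above. Device names, md numbers and the pool name are hypothetical.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Second partitions -> MDADM RAID 1 for / (including /boot and /home)
run(["mdadm", "--create", "/dev/md0", "--level=1", "--raid-devices=2",
     "/dev/sda2", "/dev/sdb2"])

# Third partitions -> MDADM RAID 1 for swap
run(["mdadm", "--create", "/dev/md1", "--level=1", "--raid-devices=2",
     "/dev/sda3", "/dev/sdb3"])
run(["mkswap", "/dev/md1"])

# Fourth partitions -> ZFS mirror for virtual machines and containers
run(["zpool", "create", "-m", "none", "vmpool", "mirror",
     "/dev/sda4", "/dev/sdb4"])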

Some may like to have /home separately. I personally prefer to store
all my user-created data outside of the /home tree, because many
programs use /home for automatically generated cache and
configuration files which should (IMHO) not be mixed with what I
consider important data.

> 2. Why did you put Swap on RAID? What is the advantage?

If swap is in use and the device behind it goes away, your system
will likely crash.

The point of RAID is to increase availability. If you have the OS
itself in RAID and you have swap, the swap should be in RAID too.

That was exactly my reasoning, too. I can add that I did not use a
ZFS volume for swap mostly because of
https://github.com/openzfs/zfs/issues/7734
and I did not use ZFS for the OS (/, /boot, /home) mainly because I
wanted to avoid ending up with a non-booting system in case the ZFS
module's DKMS build fails. The added benefit was a less complex
installation procedure, i.e. the Debian installer could be used and
all the ZFS setup could be done from the installed and running
system.
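
As a quick sanity check that swap really sits on an MD array, a small
Python sketch like the following could be used. It only reads the
standard /proc/swaps and /proc/mdstat files; the device-name matching
is deliberately simplistic:

#!/usr/bin/env python3
# Sketch: report whether each active swap device is backed by an MD
# array. Reads only /proc/swaps and /proc/mdstat.

def active_swap_devices():
    with open("/proc/swaps") as f:
        lines = f.read().splitlines()[1:]  # skip the header line
    return [line.split()[0] for line in lines if line.strip()]

try:
    with open("/proc/mdstat") as f:
        mdstat = f.read()
except FileNotFoundError:
    mdstat = ""  # md driver not loaded at all

for dev in active_swap_devices():
    name = dev.rsplit("/", 1)[-1]  # e.g. "md1"
    if name.startswith("md") and name in mdstat:
        print(dev, "is backed by an MD array (state in /proc/mdstat)")
    else:
        print(dev, "is NOT on an MD array; losing its disk may crash",
              "the system")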

I would advise against replicating my setup for first-time RAID
users, because restoring after a failed disk requires invoking the
respective recovery procedures of both technologies.

There are use cases where the software itself provides the
availability. For example, there is Ceph, which typically uses
simple block devices from multiple hosts and distributes the data
around.

Yes.

[...]

> How do you decide which partition to cover and which not?

For each of the storage devices in your system, ask yourself:

- Would your system still run if that device suddenly went away?

- Would your application(s) still run if that device suddenly went
  away?

- Could finding a replacement device and restoring your data from
  backups be done in a time span that you consider reasonable?

If the answers to those questions are not something you can
tolerate, add some redundancy in order to reduce unavailability. If
you decide you can tolerate the possible unavailability, then so be
it.
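
As a starting point for those questions, a small Python sketch like
the one below lists which block device backs each mounted filesystem
(it only reads /proc/mounts and deliberately ignores swap and the
internals of stacked devices such as LVM or MD):

#!/usr/bin/env python3
# Sketch: print mountpoint, filesystem type and backing device for
# every mounted filesystem that lives on a block device.

with open("/proc/mounts") as f:
    for line in f:
        source, mountpoint, fstype = line.split()[:3]
        if source.startswith("/dev/"):  # skip proc, tmpfs, sysfs, ...
            print(f"{mountpoint:25s} {fstype:8s} on {source}")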

[...]

My rule of thumb: RAID 1 whenever possible, i.e. on all actively
relied-upon computers that are not laptops or other special form
factors with tightly limited HDD/SSD options.

The replacement drive considerations are important for RAID setups,
too. I used to keep a "cold spare" HDD, but given the rate at which
the capacity/price ratio rises, keeping that scheme came to seem
overly cautious/expensive.

HTH
Linux-Fan

öö


