
Re: else or Debian (Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?))



On Wed, 2022-11-09 at 19:17 +0100, Linux-Fan wrote:
> hw writes:
> 
> > On Wed, 2022-11-09 at 14:29 +0100, didier gaumet wrote:
> > > Le 09/11/2022 à 12:41, hw a écrit :
> 
> [...]
> 
> > > I am really not so well aware of ZFS state but my impression was that:
> > > - FUSE implementation of ZoL (ZFS on Linux) is deprecated and that,
> > > Ubuntu excepted (classic module?), ZFS is now integrated by a DKMS module
> > 
> > Hm that could be.  Debian doesn't seem to have it as a module.
> 
> As already mentioned by others, zfs-dkms is readily available in the contrib  
> section along with zfsutils-linux. Here is what I noted down back when I  
> installed it:
> 
> https://masysma.net/37/zfs_commands_shortref.xhtml

Thanks, that's good information.
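So on Debian that would boil down to something like this, I suppose (contrib
has to be enabled in sources.list first, and the headers package name depends
on the kernel flavour):

apt install linux-headers-amd64
apt install zfs-dkms zfsutils-linux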

> I have been using ZFS on Linux on Debian since end of 2020 without any  
> issues. In fact, the dkms-based approach has run much more reliably than  
> my previous experiences with out-of-tree modules would have suggested...

Hm, issues?  I have one:


namefoo@host /srv/datadir $ ls -la
total 5
drwxr-xr-x  3 namefoo namefoo    3 Aug 16 22:36 .
drwxr-xr-x 24 root    root    4096 Nov  1  2017 ..
drwxr-xr-x  2 namefoo namefoo    2 Jan 21  2020 ?
namefoo@host /srv/datadir $ ls -la '?'
ls: cannot access '?': No such file or directory
namefoo@host /srv/datadir $ 


This directory named ? appeared on a ZFS volume for no apparent reason, and I
can neither access it nor delete it.  A scrub doesn't repair it.  It doesn't
seem to do any harm yet, but it's annoying.
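
Maybe the ? is just ls masking an unprintable byte in the name.  Something
like the following might reveal the real name and let me remove the entry by
its inode number; a sketch I haven't tried yet, and 123456 is a placeholder:

# GNU ls: print C-style escapes instead of ?, and show inode numbers
ls -lab
ls -lai
# once the inode is known, delete the entry by inode
find . -maxdepth 1 -inum 123456 -xdev -delete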

Any idea how to fix that?

> Nvidia drivers have been working for me in all releases from Debian 6 to 10
> both  
> inclusive. I did not have any need for them on Debian 11 yet, since I have  
> switched to an AMD card for my most recent system.
> 

Maybe it was longer ago.  I recently switched to AMD, too.  NVIDIA remains
uncooperative and their drivers are a hassle, so why would I support NVIDIA by
buying their products?  Switching was a good choice, and the AMD card just
works out of the box.

I can't get the 2nd monitor to work, but that's probably not an AMD issue.

> > However, Debian has apparently bad ZFS support (apparently still only Gentoo
> > actually supports it), so I'd go with btrfs.  Now that's gona suck because
> 
> You can use ZFS on Debian (see link above). Of course it remains your choice  
> whether you want to trust your data to the older, but less-well-integrated  
> technology (ZFS) or to the newer, but more easily integrated technology  
> (BTRFS).
> 
> 

It's fine when using the kernel module.  This isn't a question of newer vs.
older anyway, and ZFS seems more mature than btrfs.  Somehow, development of
btrfs is excruciatingly slow.

If it doesn't work out, I can always do something else and make a new backup.

> > I'd
> > have to use mdadm to create a RAID5 (or use the hardware RAID but that isn't
> 
> AFAIK BTRFS also includes some integrated RAID support such that you do not  
> necessarily need to pair it with mdadm.

Yes, but RAID56 is broken in btrfs.

>  It is advised against using for RAID  
> 5 or 6 even in most recent Linux kernels, though:
> 
> https://btrfs.readthedocs.io/en/latest/btrfs-man5.html#raid56-status-and-recommended-practices
> 

Yes, that's why I would have to put btrfs on top of mdadm if I want to make a
RAID5.  That kinda sucks.
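
For the record, the layering I have in mind would look roughly like this
(just a sketch, the device names are made up):

mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]
mkfs.btrfs -L data /dev/md0
mount /dev/md0 /srv/datadir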

> RAID 5 and 6 have their own issues you should be aware of even when running  
> them with the time-proven and reliable mdadm stack. You can find a lot of  
> interesting results by searching for “RAID5 considered harmful” online. This  
> one is the classic that does not seem to make it to the top results, though:

Hm, really?  The only time that RAID5 gave me trouble was when the hardware RAID
controller steadfastly refused to rebuild the array after a failed disk was
replaced.  How often does that happen?

So yes, there are people saying that RAID5 is so bad, and I think it's
exaggerated.  At the end of the day, for all I know lightning could strike the
server and burn out all the disks, and no alternative to RAID5 could prevent
that.  So all variants of RAID are bad, ZFS and btrfs and whatever are all
just as bad, and any way of storing data is bad because something could happen
to the data.  Gathering data is actually bad to begin with and getting worse
all the time.  The less data you have, the better, because less data is less
unwieldy.

There is a write hole with RAID5?  Well, I have a UPS and the controllers have
backup batteries.  So is there really gonna be a write hole?  When I use
mdadm, I don't have a backup battery.  Then what?  Do JBOD controllers have
backup batteries, or are you forced to use file systems that make them
unnecessary?  Bits can flip, and maybe whatever controls the RAID may not be
able to tell which copy is the one to use.  The checksums ZFS and btrfs use
may be insufficient, and then what?  ZFS and btrfs may not be a good idea to
use because the software, like CentOS 7, is too old and prefers xfs instead.
Now what?  Rebuild the server like every year or so to use the latest and
greatest?  Oh no, the latest and greatest may be unstable ...
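
That said, mdadm can apparently close the write hole these days with a
dedicated journal device; roughly like this (untested here, device names
made up):

mdadm --create /dev/md0 --level=5 --raid-devices=3 \
      --write-journal /dev/nvme0n1p1 /dev/sd[bcd]1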

More than one disk can fail?  Sure can, and it's one of the reasons why I make
backups.

You also have to consider costs.  How much do you want to spend on storage and
on backups?  And do you want to make yourself crazy worrying about your data?

> https://www.baarf.dk/BAARF/RAID5_versus_RAID10.txt
> 
> If you want to go with mdadm (irrespective of RAID level), you might also  
> consider running ext4 and trade the complexity and features of the advanced  
> file systems for a good combination of stability and support.
> 

Is anyone still using ext4?  I'm not saying it's bad or anything; it just
seems to have gone out of fashion.

I'm considering using snapshots.  Ext4 didn't have those last time I checked.
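
With ZFS or btrfs a snapshot is a one-liner; roughly like this (the pool and
path names are made up):

zfs snapshot tank/data@2022-11-09
btrfs subvolume snapshot -r /srv/datadir /srv/snapshots/datadir-2022-11-09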

