
Re: defining deduplication (was: Re: deduplicating file systems: VDO with Debian?)



On Wed, 2022-11-09 at 11:05 +0100, didier gaumet wrote:
> On 09/11/2022 at 10:27, hw wrote:
> [...]
> > Yes, I've seen those.  I can only wonder how much performance impact VDO
> > would
> > have for backups.  And I wonder why it doesn't require as much memory as ZFS
> > seems to need for deduplication.
> 
> It's *only* a hypothesis, but I would suppose that ZFS was designed 
> (originally by Sun, a hardware vendor) primarily with performance in 
> mind,

I don't think it was, see https://docs.freebsd.org/en/books/handbook/zfs/

It does mention performance, but I remember other statements saying that ZFS was
designed for arrays with 40+ disks and, besides data integrity, with ease of use
in mind.  Performance doesn't seem paramount.  Also see
https://wiki.gentoo.org/wiki/ZFS

>  at the expense of heavy hardware requirements, while RedHat (primarily 
> a software vendor before its acquisition by IBM) designed VDO more with 
> TCO and integration into customers' existing infrastructure in mind, 
> at the expense of raw performance.

Well, the question is what you mean by performance.  Maybe ZFS can deduplicate
faster than VDO, but eating tons of RAM and/or having to replace all the
hardware may not be the kind of performance one is looking for.
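
To put a rough number on the RAM side, here is a back-of-envelope sketch in
Python.  The ~320 bytes per dedup table (DDT) entry and the 128 KiB default
recordsize are figures commonly cited for ZFS, not something from this thread;
actual usage depends on recordsize, pool layout, and how much data really
deduplicates.

# Rough estimate of ZFS dedup table (DDT) RAM usage.
# Assumption: ~320 bytes per unique block, a commonly cited figure;
# real numbers vary with recordsize and pool layout.

def zfs_ddt_ram_gib(unique_data_tib, recordsize_kib=128, bytes_per_entry=320):
    """Estimate DDT RAM in GiB for a given amount of unique data."""
    unique_bytes = unique_data_tib * 1024**4
    blocks = unique_bytes / (recordsize_kib * 1024)
    return blocks * bytes_per_entry / 1024**3

if __name__ == "__main__":
    # Example: 10 TiB of unique data at the default 128 KiB recordsize
    # works out to roughly 25 GiB of RAM just for the dedup table.
    print(f"{zfs_ddt_ram_gib(10):.1f} GiB")

With numbers like that, a backup server holding tens of TiB quickly needs more
RAM than many people have in the box, which is exactly the cost question above.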

