
Re: does btrfs have a future? (was: feature)



On Wed, Aug 15, 2018 at 02:50:18PM +0000, Matthew Crews wrote:
> On 8/15/18 2:25 AM, Stefan K wrote:
> > Did you think that "only" the RAID5/6 problem is the reason why
> > btrfs is not so common? what is with the performance? and some
> > (important) featrures (not futures ;) ) are missing to catch up
> > ZFS.
> > 
> > best regards
> > Stefan
> > (sorry for my bad english)
> > 
> 
> Your English is fine. Not perfect (no one ever is), but I know plenty of
> native speakers who speak it worse than you.
> 
> In my opinion btrfs has a bad rap partially because of the RAID5/6
> situation, but also because for a long time it was marked as
> experimental, and there are some situations where data loss has occurred
> (I'm guessing because of RAID5/6). But as long as you avoid RAID5/6 and
> stick to RAID1/10, you should be fine.

Very important to know - that makes BTRFS RAID5/6 a complete
non-starter for me.  And to be clear, ZFS does in fact properly
handle the RAID write hole problem:
  https://serverfault.com/questions/844791/write-hole-which-raid-levels-are-affected
  http://www.raid-recovery-guide.com/raid5-write-hole.aspx
  http://www.enterprisenetworkingplanet.com/linux_unix/article.php/3842741/10-Reasons-You-Need-to-Look-at-ZFS.htm
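
For concreteness, a minimal sketch of the layouts being discussed
(the device names and the pool name "tank" are hypothetical, not
taken from this thread):

  # btrfs: mirror both data and metadata (the RAID1 advice above)
  mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc

  # ZFS: single-parity RAID-Z; full-stripe copy-on-write writes are
  # how ZFS sidesteps the RAID5 write hole
  zpool create tank raidz1 /dev/sdd /dev/sde /dev/sdf /dev/sdg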


> Ignoring Raid5/6 and similar, I don't know what features btrfs is
> lacking that make ZFS more attractive. Btrfs does have *nice* features
> that ZFS currently lacks, like adding and removing disks to the array
> on-the-fly and intelligent data balancing while the array is mounted.
> 
> Btrfs's killer feature, imo, is its Copy-On-Write features, which you
> can read about on the Arch Wiki:
> 
> https://wiki.archlinux.org/index.php/Btrfs#Copy-on-Write_.28CoW.29
> 
> Btrfs also corrects read errors on-the-fly, something ZFS doesn't do,
> but only if you are using a RAID with some level of redundancy.

This was presumed to be the case for ZFS on Linux as well, and a bug
was filed; the investigation and resolution showed that ZFS/ZoL does
in fact auto-repair, and that the status output which made it look
like there was a problem was itself the actual problem (just the
reporting) - ZFS does not in fact have the problem. See here:

  Reallocate on read error #1256
  https://github.com/zfsonlinux/zfs/issues/1256

  “TL;DR Your data is safe. It is getting re-written on every
  checksum and read error. ZFS just doesn't report the errors if it
  fixed them without issue.”


Note also that the ZoL issue is now listed as "Feature, No one
assigned", but that reflects a change of scope: the remaining work is
to improve ZFS/ZoL's error reporting so it properly lets folks know
when ZoL DID do its auto-correction after a read error:

   “I think it's up for debate whether these self-healing IOs should
   be included in the `zpool status` read/checksum counts.”
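
If you want to exercise that repair path deliberately, a scrub reads
and verifies every block and rewrites anything repairable from
redundancy; per the issue above, self-healed read errors may not show
up in the status counters (the pool name "tank" is hypothetical):

   zpool scrub tank       # read + verify every block, repair from redundancy
   zpool status -v tank   # scrub progress and per-device error counts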


Enjoy your rock-solid ZoL experience; just, as a general rule, rather
than using ZFS dedup, use any (or any combination) of the following
(example commands below):

 - ZFS compression
 - ZFS snapshots (like Git branches)
 - ZFS bookmarks (very lightweight, analogous to Git tags)

(Unless you know what you're doing and have the hardware,
particularly RAM, to properly satisfy ZFS dedup table (DDT) needs.)
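
For reference, a minimal sketch of those three features from the
command line; the dataset name "tank/data" and the snapshot/bookmark
names are hypothetical:

  # transparent compression on a dataset
  zfs set compression=lz4 tank/data

  # point-in-time snapshot (read-only)
  zfs snapshot tank/data@before-upgrade

  # lightweight bookmark of that snapshot; it can still serve as the
  # source of an incremental send after the snapshot is destroyed
  zfs bookmark tank/data@before-upgrade tank/data#before-upgrade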

