
Re: Disks renamed after update to 'testing'...?



On 2020-08-20 08:32, rhkramer@gmail.com wrote:
On Thursday, August 20, 2020 03:43:55 AM tomas@tuxteam.de wrote:
Contrary to the other (very valid) points, my backups are always on
a LUKS drive, no partition table. Rationale is, should I lose it, the
less visible information the better. Best if it looks like a broken
USB stick. No partition table looks (nearly) broken :-)

I always use a partition table, to reduce the chance of confusing myself. ;-)


I have two questions:

    * I suppose that means you create the LUKS drive on, e.g., /dev/sdc rather
than, for example, /dev/sdc<n>?  (I suppose that should be easy to do.)

    * But, I'm wondering, how much bit rot would it take to make the entire
backup unusable, and what kind of precautions do you take (or could be taken)
to avoid that?

I have been pondering bit-rot mitigation on non-checksumming filesystems.
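
One mitigation I have been sketching is a plain checksum manifest kept with (or better, apart from) the backup: hash every file when the backup is written, re-hash on a schedule, and flag anything that changed without being touched. A rough Python sketch is below; the paths ("/mnt/backup", "manifest.sha256") are only placeholders for whatever your layout actually is:

    #!/usr/bin/env python3
    # Rough sketch: build or verify a SHA-256 manifest for a backup tree.
    # ROOT and MANIFEST are example paths (adjust for your setup).
    import hashlib
    import os
    import sys

    ROOT = "/mnt/backup"          # backup mount point (example)
    MANIFEST = "manifest.sha256"  # keep a copy somewhere off the backup drive

    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def build():
        with open(MANIFEST, "w") as out:
            for dirpath, _, names in os.walk(ROOT):
                for name in names:
                    path = os.path.join(dirpath, name)
                    out.write(f"{sha256(path)}  {path}\n")

    def verify():
        bad = 0
        with open(MANIFEST) as inp:
            for line in inp:
                digest, path = line.rstrip("\n").split("  ", 1)
                if sha256(path) != digest:
                    print("MISMATCH:", path)
                    bad += 1
        return bad

    if __name__ == "__main__":
        sys.exit(verify() if sys.argv[1:] == ["verify"] else build() or 0)

This only detects rot, it does not repair it, but detection plus a second copy is usually enough.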


Some people have mentioned md RAID. tomas has mentioned LUKS. I believe both of them add checksums to the contained contents. So, bit-rot within a container should be caught by the container driver. In the case of md RAID, the driver should respond by fetching the data from another drive and then dealing with the bad block(s); the application should not see any error (?). I assume LVM RAID would respond like md RAID (?). In the case of LUKS, the driver has no redundant data (?) and will have no choice but to report an error to the application (?). I would guess LVM non-RAID would behave similarly (?).
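
For md specifically, the usual precaution is a periodic scrub: ask the driver to read every block and compare the copies/parity, rather than waiting for an application read to land on the bad spot. A minimal sketch using the kernel's md sysfs interface (the array name md0 is just an example, and this needs root):

    #!/usr/bin/env python3
    # Sketch: start an md "check" pass and report the mismatch count.
    # "md0" is an example array name.
    import time

    MD = "/sys/block/md0/md"

    with open(f"{MD}/sync_action", "w") as f:
        f.write("check\n")

    # Poll until the check finishes (sync_action goes back to "idle").
    while True:
        with open(f"{MD}/sync_action") as f:
            if f.read().strip() == "idle":
                break
        time.sleep(30)

    with open(f"{MD}/mismatch_cnt") as f:
        print("mismatch_cnt:", f.read().strip())

If I remember right, Debian's mdadm package already ships a checkarray cron job that does the scrub part, so in practice this is mostly a matter of looking at mismatch_cnt afterwards.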


For all three -- md, LUKS, LVM -- I don't know what happens for bit rot outside the container (e.g. in the container metadata).
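
At least for LUKS, the metadata part of that worry can be narrowed down: the header (with the key slots) is the piece whose loss makes the whole container unreadable, and cryptsetup can write a copy of it to a file you keep elsewhere. A sketch, with /dev/sdc and the output path as placeholders:

    #!/usr/bin/env python3
    # Sketch: save a copy of the LUKS header, so bit rot in the container
    # metadata does not take the whole backup with it.
    # /dev/sdc and the output file are examples only.
    import subprocess

    subprocess.run(
        ["cryptsetup", "luksHeaderBackup", "/dev/sdc",
         "--header-backup-file", "/root/sdc-luks-header.img"],
        check=True,
    )

The matching luksHeaderRestore puts it back if the on-disk header ever goes bad.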


David

