
Re: backup archive format saved to disk



Thanks Mike,

If I can attempt to summarize a portion of what you said:

	If the issue is resistance to data block errors, it doesn't
	matter whether I use a file system or not, so I may as well use
	a file system; then, if I have difficulty, rip multiple copies
	of the file system bit by bit and apply majority rules.

		There's a package (I forget the name) that will do
		this with files: take multiple damaged copies and
		make one good copy if possible.


Does the kernel software RAID do this in RAID1 mode?  Would there be
any advantage/disadvantage to putting three partitions on the drive
and setting them up as RAID1 (and recording the partition table
[sfdisk -d] separately)?
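
To make that concrete, I'm picturing something along these lines
(untested, and the device names are only placeholders):

    # three equal-sized partitions on the one backup disk
    mdadm --create /dev/md0 --level=1 --raid-devices=3 \
        /dev/sda1 /dev/sda2 /dev/sda3

    # record the partition table separately, as mentioned above
    sfdisk -d /dev/sda > sda-partition-table.txt

    # then an ordinary file system on top of the mirror
    mkfs.ext3 /dev/md0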


Googling this topic, I find sporadic posts on different forums wishing
for something like this, but there doesn't seem to be anything
off-the-shelf for Linux.  It seems to be what the data-security
companies get paid for (e.g. the Veritas filesystem).  Do you know of
anything?

I understand your description of FEC, and I guess that's what we're
talking about.  In the absence of a filesystem that does it, I want a
program that takes a data stream (e.g. a tar.bz2 archive) and embeds
FEC data in it so it can be stored, then later can take that data and
regenerate the original data stream.
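
In other words, a hypothetical pipeline like this (neither tool exists
as far as I know; the names and the redundancy figure are made up just
to show what I'm after):

    # wished-for encoder: interleave FEC data into the stream
    tar cjf - /home | fec-encode --redundancy 30 > backup.fec

    # wished-for decoder: reconstruct the original stream,
    # tolerating some damaged blocks
    fec-decode < backup.fec > backup.tar.bz2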

Do I understand you correctly that, to be effective, the FEC-embedded
data stream will be three times the size of the input stream?

Does it matter if this FEC data is embedded with the data or appended?

If this doesn't exist for Linux, do you know of any open-source
non-Linux implementations that just need some type of porting?  I've
found a couple of technical papers discussing the algorithms
(Reed-Solomon) used in the par2 archive that I'll study.
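
From what I can tell, par2 works on files rather than streams, so the
nearest thing I could do today would be something like this (the 30%
redundancy figure is only my guess at a sensible level):

    # create Reed-Solomon recovery files alongside the archive
    par2create -r30 backup.tar.bz2

    # later, after copying everything back off the backup media,
    # verify and repair the archive if blocks were damaged
    par2repair backup.tar.bz2.par2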

Thanks,

Doug.


