
Re: apps Re: Question on backups using rsync



On Wed, Dec 21, 2005 at 06:36:23PM -0800, Alvin Oga wrote:

> if you don't trust find|tar ... you have major problems with the machine's
> reliability and with these brand new commands nobody has used for 30 yrs :-)
> 
> using any other "favorite backup programs" will suffer the same fate of
> losing "huge amounts of data", and more importantly, is there a way to
> recover the lost data, and/or are there alternative apps that don't have
> the "bug", or do you just simply fix the hardware ..
> 
> - there is nothing sw can do to fix flaky hardware .... and unreliable
>   hardware cannot be used as a means to invalidate "methodology"
> 
> 	- good methodologies would already account for the hundreds of
> 	ways that it can fail in the first place

That's exactly what I'm saying: your tar | gpg methodology has not accounted
for the chance of a few flipped bits, because if it had, it wouldn't lead to
massive data loss, which it does.  Compressing/encrypting after archiving is
inferior to compressing/encrypting before archiving when considering
robustness.  I just can't comprehend how you could dispute that.
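
To make that concrete, here is a rough sketch of the two styles (the paths,
recipient key, and archive names below are made up for illustration, not
taken from anybody's actual setup):

    # archive first, then compress/encrypt the whole stream -- the style
    # being discussed; one damaged byte in the gzip/gpg output can make
    # everything after it unrecoverable
    find /data -print0 \
        | tar --null --no-recursion -cf - -T - \
        | gzip \
        | gpg --encrypt --recipient backup@example.org \
        > /backups/full.tar.gz.gpg

    # compress each file *before* it goes into the archive (afio -Z);
    # damage typically costs only the file(s) stored at that spot
    find /data -print | afio -oZ /backups/full.afio

afio can also run each file through an external program on the way into the
archive, so per-file encryption works the same way; check the man page for
the exact options rather than taking my word for the flags.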

> > I'm open to hearing any advantages of tar over afio for backups, because I
> > don't know of even one.
> 
> :-)
> 
> i will bet any amount of $$$ and data .. that find | tar is better than
> the average "backup specific app" at meeting all my backup requirements
> 
> my backup specs
> 	- it will NOT corrupt my prev backups, say going back 5 years
> 	- it is fast and is live with a simple change of an ip#,
> 	and untar as needed depending on the purpose of the tar files
> 	- confidential data is encrypted and root read only
> 
> 	- i can restore to any random date and random time at any
> 	time somebody says "prove that it can be done"
> 
> 	- it can support 20 Terabytes of data in a 4U chassis ... and
> 	obviously, that data is also backed up ... i keep at least
> 	3 copies of everything in various states of readiness
> 
> 	- it doesn't cost more than the bare cost of the hw, in both the
> 	labor to write or test the "program" and the methodology
> 
> 	- it must survive a failure of 2 successive full backups
> 	( ie have a workaround for backup failures )
> 
> 	- bare metal restore should be done in a matter of a few minutes
> 	except that "restore" of 10TB of data will take a FEW seconds
> 
> 	- backup system must also be flexible and extensible and
> 	able to support 180-degree methodology changes
> 	( managers are known to change direction, ya know, and budgets
> 	  come and go randomly )
> 
> 	- and it obviously has to be searchable
> 
> 	- some people like gui's... but i think gui's are for windoze kids
> 	
> 	- more detailed specs... and semi endless list of major points
> 
> 	- find | tar meets all those specs above ...
> 
> 	and trivially scriptable and anybody can maintain it since
> 	it's not written in martian code, even if it might look like it
> 	after a few dozen people add their $0.01 to it

afio is no more of a "backup specific app" than tar is, and it has had no
more code changes than tar in the last 5 years.  Based on your comments, I'm
guessing you don't know anything about it.

I still don't see anything in that list that tar has but afio doesn't.  I *do*
know one thing that afio has that tar doesn't: much greater robustness in the
case of corruption.  Whether you "trust" your hardware or not, it doesn't make
sense to me to choose a less robust solution over a more robust solution.
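
If anyone wants to check that for themselves rather than take either of our
word for it, a crude test along these lines shows the difference (throwaway
paths, and the byte offset is arbitrary -- just pick one that lands inside
both archives):

    # build a throwaway archive of the same test tree both ways
    cd /tmp
    tar -czf test.tar.gz ./testdir
    find ./testdir -print | afio -oZ test.afio

    # overwrite one byte somewhere in the middle of each archive
    dd if=/dev/urandom of=test.tar.gz bs=1 count=1 seek=200000 conv=notrunc
    dd if=/dev/urandom of=test.afio   bs=1 count=1 seek=200000 conv=notrunc

    # try to restore from each
    mkdir tar-restore afio-restore
    ( cd tar-restore  && tar -xzf ../test.tar.gz )
        # gzip hits a crc error; everything after the damage is gone
    ( cd afio-restore && afio -iZ ../test.afio )
        # afio should report the damaged file(s) and carry on with the rest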


