
Re: Question on backups using rsync
On Wed, 21 Dec 2005, Daniel Webb wrote:

> On Wed, Dec 21, 2005 at 02:16:29AM -0800, Alvin Oga wrote:
> 
> One nit to pick here:
> 
> > - find | tar | gpg  meets all of my requirements for most all possible
> >   potential disasters and recovery
> 
> As I describe on my backup page, that's a terrible idea.  One corrupt bit and
> you lose *huge* amounts of data.

if you don't trust find|tar ... you have major problems with the machine's
reliability, not with these "brand new" commands nobody used for 30 yrs :-)
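for the record, the pipeline under discussion looks roughly like this -- a
sketch only, using a throwaway directory and a demo symmetric passphrase so
it runs anywhere; a real setup would use real paths and a real recipient key:

```shell
# stand-in source and destination dirs, so this sketch is self-contained
SRC=$(mktemp -d); OUT=$(mktemp -d)
echo "hello" > "$SRC/file.txt"

# find | tar | gpg : select files, stream them into one archive, encrypt it
find "$SRC" -type f -print0 \
  | tar --null -czf - -T - 2>/dev/null \
  | gpg --batch --yes --pinentry-mode loopback --passphrase demo \
        --symmetric -o "$OUT/backup.tgz.gpg"

# restore is the pipe run backwards: decrypt, then untar (here just list)
gpg --batch --yes --pinentry-mode loopback --passphrase demo \
    -d "$OUT/backup.tgz.gpg" 2>/dev/null | tar -tzf -
```

the nice property is that every stage is a bog-standard tool, so any box
with find, tar, and gpg can read the backup.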

using any other "favorite backup program" will suffer the same fate of
losing "huge amounts of data" on flaky hardware, and more importantly:
is there a way to recover the lost data, an alternative app that doesn't
have the "bug", or just a fix for the hardware itself ..

- there is nothing sw can do to fix flaky hardware .... and unreliable
  hardware cannot be used as a means to invalidate "methodology"

	- good methodologies would already account for the various hundred
	ways that it can fail in the first place
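one concrete way a methodology can "account for failure" up front (my
illustration, not anything specific from this thread): checksum every
archive at write time and verify it before ever trusting the copy --
then one corrupt bit is detected, not silently restored:

```shell
# throwaway working dir so the sketch is self-contained
work=$(mktemp -d)
echo "payload" > "$work/data.txt"

# write the archive and record its checksum alongside it
tar -czf "$work/backup.tgz" -C "$work" data.txt
( cd "$work" && sha256sum backup.tgz > backup.tgz.sha256 )

# later, before relying on the copy: verify it still matches
( cd "$work" && sha256sum -c --quiet backup.tgz.sha256 ) && echo "backup verified"
```

the same check works whether the archive came from tar, afio, or any
other tool -- the verification step belongs to the methodology, not the app.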

> I'm open to hearing any advantages of tar over afio for backups, because I
> don't know of even one.

:-)

i will bet any amount of $$$ and data .. that find | tar is better than
the average "backup-specific app" at meeting all my backup requirements

my backup specs
	- it will NOT corrupt my prev backups, say going back 5 years
	- it is fast, and can go live with the simple change of an ip#,
	or be untarred as needed depending on the purpose of that tar file
	- confidential data is encrypted and root read only

	- i can restore any random data from any random time, whenever
	somebody says "prove that it can be done"

	- it can support 20 terabytes of data in a 4U chassis ... and
	obviously, that data is also backed up ... i keep at least
	3 copies of everything in various states of readiness

	- it doesn't cost more than the bare cost of the hw, in both the
	labor to write and to test the "program" and methodology

	- it must survive a failure of 2 successive full backups
	( ie have a workaround for backup failures )

	- a bare metal restore should be done in a matter of a few minutes,
	except that a "restore" of 10TB-sized data will take more than a FEW minutes

	- the backup system must also be flexible and extensible, and
	able to support 180-degree methodology changes
	( managers are known to change directions, ya know, and budgets
	  come and go randomly )

	- and it obviously has to be searchable

	- some people like gui's ... but i think gui's are for windoze kids
	
	- more detailed specs ... and a semi-endless list of major points

	- find | tar meets all those specs above ...

	and it's trivially scriptable, and anybody can maintain it since
	it's not written in martian code, even if it might look like it
	after a few dozen people add their $0.01 to it
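"trivially scriptable" in practice -- here's a hedged sketch of the kind of
dated find|tar wrapper anybody could maintain; the directory names and the
3-copy retention count are illustrative stand-ins, not my actual script:

```shell
# stand-ins for real source and backup dirs, so the sketch runs anywhere
SRC=$(mktemp -d); DST=$(mktemp -d)
echo "x" > "$SRC/notes.txt"

# one dated full backup per run
stamp=$(date +%Y%m%d-%H%M%S)
find "$SRC" -type f -print0 \
  | tar --null -czf "$DST/full-$stamp.tgz" -T - 2>/dev/null

# keep only the 3 newest full backups, like keeping "3 copies of everything"
ls -1t "$DST"/full-*.tgz | tail -n +4 | xargs -r rm --
```

cron it nightly and the whole "backup app" fits on one screen.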

c ya
alvin 


