
Re: Problems with making hardlink-based backups



On Mon, Aug 17, 2009 at 10:59:20AM +0200, David wrote:
> Thanks for the replies.
[...]
> 
> Basically, the problem isn't that I don't know how to use rsync, cp,
> etc. to make the backups, manage generations, and so on... the problem
> is an incredibly large filesystem (as in number of hardlinks, and, to a
> lesser extent, actual directories), resulting from the hardlink
> snapshot-based approach (as opposed to something like rdiff-backup,
> which only stores the differences between generations).
[...]

Ah, well, that is a problem, isn't it. I can see why you'd like to
stick with a diff-based backup then. Is there some way you can control
the number of files by tarring up sections of the filesystem prior to
backup? If you have a lot of high-churn files, you'll likely be
duplicating them between snapshots anyway, so tarring up the whole lot
might make sense. Then you back up the tarballs instead.
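For instance, something along these lines (an untested sketch in
Python; the /srv paths and the one-tarball-per-top-level-directory
layout are just assumptions for illustration, not anything specific
to your setup):

    import tarfile
    from pathlib import Path

    # Hypothetical layout: each top-level directory under SRC is one
    # "section" of the tree holding lots of small, high-churn files.
    SRC = Path("/srv/data")
    DEST = Path("/srv/backup-staging")

    def tar_sections(src, dest):
        """Roll each section into a single tarball so the backup tool
        sees a handful of large files instead of millions of small
        ones (and millions of hardlinks per snapshot)."""
        dest.mkdir(parents=True, exist_ok=True)
        for section in sorted(p for p in src.iterdir() if p.is_dir()):
            tarball = dest / (section.name + ".tar.gz")
            with tarfile.open(tarball, "w:gz") as tar:
                tar.add(section, arcname=section.name)

    tar_sections(SRC, DEST)

You'd then point rsync (or whatever does the snapshotting) at the
staging directory rather than the original tree, so each generation
costs one hardlink per section instead of one per file.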

Here's another question: what is stored in all these millions of
files, and what is their purpose? Is it a case of using a filesystem
when a database might be a better option? Perhaps the whole problem
you're facing on the back end could be better solved by looking at
the front end. Of course, you'll want to avoid the tail wagging the
dog...

Just a couple of thoughts.

Good luck.

A


