
Re: Problems with making hardlink-based backups



Err.. and another post on BackupPC, sorry.

I think that BackupPC is actually going to have the same problem
(massive filesystems causing du, locate, etc. to become next to
unusable on the backup storage directories). The reason for this:

"Therefore, every file in the pool will have at least 2 hard links
(one for the pool file and one for the backup file below
__TOPDIR__/pc). Identical files from different backups or PCs will all
be linked to the same file. When old backups are deleted, some files
in the pool might only have one link. BackupPC_nightly checks the
entire pool and removes all files that have only a single link,
thereby recovering the storage for that file."

i.e., there are actually hard links for every file, for every server,
for every backup generation. du and locate are still going to have a
bazillion files to go through, even if they are stored in a nice pool
system.
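
For what it's worth, the link-count trick the quoted docs describe is
simple enough to sketch. This is just an illustration of the idea, not
BackupPC_nightly's actual code, and the pool path below is made up:

#!/usr/bin/env python3
"""Sketch of the pruning idea: walk a pool directory and drop files
whose only remaining link is the pool entry itself."""
import os

POOL_DIR = "/var/lib/backuppc/pool"  # hypothetical pool location


def prune_pool(pool_dir: str) -> int:
    """Remove pool files with a link count of 1; return how many."""
    removed = 0
    for dirpath, _dirnames, filenames in os.walk(pool_dir):
        for name in filenames:
            path = os.path.join(dirpath, name)
            # st_nlink counts every hard link; 1 means no backup
            # tree references this pool file any more.
            if os.lstat(path).st_nlink == 1:
                os.unlink(path)
                removed += 1
    return removed


if __name__ == "__main__":
    print("removed %d orphaned pool files" % prune_pool(POOL_DIR))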

BackupPC has some nice features, but it's not going to fix my problem :-(.

Ideally I would have kept using rdiff-backup, but for now I'm going to
go with hardlink snapshots & pruning, with the restore details kept in
a plain text file (a rough sketch of the idea is below).
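
The hardlink snapshot part is essentially what rsync --link-dest or
cp -al already do; here is a minimal Python sketch of the idea, with
made-up paths, just to show where the hard links come from:

#!/usr/bin/env python3
"""Mirror the previous snapshot into a new one using hard links, so
unchanged files cost no extra space (only new directory entries)."""
import os

PREV = "/backups/host/2005-01-01"   # hypothetical previous snapshot
NEXT = "/backups/host/2005-01-02"   # hypothetical new snapshot


def link_snapshot(prev: str, next_: str) -> None:
    """Recreate prev's tree under next_, hardlinking every file."""
    for dirpath, _dirnames, filenames in os.walk(prev):
        rel = os.path.relpath(dirpath, prev)
        target_dir = os.path.join(next_, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in filenames:
            src = os.path.join(dirpath, name)
            dst = os.path.join(target_dir, name)
            os.link(src, dst)   # new directory entry, same inode


if __name__ == "__main__":
    link_snapshot(PREV, NEXT)
    # After this, syncing the live data over NEXT would replace only
    # the files that actually changed.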

Is my use case really that unusual? (Wanting to run 'du' and 'locate'
on a backup server that holds many generations of data from other
servers, each of which contains a huge number of files.)

Going to ask about this general problem over on the BackupPC mailing
list; maybe people there have more ideas :-)

David.

