
Backup solutions without reinventing the wheel these days



Hello,

I'm looking for recommendations for backup solutions that don't reinvent the wheel and are reliable and widely used. I want to back up two servers to a backup server. The main data set is several hundred GB spread across many very small files.

I really like the idea behind backupninja, because it provides a centralized solution to the cron + ssh transfer (rsync) + mail paradigm and alleviates the need to write one's own elaborate scripts. It also provides the most common backup helper scripts with sensible defaults. The mail reporting part isn't that great (it doesn't offer consistent logging of the data transfers), but that can be fixed with a few custom shell scripts.
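For what it's worth, that reporting gap can be papered over with a small wrapper around the transfer. A minimal sketch of what I mean (the source path, destination and recipient are made-up placeholders, not anything backupninja defines):

    #!/bin/sh
    # Hypothetical wrapper: run the rsync transfer and mail a summary of it.
    SRC=/srv/data/
    DEST=backup@backupserver:/backups/host1/data/
    LOG=$(mktemp)

    # --stats makes rsync report how much data was actually transferred.
    rsync -az --delete --stats -e ssh "$SRC" "$DEST" >"$LOG" 2>&1
    STATUS=$?

    mail -s "backup host1: rsync exit $STATUS" root <"$LOG"
    rm -f "$LOG"
    exit $STATUS

Hooked into cron (or an equivalent backupninja job), something like this gives a consistent per-run record of what was moved.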

However, I found that for my use case rdiff-backup runs out of memory on the backup server (1 GB RAM + 1 GB swap) and duplicity creates a signature file of over 50 GB. I could use plain rsync, but incremental backups and compression would be nice to have, since data corruption may not become apparent immediately.
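In case it's useful, plain rsync can already do cheap incrementals by hard-linking unchanged files against the previous run with --link-dest; -z only compresses on the wire, not on disk, so it doesn't cover the compression wish. A rough sketch, with the layout under /backups/host1 and the source path invented for illustration:

    #!/bin/sh
    # rsnapshot-style daily increments: unchanged files become hard links
    # into the previous day's tree, so each day only costs the changed files.
    BASE=/backups/host1
    TODAY=$(date +%F)

    rsync -az --delete \
        --link-dest="$BASE/latest" \
        backup@host1:/srv/data/ "$BASE/$TODAY/"

    # Point "latest" at the newest increment for the next run.
    ln -sfn "$BASE/$TODAY" "$BASE/latest"

With many very small files the increments stay small, but every retained day still adds its full inode count, which is worth keeping in mind.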

I've also looked at the new kids on the block like obnam, attic and borgbackup. They look interesting, but I prefer time-tested software for backups.
After realizing that these new backup programs largely replicate features of btrfs or ZFS (incremental snapshots, block-level compression and deduplication), I started thinking that I could perhaps just send the data to the backup server via rsync, store it on btrfs or ZFS (though the backup server may not have enough RAM for ZFS), and create daily snapshots there. If memory permits (after some tuning), I'd go with ZFS, as it should be more reliable. Does anybody use such a solution?
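To make the idea concrete, here is a minimal sketch of the btrfs variant (subvolume layout and paths are assumptions on my part; on ZFS it would be zfs snapshot instead, and mounting btrfs with -o compress covers the on-disk compression):

    #!/bin/sh
    # Receive into a writable btrfs subvolume, then freeze a read-only daily snapshot.
    # /backups/host1/current is assumed to be a btrfs subvolume created beforehand.
    BASE=/backups/host1

    # --inplace rewrites only the changed parts of files, so older snapshots keep
    # sharing the unchanged extents instead of ending up with whole new copies.
    rsync -az --delete --inplace backup@host1:/srv/data/ "$BASE/current/"

    btrfs subvolume snapshot -r "$BASE/current" "$BASE/$(date +%F)"

Deduplication beyond the snapshot sharing would still need an extra tool on btrfs, whereas ZFS can do it natively, at a considerable RAM cost.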

I also had a look at Bacula, but it does not seem to offer block-level deduplication or compression at the moment.

I'm looking forward to your recommendations.

Kind regards,
Ondřej Grover
