Re: How to design a centralized cross-platform BACKUP solution?
On Tue, Feb 14, 2012 at 10:31:22AM +0100, Davide Mirtillo wrote:
> > On 14/02/2012 09:50, J. Bakshi wrote:
> > On Tue, 14 Feb 2012 09:43:39 +0100
> > Davide Mirtillo <firstname.lastname@example.org> wrote:
> >> On 14/02/2012 08:00, J. Bakshi wrote:
> >>> Hello list,
> >>> I would like to set up a Debian server with plenty of HDD space to act as a
> >>> centralized network backup server. The idea is that both Linux and Windows users
> >>> can keep their backups on this server (maybe just folders, or a complete partition
> >>> or two), browse their backups through a web interface, and restore them easily -
> >>> and, most importantly, with minimal client configuration.
> >>> I have already run "apt-cache search" in Debian, and the closest match to my
> >>> scenario seems to be backuppc. Amanda is also there, but I don't know how well
> >>> it fits my requirements.
> >>> So for now I am just listening: please share your experiences and implementations.
> >>> Waiting to hear from you...
> >>> Thanks
> >> Hi,
> >> I had the same issue some time ago and was undecided between backuppc and
> >> bacula. I eventually ended up using backuppc, because bacula turned out
> >> to be too complex for my case. And I have to admit, backuppc is really
> >> well-done software, if you set it up correctly.
> > Nope, I have never tried bacula. My focus is on backuppc and amanda.
> > Nice to get positive feedback for backuppc. How does backuppc initiate
> > backups? Through cron, or a manual call from the client?
> There's no "client" per se; the software wakes up at certain intervals,
> which you can define (the default is every hour), checks the job list,
> and acts accordingly. For example, you can configure it to run
> incremental backups every day during work hours and full backups every
> week after work hours. AFAIK it also has a queuing system - say you
> don't want to run more than one backup simultaneously: it will queue
> the task and try again at the next wakeup.
> Of course you can still force backups through the web interface. These
> ignore the scheduling restrictions, but still count towards the number
> of backups the software is configured to run in a given period.
> It supports a lot of file sharing protocols and it's a pretty flexible
> solution overall, since you can configure pretty much anything.
+1 for backuppc (although I've never used amanda). Its scheduler is
fairly smart. If a machine (backuppc calls them hosts) is not available
for backup, it tries again later (default is every hour). You can
configure blackout periods when no backups will occur, and you can
configure rules to ignore the blackout periods (say, if a host hasn't
been backed up in 3 days).
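For reference, the scheduling and blackout knobs live in /etc/backuppc/config.pl, which is plain Perl. A minimal sketch (the specific hours and counts below are illustrative, not recommendations):

```perl
# Hours of the day at which BackupPC wakes up and checks
# whether any host is due for a backup (here: once an hour).
$Conf{WakeupSchedule} = [0..23];

# Blackout period: no automatic backups 08:00-19:00 on
# weekdays (weekDays: 0 = Sunday .. 6 = Saturday).
$Conf{BlackoutPeriods} = [
    {
        hourBegin => 8.0,
        hourEnd   => 19.0,
        weekDays  => [1, 2, 3, 4, 5],
    },
];

# Blackout only applies to a host after this many consecutive
# successful pings - so a rarely reachable machine gets backed
# up whenever it shows up, blackout or not.
$Conf{BlackoutGoodCnt} = 7;
```

Per-host overrides of any of these go in /etc/backuppc/pc/HOSTNAME.pl.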
It pools common files, so if you back up /etc on 10 identical machines,
it's only going to save a single copy. But during restore, the file
pooling is transparent to the user.
I've used backuppc to back up locally and off-site (over the internet,
using backuppc's rsync support). It works really well.
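For the off-site case, a per-host override file selects rsync over ssh. Roughly, for BackupPC 3.x (the hostname and share paths here are invented, and the BackupPC server's user needs passwordless ssh access to the remote machine):

```perl
# /etc/backuppc/pc/offsite-host.pl - per-host overrides
$Conf{XferMethod}     = 'rsync';
$Conf{RsyncShareName} = ['/etc', '/home'];

# Run rsync on the remote machine over ssh; the $variables are
# expanded by BackupPC at run time.
$Conf{RsyncClientCmd} = '$sshPath -q -x -l root $host $rsyncPath $argList+';
```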
You can give users access to certain hosts, allowing them to manually
trigger backups and restores. Or you can keep that as an
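Concretely, per-host access is granted in BackupPC's hosts file; the usernames below are made up:

```
# /etc/backuppc/hosts
# host          dhcp    user    moreUsers
alice-laptop    0       alice
devbox          0       bob     carol,dave
```

The "user" column owner (plus anyone listed in "moreUsers") can start backups and restores for that host from the CGI interface, while $Conf{CgiAdminUsers} in config.pl names the admins who can manage every host.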