One-time deal: backup of a RAID5 to a set of removable hard drives on another machine?
Hi,
I have a (remotely stationed) server with a RAID5 holding about 630GB of data
that I would like to back up to a set of removable hard drives on another
PC at the same location.
I will use 250GB hard drives as the removables. Based on experimentation with
the data, I find that using tar zcvf gets it down to about 330GB.
The removable drive is mounted on /mnt/backup at 192.168.0.1.
So I thought to do
tar zcf - directory|ssh backmeup@192.168.0.1 "cat >/mnt/backup/backup.tgz"
but that would be too big for the destination drive.
I thought of trying to use split on the destination side:
tar zcf - directory|ssh backmeup@192.168.0.1 "split -b \
230000m - /mnt/backup/backup"
but this wouldn't work either. While it would split the stream into two
files, both pieces would land on the same 250GB drive, so it would still
run out of space, and it would give me no chance to umount and remount the
hard drive in between...
Then maybe something like
tar zcf - directory|split -b 230000m -|ssh backmeup@192.168.0.1 "cat \
> /mnt/backup/backup"
but that doesn't work either: split writes its pieces out as files rather
than to stdout, so there is nothing left for ssh to send.
Is there a cool Unix tool (or an idea for a Perl script) to combine with
split on the server side that will pause after split finishes creating the
first file, so that I can umount the first drive remotely and mount the
second drive to receive the rest of the data?
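Turning it around, I suppose the receiving end could do both the chunking and
the pausing itself, with the server side staying a single tar|ssh pipe. A
completely untested sketch of what I have in mind, where the chunk size, the
file names and the /tmp flag file are just placeholders:

#!/usr/bin/perl
use strict;
use warnings;

# Untested sketch: read the tar stream on stdin and write it into numbered
# chunk files on the removable drive, pausing between chunks until a flag
# file is touched from another login (after the drive has been swapped).
my $chunk_bytes = 220 * 1024 * 1024 * 1024;     # ~220GB per drive, adjust to taste
my $ready_flag  = '/tmp/next-drive-ready';      # touch this after remounting
my ($buff, $part, $more) = ('', 0, 1);

while ($more) {
    $part++;
    open(my $out, '>', "/mnt/backup/backup.tgz.$part")
        or die "cannot open chunk $part: $!";
    my $written = 0;
    while ($written < $chunk_bytes) {
        my $n = read(STDIN, $buff, 1024 * 1024);
        if (!$n) { $more = 0; last; }           # end of the tar stream
        print {$out} $buff;
        $written += $n;
    }
    close($out) or die "close failed for chunk $part: $!";
    if ($more) {
        warn "chunk $part full; swap the drive, then: touch $ready_flag\n";
        sleep 30 until -e $ready_flag;          # wait for the fresh drive
        unlink $ready_flag;
    }
}

The server side would then be just
tar zcf - directory|ssh backmeup@192.168.0.1 "perl chunker.pl"
(with "chunker.pl" standing for whatever the script ends up being called), and
while the remote script sits waiting for the flag file, the ssh pipe simply
fills up and tar blocks on the server, so nothing should be lost.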
I guess I could replace/recreate split on the server side as a Perl script
which would count the data as it sent it and stop sending when it got to
220GB or so, and then write a second script that would throw away the first
220GB and start from there. Something like:
# Send exactly the first ~220GB (220,000,000KB) of stdin to stdout, then stop.
my $limit = 220_000_000 * 1024;
my ($buff, $sent) = ('', 0);
while ($sent < $limit) {
    my $n = read(STDIN, $buff, 1024) or last;   # stop at end of the stream
    $buff = substr($buff, 0, $limit - $sent) if $sent + $n > $limit;
    print STDOUT $buff;
    $sent += $n;
}
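The second script, the one that throws away the first chunk, would then be
roughly the mirror image. Again untested, and it only works if the second tar
run produces a byte-identical stream (nothing in the directory changes between
runs) and both scripts use exactly the same cut-off:

# Discard exactly the first 220,000,000KB of stdin, pass the rest to stdout.
my $limit = 220_000_000 * 1024;                 # must match the first script
my ($buff, $seen) = ('', 0);
while (my $n = read(STDIN, $buff, 1024)) {
    if ($seen + $n > $limit) {
        my $skip = $seen >= $limit ? 0 : $limit - $seen;
        print STDOUT substr($buff, $skip);      # keep only bytes past the cut
    }
    $seen += $n;
}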
I suppose I could calculate in advance how to split the directory into two
roughly equal parts, but that would be less fun than having a way of
splitting and then pausing...
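If I did go that route, a quick greedy pass over du output would probably get
close enough. An untested sketch, where "directory" stands for the real
top-level path:

#!/usr/bin/perl
use strict;
use warnings;

# Untested sketch: assign top-level subdirectories to two piles of roughly
# equal (uncompressed) size, based on du -sk output.
my %size;
open(my $du, '-|', 'du', '-sk', glob('directory/*')) or die "du: $!";
while (<$du>) {
    chomp;
    my ($kb, $path) = split /\t/, $_, 2;
    $size{$path} = $kb;
}
close $du;

# Biggest first, always into the currently smaller pile.
my @piles = ([0, []], [0, []]);
for my $path (sort { $size{$b} <=> $size{$a} } keys %size) {
    my $pile = $piles[0][0] <= $piles[1][0] ? $piles[0] : $piles[1];
    $pile->[0] += $size{$path};
    push @{ $pile->[1] }, $path;
}

for my $i (0, 1) {
    printf "pile %d (%d KB):\n  %s\n", $i + 1, $piles[$i][0],
           join("\n  ", @{ $piles[$i][1] });
}

Equal raw size wouldn't guarantee equal compressed size, so I would still
check that each pile fits on its drive after compression; then it is just two
separate tar|ssh runs, one per drive.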
Is there a better way?
Mitchell