
Re: debian disk is dying



On Sat, Oct 17, 2009 at 7:40 PM, Jude DaShiell <jdashiel@shellworld.net> wrote:
> The hardware needs to be returned to the factory for a warranty-covered
> replacement.  I have an esata docking station and an esata hard drive I can
> put this system on though.  I'm using the command line and figure I'll

Two things you might try to help diagnose the problem:

First, try moving the drive to some other known-working system so that
someone with a bit more experience can diagnose the problem. Since you
already have esata, you will of course want to make sure the drive is
compatible with the other system. Doing the diagnosis on separate
hardware keeps other factors from creeping in. Of course, if it is your
primary boot drive (the / root partition), that is more difficult.
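
As an aside, once the drive is sitting in the dock on the other box,
one quick way to pull a health report off it is smartmontools,
assuming that package is installed and that the drive shows up as,
say, /dev/sdb (check dmesg for the actual device name):

$ dmesg | tail
$ smartctl -a /dev/sdb

The reallocated-sector and pending-sector counts in that output are
usually the first things worth looking at on a drive you suspect is
dying.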

Second, back up your data, though maybe cpio is not the right tool.
Some people are very used to cpio (in fact the first time I
encountered it I was a rank newbie to Unix; in fact I knew Bill and
Lynne Jolitz). I've just never been that comfortable using it as a
primary backup method, even though I know it is a critical system tool
that other things build on (the deb package infrastructure being one
of the most prominent). Of course learning how to use cpio may be
important, but if your data is that critical, you want to know how to
back up all of it with tools you are comfortable with. One of the
constructs I've used in the past goes basically like this:

1) Mount the media to be backed up, if that hasn't been done already
(step one is pretty much a given).

2) Mount the known-good receiving partition, preferably a spare
location that has enough available space.

3) Run something like the following. Getting all the parameters exact
can be tricky, so I start with a few files first, see if that does the
job, then adjust until I know the process will work:

$ tar -cf - . | (cd /mountpoint && tar -xf -)

This is the basic idea, and the parameters need to be salted to
taste. The basic concerns are:

1) do you have write access to the receiving filesystem?
2) will permissions, ownerships, etc. be preserved?
3) will the files actually land where you intend them to?

All are crucial. One of the most common mistakes I still run into from
time to time is relative vs. absolute paths: know well in advance
where the data you plop down elsewhere is going to end up.
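
As a concrete sketch (the paths here are only placeholders for your
own source directory and mounted receiving partition), staying inside
the source directory keeps everything relative, -p preserves
permissions on extraction, and running the whole thing as root
preserves ownership as well:

$ cd /home
$ tar -cf - . | (cd /mnt/backup/home && tar -xpf -)
# spot-check the result against the original
$ ls -ln /home | head
$ ls -ln /mnt/backup/home | head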

To answer the other part: don't even try to back up directories like
/tmp, /dev, /proc, and /sys, and go single-user if you want to back up
/var, because that content may (or rather will) be different from the
time you start the backup until you finish. The only reason I mention
this is that it can trip you up:

1) Those directories (with the exception of /var, and maybe one or two
others that I've forgotten to mention) are not real on-disk
directories; they are created and maintained by external or kernel
processes.

Corollary to 1): Skipping them saves space on the backup medium, and
(more importantly) tar and other tools have ways to supply a list of
filenames/pathnames to skip automatically, as in the sketch below.
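
With GNU tar, for example, you can hand it exclude patterns right on
the command line (the destination is again a placeholder, and you may
want to add other directories to the list):

$ cd /
$ tar -cf - --exclude=./proc --exclude=./sys --exclude=./dev \
      --exclude=./tmp --exclude=./mnt . \
      | (cd /mnt/backup/root && tar -xpf -)

GNU tar also has --exclude-from=FILE if you would rather keep the skip
list in a file, and --one-file-system, which skips anything mounted
below the tree being archived.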

Whether another tool is more appropriate is a separate question. One
reason for suggesting tar over other methods is that it is basically
guaranteed to exist on every system. What could be worse than a backup
made with some special, Really Really Good (TM) tool or suite that
then isn't available when you need to restore your data?
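
One cheap way to gain some confidence that the data really can be
restored is to compare the copy against the original before you
actually need it, for instance (placeholder paths again, and any
directories you skipped will naturally show up as differences):

$ diff -r --brief /home /mnt/backup/home | head

Silence from that command is what you want to see.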

Apologies for rambling.



-- 
thanks for letting me change the magnetic patterns on your hard disk.

