
Re: A question about deleting a big file structure from a big disk in Jessie: Why does this work? I'm really worried.

Paul E Condon wrote:
> On rereading my message, I can see why you are unhappy and offended.

I was neither unhappy nor offended.  I am sorry if my responses
indicated any such thing.  I went back and read what I wrote and I am
at a loss to know where I went wrong.  Please let me know so that I
can avoid causing this misunderstanding in the future.

> Between making ill-advised posts here, I have been searching the web.
> I found two sources that I wish I had known about, but didn't.
> One is the Backblaze.com web site. Their marketing actually contains
> some real technical information on modern HD technology.
> And an article in Wikipedia on the history of disk storage:
> (http://en.wikipedia.org/wiki/Hard_disk_drive) and other articles that
> are linked from there. I learned things that you might think everyone
> knows, but I didn't.

Good stuff there.

> And, the newer high information density drives all have a supply of
> reserve sectors which they use to automatically replace sectors that
> are showing signs of incipient failure.

Yes.  Drives from the last decade have reserve sectors.  Blocks are
mapped internally by the controller between a logical address and a
physical address.  From the outside we only see logical block
addressing; the controller may place those blocks anywhere on the
physical disk.  There is no one-to-one mapping anymore.  This is
taken to the extreme with newer SSDs, which *continuously* remap
blocks.
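Purely as an illustration (a toy model I made up, nothing like real
firmware), the idea of a logical-to-physical map backed by a pool of
reserve sectors can be sketched like this:

```python
# Toy model of a drive controller's logical-to-physical block map.
# Illustrative only; real firmware is far more involved.

class ToyController:
    def __init__(self, data_sectors, reserve_sectors):
        # Initially logical block N maps straight to physical sector N.
        self.mapping = {lba: lba for lba in range(data_sectors)}
        # Spare physical sectors held back for remapping.
        self.reserve = list(range(data_sectors, data_sectors + reserve_sectors))

    def remap(self, lba):
        """Retire the physical sector behind `lba`, substituting a spare."""
        if not self.reserve:
            raise RuntimeError("reserve exhausted: next failure loses data")
        self.mapping[lba] = self.reserve.pop(0)

c = ToyController(data_sectors=100, reserve_sectors=2)
c.remap(5)             # logical block 5 now lives on a spare sector
print(c.mapping[5])    # -> 100
print(len(c.reserve))  # -> 1
```

The host keeps addressing "block 5" the whole time; only the
controller knows it moved.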

> All of the disks in USB packaging that I have had are ones to which
> these facts apply. If one is gathering the right data while using
> them, one can predict when they can no longer continue to serve,
> that is when, for each disk individually, its supply of reserve
> sectors runs out. Other random failures can shorten the life,
> causing failure while there is still a supply of 'reserve' sectors.

I am skeptical about being able to accurately predict a drive failing.
However, I try to replace drives in single-drive systems when the
reserve blocks are exhausted.  At that point the next consumption
would result in data loss.  The drive hasn't failed yet, but if it
continues then it will, though sometimes that might still be years in
the future.  I use those drives in victim systems for installation
testing and other non-critical uses.  The drives are still useful.
But with replacement drives being inexpensive, I don't want the hassle
and panic of a problem on a critical system.  And all of my critical
systems are in a RAID configuration to avoid any single-disk fault.
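For anyone wanting to watch for reserve-sector consumption themselves,
smartmontools' `smartctl -A /dev/sda` reports the reallocated-sector
attribute.  A rough sketch of picking the raw value out of that output
(the sample table below is made up for illustration, though it follows
the usual column layout):

```python
# Rough sketch: extract the raw Reallocated_Sector_Ct value from
# `smartctl -A` output (smartmontools).  Sample text is illustrative.

def reallocated_count(smartctl_output):
    """Return the raw Reallocated_Sector_Ct value, or None if absent."""
    for line in smartctl_output.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[1] == "Reallocated_Sector_Ct":
            return int(fields[9])
    return None

sample = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
"""
print(reallocated_count(sample))  # -> 0
```

A raw value climbing toward the drive's reserve capacity is the
warning sign discussed above; zero means no sectors have been
reallocated yet.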

> The technical basis of the Backblaze business is monitoring all the
> spinning reserve (a term borrowed from the electrical power industry,
> where it means a dynamo that is already spinning but is not actually
> delivering power to the grid).
> I certainly wasn't keeping records of HD performance the way Backblaze
> says they do. I am rethinking. I think I need to be quiet for awhile.

In RAID systems I replace drives when a drive actually fails.  But I
do so as quickly as practical!  It would be bad if the other drive
failed too before the first one was replaced.  I know some hosting
providers simply replace drives based on age, since there is no
really accurate predictor.  For my systems, monitoring the RAID is


Attachment: signature.asc
Description: Digital signature
