
Re: A Very Bad umount



On Wednesday 12 September 2018 13:12:43 Étienne Mollier wrote:

> Good Day Gene,
>
> Gene Heskett <gheskett@shentel.net> 2018-09-12T03:14 +0200 :
> > On Tuesday 11 September 2018 15:28:30 Martin McCormick wrote:
> >
> > [...]
> >
> > >        Any constructive ideas are appreciated.  If I left
> > > the drives mounted all the time, there would be no spew but
> > > since these are backup drives, having them mounted all the
> > > time is quite risky.
> > >
> > > Martin McCormick WB5AGZ
> >
> > Why should you call that risky?  I have been using Amanda for
> > my backups, with quite a menagerie of media, since 1998, on 4
> > different boxes as I built newer, faster ones over the years.
>
> Should a badly placed “rm” command occur on the system, the
> system and both of its backup disks would be wiped clean.  I
> don't believe the risk mentioned above was related to disk
> decay.  It was more about minimizing the time frame in which
> this catastrophe could happen.
>
> Personally, I wouldn't do both backups at the same time.  If
> something very wrong occurred to the system at backup time, I'd
> still have the secondary backup available for restore.
>
> Things are a bit different when centralizing backup policies
> with tools like Amanda.
>
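
On minimizing that time frame: here is a minimal sketch of the
mount / copy / unmount cycle, done in Python purely for illustration.
The mount point and source tree are made-up names, and it assumes an
/etc/fstab entry exists so a bare "mount /mnt/backup" works.

#!/usr/bin/env python3
# Sketch: keep the backup disk mounted only while the backup runs,
# shrinking the window in which a stray rm could reach it.
# /mnt/backup and /home/ are placeholder names, not a recommendation.
import subprocess
import sys

MOUNT_POINT = "/mnt/backup"   # needs a matching /etc/fstab entry
SOURCE = "/home/"             # tree to back up

try:
    subprocess.run(["mount", MOUNT_POINT], check=True)
except subprocess.CalledProcessError as e:
    sys.exit(f"could not mount backup disk: {e}")
try:
    # -a keeps permissions and times; --delete mirrors removals
    subprocess.run(["rsync", "-a", "--delete", SOURCE,
                    MOUNT_POINT + "/home/"], check=True)
finally:
    # best effort: never leave the disk mounted, even if rsync failed
    subprocess.run(["umount", MOUNT_POINT])

The disk is then exposed only for the duration of the rsync, not all
day.
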
> > IMO the power savings from spinning down when not in active
> > use, do not compensate for the increased failure rate you'll
> > get under stop and start conditions.
>
> Interesting opinion; it could be worth verifying.
> 
True, but actually verifying that would take 2 identical systems, doing 
the same things, to prove it.  So it's difficult at best.

OTOH, someone like Google, which runs thousands of machines 24/7, is 
in the opposite camp: their machines are only down for disk 
replacements, which they do in pallet quantities, at least for those 
machines facing the public.  But just guessing, based on my own 
experience, I'd say they have records going back to the beginning of 
their search engine that would confirm, to a high degree of certainty, 
that letting drives spin 24/7 till they do die is the most important 
factor in their longevity.

That drive I just pulled out at 77,000+ spinning hours had just under 
50 powerdowns while it was in this machine, since I built this one in 
2007.  My UPS used to shut things off at about 7 or 8 minutes, but it 
has now been 3+ years since I had a 20 kW Generac with autostart and 
autotransfer put in (the missus has end stage COPD and a prolonged 
failure would probably finish her), so as far as this UPS is 
concerned, there has only been one powerdown since; the power failures 
have been in the 6-second territory, just the generator startup and 
transfer delays.  That leaves hardware failure, of which there hasn't 
been any except for bad SATA cables, and the machine's semi-annual 
shutdown to be wheeled out to the front deck for a dusting and a 
cleaning with an 80 PSI air hose.

The rest of the 1T drives in this machine, except for the 2T I just 
installed, have 40,000+ hours of spin time on them.
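
Those figures come out of the drives' SMART data, by the way.  Here is
a minimal sketch of pulling the two interesting counters, assuming
smartctl from the smartmontools package is installed and you run it as
root; /dev/sda is just a placeholder:

#!/usr/bin/env python3
# Sketch: read Power_On_Hours and Power_Cycle_Count from a drive's
# SMART attribute table via smartctl (smartmontools package).
# /dev/sda is a placeholder; needs root.
import subprocess

DEVICE = "/dev/sda"

out = subprocess.run(["smartctl", "-A", DEVICE],
                     capture_output=True, text=True, check=True).stdout

for line in out.splitlines():
    fields = line.split()
    # ATA attribute rows carry the raw value in the 10th column
    if len(fields) >= 10 and fields[1] in ("Power_On_Hours",
                                           "Power_Cycle_Count"):
        print(fields[1], "=", fields[9])

On most drives the raw values of those two attributes are plain hours
and powerup counts, which is where numbers like the 77,000 above come
from.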

> Keeping a
> machine running for BOINC, I have only had a disk issue once since
> the beginning of the decade.  Building disks has energy costs
> too, indeed.

True, but that's hidden in what you pay for them at the gitten place. ;-)

> Kind Regards,

Back to you, Étienne Mollier.

-- 
Cheers, Gene Heskett
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Gene's Web page <http://geneslinuxbox.net:6309/gene>

