
Re: RAID-1 and disk I/O



On Sun, 18 Jul 2021 at 07:03, David Christensen
<dpchrist@holgerdanske.com> wrote:
> On 7/17/21 5:34 AM, Urs Thuermann wrote:

> > On my server running Debian stretch,
> > the storage setup is as follows:
> > Two identical SATA disks with 1 partition on each drive spanning the
> > whole drive, i.e. /dev/sda1 and /dev/sdb1.  Then, /dev/sda1 and
> > /dev/sdb1 form a RAID-1 /dev/md0 with LVM on top of it.

> > ------------------------------------------------------------------------------
> > # diff -U20 <(smartctl -x /dev/sda) <(smartctl -x /dev/sdb)

> > -  9 Power_On_Hours          -O--CK   042   042   000    -    51289
> > +  9 Power_On_Hours          -O--CK   051   051   000    -    43740

> >   SMART Extended Self-test Log Version: 1 (1 sectors)
> >   Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
> > -# 1  Short offline       Completed without error       00%     21808         -
> > +# 1  Short offline       Completed without error       00%     14254         -

sda was last self-tested at 21808 power-on hours and is now at 51289.
sdb was last self-tested at 14254 power-on hours and is now at 43740.
And those were only short self-tests (a couple of minutes each).
So each drive has apparently run just one short self-test, roughly
29500 power-on hours (more than three years of uptime) ago.

I am a home user, and I run long self-tests regularly using
# smartctl -t long <device>
In my opinion these drives are due for a long self-test.
I have no idea if this will add any useful information,
but there's an obvious way to find out :)
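
Something like the following should do it (the device name /dev/sda
is just an example; run as root):

# smartctl -c /dev/sda
  (look for "Extended self-test routine recommended polling time"
   to see roughly how long the long test will take)
# smartctl -t long /dev/sda
  (starts the test; it runs inside the drive in the background,
   so the system stays usable meanwhile)
# smartctl -l selftest /dev/sda
  (shows the self-test log, including the result once the test
   has finished)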

A bit more info on self-tests:
https://serverfault.com/questions/732423/what-does-smart-testing-do-and-how-does-it-work

The 'smartctl' manpage explains how to run and abort self-tests.
It also warns that a running self-test can degrade the drive's I/O
performance, which may matter on a live server.
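If the load becomes a problem, a running (non-captive) test can be
aborted; assuming /dev/sda again:

# smartctl -X /dev/sda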

