
Re: RAID-1 and disk I/O



On 7/17/21 5:34 AM, Urs Thuermann wrote:
On my server running Debian stretch,


You should consider upgrading to Debian 10 -- more people run that and you will get better support.


I migrated to FreeBSD.


the storage setup is as follows:
Two identical SATA disks with 1 partition on each drive spanning the
whole drive, i.e. /dev/sda1 and /dev/sdb1.  Then, /dev/sda1 and
/dev/sdb1 form a RAID-1 /dev/md0 with LVM on top of it.


ext4?  That lacks integrity checking.


btrfs?  That has integrity checking, but requires periodic balancing.
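
If you do run btrfs, a periodic balance is typically something along these lines (the usage filters and the mount point are only an illustration, not a recommendation for your setup):

     # btrfs balance start -dusage=50 -musage=50 /mnt/data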


I use ZFS. That has integrity checking. It is wise to do periodic scrubs to check for problems.
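
For example, assuming a pool named 'tank' (the name is just an illustration); 'zpool status' shows scrub progress and any errors found:

     # zpool scrub tank
     # zpool status tank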


Are both your operating system and your data on this array? I always use a single, small solid-state device for the system drive, configure my hardware so that it is /dev/sda, and use separate drive(s) for data (/dev/sdb, /dev/sdc, etc.). Separating these concerns simplifies system administration and disaster preparedness/recovery.


The disk I/O shows very different usage of the two SATA disks:

     # iostat | grep -E '^[amDL ]|^sd[ab]'
     Linux 5.13.1 (bit)      07/17/21        _x86_64_        (2 CPU)
     avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                3.78    0.00    2.27    0.86    0.00   93.10
     Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
     sdb               4.54        72.16        61.25   54869901   46577068
     sda               3.72        35.53        61.25   27014254   46577068
     md0               5.53       107.19        57.37   81504323   43624519
The data written to the SATA disks is about 7% = (47 GB - 44 GB) / 44 GB
more than to the RAID device /dev/md0.  Is that the expected overhead
for RAID-1 meta data?

But much more noticeable is the difference of data reads of the two
disks, i.e. 55 GB and 27 GB, i.e. roughly twice as much data is read
from /dev/sdb compared to /dev/sda.  Trying to figure out the reason
for this, dmesg didn't give me anything


Getting meaningful information from system monitoring tools is non-trivial. Perhaps 'iostat 600' concurrent with a run of bonnie++. Or, 'iostat 3600 24' during normal operations. Or, 'iostat' dumped to a time-stamped output file run once an hour by a cron job. Beware of using multiple system monitoring tools at the same time -- they may access the same kernel data structures and step on each other.
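
As a rough sketch of that last idea, a crontab entry along these lines would do (the output directory is only an illustration and must already exist; '%' has to be escaped in crontab):

     0 * * * * iostat -k > /var/log/iostat/$(date +\%F-\%H).txt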


but I found the following with
smartctl:

------------------------------------------------------------------------------
# diff -U20 <(smartctl -x /dev/sda) <(smartctl -x /dev/sdb)


Why limit unified context to 20 lines? You may be missing information (I have not counted the differences below). I suggest a much larger context value, so nothing is dropped.
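
For example (the value just needs to exceed the length of the smartctl output):

     # diff -U1000 <(smartctl -x /dev/sda) <(smartctl -x /dev/sdb)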


--- /dev/fd/63	2021-07-17 12:09:00.425352672 +0200
+++ /dev/fd/62	2021-07-17 12:09:00.425352672 +0200
@@ -1,165 +1,164 @@
  smartctl 6.6 2016-05-31 r4324 [x86_64-linux-5.13.1] (local build)
  Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
  Model Family:     Seagate Barracuda 7200.14 (AF)


I burned up both old desktop drives and new enterprise drives when I put them into a server (Samba, CVS) for my SOHO network and ran them 24x7. As my arrays had only one redundant drive (e.g. two drives in RAID1, three drives in RAID5), I had the terrifying realization that I was at risk of losing everything if a second drive failed before I had replaced the first. I upgraded to all enterprise drives, bought a spare enterprise drive and put it on the shelf, built another server, and now replicate periodically to the second server and to tray-mounted old desktop drives used like backup tapes (and rotated on/off site). I should probably put the spare drive into the live server and set it up as a hot spare.


  Device Model:     ST2000DM001-1ER164
-Serial Number:    W4Z171HL
-LU WWN Device Id: 5 000c50 07d3ebd67
+Serial Number:    Z4Z2M4T1
+LU WWN Device Id: 5 000c50 07b21e7db
  Firmware Version: CC25
  User Capacity:    2,000,397,852,160 bytes [2.00 TB]
  Sector Sizes:     512 bytes logical, 4096 bytes physical
  Rotation Rate:    7200 rpm
  Form Factor:      3.5 inches
  Device is:        In smartctl database [for details use: -P show]
  ATA Version is:   ACS-2, ACS-3 T13/2161-D revision 3b
  SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 3.0 Gb/s)


You have a SATA transfer speed mismatch -- 6.0 Gbps drives running at 3.0 Gbps. If your ports are 3 Gbps, fine. If your ports are 6 Gbps, you have bad ports, cables, racks, docks, trays, etc.
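
The negotiated link speed is also visible in the kernel log, e.g.:

     # dmesg | grep -i 'SATA link up'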


  Local Time is:    Sat Jul 17 12:09:00 2021 CEST
  SMART support is: Available - device has SMART capability.
  SMART support is: Enabled
  AAM feature is:   Unavailable
  APM level is:     254 (maximum performance)
  Rd look-ahead is: Enabled
  Write cache is:   Enabled
  ATA Security is:  Disabled, NOT FROZEN [SEC1]
  Wt Cache Reorder: Unavailable
=== START OF READ SMART DATA SECTION ===
  SMART overall-health self-assessment test result: PASSED
General SMART Values:
  Offline data collection status:  (0x82)	Offline data collection activity
  					was completed without error.
  					Auto Offline Data Collection: Enabled.
  Self-test execution status:      (   0)	The previous self-test routine completed
  					without error or no self-test has ever
  					been run.
  Total time to complete Offline
-data collection: 		(   89) seconds.
+data collection: 		(   80) seconds.
  Offline data collection
  capabilities: 			 (0x7b) SMART execute Offline immediate.
  					Auto Offline data collection on/off support.
  					Suspend Offline collection upon new
  					command.
  					Offline surface scan supported.
  					Self-test supported.
  					Conveyance Self-test supported.
  					Selective Self-test supported.
  SMART capabilities:            (0x0003)	Saves SMART data before entering
  					power-saving mode.
  					Supports SMART auto save timer.
  Error logging capability:        (0x01)	Error logging supported.
  					General Purpose Logging supported.
  Short self-test routine
  recommended polling time: 	 (   1) minutes.
  Extended self-test routine
-recommended polling time: 	 ( 213) minutes.
+recommended polling time: 	 ( 211) minutes.
  Conveyance self-test routine
  recommended polling time: 	 (   2) minutes.
  SCT capabilities: 	       (0x1085)	SCT Status supported.
SMART Attributes Data Structure revision number: 10
  Vendor Specific SMART Attributes with Thresholds:
  ID# ATTRIBUTE_NAME          FLAGS    VALUE WORST THRESH FAIL RAW_VALUE
-  1 Raw_Read_Error_Rate     POSR--   119   099   006    -    208245592
-  3 Spin_Up_Time            PO----   097   096   000    -    0
-  4 Start_Stop_Count        -O--CK   100   100   020    -    71
+  1 Raw_Read_Error_Rate     POSR--   117   099   006    -    117642848
+  3 Spin_Up_Time            PO----   096   096   000    -    0
+  4 Start_Stop_Count        -O--CK   100   100   020    -    647
    5 Reallocated_Sector_Ct   PO--CK   100   100   010    -    0
-  7 Seek_Error_Rate         POSR--   087   060   030    -    471403407
-  9 Power_On_Hours          -O--CK   042   042   000    -    51289
+  7 Seek_Error_Rate         POSR--   086   060   030    -    450781243
+  9 Power_On_Hours          -O--CK   051   051   000    -    43740


Seek_Error_Rate indicates those drives have seen better days, but are doing their job.


Power_On_Hours indicates those drives have seen lots of use.


   10 Spin_Retry_Count        PO--C-   100   100   097    -    0
- 12 Power_Cycle_Count       -O--CK   100   100   020    -    36
-183 Runtime_Bad_Block       -O--CK   093   093   000    -    7
+ 12 Power_Cycle_Count       -O--CK   100   100   020    -    29
+183 Runtime_Bad_Block       -O--CK   097   097   000    -    3


Power_Cycle_Count indicates that the machine runs 24x7 for long periods without rebooting.


Runtime_Bad_Block looks acceptable.


  184 End-to-End_Error        -O--CK   100   100   099    -    0
  187 Reported_Uncorrect      -O--CK   100   100   000    -    0


End-to-End_Error and Reported_Uncorrect look perfect. The drives should not have corrupted or lost any data (other hardware and/or events may have).


-188 Command_Timeout         -O--CK   100   094   000    -    8 14 17
-189 High_Fly_Writes         -O-RCK   098   098   000    -    2
-190 Airflow_Temperature_Cel -O---K   056   049   045    -    44 (Min/Max 43/45)
+188 Command_Timeout         -O--CK   100   100   000    -    0 0 0
+189 High_Fly_Writes         -O-RCK   097   097   000    -    3
+190 Airflow_Temperature_Cel -O---K   057   050   045    -    43 (Min/Max 42/44)
  191 G-Sense_Error_Rate      -O--CK   100   100   000    -    0
-192 Power-Off_Retract_Count -O--CK   100   100   000    -    68
-193 Load_Cycle_Count        -O--CK   100   100   000    -    1508
-194 Temperature_Celsius     -O---K   044   051   000    -    44 (0 17 0 0 0)
+192 Power-Off_Retract_Count -O--CK   100   100   000    -    647
+193 Load_Cycle_Count        -O--CK   100   100   000    -    1222
+194 Temperature_Celsius     -O---K   043   050   000    -    43 (0 17 0 0 0)


Airflow_Temperature_Cel and Temperature_Celsius are higher than I like. I suggest that you dress cables, add fans, etc., to improve cooling.


  197 Current_Pending_Sector  -O--C-   100   100   000    -    0
  198 Offline_Uncorrectable   ----C-   100   100   000    -    0
-199 UDMA_CRC_Error_Count    -OSRCK   200   197   000    -    11058
-240 Head_Flying_Hours       ------   100   253   000    -    51241h+51m+36.964s
-241 Total_LBAs_Written      ------   100   253   000    -    48056776364
-242 Total_LBAs_Read         ------   100   253   000    -    311423095933
+199 UDMA_CRC_Error_Count    -OSRCK   200   200   000    -    29
+240 Head_Flying_Hours       ------   100   253   000    -    43708h+01m+21.667s
+241 Total_LBAs_Written      ------   100   253   000    -    28889348871
+242 Total_LBAs_Read         ------   100   253   000    -    329548763597
                              ||||||_ K auto-keep
                              |||||__ C event count
                              ||||___ R error rate
                              |||____ S speed/performance
                              ||_____ O updated online
                              |______ P prefailure warning


UDMA_CRC_Error_Count for /dev/sda looks worrisome, both compared to /dev/sdb and compared to reports for my drives.


Total_LBAs_Written for /dev/sda is almost double that of /dev/sdb. Were those drives both new when put into RAID1?


  General Purpose Log Directory Version 1
  SMART           Log Directory Version 1 [multi-sector log support]
  Address    Access  R/W   Size  Description
  0x00       GPL,SL  R/O      1  Log Directory
  0x01           SL  R/O      1  Summary SMART error log
  0x02           SL  R/O      5  Comprehensive SMART error log
  0x03       GPL     R/O      5  Ext. Comprehensive SMART error log
  0x06           SL  R/O      1  SMART self-test log
  0x07       GPL     R/O      1  Extended self-test log
  0x09           SL  R/W      1  Selective self-test log
  0x10       GPL     R/O      1  SATA NCQ Queued Error log
  0x11       GPL     R/O      1  SATA Phy Event Counters log
  0x21       GPL     R/O      1  Write stream error log
  0x22       GPL     R/O      1  Read stream error log
  0x30       GPL,SL  R/O      9  IDENTIFY DEVICE data log
  0x80-0x9f  GPL,SL  R/W     16  Host vendor specific log
  0xa1       GPL,SL  VS      20  Device vendor specific log
  0xa2       GPL     VS    4496  Device vendor specific log
  0xa8       GPL,SL  VS     129  Device vendor specific log
  0xa9       GPL,SL  VS       1  Device vendor specific log
  0xab       GPL     VS       1  Device vendor specific log
  0xb0       GPL     VS    5176  Device vendor specific log
  0xbe-0xbf  GPL     VS   65535  Device vendor specific log
  0xc0       GPL,SL  VS       1  Device vendor specific log
  0xc1       GPL,SL  VS      10  Device vendor specific log
-0xc3       GPL,SL  VS       8  Device vendor specific log
  0xe0       GPL,SL  R/W      1  SCT Command/Status
  0xe1       GPL,SL  R/W      1  SCT Data Transfer
SMART Extended Comprehensive Error Log Version: 1 (5 sectors)
  No Errors Logged
SMART Extended Self-test Log Version: 1 (1 sectors)
  Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
-# 1  Short offline       Completed without error       00%     21808         -
+# 1  Short offline       Completed without error       00%     14254         -


LifeTime for /dev/sda is ~50% higher than /dev/sdb. So, those drives were not both new when put into RAID1?


  SMART Selective self-test log data structure revision number 1
   SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
      1        0        0  Not_testing
      2        0        0  Not_testing
      3        0        0  Not_testing
      4        0        0  Not_testing
      5        0        0  Not_testing
  Selective self-test flags (0x0):
    After scanning selected spans, do NOT read-scan remainder of disk.
  If Selective self-test is pending on power-up, resume after 0 minute delay.
SCT Status Version: 3
  SCT Version (vendor specific):       522 (0x020a)
  SCT Support Level:                   1
  Device State:                        Active (0)
-Current Temperature:                    44 Celsius
-Power Cycle Min/Max Temperature:     43/45 Celsius
-Lifetime    Min/Max Temperature:     16/51 Celsius
+Current Temperature:                    43 Celsius
+Power Cycle Min/Max Temperature:     42/44 Celsius
+Lifetime    Min/Max Temperature:     16/50 Celsius
  Under/Over Temperature Limit Count:   0/0
SCT Data Table command not supported SCT Error Recovery Control command not supported Device Statistics (GP/SMART Log 0x04) not supported SATA Phy Event Counters (GP Log 0x11)
  ID      Size     Value  Description
  0x000a  2            8  Device-to-host register FISes sent due to a COMRESET
  0x0001  2            0  Command failed due to ICRC error
  0x0003  2            0  R_ERR response for device-to-host data FIS
  0x0004  2            0  R_ERR response for host-to-device data FIS
  0x0006  2            0  R_ERR response for device-to-host non-data FIS
  0x0007  2            0  R_ERR response for host-to-device non-data FIS
------------------------------------------------------------------------------


Here, the noticeable lines are IMHO

     Raw_Read_Error_Rate     (208245592 vs. 117642848)


The smartctl(8) RAW_VALUE column is tough to read. Sometimes it looks like an integer. Other times, it looks like a bitmap or big-endian/little-endian mix-up. The VALUE column is easier. Both 119 and 117 are greater than 100, so I would not worry.


     Command_Timeout         (8 14 17 vs. 0 0 0)


I do not know how to read those numbers. /dev/sda has non-zero values and /dev/sdb has zero values. This supports a theory of communications problems for /dev/sda.


     UDMA_CRC_Error_Count    (11058 vs. 29)


Agreed.


Do these numbers indicate a serious problem with my /dev/sda drive?


I'd say "problem" at this point, but not yet "serious". Run reports regularly and watch for growth of problematic statistics -- Raw_Read_Error_Rate, Seek_Error_Rate, Command_Timeout, UDMA_CRC_Error_Count, etc.
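
One way to pull out just those attributes for trending is something like this (a rough sketch; the IDs are the ones discussed above, and $4 is the normalized VALUE column):

     # smartctl -A /dev/sda | awk '$1 ~ /^(1|7|188|199)$/ {print $1, $2, $4}'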


And is it a disk problem or a transmission problem?
UDMA_CRC_Error_Count sounds like a cable problem to me, right?


A/B testing (swap the SATA cables at the drives) and periodic testing/tracking over an extended period with everything else known good (racks, cables, ports, HBAs, etc.) could tell you if there is a drive problem.


BTW, for a year or so I had problems with /dev/sda every couple of months,
where the kernel set the drive status in the RAID array to failed.  I
could always fix the problem by hot-plugging out the drive, wiggling
the SATA cable, re-inserting and re-adding the drive (without any
impact on the running server).


I replaced all of my SATA I/II/III cables in all of my computers about a year ago with Cable Matters black 6G cables with locking straight and/or 90 degree connectors. Life got much better.


Now, I haven't seen the problem for
quite a while.  My suspicion is that the cable is still not working very
well, but failures are not frequent enough to set the drive to "failed"
status.


I try to run drive diagnostics on a monthly basis and save the reports in a version control system. The data is available if/when I put in the effort to analyze it. (There may be FOSS to automate one or more of these chores.)
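
A minimal sketch of that kind of chore, assuming the report directory already exists and is a git working copy (paths and drive names are only an illustration), run monthly from cron:

     #!/bin/sh
     # save time-stamped SMART reports and commit them
     dir=/root/smart-reports
     for d in sda sdb; do
         smartctl -x /dev/$d > "$dir/$d-$(date +%F).txt"
     done
     cd "$dir" && git add -A && git commit -m "SMART reports $(date +%F)"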


David

