
Re: Speed Problem Copying Files



On 09.05.19 at 14:43, Lothar Schilling wrote:
> On 09.05.2019 at 13:27, Martin wrote:
>> [..]
>>> hdparm -tT /dev/sda
>>> /dev/sda:
>>>  Timing cached reads:   13348 MB in  2.00 seconds = 6683.42 MB/sec
>>>  Timing buffered disk reads: 1014 MB in  3.00 seconds = 337.72 MB/sec
>>>
>>> iotop -o (for rsync and cp)
>>> Total DISK READ :       0.00 B/s | Total DISK WRITE :     476.15 K/s
>>> Actual DISK READ:       0.00 B/s | Actual DISK WRITE:     487.86 K/s
>>>   TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
>>> 19531 be/4 root        0.00 B/s  476.15 K/s  0.00 % 99.24 % rsync
>>> --info=progress2 /daten/testfile /daten/testfile2
>>>
>>> iotop -o (for dd)
>>> Total DISK READ :       0.00 B/s | Total DISK WRITE :     297.68 M/s
>>> Actual DISK READ:       0.00 B/s | Actual DISK WRITE:     297.68 M/s
>>>   TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
>>> 19557 be/4 root        0.00 B/s  297.68 M/s  0.00 % 99.99 % dd
>>> if=/dev/zero of=/daten/testfile bs=1G count=10 oflag=direct
>> Please show us the output of 'dd if=/daten/testfile bs=1G oflag=direct of=/dev/null'.
>> If that is as slow as the ~480 K/s above, check your disk's health status, e.g. with smartmontools or some disk-utility software.
>>
>> Martin
>>
> Fast enough...
> 
> dd if=/daten/testfile bs=1G oflag=direct of=/daten/testfile2
> 10+0 records in
> 10+0 records out
> 10737418240 bytes (11 GB, 10 GiB) copied, 72.7297 s, 148 MB/s
> 
> dd if=/daten/testfile of=/dev/null
> 20971520+0 records in
> 20971520+0 records out
> 10737418240 bytes (11 GB, 10 GiB) copied, 36.6887 s, 293 MB/s
> 

Well, this looks quite consistent.
How does your system load look, e.g. in top? Do you see high 'wait' or 'system' percentages while the copy is running?
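If you have the sysstat package installed (an assumption on my part), something like the following should show whether the disk itself is the bottleneck while the copy runs; I would look at iostat's %util and await columns and at the 'wa' column in vmstat:

  # extended per-device statistics, refreshed every second
  iostat -x 1 /dev/sda
  # quick overall view of CPU iowait (the 'wa' column)
  vmstat 1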

Then there is a thing called 'systemd.resource-control', which I know close to nothing about. Can systemd throttle a copy command? Maybe someone on this list can tell.
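Only as a rough sketch, and assuming a cgroup-v2 setup (on older systems the directives are called BlockIO*Bandwidth instead), this is how I would check whether such a limit is in place, and how one could deliberately impose one to compare the symptoms:

  # look for IO bandwidth limits on the slices the copy could run under
  systemctl show user.slice | grep -i bandwidth
  systemctl show system.slice | grep -i bandwidth
  # watch per-cgroup IO while the copy runs (needs IO accounting enabled)
  systemd-cgtop
  # a deliberately throttled copy, just for comparison
  systemd-run --scope -p IOWriteBandwidthMax="/dev/sda 1M" \
      rsync --info=progress2 /daten/testfile /daten/testfile2

If your normal rsync shows numbers similar to the deliberately throttled one, resource control would at least be a suspect.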

