
Re: Increased read IO wait times after Bullseye upgrade



On Fri, Nov 11, 2022, 7:27 PM Gareth Evans <donotspam@fastmail.fm> wrote:


On 11 Nov 2022, at 16:59, Vukovics Mihály <vm@informatik.hu> wrote:



Hi Gareth,

dmesg is "clean", there disks are not shared in any way and there is no virtualization layer installed.

Hello, but the message was from Nicholas :)

Looking at your first graph, I noticed the upgrade seems to introduce a broad correlation between read and write iowait.  Write wait before the uptick is fairly consistent and apparently unrelated to read wait.

Does that suggest anything to anyone?


What I see in the first graph that's odd is that only read latency really increased. The other wait times remained pretty stable, with just a small uptick and greater variance. That graph apparently shows only the sda drive.

What could bring about a jump in read latency only, yet not in write latency or device wait time? It seems to me that some buffer, filesystem parameter, or device queue must have changed size at the upgrade.
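For what it's worth, here is a minimal sketch (just an illustration, not a definitive check) that dumps a few standard block-layer queue settings per drive from sysfs; the sd* glob is an assumption about your device names. Comparing its output against a machine that hasn't been upgraded yet would show whether something like read_ahead_kb or nr_requests changed with the upgrade:

#!/usr/bin/env python3
# Dump a few block-layer queue settings per drive so they can be
# compared before/after the upgrade (or against another host).
import glob
import os

# Attributes that commonly influence read behaviour; all are standard
# sysfs files under /sys/block/<dev>/queue/.
ATTRS = ["scheduler", "read_ahead_kb", "nr_requests",
         "max_sectors_kb", "rotational"]

for queue_dir in sorted(glob.glob("/sys/block/sd*/queue")):
    dev = queue_dir.split("/")[3]
    values = []
    for attr in ATTRS:
        try:
            with open(os.path.join(queue_dir, attr)) as f:
                values.append(f"{attr}={f.read().strip()}")
        except OSError:
            values.append(f"{attr}=?")
    print(dev, " ".join(values))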

In your atop stats, one drive (sdc) is ~50% more "busy" than the others, has double the number of writes, a higher avq value and lower avio time.  Is it normal for raid devices to vary this much?

Is this degree of difference consistent over time?  Might atop stats during some tests, e.g. fio, be useful?
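To put a number on that over time, something like the sketch below (assuming the standard /proc/diskstats field layout; the device list and sample interval are placeholders) samples each drive's completed IOs and time spent, then prints the average per-IO read and write latency over the interval, which is roughly what atop's avio reflects:

#!/usr/bin/env python3
# Sample /proc/diskstats twice and print per-device average read/write
# latency (ms per completed IO) over the interval.
import time

DEVICES = {"sda", "sdb", "sdc", "sdd"}   # adjust to the RAID members
INTERVAL = 10                            # seconds between samples

def snapshot():
    stats = {}
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            name = fields[2]
            if name in DEVICES:
                # reads completed, ms reading, writes completed, ms writing
                stats[name] = (int(fields[3]), int(fields[6]),
                               int(fields[7]), int(fields[10]))
    return stats

before = snapshot()
time.sleep(INTERVAL)
after = snapshot()

for dev in sorted(DEVICES & after.keys()):
    r0, rms0, w0, wms0 = before[dev]
    r1, rms1, w1, wms1 = after[dev]
    reads, writes = r1 - r0, w1 - w0
    r_lat = (rms1 - rms0) / reads if reads else 0.0
    w_lat = (wms1 - wms0) / writes if writes else 0.0
    print(f"{dev}: {reads} reads avg {r_lat:.2f} ms, "
          f"{writes} writes avg {w_lat:.2f} ms")

Running it a few times, with and without a fio workload going, would show whether sdc's higher busy/avq figures persist or only appear under certain loads.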

Have you done any filesystem checks? 

Thanks,
Gareth




