
Bug#540012: Low perf again.



Having run 2.6.30 for 6 days under the usual load:

$ dd if=/dev/zero of=./test.img bs=572041216 count=1
1+0 records in
1+0 records out
572041216 bytes (572 MB) copied, 22.033 seconds, 26.0 MB/s
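
For comparison, a variant of the same test that accounts for page-cache
writeback, or bypasses the cache entirely. This is only a sketch; the 1M x 545
figures merely approximate the 572041216-byte size used above:

$ dd if=/dev/zero of=./test.img bs=1M count=545 conv=fdatasync   # flush data before reporting the rate
$ dd if=/dev/zero of=./test.img bs=1M count=545 oflag=direct     # bypass the page cache altogether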

During dd all CPU cores (I have a Q6600) are busy doing I/O (according to top).
This seems to happen under RAM starvation: I have 2 VirtualBox guests
running (Firefox and Xorg also eat memory). Here are the memory reports
(a per-core capture sketch follows them):

Mem:   4061884k total,  3509840k used,   552044k free,     2992k buffers
Swap:  9936160k total,   796940k used,  9139220k free,   405868k cached


$ free
             total       used       free     shared    buffers     cached
Mem:       4061884    3571860     490024          0       4516     430576
-/+ buffers/cache:    3136768     925116
Swap:      9936160     762060    9174100
$ vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  1 761984 476016   5456 443268    1    2    21    24    5   16  4  4 91  1
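
To back the "all cores busy doing io" observation with numbers, per-core
%iowait could be captured while dd runs. A minimal sketch, assuming the
sysstat mpstat tool is installed and "mpstat.log" is just a scratch file:

$ mpstat -P ALL 1 30 > mpstat.log &             # per-core CPU breakdown, 1-second samples
$ dd if=/dev/zero of=./test.img bs=572041216 count=1
$ grep -A 6 Average mpstat.log                  # %iowait column shows time spent blocked on I/O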

During the test no other CPU- or I/O-intensive applications were running. You
can see %util is 0,00 at first, then it grows to 100% as I run dd.

$ iostat -xdk 1
Linux 2.6.30-1-amd64 (pc-paul)  12.08.2009      _x86_64_        (4 CPU)

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0,00     0,00    0,00    0,00     0,00     0,00     0,00     0,00    0,00   0,00   0,00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0,00  1797,00    4,00   48,00    16,00  4300,00   166,00     4,13   12,92   1,54   8,00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0,00 10039,00    1,00  208,00     4,00 20232,00   193,65   111,99  289,97   4,57  95,60

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               3,00  7305,00    7,00   95,00    40,00 31848,00   625,25   141,63  668,04   9,80 100,00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0,00 11273,00    0,00  131,00     0,00 46428,00   708,82   145,53  559,66   7,79 102,00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0,00  3414,00    5,00  121,00    52,00 25248,00   401,59   148,50 1014,38   7,78  98,00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0,00  3300,00    6,00  184,00    84,00 14244,00   150,82   150,96 1271,14   5,26 100,00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0,00  5441,00    0,00  190,00     0,00 21544,00   226,78   146,50  790,40   5,26 100,00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0,00 11737,00    0,00  243,00     0,00 27644,00   227,52   145,88  907,36   4,12 100,00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0,00 11278,00    0,00  160,00     0,00 42060,00   525,75   142,84  751,67   6,25 100,00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0,00  2029,00   22,00  103,00   576,00 31096,00   506,75   139,27  889,86   8,00 100,00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda              10,00  5433,00  132,00  108,00  2180,00 12148,00   119,40   131,40  585,70   4,17 100,00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               3,00  2583,00   41,00   71,00   548,00 14648,00   271,36   145,98  637,57   8,93 100,00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0,00 12295,00   30,00  133,00   400,00 26524,00   330,36   142,97 1167,46   6,13 100,00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0,00 15302,00   28,00  245,00   552,00 70736,00   522,26   263,49 1192,06   6,81 186,00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0,00  2500,00    0,00    8,00     0,00  3936,00   984,00    22,10 1179,00  17,50  14,00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0,00  7938,00   15,00  112,00   152,00 28560,00   452,16   143,15  835,56   7,87 100,00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               3,00   642,00  113,00   32,00  1468,00  6196,00   105,71   121,22  372,39   6,90 100,00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0,00    15,00   58,00   89,00   692,00 28660,00   399,35   103,12 1148,60   6,80 100,00
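
For a rough summary, the %util column from a run like the one above could be
averaged with a one-liner. The field number and the locale decimal comma are
assumptions taken from this particular iostat output format:

$ iostat -xdk 1 30 | awk '/^sda /{gsub(",", ".", $12); n++; s+=$12} END{if (n) print s/n}'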


$ cat /sys/block/sda/queue/scheduler
noop anticipatory deadline [cfq]
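
To check whether the cfq elevator is a factor, the scheduler can be switched at
runtime and the dd test repeated. A sketch only (needs root, takes effect
immediately, and is reverted afterwards):

# echo deadline > /sys/block/sda/queue/scheduler
(rerun the dd test and compare the MB/s figure)
# echo cfq > /sys/block/sda/queue/scheduler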

I can't reproduce this on 2.6.31 (memory-hog applications need to be running).
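
To trigger the RAM starvation without keeping the VirtualBox guests up, a
throwaway memory hog could stand in for them. A sketch only, assuming the
"stress" package is installed; the ~3G figure is a guess meant to push this
4G box toward swap:

$ stress --vm 2 --vm-bytes 1500M --timeout 120 &   # two workers dirtying ~3G of anonymous memory
$ dd if=/dev/zero of=./test.img bs=572041216 count=1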
-- 
rmrfchik.


