
Bug#551786: marked as done (linux-image-2.6.26-2-686-bigmem: disk write performance regression)



Your message dated Mon, 1 Feb 2010 15:40:00 +0100
with message-id <20100201144000.GY9849@stro.at>
and subject line Re: Bug#551786: linux-image-2.6.26-2-686-bigmem: disk write performance regression
has caused the Debian Bug report #551786,
regarding linux-image-2.6.26-2-686-bigmem: disk write performance regression
to be marked as done.

This means that you claim that the problem has been dealt with.
If this is not the case it is now your responsibility to reopen the
Bug report if necessary, and/or fix the problem forthwith.

(NB: If you are a system administrator and have no idea what this
message is talking about, this may indicate a serious mail system
misconfiguration somewhere. Please contact owner@bugs.debian.org
immediately.)


-- 
551786: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=551786
Debian Bug Tracking System
Contact owner@bugs.debian.org with problems
--- Begin Message ---
Package: linux-image-2.6.26-2-686-bigmem
Version: 2.6.26-19
Severity: normal
File: /boot/vmlinuz-2.6.26-2-686-bigmem
Tags: patch

We encountered a serious disk write performance regression after
upgrading from the etch kernel (2.6.18) to the etchnhalf kernel
(2.6.24).  Co-workers of mine stumbled over the issue using OpenLDAP
(with the Berkeley DB backend) while restoring about 60,000 medium-sized
records with slapadd.  As soon as they define an "index" with "sub" in
their slapd.conf, the restore performance goes south and the load
increases dramatically.  The restore won't be finished after half an
hour, while it's done within a few minutes when running 2.6.18.  We
were able to reproduce the problem on systems running lenny (with the
stock 2.6.26 kernel), on different hardware, with different file
systems, and with different OpenLDAP versions (we tested this with
OpenLDAP 2.3.30, 2.4.13, 2.4.16, 2.4.18, and 2.4.19).  However, I'm not
aware of another way to trigger the issue, sorry.
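
For what it's worth, a configuration along these lines reproduces it
(the attribute name and the file paths are illustrative, not the exact
ones we used):

    # slapd.conf: any attribute indexed with "sub" triggers the slowdown
    index cn sub

    # restore the dump with slapd stopped
    slapadd -f /etc/ldap/slapd.conf -l records.ldif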

The problem is not Debian-specific.  It was introduced with the kernel
Git commit v2.6.21-1-g07db59b, which lowered the default dirty-writeback
limits, and was fixed with commit v2.6.29-4-g1b5e62b, which raised
these settings again (though not to the original values).  Backporting
the new settings from v2.6.29-4-g1b5e62b fixes the problem for us.  The
attached patch should be applicable to both 2.6.26 from lenny and 2.6.24
from etchnhalf.
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -66,7 +66,7 @@ static inline long sync_writeback_pages(void)
 /*
  * Start background writeback (via pdflush) at this percentage
  */
-int dirty_background_ratio = 5;
+int dirty_background_ratio = 10;
 
 /*
  * free highmem will not be subtracted from the total free memory
@@ -77,7 +77,7 @@ int vm_highmem_is_dirtyable;
 /*
  * The generator of dirty data starts writeback at this percentage
  */
-int vm_dirty_ratio = 10;
+int vm_dirty_ratio = 20;
 
 /*
  * The interval between `kupdate'-style writebacks, in jiffies

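For reference, the names above are git-describe output, so both upstream
commits can be inspected in a clone of Linus's mainline tree via the
abbreviated hashes after the "g":

    git show 07db59b   # lowered the default dirty-writeback limits
    git show 1b5e62b   # raised them again (not to the original values)
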
--- End Message ---
--- Begin Message ---
On Wed, 21 Oct 2009, Ben Hutchings wrote:

> On Tue, 2009-10-20 at 19:03 +0200, Holger Weiss wrote:
> > Package: linux-image-2.6.26-2-686-bigmem
> > Version: 2.6.26-19
> > Severity: normal
> > File: /boot/vmlinuz-2.6.26-2-686-bigmem
> > Tags: patch
> > 
> > We encountered a serious disk write performance regression after
> > upgrading from the etch kernel (2.6.18) to the etchnhalf kernel
> > (2.6.24).
> [...]
> > The attached patch should be applicable to both 2.6.26 from lenny and
> > 2.6.24 from etchnhalf.
> 
> Although it is apparently beneficial for your workload, it might have
> negative effects on others.  This change has not been applied to the
> 2.6.27 stable series so I'm not convinced that it is suitable material
> for a Debian stable update either.
> 
> These variables can be changed by sysctl, so I recommend that instead of
> patching your kernel you add these lines to /etc/sysctl.conf:
> 
> vm.dirty_background_ratio = 10
> vm.dirty_ratio = 20

Thanks for your report; closing, as no further testing or benchmarks
gave a strong hint that this patch is indeed backportable.
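
For the record, the settings Ben suggests can also be applied to a
running system without a reboot:

    sysctl -w vm.dirty_background_ratio=10
    sysctl -w vm.dirty_ratio=20

and lines added to /etc/sysctl.conf are reloaded with "sysctl -p".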


--- End Message ---
