
Bug#394392: marked as done (msync() in recent kernels fails LSB)



Your message dated Thu, 23 Nov 2006 19:31:53 +0000
with message-id <E1GnKIb-0006gs-Jn@ries.debian.org>
and subject line Bug#394392: fixed in linux-2.6 2.6.18-6
has caused the attached Bug report to be marked as done.

This means that you claim that the problem has been dealt with.
If this is not the case it is now your responsibility to reopen the
Bug report if necessary, and/or fix the problem forthwith.

(NB: If you are a system administrator and have no idea what I am
talking about this indicates a serious mail system misconfiguration
somewhere.  Please contact me immediately.)

Debian bug tracking system administrator
(administrator, Debian Bugs database)

--- Begin Message ---
Package: linux-image-2.6.17-2-686
Version: 2.6.17-9
Severity: important

From a recent run of the LSB 3.1 tests:

10|852 /tset/LSB.os/mfiles/msync_P/T.msync_P 22:58:49|TC Start, scenario ref 858-0
15|852 3.6-lite 9|TCM Start
400|852 7 1 22:59:13|IC Start
200|852 7 22:59:13|TP Start
520|852 7 00008662 1 1|msync() did not return -1, returned 0
220|852 7 1 22:59:13|FAIL
410|852 7 1 22:59:13|IC End
80|852 0 22:59:15|TC End, scenario ref 858-0

The test mmap()'s three pages from a large file read-write, munmap()'s
the middle page, and then tries to msync() the first two pages, both in
synchronous and asynchronous modes.  Both attempts should fail, because
one of the pages in the range is not mapped.  Starting with kernel
2.6.17, at least one of the msync() calls succeeded.  I've confirmed the
failure happens in 2.6.18 i386 kernels, and on powerpc and amd64 with
2.6.17 kernels.
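
For reference, a minimal standalone sketch of the failing scenario (not the
actual tet-based LSB test case) could look like the following, assuming a
scratch file named "testfile" of at least three pages in the current
directory:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);
    char *map;
    int fd = open("testfile", O_RDWR);  /* needs to be >= 3 pages long */

    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Map three pages shared read-write, then unmap the middle one. */
    map = mmap(NULL, 3 * page, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    if (munmap(map + page, page) < 0) {
        perror("munmap");
        return 1;
    }

    /*
     * The first two pages of the range now span an unmapped hole, so
     * POSIX/LSB require both of these calls to fail with ENOMEM.
     */
    if (msync(map, 2 * page, MS_SYNC) == 0)
        printf("MS_SYNC: unexpectedly returned 0\n");
    else
        printf("MS_SYNC: failed, errno=%d (%s)\n", errno,
               errno == ENOMEM ? "ENOMEM, as required" : "unexpected");

    if (msync(map, 2 * page, MS_ASYNC) == 0)
        printf("MS_ASYNC: unexpectedly returned 0\n");
    else
        printf("MS_ASYNC: failed, errno=%d (%s)\n", errno,
               errno == ENOMEM ? "ENOMEM, as required" : "unexpected");

    return 0;
}

On an affected kernel, at least one of these calls returns 0 instead of
failing with ENOMEM, which is what the LSB test output above reports.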

I've been able to trace the bug to commit
707c21c848deeb0200ba3f07e4ba90e6dc419c2f in git.

FSG internal testing showed that Fedora Core 5's 2.6.18 kernel does not
fail in the same way.  I believe I've traced it to a backported change
from 2.6.19 development.  The specific commit touching msync() is
204ec841fbea3e5138168edbc3a76d46747cc987 in git; it relies on several
commits immediately preceding it.  I've built Linus's tree on amd64, and
it passes the test.  I have not, however, built a 2.6.18 kernel with
this patch and tested it, though it's the only patch in the Fedora
kernel which touches the msync() code.

The patch from the Fedora kernel is attached.  It is fairly high-impact,
though; if a less invasive patch is needed, please let me know.

Marked "important" because LSB 3.1 compatibility has been identified as
a release goal.

Date: Wed, 19 Jul 2006 00:03:33 +0200
From: Peter Zijlstra <pzijlstr@redhat.com>
Subject: Re: [RHEL5][PATCH 1/8] mm: tracking shared dirty pages

Respin against current Rawhide kernel.

The other patches apply with a little offset/fuzz but end up in the right place.
It even compiles :-)

Don, is this enough, or would you like me to repost the whole series
(minus 8/8) fuzzless?

---


From: Peter Zijlstra <a.p.zijlstra@chello.nl>

Tracking of dirty pages in shared writeable mmap()s.

The idea is simple: write protect clean shared writeable pages, catch the
write-fault, make writeable and set dirty.  On page write-back clean all
the PTE dirty bits and write protect them once again.

The implementation is a tad harder, mainly because the default
backing_dev_info capabilities were too loosely maintained. Hence it is
not enough to test the backing_dev_info for cap_account_dirty.

The current heuristic is as follows, a VMA is eligible when:
 - it's shared writeable
    (vm_flags & (VM_WRITE|VM_SHARED)) == (VM_WRITE|VM_SHARED)
 - it is not a 'special' mapping
    (vm_flags & (VM_PFNMAP|VM_INSERTPAGE)) == 0
 - the backing_dev_info is cap_account_dirty
    mapping_cap_account_dirty(vma->vm_file->f_mapping)
 - f_op->mmap() didn't change the default page protection

Pages from remap_pfn_range() are explicitly excluded because their
COW semantics are already horrid enough (see vm_normal_page() in
do_wp_page()) and because they don't have a backing store anyway.

mprotect() is taught about the new behaviour as well. However it
fudges the last condition.

Cleaning the pages on write-back is done with page_mkclean(), a new
rmap call. It cleans and wrprotects all PTEs of dirty accountable
pages.

Finally, in fs/buffer.c:try_to_free_buffers(), remove clear_page_dirty()
from under ->private_lock. This seems to be safe, since ->private_lock
is used to serialize access to the buffers, not the page itself.
This is needed because clear_page_dirty() will call into page_mkclean()
and would thereby violate locking order.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
---

 fs/buffer.c          |    2 -
 include/linux/mm.h   |   34 ++++++++++++++++++++++++++
 include/linux/rmap.h |    8 ++++++
 mm/memory.c          |   29 ++++++++++++++++++----
 mm/mmap.c            |   10 +++----
 mm/mprotect.c        |   21 ++++++----------
 mm/page-writeback.c  |   17 ++++++++++---
 mm/rmap.c            |   65 +++++++++++++++++++++++++++++++++++++++++++++++++++
 8 files changed, 156 insertions(+), 30 deletions(-)

Index: latest/fs/buffer.c
===================================================================
--- latest.orig/fs/buffer.c
+++ latest/fs/buffer.c
@@ -2984,6 +2984,7 @@ int try_to_free_buffers(struct page *pag
 
 	spin_lock(&mapping->private_lock);
 	ret = drop_buffers(page, &buffers_to_free);
+	spin_unlock(&mapping->private_lock);
 	if (ret) {
 		/*
 		 * If the filesystem writes its buffers by hand (eg ext3)
@@ -2995,7 +2996,6 @@ int try_to_free_buffers(struct page *pag
 		 */
 		clear_page_dirty(page);
 	}
-	spin_unlock(&mapping->private_lock);
 out:
 	if (buffers_to_free) {
 		struct buffer_head *bh = buffers_to_free;
Index: latest/include/linux/mm.h
===================================================================
--- latest.orig/include/linux/mm.h
+++ latest/include/linux/mm.h
@@ -15,6 +15,7 @@
 #include <linux/fs.h>
 #include <linux/mutex.h>
 #include <linux/debug_locks.h>
+#include <linux/backing-dev.h>
 
 struct mempolicy;
 struct anon_vma;
@@ -801,6 +802,39 @@ struct shrinker;
 extern struct shrinker *set_shrinker(int, shrinker_t);
 extern void remove_shrinker(struct shrinker *shrinker);
 
+/*
+ * Some shared mappigns will want the pages marked read-only
+ * to track write events. If so, we'll downgrade vm_page_prot
+ * to the private version (using protection_map[] without the
+ * VM_SHARED bit).
+ */
+static inline int vma_wants_writenotify(struct vm_area_struct *vma)
+{
+	unsigned int vm_flags = vma->vm_flags;
+
+	/* If it was private or non-writable, the write bit is already clear */
+	if ((vm_flags & (VM_WRITE|VM_SHARED)) != ((VM_WRITE|VM_SHARED)))
+		return 0;
+
+	/* The backer wishes to know when pages are first written to? */
+	if (vma->vm_ops && vma->vm_ops->page_mkwrite)
+		return 1;
+
+	/* The open routine did something to the protections already? */
+	if (pgprot_val(vma->vm_page_prot) !=
+	    pgprot_val(protection_map[vm_flags &
+		    (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)]))
+		return 0;
+
+	/* Specialty mapping? */
+	if (vm_flags & (VM_PFNMAP|VM_INSERTPAGE))
+		return 0;
+
+	/* Can the mapping track the dirty pages? */
+	return vma->vm_file && vma->vm_file->f_mapping &&
+		mapping_cap_account_dirty(vma->vm_file->f_mapping);
+}
+
 extern pte_t *FASTCALL(get_locked_pte(struct mm_struct *mm, unsigned long addr, spinlock_t **ptl));
 
 int __pud_alloc(struct mm_struct *mm, pgd_t *pgd, unsigned long address);
Index: latest/include/linux/rmap.h
===================================================================
--- latest.orig/include/linux/rmap.h
+++ latest/include/linux/rmap.h
@@ -103,6 +103,14 @@ pte_t *page_check_address(struct page *,
  */
 unsigned long page_address_in_vma(struct page *, struct vm_area_struct *);
 
+/*
+ * Cleans the PTEs of shared mappings.
+ * (and since clean PTEs should also be readonly, write protects them too)
+ *
+ * returns the number of cleaned PTEs.
+ */
+int page_mkclean(struct page *);
+
 #else	/* !CONFIG_MMU */
 
 #define anon_vma_init()		do {} while (0)
Index: latest/mm/memory.c
===================================================================
--- latest.orig/mm/memory.c
+++ latest/mm/memory.c
@@ -1458,14 +1458,19 @@ static int do_wp_page(struct mm_struct *
 {
 	struct page *old_page, *new_page;
 	pte_t entry;
-	int reuse, ret = VM_FAULT_MINOR;
+	int reuse = 0, ret = VM_FAULT_MINOR;
+	struct page *dirty_page = NULL;
 
 	old_page = vm_normal_page(vma, address, orig_pte);
 	if (!old_page)
 		goto gotten;
 
-	if (unlikely((vma->vm_flags & (VM_SHARED|VM_WRITE)) ==
-				(VM_SHARED|VM_WRITE))) {
+	/*
+	 * Only catch write-faults on shared writable pages, read-only
+	 * shared pages can get COWed by get_user_pages(.write=1, .force=1).
+	 */
+	if (unlikely((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
+					(VM_WRITE|VM_SHARED))) {
 		if (vma->vm_ops && vma->vm_ops->page_mkwrite) {
 			/*
 			 * Notify the address space that the page is about to
@@ -1494,13 +1499,12 @@ static int do_wp_page(struct mm_struct *
 			if (!pte_same(*page_table, orig_pte))
 				goto unlock;
 		}
-
+		dirty_page = old_page;
+		get_page(dirty_page);
 		reuse = 1;
 	} else if (PageAnon(old_page) && !TestSetPageLocked(old_page)) {
 		reuse = can_share_swap_page(old_page);
 		unlock_page(old_page);
-	} else {
-		reuse = 0;
 	}
 
 	if (reuse) {
@@ -1566,6 +1570,10 @@ gotten:
 		page_cache_release(old_page);
 unlock:
 	pte_unmap_unlock(page_table, ptl);
+	if (dirty_page) {
+		set_page_dirty(dirty_page);
+		put_page(dirty_page);
+	}
 	return ret;
 oom:
 	if (old_page)
@@ -2098,6 +2106,7 @@ static int do_no_page(struct mm_struct *
 	unsigned int sequence = 0;
 	int ret = VM_FAULT_MINOR;
 	int anon = 0;
+	struct page *dirty_page = NULL;
 
 	pte_unmap(page_table);
 	BUG_ON(vma->vm_flags & VM_PFNMAP);
@@ -2192,6 +2201,10 @@ retry:
 		} else {
 			inc_mm_counter(mm, file_rss);
 			page_add_file_rmap(new_page);
+			if (write_access) {
+				dirty_page = new_page;
+				get_page(dirty_page);
+			}
 		}
 	} else {
 		/* One of our sibling threads was faster, back out. */
@@ -2204,6 +2217,10 @@ retry:
 	lazy_mmu_prot_update(entry);
 unlock:
 	pte_unmap_unlock(page_table, ptl);
+	if (dirty_page) {
+		set_page_dirty(dirty_page);
+		put_page(dirty_page);
+	}
 	return ret;
 oom:
 	page_cache_release(new_page);
Index: latest/mm/mmap.c
===================================================================
--- latest.orig/mm/mmap.c
+++ latest/mm/mmap.c
@@ -1097,12 +1097,6 @@ munmap_back:
 			goto free_vma;
 	}
 
-	/* Don't make the VMA automatically writable if it's shared, but the
-	 * backer wishes to know when pages are first written to */
-	if (vma->vm_ops && vma->vm_ops->page_mkwrite)
-		vma->vm_page_prot =
-			protection_map[vm_flags & (VM_READ|VM_WRITE|VM_EXEC)];
-
 	/* We set VM_ACCOUNT in a shared mapping's vm_flags, to inform
 	 * shmem_zero_setup (perhaps called through /dev/zero's ->mmap)
 	 * that memory reservation must be checked; but that reservation
@@ -1120,6 +1114,10 @@ munmap_back:
 	pgoff = vma->vm_pgoff;
 	vm_flags = vma->vm_flags;
 
+	if (vma_wants_writenotify(vma))
+		vma->vm_page_prot =
+			protection_map[vm_flags & (VM_READ|VM_WRITE|VM_EXEC)];
+
 	if (!file || !vma_merge(mm, prev, addr, vma->vm_end,
 			vma->vm_flags, NULL, file, pgoff, vma_policy(vma))) {
 		file = vma->vm_file;
Index: latest/mm/mprotect.c
===================================================================
--- latest.orig/mm/mprotect.c
+++ latest/mm/mprotect.c
@@ -124,8 +124,6 @@ mprotect_fixup(struct vm_area_struct *vm
 	unsigned long oldflags = vma->vm_flags;
 	long nrpages = (end - start) >> PAGE_SHIFT;
 	unsigned long charged = 0, old_end = vma->vm_end;
-	unsigned int mask;
-	pgprot_t newprot;
 	pgoff_t pgoff;
 	int error;
 
@@ -177,26 +175,23 @@ mprotect_fixup(struct vm_area_struct *vm
 	}
 
 success:
-	/* Don't make the VMA automatically writable if it's shared, but the
-	 * backer wishes to know when pages are first written to */
-	mask = VM_READ|VM_WRITE|VM_EXEC|VM_SHARED;
-	if (vma->vm_ops && vma->vm_ops->page_mkwrite)
-		mask &= ~VM_SHARED;
-
-	newprot = protection_map[newflags & mask];
-
 	/*
 	 * vm_flags and vm_page_prot are protected by the mmap_sem
 	 * held in write mode.
 	 */
 	vma->vm_flags = newflags;
-	vma->vm_page_prot = newprot;
 	if (oldflags & VM_EXEC)
 		arch_remove_exec_range(current->mm, old_end);
+	vma->vm_page_prot = protection_map[newflags &
+		(VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)];
+	if (vma_wants_writenotify(vma))
+		vma->vm_page_prot = protection_map[newflags &
+			(VM_READ|VM_WRITE|VM_EXEC)];
+
 	if (is_vm_hugetlb_page(vma))
-		hugetlb_change_protection(vma, start, end, newprot);
+		hugetlb_change_protection(vma, start, end, vma->vm_page_prot);
 	else
-		change_protection(vma, start, end, newprot);
+		change_protection(vma, start, end, vma->vm_page_prot);
 	vm_stat_account(mm, oldflags, vma->vm_file, -nrpages);
 	vm_stat_account(mm, newflags, vma->vm_file, nrpages);
 	return 0;
Index: latest/mm/page-writeback.c
===================================================================
--- latest.orig/mm/page-writeback.c
+++ latest/mm/page-writeback.c
@@ -29,6 +29,7 @@
 #include <linux/sysctl.h>
 #include <linux/cpu.h>
 #include <linux/syscalls.h>
+#include <linux/rmap.h>
 
 /*
  * The maximum number of pages to writeout in a single bdflush/kupdate
@@ -550,7 +551,7 @@ int do_writepages(struct address_space *
 		return 0;
 	wbc->for_writepages = 1;
 	if (mapping->a_ops->writepages)
-		ret =  mapping->a_ops->writepages(mapping, wbc);
+		ret = mapping->a_ops->writepages(mapping, wbc);
 	else
 		ret = generic_writepages(mapping, wbc);
 	wbc->for_writepages = 0;
@@ -712,9 +713,15 @@ int test_clear_page_dirty(struct page *p
 			radix_tree_tag_clear(&mapping->page_tree,
 						page_index(page),
 						PAGECACHE_TAG_DIRTY);
-			if (mapping_cap_account_dirty(mapping))
-				__dec_zone_page_state(page, NR_FILE_DIRTY);
 			write_unlock_irqrestore(&mapping->tree_lock, flags);
+			/*
+			 * We can continue to use `mapping' here because the
+			 * page is locked, which pins the address_space
+			 */
+			if (mapping_cap_account_dirty(mapping)) {
+				page_mkclean(page);
+				dec_zone_page_state(page, NR_FILE_DIRTY);
+			}
 			return 1;
 		}
 		write_unlock_irqrestore(&mapping->tree_lock, flags);
@@ -744,8 +751,10 @@ int clear_page_dirty_for_io(struct page 
 
 	if (mapping) {
 		if (TestClearPageDirty(page)) {
-			if (mapping_cap_account_dirty(mapping))
+			if (mapping_cap_account_dirty(mapping)) {
+				page_mkclean(page);
 				dec_zone_page_state(page, NR_FILE_DIRTY);
+			}
 			return 1;
 		}
 		return 0;
Index: latest/mm/rmap.c
===================================================================
--- latest.orig/mm/rmap.c
+++ latest/mm/rmap.c
@@ -434,6 +434,71 @@ int page_referenced(struct page *page, i
 	return referenced;
 }
 
+static int page_mkclean_one(struct page *page, struct vm_area_struct *vma)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	unsigned long address;
+	pte_t *pte, entry;
+	spinlock_t *ptl;
+	int ret = 0;
+
+	address = vma_address(page, vma);
+	if (address == -EFAULT)
+		goto out;
+
+	pte = page_check_address(page, mm, address, &ptl);
+	if (!pte)
+		goto out;
+
+	if (!pte_dirty(*pte) && !pte_write(*pte))
+		goto unlock;
+
+	entry = ptep_get_and_clear(mm, address, pte);
+	entry = pte_mkclean(entry);
+	entry = pte_wrprotect(entry);
+	ptep_establish(vma, address, pte, entry);
+	lazy_mmu_prot_update(entry);
+	ret = 1;
+
+unlock:
+	pte_unmap_unlock(pte, ptl);
+out:
+	return ret;
+}
+
+static int page_mkclean_file(struct address_space *mapping, struct page *page)
+{
+	pgoff_t pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT);
+	struct vm_area_struct *vma;
+	struct prio_tree_iter iter;
+	int ret = 0;
+
+	BUG_ON(PageAnon(page));
+
+	spin_lock(&mapping->i_mmap_lock);
+	vma_prio_tree_foreach(vma, &iter, &mapping->i_mmap, pgoff, pgoff) {
+		if (vma->vm_flags & VM_SHARED)
+			ret += page_mkclean_one(page, vma);
+	}
+	spin_unlock(&mapping->i_mmap_lock);
+	return ret;
+}
+
+int page_mkclean(struct page *page)
+{
+	int ret = 0;
+
+	BUG_ON(!PageLocked(page));
+
+	if (page_mapped(page)) {
+		struct address_space *mapping = page_mapping(page);
+		if (mapping)
+			ret = page_mkclean_file(mapping, page);
+	}
+
+	return ret;
+}
+
 /**
  * page_set_anon_rmap - setup new anonymous rmap
  * @page:	the page to add the mapping to
Date: Mon, 17 Jul 2006 20:32:34 +0200
From: pzijlstr@redhat.com
Subject: [RHEL5][PATCH 2/8] mm: balance dirty pages


From: Peter Zijlstra <a.p.zijlstra@chello.nl>

Now that we can detect writers of shared mappings, throttle them.  Avoids OOM
by surprise.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
---

 include/linux/writeback.h |    1 +
 mm/memory.c               |    5 +++--
 mm/page-writeback.c       |   10 ++++++++++
 3 files changed, 14 insertions(+), 2 deletions(-)

Index: latest/include/linux/writeback.h
===================================================================
--- latest.orig/include/linux/writeback.h
+++ latest/include/linux/writeback.h
@@ -115,6 +115,7 @@ int sync_page_range(struct inode *inode,
 			loff_t pos, loff_t count);
 int sync_page_range_nolock(struct inode *inode, struct address_space *mapping,
 			   loff_t pos, loff_t count);
+void set_page_dirty_balance(struct page *page);
 
 /* pdflush.c */
 extern int nr_pdflush_threads;	/* Global so it can be exported to sysctl
Index: latest/mm/memory.c
===================================================================
--- latest.orig/mm/memory.c
+++ latest/mm/memory.c
@@ -49,6 +49,7 @@
 #include <linux/module.h>
 #include <linux/delayacct.h>
 #include <linux/init.h>
+#include <linux/writeback.h>
 
 #include <asm/pgalloc.h>
 #include <asm/uaccess.h>
@@ -1571,7 +1572,7 @@ gotten:
 unlock:
 	pte_unmap_unlock(page_table, ptl);
 	if (dirty_page) {
-		set_page_dirty(dirty_page);
+		set_page_dirty_balance(dirty_page);
 		put_page(dirty_page);
 	}
 	return ret;
@@ -2218,7 +2219,7 @@ retry:
 unlock:
 	pte_unmap_unlock(page_table, ptl);
 	if (dirty_page) {
-		set_page_dirty(dirty_page);
+		set_page_dirty_balance(dirty_page);
 		put_page(dirty_page);
 	}
 	return ret;
Index: latest/mm/page-writeback.c
===================================================================
--- latest.orig/mm/page-writeback.c
+++ latest/mm/page-writeback.c
@@ -244,6 +244,16 @@ static void balance_dirty_pages(struct a
 		pdflush_operation(background_writeout, 0);
 }
 
+void set_page_dirty_balance(struct page *page)
+{
+	if (set_page_dirty(page)) {
+		struct address_space *mapping = page_mapping(page);
+
+		if (mapping)
+			balance_dirty_pages_ratelimited(mapping);
+	}
+}
+
 /**
  * balance_dirty_pages_ratelimited_nr - balance dirty memory state
  * @mapping: address_space which was dirtied
Date: Mon, 17 Jul 2006 20:32:41 +0200
From: pzijlstr@redhat.com
Subject: [RHEL5][PATCH 3/8] mm: optimize the new mprotect code a bit


From: Peter Zijlstra <a.p.zijlstra@chello.nl>

mprotect() resets the page protections, which could result in extra write
faults for those pages whose dirty state we track using write faults
and are dirty already.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 mm/mprotect.c |   31 ++++++++++++++++++++++---------
 1 file changed, 22 insertions(+), 9 deletions(-)

Index: latest/mm/mprotect.c
===================================================================
--- latest.orig/mm/mprotect.c
+++ latest/mm/mprotect.c
@@ -28,7 +28,8 @@
 #include <asm/tlbflush.h>
 
 static void change_pte_range(struct mm_struct *mm, pmd_t *pmd,
-		unsigned long addr, unsigned long end, pgprot_t newprot)
+		unsigned long addr, unsigned long end, pgprot_t newprot,
+		int dirty_accountable)
 {
 	pte_t *pte, oldpte;
 	spinlock_t *ptl;
@@ -43,7 +44,14 @@ static void change_pte_range(struct mm_s
 			 * bits by wiping the pte and then setting the new pte
 			 * into place.
 			 */
-			ptent = pte_modify(ptep_get_and_clear(mm, addr, pte), newprot);
+			ptent = ptep_get_and_clear(mm, addr, pte);
+			ptent = pte_modify(ptent, newprot);
+			/*
+			 * Avoid taking write faults for pages we know to be
+			 * dirty.
+			 */
+			if (dirty_accountable && pte_dirty(ptent))
+				ptent = pte_mkwrite(ptent);
 			set_pte_at(mm, addr, pte, ptent);
 			lazy_mmu_prot_update(ptent);
 #ifdef CONFIG_MIGRATION
@@ -67,7 +75,8 @@ static void change_pte_range(struct mm_s
 }
 
 static inline void change_pmd_range(struct mm_struct *mm, pud_t *pud,
-		unsigned long addr, unsigned long end, pgprot_t newprot)
+		unsigned long addr, unsigned long end, pgprot_t newprot,
+		int dirty_accountable)
 {
 	pmd_t *pmd;
 	unsigned long next;
@@ -77,12 +86,13 @@ static inline void change_pmd_range(stru
 		next = pmd_addr_end(addr, end);
 		if (pmd_none_or_clear_bad(pmd))
 			continue;
-		change_pte_range(mm, pmd, addr, next, newprot);
+		change_pte_range(mm, pmd, addr, next, newprot, dirty_accountable);
 	} while (pmd++, addr = next, addr != end);
 }
 
 static inline void change_pud_range(struct mm_struct *mm, pgd_t *pgd,
-		unsigned long addr, unsigned long end, pgprot_t newprot)
+		unsigned long addr, unsigned long end, pgprot_t newprot,
+		int dirty_accountable)
 {
 	pud_t *pud;
 	unsigned long next;
@@ -92,12 +102,13 @@ static inline void change_pud_range(stru
 		next = pud_addr_end(addr, end);
 		if (pud_none_or_clear_bad(pud))
 			continue;
-		change_pmd_range(mm, pud, addr, next, newprot);
+		change_pmd_range(mm, pud, addr, next, newprot, dirty_accountable);
 	} while (pud++, addr = next, addr != end);
 }
 
 static void change_protection(struct vm_area_struct *vma,
-		unsigned long addr, unsigned long end, pgprot_t newprot)
+		unsigned long addr, unsigned long end, pgprot_t newprot,
+		int dirty_accountable)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	pgd_t *pgd;
@@ -111,7 +122,7 @@ static void change_protection(struct vm_
 		next = pgd_addr_end(addr, end);
 		if (pgd_none_or_clear_bad(pgd))
 			continue;
-		change_pud_range(mm, pgd, addr, next, newprot);
+		change_pud_range(mm, pgd, addr, next, newprot, dirty_accountable);
 	} while (pgd++, addr = next, addr != end);
 	flush_tlb_range(vma, start, end);
 }
@@ -126,6 +137,7 @@ mprotect_fixup(struct vm_area_struct *vm
 	unsigned long charged = 0, old_end = vma->vm_end;
 	pgoff_t pgoff;
 	int error;
+	int dirty_accountable = 0;
 
 	if (newflags == oldflags) {
 		*pprev = vma;
@@ -184,14 +196,16 @@ success:
 		arch_remove_exec_range(current->mm, old_end);
 	vma->vm_page_prot = protection_map[newflags &
 		(VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)];
-	if (vma_wants_writenotify(vma))
+	if (vma_wants_writenotify(vma)) {
 		vma->vm_page_prot = protection_map[newflags &
 			(VM_READ|VM_WRITE|VM_EXEC)];
+		dirty_accountable = 1;
+	}
 
 	if (is_vm_hugetlb_page(vma))
 		hugetlb_change_protection(vma, start, end, vma->vm_page_prot);
 	else
-		change_protection(vma, start, end, vma->vm_page_prot);
+		change_protection(vma, start, end, vma->vm_page_prot, dirty_accountable);
 	vm_stat_account(mm, oldflags, vma->vm_file, -nrpages);
 	vm_stat_account(mm, newflags, vma->vm_file, nrpages);
 	return 0;
Date: Mon, 17 Jul 2006 20:32:49 +0200
From: pzijlstr@redhat.com
Subject: [RHEL5][PATCH 4/8] mm: small cleanup of install_page()


From: Peter Zijlstra <a.p.zijlstra@chello.nl>

Smallish cleanup to install_page(), could save a memory read (haven't checked
the asm output) and sure looks nicer.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
---

 mm/fremap.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

Index: latest/mm/fremap.c
===================================================================
--- latest.orig/mm/fremap.c
+++ latest/mm/fremap.c
@@ -81,9 +81,9 @@ int install_page(struct mm_struct *mm, s
 		inc_mm_counter(mm, file_rss);
 
 	flush_icache_page(vma, page);
-	set_pte_at(mm, addr, pte, mk_pte(page, prot));
+	pte_val = mk_pte(page, prot);
+	set_pte_at(mm, addr, pte, pte_val);
 	page_add_file_rmap(page);
-	pte_val = *pte;
 	update_mmu_cache(vma, addr, pte_val);
 	lazy_mmu_prot_update(pte_val);
 	err = 0;
Date: Mon, 17 Jul 2006 20:32:56 +0200
From: pzijlstr@redhat.com
Subject: [RHEL5][PATCH 5/8] mm: fixup do_wp_page()


From: Peter Zijlstra <a.p.zijlstra@chello.nl>

Wrt. the recent modifications in do_wp_page() Hugh Dickins pointed out:

"I now realize it's right to the first
order (normal case) and to the second order (ptrace poke), but not
to the third order (ptrace poke anon page here to be COWed -
perhaps can't occur without intervening mprotects)."

This patch restores the old COW behaviour for anonymous pages.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
---

 mm/memory.c |   19 +++++++++++++------
 1 files changed, 13 insertions(+), 6 deletions(-)

Index: latest/mm/memory.c
===================================================================
--- latest.orig/mm/memory.c
+++ latest/mm/memory.c
@@ -1467,11 +1467,21 @@ static int do_wp_page(struct mm_struct *
 		goto gotten;
 
 	/*
-	 * Only catch write-faults on shared writable pages, read-only
-	 * shared pages can get COWed by get_user_pages(.write=1, .force=1).
+	 * Take out anonymous pages first, anonymous shared vmas are
+	 * not dirty accountable.
 	 */
-	if (unlikely((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
+	if (PageAnon(old_page)) {
+		if (!TestSetPageLocked(old_page)) {
+			reuse = can_share_swap_page(old_page);
+			unlock_page(old_page);
+		}
+	} else if (unlikely((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
 					(VM_WRITE|VM_SHARED))) {
+		/*
+		 * Only catch write-faults on shared writable pages,
+		 * read-only shared pages can get COWed by
+		 * get_user_pages(.write=1, .force=1).
+		 */
 		if (vma->vm_ops && vma->vm_ops->page_mkwrite) {
 			/*
 			 * Notify the address space that the page is about to
@@ -1503,9 +1513,6 @@ static int do_wp_page(struct mm_struct *
 		dirty_page = old_page;
 		get_page(dirty_page);
 		reuse = 1;
-	} else if (PageAnon(old_page) && !TestSetPageLocked(old_page)) {
-		reuse = can_share_swap_page(old_page);
-		unlock_page(old_page);
 	}
 
 	if (reuse) {
Date: Mon, 17 Jul 2006 20:33:03 +0200
From: pzijlstr@redhat.com
Subject: [RHEL5][PATCH 6/8] mm: msync cleanup


From: Peter Zijlstra <a.p.zijlstra@chello.nl>

With the tracking of dirty pages properly done now, msync doesn't need to scan
the PTEs anymore to determine the dirty status.

From: Hugh Dickins <hugh@veritas.com>

In looking to do that, I made some other tidyups: can remove several
#includes, and sys_msync loop termination not quite right.

Most of those points are criticisms of the existing sys_msync, not of your
patch.  In particular, the loop termination errors were introduced in 2.6.17:
I did notice this shortly before it came out, but decided I was more likely to
get it wrong myself, and make matters worse if I tried to rush a last-minute
fix in.  And it's not terribly likely to go wrong, nor disastrous if it does
go wrong (may miss reporting an unmapped area; may also fsync file of a
following vma).

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
---

 mm/msync.c |  196 ++++++++-------------------------------------------
 1 file changed, 33 insertions(+), 163 deletions(-)

Index: latest/mm/msync.c
===================================================================
--- latest.orig/mm/msync.c
+++ latest/mm/msync.c
@@ -7,149 +7,33 @@
 /*
  * The msync() system call.
  */
-#include <linux/slab.h>
-#include <linux/pagemap.h>
 #include <linux/fs.h>
 #include <linux/mm.h>
 #include <linux/mman.h>
-#include <linux/hugetlb.h>
-#include <linux/writeback.h>
 #include <linux/file.h>
 #include <linux/syscalls.h>
 
-#include <asm/pgtable.h>
-#include <asm/tlbflush.h>
-
-static unsigned long msync_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
-				unsigned long addr, unsigned long end)
-{
-	pte_t *pte;
-	spinlock_t *ptl;
-	int progress = 0;
-	unsigned long ret = 0;
-
-again:
-	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
-	do {
-		struct page *page;
-
-		if (progress >= 64) {
-			progress = 0;
-			if (need_resched() || need_lockbreak(ptl))
-				break;
-		}
-		progress++;
-		if (!pte_present(*pte))
-			continue;
-		if (!pte_maybe_dirty(*pte))
-			continue;
-		page = vm_normal_page(vma, addr, *pte);
-		if (!page)
-			continue;
-		if (ptep_clear_flush_dirty(vma, addr, pte) ||
-				page_test_and_clear_dirty(page))
-			ret += set_page_dirty(page);
-		progress += 3;
-	} while (pte++, addr += PAGE_SIZE, addr != end);
-	pte_unmap_unlock(pte - 1, ptl);
-	cond_resched();
-	if (addr != end)
-		goto again;
-	return ret;
-}
-
-static inline unsigned long msync_pmd_range(struct vm_area_struct *vma,
-			pud_t *pud, unsigned long addr, unsigned long end)
-{
-	pmd_t *pmd;
-	unsigned long next;
-	unsigned long ret = 0;
-
-	pmd = pmd_offset(pud, addr);
-	do {
-		next = pmd_addr_end(addr, end);
-		if (pmd_none_or_clear_bad(pmd))
-			continue;
-		ret += msync_pte_range(vma, pmd, addr, next);
-	} while (pmd++, addr = next, addr != end);
-	return ret;
-}
-
-static inline unsigned long msync_pud_range(struct vm_area_struct *vma,
-			pgd_t *pgd, unsigned long addr, unsigned long end)
-{
-	pud_t *pud;
-	unsigned long next;
-	unsigned long ret = 0;
-
-	pud = pud_offset(pgd, addr);
-	do {
-		next = pud_addr_end(addr, end);
-		if (pud_none_or_clear_bad(pud))
-			continue;
-		ret += msync_pmd_range(vma, pud, addr, next);
-	} while (pud++, addr = next, addr != end);
-	return ret;
-}
-
-static unsigned long msync_page_range(struct vm_area_struct *vma,
-				unsigned long addr, unsigned long end)
-{
-	pgd_t *pgd;
-	unsigned long next;
-	unsigned long ret = 0;
-
-	/* For hugepages we can't go walking the page table normally,
-	 * but that's ok, hugetlbfs is memory based, so we don't need
-	 * to do anything more on an msync().
-	 */
-	if (vma->vm_flags & VM_HUGETLB)
-		return 0;
-
-	BUG_ON(addr >= end);
-	pgd = pgd_offset(vma->vm_mm, addr);
-	flush_cache_range(vma, addr, end);
-	do {
-		next = pgd_addr_end(addr, end);
-		if (pgd_none_or_clear_bad(pgd))
-			continue;
-		ret += msync_pud_range(vma, pgd, addr, next);
-	} while (pgd++, addr = next, addr != end);
-	return ret;
-}
-
 /*
  * MS_SYNC syncs the entire file - including mappings.
  *
- * MS_ASYNC does not start I/O (it used to, up to 2.5.67).  Instead, it just
- * marks the relevant pages dirty.  The application may now run fsync() to
+ * MS_ASYNC does not start I/O (it used to, up to 2.5.67).
+ * Nor does it marks the relevant pages dirty (it used to up to 2.6.17).
+ * Now it doesn't do anything, since dirty pages are properly tracked.
+ *
+ * The application may now run fsync() to
  * write out the dirty pages and wait on the writeout and check the result.
  * Or the application may run fadvise(FADV_DONTNEED) against the fd to start
  * async writeout immediately.
  * So by _not_ starting I/O in MS_ASYNC we provide complete flexibility to
  * applications.
  */
-static int msync_interval(struct vm_area_struct *vma, unsigned long addr,
-			unsigned long end, int flags,
-			unsigned long *nr_pages_dirtied)
-{
-	struct file *file = vma->vm_file;
-
-	if ((flags & MS_INVALIDATE) && (vma->vm_flags & VM_LOCKED))
-		return -EBUSY;
-
-	if (file && (vma->vm_flags & VM_SHARED))
-		*nr_pages_dirtied = msync_page_range(vma, addr, end);
-	return 0;
-}
-
 asmlinkage long sys_msync(unsigned long start, size_t len, int flags)
 {
 	unsigned long end;
+	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
 	int unmapped_error = 0;
 	int error = -EINVAL;
-	int done = 0;
 
 	if (flags & ~(MS_ASYNC | MS_INVALIDATE | MS_SYNC))
 		goto out;
@@ -169,64 +53,50 @@ asmlinkage long sys_msync(unsigned long 
 	 * If the interval [start,end) covers some unmapped address ranges,
 	 * just ignore them, but return -ENOMEM at the end.
 	 */
-	down_read(&current->mm->mmap_sem);
-	vma = find_vma(current->mm, start);
-	if (!vma) {
-		error = -ENOMEM;
-		goto out_unlock;
-	}
-	do {
-		unsigned long nr_pages_dirtied = 0;
+	down_read(&mm->mmap_sem);
+	vma = find_vma(mm, start);
+	for (;;) {
 		struct file *file;
 
+		/* Still start < end. */
+		error = -ENOMEM;
+		if (!vma)
+			goto out_unlock;
 		/* Here start < vma->vm_end. */
 		if (start < vma->vm_start) {
-			unmapped_error = -ENOMEM;
 			start = vma->vm_start;
+			if (start >= end)
+				goto out_unlock;
+			unmapped_error = -ENOMEM;
 		}
 		/* Here vma->vm_start <= start < vma->vm_end. */
-		if (end <= vma->vm_end) {
-			if (start < end) {
-				error = msync_interval(vma, start, end, flags,
-							&nr_pages_dirtied);
-				if (error)
-					goto out_unlock;
-			}
-			error = unmapped_error;
-			done = 1;
-		} else {
-			/* Here vma->vm_start <= start < vma->vm_end < end. */
-			error = msync_interval(vma, start, vma->vm_end, flags,
-						&nr_pages_dirtied);
-			if (error)
-				goto out_unlock;
+		if ((flags & MS_INVALIDATE) &&
+				(vma->vm_flags & VM_LOCKED)) {
+			error = -EBUSY;
+			goto out_unlock;
 		}
 		file = vma->vm_file;
 		start = vma->vm_end;
-		if ((flags & MS_ASYNC) && file && nr_pages_dirtied) {
-			get_file(file);
-			up_read(&current->mm->mmap_sem);
-			balance_dirty_pages_ratelimited_nr(file->f_mapping,
-							nr_pages_dirtied);
-			fput(file);
-			down_read(&current->mm->mmap_sem);
-			vma = find_vma(current->mm, start);
-		} else if ((flags & MS_SYNC) && file &&
+		if ((flags & MS_SYNC) && file &&
 				(vma->vm_flags & VM_SHARED)) {
 			get_file(file);
-			up_read(&current->mm->mmap_sem);
+			up_read(&mm->mmap_sem);
 			error = do_fsync(file, 0);
 			fput(file);
-			down_read(&current->mm->mmap_sem);
-			if (error)
-				goto out_unlock;
-			vma = find_vma(current->mm, start);
+			if (error || start >= end)
+				goto out;
+			down_read(&mm->mmap_sem);
+			vma = find_vma(mm, start);
 		} else {
+			if (start >= end) {
+				error = 0;
+				goto out_unlock;
+			}
 			vma = vma->vm_next;
 		}
-	} while (vma && !done);
+	}
 out_unlock:
-	up_read(&current->mm->mmap_sem);
+	up_read(&mm->mmap_sem);
 out:
-	return error;
+	return error ? : unmapped_error;
 }
Date: Mon, 17 Jul 2006 20:33:11 +0200
From: pzijlstr@redhat.com
Subject: [RHEL5][PATCH 7/8] mm: tracking shared dirty pages checks


From: Andrew Morton <akpm@osdl.org>

Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@osdl.org>
---

 mm/page-writeback.c |    2 ++
 1 files changed, 2 insertions(+)

Index: latest/mm/page-writeback.c
===================================================================
--- latest.orig/mm/page-writeback.c
+++ latest/mm/page-writeback.c
@@ -717,6 +717,7 @@ int test_clear_page_dirty(struct page *p
 	struct address_space *mapping = page_mapping(page);
 	unsigned long flags;
 
+	WARN_ON_ONCE(!PageLocked(page));
 	if (mapping) {
 		write_lock_irqsave(&mapping->tree_lock, flags);
 		if (TestClearPageDirty(page)) {
@@ -759,6 +760,7 @@ int clear_page_dirty_for_io(struct page 
 {
 	struct address_space *mapping = page_mapping(page);
 
+	WARN_ON_ONCE(!PageLocked(page));
 	if (mapping) {
 		if (TestClearPageDirty(page)) {
 			if (mapping_cap_account_dirty(mapping)) {
Date: Mon, 17 Jul 2006 20:33:18 +0200
From: pzijlstr@redhat.com
Subject: [RHEL5][PATCH 8/8] mm: tracking shared dirty pages wimp.patch


From: Andrew Morton <akpm@osdl.org>

I'm not so sure, and if this is wrong, we wreck an -mm release.  Please don't
wreck -mm releases.

Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
---

 mm/rmap.c |    2 +-
 1 files changed, 1 insertion(+), 1 deletion(-)

Index: latest/mm/rmap.c
===================================================================
--- latest.orig/mm/rmap.c
+++ latest/mm/rmap.c
@@ -488,7 +488,7 @@ int page_mkclean(struct page *page)
 {
 	int ret = 0;
 
-	BUG_ON(!PageLocked(page));
+	WARN_ON_ONCE(!PageLocked(page));
 
 	if (page_mapped(page)) {
 		struct address_space *mapping = page_mapping(page);

--- End Message ---
--- Begin Message ---
Source: linux-2.6
Source-Version: 2.6.18-6

We believe that the bug you reported is fixed in the latest version of
linux-2.6, which is due to be installed in the Debian FTP archive:

linux-2.6_2.6.18-6.diff.gz
  to pool/main/l/linux-2.6/linux-2.6_2.6.18-6.diff.gz
linux-2.6_2.6.18-6.dsc
  to pool/main/l/linux-2.6/linux-2.6_2.6.18-6.dsc
linux-doc-2.6.18_2.6.18-6_all.deb
  to pool/main/l/linux-2.6/linux-doc-2.6.18_2.6.18-6_all.deb
linux-headers-2.6.18-3-all-powerpc_2.6.18-6_powerpc.deb
  to pool/main/l/linux-2.6/linux-headers-2.6.18-3-all-powerpc_2.6.18-6_powerpc.deb
linux-headers-2.6.18-3-all_2.6.18-6_powerpc.deb
  to pool/main/l/linux-2.6/linux-headers-2.6.18-3-all_2.6.18-6_powerpc.deb
linux-headers-2.6.18-3-powerpc-miboot_2.6.18-6_powerpc.deb
  to pool/main/l/linux-2.6/linux-headers-2.6.18-3-powerpc-miboot_2.6.18-6_powerpc.deb
linux-headers-2.6.18-3-powerpc-smp_2.6.18-6_powerpc.deb
  to pool/main/l/linux-2.6/linux-headers-2.6.18-3-powerpc-smp_2.6.18-6_powerpc.deb
linux-headers-2.6.18-3-powerpc64_2.6.18-6_powerpc.deb
  to pool/main/l/linux-2.6/linux-headers-2.6.18-3-powerpc64_2.6.18-6_powerpc.deb
linux-headers-2.6.18-3-powerpc_2.6.18-6_powerpc.deb
  to pool/main/l/linux-2.6/linux-headers-2.6.18-3-powerpc_2.6.18-6_powerpc.deb
linux-headers-2.6.18-3-prep_2.6.18-6_powerpc.deb
  to pool/main/l/linux-2.6/linux-headers-2.6.18-3-prep_2.6.18-6_powerpc.deb
linux-headers-2.6.18-3-vserver-powerpc64_2.6.18-6_powerpc.deb
  to pool/main/l/linux-2.6/linux-headers-2.6.18-3-vserver-powerpc64_2.6.18-6_powerpc.deb
linux-headers-2.6.18-3-vserver-powerpc_2.6.18-6_powerpc.deb
  to pool/main/l/linux-2.6/linux-headers-2.6.18-3-vserver-powerpc_2.6.18-6_powerpc.deb
linux-headers-2.6.18-3-vserver_2.6.18-6_powerpc.deb
  to pool/main/l/linux-2.6/linux-headers-2.6.18-3-vserver_2.6.18-6_powerpc.deb
linux-headers-2.6.18-3_2.6.18-6_powerpc.deb
  to pool/main/l/linux-2.6/linux-headers-2.6.18-3_2.6.18-6_powerpc.deb
linux-image-2.6.18-3-powerpc-miboot_2.6.18-6_powerpc.deb
  to pool/main/l/linux-2.6/linux-image-2.6.18-3-powerpc-miboot_2.6.18-6_powerpc.deb
linux-image-2.6.18-3-powerpc-smp_2.6.18-6_powerpc.deb
  to pool/main/l/linux-2.6/linux-image-2.6.18-3-powerpc-smp_2.6.18-6_powerpc.deb
linux-image-2.6.18-3-powerpc64_2.6.18-6_powerpc.deb
  to pool/main/l/linux-2.6/linux-image-2.6.18-3-powerpc64_2.6.18-6_powerpc.deb
linux-image-2.6.18-3-powerpc_2.6.18-6_powerpc.deb
  to pool/main/l/linux-2.6/linux-image-2.6.18-3-powerpc_2.6.18-6_powerpc.deb
linux-image-2.6.18-3-prep_2.6.18-6_powerpc.deb
  to pool/main/l/linux-2.6/linux-image-2.6.18-3-prep_2.6.18-6_powerpc.deb
linux-image-2.6.18-3-vserver-powerpc64_2.6.18-6_powerpc.deb
  to pool/main/l/linux-2.6/linux-image-2.6.18-3-vserver-powerpc64_2.6.18-6_powerpc.deb
linux-image-2.6.18-3-vserver-powerpc_2.6.18-6_powerpc.deb
  to pool/main/l/linux-2.6/linux-image-2.6.18-3-vserver-powerpc_2.6.18-6_powerpc.deb
linux-manual-2.6.18_2.6.18-6_all.deb
  to pool/main/l/linux-2.6/linux-manual-2.6.18_2.6.18-6_all.deb
linux-patch-debian-2.6.18_2.6.18-6_all.deb
  to pool/main/l/linux-2.6/linux-patch-debian-2.6.18_2.6.18-6_all.deb
linux-source-2.6.18_2.6.18-6_all.deb
  to pool/main/l/linux-2.6/linux-source-2.6.18_2.6.18-6_all.deb
linux-support-2.6.18-3_2.6.18-6_all.deb
  to pool/main/l/linux-2.6/linux-support-2.6.18-3_2.6.18-6_all.deb
linux-tree-2.6.18_2.6.18-6_all.deb
  to pool/main/l/linux-2.6/linux-tree-2.6.18_2.6.18-6_all.deb



A summary of the changes between this version and the previous one is
attached.

Thank you for reporting the bug, which will now be closed.  If you
have further comments please address them to 394392@bugs.debian.org,
and the maintainer will reopen the bug report if appropriate.

Debian distribution maintenance software
pp.
Bastian Blank <waldi@debian.org> (supplier of updated linux-2.6 package)

(This message was generated automatically at their request; if you
believe that there is a problem with it please contact the archive
administrators by mailing ftpmaster@debian.org)


-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Format: 1.7
Date: Tue, 21 Nov 2006 11:28:09 +0100
Source: linux-2.6
Binary: linux-image-2.6.18-3-686-bigmem linux-modules-2.6.18-3-xen-k7 linux-headers-2.6.18-3-xen-vserver linux-headers-2.6.18-3-r5k-cobalt linux-headers-2.6.18-3-xen-k7 linux-image-2.6.18-3-mckinley linux-headers-2.6.18-3-r4k-ip22 linux-headers-2.6.18-3-all-i386 linux-headers-2.6.18-3-parisc64-smp linux-headers-2.6.18-3-footbridge linux-headers-2.6.18-3 linux-image-2.6.18-3-powerpc64 linux-headers-2.6.18-3-xen linux-image-2.6.18-3-vserver-amd64 linux-image-2.6.18-3-r4k-kn04 linux-headers-2.6.18-3-vserver-sparc64 linux-image-2.6.18-3-sparc64-smp linux-image-2.6.18-3-xen-686 linux-image-2.6.18-3-parisc64 linux-headers-2.6.18-3-alpha-legacy linux-image-2.6.18-3-ixp4xx linux-manual-2.6.18 linux-image-2.6.18-3-vserver-k7 linux-headers-2.6.18-3-powerpc linux-headers-2.6.18-3-686 linux-image-2.6.18-3-alpha-smp linux-headers-2.6.18-3-r5k-ip32 linux-image-2.6.18-3-vserver-686 linux-headers-2.6.18-3-sparc64 linux-image-2.6.18-3-r5k-cobalt linux-headers-2.6.18-3-sparc32 linux-image-2.6.18-3-s3c2410 linux-image-2.6.18-3-parisc64-smp linux-image-2.6.18-3-alpha-legacy linux-image-2.6.18-3-k7 linux-headers-2.6.18-3-xen-amd64 linux-headers-2.6.18-3-vserver-powerpc64 linux-image-2.6.18-3-xen-k7 linux-headers-2.6.18-3-all-arm linux-headers-2.6.18-3-mckinley linux-headers-2.6.18-3-r3k-kn02 linux-headers-2.6.18-3-mac linux-image-2.6.18-3-rpc linux-headers-2.6.18-3-vserver-s390x linux-image-2.6.18-3-powerpc-smp linux-headers-2.6.18-3-vserver-k7 linux-headers-2.6.18-3-iop32x linux-image-2.6.18-3-r3k-kn02 linux-headers-2.6.18-3-xen-686 linux-image-2.6.18-3-vserver-alpha linux-image-2.6.18-3-sb1-bcm91250a linux-headers-2.6.18-3-vserver-686 linux-image-2.6.18-3-r5k-ip32 linux-image-2.6.18-3-alpha-generic linux-image-2.6.18-3-486 linux-headers-2.6.18-3-qemu xen-linux-system-2.6.18-3-xen-k7 linux-image-2.6.18-3-sparc32 linux-headers-2.6.18-3-parisc64 linux-headers-2.6.18-3-vserver-amd64 linux-headers-2.6.18-3-amiga linux-headers-2.6.18-3-atari linux-image-2.6.18-3-amd64 linux-image-2.6.18-3-amiga linux-image-2.6.18-3-iop32x linux-image-2.6.18-3-xen-vserver-amd64 xen-linux-system-2.6.18-3-xen-vserver-amd64 linux-headers-2.6.18-3-parisc linux-headers-2.6.18-3-r4k-kn04 linux-image-2.6.18-3-s390-tape linux-headers-2.6.18-3-k7 linux-image-2.6.18-3-vserver-sparc64 linux-doc-2.6.18 linux-headers-2.6.18-3-powerpc-miboot linux-image-2.6.18-3-vserver-s390x linux-headers-2.6.18-3-sb1-bcm91250a xen-linux-system-2.6.18-3-xen-amd64 linux-image-2.6.18-3-powerpc linux-headers-2.6.18-3-xen-vserver-686 linux-image-2.6.18-3-s390 linux-image-2.6.18-3-sparc64 linux-headers-2.6.18-3-sparc64-smp linux-headers-2.6.18-3-686-bigmem linux-headers-2.6.18-3-s390x linux-headers-2.6.18-3-amd64 linux-image-2.6.18-3-parisc-smp linux-source-2.6.18 linux-headers-2.6.18-3-all-powerpc linux-headers-2.6.18-3-vserver linux-image-2.6.18-3-sb1a-bcm91480b linux-headers-2.6.18-3-vserver-powerpc linux-headers-2.6.18-3-alpha-generic linux-headers-2.6.18-3-parisc-smp linux-modules-2.6.18-3-xen-amd64 linux-image-2.6.18-3-r4k-ip22 linux-image-2.6.18-3-footbridge linux-headers-2.6.18-3-all-m68k linux-headers-2.6.18-3-powerpc-smp linux-image-2.6.18-3-xen-vserver-686 linux-image-2.6.18-3-prep linux-headers-2.6.18-3-all-mipsel linux-headers-2.6.18-3-all-sparc linux-headers-2.6.18-3-ixp4xx linux-headers-2.6.18-3-powerpc64 linux-modules-2.6.18-3-xen-vserver-686 linux-support-2.6.18-3 linux-image-2.6.18-3-mac linux-headers-2.6.18-3-all-alpha linux-headers-2.6.18-3-all-ia64 linux-image-2.6.18-3-686 linux-headers-2.6.18-3-itanium linux-headers-2.6.18-3-all-mips 
linux-image-2.6.18-3-vserver-powerpc linux-headers-2.6.18-3-all-s390 linux-headers-2.6.18-3-s390 linux-headers-2.6.18-3-all-hppa linux-image-2.6.18-3-xen-amd64 linux-image-2.6.18-3-powerpc-miboot linux-image-2.6.18-3-parisc linux-image-2.6.18-3-s390x linux-headers-2.6.18-3-prep linux-headers-2.6.18-3-s3c2410 linux-patch-debian-2.6.18 xen-linux-system-2.6.18-3-xen-686 linux-image-2.6.18-3-itanium linux-headers-2.6.18-3-rpc linux-image-2.6.18-3-vserver-powerpc64 linux-tree-2.6.18 linux-modules-2.6.18-3-xen-686 linux-image-2.6.18-3-atari linux-headers-2.6.18-3-vserver-alpha linux-modules-2.6.18-3-xen-vserver-amd64 linux-image-2.6.18-3-qemu linux-headers-2.6.18-3-all linux-headers-2.6.18-3-486 linux-headers-2.6.18-3-all-amd64 linux-headers-2.6.18-3-alpha-smp linux-headers-2.6.18-3-xen-vserver-amd64 linux-headers-2.6.18-3-sb1a-bcm91480b xen-linux-system-2.6.18-3-xen-vserver-686
Architecture: source powerpc all
Version: 2.6.18-6
Distribution: unstable
Urgency: low
Maintainer: Debian Kernel Team <debian-kernel@lists.debian.org>
Changed-By: Bastian Blank <waldi@debian.org>
Description: 
 linux-doc-2.6.18 - Linux kernel specific documentation for version 2.6.18
 linux-headers-2.6.18-3 - Common header files for Linux 2.6.18
 linux-headers-2.6.18-3-all - All header files for Linux 2.6.18
 linux-headers-2.6.18-3-all-powerpc - All header files for Linux 2.6.18
 linux-headers-2.6.18-3-powerpc - Header files for Linux 2.6.18 on uniprocessor 32-bit PowerPC
 linux-headers-2.6.18-3-powerpc-miboot - Header files for Linux 2.6.18 on 32-bit PowerPC for miboot floppy
 linux-headers-2.6.18-3-powerpc-smp - Header files for Linux 2.6.18 on multiprocessor 32-bit PowerPC
 linux-headers-2.6.18-3-powerpc64 - Header files for Linux 2.6.18 on 64-bit PowerPC
 linux-headers-2.6.18-3-prep - Header files for Linux 2.6.18 on PReP PowerPC
 linux-headers-2.6.18-3-vserver - Common header files for Linux 2.6.18
 linux-headers-2.6.18-3-vserver-powerpc - Header files for Linux 2.6.18 on uniprocessor 32-bit PowerPC
 linux-headers-2.6.18-3-vserver-powerpc64 - Header files for Linux 2.6.18 on 64-bit PowerPC
 linux-image-2.6.18-3-powerpc - Linux 2.6.18 image on uniprocessor 32-bit PowerPC
 linux-image-2.6.18-3-powerpc-miboot - Linux 2.6.18 image on 32-bit PowerPC for miboot floppy
 linux-image-2.6.18-3-powerpc-smp - Linux 2.6.18 image on multiprocessor 32-bit PowerPC
 linux-image-2.6.18-3-powerpc64 - Linux 2.6.18 image on 64-bit PowerPC
 linux-image-2.6.18-3-prep - Linux 2.6.18 image on PReP PowerPC
 linux-image-2.6.18-3-vserver-powerpc - Linux 2.6.18 image on uniprocessor 32-bit PowerPC
 linux-image-2.6.18-3-vserver-powerpc64 - Linux 2.6.18 image on 64-bit PowerPC
 linux-manual-2.6.18 - Linux kernel API manual pages for version 2.6.18
 linux-patch-debian-2.6.18 - Debian patches to version 2.6.18 of the Linux kernel
 linux-source-2.6.18 - Linux kernel source for version 2.6.18 with Debian patches
 linux-support-2.6.18-3 - Support files for Linux 2.6.18
 linux-tree-2.6.18 - Linux kernel source tree for building Debian kernel images
Closes: 353079 382298 386872 394392 394690 395882 396375 397946 398172
Changes: 
 linux-2.6 (2.6.18-6) unstable; urgency=low
 .
   [ maximilian attems ]
   * Enable the new ACT modules globally. They were already set for amd64, hppa
     and mips/mipsel - needed by newer iproute2. (closes: #395882, #398172)
   * Fix msync() for LSB 3.1 compliance, backport fedora patches from 2.6.19
    - mm: tracking shared dirty pages
    - mm: balance dirty pages
    - mm: optimize the new mprotect() code a bit
    - mm: small cleanup of install_page()
    - mm: fixup do_wp_page()
    - mm: msync() cleanup (closes: #394392)
   * [amd64,i386] Enable CONFIG_USB_APPLETOUCH=m (closes: #382298)
   * Add stable release 2.6.18.3:
     - x86_64: Fix FPU corruption
     - e1000: Fix regression: garbled stats and irq allocation during swsusp
     - POWERPC: Make alignment exception always check exception table
     - usbtouchscreen: use endpoint address from endpoint descriptor
     - fix via586 irq routing for pirq 5
     - init_reap_node() initialization fix
     - CPUFREQ: Make acpi-cpufreq unsticky again.
     - SPARC64: Fix futex_atomic_cmpxchg_inatomic implementation.
     - SPARC: Fix missed bump of NR_SYSCALLS.
     - NET: __alloc_pages() failures reported due to fragmentation
     - pci: don't try to remove sysfs files before they are setup.
     - fix UFS superblock alignment issues
     - NET: Set truesize in pskb_copy
     - block: Fix bad data direction in SG_IO (closes: #394690)
     - cpqarray: fix iostat
     - cciss: fix iostat
     - Char: isicom, fix close bug
     - TCP: Don't use highmem in tcp hash size calculation.
     - S390: user readable uninitialised kernel memory, take 2.
     - correct keymapping on Powerbook built-in USB ISO keyboards
     - USB: failure in usblp's error path
     - Input: psmouse - fix attribute access on 64-bit systems
     - Fix sys_move_pages when a NULL node list is passed.
     - CIFS: report rename failure when target file is locked by Windows
     - CIFS: New POSIX locking code not setting rc properly to zero on successful
     - Patch for nvidia divide by zero error for 7600 pci-express card
       (maybe fixes 398258)
     - ipmi_si_intf.c sets bad class_mask with PCI_DEVICE_CLASS
 .
   [ Steve Langasek ]
   * [alpha] new titan-video patch, for compatibility with TITAN and similar
     systems with non-standard VGA hose configs
   * [alpha] bugfix for srm_env module from upstream (Jan-Benedict Glaw),
     makes the module compatible with the current /proc interface so that
     reads no longer return EFAULT.  (closes: #353079)
   * Bump ABI to 3 for the msync fixes above.
 .
   [ Martin Michlmayr ]
   * arm: Set CONFIG_BINFMT_MISC=m
   * arm/ixp4xx: Set CONFIG_ATM=m (and related modules) so CONFIG_USB_ATM has
     an effect.
   * arm/iop32x: Likewise.
   * arm/s3c2410: Unset CONFIG_PM_LEGACY.
   * arm/versatile: Fix Versatile PCI config byte accesses
   * arm/ixp4xx: Swap the disk 1 and disk 2 LED definitions so they're right.
   * mipsel/r5k-cobalt: Unset CONFIG_SCSI_SYM53C8XX_2 because the timeout is
     just too long.
   * arm/ixp4xx: Enable more V4L USB devices.
 .
   [ dann frazier ]
   * Backport various SCTP changesets from 2.6.19, recommended by Vlad Yasevich
     (closes: #397946)
   * Add a "Scope of security support" section to README.Debian, recommended
     by Moritz Muehlenhoff
 .
   [ Thiemo Seufer ]
   * Enable raid456 for mips/mipsel qemu kernel.
 .
   [ dann frazier ]
   * The scope of the USR-61S2B unusual_dev entry was tightened, but too
     strictly. Loosen it to apply to additional devices with a smaller bcd.
     (closes: #396375)
 .
   [ Sven Luther ]
   * Added support for TI ez430 development tool ID in ti_usb.
     Thanks to Oleg Verych for providing the patch.
 .
   [ Christian T. Steigies ]
   * Added support for Atari EtherNEC, Aranym, video, keyboard, mouse, and serial
     by Michael Schmitz
 .
   [ Bastian Blank ]
   * [i386] Reenable AVM isdn card modules. (closes: #386872)
Files: 
 4d2534e8af13f95b16556ca2651c03e7 5749 devel optional linux-2.6_2.6.18-6.dsc
 ffc249a195c7b53d115e1e514a8bbb97 1637389 devel optional linux-2.6_2.6.18-6.diff.gz
 53a90aaf23ad800ef87afa74dae6a91f 3574738 doc optional linux-doc-2.6.18_2.6.18-6_all.deb
 e9ad9ad6ad529fffb97f4cad8542887d 1074296 doc optional linux-manual-2.6.18_2.6.18-6_all.deb
 c1e2ba1b8cc473494ebc44368dbfc89c 1242286 devel optional linux-patch-debian-2.6.18_2.6.18-6_all.deb
 ffd1e25be350f210036dea5b7e645b74 41905654 devel optional linux-source-2.6.18_2.6.18-6_all.deb
 916da1ab02eecf02b51537b68897cf81 252206 devel optional linux-support-2.6.18-3_2.6.18-6_all.deb
 687cfeba77ca18b6d75350474e8ec8b5 42632 devel optional linux-tree-2.6.18_2.6.18-6_all.deb
 29066b6cb1a41e5c37859ce360596254 42260 devel optional linux-headers-2.6.18-3-all_2.6.18-6_powerpc.deb
 89aa5eb27b5adfc098dc844f9e346279 42308 devel optional linux-headers-2.6.18-3-all-powerpc_2.6.18-6_powerpc.deb
 2eb0a21c78acb9b7f95effc610048244 3381120 devel optional linux-headers-2.6.18-3_2.6.18-6_powerpc.deb
 806398aa791db7b9397df0f1480299d4 16925142 admin optional linux-image-2.6.18-3-powerpc_2.6.18-6_powerpc.deb
 7be87357bd24fcdb8d6ae42c76035c9d 240688 devel optional linux-headers-2.6.18-3-powerpc_2.6.18-6_powerpc.deb
 0a29a6a2e14c6aff43e5f3a03e87b306 17271666 admin optional linux-image-2.6.18-3-powerpc-smp_2.6.18-6_powerpc.deb
 3aaa4af89fbe97bc722ace4adfb23c13 241348 devel optional linux-headers-2.6.18-3-powerpc-smp_2.6.18-6_powerpc.deb
 ed5d09c386b3bbea647bee4bfae3b5a6 15455448 admin optional linux-image-2.6.18-3-powerpc-miboot_2.6.18-6_powerpc.deb
 1830bc5ac03e17799bf84b80c03b25c5 218226 devel optional linux-headers-2.6.18-3-powerpc-miboot_2.6.18-6_powerpc.deb
 691409278dfc800f8242481a1e5b9c57 18594308 admin optional linux-image-2.6.18-3-powerpc64_2.6.18-6_powerpc.deb
 91a26902a42bde0592fc826f81361801 241950 devel optional linux-headers-2.6.18-3-powerpc64_2.6.18-6_powerpc.deb
 2264940cbf0762b0b952d4ee855067f9 16702478 admin optional linux-image-2.6.18-3-prep_2.6.18-6_powerpc.deb
 31562f1b0ca44a796e5bca855572830a 234690 devel optional linux-headers-2.6.18-3-prep_2.6.18-6_powerpc.deb
 110644e290eb7b6282bdbf4a5eb840f3 3403156 devel optional linux-headers-2.6.18-3-vserver_2.6.18-6_powerpc.deb
 6213e36586e27e26d2d435023708dd7e 17312808 admin optional linux-image-2.6.18-3-vserver-powerpc_2.6.18-6_powerpc.deb
 d71ab66a99a4ffa0f7def3a1646345da 241774 devel optional linux-headers-2.6.18-3-vserver-powerpc_2.6.18-6_powerpc.deb
 143908f0fb7aecc7cc27d8019d740b1d 18650586 admin optional linux-image-2.6.18-3-vserver-powerpc64_2.6.18-6_powerpc.deb
 896eacab58beab63da990e73917f8e82 242392 devel optional linux-headers-2.6.18-3-vserver-powerpc64_2.6.18-6_powerpc.deb

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.5 (GNU/Linux)

iEYEARECAAYFAkVjBOIACgkQLkAIIn9ODhGFdQCfc6XaOiy0izd6ScmwF7Hd2TyQ
ZMYAoPm7CYqJXkDbzA8FKR7KhTMY9Ao8
=EBzM
-----END PGP SIGNATURE-----


--- End Message ---
