| author    | Nick Piggin <nickpiggin@yahoo.com.au> | 2005-09-27 21:45:18 -0700 |
| committer | Linus Torvalds <torvalds@g5.osdl.org> | 2005-09-28 07:46:40 -0700 |
| commit    | 8b1f3124618b54cf125dea3a074b9cf469117723 (patch) | |
| tree      | 19ef8a7fe9cc5b1c46dc973ea151edab4aba2b8a /include/asm-generic | |
| parent    | 95001ee9256df846e374f116c92ca8e0beec1527 (diff) | |
[PATCH] mm: move_pte to remap ZERO_PAGE
Move the ZERO_PAGE remapping complexity to the move_pte macro in
asm-generic, have it conditionally depend on
__HAVE_ARCH_MULTIPLE_ZERO_PAGE, which gets defined for MIPS.
For architectures without __HAVE_ARCH_MULTIPLE_ZERO_PAGE, move_pte becomes
a noop.
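For context, the caller side of this macro is mm/mremap.c's move_ptes(): the pte is cleared out of the old slot, passed through move_pte() so that an architecture with address-dependent zero pages can switch a ZERO_PAGE mapping over to the zero page for the new address, and then installed at the new address. A rough sketch of that call site follows (not part of this diff; surrounding variable names are assumed from the mremap code of this era):

	/*
	 * Sketch of mm/mremap.c:move_ptes() -- illustrative only,
	 * names (vma, new_vma, old_pte, new_pte, mm) assumed.
	 */
	pte_t pte;

	pte = ptep_clear_flush(vma, old_addr, old_pte);	/* take pte out of the old slot */
	/* remap an old-address ZERO_PAGE to the new-address ZERO_PAGE where needed */
	pte = move_pte(pte, new_vma->vm_page_prot, old_addr, new_addr);
	set_pte_at(mm, new_addr, new_pte, pte);		/* install at the new address */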
From: Hugh Dickins <hugh@veritas.com>
Fix a nasty little bug we've missed in Nick's mremap move ZERO_PAGE patch.
The "pte" at that point may be a swap entry or a pte_file entry: we must
check pte_present before perhaps corrupting such an entry.
Patch below against 2.6.14-rc2-mm1, but the same bug is in 2.6.14-rc2's
mm/mremap.c, and more dangerous there since it's affecting all arches: I
think the safest course is to send Nick's patch and Yoichi's build fix and
this fix (build tested) on to Linus - so only MIPS can be affected.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
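To make the point above concrete: pte_pfn() and pte_page() are only meaningful for present ptes. A non-present pte slot may instead encode a swap entry or a pte_file offset, and rewriting such a slot with mk_pte() would destroy that information. Roughly (illustrative only, not taken verbatim from either tree):

	/* What the unguarded form effectively did -- no pte_present() check,
	 * so a swap or pte_file entry could be misread and overwritten: */
	if (pfn_valid(pte_pfn(pte)) && pte_page(pte) == ZERO_PAGE(old_addr))
		pte = mk_pte(ZERO_PAGE(new_addr), prot);

	/* The guarded form used by this patch's move_pte(): non-present
	 * entries (swap, pte_file) fall through untouched. */
	if (pte_present(pte) && pfn_valid(pte_pfn(pte)) &&
			pte_page(pte) == ZERO_PAGE(old_addr))
		pte = mk_pte(ZERO_PAGE(new_addr), prot);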
Diffstat (limited to 'include/asm-generic')
| -rw-r--r-- | include/asm-generic/pgtable.h | 13 |
1 file changed, 13 insertions(+), 0 deletions(-)
diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index f86c1e5..ff28c8b 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -158,6 +158,19 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long address
 #define lazy_mmu_prot_update(pte)	do { } while (0)
 #endif
 
+#ifndef __HAVE_ARCH_MULTIPLE_ZERO_PAGE
+#define move_pte(pte, prot, old_addr, new_addr)	(pte)
+#else
+#define move_pte(pte, prot, old_addr, new_addr)	\
+({								\
+	pte_t newpte = (pte);					\
+	if (pte_present(pte) && pfn_valid(pte_pfn(pte)) &&	\
+			pte_page(pte) == ZERO_PAGE(old_addr))	\
+		newpte = mk_pte(ZERO_PAGE(new_addr), (prot));	\
+	newpte;							\
+})
+#endif
+
 /*
  * When walking page tables, get the address of the next boundary,
  * or the end address of the range if that comes earlier.  Although no
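With this in place, an architecture opts in simply by defining __HAVE_ARCH_MULTIPLE_ZERO_PAGE in its own pgtable.h before including asm-generic/pgtable.h; every other architecture gets the no-op move_pte(). Roughly how the MIPS headers of this era expressed it (a sketch, not part of this commit) -- MIPS keeps several differently-colored zero pages and picks one by virtual address to avoid cache aliasing:

	/* include/asm-mips/pgtable.h -- sketch, assumed declarations */
	extern unsigned long empty_zero_page;
	extern unsigned long zero_page_mask;

	/* pick the zero page whose cache color matches the virtual address */
	#define ZERO_PAGE(vaddr) \
		(virt_to_page(empty_zero_page + (((unsigned long)(vaddr)) & zero_page_mask)))

	#define __HAVE_ARCH_MULTIPLE_ZERO_PAGE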