| author | Michel Lespinasse <walken@google.com> | 2011-01-13 15:46:09 -0800 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2011-01-13 17:32:35 -0800 |
| commit | 5ecfda041e4b4bd858d25bbf5a16c2a6c06d7272 (patch) | |
| tree | e6c3e7dac64a5e45b48ab7836318752202579a17 /mm | |
| parent | 72ddc8f72270758951ccefb7d190f364d20215ab (diff) | |
mlock: avoid dirtying pages and triggering writeback
When faulting in pages for mlock(), we want to break COW for anonymous or
file pages within VM_WRITABLE, non-VM_SHARED vmas. However, there is no
need to write-fault into VM_SHARED vmas since shared file pages can be
mlocked first and dirtied later, when/if they actually get written to.
Skipping the write fault is desirable, as we don't want to unnecessarily
cause these pages to be dirtied and queued for writeback.
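
As an illustration of the behavior described above (not part of the commit), here is a minimal userspace sketch of the two cases the patch distinguishes. The file path is hypothetical, and the comments restate the intended effect described in the message.

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	/* Hypothetical scratch file, used only to back the mappings. */
	int fd = open("/tmp/mlock-demo", O_RDWR | O_CREAT, 0600);
	if (fd < 0 || ftruncate(fd, 4096) < 0)
		return 1;

	/*
	 * Writable shared file mapping: no COW can happen here, so after
	 * this patch mlock() faults the page in without a write fault,
	 * leaving it clean until it is actually written to.
	 */
	void *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			    MAP_SHARED, fd, 0);
	if (shared == MAP_FAILED || mlock(shared, 4096) < 0)
		return 1;

	/*
	 * Writable private mapping: COW still has to be broken, so mlock()
	 * keeps using a write fault for this case.
	 */
	void *priv = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE, fd, 0);
	if (priv == MAP_FAILED || mlock(priv, 4096) < 0)
		return 1;

	munlock(shared, 4096);
	munlock(priv, 4096);
	close(fd);
	return 0;
}
```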
Signed-off-by: Michel Lespinasse <walken@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Kosaki Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Nick Piggin <npiggin@kernel.dk>
Cc: Theodore Tso <tytso@google.com>
Cc: Michael Rubin <mrubin@google.com>
Cc: Suleiman Souhlal <suleiman@google.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm')

| mode | file | lines changed |
|---|---|---|
| -rw-r--r-- | mm/memory.c | 7 |
| -rw-r--r-- | mm/mlock.c | 7 |

2 files changed, 12 insertions(+), 2 deletions(-)
```diff
diff --git a/mm/memory.c b/mm/memory.c
index 9144fae..b8f97b8 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3299,7 +3299,12 @@ int make_pages_present(unsigned long addr, unsigned long end)
 	vma = find_vma(current->mm, addr);
 	if (!vma)
 		return -ENOMEM;
-	write = (vma->vm_flags & VM_WRITE) != 0;
+	/*
+	 * We want to touch writable mappings with a write fault in order
+	 * to break COW, except for shared mappings because these don't COW
+	 * and we would not want to dirty them for nothing.
+	 */
+	write = (vma->vm_flags & (VM_WRITE | VM_SHARED)) == VM_WRITE;
 	BUG_ON(addr >= end);
 	BUG_ON(end > vma->vm_end);
 	len = DIV_ROUND_UP(end, PAGE_SIZE) - addr/PAGE_SIZE;
diff --git a/mm/mlock.c b/mm/mlock.c
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -171,7 +171,12 @@ static long __mlock_vma_pages_range(struct vm_area_struct *vma,
 	VM_BUG_ON(!rwsem_is_locked(&mm->mmap_sem));
 
 	gup_flags = FOLL_TOUCH | FOLL_GET;
-	if (vma->vm_flags & VM_WRITE)
+	/*
+	 * We want to touch writable mappings with a write fault in order
+	 * to break COW, except for shared mappings because these don't COW
+	 * and we would not want to dirty them for nothing.
+	 */
+	if ((vma->vm_flags & (VM_WRITE | VM_SHARED)) == VM_WRITE)
 		gup_flags |= FOLL_WRITE;
 
 	/* We don't try to access the guard page of a stack vma */
```
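
For reference (again not part of the patch), the new test `(vm_flags & (VM_WRITE | VM_SHARED)) == VM_WRITE` is true only for writable, non-shared mappings, i.e. exactly the mappings where COW has to be broken. A small standalone sketch of the mask logic, using stand-in flag values rather than the real definitions from include/linux/mm.h:

```c
#include <stdio.h>

/* Stand-in values for illustration; the real flags live in include/linux/mm.h. */
#define VM_WRITE  0x2UL
#define VM_SHARED 0x8UL

/* True only when VM_WRITE is set and VM_SHARED is clear. */
static int needs_write_fault(unsigned long vm_flags)
{
	return (vm_flags & (VM_WRITE | VM_SHARED)) == VM_WRITE;
}

int main(void)
{
	printf("private writable:  %d\n", needs_write_fault(VM_WRITE));              /* 1 */
	printf("shared writable:   %d\n", needs_write_fault(VM_WRITE | VM_SHARED));  /* 0 */
	printf("private read-only: %d\n", needs_write_fault(0));                     /* 0 */
	return 0;
}
```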