| author | Michel Lespinasse <walken@google.com> | 2013-02-27 17:02:44 -0800 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2013-02-27 19:10:09 -0800 |
| commit | ff6a6da60b894d008f704fbeb5bc596f9994b16e (patch) | |
| tree | 84c0fd2850edcd836afee8f9c542d4d4d98602f4 /mm/internal.h | |
| parent | c5a51053cf3b499ddba60a89ab067ea05ad15840 (diff) | |
mm: accelerate munlock() treatment of THP pages
munlock_vma_pages_range() was always incrementing addresses by PAGE_SIZE
at a time. When munlocking THP pages (or the huge zero page), this
resulted in taking the mm->page_table_lock 512 times in a row.

We can do better by making use of the page_mask returned by
follow_page_mask (for the huge zero page case), or the size of the page
munlock_vma_page() operated on (for the true THP page case).
Signed-off-by: Michel Lespinasse <walken@google.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/internal.h')
| mode | file | lines changed |
|---|---|---|
| -rw-r--r-- | mm/internal.h | 2 |

1 file changed, 1 insertion(+), 1 deletion(-)
```diff
diff --git a/mm/internal.h b/mm/internal.h
index 1c0c4cc..8562de0 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -195,7 +195,7 @@ static inline int mlocked_vma_newpage(struct vm_area_struct *vma,
  * must be called with vma's mmap_sem held for read or write, and page locked.
  */
 extern void mlock_vma_page(struct page *page);
-extern void munlock_vma_page(struct page *page);
+extern unsigned int munlock_vma_page(struct page *page);
 
 /*
  * Clear the page's PageMlocked(). This can be useful in a situation where
```
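To make the speed-up described in the commit message concrete, here is a small standalone sketch of the address-advance arithmetic a caller can perform once follow_page_mask() or the new unsigned-int-returning munlock_vma_page() reports a page_mask. This is an illustration based on the commit message, not code from the patch itself; the helper name page_increment, the PAGE_SHIFT value, and the 2MB-THP mask are assumptions for an x86-64-like configuration.

```c
/*
 * Standalone illustration (not kernel code) of how a page_mask lets a
 * munlock loop advance by a whole THP instead of one base page at a time.
 * PAGE_SHIFT = 12 and the 2MB-THP mask value 511 are assumptions.
 */
#include <stdio.h>

#define PAGE_SHIFT 12UL

/* Number of base pages covered in one step starting at 'addr', given the mask. */
static unsigned long page_increment(unsigned long addr, unsigned long page_mask)
{
	/* Step to the end of the (huge) page, but never beyond it. */
	return 1 + (~(addr >> PAGE_SHIFT) & page_mask);
}

int main(void)
{
	unsigned long thp_mask = 511;	/* 2MB THP = 512 base pages, mask 511 */

	/* Starting at a THP boundary: one iteration covers all 512 subpages. */
	printf("%lu\n", page_increment(0x200000UL, thp_mask));	/* 512 */

	/* Starting 3 pages into the THP: the remaining 509 are skipped at once. */
	printf("%lu\n", page_increment(0x203000UL, thp_mask));	/* 509 */

	/* An ordinary 4KB page reports mask 0 and still advances by one page. */
	printf("%lu\n", page_increment(0x1000UL, 0));		/* 1 */

	return 0;
}
```

With a mask of 0 the computation degenerates to the old one-page-at-a-time behaviour, so small pages are unaffected; only THP and huge-zero-page ranges take the larger steps, avoiding the repeated mm->page_table_lock acquisitions.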