author		MinChan Kim <minchan.kim@gmail.com>	2009-02-11 13:04:27 -0800
committer	Linus Torvalds <torvalds@linux-foundation.org>	2009-02-11 14:25:35 -0800
commit		508b9f8efdad123b202b228f71f59feba51e4fb5
tree		0da0d842edb168cfb7b883a3148b147dc631c2b9
parent		02ac597c9b86af49b2016aa98aee20ab59dbf0d2
mm: fix mlocked page counter mismatch
When I tested the following program, I found that the mlocked counter
behaves strangely: some mlocked pages can never be freed.

This is because try_to_unmap_file() doesn't check whether the page is
really mapped by each vma it visits.

That in turn is because the goal of a file's address_space is to find
all processes into which a given interval of the file is mapped; the
walk is keyed on the file's interval, not on the page itself.
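In other words, the reverse-map walk over i_mmap visits every vma that
covers the page's file offset, whether or not this particular page is
installed in that vma's page tables. A minimal sketch of that walk,
paraphrased from the mm/rmap.c of this era (not verbatim):

	pgoff_t pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT);
	struct vm_area_struct *vma;
	struct prio_tree_iter iter;

	spin_lock(&mapping->i_mmap_lock);
	vma_prio_tree_foreach(vma, &iter, &mapping->i_mmap, pgoff, pgoff) {
		/* "vma covers pgoff" does not imply "vma maps this page":
		 * a private COW copy may be mapped there instead. */
	}
	spin_unlock(&mapping->i_mmap_lock);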
So even if the page isn't really mapped by a vma, try_to_unmap_file()
returns SWAP_MLOCK because the vma has VM_LOCKED set, and then calls
try_to_mlock_page(). After this the mlocked counter is increased again.
A COWed anon page in a file-backed vma is one such case (an
illustrative sketch follows below). This patch resolves it.
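For illustration only (my assumption of one way such a page can arise,
not part of the original report): writing to a MAP_PRIVATE file mapping
COWs the touched page, so an anonymous page ends up inside a file-backed
vma, and mlockall() then marks that vma VM_LOCKED.

	/* Hedged sketch: create a COWed anon page in a file-backed,
	 * VM_LOCKED vma. Assumes /etc/hostname exists and is non-empty. */
	#include <fcntl.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		long pagesz = sysconf(_SC_PAGESIZE);
		int fd = open("/etc/hostname", O_RDONLY);
		char *p;

		if (fd < 0)
			return 1;
		p = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE, fd, 0);
		if (p == MAP_FAILED)
			return 1;
		p[0] = 'x';		/* write fault: COW makes the page anon */
		mlockall(MCL_CURRENT);	/* the vma is now VM_LOCKED */
		return 0;		/* exit unmaps; the rmap walk runs */
	}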
-- my test program --

#include <sys/mman.h>

int main(void)
{
	mlockall(MCL_CURRENT);
	return 0;
}
-- before --
root@barrios-target-linux:~# cat /proc/meminfo | egrep 'Mlo|Unev'
Unevictable: 0 kB
Mlocked: 0 kB
-- after --
root@barrios-target-linux:~# cat /proc/meminfo | egrep 'Mlo|Unev'
Unevictable: 8 kB
Mlocked: 8 kB
Signed-off-by: MinChan Kim <minchan.kim@gmail.com>
Acked-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Tested-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Cc: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
 mm/rmap.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
@@ -1072,7 +1072,8 @@ static int try_to_unmap_file(struct page *page, int unlock, int migration)
 	spin_lock(&mapping->i_mmap_lock);
 	vma_prio_tree_foreach(vma, &iter, &mapping->i_mmap, pgoff, pgoff) {
 		if (MLOCK_PAGES && unlikely(unlock)) {
-			if (!(vma->vm_flags & VM_LOCKED))
+			if (!((vma->vm_flags & VM_LOCKED) &&
+					page_mapped_in_vma(page, vma)))
 				continue;	/* must visit all vmas */
 			ret = SWAP_MLOCK;
 		} else {
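For context, the page_mapped_in_vma() helper used by the fix already
existed in mm/rmap.c. A simplified sketch of the check it performs,
paraphrased from the rmap code of this era (not verbatim):

	/* Does *this* page have a pte in this vma? Translate the page's
	 * file offset to a user address within the vma, then probe the
	 * page tables for a pte that maps exactly this page. */
	int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
	{
		unsigned long address = vma_address(page, vma);
		pte_t *pte;
		spinlock_t *ptl;

		if (address == -EFAULT)		/* offset outside this vma */
			return 0;
		pte = page_check_address(page, vma->vm_mm, address, &ptl, 1);
		if (!pte)
			return 0;		/* page not mapped here */
		pte_unmap_unlock(pte, ptl);
		return 1;
	}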