author     Hugh Dickins <hugh.dickins@tiscali.co.uk>        2009-12-14 17:59:30 -0800
committer  Linus Torvalds <torvalds@linux-foundation.org>   2009-12-15 08:53:19 -0800
commit     407f9c8b0889ced1dbe2f9157e4e60c61329d5c9 (patch)
tree       30c0c9ad224a73de621bc23909cd89cd81eda1aa /mm
parent     80e148226028257ec0a1909d99b2c40d0ffe17f2 (diff)
ksm: mem cgroup charge swapin copy
But ksm swapping does require one small change in mem cgroup handling. When do_swap_page()'s call to ksm_might_need_to_copy() does indeed substitute a duplicate page to accommodate a different anon_vma (or a different index), that page escaped mem cgroup accounting, because of the !PageSwapCache check in mem_cgroup_try_charge_swapin().

That was returning success without charging, on the assumption that pte_same() would fail after, which is not the case here. Originally I proposed that success, so that an unshrinkable mem cgroup at its limit would not fail unnecessarily; but that's a minor point, and there are plenty of other places where we may fail an overallocation which might later prove unnecessary. So just go ahead and do what all the other exceptions do: proceed to charge current mm.

Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Cc: Izik Eidus <ieidus@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Chris Wright <chrisw@redhat.com>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
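For context, a simplified sketch of the caller-side sequence in do_swap_page() that the message refers to, reconstructed from 2.6.33-era mm code rather than quoted verbatim from this tree (identifier names and error paths are approximate). It shows why the page handed to mem_cgroup_try_charge_swapin() can be a KSM-substituted copy that is not PageSwapCache, yet still passes the later pte_same() test and so must be charged:

	/*
	 * Approximate sketch of do_swap_page() around the swapin charge:
	 * the page found in (or read into) the swap cache may be replaced
	 * by a private copy when KSM cannot map the shared page here.
	 */
	page = lookup_swap_cache(entry);
	/* ... swapin_readahead() and error handling omitted ... */
	lock_page(page);

	/* May return the same page, a freshly allocated copy, or NULL on OOM */
	page = ksm_might_need_to_copy(page, vma, address);
	if (!page) {
		ret = VM_FAULT_OOM;
		goto out;
	}

	/*
	 * A KSM-substituted copy is a new anonymous page, so
	 * !PageSwapCache(page).  Before this patch the charge was silently
	 * skipped in that case, yet the pte_same() check further down still
	 * succeeds and the copy gets mapped, leaving it unaccounted in the
	 * mem cgroup.
	 */
	if (mem_cgroup_try_charge_swapin(mm, page, GFP_KERNEL, &ptr)) {
		ret = VM_FAULT_OOM;
		goto out_page;
	}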
Diffstat (limited to 'mm')
-rw-r--r--  mm/memcontrol.c  7
1 file changed, 4 insertions, 3 deletions
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index c31a310..e0c2066 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1737,11 +1737,12 @@ int mem_cgroup_try_charge_swapin(struct mm_struct *mm,
goto charge_cur_mm;
/*
* A racing thread's fault, or swapoff, may have already updated
- * the pte, and even removed page from swap cache: return success
- * to go on to do_swap_page()'s pte_same() test, which should fail.
+ * the pte, and even removed page from swap cache: in those cases
+ * do_swap_page()'s pte_same() test will fail; but there's also a
+ * KSM case which does need to charge the page.
*/
if (!PageSwapCache(page))
- return 0;
+ goto charge_cur_mm;
mem = try_get_mem_cgroup_from_swapcache(page);
if (!mem)
goto charge_cur_mm;
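For completeness, a hedged reconstruction of the charge_cur_mm tail that the new goto reaches; it falls outside the hunk above and is recalled from 2.6.33-era memcontrol.c, so the exact __mem_cgroup_try_charge() argument list may differ in this tree. The effect is simply to charge the page against the faulting task's mm, as the other exceptional cases already do:

charge_cur_mm:
	if (unlikely(!mm))
		mm = &init_mm;
	/* charge current mm; __mem_cgroup_try_charge() arguments approximate */
	return __mem_cgroup_try_charge(mm, mask, ptr, true, page);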