author		KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>	2009-01-07 18:08:17 -0800
committer	Linus Torvalds <torvalds@linux-foundation.org>	2009-01-08 08:31:07 -0800
commit		eeee9a8cd1e93c8b94e7788790fa9e2f8910c779 (patch)
tree		2ef0a61a4ce12410ecfa48014a0181c03e73a3cb /mm/vmscan.c
parent		c9f299d9862deadf9fbee3ca28d915fdb006975a (diff)
mm: make get_scan_ratio() safe for memcg
Currently, get_scan_ratio() always calculates the balancing value for global
reclaim, and memcg reclaim doesn't use it; therefore it has no
scan_global_lru() condition.

However, we plan to expand get_scan_ratio() to be usable for memcg too,
later. The code in get_scan_ratio() that depends on global reclaim is
therefore moved inside an explicit scan_global_lru() condition.

This patch doesn't have any functional change.

Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Balbir Singh <balbir@in.ibm.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
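[Editor's note] For context: scan_global_lru() distinguishes global reclaim
from memcg-targeted reclaim by whether the scan_control carries a target
mem_cgroup. In this era of mm/vmscan.c it is defined roughly as follows (a
sketch from the 2.6.28-era source, not quoted from this commit):

/* Global reclaim is a scan with no target memory cgroup. */
#ifdef CONFIG_CGROUP_MEM_RES_CTLR
#define scan_global_lru(sc)	(!(sc)->mem_cgroup)
#else
#define scan_global_lru(sc)	(1)
#endif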
Diffstat (limited to 'mm/vmscan.c')
-rw-r--r--	mm/vmscan.c	15
1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 6827d35..e2b31a5 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1376,13 +1376,16 @@ static void get_scan_ratio(struct zone *zone, struct scan_control *sc,
 		zone_nr_pages(zone, sc, LRU_INACTIVE_ANON);
 	file  = zone_nr_pages(zone, sc, LRU_ACTIVE_FILE) +
 		zone_nr_pages(zone, sc, LRU_INACTIVE_FILE);
-	free  = zone_page_state(zone, NR_FREE_PAGES);
 
-	/* If we have very few page cache pages, force-scan anon pages. */
-	if (unlikely(file + free <= zone->pages_high)) {
-		percent[0] = 100;
-		percent[1] = 0;
-		return;
+	if (scan_global_lru(sc)) {
+		free  = zone_page_state(zone, NR_FREE_PAGES);
+		/* If we have very few page cache pages,
+		   force-scan anon pages. */
+		if (unlikely(file + free <= zone->pages_high)) {
+			percent[0] = 100;
+			percent[1] = 0;
+			return;
+		}
 	}
 
 	/*
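[Editor's note] To make the guarded control flow concrete outside the kernel,
here is a minimal user-space model of the shortcut above. It is an
illustration only: the names and numbers (scan_control_model,
get_scan_ratio_model, the 50/50 fallback) are hypothetical stand-ins, not
kernel code.

#include <stdio.h>
#include <stdbool.h>

struct scan_control_model {
	bool global_reclaim;		/* models scan_global_lru(sc) */
};

/* Decide the anon/file scan split; percent[0] is anon, percent[1] is file. */
static void get_scan_ratio_model(const struct scan_control_model *sc,
				 unsigned long file, unsigned long free,
				 unsigned long pages_high,
				 unsigned int percent[2])
{
	/* The shortcut from the patch: applied only for global reclaim,
	 * since free pages and the pages_high watermark are zone-wide
	 * quantities with no per-cgroup meaning. */
	if (sc->global_reclaim && file + free <= pages_high) {
		percent[0] = 100;	/* force-scan anon pages */
		percent[1] = 0;
		return;
	}
	/* Stand-in for the recent_scanned/recent_rotated balancing that
	 * follows in the real function (elided here). */
	percent[0] = 50;
	percent[1] = 50;
}

int main(void)
{
	struct scan_control_model global = { .global_reclaim = true };
	struct scan_control_model memcg  = { .global_reclaim = false };
	unsigned int p[2];

	get_scan_ratio_model(&global, 10, 20, 100, p);
	printf("global reclaim: anon %u%%, file %u%%\n", p[0], p[1]);

	get_scan_ratio_model(&memcg, 10, 20, 100, p);
	printf("memcg reclaim:  anon %u%%, file %u%%\n", p[0], p[1]);
	return 0;
}

Run as-is, the global case takes the force-scan branch (10 + 20 <= 100)
while the memcg case falls through to the balancing path, which is the
behavioral split the scan_global_lru() guard introduces.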