author     KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>   2009-01-07 18:07:58 -0800
committer  Linus Torvalds <torvalds@linux-foundation.org>       2009-01-08 08:31:05 -0800
commit     27a7faa0779dd13729196c1a818c294f44bbd1ee (patch)
tree       30837689bf39eb734a8917f2c912e1b8ac0c28c0 /include/linux/page_cgroup.h
parent     c077719be8e9e6b55702117513d1b5f41d80404a (diff)
memcg: swap cgroup for remembering usage
To account for swap, we need at least one record per swap entry.
This patch adds the following functions (a short usage sketch follows the list):
- swap_cgroup_swapon() .... called from swapon
- swap_cgroup_swapoff() ... called at the end of swapoff
- swap_cgroup_record() .... record information for a swap entry
- lookup_swap_cgroup() .... look up information for a swap entry
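As a rough illustration only (not part of this patch): the sketch below shows the intended call order of the new hooks. The function name, the parameters and the error handling are made up for the example; the real call sites are added to the swap and memcg code by the follow-up swap-accounting patches.

#include <linux/errno.h>
#include <linux/memcontrol.h>
#include <linux/swap.h>
#include <linux/swapops.h>
#include <linux/page_cgroup.h>

/* Illustrative only: shows the intended call order of the new hooks. */
static int swap_cgroup_usage_example(int type, unsigned long maxpages,
				     pgoff_t offset, struct mem_cgroup *mem)
{
	swp_entry_t ent = swp_entry(type, offset);
	struct mem_cgroup *owner;
	int ret;

	/* swapon(2): size and allocate the per-device record table */
	ret = swap_cgroup_swapon(type, maxpages);
	if (ret)
		return ret;

	/* swap-out path: remember which memcg the page was charged to */
	swap_cgroup_record(ent, mem);

	/* swap-in / uncharge path: find the owner again */
	owner = lookup_swap_cgroup(ent);

	/* swapoff(2): drop the per-device table */
	swap_cgroup_swapoff(type);

	return owner == mem ? 0 : -EINVAL;
}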
This patch only implements how the information is recorded; it adds no actual
mechanism for limiting swap usage. The routines use a flat table for record
and lookup. A "smarter" lookup structure such as a radix tree would need a
memory allocation whenever a new record is inserted, but swap-out usually runs
under memory shortage (or when memcg hits its limit), so I used static
allocation: the table is set up once at swapon time. (Dynamic allocation is
probably not very hard, but it would add an extra memory allocation to the
memory-shortage path.)
Note1: This records a pointer per swap entry, which means 8 bytes per entry
       on 64-bit. I think we can reduce this once we create a cgroup id in
       the range 0-65535 or 0-255.
Reported-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Reviewed-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Tested-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Reported-by: Hugh Dickins <hugh@veritas.com>
Reported-by: Balbir Singh <balbir@linux.vnet.ibm.com>
Reported-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Pavel Emelianov <xemul@openvz.org>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'include/linux/page_cgroup.h')
-rw-r--r--   include/linux/page_cgroup.h   35
1 file changed, 35 insertions, 0 deletions
diff --git a/include/linux/page_cgroup.h b/include/linux/page_cgroup.h
index 1e6d34b..d754b2d 100644
--- a/include/linux/page_cgroup.h
+++ b/include/linux/page_cgroup.h
@@ -105,4 +105,39 @@ static inline void page_cgroup_init(void)
 }
 
 #endif
+
+#ifdef CONFIG_CGROUP_MEM_RES_CTLR_SWAP
+#include <linux/swap.h>
+extern struct mem_cgroup *
+swap_cgroup_record(swp_entry_t ent, struct mem_cgroup *mem);
+extern struct mem_cgroup *lookup_swap_cgroup(swp_entry_t ent);
+extern int swap_cgroup_swapon(int type, unsigned long max_pages);
+extern void swap_cgroup_swapoff(int type);
+#else
+#include <linux/swap.h>
+
+static inline
+struct mem_cgroup *swap_cgroup_record(swp_entry_t ent, struct mem_cgroup *mem)
+{
+	return NULL;
+}
+
+static inline
+struct mem_cgroup *lookup_swap_cgroup(swp_entry_t ent)
+{
+	return NULL;
+}
+
+static inline int
+swap_cgroup_swapon(int type, unsigned long max_pages)
+{
+	return 0;
+}
+
+static inline void swap_cgroup_swapoff(int type)
+{
+	return;
+}
+
+#endif
 #endif