author      Dan Magenheimer <dan.magenheimer@oracle.com>    2012-01-25 16:58:46 -0800
committer   Andreas Blaesius <skate4life@gmx.de>            2016-06-05 21:21:56 +0200
commit      0b237f39e49571f6ae1ed65154858800eb1831b4 (patch)
tree        87c56f2db8140ac88eee66de672133509206b5f1 /mm
parent      fe6357ae0e88cbe9453e30820151921e68f43c1c (diff)
mm: implement WasActive page flag (for improving cleancache)
(Feedback welcome if there is a different/better way to do this
without using a page flag!)
Since about 2.6.27, the page replacement algorithm maintains
an "active" bit to help decide which pages are most eligible
to reclaim, see http://linux-mm.org/PageReplacementDesign
This "active" information is also useful to cleancache but is lost
by the time that cleancache has the opportunity to preserve the
pageful of data. This patch adds a new page flag "WasActive" to
retain the state. The flag may possibly be useful elsewhere.
It is up to each cleancache backend to utilize the bit as
it desires. The matching patch for zcache is included here
for clarification/discussion purposes, though it will need to
go through GregKH and the staging tree.
The patch resolves issues reported with cleancache which occur
especially during streaming workloads on older processors,
see https://lkml.org/lkml/2011/8/17/351
Signed-off-by: Dan Magenheimer <dan.magenheimer@oracle.com>
Conflicts:
include/linux/page-flags.h
Change-Id: I0fcb2302a7b9c5e66db005229f679baee90f262f
Diffstat (limited to 'mm')
-rw-r--r--   mm/vmscan.c   4
1 file changed, 4 insertions(+), 0 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 5bd0dcb..692bec9 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -627,6 +627,8 @@ void putback_lru_page(struct page *page)
 	int was_unevictable = PageUnevictable(page);
 	VM_BUG_ON(PageLRU(page));
+	if (active)
+		SetPageWasActive(page);
 redo:
 	ClearPageUnevictable(page);
@@ -1284,6 +1286,7 @@ static unsigned long clear_active_flags(struct list_head *page_list,
 		if (PageActive(page)) {
 			lru += LRU_ACTIVE;
 			ClearPageActive(page);
+			SetPageWasActive(page);
 			nr_active += numpages;
 		}
 		if (count)
@@ -1705,6 +1708,7 @@ static void shrink_active_list(unsigned long nr_pages, struct zone *zone,
 		}
 		ClearPageActive(page);	/* we are de-activating */
+		SetPageWasActive(page);
 		list_add(&page->lru, &l_inactive);
 	}