author:    Martin Schwidefsky <schwidefsky@de.ibm.com>  2008-08-01 16:39:12 +0200
committer: Martin Schwidefsky <schwidefsky@de.ibm.com>  2008-08-01 16:39:30 +0200
commit:    a4b526b3ba6353cd89a38e41da48ed83b0ead16f
tree:      362842354bdcde59feede51cbeefc9b8833aacf7
parent:    934b2857cc576ae53c92a66e63fce7ddcfa74691
[S390] Optimize storage key operations for anon pages
For anonymous pages without a swap cache backing, the check for the
physical dirty bit in page_remove_rmap is unnecessary. The instructions
used to check and reset the dirty bit are expensive, and removing the
check noticeably speeds up process exit. In addition, the clearing of
the dirty bit in __SetPageUptodate is pointless as well. With these two
changes there is no storage key operation for an anonymous page anymore
if it does not hit the swap space. A micro benchmark which repeatedly
executes an empty shell script gets about 5% faster.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Diffstat (limited to 'mm/rmap.c')
-rw-r--r--  mm/rmap.c | 3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/rmap.c b/mm/rmap.c
index 99bc3f9..94a5246 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -667,7 +667,8 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma)
* Leaving it set also helps swapoff to reinstate ptes
* faster for those pages still in swapcache.
*/
- if (page_test_dirty(page)) {
+ if ((!PageAnon(page) || PageSwapCache(page)) &&
+ page_test_dirty(page)) {
page_clear_dirty(page);
set_page_dirty(page);
}
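The point of the new guard is short-circuit ordering: the cheap page-flag tests (`PageAnon`, `PageSwapCache`) run first, so the expensive storage-key instruction behind `page_test_dirty` is never executed for an anonymous page that has not entered the swap cache. The following stand-alone C model is a sketch of that behavior, not kernel code; `struct page_model`, `needs_set_dirty`, and the `key_probes` counter are hypothetical names introduced only to make the short-circuit observable.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-alone model of the patched condition in
 * page_remove_rmap(). The fields mirror the kernel predicates:
 * anon ~ PageAnon(), swapcache ~ PageSwapCache(). */
struct page_model {
    bool anon;
    bool swapcache;
    bool key_dirty;   /* hardware storage-key dirty bit */
};

static int key_probes; /* counts calls to the expensive probe */

/* Models S390 page_test_dirty(): an expensive storage-key read. */
static bool page_test_dirty(const struct page_model *p)
{
    key_probes++;
    return p->key_dirty;
}

/* Models the patched check: C's && short-circuits, so for an
 * anonymous, non-swapcache page the probe is skipped entirely. */
static bool needs_set_dirty(const struct page_model *p)
{
    return (!p->anon || p->swapcache) && page_test_dirty(p);
}
```

For an anonymous page outside the swap cache, `needs_set_dirty` returns false without ever touching the (modeled) storage key, which is exactly the saving the commit message attributes to faster process exit.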