path: root/mm/filemap.c
author	Andi Kleen <ak@linux.intel.com>	2011-05-24 17:12:29 -0700
committer	Linus Torvalds <torvalds@linux-foundation.org>	2011-05-25 08:39:26 -0700
commit	207d04baa3591a354711e863dd90087fc75873b3 (patch)
tree	17498d55af5b2a588e7e7111e927a099236ca770 /mm/filemap.c
parent	275b12bf5486f6f531111fd3d7dbbf01df427cfe (diff)
readahead: reduce unnecessary mmap_miss increases
The original INT_MAX is too large; reduce it to:

- avoid unnecessarily dirtying/bouncing the cache line
- restore mmap read-around faster on a changed access pattern

Background: in the mosbench exim benchmark, which does multi-threaded page
faults on a shared struct file, the ra->mmap_miss updates were found to cause
excessive cache line bouncing on tmpfs. The ra state updates are needless for
tmpfs because it disables readahead entirely
(shmem_backing_dev_info.ra_pages == 0).

Tested-by: Tim Chen <tim.c.chen@intel.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/filemap.c')
-rw-r--r--	mm/filemap.c	3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index c974a28..e513139 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1566,7 +1566,8 @@ static void do_sync_mmap_readahead(struct vm_area_struct *vma,
return;
}
- if (ra->mmap_miss < INT_MAX)
+ /* Avoid banging the cache line if not needed */
+ if (ra->mmap_miss < MMAP_LOTSAMISS * 10)
ra->mmap_miss++;
/*