author | Mark Fasheh <mfasheh@suse.com> | 2010-04-05 18:17:14 -0700
---|---|---
committer | Joel Becker <joel.becker@oracle.com> | 2010-05-05 18:18:07 -0700
commit | 6b82021b9e91cd689fdffadbcdb9a42597bbe764
tree | ac4235e792e74a2e60a41e95d62965b7ed4b3232 /fs/ocfs2/dcache.h
parent | 73c8a80003d13be54e2309865030404441075182
ocfs2: increase the default size of local alloc windows
I have observed that the current size of 8M gives us pretty poor
fragmentation on multi-threaded workloads which do lots of writes.

Generally, I can increase the size of local alloc windows and observe a
marked decrease in fragmentation, even up to and beyond window sizes of 512
megabytes. This makes sense for a couple of reasons: a larger local alloc
means more room for reservation windows, and on multi-node workloads the
larger local alloc helps as well because we don't have to do window slides
as often.
Also, I removed the OCFS2_DEFAULT_LOCAL_ALLOC_SIZE constant as it is no
longer used and the comment above it was out of date.
To test fragmentation, I used a workload which launched 4 threads that did
4k writes into a series of about 140 alternating files.
With resv_level=2 and a 4k/4k file system, I observed the following average
fragmentation for various localalloc= parameters:

localalloc=    avg. fragmentation
     8                 48
    32                 16
    64                 10
   120                  7
On larger cluster sizes, the difference is more dramatic.
The new default size tops out at 256M, which we'll only get for cluster
sizes of 32K and above.
Signed-off-by: Mark Fasheh <mfasheh@suse.com>
Signed-off-by: Joel Becker <joel.becker@oracle.com>
Diffstat (limited to 'fs/ocfs2/dcache.h')
0 files changed, 0 insertions, 0 deletions