author		Dave Chinner <dchinner@redhat.com>	2010-08-24 11:40:03 +1000
committer	Dave Chinner <david@fromorbit.com>	2010-08-24 11:40:03 +1000
commit		a44f13edf0ebb4e41942d0f16ca80489dcf6659d
tree		42bcbee56a62851e969292033efd600cced80ca5 /fs/xfs/xfs_log.c
parent		1a387d3be2b30c90f20d49a3497a8fc0693a9d18
xfs: Reduce log force overhead for delayed logging
Delayed logging adds some serialisation to the log force process to
ensure that it does not dereference a bad commit context structure
when determining if a CIL push is necessary or not. It does this by
grabbing the CIL context lock exclusively, then dropping it before
pushing the CIL if necessary. This causes serialisation of all log
forces and pushes regardless of whether a force is necessary or not.
As a result fsync heavy workloads (like dbench) can be significantly
slower with delayed logging than without.
To avoid this penalty, copy the current sequence from the context to
the CIL structure when they are swapped. This allows us to do
unlocked checks on the current sequence without having to worry
about dereferencing context structures that may have already been
freed. Hence we can remove the CIL context locking in the forcing
code and only call into the push code if the current context matches
the sequence we need to force.
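As a rough illustration of that idea, the sketch below shows the force-path side in plain user-space C. This is a minimal, hedged sketch and not the actual XFS code: the names (example_cil, example_ctx, current_sequence, example_cil_push_seq) are invented for the example, and the real structures and locking in the kernel differ. The point it demonstrates is that the sequence copy lives in the long-lived CIL structure, so the force path can read it without taking the context lock and without touching a context that may already have been freed.

	#include <pthread.h>
	#include <stdatomic.h>

	/* Minimal stand-ins for the real CIL structures; all names invented. */
	struct example_ctx {
		unsigned long		sequence;	/* checkpoint sequence of this context */
		/* ... log vectors, ticket, etc. in the real structure ... */
	};

	struct example_cil {
		pthread_mutex_t		ctx_lock;	/* serialises context swaps and pushes */
		struct example_ctx	*ctx;		/* current context, freed after commit */
		atomic_ulong		current_sequence; /* copy updated at each context swap */
	};

	static void example_cil_push_seq(struct example_cil *cil, unsigned long seq);

	/*
	 * Force path: the sequence is read from the long-lived CIL structure,
	 * so no context lock is needed and no freed context is dereferenced.
	 */
	static void example_cil_force(struct example_cil *cil)
	{
		unsigned long seq = atomic_load(&cil->current_sequence);

		example_cil_push_seq(cil, seq);	/* push only this sequence */
	}

The push side of the sketch is shown after the next paragraph.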
By passing the sequence into the push code, we can check the
sequence again once we have the CIL lock held exclusive and abort if
the sequence has already been pushed. This avoids a lock round-trip
and unnecessary CIL pushes when we have racing push calls.
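The push side of the same sketch (again an illustration with assumed names, not the real push code) re-checks the requested sequence once the context lock is held and bails out if that sequence has already been pushed, which is what removes the extra lock round-trip for racing callers:

	/*
	 * Push path of the sketch: re-check the requested sequence under the
	 * context lock and abort if it has already been pushed (i.e. the
	 * context was swapped and the current sequence moved past it).
	 */
	static void example_cil_push_seq(struct example_cil *cil, unsigned long seq)
	{
		pthread_mutex_lock(&cil->ctx_lock);

		if (seq < cil->ctx->sequence) {
			/* a racing force already pushed this sequence; nothing to do */
			pthread_mutex_unlock(&cil->ctx_lock);
			return;
		}

		/*
		 * ... here the real code would format the committed items, swap
		 * in a new context with the next sequence, update the sequence
		 * copy, and submit the checkpoint for 'seq' to the log ...
		 */

		pthread_mutex_unlock(&cil->ctx_lock);
	}

In the diff below, the call sites in fs/xfs/xfs_log.c switch from xlog_cil_push()/xlog_cil_push_lsn() to the new xlog_cil_force()/xlog_cil_force_lsn() entry points that perform this sequence-based forcing.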
The result is that the regression in dbench performance goes away -
this change improves dbench performance on a ramdisk from ~2100MB/s
to ~2500MB/s. This compares favourably to not using delayed logging
which returns ~2500MB/s for the same workload.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Diffstat (limited to 'fs/xfs/xfs_log.c')
-rw-r--r--	fs/xfs/xfs_log.c	7
1 file changed, 4 insertions, 3 deletions
diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c
index 925d572..33f718f 100644
--- a/fs/xfs/xfs_log.c
+++ b/fs/xfs/xfs_log.c
@@ -3015,7 +3015,8 @@ _xfs_log_force(
 
 	XFS_STATS_INC(xs_log_force);
 
-	xlog_cil_push(log, 1);
+	if (log->l_cilp)
+		xlog_cil_force(log);
 
 	spin_lock(&log->l_icloglock);
@@ -3167,7 +3168,7 @@ _xfs_log_force_lsn(
 	XFS_STATS_INC(xs_log_force);
 
 	if (log->l_cilp) {
-		lsn = xlog_cil_push_lsn(log, lsn);
+		lsn = xlog_cil_force_lsn(log, lsn);
 		if (lsn == NULLCOMMITLSN)
 			return 0;
 	}
@@ -3724,7 +3725,7 @@ xfs_log_force_umount(
 	 * call below.
 	 */
 	if (!logerror && (mp->m_flags & XFS_MOUNT_DELAYLOG))
-		xlog_cil_push(log, 1);
+		xlog_cil_force(log);
 
 	/*
 	 * We must hold both the GRANT lock and the LOG lock,