author | Frederic Weisbecker <fweisbec@gmail.com> | 2009-05-16 18:12:08 +0200 |
committer | Frederic Weisbecker <fweisbec@gmail.com> | 2009-09-14 07:18:24 +0200 |
commit | c72e05756b900b3be24cd73a16de52bab80984c0 (patch) |
tree | 4fc35ad9efc1a6a9ca14baa3612e551fb4da793e /include/linux/reiserfs_fs.h |
parent | 2ac626955ed62ee8596f00581f959cc86e6198d1 (diff) |
download | kernel_samsung_crespo-c72e05756b900b3be24cd73a16de52bab80984c0.zip kernel_samsung_crespo-c72e05756b900b3be24cd73a16de52bab80984c0.tar.gz kernel_samsung_crespo-c72e05756b900b3be24cd73a16de52bab80984c0.tar.bz2 |
kill-the-bkl/reiserfs: acquire the inode mutex safely
During a pathname lookup, an inode mutex can be acquired
in do_lookup(), which calls reiserfs_lookup(), which in turn
acquires the write lock.
On the other side, reiserfs_fill_super() can acquire the write lock
and then call reiserfs_lookup_privroot(), which can acquire an
inode mutex (that of the mount point's root).
We therefore theoretically risk an AB-BA lock inversion that could
lead to a deadlock.
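
To make the inversion concrete, here is a minimal C sketch of the two lock orders, with the real call chains collapsed into two illustrative functions. The function names and bodies are hypothetical simplifications, not the actual kernel paths; only reiserfs_write_lock()/reiserfs_write_unlock() and i_mutex correspond to real identifiers.

```c
#include <linux/fs.h>
#include <linux/mutex.h>
#include <linux/reiserfs_fs.h>

/* Path A (lookup): inode mutex first, then the write lock,
 * as in do_lookup() -> reiserfs_lookup(). */
static void path_a_lookup(struct inode *dir, struct super_block *s)
{
	mutex_lock(&dir->i_mutex);
	reiserfs_write_lock(s);
	/* ... lookup work ... */
	reiserfs_write_unlock(s);
	mutex_unlock(&dir->i_mutex);
}

/* Path B (mount): write lock first, then an inode mutex,
 * as in reiserfs_fill_super() -> reiserfs_lookup_privroot(). */
static void path_b_mount(struct inode *root, struct super_block *s)
{
	reiserfs_write_lock(s);
	mutex_lock(&root->i_mutex);
	/* ... privroot lookup work ... */
	mutex_unlock(&root->i_mutex);
	reiserfs_write_unlock(s);
}
```

If one task runs path A while another runs path B, each can end up holding one lock while waiting for the other: the classic AB-BA deadlock.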
As with other lock dependencies found since the BKL-to-mutex
conversion, the fix is to use reiserfs_mutex_lock_safe() (shown in
the diff below), which drops the mutex's dependency on the write lock.
[ Impact: fix a possible deadlock with reiserfs ]
Cc: Jeff Mahoney <jeffm@suse.com>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Alexander Beregalov <a.beregalov@gmail.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Diffstat (limited to 'include/linux/reiserfs_fs.h')
-rw-r--r-- | include/linux/reiserfs_fs.h | 35 |
1 file changed, 35 insertions, 0 deletions
```diff
diff --git a/include/linux/reiserfs_fs.h b/include/linux/reiserfs_fs.h
index 508fb52..a498d92 100644
--- a/include/linux/reiserfs_fs.h
+++ b/include/linux/reiserfs_fs.h
@@ -63,6 +63,41 @@ int reiserfs_write_lock_once(struct super_block *s);
 void reiserfs_write_unlock_once(struct super_block *s, int lock_depth);
 
 /*
+ * Several mutexes depend on the write lock.
+ * However sometimes we want to relax the write lock while we hold
+ * these mutexes, according to the release/reacquire on schedule()
+ * properties of the Bkl that were used.
+ * Reiserfs performances and locking were based on this scheme.
+ * Now that the write lock is a mutex and not the bkl anymore, doing so
+ * may result in a deadlock:
+ *
+ * A acquire write_lock
+ * A acquire j_commit_mutex
+ * A release write_lock and wait for something
+ * B acquire write_lock
+ * B can't acquire j_commit_mutex and sleep
+ * A can't acquire write lock anymore
+ * deadlock
+ *
+ * What we do here is avoiding such deadlock by playing the same game
+ * than the Bkl: if we can't acquire a mutex that depends on the write lock,
+ * we release the write lock, wait a bit and then retry.
+ *
+ * The mutexes concerned by this hack are:
+ * - The commit mutex of a journal list
+ * - The flush mutex
+ * - The journal lock
+ * - The inode mutex
+ */
+static inline void reiserfs_mutex_lock_safe(struct mutex *m,
+					    struct super_block *s)
+{
+	reiserfs_write_unlock(s);
+	mutex_lock(m);
+	reiserfs_write_lock(s);
+}
+
+/*
  * When we schedule, we usually want to also release the write lock,
  * according to the previous bkl based locking scheme of reiserfs.
  */
```
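
With the helper in place, a mutex that depends on the write lock is taken by dropping and retaking the write lock around it. A hypothetical call site might look like the following sketch (illustrative only; the function here is invented, while reiserfs_mutex_lock_safe() and i_mutex are the real identifiers):

```c
/*
 * Illustrative only: take an inode mutex while the write lock is
 * held, without creating a write_lock -> i_mutex dependency.
 */
static void example_take_inode_mutex(struct inode *inode,
				     struct super_block *s)
{
	/* entered with the write lock held */
	reiserfs_mutex_lock_safe(&inode->i_mutex, s);

	/* the write lock and i_mutex are both held again here */

	mutex_unlock(&inode->i_mutex);
}
```

Because the write lock is released before blocking on the mutex, a task sleeping on i_mutex can never pin the write lock, which breaks the deadlock cycle described in the changelog.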