author	faux123 <reioux@gmail.com>	2012-02-06 22:10:28 -0800
committer	Ziyan <jaraidaniel@gmail.com>	2016-01-08 10:36:33 +0100
commit	68f2ceff6a7971ac1be929b95d1eecc9e36600f0 (patch)
tree	c0156aaa5b1c19b57bcb55398cc43e92ddd47f45	/block/blk-core.c
parent	8090d4df168b7a464576885e69e0c20834c47018 (diff)
block: document blk-plug
Thus spake Andrew Morton:

 "And I have the usual maintainability whine. If someone comes up to
  vmscan.c and sees it calling blk_start_plug(), how are they supposed
  to work out why that call is there? They go look at the
  blk_start_plug() definition and it is undocumented.

  I think we can do better than this?"

Adapted from the LWN article - http://lwn.net/Articles/438256/ by Jens
Axboe and from an earlier attempt by Shaohua Li to document blk-plug.

[akpm@linux-foundation.org: grammatical and spelling tweaks]
Signed-off-by: Suresh Jayaraman <sjayaraman@suse.de>
Cc: Shaohua Li <shaohua.li@intel.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Signed-off-by: Andrew Morton <akpm@google.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

Conflicts:
	include/linux/blkdev.h

modified by faux123
Diffstat (limited to 'block/blk-core.c')
-rw-r--r--	block/blk-core.c	14
1 file changed, 14 insertions(+), 0 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index 384586e..d2ac88b 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -2542,6 +2542,20 @@ EXPORT_SYMBOL(kblockd_schedule_delayed_work);
#define PLUG_MAGIC 0x91827364
+/**
+ * blk_start_plug - initialize blk_plug and track it inside the task_struct
+ * @plug: The &struct blk_plug that needs to be initialized
+ *
+ * Description:
+ * Tracking blk_plug inside the task_struct will help with auto-flushing the
+ * pending I/O should the task end up blocking between blk_start_plug() and
+ * blk_finish_plug(). This is important from a performance perspective, but
+ * also ensures that we don't deadlock. For instance, if the task is blocking
+ * for a memory allocation, memory reclaim could end up wanting to free a
+ * page belonging to that request that is currently residing in our private
+ * plug. By flushing the pending I/O when the process goes to sleep, we avoid
+ * this kind of deadlock.
+ */
void blk_start_plug(struct blk_plug *plug)
{
	struct task_struct *tsk = current;
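
A minimal usage sketch of the interface documented above (not part of this
commit): a submitter pairs blk_start_plug() with blk_finish_plug() around a
batch of I/O submissions. The submit_prepared_bios() helper, the bio array
and the use of READ requests are illustrative assumptions; struct blk_plug,
blk_start_plug(), blk_finish_plug() and submit_bio(int rw, struct bio *)
are the interfaces as they exist in this kernel tree.

#include <linux/blkdev.h>
#include <linux/bio.h>

/* Hypothetical helper: submit a batch of already-prepared read bios. */
static void submit_prepared_bios(struct bio **bios, int nr)
{
	struct blk_plug plug;
	int i;

	/* Requests queued after this point are held in the per-task plug. */
	blk_start_plug(&plug);

	for (i = 0; i < nr; i++)
		submit_bio(READ, bios[i]);

	/*
	 * Flush the plugged I/O explicitly.  If this task had blocked inside
	 * the loop, the scheduler would have flushed the plug automatically,
	 * which is the auto-flush/deadlock-avoidance behaviour described in
	 * the blk_start_plug() kernel-doc above.
	 */
	blk_finish_plug(&plug);
}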