author    | Jens Axboe <jaxboe@fusionio.com> | 2011-04-12 14:58:51 +0200
committer | Jens Axboe <jaxboe@fusionio.com> | 2011-04-12 14:58:51 +0200
commit    | f4af3c3d077a004762aaad052049c809fd8c6f0c (patch)
tree      | c4cbbc37e357775fc8200e16c6eb9b3f14d30069 /block
parent    | cf82c798394cd443eed7d91f998b79a63f341e91 (diff)
block: move queue run on unplug to kblockd
There are worries that we are now consuming a lot more stack in
some cases, since we potentially call into IO dispatch from
schedule() or io_schedule(). We can reduce this problem by moving
the running of the queue to kblockd, as the old plugging scheme
did.
This may or may not be a good idea from a performance perspective,
depending on how many tasks have queue plugs active at the same
time. Even for the slightly contended case, doing a single queue
run from kblockd instead of multiple runs directly from the
unpluggers will be faster.
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
Diffstat (limited to 'block')
-rw-r--r-- | block/blk-core.c | 2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index c6eaa1f..36b1a75 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -2665,7 +2665,7 @@ static int plug_rq_cmp(void *priv, struct list_head *a, struct list_head *b)
 static void queue_unplugged(struct request_queue *q, unsigned int depth)
 {
 	trace_block_unplug_io(q, depth);
-	__blk_run_queue(q, false);
+	__blk_run_queue(q, true);

 	if (q->unplugged_fn)
 		q->unplugged_fn(q);
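To make the intent of that one-line flag flip concrete, here is a small, hypothetical userspace sketch (plain C with pthreads, not kernel code) of the pattern the commit switches to: instead of dispatching queued requests on the caller's already-deep stack at unplug time, the queue run is handed off to a dedicated worker thread standing in for kblockd. The `fake_queue` type and the `dispatch_requests()`, `kblockd_worker()` and `run_queue()` helpers are invented for illustration; only the idea of an "async" flag deferring the run to a worker mirrors the `__blk_run_queue(q, true)` call in the diff.

```c
/*
 * Illustrative sketch only: an "async" flag decides whether pending work
 * is dispatched on the caller's stack or deferred to a worker thread,
 * analogous to handing the queue run to kblockd instead of running it
 * from the unplug path. All names here are made up for the example.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct fake_queue {
	pthread_mutex_t lock;
	pthread_cond_t  kick;      /* wakes the "kblockd" worker        */
	int             pending;   /* requests waiting to be dispatched */
	bool            stop;
};

/* Dispatch everything queued; runs on whichever stack calls it. */
static void dispatch_requests(struct fake_queue *q)
{
	while (q->pending > 0) {
		q->pending--;
		printf("dispatching one request\n");
	}
}

/* Worker thread standing in for kblockd: its own shallow stack. */
static void *kblockd_worker(void *arg)
{
	struct fake_queue *q = arg;

	pthread_mutex_lock(&q->lock);
	for (;;) {
		while (q->pending == 0 && !q->stop)
			pthread_cond_wait(&q->kick, &q->lock);
		dispatch_requests(q);
		if (q->stop)
			break;
	}
	pthread_mutex_unlock(&q->lock);
	return NULL;
}

/*
 * async == false: run the queue inline on the caller's (possibly deep)
 * stack. async == true: just poke the worker, which is the behaviour
 * the one-liner in this commit opts into.
 */
static void run_queue(struct fake_queue *q, bool async)
{
	pthread_mutex_lock(&q->lock);
	if (async)
		pthread_cond_signal(&q->kick);   /* defer to worker */
	else
		dispatch_requests(q);            /* run inline      */
	pthread_mutex_unlock(&q->lock);
}

int main(void)
{
	struct fake_queue q = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.kick = PTHREAD_COND_INITIALIZER,
	};
	pthread_t worker;

	pthread_create(&worker, NULL, kblockd_worker, &q);

	/* "Plug" three requests, then unplug asynchronously. */
	pthread_mutex_lock(&q.lock);
	q.pending = 3;
	pthread_mutex_unlock(&q.lock);
	run_queue(&q, true);

	/* Shut the worker down once it has drained the queue. */
	pthread_mutex_lock(&q.lock);
	q.stop = true;
	pthread_cond_signal(&q.kick);
	pthread_mutex_unlock(&q.lock);
	pthread_join(worker, NULL);
	return 0;
}
```

Built with `cc -pthread`, the sketch prints its three "dispatching one request" lines from the worker thread rather than from main(), which is the userspace analogue of keeping IO dispatch off the schedule()/io_schedule() stack.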