author     Jens Axboe <jaxboe@fusionio.com>    2011-04-16 13:27:55 +0200
committer  Jens Axboe <jaxboe@fusionio.com>    2011-04-16 13:27:55 +0200
commit     a237c1c5bc5dc5c76a21be922dca4826f3eca8ca (patch)
tree       a216c9a6d9e870b84424938e9e0b4722dc8634cd /kernel/sched.c
parent     5853b4f06f7b9b56f37f457d7923f7b96496074e (diff)
block: let io_schedule() flush the plug inline
Linus correctly observes that the most important dispatch cases
are now done from kblockd, and this isn't ideal for latency reasons.
The original reason for switching dispatches out-of-line was to
avoid too deep a stack, so by guarding _only_ the "accidental"
flush done directly in schedule() with an offload to kblockd, we
should be able to get the best of both worlds: explicit flushes
keep dispatching inline for low latency, while the deep-stack
schedule() path stays safe.
So add a blk_schedule_flush_plug() that offloads to kblockd,
and only use that from the schedule() path.
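As a rough sketch of the two entry points this creates (modeled on
include/linux/blkdev.h of this era; treat the exact signatures as
assumptions rather than verbatim from this commit), the new helper can
share its body with blk_flush_plug() and simply pass a from_schedule
flag down to blk_flush_plug_list(), which then either dispatches the
plugged requests inline or punts them to the kblockd workqueue:

/*
 * Sketch: both helpers funnel into blk_flush_plug_list(); only the
 * from_schedule flag differs. Assumed signatures, not verbatim.
 */
static inline void blk_flush_plug(struct task_struct *tsk)
{
	struct blk_plug *plug = tsk->plug;

	if (plug)
		blk_flush_plug_list(plug, false);	/* dispatch inline */
}

static inline void blk_schedule_flush_plug(struct task_struct *tsk)
{
	struct blk_plug *plug = tsk->plug;

	if (plug)
		blk_flush_plug_list(plug, true);	/* offload to kblockd */
}

With this split, callers that flush explicitly (such as io_schedule(),
per the commit title) keep the low-latency inline path, and only the
implicit flush inside schedule() pays the workqueue bounce in exchange
for a bounded stack.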
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
Diffstat (limited to 'kernel/sched.c')
-rw-r--r--  kernel/sched.c |  2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index a187c3f..312f8b9 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -4118,7 +4118,7 @@ need_resched:
 	 */
 	if (blk_needs_flush_plug(prev)) {
 		raw_spin_unlock(&rq->lock);
-		blk_flush_plug(prev);
+		blk_schedule_flush_plug(prev);
 		raw_spin_lock(&rq->lock);
 	}
 }