author    Jens Axboe <axboe@suse.de>  2005-06-28 16:35:11 +0200
committer Linus Torvalds <torvalds@ppc970.osdl.org>  2005-06-28 14:56:50 -0700
commit    082cf69eb82681f4eacb3a5653834c7970714bef (patch)
tree      a0817817c787a89abd0eb7e5bf6f217523060b63 /drivers/block
parent    f8b58edf3acf0dcc186b8330939000ecf709368a (diff)
[PATCH] ll_rw_blk: prevent huge request allocations
Currently we cap request allocations at q->nr_requests, but we allow a batching io context to allocate up to 32 more (default setting). With only a few batching processes, this can flood the queue with request allocations. The real fix would be to limit the number of batchers, but as that isn't currently tracked, I suggest we just cap the maximum number of allocated requests at, e.g., 50% over the limit.

This was observed in real life: users typically see it as vmstat bo numbers going off the wall, followed by seconds of no queueing. Behaviour this bursty is not beneficial.

Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Diffstat (limited to 'drivers/block')
-rw-r--r--  drivers/block/ll_rw_blk.c | 9 +++++++++
1 file changed, 9 insertions(+), 0 deletions(-)
diff --git a/drivers/block/ll_rw_blk.c b/drivers/block/ll_rw_blk.c
index 234fdcf..6c98cf0 100644
--- a/drivers/block/ll_rw_blk.c
+++ b/drivers/block/ll_rw_blk.c
@@ -1912,6 +1912,15 @@ static struct request *get_request(request_queue_t *q, int rw, struct bio *bio,
 	}
 get_rq:
+	/*
+	 * Only allow batching queuers to allocate up to 50% over the defined
+	 * limit of requests, otherwise we could have thousands of requests
+	 * allocated with any setting of ->nr_requests
+	 */
+	if (rl->count[rw] >= (3 * q->nr_requests / 2)) {
+		spin_unlock_irq(q->queue_lock);
+		goto out;
+	}
 	rl->count[rw]++;
 	rl->starved[rw] = 0;
 	if (rl->count[rw] >= queue_congestion_on_threshold(q))