author     Nick Piggin <nickpiggin@yahoo.com.au>      2005-09-10 00:26:18 -0700
committer  Linus Torvalds <torvalds@g5.osdl.org>      2005-09-10 10:06:23 -0700
commit     e17224bf1d01b461ec02a60f5a9b7657a89bdd23 (patch)
tree       30dbb20798fde88a09680e9d82bd32ad8c343692 /kernel
parent     d6d5cfaf4551aa7713ca6ab73bb77e832602204b (diff)
[PATCH] sched: less locking
During periodic load balancing, don't hold this runqueue's lock while
scanning remote runqueues, which can take a non-trivial amount of time,
especially on very large systems.
Holding the runqueue lock only helps to stabilise ->nr_running, and even
that gains little: tasks being woken will simply get held up on the
runqueue lock, so ->nr_running would not give an accurate picture of
runqueue load in that case anyway.
What's more, ->nr_running (and possibly the cpu_load averages) of remote
runqueues won't be stable anyway, so load balancing is always an inexact
operation.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Diffstat (limited to 'kernel')
-rw-r--r--  kernel/sched.c | 9 ++-------
1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 9301895..8535e5c 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -2075,7 +2075,6 @@ static int load_balance(int this_cpu, runqueue_t *this_rq,
 	int nr_moved, all_pinned = 0;
 	int active_balance = 0;
 
-	spin_lock(&this_rq->lock);
 	schedstat_inc(sd, lb_cnt[idle]);
 
 	group = find_busiest_group(sd, this_cpu, &imbalance, idle);
@@ -2102,18 +2101,16 @@ static int load_balance(int this_cpu, runqueue_t *this_rq,
 		 * still unbalanced. nr_moved simply stays zero, so it is
 		 * correctly treated as an imbalance.
 		 */
-		double_lock_balance(this_rq, busiest);
+		double_rq_lock(this_rq, busiest);
 		nr_moved = move_tasks(this_rq, this_cpu, busiest,
 					imbalance, sd, idle, &all_pinned);
-		spin_unlock(&busiest->lock);
+		double_rq_unlock(this_rq, busiest);
 
 		/* All tasks on this runqueue were pinned by CPU affinity */
 		if (unlikely(all_pinned))
 			goto out_balanced;
 	}
 
-	spin_unlock(&this_rq->lock);
-
 	if (!nr_moved) {
 		schedstat_inc(sd, lb_failed[idle]);
 		sd->nr_balance_failed++;
@@ -2156,8 +2153,6 @@ static int load_balance(int this_cpu, runqueue_t *this_rq,
 	return nr_moved;
 
 out_balanced:
-	spin_unlock(&this_rq->lock);
-
 	schedstat_inc(sd, lb_balanced[idle]);
 
 	sd->nr_balance_failed = 0;
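For illustration only, a minimal user-space C sketch of the locking pattern the
patch moves to: scan the remote runqueues without holding the local lock, then
take both locks only around the actual task movement. double_rq_lock() and
double_rq_unlock() are real kernel helper names, but struct rq here is stripped
down to a lock and a counter, and scan_for_busiest()/balance() are hypothetical
stand-ins for find_busiest_group()/find_busiest_queue() and move_tasks(); none
of this is the actual 2.6 scheduler code.

/*
 * Simplified, user-space sketch (not kernel code) of the locking pattern
 * this patch moves to.  "struct rq" is a stand-in with just a lock and a
 * counter; the real scheduler structures are far richer.
 */
#include <pthread.h>
#include <stdio.h>

struct rq {
	pthread_mutex_t lock;
	int nr_running;
};

/*
 * Lock two runqueues in a fixed (address) order so that two CPUs
 * balancing against each other cannot deadlock.  This loosely mirrors
 * what double_rq_lock() does in the kernel.
 */
static void double_rq_lock(struct rq *a, struct rq *b)
{
	if (a == b) {
		pthread_mutex_lock(&a->lock);
	} else if (a < b) {
		pthread_mutex_lock(&a->lock);
		pthread_mutex_lock(&b->lock);
	} else {
		pthread_mutex_lock(&b->lock);
		pthread_mutex_lock(&a->lock);
	}
}

static void double_rq_unlock(struct rq *a, struct rq *b)
{
	pthread_mutex_unlock(&a->lock);
	if (a != b)
		pthread_mutex_unlock(&b->lock);
}

/* Hypothetical stand-in for the find_busiest_group()/queue() scan. */
static struct rq *scan_for_busiest(struct rq *rqs, int n, struct rq *self)
{
	struct rq *busiest = NULL;

	/*
	 * Key point of the patch: this scan runs WITHOUT holding
	 * self->lock.  The remote nr_running values it reads are only
	 * approximate, which is fine -- load balancing is inexact anyway.
	 */
	for (int i = 0; i < n; i++)
		if (&rqs[i] != self &&
		    (!busiest || rqs[i].nr_running > busiest->nr_running))
			busiest = &rqs[i];
	return busiest;
}

/* Hypothetical stand-in for load_balance()/move_tasks(). */
static void balance(struct rq *rqs, int n, struct rq *self)
{
	struct rq *busiest = scan_for_busiest(rqs, n, self);

	if (!busiest || busiest->nr_running <= self->nr_running)
		return;

	/* Locks are taken only around the actual task movement. */
	double_rq_lock(self, busiest);
	if (busiest->nr_running > self->nr_running) {	/* re-check under locks */
		busiest->nr_running--;
		self->nr_running++;
	}
	double_rq_unlock(self, busiest);
}

int main(void)
{
	static struct rq rqs[2] = {
		{ PTHREAD_MUTEX_INITIALIZER, 1 },
		{ PTHREAD_MUTEX_INITIALIZER, 5 },
	};

	balance(rqs, 2, &rqs[0]);
	printf("rq0=%d rq1=%d\n", rqs[0].nr_running, rqs[1].nr_running);
	return 0;
}

Because the scan runs lock-free, the nr_running it observed may already be
stale, which is why balance() re-checks the imbalance after double_rq_lock();
that matches the commit's argument that the unlocked numbers are only ever
approximate. Build the sketch with gcc -pthread.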