author     Gautham R Shenoy <ego@in.ibm.com>    2010-01-20 14:02:44 -0600
committer  Ingo Molnar <mingo@elte.hu>          2010-01-21 13:40:17 +0100
commit     871e35bc9733f273eaf5ceb69bbd0423b58e5285 (patch)
tree       0c740fdbba9ade54143834ce52581b2d76a23795 /kernel
parent     8f190fb3f7a405682666d3723f6ec370b5afe4da (diff)
sched: Fix the place where group powers are updated
We want to update the sched_group powers when balance_cpu == this_cpu.
Currently the group powers are updated only if balance_cpu is the
first CPU in the local group. But balance_cpu == this_cpu can also hold
when this_cpu is the first idle CPU in the group. Hence fix the place
where the group powers are updated.
Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
Signed-off-by: Joel Schopp <jschopp@austin.ibm.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1264017764.5717.127.camel@jschopp-laptop>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'kernel')
-rw-r--r--  kernel/sched_fair.c | 7
1 file changed, 3 insertions, 4 deletions
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 0b482f5..22231cc 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -2418,11 +2418,8 @@ static inline void update_sg_lb_stats(struct sched_domain *sd,
 	unsigned long sum_avg_load_per_task;
 	unsigned long avg_load_per_task;
 
-	if (local_group) {
+	if (local_group)
 		balance_cpu = group_first_cpu(group);
-		if (balance_cpu == this_cpu)
-			update_group_power(sd, this_cpu);
-	}
 
 	/* Tally up the load of all CPUs in the group */
 	sum_avg_load_per_task = avg_load_per_task = 0;
@@ -2470,6 +2467,8 @@ static inline void update_sg_lb_stats(struct sched_domain *sd,
 		return;
 	}
 
+	update_group_power(sd, this_cpu);
+
 	/* Adjust by relative CPU power of the group */
 	sgs->avg_load = (sgs->group_load * SCHED_LOAD_SCALE) / group->cpu_power;
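
To see why the old gate could skip the update, here is a minimal, self-contained sketch. It is not kernel code: the toy group structure, pick_balance_cpu() and main() are invented for illustration, and only group_first_cpu() mirrors a real kernel helper by name. It models how update_sg_lb_stats() of that era biased balance_cpu toward the first idle CPU of the local group (falling back to the group's first CPU), and compares the old condition for calling update_group_power() with the new placement after the balance_cpu == this_cpu check.

/*
 * Simplified illustration (not kernel code): gating the power update on
 * "balance_cpu == first CPU of the group" skips it when balance_cpu is
 * an idle CPU other than the group's first CPU.
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_group {
	int cpus[4];		/* CPU ids in the group        */
	bool idle[4];		/* whether each CPU is idle    */
	int nr;			/* number of CPUs in the group */
};

/* Mirrors the kernel helper of the same name: first CPU of the group. */
static int group_first_cpu(const struct toy_group *g)
{
	return g->cpus[0];
}

/* How balance_cpu is chosen for the local group: first idle CPU wins. */
static int pick_balance_cpu(const struct toy_group *g)
{
	for (int i = 0; i < g->nr; i++)
		if (g->idle[i])
			return g->cpus[i];
	return group_first_cpu(g);
}

int main(void)
{
	/* CPU 2 is the first idle CPU, but CPU 0 is the group's first CPU. */
	struct toy_group g = {
		.cpus = { 0, 1, 2, 3 },
		.idle = { false, false, true, false },
		.nr   = 4,
	};

	int this_cpu = 2;			/* CPU running the balance */
	int balance_cpu = pick_balance_cpu(&g);

	/* Old placement: update only when balance_cpu is the first CPU. */
	bool old_updates = (balance_cpu == group_first_cpu(&g));

	/* New placement: update once balance_cpu == this_cpu has passed. */
	bool new_updates = (balance_cpu == this_cpu);

	printf("balance_cpu=%d  old: %s  new: %s\n", balance_cpu,
	       old_updates ? "updated" : "skipped",
	       new_updates ? "updated" : "skipped");
	return 0;
}

With the group above the sketch prints "balance_cpu=2  old: skipped  new: updated": after the patch, the group power is refreshed whenever this_cpu is the CPU actually performing the balance for its local group, which is the intent stated in the commit message.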