| field | value | date |
|---|---|---|
| author | Peter Zijlstra <a.p.zijlstra@chello.nl> | 2008-06-27 13:41:39 +0200 |
| committer | Ingo Molnar <mingo@elte.hu> | 2008-06-27 14:31:47 +0200 |
| commit | f5bfb7d9ff73d72ee4f2f4830a6f0c9088d00f92 (patch) | |
| tree | 402e8caaef4d3f0c26a52b171e04dbb67ea08cfa /kernel/sched_fair.c | |
| parent | f1d239f73200a5803a89e5929fb3abc1596b7589 (diff) | |
sched: bias effective_load() error towards failing wake_affine().
Measurement shows that wake_affine() succeeds significantly more often under
cgroup:/foo than under cgroup:/.
Therefore bias the calculations towards failing the test.
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Cc: Mike Galbraith <efault@gmx.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'kernel/sched_fair.c')
| -rw-r--r-- | kernel/sched_fair.c | 28 |
1 file changed, 28 insertions, 0 deletions
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index e87f1a5..9bcc003 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -1074,6 +1074,27 @@ static inline int wake_idle(int cpu, struct task_struct *p)
 static const struct sched_class fair_sched_class;
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
+/*
+ * effective_load() calculates the load change as seen from the root_task_group
+ *
+ * Adding load to a group doesn't make a group heavier, but can cause movement
+ * of group shares between cpus. Assuming the shares were perfectly aligned one
+ * can calculate the shift in shares.
+ *
+ * The problem is that perfectly aligning the shares is rather expensive, hence
+ * we try to avoid doing that too often - see update_shares(), which ratelimits
+ * this change.
+ *
+ * We compensate this by not only taking the current delta into account, but
+ * also considering the delta between when the shares were last adjusted and
+ * now.
+ *
+ * We still saw a performance dip, some tracing learned us that between
+ * cgroup:/ and cgroup:/foo balancing the number of affine wakeups increased
+ * significantly. Therefore try to bias the error in direction of failing
+ * the affine wakeup.
+ *
+ */
 static long effective_load(struct task_group *tg, int cpu,
 		long wl, long wg)
 {
@@ -1084,6 +1105,13 @@ static long effective_load(struct task_group *tg, int cpu,
 		return wl;
 
 	/*
+	 * By not taking the decrease of shares on the other cpu into
+	 * account our error leans towards reducing the affine wakeups.
+	 */
+	if (!wl && sched_feat(ASYM_EFF_LOAD))
+		return wl;
+
+	/*
 	 * Instead of using this increment, also add the difference
 	 * between when the shares were last updated and now.
 	 */
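To make the asymmetry concrete, below is a minimal userspace sketch of the idea the second hunk adds. It is not the kernel code: effective_load_sketch(), share_shift_correction() and the asym_eff_load flag are hypothetical stand-ins for effective_load(), its task_group walk and the sched_feat(ASYM_EFF_LOAD) bit, and the quarter-weight shift is an arbitrary placeholder. The only point it illustrates is the early return: when no weight is added to this cpu (wl == 0), the share-shift correction is skipped, so, as the changelog and the new comment state, the resulting error leans towards failing the affine wakeup.

```c
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the kernel's sched_feat(ASYM_EFF_LOAD) feature bit. */
static bool asym_eff_load = true;

/*
 * Hypothetical, simplified stand-in for the correction effective_load()
 * applies; the real kernel walks the task_group hierarchy. Pretend that
 * adding wg to the group on another cpu shifts a quarter of that weight
 * away from this cpu.
 */
static long share_shift_correction(long wl, long wg)
{
	return wl - wg / 4;
}

static long effective_load_sketch(long wl, long wg)
{
	/*
	 * Mirrors the hunk above: if no weight is added to this cpu
	 * (wl == 0) and the feature bit is set, skip the correction and
	 * return wl unchanged. Per the patch, ignoring the decrease on
	 * this side makes the error lean towards failing the affine
	 * wakeup rather than passing it.
	 */
	if (!wl && asym_eff_load)
		return wl;

	return share_shift_correction(wl, wg);
}

int main(void)
{
	/* wl == 0: weight is only added to the group, on some other cpu. */
	printf("biased estimate:    %ld\n", effective_load_sketch(0, 1024));
	/* wl != 0: the normal corrected path is still taken. */
	printf("corrected estimate: %ld\n", effective_load_sketch(1024, 1024));
	return 0;
}
```

A deliberately one-sided error like this is cheaper than realigning the shares on every wakeup, which is the trade-off the comment block in the first hunk describes: update_shares() ratelimits the expensive realignment, and the bias keeps the resulting staleness from inflating the number of affine wakeups.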