author | Peter Zijlstra <a.p.zijlstra@chello.nl> | 2012-01-11 13:11:12 +0100 |
---|---|---|
committer | Ingo Molnar <mingo@elte.hu> | 2012-01-11 17:15:12 +0100 |
commit | bced76aeaca03b45e3b4bdb868cada328e497847 (patch) | |
tree | e2965b6cfecdc257a01b89ec6780b8de3b2e6d39 /kernel | |
parent | 6db9dc150eabce7053c8df2a2146aa0d6748ec42 (diff) | |
sched: Fix lockup by limiting load-balance retries on lock-break
Eric and David reported dead machines and traced it to commit
a195f004 ("sched: Fix load-balance lock-breaking"); it turns out
there's still a scenario where we can end up retrying forever.
Since there is no strict forward-progress guarantee in the
load-balance iteration, we can get stuck retrying the same
task-set over and over.
Creating a forward-progress guarantee with the existing
structure is somewhat non-trivial; for now, simply terminate the
retry loop after a few tries.
Reported-by: Eric Dumazet <eric.dumazet@gmail.com>
Tested-by: Eric Dumazet <eric.dumazet@gmail.com>
Reported-by: David Ahern <dsahern@gmail.com>
[ logic cleanup as suggested by Eric ]
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
Link: http://lkml.kernel.org/r/1326297936.2442.157.camel@twins
Signed-off-by: Ingo Molnar <mingo@elte.hu>
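For orientation, a rough walk-through of how the retry bound falls out of the flag encoding introduced by the patch below (values taken from the new LBF_* defines; the trace assumes lb_flags starts at 0 and LBF_NEED_BREAK gets set again on every pass):

1st break: 0x02 + (0x04 - 0x02) = 0x04  (one LBF_HAD_BREAK)
2nd break: 0x06 + 0x02          = 0x08  (two)
3rd break: 0x0a + 0x02          = 0x0c  (three, LBF_HAD_BREAKS full)
4th break: 0x0e + 0x02          = 0x10  (carry sets LBF_ABORT)

The two LBF_HAD_BREAKS bits act as a small counter, and the carry out of them lands on LBF_ABORT, so the fourth consecutive lock-break aborts the balance attempt.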
Diffstat (limited to 'kernel')
-rw-r--r-- | kernel/sched/fair.c | 10 |
1 file changed, 7 insertions, 3 deletions
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8e42de9..84adb2d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3130,8 +3130,10 @@ task_hot(struct task_struct *p, u64 now, struct sched_domain *sd)
 }
 
 #define LBF_ALL_PINNED  0x01
-#define LBF_NEED_BREAK  0x02
-#define LBF_ABORT       0x04
+#define LBF_NEED_BREAK  0x02    /* clears into HAD_BREAK */
+#define LBF_HAD_BREAK   0x04
+#define LBF_HAD_BREAKS  0x0C    /* count HAD_BREAKs overflows into ABORT */
+#define LBF_ABORT       0x10
 
 /*
  * can_migrate_task - may task p from runqueue rq be migrated to this_cpu?
@@ -4508,7 +4510,9 @@ redo:
 		goto out_balanced;
 
 	if (lb_flags & LBF_NEED_BREAK) {
-		lb_flags &= ~LBF_NEED_BREAK;
+		lb_flags += LBF_HAD_BREAK - LBF_NEED_BREAK;
+		if (lb_flags & LBF_ABORT)
+			goto out_balanced;
 		goto redo;
 	}
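Below is a minimal user-space sketch of the same flag arithmetic. It is not the kernel's load_balance() itself: the loop structure and the printouts are purely illustrative, and only the LBF_* values come from the patch above.

#include <stdio.h>

/* Flag values taken from the patch above. */
#define LBF_ALL_PINNED  0x01
#define LBF_NEED_BREAK  0x02    /* clears into HAD_BREAK */
#define LBF_HAD_BREAK   0x04
#define LBF_HAD_BREAKS  0x0C    /* count HAD_BREAKs overflows into ABORT */
#define LBF_ABORT       0x10

int main(void)
{
        unsigned int lb_flags = 0;
        int attempt = 0;

        for (;;) {
                /* Pretend every pass hits the lock-break path again. */
                lb_flags |= LBF_NEED_BREAK;
                attempt++;

                if (lb_flags & LBF_NEED_BREAK) {
                        /* Convert the NEED_BREAK bit into one more HAD_BREAK count. */
                        lb_flags += LBF_HAD_BREAK - LBF_NEED_BREAK;
                        if (lb_flags & LBF_ABORT)
                                break;  /* counter overflowed: stop retrying */
                }

                printf("retry %d: lb_flags=0x%02x (%u breaks so far)\n",
                       attempt, lb_flags,
                       (lb_flags & LBF_HAD_BREAKS) / LBF_HAD_BREAK);
        }

        printf("retry %d: lb_flags=0x%02x, LBF_ABORT set -> give up\n",
               attempt, lb_flags);
        return 0;
}

Running it shows LBF_ABORT being set on the fourth consecutive break, which is the bound the changelog refers to as "a few tries".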