author | Mike Galbraith <efault@gmx.de> | 2010-03-11 17:17:13 +0100
committer | Ingo Molnar <mingo@elte.hu> | 2010-03-11 18:32:49 +0100
commit | 39c0cbe2150cbd848a25ba6cdb271d1ad46818ad (patch)
tree | 7b9c356b39a2b50219398ce534d7d64e7ab4bf06 /kernel/time/tick-sched.c
parent | 41acab8851a0408c1d5ad6c21a07456f88b54d40 (diff)
sched: Rate-limit nohz
Entering nohz code on every micro-idle is costing ~10% throughput for netperf
TCP_RR when scheduling cross-cpu. Rate limiting entry fixes this, but raises
ticks a bit. On my Q6600, an idle box goes from ~85 interrupts/sec to 128.
The higher the context switch rate, the more nohz entry costs. With this patch
and some cycle recovery patches in my tree, max cross-cpu context switch rate is
improved by ~16%, a large portion of which is this rate limiting.
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1268301003.6785.28.camel@marge.simson.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'kernel/time/tick-sched.c')
-rw-r--r-- | kernel/time/tick-sched.c | 3 +++
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index f992762..f25735a 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -262,6 +262,9 @@ void tick_nohz_stop_sched_tick(int inidle)
 		goto end;
 	}
 
+	if (nohz_ratelimit(cpu))
+		goto end;
+
 	ts->idle_calls++;
 	/* Read jiffies and the time when jiffies were updated last */
 	do {
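
Note: the hunk above only adds the call site; nohz_ratelimit() itself was introduced on the scheduler side (kernel/sched.c) by the same commit and is outside this diffstat. A minimal sketch of such a rate limiter, assuming a per-runqueue nanosecond clock and a nohz_stamp field added to struct rq (the names and threshold here are illustrative, not verified against this tree):

/*
 * Sketch: refuse nohz entry if the previous attempt on this CPU was
 * less than half a tick ago. rq->clock is the runqueue's nanosecond
 * clock; rq->nohz_stamp records the time of the last nohz attempt.
 */
int nohz_ratelimit(int cpu)
{
	struct rq *rq = cpu_rq(cpu);
	u64 diff = rq->clock - rq->nohz_stamp;

	rq->nohz_stamp = rq->clock;

	return diff < TICK_NSEC >> 1;
}

The half-tick threshold keeps a rapidly context-switching CPU on the periodic tick instead of repeatedly paying the nohz entry/exit cost, which is consistent with the modest interrupt-rate increase reported in the commit message.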