author     Vasily Averin <vvs@parallels.com>       2013-04-01 03:01:32 +0000
committer  David S. Miller <davem@davemloft.net>   2013-04-02 14:29:20 -0400
commit     f0f6ee1f70c4eaab9d52cf7d255df4bd89f8d1c2 (patch)
tree       74e0b553a8853d7e4b574cb12de54a5e359952ab /net/sched
parent     bab6a9eac05360db25c81b0090f6b1195dd986cc (diff)
cbq: incorrect processing of high limits
Currently cbq works incorrectly for limits > 10% of the real link bandwidth,
and practically does not work for limits > 50% of the real link bandwidth.
Below are the results of experiments taken on a 1 Gbit link:

 In shaper | Actual Result
-----------+---------------
   100M    |   108 Mbps
   200M    |   244 Mbps
   300M    |   412 Mbps
   500M    |   893 Mbps

This happens because q->now is changed incorrectly in cbq_dequeue():
when it is called before the real end of packet transmission, L2T is
greater than the real time delay, so q->now gets an extra boost that is
never compensated for.

To fix this problem we prevent q->now from changing until it has
synchronized with real time.

Signed-off-by: Vasily Averin <vvs@openvz.org>
Reviewed-by: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
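To make the drift concrete, here is a rough worked example; the packet size,
polling interval and resulting numbers are assumptions for illustration, not
figures from the patch. On a 1 Gbit link a 1500-byte packet occupies the wire
for about 12 us, so L2T is roughly 12 us. If the driver pulls the next packet
only 4 us later, before the previous one has actually left the wire, the
pre-patch code advances q->now by the full 12 us while real time has advanced
by only 4 us; the clamp "if ((incr -= incr2) < 0) incr = 0;" discards the
negative remainder, so q->now ends up 8 us ahead of real time and nothing on
the old path ever pays that surplus back. Every early dequeue adds to the
offset, which is consistent with the table above: the closer the configured
limit is to the real link bandwidth, the more often dequeue runs early and
the larger the overshoot.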
Diffstat (limited to 'net/sched')
-rw-r--r--   net/sched/sch_cbq.c   5
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/net/sched/sch_cbq.c b/net/sched/sch_cbq.c
index 13aa47a..1bc210f 100644
--- a/net/sched/sch_cbq.c
+++ b/net/sched/sch_cbq.c
@@ -962,8 +962,11 @@ cbq_dequeue(struct Qdisc *sch)
 		cbq_update(q);
 		if ((incr -= incr2) < 0)
 			incr = 0;
+		q->now += incr;
+	} else {
+		if (now > q->now)
+			q->now = now;
 	}
-	q->now += incr;
 	q->now_rt = now;
 
 	for (;;) {
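As a further illustration, below is a small user-space model of the two
q->now update policies. This is a sketch under assumptions, not kernel code:
the struct and function names, the 12 us wire time and the event trace are
invented, and the "q->now += incr2" step comes from the surrounding function,
which the hunk above does not show; only the control flow of the old and new
bookkeeping is mirrored.

/* qnow_model.c - user-space model of the q->now bookkeeping in
 * cbq_dequeue() before and after this patch. */
#include <stdio.h>

#define L2T_US 12   /* assumed wire time of one packet on the real link */

struct clk {
	long now;       /* virtual clock, stands in for q->now */
	long now_rt;    /* last seen real time, q->now_rt      */
};

/* One dequeue event: 'real' is the current real time in microseconds,
 * 'tx' says whether a packet was still being transmitted (q->tx_class). */
static void dequeue_before(struct clk *q, long real, int tx)
{
	long incr = real - q->now_rt;

	if (tx) {
		long incr2 = L2T_US;
		q->now += incr2;            /* jump to expected end of packet     */
		if ((incr -= incr2) < 0)    /* dequeue came early: incr < L2T ... */
			incr = 0;           /* ... and the surplus is kept        */
	}
	q->now += incr;                     /* unconditional in the old code      */
	q->now_rt = real;
}

static void dequeue_after(struct clk *q, long real, int tx)
{
	long incr = real - q->now_rt;

	if (tx) {
		long incr2 = L2T_US;
		q->now += incr2;
		if ((incr -= incr2) < 0)
			incr = 0;
		q->now += incr;             /* only advanced while transmitting   */
	} else if (real > q->now) {
		q->now = real;              /* otherwise wait until real time has */
	}                                   /* caught up with the virtual clock   */
	q->now_rt = real;
}

int main(void)
{
	/* Three early dequeues 4 us apart (packet still on the wire),
	 * followed by two idle polls 20 us apart. */
	static const struct { long real; int tx; } trace[] = {
		{  4, 1 }, {  8, 1 }, { 12, 1 }, { 32, 0 }, { 52, 0 },
	};
	struct clk before = { 0, 0 }, after = { 0, 0 };

	for (unsigned int i = 0; i < sizeof(trace) / sizeof(trace[0]); i++) {
		dequeue_before(&before, trace[i].real, trace[i].tx);
		dequeue_after(&after, trace[i].real, trace[i].tx);
		printf("t=%2ldus  old q->now ahead by %2ldus, patched ahead by %2ldus\n",
		       trace[i].real,
		       before.now - trace[i].real,
		       after.now - trace[i].real);
	}
	return 0;
}

With this trace the old policy leaves the virtual clock a constant 24 us
ahead of real time, while the patched policy lets real time catch up as soon
as the link goes idle, which is the "synchronization with real time" the
changelog describes.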