author    Eric Dumazet <eric.dumazet@gmail.com>  2011-05-23 11:02:42 +0000
committer David S. Miller <davem@davemloft.net>  2011-05-23 17:36:00 -0400
commit    8efa885406359af300d46910642b50ca82c0fe47 (patch)
tree      1eecc0b8152d775b5c261a2a1749a2f711f81f13 /net/sched
parent    a4910b744486254cfa61995954c118fb2283c4fd (diff)
sch_sfq: avoid giving spurious NET_XMIT_CN signals
While chasing a possible net_sched bug, I found that IP fragments have
little chance to pass a congested SFQ qdisc:

- Say the SFQ qdisc is full because one flow is non-responsive.
- ip_fragment() wants to send two fragments belonging to an idle flow.
- sfq_enqueue() queues the first packet, but sees the queue limit reached:
- sfq_enqueue() drops one packet from the 'big consumer', and returns
  NET_XMIT_CN.
- ip_fragment() cancels the remaining fragments.

This patch restores fairness, making sure we return NET_XMIT_CN only if
we dropped a packet from the same flow.

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
CC: Patrick McHardy <kaber@trash.net>
CC: Jarek Poplawski <jarkao2@gmail.com>
CC: Jamal Hadi Salim <hadi@cyberus.ca>
CC: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
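[Editor's note: to make the intended behaviour concrete, here is a minimal
userspace sketch of the same idea, not the kernel code itself. It models a
toy SFQ-like qdisc where an over-limit enqueue drops from the longest
("big consumer") flow and signals NET_XMIT_CN only when that drop came out
of the enqueuing flow. The names toy_sfq, toy_enqueue and toy_drop_longest
are hypothetical, for illustration only.]

#include <stdio.h>

#define NET_XMIT_SUCCESS 0
#define NET_XMIT_CN      2

#define NFLOWS 4
#define LIMIT  8

struct toy_sfq {
	unsigned int qlen[NFLOWS];	/* per-flow backlog */
	unsigned int total;		/* total packets queued */
};

/* Drop one packet from the longest flow, as SFQ's sfq_drop() does. */
static void toy_drop_longest(struct toy_sfq *q)
{
	int i, victim = 0;

	for (i = 1; i < NFLOWS; i++)
		if (q->qlen[i] > q->qlen[victim])
			victim = i;
	if (q->qlen[victim]) {
		q->qlen[victim]--;
		q->total--;
	}
}

static int toy_enqueue(struct toy_sfq *q, int flow)
{
	unsigned int qlen;

	q->qlen[flow]++;
	if (++q->total <= LIMIT)
		return NET_XMIT_SUCCESS;

	/* Over limit: drop from the biggest flow, then signal congestion
	 * only if that drop actually came out of the enqueuing flow. */
	qlen = q->qlen[flow];
	toy_drop_longest(q);
	return (qlen != q->qlen[flow]) ? NET_XMIT_CN : NET_XMIT_SUCCESS;
}

int main(void)
{
	struct toy_sfq q = { { 0 } };
	int i;

	/* Flow 0 is the non-responsive "big consumer" filling the qdisc. */
	for (i = 0; i < LIMIT; i++)
		toy_enqueue(&q, 0);

	/* Two fragments of an idle flow now arrive: each over-limit drop
	 * lands on flow 0, so both return NET_XMIT_SUCCESS (0). */
	printf("frag 1 -> %d\n", toy_enqueue(&q, 1));
	printf("frag 2 -> %d\n", toy_enqueue(&q, 1));
	return 0;
}

[Run against the scenario from the changelog above, both fragments of the
idle flow are accepted, whereas the pre-patch behaviour of returning
NET_XMIT_CN unconditionally would have made ip_fragment() abort.]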
Diffstat (limited to 'net/sched')
-rw-r--r--  net/sched/sch_sfq.c  8
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c
index 7ef87f9..b1d00f8 100644
--- a/net/sched/sch_sfq.c
+++ b/net/sched/sch_sfq.c
@@ -361,7 +361,7 @@ sfq_enqueue(struct sk_buff *skb, struct Qdisc *sch)
 {
 	struct sfq_sched_data *q = qdisc_priv(sch);
 	unsigned int hash;
-	sfq_index x;
+	sfq_index x, qlen;
 	struct sfq_slot *slot;
 	int uninitialized_var(ret);
 
@@ -405,8 +405,12 @@ sfq_enqueue(struct sk_buff *skb, struct Qdisc *sch)
 	if (++sch->q.qlen <= q->limit)
 		return NET_XMIT_SUCCESS;
 
+	qlen = slot->qlen;
 	sfq_drop(sch);
-	return NET_XMIT_CN;
+	/* Return Congestion Notification only if we dropped a packet
+	 * from this flow.
+	 */
+	return (qlen != slot->qlen) ? NET_XMIT_CN : NET_XMIT_SUCCESS;
 }
 
 static struct sk_buff *