author		Herbert Xu <herbert@gondor.apana.org.au>	2006-06-23 02:06:41 -0700
committer	David S. Miller <davem@davemloft.net>		2006-06-23 02:06:41 -0700
commit		5b057c6b1a25d57edf2b4d1e956e50936480a9ff
tree		e641febd6f562e0ed1198c160ff353ab513f0612 /net
parent		5fa21d821f6972e70942f2c555ec29dde962bdb2
[NET]: Avoid allocating skb in skb_pad
First of all, it is unnecessary to allocate a new skb in skb_pad since
the existing one is not shared. More importantly, our hard_start_xmit
interface does not allow a new skb to be allocated since that breaks
requeueing.
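To make the calling convention concrete, here is a minimal, hypothetical driver sketch (not taken from this patch; the driver name and the ETH_ZLEN padding target are illustrative) showing how hard_start_xmit has to treat the old pointer-returning skb_pad versus the new in-place, int-returning one:

/* Hypothetical driver sketch, assuming <linux/netdevice.h>,
 * <linux/etherdevice.h> and <linux/skbuff.h>. */
static int foo_hard_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	if (skb->len < ETH_ZLEN) {
		/* Old interface: skb = skb_pad(skb, ...) could hand back a
		 * freshly allocated skb (or NULL), so the pointer the core
		 * might later requeue no longer matches the one we hold.
		 *
		 * New interface: the skb is padded in place; on failure it
		 * has already been freed and we just report it handled. */
		if (skb_pad(skb, ETH_ZLEN - skb->len))
			return NETDEV_TX_OK;
	}

	/* ... tell the hardware to send max(skb->len, ETH_ZLEN) bytes ... */
	return NETDEV_TX_OK;
}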
This patch uses pskb_expand_head to expand the existing skb and linearize
it if needed. Actually, every use of skb_pad on a non-linear skb should be
audited, as such uses do not fit the reasons for which this function was
originally created.
Incidentally, this fixes a minor bug when the skb is cloned (tcpdump,
TCP, etc.). As it stands, skb_pad simply writes into the data buffer of a
cloned skb. Because of the position of the write it is unlikely to cause
problems, but it is still best avoided.
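A hypothetical fragment (not part of the patch) illustrating that bug: skb_clone shares the underlying data buffer, so padding the original in place also writes into memory the clone still references:

	/* Illustration only: 'clone' (e.g. held by a packet socket for
	 * tcpdump) shares skb's data buffer. */
	struct sk_buff *clone = skb_clone(skb, GFP_ATOMIC);

	/* Old skb_pad(): this write lands in the shared buffer, past the
	 * clone's tail as well. */
	memset(skb->data + skb->len, 0, pad);

	/* New skb_pad(): skb_cloned(skb) is true, so pskb_expand_head()
	 * first gives skb a private copy of the data, and only then is the
	 * padding written. */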
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net')
-rw-r--r--	net/core/skbuff.c	36
1 files changed, 26 insertions, 10 deletions
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index bb7210f..fe63d4e 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -781,24 +781,40 @@ struct sk_buff *skb_copy_expand(const struct sk_buff *skb,
  *	filled. Used by network drivers which may DMA or transfer data
  *	beyond the buffer end onto the wire.
  *
- *	May return NULL in out of memory cases.
+ *	May return error in out of memory cases. The skb is freed on error.
  */
 
-struct sk_buff *skb_pad(struct sk_buff *skb, int pad)
+int skb_pad(struct sk_buff *skb, int pad)
 {
-	struct sk_buff *nskb;
+	int err;
+	int ntail;
 	
 	/* If the skbuff is non linear tailroom is always zero.. */
-	if (skb_tailroom(skb) >= pad) {
+	if (!skb_cloned(skb) && skb_tailroom(skb) >= pad) {
 		memset(skb->data+skb->len, 0, pad);
-		return skb;
+		return 0;
 	}
-	
-	nskb = skb_copy_expand(skb, skb_headroom(skb), skb_tailroom(skb) + pad, GFP_ATOMIC);
+
+	ntail = skb->data_len + pad - (skb->end - skb->tail);
+	if (likely(skb_cloned(skb) || ntail > 0)) {
+		err = pskb_expand_head(skb, 0, ntail, GFP_ATOMIC);
+		if (unlikely(err))
+			goto free_skb;
+	}
+
+	/* FIXME: The use of this function with non-linear skb's really needs
+	 * to be audited.
+	 */
+	err = skb_linearize(skb);
+	if (unlikely(err))
+		goto free_skb;
+
+	memset(skb->data + skb->len, 0, pad);
+	return 0;
+
+free_skb:
 	kfree_skb(skb);
-	if (nskb)
-		memset(nskb->data+nskb->len, 0, pad);
-	return nskb;
+	return err;
 }
 
 /*	Trims skb to length len. It can change skb pointers.