author     Herbert Xu <herbert@gondor.apana.org.au>    2009-01-04 16:13:40 -0800
committer  David S. Miller <davem@davemloft.net>       2009-01-04 16:13:40 -0800
commit     5d38a079ce3971f932bbdc0dc5b887806fabd5dc (patch)
tree       79d948098add1f6c52ecd42c151ce6b6fa1dbc5a    /net/core/skbuff.c
parent     b530256d2e0f1a75fab31f9821129fff1bb49faa (diff)
gro: Add page frag support
This patch allows GRO to merge page frags (skb_shinfo(skb)->frags)
into a single skb, rather than using the less efficient frag_list.
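With this scheme the merged payload lives entirely in the frag array of
the head skb, so it can be walked with the usual shared-info accessors.
A minimal sketch, not part of the patch, assuming the skb_frag_t layout
of kernels of this vintage (explicit page/page_offset/size fields):

/* Illustrative only: total payload carried in an skb's page frags. */
static unsigned int gro_frags_len(const struct sk_buff *skb)
{
	const struct skb_shared_info *shinfo = skb_shinfo(skb);
	unsigned int i, len = 0;

	for (i = 0; i < shinfo->nr_frags; i++)
		len += shinfo->frags[i].size;

	return len;
}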
It also adds a new interface, napi_gro_frags, to allow drivers
to inject page frags directly into the stack without allocating
an skb. This is intended to be the GRO equivalent of LRO's
lro_receive_frags interface.
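The napi_gro_frags entry point itself lives in the net/core/dev.c part
of this change, which is outside this diffstat. As a rough illustration
of the intended driver usage only, the napi_gro_fraginfo layout and the
exact napi_gro_frags() signature below are assumptions, not taken from
this diff:

/* Hypothetical receive helper.  Consult net/core/dev.c for the
 * authoritative definition of struct napi_gro_fraginfo and
 * napi_gro_frags(). */
static void rx_gro_one_page(struct napi_struct *napi, struct page *page,
			    unsigned int offset, unsigned int len)
{
	struct napi_gro_fraginfo info = {
		.nr_frags  = 1,
		.len       = len,
		.ip_summed = CHECKSUM_UNNECESSARY,
	};

	info.frags[0].page        = page;
	info.frags[0].page_offset = offset;
	info.frags[0].size        = len;

	napi_gro_frags(napi, &info);
}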
The existing GSO interface can already handle page frags with
or without an appended frag_list so nothing needs to be changed
there.
The merging itself is rather simple. We store any new frag entries
after the last existing entry, without checking whether the first
new entry can be merged with the last existing entry. Making this
check would be easy, but since no existing driver can produce
contiguous frags anyway, it would add complexity for no benefit.
If the total number of entries would exceed the capacity of a
single skb, we simply fall back to using frag_list as we do now.
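The condition guarding the fast path therefore boils down to: neither
skb may have linear data, and the combined frag count must still fit in
one skb. Sketched on its own, purely as an illustration of the test
added in the hunk below:

/* Sketch of the merge test; simplified from the patch below. */
static bool gro_frags_mergeable(const struct sk_buff *p,
				const struct sk_buff *skb)
{
	return !skb_headlen(p) && !skb_headlen(skb) &&
	       skb_shinfo(p)->nr_frags + skb_shinfo(skb)->nr_frags <
	       MAX_SKB_FRAGS;
}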
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net/core/skbuff.c')
-rw-r--r--    net/core/skbuff.c | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 3aafb10..5110b35 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -2594,6 +2594,17 @@ int skb_gro_receive(struct sk_buff **head, struct sk_buff *skb)
 	if (skb_shinfo(p)->frag_list)
 		goto merge;
+	else if (!skb_headlen(p) && !skb_headlen(skb) &&
+		 skb_shinfo(p)->nr_frags + skb_shinfo(skb)->nr_frags <
+		 MAX_SKB_FRAGS) {
+		memcpy(skb_shinfo(p)->frags + skb_shinfo(p)->nr_frags,
+		       skb_shinfo(skb)->frags,
+		       skb_shinfo(skb)->nr_frags * sizeof(skb_frag_t));
+
+		skb_shinfo(p)->nr_frags += skb_shinfo(skb)->nr_frags;
+		NAPI_GRO_CB(skb)->free = 1;
+		goto done;
+	}
 
 	headroom = skb_headroom(p);
 	nskb = netdev_alloc_skb(p->dev, headroom);
@@ -2628,11 +2639,12 @@ int skb_gro_receive(struct sk_buff **head, struct sk_buff *skb)
 	p = nskb;
 
 merge:
-	NAPI_GRO_CB(p)->count++;
 	p->prev->next = skb;
 	p->prev = skb;
 	skb_header_release(skb);
 
+done:
+	NAPI_GRO_CB(p)->count++;
 	p->data_len += skb->len;
 	p->truesize += skb->len;
 	p->len += skb->len;