author      Linus Torvalds <torvalds@linux-foundation.org>  2012-05-21 10:03:46 -0700
committer   Linus Torvalds <torvalds@linux-foundation.org>  2012-05-21 10:03:46 -0700
commit      cb62ab71fe2b16e8203a0f0a2ef4eda23d761338 (patch)
tree        536ba39658e47d511a489c52f7aac60cd78967e5 /net/sctp
parent      31ed8e6f93a27304c9e157dab0267772cd94eaad (diff)
parent      74863948f925d9f3bb4e3d3a783e49e9c662d839 (diff)
download    kernel_goldelico_gta04-cb62ab71fe2b16e8203a0f0a2ef4eda23d761338.zip
            kernel_goldelico_gta04-cb62ab71fe2b16e8203a0f0a2ef4eda23d761338.tar.gz
            kernel_goldelico_gta04-cb62ab71fe2b16e8203a0f0a2ef4eda23d761338.tar.bz2
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next
Pull networking changes from David Miller:
1) Get rid of the error-prone NLA_PUT*() macros that used an embedded
goto.
2) Kill off the token-ring and MCA networking drivers, from Paul
Gortmaker.
3) Reduce high-order allocations made by datagram AF_UNIX sockets, from
Eric Dumazet.
4) Add PTP hardware clock support to IGB and IXGBE, from Richard
Cochran and Jacob Keller.
5) Allow users to query timestamping capabilities of a card via
ethtool, from Richard Cochran.
6) Add loadbalance mode to the teaming driver, from Jiri Pirko. As part
of this, BPF filters can now exist without being attached to a socket,
and the load-balancing function is computed by one such filter.
7) Francois Romieu went through the network drivers removing gratuitous
uses of netdev->base_addr; perhaps some day we can remove it
completely, but it is still used for ISA probing.
8) Add a BPF JIT for sparc. I know, who cares, right? :-)
9) Move networking sysctl registry away from using the compatibility
mode interfaces in the sysctl code. From Eric W. Biederman.
10) Pavel Emelyanov added a way to save and restore TCP socket state via
the TCP_REPAIR, TCP_REPAIR_QUEUE, and TCP_QUEUE_SEQ socket options,
as well as a way to forcefully bind a socket to a port via the
sk->sk_reuse value SK_FORCE_REUSE. There is also TCP_REPAIR_OPTIONS,
which allows the TCP options enabled on the connection to be
reinstated (a userspace sketch of these options follows this list).
11) Several enhancements from Eric Dumazet that, in particular, can
improve splice performance on TCP sockets significantly.
a) Reset the offset of the per-socket sendmsg page when we know
we're the only user of the page in linear_to_page().
b) Add facilities such that skb->data can be backed by a page rather
than SLAB kmalloc'd memory. In particular, devices which were
receiving into linear RX buffers can now end up providing paged
data.
The big result is that code like splice and GRO no longer has to
copy.
12) Allow a pure sender to more gracefully handle ACK backlogs in TCP.
What can happen at high rates is that the sender hasn't grown its
receive buffer limits at all (it's not receiving data, so it really
doesn't need to), but the non-data ACKs consume receive buffer
space.
sk_add_backlog() is too aggressive in dropping frames in this case,
so relax its requirements by using the receive buffer plus the send
buffer limit as the backlog limit instead of just the former (a
stand-alone model of this relaxation follows this list).
Also from Eric Dumazet.
13) Add ipv6 support to L2TP, from Benjamin LaHaise, James Chapman, and
Chris Elston.
14) Implement TCP early retransmit (RFC 5827), from Yuchung Cheng.
Basically, we can start fast retransmit before hitting the dupack
threshold under certain conditions.
15) New CoDel active queue management packet scheduler, from Eric
Dumazet, based upon initial work by Dave Taht.
Basically, the big feature is that packets are dropped (or ECN bits
are set) based upon how long packets have lived in the queue, rather
than the queue length (which is what RED uses); a simplified sketch
of this sojourn-time test follows this list.
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (1341 commits)
drivers/net/stmmac: seq_file fix memory leak
ipv6/exthdrs: strict Pad1 and PadN check
USB: qmi_wwan: Add ZTE (Vodafone) K3520-Z
USB: qmi_wwan: Add ZTE (Vodafone) K3765-Z
USB: qmi_wwan: Make forced int 4 whitelist generic
net/ipv4: replace simple_strtoul with kstrtoul
net/ipv4/ipconfig: neaten __setup placement
net: qmi_wwan: Add Vodafone/Huawei K5005 support
net: cdc_ether: Add ZTE WWAN matches before generic Ethernet
ipv6: use skb coalescing in reassembly
ipv4: use skb coalescing in defragmentation
net: introduce skb_try_coalesce()
net:ipv6:fixed space issues relating to operators.
net:ipv6:fixed a trailing white space issue.
ipv6: disable GSO on sockets hitting dst_allfrag
tg3: use netdev_alloc_frag() API
net: napi_frags_skb() is static
ppp: avoid false drop_monitor false positives
ipv6: bool/const conversions phase2
ipx: Remove spurious NULL checking in ipx_ioctl().
...
Diffstat (limited to 'net/sctp')
-rw-r--r--  net/sctp/associola.c      |  4
-rw-r--r--  net/sctp/input.c          |  4
-rw-r--r--  net/sctp/output.c         |  4
-rw-r--r--  net/sctp/outqueue.c       |  2
-rw-r--r--  net/sctp/sm_sideeffect.c  |  9
-rw-r--r--  net/sctp/sm_statefuns.c   | 22
-rw-r--r--  net/sctp/socket.c         |  6
-rw-r--r--  net/sctp/sysctl.c         | 10
8 files changed, 25 insertions, 36 deletions
diff --git a/net/sctp/associola.c b/net/sctp/associola.c
index acd2edb..5bc9ab1 100644
--- a/net/sctp/associola.c
+++ b/net/sctp/associola.c
@@ -1408,7 +1408,7 @@ static inline int sctp_peer_needs_update(struct sctp_association *asoc)
 }
 
 /* Increase asoc's rwnd by len and send any window update SACK if needed. */
-void sctp_assoc_rwnd_increase(struct sctp_association *asoc, unsigned len)
+void sctp_assoc_rwnd_increase(struct sctp_association *asoc, unsigned int len)
 {
 	struct sctp_chunk *sack;
 	struct timer_list *timer;
@@ -1465,7 +1465,7 @@ void sctp_assoc_rwnd_increase(struct sctp_association *asoc, unsigned len)
 }
 
 /* Decrease asoc's rwnd by len. */
-void sctp_assoc_rwnd_decrease(struct sctp_association *asoc, unsigned len)
+void sctp_assoc_rwnd_decrease(struct sctp_association *asoc, unsigned int len)
 {
 	int rx_count;
 	int over = 0;
diff --git a/net/sctp/input.c b/net/sctp/input.c
index 80f71af..80564fe 100644
--- a/net/sctp/input.c
+++ b/net/sctp/input.c
@@ -342,7 +342,7 @@ int sctp_backlog_rcv(struct sock *sk, struct sk_buff *skb)
 	sctp_bh_lock_sock(sk);
 
 	if (sock_owned_by_user(sk)) {
-		if (sk_add_backlog(sk, skb))
+		if (sk_add_backlog(sk, skb, sk->sk_rcvbuf))
 			sctp_chunk_free(chunk);
 		else
 			backloged = 1;
@@ -376,7 +376,7 @@ static int sctp_add_backlog(struct sock *sk, struct sk_buff *skb)
 	struct sctp_ep_common *rcvr = chunk->rcvr;
 	int ret;
 
-	ret = sk_add_backlog(sk, skb);
+	ret = sk_add_backlog(sk, skb, sk->sk_rcvbuf);
 	if (!ret) {
 		/* Hold the assoc/ep while hanging on the backlog queue.
 		 * This way, we know structures we need will not disappear
diff --git a/net/sctp/output.c b/net/sctp/output.c
index 8fc4dcd..f1b7d4b 100644
--- a/net/sctp/output.c
+++ b/net/sctp/output.c
@@ -661,8 +661,8 @@ static sctp_xmit_t sctp_packet_can_append_data(struct sctp_packet *packet,
 	 */
 	if (!sctp_sk(asoc->base.sk)->nodelay && sctp_packet_empty(packet) &&
 	    inflight && sctp_state(asoc, ESTABLISHED)) {
-		unsigned max = transport->pathmtu - packet->overhead;
-		unsigned len = chunk->skb->len + q->out_qlen;
+		unsigned int max = transport->pathmtu - packet->overhead;
+		unsigned int len = chunk->skb->len + q->out_qlen;
 
 		/* Check whether this chunk and all the rest of pending
 		 * data will fit or delay in hopes of bundling a full
diff --git a/net/sctp/outqueue.c b/net/sctp/outqueue.c
index cfeb1d4..a0fa19f 100644
--- a/net/sctp/outqueue.c
+++ b/net/sctp/outqueue.c
@@ -1147,7 +1147,7 @@ int sctp_outq_sack(struct sctp_outq *q, struct sctp_sackhdr *sack)
 	__u32 sack_ctsn, ctsn, tsn;
 	__u32 highest_tsn, highest_new_tsn;
 	__u32 sack_a_rwnd;
-	unsigned outstanding;
+	unsigned int outstanding;
 	struct sctp_transport *primary = asoc->peer.primary_path;
 	int count_of_newacks = 0;
 	int gap_ack_blocks;
diff --git a/net/sctp/sm_sideeffect.c b/net/sctp/sm_sideeffect.c
index 1ff51c9..c96d1a8 100644
--- a/net/sctp/sm_sideeffect.c
+++ b/net/sctp/sm_sideeffect.c
@@ -524,7 +524,7 @@ static void sctp_do_8_2_transport_strike(struct sctp_association *asoc,
 /* Worker routine to handle INIT command failure.  */
 static void sctp_cmd_init_failed(sctp_cmd_seq_t *commands,
 				 struct sctp_association *asoc,
-				 unsigned error)
+				 unsigned int error)
 {
 	struct sctp_ulpevent *event;
 
@@ -550,7 +550,7 @@ static void sctp_cmd_assoc_failed(sctp_cmd_seq_t *commands,
 				  sctp_event_t event_type,
 				  sctp_subtype_t subtype,
 				  struct sctp_chunk *chunk,
-				  unsigned error)
+				  unsigned int error)
 {
 	struct sctp_ulpevent *event;
 
@@ -1161,9 +1161,8 @@ static int sctp_side_effects(sctp_event_t event_type, sctp_subtype_t subtype,
 			break;
 
 		case SCTP_DISPOSITION_VIOLATION:
-			if (net_ratelimit())
-				pr_err("protocol violation state %d chunkid %d\n",
-				       state, subtype.chunk);
+			net_err_ratelimited("protocol violation state %d chunkid %d\n",
+					    state, subtype.chunk);
 			break;
 
 		case SCTP_DISPOSITION_NOT_IMPL:
diff --git a/net/sctp/sm_statefuns.c b/net/sctp/sm_statefuns.c
index 891f5db..9fca103 100644
--- a/net/sctp/sm_statefuns.c
+++ b/net/sctp/sm_statefuns.c
@@ -1129,17 +1129,15 @@ sctp_disposition_t sctp_sf_backbeat_8_3(const struct sctp_endpoint *ep,
 	/* This should never happen, but lets log it if so.  */
 	if (unlikely(!link)) {
 		if (from_addr.sa.sa_family == AF_INET6) {
-			if (net_ratelimit())
-				pr_warn("%s association %p could not find address %pI6\n",
-					__func__,
-					asoc,
-					&from_addr.v6.sin6_addr);
+			net_warn_ratelimited("%s association %p could not find address %pI6\n",
+					     __func__,
+					     asoc,
+					     &from_addr.v6.sin6_addr);
 		} else {
-			if (net_ratelimit())
-				pr_warn("%s association %p could not find address %pI4\n",
-					__func__,
-					asoc,
-					&from_addr.v4.sin_addr.s_addr);
+			net_warn_ratelimited("%s association %p could not find address %pI4\n",
+					     __func__,
+					     asoc,
+					     &from_addr.v4.sin_addr.s_addr);
 		}
 		return SCTP_DISPOSITION_DISCARD;
 	}
@@ -2410,7 +2408,7 @@ static sctp_disposition_t __sctp_sf_do_9_1_abort(const struct sctp_endpoint *ep,
 					sctp_cmd_seq_t *commands)
 {
 	struct sctp_chunk *chunk = arg;
-	unsigned len;
+	unsigned int len;
 	__be16 error = SCTP_ERROR_NO_ERROR;
 
 	/* See if we have an error cause code in the chunk.  */
@@ -2446,7 +2444,7 @@ sctp_disposition_t sctp_sf_cookie_wait_abort(const struct sctp_endpoint *ep,
 				     sctp_cmd_seq_t *commands)
 {
 	struct sctp_chunk *chunk = arg;
-	unsigned len;
+	unsigned int len;
 	__be16 error = SCTP_ERROR_NO_ERROR;
 
 	if (!sctp_vtag_verify_either(chunk, asoc))
diff --git a/net/sctp/socket.c b/net/sctp/socket.c
index 92ba71d..b3b8a8d 100644
--- a/net/sctp/socket.c
+++ b/net/sctp/socket.c
@@ -5840,10 +5840,8 @@ SCTP_STATIC int sctp_listen_start(struct sock *sk, int backlog)
 	if (!sctp_sk(sk)->hmac && sctp_hmac_alg) {
 		tfm = crypto_alloc_hash(sctp_hmac_alg, 0, CRYPTO_ALG_ASYNC);
 		if (IS_ERR(tfm)) {
-			if (net_ratelimit()) {
-				pr_info("failed to load transform for %s: %ld\n",
-					sctp_hmac_alg, PTR_ERR(tfm));
-			}
+			net_info_ratelimited("failed to load transform for %s: %ld\n",
+					     sctp_hmac_alg, PTR_ERR(tfm));
 			return -ENOSYS;
 		}
 		sctp_sk(sk)->hmac = tfm;
diff --git a/net/sctp/sysctl.c b/net/sctp/sysctl.c
index 60ffbd0..e5fe639 100644
--- a/net/sctp/sysctl.c
+++ b/net/sctp/sysctl.c
@@ -275,22 +275,16 @@ static ctl_table sctp_table[] = {
 	{ /* sentinel */ }
 };
 
-static struct ctl_path sctp_path[] = {
-	{ .procname = "net", },
-	{ .procname = "sctp", },
-	{ }
-};
-
 static struct ctl_table_header * sctp_sysctl_header;
 
 /* Sysctl registration.  */
 void sctp_sysctl_register(void)
 {
-	sctp_sysctl_header = register_sysctl_paths(sctp_path, sctp_table);
+	sctp_sysctl_header = register_net_sysctl(&init_net, "net/sctp", sctp_table);
 }
 
 /* Sysctl deregistration.  */
 void sctp_sysctl_unregister(void)
 {
-	unregister_sysctl_table(sctp_sysctl_header);
+	unregister_net_sysctl_table(sctp_sysctl_header);
 }