path: root/net/core
Commit message (Author, Age; Files, Lines)
* netns: fix net_generic array leak (Alexey Dobriyan, 2008-10-14; 1 file, -1/+1)
    Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* net: Rationalise email address: Network Specific Parts (Alan Cox, 2008-10-13; 4 files, -4/+4)
    Clean up the various different email addresses of mine listed in the code to a
    single current and valid address. As Dave says his network merges for 2.6.28 are
    now done, this seems a good point to send them in where they won't risk
    disrupting real changes.
    Signed-off-by: Alan Cox <alan@redhat.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* pktgen: fix skb leak in case of failure (Ilpo Järvinen, 2008-10-13; 1 file, -3/+5)
    Seems that the skb goes into the void (i.e. is leaked) in the failure case,
    unless something magic happens in pskb_expand_head.
    Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
    Acked-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
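A minimal sketch of the failure handling the fix implies (the helper below is hypothetical, not the actual pktgen code, and the GFP flags are an assumption): when pskb_expand_head() fails, the caller has to free the skb itself or it is leaked.

    #include <linux/skbuff.h>

    /* Hypothetical padding helper; only the error path matters here. */
    static struct sk_buff *pad_or_free(struct sk_buff *skb, int pad)
    {
        if (pskb_expand_head(skb, 0, pad, GFP_ATOMIC)) {
            kfree_skb(skb);         /* without this, the skb is leaked */
            return NULL;
        }
        return skb;
    }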
* net: Fix off-by-one in skb_dma_map (Dimitris Michailidis, 2008-10-12; 1 file, -1/+1)
    The unwind loop iterates down to -1 instead of stopping at 0 and ends up
    accessing ->frags[-1].
    Signed-off-by: Dimitris Michailidis <dm@chelsio.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
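To illustrate the class of bug (a simplified stand-in, not the real skb_dma_map() unwind code; the caller-supplied maps array is an assumption): the unwind loop must stop once fragment 0 has been released and never run with i == -1.

    #include <linux/dma-mapping.h>
    #include <linux/skbuff.h>

    /* Undo the DMA mappings of fragments [0, i) after a failure at fragment i. */
    static void unmap_first_frags(struct device *dmadev, struct sk_buff *skb,
                                  dma_addr_t *maps, int i)
    {
        while (--i >= 0)    /* last pass sees i == 0, never -1 */
            dma_unmap_page(dmadev, maps[i],
                           skb_shinfo(skb)->frags[i].size, DMA_TO_DEVICE);
    }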
* Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6 (David S. Miller, 2008-10-08; 2 files, -28/+17)
    Conflicts:
        drivers/net/e1000e/ich8lan.c
        drivers/net/e1000e/netdev.c
| * net: Fix netdev_run_todo dead-lock (Herbert Xu, 2008-10-07; 2 files, -22/+7)
    Benjamin Thery tracked down a bug that explains many instances of the error
        unregister_netdevice: waiting for %s to become free. Usage count = %d
    It turns out that netdev_run_todo can dead-lock with itself if a second instance
    of it is run in a thread that will then free a reference to the device waited on
    by the first instance.
    The problem is really quite silly. We were trying to create parallelism where
    none was required. As netdev_run_todo always follows an RTNL section, and todo
    tasks can only be added with the RTNL held, by definition you should only need
    to wait for the very ones that you've added and be done with it. There is no
    need for a second mutex or spinlock.
    This is exactly what the following patch does.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: David S. Miller <davem@davemloft.net>
| * net: only invoke dev->change_rx_flags when device is UP (Patrick McHardy, 2008-10-07; 1 file, -6/+10)
    Jesper Dangaard Brouer <hawk@comx.dk> reported a bug when setting a VLAN device
    down that is in promiscuous mode:
    When the VLAN device is set down, the promiscuous count on the real device is
    decremented by one by vlan_dev_stop(). When removing the promiscuous flag from
    the VLAN device afterwards, the promiscuous count on the real device is
    decremented a second time by the vlan_change_rx_flags() callback.
    The root cause for this is that the ->change_rx_flags() callback is invoked
    while the device is down. The synchronization is meant to mirror the behaviour
    of the ->set_rx_mode callbacks, meaning the ->open function is responsible for
    doing a full sync on open, the ->close() function is responsible for doing full
    cleanup on ->stop() and ->change_rx_flags() is meant to do incremental changes
    while the device is UP.
    Only invoke ->change_rx_flags() while the device is UP to provide the intended
    behaviour.
    Tested-by: Jesper Dangaard Brouer <jdb@comx.dk>
    Signed-off-by: Patrick McHardy <kaber@trash.net>
    Signed-off-by: David S. Miller <davem@davemloft.net>
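A minimal sketch of the guard being described (the helper name is illustrative and assumes the pre-ndo_ops net_device layout of this era): rx-flag changes are only forwarded to the driver while the device is UP, leaving the full sync to ->open()/->stop().

    #include <linux/netdevice.h>

    static void change_rx_flags_if_up(struct net_device *dev, int flags)
    {
        if ((dev->flags & IFF_UP) && dev->change_rx_flags)
            dev->change_rx_flags(dev, flags);
    }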
* | netns: export netns list (Alexey Dobriyan, 2008-10-08; 1 file, -0/+1)
    Conntrack code will use it for a) removing expectations and helpers when the
    corresponding module is removed, and b) removing conntracks when an L3 protocol
    conntrack module is removed.
    Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
    Signed-off-by: Patrick McHardy <kaber@trash.net>
* | net: packet split receive api (Peter Zijlstra, 2008-10-07; 1 file, -0/+20)
    Add some packet-split receive hooks. For one, this allows doing NUMA-node-affine
    page allocs. Later on these hooks will be extended to do emergency reserve
    allocations for fragments.
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: David S. Miller <davem@davemloft.net>
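One way a packet-split driver might use such a hook (a sketch; it assumes the hook is the netdev-aware page allocator netdev_alloc_page(), and the refill helper itself is hypothetical):

    #include <linux/errno.h>
    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    /* Attach a freshly allocated page as fragment frag_idx of an RX skb. */
    static int refill_rx_frag(struct net_device *dev, struct sk_buff *skb,
                              int frag_idx, unsigned int size)
    {
        struct page *page = netdev_alloc_page(dev);

        if (!page)
            return -ENOMEM;
        skb_fill_page_desc(skb, frag_idx, page, 0, size);
        return 0;
    }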
* | net: wrap sk->sk_backlog_rcv() (Peter Zijlstra, 2008-10-07; 1 file, -2/+2)
    Wrap calling sk->sk_backlog_rcv() in a function. This will allow extending the
    generic sk_backlog_rcv behaviour.
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: David S. Miller <davem@davemloft.net>
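The wrapper is presumably of this shape (an illustrative stand-in; the real one lives in include/net/sock.h under the name sk_backlog_rcv()):

    #include <net/sock.h>

    static inline int backlog_rcv(struct sock *sk, struct sk_buff *skb)
    {
        return sk->sk_backlog_rcv(sk, skb);
    }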
* | net: BUG instead of corrupting memory in pskb_expand_head (Herbert Xu, 2008-10-01; 1 file, -0/+2)
    If the caller of pskb_expand_head specifies a negative nhead we'll silently
    overwrite other people's memory. This patch makes it BUG instead.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* | Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6 (David S. Miller, 2008-10-01; 1 file, -2/+4)
    Conflicts:
        drivers/net/wireless/ath9k/core.c
        drivers/net/wireless/ath9k/main.c
        net/core/dev.c
| * netdev: simple_tx_hash shouldn't hash inside fragments (Alexander Duyck, 2008-09-20; 1 file, -2/+4)
    Currently simple_tx_hash is hashing inside of udp fragments. As a result packets
    are getting sent to all queues when they shouldn't be. This causes a serious
    performance regression which can be seen by sending UDP frames larger than the
    mtu on multiqueue devices. This change will make it so that fragments are hashed
    only as IP datagrams w/o any protocol information.
    Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
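The gist of a fragment-safe hash, sketched (this is not the actual simple_tx_hash(); the helper name and jhash seed are arbitrary): a fragmented IPv4 datagram is hashed on addresses only, so every fragment of a flow lands on the same queue.

    #include <linux/ip.h>
    #include <linux/jhash.h>
    #include <net/ip.h>

    static u32 frag_safe_tx_hash(const struct iphdr *iph, u32 l4_ports)
    {
        u32 addrs = (__force u32)(iph->saddr ^ iph->daddr);

        if (iph->frag_off & htons(IP_MF | IP_OFFSET))
            l4_ports = 0;   /* fragments: no reliable L4 header to look at */

        return jhash_2words(addrs, l4_ports, 0);
    }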
* | net: add skb_recycle_check() to enable netdriver skb recycling (Lennert Buytenhek, 2008-10-01; 1 file, -2/+39)
    This patch adds skb_recycle_check(), which can be used by a network driver after
    transmitting an skb to check whether this skb can be recycled as a receive
    buffer.
    skb_recycle_check() checks that the skb is not shared or cloned, and that it is
    linear and its head portion large enough (as determined by the driver) to be
    recycled as a receive buffer. If these conditions are met, it does any necessary
    reference count dropping and cleans up the skbuff as if it just came from
    __alloc_skb().
    Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
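A possible driver-side use, sketched (the helper and the rx_buf_size policy are illustrative, not taken from a real driver): on TX completion, either hand the skb back for RX reuse or free it.

    #include <linux/skbuff.h>

    static struct sk_buff *try_recycle_tx_skb(struct sk_buff *skb, int rx_buf_size)
    {
        if (skb_recycle_check(skb, rx_buf_size))
            return skb;     /* now looks as if it came from __alloc_skb() */
        kfree_skb(skb);
        return NULL;        /* caller must allocate a fresh RX buffer */
    }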
* | netdev: docbook comment update (revised) (Stephen Hemminger, 2008-09-30; 1 file, -2/+44)
    Add more docbook comments to network device functions and clean up the comments.
    Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* | netdev: use const for some name functions (Stephen Hemminger, 2008-09-30; 1 file, -5/+4)
    dev_change_name and netdev_drivername should use const char on parameters that
    are read-only input values. The strcpy to newname is not needed since newname is
    not used later in the function.
    Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* | net: remove ifalias on empty given alias (Oliver Hartkopp, 2008-09-23; 1 file, -0/+8)
    This patch removes the potentially allocated ifalias when the (new) given alias
    is empty. E.g. when setting
        echo "" > /sys/class/net/eth0/ifalias
    Signed-off-by: Oliver Hartkopp <oliver@hartkopp.net>
    Acked-by: Stephen Hemminger <shemminger@vyatta.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
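A simplified sketch of a setter with this behaviour (assumptions: the real helper is dev_set_alias() in net/core/dev.c and allocates differently; kstrndup is used here only for brevity):

    #include <linux/errno.h>
    #include <linux/netdevice.h>
    #include <linux/slab.h>
    #include <linux/string.h>

    static int set_ifalias(struct net_device *dev, const char *alias, size_t len)
    {
        char *new_alias = NULL;

        if (len >= IFALIASZ)
            return -EINVAL;

        if (len) {
            new_alias = kstrndup(alias, len, GFP_KERNEL);
            if (!new_alias)
                return -ENOMEM;
        }

        kfree(dev->ifalias);    /* an empty alias simply drops the old one */
        dev->ifalias = new_alias;
        return len;
    }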
* | neigh: Remove by-hand SKB queue handling. (David S. Miller, 2008-09-23; 1 file, -13/+8)
    Signed-off-by: David S. Miller <davem@davemloft.net>
* | net: network device name ifalias support (Stephen Hemminger, 2008-09-22; 3 files, -0/+72)
    This patch adds support for keeping an additional character alias associated
    with a network interface. This is useful for maintaining the SNMP ifAlias value,
    which is a user-defined value. Routers use this to hold information like which
    circuit or line it is connected to. It is just an arbitrary text label on the
    network device.
    There are two exposed interfaces with this patch: the value can be read/written
    either via netlink or sysfs. This could be maintained just by the snmp daemon,
    but it is more generally useful for other management tools, and the kernel is a
    good place to act as an agreed-upon interface to store it.
    Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* | Phonet: global definitions (Remi Denis-Courmont, 2008-09-22; 1 file, -3/+6)
    Signed-off-by: Remi Denis-Courmont <remi.denis-courmont@nokia.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* | net: Use hton[sl]() instead of __constant_hton[sl]() where applicable (Arnaldo Carvalho de Melo, 2008-09-20; 1 file, -2/+2)
    Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
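For context, a trivial illustration (the helper name is made up): with a constant argument both spellings fold to a compile-time constant, so the plain form is preferred for readability.

    #include <linux/if_ether.h>
    #include <linux/skbuff.h>

    static inline bool skb_is_ipv4(const struct sk_buff *skb)
    {
        return skb->protocol == htons(ETH_P_IP);    /* not __constant_htons() */
    }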
* | ISDN sockets: add missing lockdep strings (Rémi Denis-Courmont, 2008-09-18; 1 file, -3/+3)
    Signed-off-by: Rémi Denis-Courmont <remi.denis-courmont@nokia.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* | net: fix scheduling of dst_gc_task by __dst_free (Benjamin Thery, 2008-09-12; 1 file, -0/+1)
    The dst garbage collector dst_gc_task() may not be scheduled as we expect it to
    be in __dst_free(). Indeed, when the dst_gc_timer was replaced by the
    delayed_work dst_gc_work, the mod_timer() call used to schedule the garbage
    collector at an earlier date was replaced by a schedule_delayed_work() (see
    commit 86bba269d08f0c545ae76c90b56727f65d62d57f).
    But the behaviour of mod_timer() and schedule_delayed_work() is different in the
    way they handle the delay. mod_timer() stops the timer and re-arms it with the
    new given delay, whereas schedule_delayed_work() only checks if the work is
    already queued in the workqueue (and queues it, with delay, if it is not) BUT it
    does NOT take into account the new delay (even if the new delay is earlier in
    time). schedule_delayed_work() returns 0 if it didn't queue the work, but we
    don't check the return code in __dst_free().
    If I understand the code in __dst_free() correctly, we want dst_gc_task to be
    queued after DST_GC_INC jiffies if we pass the test (and not at some
    undetermined time in the future), so I think we should add a call to
    cancel_delayed_work() before schedule_delayed_work(). Patch below.
    Or we should at least test the return code of schedule_delayed_work(), and reset
    the values of dst_garbage.timer_inc and dst_garbage.timer_expires back to their
    former values if schedule_delayed_work() failed. Otherwise the subsequent calls
    to __dst_free will test the wrong values and assume the wrong thing about when
    the garbage collector is supposed to be scheduled.
    dst_gc_task() also calls schedule_delayed_work() without checking its return
    code (or calling cancel_delayed_work() first), but it should be fine there:
    dst_gc_task is the routine of the delayed_work, so no dst_gc_work should be
    pending in the queue when it's running.
    Signed-off-by: Benjamin Thery <benjamin.thery@bull.net>
    Acked-by: Eric Dumazet <dada1@cosmosbay.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
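The pattern the author proposes, as a minimal sketch (the wrapper name is made up): cancel the pending work first so that a new, possibly earlier, delay actually takes effect.

    #include <linux/workqueue.h>

    static void reschedule_earlier(struct delayed_work *dwork, unsigned long delay)
    {
        /* schedule_delayed_work() alone keeps the old expiry if the work is
         * already queued; cancelling first makes the new delay win. */
        cancel_delayed_work(dwork);
        schedule_delayed_work(dwork, delay);
    }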
* | net: Add SKB DMA mapping helper functions. (David S. Miller, 2008-09-11; 2 files, -0/+67)
    Signed-off-by: David S. Miller <davem@davemloft.net>
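How a driver might use the new helpers (a hypothetical TX path, assuming the helpers are skb_dma_map()/skb_dma_unmap() taking a struct device and a DMA direction):

    #include <linux/dma-mapping.h>
    #include <linux/errno.h>
    #include <linux/skbuff.h>

    /* Map the linear head and all paged fragments before queueing for TX. */
    static int map_skb_for_tx(struct device *dmadev, struct sk_buff *skb)
    {
        return skb_dma_map(dmadev, skb, DMA_TO_DEVICE) ? -ENOMEM : 0;
    }

    /* Undo the mapping once the hardware is done with the buffer. */
    static void unmap_tx_skb(struct device *dmadev, struct sk_buff *skb)
    {
        skb_dma_unmap(dmadev, skb, DMA_TO_DEVICE);
        kfree_skb(skb);
    }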
* | Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/torvalds/linux-2.6 (David S. Miller, 2008-09-08; 1 file, -1/+6)
    Conflicts:
        net/mac80211/mlme.c
| * pkt_sched: Fix qdisc state in net_tx_action() (Jarek Poplawski, 2008-09-07; 1 file, -1/+6)
    net_tx_action() can skip __QDISC_STATE_SCHED bit clearing while the qdisc is
    neither run nor rescheduled, which may cause an endless loop in
    dev_deactivate().
    Reported-by: Denys Fedoryshchenko <denys@visp.net.lb>
    Tested-by: Denys Fedoryshchenko <denys@visp.net.lb>
    Signed-off-by: Jarek Poplawski <jarkao2@gmail.com>
    Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* | net: Enable TSO if supported by at least one device (Herbert Xu, 2008-09-08; 1 file, -0/+6)
    As it stands, users of netdev_compute_features (e.g., bridges/bonding) will only
    enable TSO if all constituent devices support it. This is unnecessarily
    pessimistic, since even on devices that do not support hardware TSO and SG,
    emulated TSO still performs on a par with TSO off.
    This patch enables TSO if at least one constituent device supports it in
    hardware. The direct beneficiaries will be virtualisation setups that use
    bridging, since this means that TSO will always be enabled for communication
    from the host to the guests.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: David S. Miller <davem@davemloft.net>
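The policy condensed into a sketch (not netdev_compute_features() itself; the device-array interface is an assumption): AND all feature flags together, but re-add TSO if any one port has it in hardware.

    #include <linux/netdevice.h>

    static unsigned long combined_features(struct net_device *ports[], int n)
    {
        unsigned long all = ports[0]->features;
        unsigned long one_tso = 0;
        int i;

        for (i = 1; i < n; i++)
            all &= ports[i]->features;
        for (i = 0; i < n; i++)
            one_tso |= ports[i]->features & NETIF_F_TSO;

        if (one_tso)
            all |= NETIF_F_TSO;     /* software GSO covers the rest */
        return all;
    }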
* pkt_sched: Prevent livelock in TX queue running. (David S. Miller, 2008-08-19; 1 file, -1/+3)
    If dev_deactivate() is trying to quiesce the queue, it is theoretically possible
    for another cpu to livelock trying to process that queue. This happens because
    dev_deactivate() grabs the queue spinlock as it checks the queue state, whereas
    net_tx_action() does a trylock and reschedules the qdisc if it hits the lock.
    This breaks the livelock by adding a check on __QDISC_STATE_DEACTIVATED to
    net_tx_action() when the trylock fails.
    Based upon feedback from Herbert Xu and Jarek Poplawski.
    Signed-off-by: David S. Miller <davem@davemloft.net>
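Roughly what the contended path looks like with that check (a simplified sketch, not the full net_tx_action() body):

    #include <net/pkt_sched.h>
    #include <net/sch_generic.h>

    static void run_or_reschedule(struct Qdisc *q, spinlock_t *root_lock)
    {
        if (spin_trylock(root_lock)) {
            qdisc_run(q);
            spin_unlock(root_lock);
        } else if (!test_bit(__QDISC_STATE_DEACTIVATED, &q->state)) {
            __netif_schedule(q);    /* retry later, unless being torn down */
        }
    }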
* Revert "pkt_sched: Protect gen estimators under est_lock."David S. Miller2008-08-181-5/+4
| | | | | | | | | This reverts commit d4766692e72422f3b0f0e9ac6773d92baad07d51. qdisc_destroy() now runs in RTNL fully again, so this change is no longer needed. Signed-off-by: David S. Miller <davem@davemloft.net>
* pkt_sched: Fix missed RCU unlock in dev_queue_xmit() (David S. Miller, 2008-08-17; 1 file, -6/+4)
    Noticed by Jarek Poplawski.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* net: Change handling of the __QDISC_STATE_SCHED flag in net_tx_action(). (Jarek Poplawski, 2008-08-17; 1 file, -15/+19)
    Change handling of the __QDISC_STATE_SCHED flag in net_tx_action() to enable
    proper control in dev_deactivate(). Now, if this flag is seen as unset under
    root_lock, it means the qdisc can't be netif_scheduled.
    Signed-off-by: Jarek Poplawski <jarkao2@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* pkt_sched: Add 'deactivated' state. (David S. Miller, 2008-08-17; 1 file, -1/+8)
    This new state lets dev_deactivate() mark a qdisc as having been deactivated.
    dev_queue_xmit() and ing_filter() check for this bit and do not try to process
    the qdisc if the bit is set.
    dev_deactivate() polls the qdisc after setting the bit, waiting for both
    __QDISC_STATE_RUNNING and __QDISC_STATE_SCHED to clear.
    This isn't perfect yet, but subsequent changesets will make it so. This part is
    just one piece of the puzzle.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* net: skb_copy_datagram_from_iovec() (Rusty Russell, 2008-08-15; 1 file, -0/+87)
    There's an skb_copy_datagram_iovec() to copy out of a paged skb, but nothing the
    other way around (because we don't do that).
    We want to allocate big skbs in tun.c, so let's add the function. It's a carbon
    copy of skb_copy_datagram_iovec() with enough changes to be annoying.
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Signed-off-by: David S. Miller <davem@davemloft.net>
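A sketch of how a tun-like driver could use it (assuming the signature of this era, skb_copy_datagram_from_iovec(skb, offset, iov, len); the allocation details are simplified):

    #include <linux/skbuff.h>
    #include <linux/uio.h>

    static struct sk_buff *skb_from_user_iovec(struct iovec *iov, unsigned int len)
    {
        struct sk_buff *skb = alloc_skb(len, GFP_KERNEL);

        if (!skb)
            return NULL;
        skb_put(skb, len);      /* make room for the copied data */
        if (skb_copy_datagram_from_iovec(skb, 0, iov, len)) {
            kfree_skb(skb);
            return NULL;
        }
        return skb;
    }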
* net: Preserve netfilter attributes in skb_gso_segment using __copy_skb_header (Herbert Xu, 2008-08-15; 1 file, -10/+2)
    skb_gso_segment didn't preserve some attributes in the original skb such as the
    netfilter fields. This was harmless until they were used, which is the case for
    packets going through lo.
    This patch makes it call __copy_skb_header, which also picks up some other
    missing attributes.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* pkt_sched: Protect gen estimators under est_lock. (Jarek Poplawski, 2008-08-13; 1 file, -4/+5)
    gen_kill_estimator() required rtnl_lock() protection, but since it is moved to
    an RCU callback __qdisc_destroy(), let's use est_lock instead.
    Signed-off-by: Jarek Poplawski <jarkao2@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* pktgen: prevent pktgen from using bad tx queue (Andrew Gallatin, 2008-08-13; 1 file, -0/+29)
    With the new multi-queue transmit code, it is possible to accidentally make
    pktgen pick a non-existing tx queue simply by using a stale script to drive
    pktgen. Access to this non-existing tx queue will then trigger a bad memory
    access and kill the machine.
    For example, setting "queue_map_max 2" will cause my machine to die when
    accessing a garbage spinlock in the non-existing tx queue:
        BUG: spinlock bad magic on CPU#0, kpktgend_0/564
        lock: ffff88001ddf6718, .magic: ffffffff, .owner: /-1, .owner_cpu: 0
        Pid: 564, comm: kpktgend_0 Not tainted 2.6.27-rc3 #35
        Call Trace:
        [<ffffffff803a1228>] spin_bug+0xa4/0xac
        [<ffffffff803a1253>] _raw_spin_lock+0x23/0x123
        [<ffffffff8055b06f>] _spin_lock_bh+0x17/0x1b
        [<ffffffff804cb57d>] pktgen_thread_worker+0xa97/0x1002
        [<ffffffff8022874d>] ? finish_task_switch+0x38/0x97
        [<ffffffff80242077>] ? autoremove_wake_function+0x0/0x36
        [<ffffffff80242077>] ? autoremove_wake_function+0x0/0x36
        [<ffffffff804caae6>] ? pktgen_thread_worker+0x0/0x1002
        [<ffffffff80241a40>] kthread+0x44/0x6d
        [<ffffffff8020c399>] child_rip+0xa/0x11
        [<ffffffff802419fc>] ? kthread+0x0/0x6d
        [<ffffffff8020c38f>] ? child_rip+0x0/0x11
    The attached patch adds some sanity checking to prevent these sorts of
    configuration errors.
    Signed-off-by: Andrew Gallatin <gallatin@myri.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
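The kind of sanity check this implies, sketched (pktgen's real validation is more involved; the clamp helper is illustrative): keep the configured queue_map range inside the device's real TX queue count.

    #include <linux/netdevice.h>

    static void clamp_queue_map(const struct net_device *odev,
                                unsigned int *q_map_min, unsigned int *q_map_max)
    {
        unsigned int ntxq = odev->real_num_tx_queues;

        if (*q_map_min >= ntxq)
            *q_map_min = ntxq - 1;
        if (*q_map_max >= ntxq)
            *q_map_max = ntxq - 1;
    }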
* pktgen: multiqueue etc. (Robert Olsson, 2008-08-07; 1 file, -3/+34)
    So far pktgen has had a restriction to only use one device per kernel thread.
    With the new multiqueue architecture this is no longer adequate. The patch below
    is an effort to remove this by, in the pktgen configuration, adding a tag to the
    device name a la eth0@0 etc. The tag is used for usual device config just as
    before. Also a new flag, QUEUE_MAP_CPU, is introduced to mirror queue_map with
    the sending thread's smp_processor_id().
    An example: We use 4 CPUs to send to one 10g interface (eth0) and we use the new
    tagging to send a mix of packet sizes, 64, 576 and 1500 bytes. Also we use TX
    queues according to smp_processor_id().
        PGDEV=/proc/net/pktgen/kpktgend_0
        pgset "add_device eth0@0"
        PGDEV=/proc/net/pktgen/kpktgend_1
        pgset "add_device eth0@1"
        PGDEV=/proc/net/pktgen/kpktgend_2
        pgset "add_device eth0@2"
        PGDEV=/proc/net/pktgen/kpktgend_3
        pgset "add_device eth0@3"
        ....
        PGDEV=/proc/net/pktgen/eth0@0
        pgset "pkt_size 64"
        pgset "flag QUEUE_MAP_CPU"
        PGDEV=/proc/net/pktgen/eth0@1
        pgset "pkt_size 572"
        pgset "flag QUEUE_MAP_CPU"
        PGDEV=/proc/net/pktgen/eth0@2
        pgset "pkt_size 1496"
        PGDEV=/proc/net/pktgen/eth0@3
        pgset "pkt_size 1496"
        pgset "flag QUEUE_MAP_CPU"
    Signed-off-by: Robert Olsson <robert.olsson@its.uu.se>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* net/core: Allow receive on active slaves. (Joe Eykholt, 2008-08-07; 1 file, -2/+4)
    If a packet_type specifies an active slave to bonding and not just any
    interface, allow it to receive frames that came in on that interface.
    Signed-off-by: Joe Eykholt <jre@nuovasystems.com>
    Signed-off-by: Jay Vosburgh <fubar@us.ibm.com>
    Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
* net/core: Allow certain receives on inactive slave. (Joe Eykholt, 2008-08-07; 1 file, -7/+8)
    Allow a packet_type that specifies the exact device to receive even on inactive
    bonding slave devices. This is important for some L2 protocols such as LLDP and
    FCoE. This can eventually be used for the bonding special cases as well.
    Signed-off-by: Joe Eykholt <jre@nuovasystems.com>
    Signed-off-by: Jay Vosburgh <fubar@us.ibm.com>
    Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
* net/core: Uninline skb_bond(). (Joe Eykholt, 2008-08-07; 1 file, -20/+8)
    Otherwise subsequent changes need multiple return values.
    Signed-off-by: Joe Eykholt <jre@nuovasystems.com>
    Signed-off-by: Jay Vosburgh <fubar@us.ibm.com>
    Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
* pktgen: mac count (Robert Olsson, 2008-08-05; 1 file, -2/+2)
    dst_mac_count and src_mac_count patch from Eneas Hunguana. We have been sending
    one mac address too many.
    Signed-off-by: Robert Olsson <robert.olsson@its.uu.se>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* pktgen: random flow (Robert Olsson, 2008-08-05; 1 file, -1/+5)
    Random flow generation has not worked. This fixes it.
    Signed-off-by: Robert Olsson <robert.olsson@its.uu.se>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* net_sched: Add qdisc __NET_XMIT_BYPASS flag (Jarek Poplawski, 2008-08-04; 1 file, -1/+0)
    Patrick McHardy <kaber@trash.net> noticed that it would be nice to handle
    NET_XMIT_BYPASS by NET_XMIT_SUCCESS with an internal qdisc flag
    __NET_XMIT_BYPASS and to remove the mapping from dev_queue_xmit().
    David Miller <davem@davemloft.net> spotted a serious bug in the first version of
    this patch.
    Signed-off-by: Jarek Poplawski <jarkao2@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* net: eliminate refcounting in backlog queue (Stephen Hemminger, 2008-08-03; 1 file, -7/+16)
    Avoid the overhead of atomic increment/decrement on each received packet. This
    helps performance of non-NAPI devices (like loopback). Use a cleanup function to
    walk the queue on each cpu and clean out any leftover packets.
    Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* net: use software GSO for SG+CSUM capable netdevices (Lennert Buytenhek, 2008-08-03; 1 file, -0/+4)
    If a netdevice does not support hardware GSO, allowing the stack to use GSO
    anyway and then splitting the GSO skb into MSS-sized pieces as it is handed to
    the netdevice for transmitting is likely still a win as far as throughput and/or
    CPU usage are concerned, since it reduces the number of trips through the output
    path.
    This patch enables the use of GSO on any netdevice that supports SG. If a GSO
    skb is then sent to a netdevice that supports SG but does not support hardware
    GSO, net/core/dev.c:dev_hard_start_xmit() will take care of doing the necessary
    GSO segmentation in software.
    Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: David S. Miller <davem@davemloft.net>
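The enabling step is probably as small as this sketch suggests (a guess at the shape, not the literal register_netdevice() hunk): a device advertising scatter/gather gets the generic GSO bit, and segmentation happens in software when the hardware cannot do it.

    #include <linux/netdevice.h>

    static void maybe_enable_sw_gso(struct net_device *dev)
    {
        if (dev->features & NETIF_F_SG)
            dev->features |= NETIF_F_GSO;   /* dev_hard_start_xmit() segments */
    }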
* net: fix missing pneigh entries in the neighbor seq_file code (Chris Larson, 2008-08-03; 1 file, -5/+6)
    When pneigh entries exist, but the user's read buffer isn't sufficient to hold
    them all, one of the pneigh entries will be missing from the results.
    In neigh_get_idx_any, the number of elements which neigh_get_idx encountered is
    not correctly subtracted from the position number before the call to
    pneigh_get_idx. neigh_get_idx reduces the position by 1 for each call to
    neigh_get_next, but it does not reduce it by one for the first element
    (neigh_get_first).
    The patch alters the neigh_get_idx and pneigh_get_idx functions to subtract one
    from pos, for the first element, when pos is non-zero.
    Signed-off-by: Chris Larson <clarson@mvista.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* net: in the first call to neigh_seq_next, call neigh_get_first, not neigh_get_idx (Chris Larson, 2008-08-03; 1 file, -1/+1)
    neigh_seq_next won't be called both with *pos > 0 && v == SEQ_START_TOKEN, so
    there's no point calling neigh_get_idx when we're on the start token; just call
    neigh_get_first directly.
    Signed-off-by: Chris Larson <clarson@mvista.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* pkt_sched: Use qdisc_lock() on already sampled root qdisc. (David S. Miller, 2008-08-02; 1 file, -2/+2)
    Based upon a bug report by Jeff Kirsher.
    Don't use qdisc_root_lock() in these cases as the root qdisc could have been
    changed, and we'd thus lock the wrong object.
    Tested by Emil S Tantilov who confirms that this seems to fix the problem.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* netdev: Fix lockdep warnings in multiqueue configurations. (David S. Miller, 2008-07-31; 3 files, -2/+7)
    When support for multiple TX queues was added, the netif_tx_lock() routines were
    converted to iterate over all TX queues and grab each queue's spinlock. This
    causes heartburn for lockdep and it's not a healthy thing to do with lots of TX
    queues anyway.
    So modify this to use a top-level lock and a "frozen" state for the individual
    TX queues.
    Signed-off-by: David S. Miller <davem@davemloft.net>
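A rough sketch of that scheme (simplified; tx_global_lock and __QUEUE_STATE_FROZEN are the names used around this time, but this is not the literal netif_tx_lock() from the tree): take one device-level lock, then visit each queue only long enough to mark it frozen.

    #include <linux/netdevice.h>

    static void freeze_all_tx_queues(struct net_device *dev)
    {
        unsigned int i;

        spin_lock(&dev->tx_global_lock);
        for (i = 0; i < dev->num_tx_queues; i++) {
            struct netdev_queue *txq = netdev_get_tx_queue(dev, i);

            /* hold the per-queue lock only long enough to set the bit */
            __netif_tx_lock(txq, smp_processor_id());
            set_bit(__QUEUE_STATE_FROZEN, &txq->state);
            __netif_tx_unlock(txq);
        }
    }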
* pkt_sched: Fix OOPS on ingress qdisc add. (David S. Miller, 2008-07-30; 1 file, -2/+2)
    Bug report from Steven Jan Springl:
        Issuing the following command causes a kernel oops:
            tc qdisc add dev eth0 handle ffff: ingress
    The problem mostly stems from all of the special case handling of ingress
    qdiscs. So, to fix this, do the grafting operation the same way we do for TX
    qdiscs. Which means that dev_activate() and dev_deactivate() now do the
    "qdisc_sleeping <--> qdisc" transitions on dev->rx_queue too.
    Future simplifications are possible now, mainly because it is impossible for
    dev_queue->{qdisc,qdisc_sleeping} to be NULL. There are NULL checks all over to
    handle the ingress qdisc special case that used to exist before this commit.
    Signed-off-by: David S. Miller <davem@davemloft.net>