path: root/net
Commit message | Author | Age | Files | Lines
* ipv6: use better hash for addrconf | stephen hemminger | 2010-03-20 | 1 | -17/+8
  The existing hash function has a couple of issues:
  * it is hardwired to 16 for IN6_ADDR_HSIZE
  * limited to 256 and callers using int
  * use jhash2 rather than some old BSD algorithm
  No need for a random seed, since this is a local-only table (based on assigned addresses).
  Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
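  A rough sketch of the kind of hash the message above describes (illustrative only, not the actual patch; the table size, function name and masking are assumptions):

    #include <linux/jhash.h>
    #include <linux/in6.h>

    #define IN6_ADDR_HSIZE 16      /* assumed power-of-two table size */

    static unsigned int inet6_addr_hash(const struct in6_addr *addr)
    {
            /* jhash2 mixes an array of u32 words; an IPv6 address is four of
             * them. No random seed is needed because the table only holds
             * locally assigned addresses. */
            return jhash2((__force const u32 *)addr->s6_addr32, 4, 0) &
                   (IN6_ADDR_HSIZE - 1);
    }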
* IPv6: convert addrconf hash list to RCU | stephen hemminger | 2010-03-20 | 1 | -41/+44
  Convert from reader/writer lock to RCU and spinlock for addrconf hash list. Adds an additional helper macro for hlist_for_each_entry_continue_rcu to handle the continue case.
  Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
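  A hedged sketch of the reader side such a conversion typically ends up with (names follow the addrconf code but are assumptions here; the 2.6.33-era hlist_for_each_entry_rcu still takes an explicit node cursor):

    static struct inet6_ifaddr *ipv6_addr_lookup(const struct in6_addr *addr,
                                                 unsigned int hash)
    {
            struct inet6_ifaddr *ifa;
            struct hlist_node *node;

            rcu_read_lock();
            hlist_for_each_entry_rcu(ifa, node, &inet6_addr_lst[hash], addr_lst) {
                    if (ipv6_addr_equal(&ifa->addr, addr)) {
                            in6_ifa_hold(ifa);  /* pin before leaving the read side */
                            rcu_read_unlock();
                            return ifa;
                    }
            }
            rcu_read_unlock();
            return NULL;  /* writers add/remove under a spinlock with the _rcu helpers */
    }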
* ipv6: convert addrconf list to hlist | stephen hemminger | 2010-03-20 | 1 | -75/+53
  Using hash list macros simplifies the code and helps the later RCU conversion. This patch includes some initialization that is not strictly necessary, since an empty hlist node/list is all zero: the list is in BSS and the node is allocated with kzalloc.
  Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* ipv6: convert temporary address list to list macros | stephen hemminger | 2010-03-20 | 1 | -18/+12
  Use list macros instead of open coded linked list.
  Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
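  Illustrative before/after of such a conversion (field names like tmp_next, tempaddr_list and the handle_tempaddr() helper are placeholders, not quoted from the patch):

    /* open-coded singly linked list walk (old style) */
    for (ift = ifp->tmp_addr_list; ift != NULL; ift = ift->tmp_next)
            handle_tempaddr(ift);

    /* the same walk using <linux/list.h> macros */
    list_for_each_entry(ift, &idev->tempaddr_list, tmp_list)
            handle_tempaddr(ift);

    /* insertion also becomes a one-liner */
    list_add(&ift->tmp_list, &idev->tempaddr_list);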
* Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6 | David S. Miller | 2010-03-20 | 12 | -37/+95
* netfilter: ctnetlink: fix reliable event delivery if message building fails | Pablo Neira Ayuso | 2010-03-20 | 2 | -3/+5
  This patch fixes a bug that allows events to be lost when reliable event delivery mode is used, i.e. if the NETLINK_BROADCAST_SEND_ERROR and NETLINK_RECV_NO_ENOBUFS socket options are set.
  Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* netlink: fix NETLINK_RECV_NO_ENOBUFS in netlink_set_err() | Pablo Neira Ayuso | 2010-03-20 | 1 | -3/+14
  Currently, ENOBUFS errors are reported to the socket via netlink_set_err() even if NETLINK_RECV_NO_ENOBUFS is set. However, that should not happen. This fixes the problem and changes the prototype of netlink_set_err() to return the number of sockets that have set the NETLINK_RECV_NO_ENOBUFS socket option. That return value is used in the next patch in this bugfix series.
  Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* NET_DMA: free skbs periodically | Steven J. Magnani | 2010-03-20 | 1 | -20/+43
  Under NET_DMA, data transfer can grind to a halt when userland issues a large read on a socket with a high RCVLOWAT (i.e., 512 KB for both). This appears to be because the NET_DMA design queues up lots of memcpy operations, but doesn't issue or wait for them (and thus free the associated skbs) until it is time for tcp_recvmsg() to return. The socket hangs when its TCP window goes to zero before enough data is available to satisfy the read. Periodically issue asynchronous memcpy operations, and free skbs for ones that have completed, to prevent sockets from going into zero-window mode.
  Signed-off-by: Steven J. Magnani <steve@digidescorp.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* tcp: Fix tcp_mark_head_lost() with packets == 0 | Lennart Schulte | 2010-03-19 | 1 | -0/+3
  A packet is marked as lost when packets == 0, although nothing should be done in that case. This results in a packet being retransmitted too early during recovery in some cases. This small patch fixes the issue by returning immediately.
  Signed-off-by: Lennart Schulte <lennart.schulte@nets.rwth-aachen.de>
  Signed-off-by: Arnd Hannemann <hannemann@nets.rwth-aachen.de>
  Signed-off-by: David S. Miller <davem@davemloft.net>
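  The fix described above amounts to an early-return guard; a minimal sketch (the surrounding marking logic is elided):

    static void tcp_mark_head_lost(struct sock *sk, int packets)
    {
            /* nothing to mark: bail out instead of spuriously marking (and
             * later retransmitting) the head segment */
            if (packets == 0)
                    return;

            /* ... existing marking logic continues here ... */
    }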
* net: ipmr/ip6mr: fix potential out-of-bounds vif_table access | Patrick McHardy | 2010-03-19 | 2 | -0/+6
  mfc_parent of cache entries is used to index into the vif_table and is initialised from mfcctl->mfcc_parent. This can take values of up to 2^16-1, while the vif_table has only MAXVIFS (32) entries. The same problem affects ip6mr. Refuse invalid values to fix a potential out-of-bounds access. Unlike the other validity checks, this is checked in ipmr_mfc_add() instead of the setsockopt handler, since it is unused in the delete path and might be uninitialized there.
  Signed-off-by: Patrick McHardy <kaber@trash.net>
  Signed-off-by: David S. Miller <davem@davemloft.net>
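  A minimal sketch of the kind of validity check described (the exact error code is an assumption):

    static int ipmr_mfc_add(struct net *net, struct mfcctl *mfc, int mrtsock)
    {
            /* mfcc_parent comes from userspace and indexes vif_table[MAXVIFS],
             * so reject anything outside the table before it is ever used */
            if (mfc->mfcc_parent >= MAXVIFS)
                    return -ENFILE;

            /* ... rest of the add path ... */
            return 0;
    }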
* TCP: check min TTL on received ICMP packets | stephen hemminger | 2010-03-19 | 1 | -0/+5
  This adds RFC5082 checks for TTL on received ICMP packets. It adds some security against spoofed ICMP packets disrupting GTSM protected sessions.
  Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
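  A hedged sketch of such an RFC 5082 (GTSM) check inside the TCP ICMP error handler (variable and counter names are assumptions):

    /* an ICMP error carrying a TTL below the socket's configured minimum
     * cannot have come from a directly attached peer, so drop it */
    if (inet_sk(sk)->min_ttl && ip_hdr(icmp_skb)->ttl < inet_sk(sk)->min_ttl) {
            NET_INC_STATS_BH(net, LINUX_MIB_TCPMINTTLDROP);
            goto out;
    }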
* ipv6: Remove redundant dst NULL check in ip6_dst_check | Herbert Xu | 2010-03-19 | 1 | -1/+1
  As the only path leading to ip6_dst_check makes an indirect call through dst->ops, dst cannot be NULL in ip6_dst_check. This patch removes this check in case it misleads people who come across this code.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* ipv4: check rt_genid in dst_check | Timo Teräs | 2010-03-19 | 1 | -4/+10
  xfrm_dst keeps a reference to ipv4 rtable entries on each cached bundle. The only way to renew the xfrm_dst when the underlying route has changed is to implement dst_check for this; this is what the ipv6 side does too. The problems started after commit 87c1e12b5eeb7b30b4b41291bef8e0b41fc3dde9 ("ipsec: Fix bogus bundle flowi"), which fixed a bug that caused the xfrm_dst not to be reused; until then all lookups always generated a new xfrm_dst with a new route reference, and path MTU worked. But after the fix, old routes started to get reused even after they had expired, breaking PMTU (it would occasionally still work if the rtable gc had run recently and marked the route obsolete, causing dst_check to get called).
  Signed-off-by: Timo Teras <timo.teras@iki.fi>
  Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: David S. Miller <davem@davemloft.net>
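  Roughly, the change makes ipv4's dst_check report a cached route as stale once the namespace generation counter has moved on, so xfrm bundles are rebuilt instead of silently reusing an expired route. A sketch, assuming the rt_is_expired() helper already present in route.c:

    static struct dst_entry *ipv4_dst_check(struct dst_entry *dst, u32 cookie)
    {
            struct rtable *rt = (struct rtable *) dst;

            if (rt_is_expired(rt))      /* rt_genid no longer current */
                    return NULL;        /* caller drops the cached bundle */
            return dst;
    }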
* net: Potential null skb->dev dereference | Eric Dumazet | 2010-03-18 | 2 | -5/+7
  When doing "ifenslave -d bond0 eth0", there is a chance of a NULL dereference in netif_receive_skb(), because dev->master can suddenly become NULL after we have tested it. We should use ACCESS_ONCE() (or rcu_dereference()) to avoid this.
  Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
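  Sketch of the pattern the message describes (a simplified fragment from the receive path):

    /* snapshot dev->master exactly once, so a concurrent "ifenslave -d"
     * clearing it cannot turn a pointer we already tested into NULL */
    struct net_device *master = ACCESS_ONCE(skb->dev->master);

    if (master)
            skb->dev = master;  /* safe even while the slave is being released */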
* tcp: Fix OOB POLLIN avoidance. | Alexandra Kossovsky | 2010-03-18 | 1 | -1/+1
  From: Alexandra.Kossovsky@oktetlabs.ru
  Fixes kernel bugzilla #15541
  Signed-off-by: David S. Miller <davem@davemloft.net>
* net: forbid underlying devices to change their type | Jiri Pirko | 2010-03-18 | 2 | -0/+8
  It is not desirable for underlying devices to change type. At the moment it is possible, for example, to end up with a bond whose type has changed from Ethernet to InfiniBand while it is a port of a bridge. This patch fixes that.
  Signed-off-by: Jiri Pirko <jpirko@redhat.com>
  Signed-off-by: Jay Vosburgh <fubar@us.ibm.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* bonding: check return value of notifier when changing type | Jiri Pirko | 2010-03-18 | 1 | -2/+2
  This patch adds the possibility for other subsystems (such as bridge, vlan, etc.) to refuse a bonding type change.
  Signed-off-by: Jiri Pirko <jpirko@redhat.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* net: rename notifier defines for netdev type change | Jiri Pirko | 2010-03-18 | 2 | -10/+10
  Since, in general, netdevices other than bonds may change type, give this event type a name that is not bonding-specific.
  Signed-off-by: Jiri Pirko <jpirko@redhat.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* rps: Fixed build with CONFIG_SMP not enabled. | Tom Herbert | 2010-03-18 | 1 | -0/+24
  Signed-off-by: Tom Herbert <therbert@google.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* net: convert multiple drivers to use netdev_for_each_mc_addr, part7 | Jiri Pirko | 2010-03-16 | 1 | -5/+5
  In mlx4, using char * to store mc address in private structure instead.
  Signed-off-by: Jiri Pirko <jpirko@redhat.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* tipc: Allow retransmission of cloned buffers | Neil Horman | 2010-03-16 | 1 | -5/+6
  Forward port of commit fc477e160af086f6e30c3d4fdf5f5c000d29beb5 from git://tipc.cslab.ericsson.net/pub/git/people/allan/tipc.git
  Original commit message: Allow retransmission of cloned buffers
  This patch fixes an issue with TIPC's message retransmission logic that prevented retransmission of cloned sk_buffs. Originally intended as a means of avoiding wasted work in retransmitting messages that were still on the driver's outbound queue, it also prevented TIPC from retransmitting messages through other means -- such as the secondary bearer of the broadcast link, or another interface in a set of bonded interfaces. This fix removes the existing checks for cloned sk_buffs that prevented such retransmission.
  Originally-signed-off-by: Allan Stephens <allan.stephens@windriver.com>
  Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* tipc: Increase frequency of load distribution over broadcast link | Neil Horman | 2010-03-16 | 1 | -21/+14
  Forward port of commit 29eb572941501c40ac6e62dbc5043bf9ee76ee56 from git://tipc.cslab.ericsson.net/pub/git/people/allan/tipc.git
  Original commit message: Increase frequency of load distribution over broadcast link
  This patch enhances the behavior of TIPC's broadcast link so that it alternates between redundant bearers (if available) after every message sent, rather than after every 10 messages. This change helps to speed up delivery of retransmitted messages by ensuring that they are not sent repeatedly over a bearer that is no longer working, but not yet recognized as failed.
  Tested by myself in the latest net-2.6 tree using the tipc sanity test suite.
  Originally-signed-off-by: Allan Stephens <allan.stephens@windriver.com>
  Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
  bcast.c | 35 ++++++++++++++--------------------
  1 file changed, 14 insertions(+), 21 deletions(-)
  Signed-off-by: David S. Miller <davem@davemloft.net>
* net: core: add IFLA_STATS64 support | Jan Engelhardt | 2010-03-16 | 1 | -1/+41
  `ip -s link` shows interface counters truncated to 32 bit. This is because interface statistics are transported to userspace only as 32-bit quantities. This commit adds a new IFLA_STATS64 attribute that exports them in full 64 bit.
  References: http://lkml.indiana.edu/hypermail/linux/kernel/0307.3/0215.html
  Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
  Signed-off-by: David S. Miller <davem@davemloft.net>
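  A hedged sketch of how such an attribute is filled when building the link message (dev_stats stands in for wherever the device counters come from):

    struct nlattr *attr;
    struct rtnl_link_stats64 *stats64;

    attr = nla_reserve(skb, IFLA_STATS64, sizeof(struct rtnl_link_stats64));
    if (attr == NULL)
            goto nla_put_failure;

    stats64 = nla_data(attr);
    stats64->rx_packets = dev_stats->rx_packets;  /* full 64-bit values */
    stats64->tx_packets = dev_stats->tx_packets;
    /* ... remaining counters copied the same way ... */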
* net: tcp: make veno selectable as default congestion module | Jan Engelhardt | 2010-03-16 | 1 | -0/+4
  Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* net: tcp: make hybla selectable as default congestion module | Jan Engelhardt | 2010-03-16 | 1 | -0/+4
  Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* net: remove rcu locking from fib_rules_event() | Eric Dumazet | 2010-03-16 | 1 | -8/+2
  We hold RTNL at this point and don't use RCU variants of list traversals, so we don't need rcu_read_lock()/rcu_read_unlock().
  Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* bridge: per-cpu packet statistics (v3) | stephen hemminger | 2010-03-16 | 4 | -6/+57
  The shared packet statistics are a potential source of slow down on bridged traffic. Convert to per-cpu array, but only keep those statistics which change per-packet.
  Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
  Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
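  Sketch of the per-cpu counter pattern (struct layout and names are illustrative): allocate at setup, increment lock-free on the per-packet path, fold the CPUs together when read:

    struct br_cpu_netstats {
            unsigned long rx_packets;
            unsigned long rx_bytes;
    };

    /* setup */
    br->stats = alloc_percpu(struct br_cpu_netstats);

    /* per-packet hot path: touches only this CPU's counters, no shared lock */
    struct br_cpu_netstats *stats = this_cpu_ptr(br->stats);
    stats->rx_packets++;
    stats->rx_bytes += skb->len;

    /* readout: sum over all CPUs */
    for_each_possible_cpu(cpu) {
            const struct br_cpu_netstats *s = per_cpu_ptr(br->stats, cpu);
            sum.rx_packets += s->rx_packets;
            sum.rx_bytes   += s->rx_bytes;
    }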
* rps: Receive Packet Steering | Tom Herbert | 2010-03-16 | 3 | -57/+505
  This patch implements software receive side packet steering (RPS). RPS distributes the load of received packet processing across multiple CPUs.
  Problem statement: Protocol processing done in the NAPI context for received packets is serialized per device queue and becomes a bottleneck under high packet load. This substantially limits the pps that can be achieved on a single-queue NIC and provides no scaling with multiple cores.
  This solution queues packets early on in the receive path on the backlog queues of other CPUs. This allows protocol processing (e.g. IP and TCP) to be performed on packets in parallel. For each device (or each receive queue in a multi-queue device) a mask of CPUs is set to indicate the CPUs that can process packets. A CPU is selected on a per-packet basis by hashing contents of the packet header (e.g. the TCP or UDP 4-tuple) and using the result to index into the CPU mask. The IPI mechanism is used to raise networking receive softirqs between CPUs. This effectively emulates in software what a multi-queue NIC can provide, but is generic, requiring no device support.
  Many devices now provide a hash over the 4-tuple on a per-packet basis (e.g. the Toeplitz hash). This patch allows drivers to set the HW-reported hash in an skb field, and that value in turn is used to index into the RPS maps. Using the HW-generated hash can avoid cache misses on the packet when steering it to a remote CPU.
  The CPU mask is set on a per-device and per-queue basis in the sysfs variable /sys/class/net/<device>/queues/rx-<n>/rps_cpus. This is a set of canonical bit maps for receive queues in the device (numbered by <n>). If a device does not support multi-queue, a single variable is used for the device (rx-0).
  Generally, we have found this technique increases the pps capabilities of a single-queue device with good CPU utilization. Optimal settings for the CPU mask seem to depend on architectures and cache hierarchy. Below are some results running 500 instances of the netperf TCP_RR test with 1-byte requests and responses. Results show cumulative transaction rate and system CPU utilization.
  e1000e on 8 core Intel: without RPS: 108K tps at 33% CPU; with RPS: 311K tps at 64% CPU
  forcedeth on 16 core AMD: without RPS: 156K tps at 15% CPU; with RPS: 404K tps at 49% CPU
  bnx2x on 16 core AMD: without RPS: 567K tps at 61% CPU (4 HW RX queues); without RPS: 738K tps at 96% CPU (8 HW RX queues); with RPS: 854K tps at 76% CPU (4 HW RX queues)
  Caveats:
  - The benefits of this patch are dependent on architecture and cache hierarchy. Tuning the masks to get best performance is probably necessary.
  - This patch adds overhead in the path for processing a single packet. In a lightly loaded server this overhead may eliminate the advantages of increased parallelism, and possibly cause some relative performance degradation. We have found that masks that are cache aware (share same caches with the interrupting CPU) mitigate much of this.
  - The RPS masks can be changed dynamically; however, whenever the mask is changed this introduces the possibility of generating out-of-order packets. It's probably best not to change the masks too frequently.
  Signed-off-by: Tom Herbert <therbert@google.com>
  include/linux/netdevice.h |  32 ++++-
  include/linux/skbuff.h    |   3 +
  net/core/dev.c            | 335 +++++++++++++++++++++++++++++++++++++--------
  net/core/net-sysfs.c      | 225 ++++++++++++++++++++++++++++++-
  net/core/skbuff.c         |   2 +
  5 files changed, 538 insertions(+), 59 deletions(-)
  Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
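  A hedged sketch of the per-packet CPU selection described above (map, ports and hashrnd are assumed names): reuse the hardware hash if the driver provided one, otherwise hash the flow, then scale the hash into the configured CPU map:

    u32 hash = skb->rxhash;         /* HW-provided flow hash, if any */

    if (!hash)
            hash = jhash_3words((__force u32)ip_hdr(skb)->saddr,
                                (__force u32)ip_hdr(skb)->daddr,
                                (__force u32)ports, hashrnd);

    /* map->cpus[] holds the CPUs enabled via .../queues/rx-<n>/rps_cpus;
     * multiply-and-shift avoids a modulo on the hot path */
    cpu = map->cpus[((u64)hash * map->len) >> 32];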
* RDS: Enable per-cpu workqueue threads | Tina Yang | 2010-03-16 | 1 | -1/+1
  Create per-cpu workqueue threads instead of a single krdsd thread. This is a step towards better scalability.
  Signed-off-by: Andy Grover <andy.grover@oracle.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* RDS: Do not call set_page_dirty() with irqs off | Andy Grover | 2010-03-16 | 3 | -7/+12
  set_page_dirty() unconditionally re-enables interrupts, so if we call it with irqs off, they will be on after the call, and that's bad. This patch moves the call after we've re-enabled interrupts in send_drop_to(), so it's safe. Also, add BUG_ONs to let us know if we ever do call set_page_dirty with interrupts off.
  Signed-off-by: Andy Grover <andy.grover@oracle.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* RDS: Properly unmap when getting a remote access error | Sherman Pun | 2010-03-16 | 1 | -1/+5
  If the RDMA op has aborted with a remote access error, in addition to what we already do (tell userspace it has completed with an error) also unmap it and put() the rm. Otherwise, hangs may occur on arches that track maps and will not exit without proper cleanup.
  Signed-off-by: Andy Grover <andy.grover@oracle.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* RDS: only put sockets that have seen congestion on the poll_waitq | Andy Grover | 2010-03-16 | 3 | -2/+11
  rds_poll_waitq's listeners will be awoken if we receive a congestion notification. Bad performance may result because *all* polled sockets contend for this single lock. However, it should not be necessary to wake pollers when a congestion update arrives if they have never experienced congestion, and not putting these on the waitq will hopefully greatly reduce contention.
  Signed-off-by: Andy Grover <andy.grover@oracle.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* RDS: Fix locking in rds_send_drop_to() | Tina Yang | 2010-03-16 | 1 | -6/+7
  It seems rds_send_drop_to() called __rds_rdma_send_complete(rs, rm, RDS_RDMA_CANCELED) with only the rds_sock lock, but not the rds_message lock. It raced with other threads that are attempting to modify the rds_message as well, such as from within rds_rdma_send_complete().
  Signed-off-by: Tina Yang <tina.yang@oracle.com>
  Signed-off-by: Andy Grover <andy.grover@oracle.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* RDS: Turn down alarming reconnect messages | Andy Grover | 2010-03-16 | 3 | -3/+6
  RDS's error messages when a connection goes down are a little extreme. A connection may go down, and it will be re-established, and everything is fine. This patch links these messages through rdsdebug(), instead of to printk directly.
  Signed-off-by: Andy Grover <andy.grover@oracle.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* RDS: Workaround for in-use MRs on close causing crash | Andy Grover | 2010-03-16 | 1 | -2/+2
  If a machine is shut down without closing sockets properly and freeing all MRs, then a BUG_ON will bring it down. This patch changes these to WARN_ONs -- leaking MRs is not fatal (although not ideal, and there is more work to do here for a proper fix).
  Signed-off-by: Andy Grover <andy.grover@oracle.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* RDS: Fix send locking issue | Tina Yang | 2010-03-16 | 1 | -3/+2
  Fix a deadlock between rds_rdma_send_complete() and rds_send_remove_from_sock() when rds socket lock and rds message lock are acquired out-of-order.
  Signed-off-by: Tina Yang <Tina.Yang@oracle.com>
  Signed-off-by: Andy Grover <andy.grover@oracle.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* RDS: Fix congestion issues for loopback | Andy Grover | 2010-03-16 | 3 | -9/+7
  We have two kinds of loopback: software (via the loop transport) and hardware (via IB). SW is used for 127.0.0.1 and doesn't support rdma ops; HW is used for sends to local device IPs and supports rdma. Both are used in different cases. For both of these, when there is a congestion map update we want to call rds_cong_map_updated() but not actually send anything -- since loopback local and foreign congestion maps point to the same spot, they're already in sync. The old code never called the sw loop's xmit_cong_map(), so rds_cong_map_updated() wasn't being called for it, and sw loop ports would not work right with the congestion monitor. Fixing that meant that hw loopback would now send congestion maps to itself. This is also undesirable (racy), so we check for this case in the IB-specific xmit code.
  Signed-off-by: Andy Grover <andy.grover@oracle.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* RDS/TCP: Wait to wake thread when write space available | Andy Grover | 2010-03-16 | 1 | -1/+3
  Instead of waking the send thread whenever any send space is available, wait until it is at least half empty. This is modeled on how sock_def_write_space() does it, and may help to minimize context switches.
  Signed-off-by: Andy Grover <andy.grover@oracle.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* RDS: update copy_to_user state in tcp transport | Andy Grover | 2010-03-16 | 1 | -0/+1
  Other transports use rds_page_copy_user, which updates our s_copy_to_user counter. TCP doesn't, so it needs to explicitly call rds_stats_add().
  Reported-by: Richard Frank <richard.frank@oracle.com>
  Signed-off-by: Andy Grover <andy.grover@oracle.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* RDS: sendmsg() should check sndtimeo, not rcvtimeo | Andy Grover | 2010-03-16 | 1 | -1/+1
  Most likely cut n paste error - sendmsg() was checking sock_rcvtimeo.
  Signed-off-by: Andy Grover <andy.grover@oracle.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* RDS: Do not BUG() on error returned from ib_post_send | Andy Grover | 2010-03-16 | 5 | -12/+9
  BUGging on a runtime error code should be avoided. This patch also eliminates all other BUG()s that have no real reason to exist.
  Signed-off-by: Andy Grover <andy.grover@oracle.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* bridge: Make first arg to deliver_clone const. | David S. Miller | 2010-03-16 | 1 | -2/+4
  Otherwise we get a warning from the call in br_forward().
  Signed-off-by: David S. Miller <davem@davemloft.net>
* bridge br_multicast: Don't refer to BR_INPUT_SKB_CB(skb)->mrouters_only without IGMP snooping. | YOSHIFUJI Hideaki / 吉藤英明 | 2010-03-16 | 3 | -2/+10
  Without CONFIG_BRIDGE_IGMP_SNOOPING, BR_INPUT_SKB_CB(skb)->mrouters_only is not appropriately initialized, so we can see garbage. A clear option to fix this is to set it even without that config, but then we cannot optimize out the branch. Let's introduce a macro that returns the value of mrouters_only and have it return 0 without CONFIG_BRIDGE_IGMP_SNOOPING.
  Signed-off-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
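  The macro the message describes could look like this sketch (the macro name here is illustrative):

    #ifdef CONFIG_BRIDGE_IGMP_SNOOPING
    # define br_mrouters_only(skb)  (BR_INPUT_SKB_CB(skb)->mrouters_only)
    #else
    # define br_mrouters_only(skb)  (0)     /* branch optimized away */
    #endif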
* route: Fix caught BUG_ON during rt_secret_rebuild_oneshot() | Vitaliy Gusev | 2010-03-16 | 1 | -10/+6
  Calling rt_secret_rebuild can cause BUG_ON(timer_pending(&net->ipv4.rt_secret_timer)) in add_timer, as there is no synchronization between calls to rt_secret_rebuild_oneshot() for the same net namespace. This issue also affects rt_secret_reschedule(). Thus use mod_timer instead.
  Signed-off-by: Vitaliy Gusev <vgusev@openvz.org>
  Acked-by: Neil Horman <nhorman@tuxdriver.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* bridge br_multicast: Fix skb leakage in error path. | YOSHIFUJI Hideaki / 吉藤英明 | 2010-03-16 | 1 | -2/+3
  Signed-off-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* bridge br_multicast: Fix handling of Max Response Code in IGMPv3 message. | YOSHIFUJI Hideaki / 吉藤英明 | 2010-03-16 | 1 | -2/+2
  Signed-off-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* NET: netpoll, fix potential NULL ptr dereference | Jiri Slaby | 2010-03-16 | 1 | -2/+2
  Stanse found that one error path in netpoll_setup dereferences npinfo even though it is NULL. Avoid that by adding a new label and jumping to it instead.
  Signed-off-by: Jiri Slaby <jslaby@suse.cz>
  Cc: Daniel Borkmann <danborkmann@googlemail.com>
  Cc: David S. Miller <davem@davemloft.net>
  Acked-by: chavey@google.com
  Acked-by: Matt Mackall <mpm@selenic.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* tipc: fix lockdep warning on address assignment | Neil Horman | 2010-03-16 | 1 | -10/+16
  So in the forward porting of various tipc packages, I was constantly getting this lockdep warning every time I used tipc-config to set a network address for the protocol:
  [ INFO: possible circular locking dependency detected ]
  2.6.33 #1
  tipc-config/1326 is trying to acquire lock:
   (ref_table_lock){+.-...}, at: [<ffffffffa0315148>] tipc_ref_discard+0x53/0xd4 [tipc]
  but task is already holding lock:
   (&(&entry->lock)->rlock#2){+.-...}, at: [<ffffffffa03150d5>] tipc_ref_lock+0x43/0x63 [tipc]
  which lock already depends on the new lock.
  the existing dependency chain (in reverse order) is:
  -> #1 (&(&entry->lock)->rlock#2){+.-...}:
     [<ffffffff8107b508>] __lock_acquire+0xb67/0xd0f
     [<ffffffff8107b78c>] lock_acquire+0xdc/0x102
     [<ffffffff8145471e>] _raw_spin_lock_bh+0x3b/0x6e
     [<ffffffffa03152b1>] tipc_ref_acquire+0xe8/0x11b [tipc]
     [<ffffffffa031433f>] tipc_createport_raw+0x78/0x1b9 [tipc]
     [<ffffffffa031450b>] tipc_createport+0x8b/0x125 [tipc]
     [<ffffffffa030f221>] tipc_subscr_start+0xce/0x126 [tipc]
     [<ffffffffa0308fb2>] process_signal_queue+0x47/0x7d [tipc]
     [<ffffffff81053e0c>] tasklet_action+0x8c/0xf4
     [<ffffffff81054bd8>] __do_softirq+0xf8/0x1cd
     [<ffffffff8100aadc>] call_softirq+0x1c/0x30
     [<ffffffff810549f4>] _local_bh_enable_ip+0xb8/0xd7
     [<ffffffff81054a21>] local_bh_enable_ip+0xe/0x10
     [<ffffffff81454d31>] _raw_spin_unlock_bh+0x34/0x39
     [<ffffffffa0308eb8>] spin_unlock_bh.clone.0+0x15/0x17 [tipc]
     [<ffffffffa0308f47>] tipc_k_signal+0x8d/0xb1 [tipc]
     [<ffffffffa0308dd9>] tipc_core_start+0x8a/0xad [tipc]
     [<ffffffffa01b1087>] 0xffffffffa01b1087
     [<ffffffff8100207d>] do_one_initcall+0x72/0x18a
     [<ffffffff810872fb>] sys_init_module+0xd8/0x23a
     [<ffffffff81009b42>] system_call_fastpath+0x16/0x1b
  -> #0 (ref_table_lock){+.-...}:
     [<ffffffff8107b3b2>] __lock_acquire+0xa11/0xd0f
     [<ffffffff8107b78c>] lock_acquire+0xdc/0x102
     [<ffffffff81454836>] _raw_write_lock_bh+0x3b/0x6e
     [<ffffffffa0315148>] tipc_ref_discard+0x53/0xd4 [tipc]
     [<ffffffffa03141ee>] tipc_deleteport+0x40/0x119 [tipc]
     [<ffffffffa0316e35>] release+0xeb/0x137 [tipc]
     [<ffffffff8139dbf4>] sock_release+0x1f/0x6f
     [<ffffffff8139dc6b>] sock_close+0x27/0x2b
     [<ffffffff811116f6>] __fput+0x12a/0x1df
     [<ffffffff811117c5>] fput+0x1a/0x1c
     [<ffffffff8110e49b>] filp_close+0x68/0x72
     [<ffffffff8110e552>] sys_close+0xad/0xe7
     [<ffffffff81009b42>] system_call_fastpath+0x16/0x1b
  Finally decided I should fix this. It's a straightforward inversion: tipc_ref_acquire takes two locks in this order:
    ref_table_lock
    entry->lock
  while tipc_deleteport takes them in this order:
    entry->lock (via tipc_port_lock())
    ref_table_lock (via tipc_ref_discard())
  When the same entry is referenced, we get the above warning. The fix is equally straightforward. There's no real relation between the entry->lock and the ref_table_lock (they are just needed at the same time), so move the entry->lock acquisition in tipc_ref_acquire down, after we unlock ref_table_lock (this is safe since the ref_table_lock guards changes to the reference table, and we've already claimed a slot there). I've tested the below fix and confirmed that it clears up the lockdep issue.
  Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
  CC: Allan Stephens <allan.stephens@windriver.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
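  The reordering the message describes, as a hedged sketch of tipc_ref_acquire: take entry->lock only after ref_table_lock is released, so the order can no longer invert against tipc_deleteport():

    write_lock_bh(&ref_table_lock);
    /* ... claim a free slot in the reference table for this entry ... */
    write_unlock_bh(&ref_table_lock);

    /* the slot is already ours, so initializing outside ref_table_lock is safe */
    spin_lock_bh(&entry->lock);
    /* ... initialize the entry ... */
    spin_unlock_bh(&entry->lock);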
* bridge: Fix br_forward crash in promiscuous mode | Michael Braun | 2010-03-16 | 3 | -5/+13
  From: Michael Braun <michael-dev@fami-braun.de>
  bridge: Fix br_forward crash in promiscuous mode
  It's a linux-next kernel from 2010-03-12 on an x86 system, and it OOPses in the bridge module in br_pass_frame_up (called by br_handle_frame_finish) because brdev cannot be dereferenced (it's set to a non-null value). Adding some BUG_ON statements revealed that BR_INPUT_SKB_CB(skb)->brdev == br->dev (as set in br_handle_frame_finish first) only holds until br_forward is called. The next call to br_pass_frame_up then fails.
  Digging deeper, it seems that br_forward either frees the skb or passes it to NF_HOOK, which will in turn take care of freeing the skb. The same holds for br_pass_frame_up. So it seems as if two independent skb allocations are required. As far as I can see, commit b33084be192ee1e347d98bb5c9e38a53d98d35e2 ("bridge: Avoid unnecessary clone on forward path") removed the skb duplication and so likely causes this crash. This crash does not happen on 2.6.33.
  I've therefore modified br_forward the same way br_flood has been modified, so that the skb is not freed if skb0 is going to be used, and I can confirm that the attached patch resolves the issue for me.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* bridge: Move NULL mdb check into br_mdb_ip_get | Herbert Xu | 2010-03-15 | 1 | -4/+5
  Since all callers of br_mdb_ip_get need to check whether the hash table is NULL, this patch moves the check into the function. This fixes the two callers (query/leave handler) that didn't check it.
  Reported-by: Michael Braun <michael-dev@fami-braun.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: David S. Miller <davem@davemloft.net>