path: root/net
Commit message (Author, Date; files changed, -removed/+added)
* netdev: Allocate multiple queues for TX. (David S. Miller, 2008-07-17; 11 files changed, -92/+223)
  alloc_netdev_mq() now allocates an array of netdev_queue structures for TX, based upon the queue_count argument. Furthermore, all accesses to the TX queues are now vectored through the netdev_get_tx_queue() and netdev_for_each_tx_queue() interfaces. This makes it easy to grep the tree for all things that want to get to a TX queue of a net device.
  Problem spots which are not really multiqueue aware yet, and only work with one queue, can easily be spotted by grepping for all netdev_get_tx_queue() calls that pass in a zero index.
  Signed-off-by: David S. Miller <davem@davemloft.net>
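A minimal sketch of the vectored accessors described above, assuming the post-patch in-kernel API; the per-queue callback and the wrapping functions are illustrative only:

#include <linux/netdevice.h>

/* Illustrative only: show the single-queue lookup the commit suggests
 * grepping for (index 0) and a walk over every allocated TX queue. */
static void example_per_queue(struct net_device *dev,
                              struct netdev_queue *txq, void *arg)
{
        /* per-queue work would go here */
}

static void example_walk_tx_queues(struct net_device *dev)
{
        /* Code that is not yet multiqueue aware looks like this. */
        struct netdev_queue *txq = netdev_get_tx_queue(dev, 0);

        (void)txq;

        /* Fully converted code visits every TX queue instead. */
        netdev_for_each_tx_queue(dev, example_per_queue, NULL);
}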
* garp: retry sending JoinIn messages after allocation failures (Patrick McHardy, 2008-07-16; 1 file changed, -1/+4)
  Increase reliability by retrying to send JoinIn messages on each TRANSMIT_PDU event after a memory allocation failure, until the transmission succeeds.
  Signed-off-by: Patrick McHardy <kaber@trash.net>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* core: add stat to track unresolved discards in neighbor cache (Neil Horman, 2008-07-16; 1 file changed, -3/+5)
  In __neigh_event_send(), if we have a neighbour entry which is in NUD_INCOMPLETE state, we enqueue any outbound frames to that neighbour on the neighbour's arp_queue, which is capped by default at a length of 3 skbs. If that queue exceeds its set length, it drops an skb already on the queue to make room for the newly arrived skb. This results in a drop for which no statistic is incremented. This patch adds an unresolved_discards stat to /proc/net/stat/ndisc_cache to track these lost frames.
  Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* mib: add net to NET_ADD_STATS_USER (Pavel Emelyanov, 2008-07-16; 1 file changed, -3/+3)
  Done with NET_XXX_STATS macros :) To be continued...
  Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* mib: add net to NET_ADD_STATS_BH (Pavel Emelyanov, 2008-07-16; 1 file changed, -3/+12)
  This one is tricky. The thing is that this macro is only used when killing tw buckets, but since this killer is promiscuous with respect to which net each particular tw belongs to, I have to use it only when NET_NS is off. When the net namespaces are on, I use INET_INC_STATS_BH for each bucket.
  Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* mib: add net to NET_INC_STATS_USER (Pavel Emelyanov, 2008-07-16; 1 file changed, -3/+3)
  Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* mib: add net to NET_INC_STATS_BH (Pavel Emelyanov, 2008-07-16; 16 files changed, -79/+82)
  Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* mib: add net to NET_INC_STATS (Pavel Emelyanov, 2008-07-16; 2 files changed, -3/+3)
  Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* tcp: replace tcp_sock argument with sock in some places (Pavel Emelyanov, 2008-07-16; 1 file changed, -13/+18)
  These places have a tcp_sock, but we'd prefer the sock itself in order to get the net from it. Fortunately, the tcp_sk macro is just a type cast, so this replacement is essentially free.
  Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
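A small sketch of why the replacement is cheap, assuming the standard definitions of tcp_sk() and sock_net(); the wrapping function is hypothetical:

#include <net/tcp.h>

/* tcp_sk() is only a type cast, so passing the sock instead of the
 * tcp_sock costs nothing extra and makes sock_net(sk) available. */
static void example_use_sock(struct sock *sk)
{
        struct tcp_sock *tp = tcp_sk(sk);
        struct net *net = sock_net(sk);

        (void)tp;
        (void)net;
}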
* inet: prepare net on the stack for NET accounting macros (Pavel Emelyanov, 2008-07-16; 3 files changed, -3/+6)
  Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* sock: add net to prot->enter_memory_pressure callback (Pavel Emelyanov, 2008-07-16; 4 files changed, -5/+5)
  tcp_enter_memory_pressure() calls NET_INC_STATS, but has nowhere to get the net from. I decided to add a sk argument, not the net itself, in order to keep all the required sock_net(sk) calls inside the enter_memory_pressure callback itself.
  Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
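A hedged sketch of the callback shape described above; the function name is made up and the counter is only a plausible example:

#include <net/tcp.h>

/* Post-patch shape: the callback receives the sock and derives the
 * net internally, so callers need no struct net of their own. */
static void example_enter_memory_pressure(struct sock *sk)
{
        NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPMEMORYPRESSURES);
}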
* mib: add net to TCP_DEC_STATS (Pavel Emelyanov, 2008-07-16; 1 file changed, -1/+1)
  Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* mib: add net to TCP_INC_STATS_BH (Pavel Emelyanov, 2008-07-16; 5 files changed, -20/+20)
  Same as before - the sock is always there to get the net from, but there are also some places with the net already saved on the stack.
  Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* mib: add net to TCP_INC_STATS (Pavel Emelyanov, 2008-07-16; 2 files changed, -7/+7)
  Fortunately (almost) all the TCP code has a sock to get the net from :)
  Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
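A representative call site after this conversion, sketched from the commit description; the wrapping function and the chosen counter are illustrative:

#include <net/tcp.h>

/* The TCP MIB macros now take an explicit struct net, normally taken
 * from the socket at hand. */
static void example_count_active_open(struct sock *sk)
{
        TCP_INC_STATS(sock_net(sk), TCP_MIB_ACTIVEOPENS);
}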
* tcp: add net to tcp_mib_init (Pavel Emelyanov, 2008-07-16; 1 file changed, -1/+1)
  This one sets TCP MIBs after zeroing them, and thus requires the net. The existing single caller can use init_net (temporarily).
  Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* inet: prepare struct net for TCP MIB accounting (Pavel Emelyanov, 2008-07-16; 2 files changed, -4/+9)
  This is the same as the first patch in the set, but preparing the net for TCP_XXX_STATS - save the struct net on the stack where required and possible.
  Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* mib: add net to IP_ADD_STATS_BH (Pavel Emelyanov, 2008-07-16; 1 file changed, -1/+1)
  Very simple - only ip_evictor (fragments) requires it. This patch completes the IP_XXX_STATS patching.
  Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* mib: add net to IP_INC_STATS_BH (Pavel Emelyanov, 2008-07-16; 11 files changed, -33/+40)
  Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* mib: add net to IP_INC_STATS (Pavel Emelyanov, 2008-07-16; 3 files changed, -17/+17)
  All the callers already have either the net itself, or a place to get it from.
  Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* ipv4: prepare net initialization for IP accounting (Pavel Emelyanov, 2008-07-16; 3 files changed, -5/+8)
  Some places that deal with IP statistics already have somewhere to get a struct net from, but use it directly, without declaring a separate variable on the stack. So, save this net on the stack for the future IP_XXX_STATS macros.
  Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* net/ipv4/tcp.c: Fix use of PULLHUP instead of POLLHUP in comments. (Will Newton, 2008-07-16; 1 file changed, -4/+4)
  Change PULLHUP to POLLHUP in tcp_poll comments and clean up another comment for grammar and coding style.
  Signed-off-by: Will Newton <will.newton@gmail.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* net: make __skb_splice_bits static (Harvey Harrison, 2008-07-16; 1 file changed, -1/+1)
  net/core/skbuff.c:1335:5: warning: symbol '__skb_splice_bits' was not declared. Should it be static?
  Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* Merge branch 'stealer/ipvs/sync-daemon-cleanup-for-next' of git://git.stealer.net/linux-2.6 (David S. Miller, 2008-07-16; 1 file changed, -259/+172)
| * ipvs: Use schedule_timeout_interruptible() instead of msleep_interruptible() (Sven Wegener, 2008-07-16; 1 file changed, -1/+1)
  So that kthread_stop() can wake up the thread and we don't have to wait one second in the worst case for the daemon to actually stop.
  Signed-off-by: Sven Wegener <sven.wegener@stealer.net>
  Acked-by: Simon Horman <horms@verge.net.au>
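A minimal sketch of the sleep pattern this change moves to; the thread function and loop body stand in for the real sync-daemon work:

#include <linux/kthread.h>
#include <linux/sched.h>

/* schedule_timeout_interruptible() is woken immediately by
 * kthread_stop(), whereas msleep_interruptible(1000) could delay the
 * stop by up to a second. */
static int example_sync_thread(void *data)
{
        while (!kthread_should_stop()) {
                /* ... process queued sync messages ... */
                schedule_timeout_interruptible(HZ);
        }
        return 0;
}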
| * ipvs: Put backup thread on mcast socket wait queue (Sven Wegener, 2008-07-16; 1 file changed, -2/+5)
  Instead of looping endlessly with a one second sleep, we now put the backup thread onto the mcast socket wait queue, and it gets woken up as soon as we have data to process.
  Signed-off-by: Sven Wegener <sven.wegener@stealer.net>
  Acked-by: Simon Horman <horms@verge.net.au>
| * ipvs: Use kthread_run() instead of doing a double-fork via kernel_thread() (Sven Wegener, 2008-07-16; 1 file changed, -233/+136)
  This also moves the setup code out of the daemons, so that we're able to return proper error codes to user space. The current code returns success to user space when the daemon is started with an invalid mcast interface. With these changes we get an appropriate "No such device" error.
  We no longer need our own completion to be sure the daemons are actually running, because they no longer contain code that can fail and kthread_run() takes care of the rest.
  Signed-off-by: Sven Wegener <sven.wegener@stealer.net>
  Acked-by: Simon Horman <horms@verge.net.au>
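A sketch of the start/stop pattern kthread_run() enables, reusing the thread function from the sketch above; the names and the setup step are illustrative, not the IPVS code itself:

#include <linux/err.h>
#include <linux/kthread.h>

static int example_sync_thread(void *data);     /* see the sketch above */
static struct task_struct *example_sync_task;

static int example_start_daemon(void)
{
        /* Any setup error (e.g. an invalid mcast interface) can be
         * detected here and returned straight to user space. */
        example_sync_task = kthread_run(example_sync_thread, NULL,
                                        "ipvs_sync_example");
        if (IS_ERR(example_sync_task))
                return PTR_ERR(example_sync_task);
        return 0;
}

static void example_stop_daemon(void)
{
        /* kthread_stop() wakes the thread and waits for it to exit. */
        kthread_stop(example_sync_task);
}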
| * ipvs: Use ERR_PTR for returning errors from make_receive_sock() and make_send_sock() (Sven Wegener, 2008-07-16; 1 file changed, -19/+27)
  The additional information we now return to the caller is currently not used, but will be used to return errors to user space.
  Signed-off-by: Sven Wegener <sven.wegener@stealer.net>
  Acked-by: Simon Horman <horms@verge.net.au>
| * ipvs: Initialize mcast addr at compile time (Sven Wegener, 2008-07-16; 1 file changed, -6/+5)
  There's no need to do it at runtime, the values are constant.
  Signed-off-by: Sven Wegener <sven.wegener@stealer.net>
  Acked-by: Simon Horman <horms@verge.net.au>
* | ipvs: More reliable synchronization on connection close (Rumen G. Bogdanovski, 2008-07-16; 1 file changed, -1/+2)
  This patch enhances the synchronization of closing connections between the master and the backup director. It prevents closed connections from expiring with the 15 minute timeout of the ESTABLISHED state on the backup, and makes them expire as they would on the master, with much shorter timeouts.
  Signed-off-by: Rumen G. Bogdanovski <rumen@voicecho.com>
  Acked-by: Simon Horman <horms@verge.net.au>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* | iucv: fix memory leak in cpu hotplug error path. (Akinobu Mita, 2008-07-15; 1 file changed, -1/+4)
  Fix memory leak in error path in CPU_UP_PREPARE notifier.
  Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* net: refactor tcp splice receive path to improve readability (Octavian Purdila, 2008-07-15; 1 file changed, -92/+61)
  - move all of the details on offsets, lengths and buffers into a single function instead of doing these operations from multiple places
  - use a bottom-up approach: try to avoid details in the high level functions, introduce them gradually as we go deeper in the function call stack
  With helpful feedback from Jarek Poplawski.
  Signed-off-by: Octavian Purdila <opurdila@ixiacom.com>
  Acked-by: Jarek Poplawski <jarkao2@gmail.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* netdev: Do not use TX lock to protect address lists. (David S. Miller, 2008-07-15; 4 files changed, -60/+28)
  Now that we have a specific lock to protect the network device unicast and multicast lists, remove extraneous grabs of the TX lock in cases where the code only needs address list protection.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* netdev: Add netdev->addr_list_lock protection. (David S. Miller, 2008-07-15; 4 files changed, -0/+34)
  Add netif_addr_{lock,unlock}{,_bh}() helpers. Use them to protect operations that operate on or read the network device unicast and multicast address lists. Also use them in cases where the code simply wants to block calls into the driver's ->set_rx_mode() and ->set_multicast_list() methods.
  Signed-off-by: David S. Miller <davem@davemloft.net>
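A hedged sketch of the new helpers guarding a rescan of the address lists; the surrounding function and the driver-callback dispatch are illustrative, not the core code itself:

#include <linux/netdevice.h>

/* Hold addr_list_lock (BH-safe variant) while reading the unicast or
 * multicast lists or calling into the driver's rx-mode setup. */
static void example_sync_rx_mode(struct net_device *dev)
{
        netif_addr_lock_bh(dev);
        if (dev->set_rx_mode)
                dev->set_rx_mode(dev);
        else if (dev->set_multicast_list)
                dev->set_multicast_list(dev);
        netif_addr_unlock_bh(dev);
}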
* netdev: Add addr_list_lock to struct net_device. (David S. Miller, 2008-07-15; 1 file changed, -0/+1)
  This will be used to protect the per-device unicast and multicast address lists, as well as the callbacks into the drivers which configure such state, such as ->set_rx_mode() and ->set_multicast_list().
  Signed-off-by: David S. Miller <davem@davemloft.net>
* mib: add struct net to ICMPMSGIN_INC_STATS_BH (Pavel Emelyanov, 2008-07-14; 1 file changed, -1/+1)
  Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* mib: add struct net to ICMPMSGOUT_INC_STATS (Pavel Emelyanov, 2008-07-14; 1 file changed, -1/+1)
  Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* mib: add struct net to ICMP_INC_STATS_BH (Pavel Emelyanov, 2008-07-14; 5 files changed, -12/+13)
  Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* mib: add struct net to ICMP_INC_STATS (Pavel Emelyanov, 2008-07-14; 1 file changed, -1/+1)
  Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* inet: toss struct net initialization around (Pavel Emelyanov, 2008-07-14; 4 files changed, -6/+7)
  Some places that deal with ICMP statistics already have somewhere to get a struct net from, but use it directly, without declaring a separate variable on the stack. Since I will need this net soon, I declare a struct net on the stack and use it in the existing places in a separate patch, so as not to spoil the future ones.
  Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* icmp: add struct net argument to icmp_out_count (Pavel Emelyanov, 2008-07-14; 3 files changed, -3/+5)
  This routine deals with ICMP statistics, but doesn't have a struct net at hand, so add one.
  Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* vlan: remove unnecessary include statements (Patrick McHardy, 2008-07-14; 3 files changed, -22/+4)
  Signed-off-by: Patrick McHardy <kaber@trash.net>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* vlan: clean up hard_start_xmit functions (Patrick McHardy, 2008-07-14; 1 file changed, -35/+6)
  Remove excessive comments and debugging, use NETDEV_TX codes, remove some empty lines.
  Signed-off-by: Patrick McHardy <kaber@trash.net>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* vlan: clean up vlan_dev_hard_header() (Patrick McHardy, 2008-07-14; 1 file changed, -46/+9)
  Remove some debugging and excessive comments, merge the two dev_hard_header calls into one.
  Signed-off-by: Patrick McHardy <kaber@trash.net>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* vlan: ethtool ->get_flags support (Patrick McHardy, 2008-07-14; 1 file changed, -0/+13)
  Allow querying the LRO settings of the underlying device when VLAN RX acceleration is used. Suggested by Ben Hutchings <bhutchings@solarflare.com>.
  Signed-off-by: Patrick McHardy <kaber@trash.net>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* packet: deliver VLAN TCI to userspace (Patrick McHardy, 2008-07-14; 1 file changed, -0/+2)
  Store the VLAN tag in the auxiliary data/tpacket2_hdr so userspace can properly deal with hardware VLAN tagging/stripping.
  Signed-off-by: Patrick McHardy <kaber@trash.net>
  Signed-off-by: David S. Miller <davem@davemloft.net>
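A tiny userspace sketch of reading the tag from a version 2 ring frame, assuming the tp_vlan_tci field this change exposes; the helper itself is hypothetical:

#include <linux/if_packet.h>

/* Userspace view: the stripped VLAN tag is reported per frame. */
static unsigned short example_frame_vlan_tci(const struct tpacket2_hdr *h2)
{
        return h2->tp_vlan_tci;
}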
* packet: support extensible, 64 bit clean mmapped ring structure (Patrick McHardy, 2008-07-14; 1 file changed, -33/+146)
  The tpacket_hdr is not 64 bit clean due to use of an unsigned long, and it can't be extended because the following struct sockaddr_ll needs to be at a fixed offset. Add support for a version 2 tpacket protocol that removes these limitations.
  Userspace can query the header size through a new getsockopt option and change the protocol version through a setsockopt option. The changes needed to switch to the new protocol version are:
  1. replace struct tpacket_hdr by struct tpacket2_hdr
  2. query the header length and save it
  3. set the protocol version to 2, then set up the ring as usual
  4. for getting the sockaddr_ll, use (void *)hdr + TPACKET_ALIGN(hdrlen) instead of (void *)hdr + TPACKET_ALIGN(sizeof(struct tpacket_hdr))
  Steps 2 and 4 can be omitted if the struct sockaddr_ll isn't needed.
  Signed-off-by: Patrick McHardy <kaber@trash.net>
  Signed-off-by: David S. Miller <davem@davemloft.net>
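A userspace sketch of the switch-over steps listed above; error handling is minimal, the helper name is made up, and the ring setup and frame-walking code are omitted:

#include <stdio.h>
#include <sys/socket.h>
#include <linux/if_packet.h>

/* Switch an existing packet socket fd to the version 2 ring format. */
static int example_setup_tpacket_v2(int fd)
{
        int version = TPACKET_V2;
        int hdrlen = version;           /* PACKET_HDRLEN takes the version on input */
        socklen_t len = sizeof(hdrlen);

        /* Step 2: query the header length for this version. */
        if (getsockopt(fd, SOL_PACKET, PACKET_HDRLEN, &hdrlen, &len) < 0)
                return -1;

        /* Step 3: select protocol version 2, then set up the ring
         * (PACKET_RX_RING) as usual. */
        if (setsockopt(fd, SOL_PACKET, PACKET_VERSION, &version,
                       sizeof(version)) < 0)
                return -1;

        /* Step 4: with v2 the sockaddr_ll follows the header at
         * (void *)hdr + TPACKET_ALIGN(hdrlen) rather than at
         * TPACKET_ALIGN(sizeof(struct tpacket_hdr)). */
        printf("tpacket2_hdr length: %d\n", hdrlen);
        return 0;
}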
* vlan: deliver packets received with VLAN acceleration to network taps (Patrick McHardy, 2008-07-14; 2 files changed, -0/+31)
  When VLAN header stripping is used, packets currently bypass packet sockets (and other network taps) completely. For locally existing VLANs, they appear directly on the VLAN device; for unknown VLANs they are silently dropped.
  Add a new function netif_nit_deliver() to deliver incoming packets to all network interface taps and use it in __vlan_hwaccel_rx() to make VLAN packets visible on the underlying device.
  Signed-off-by: Patrick McHardy <kaber@trash.net>
  Signed-off-by: David S. Miller <davem@davemloft.net>
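A minimal sketch, based only on the description above, of where the new hook would sit in a VLAN-accelerated receive path; the surrounding function is illustrative, not the actual __vlan_hwaccel_rx() implementation:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Let packet sockets and other taps see the frame on the underlying
 * device before normal delivery continues. */
static int example_hwaccel_rx(struct sk_buff *skb)
{
        netif_nit_deliver(skb);
        return netif_rx(skb);
}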
* vlan: Don't store VLAN tag in cb (Patrick McHardy, 2008-07-14; 1 file changed, -0/+3)
  Use a real skb member to store the VLAN tag to avoid clashes with qdiscs, which are allowed to use the cb area themselves. As currently only real devices that consume the skb set the NETIF_F_HW_VLAN_TX flag, no explicit invalidation is necessary.
  The new member fills a hole on 64 bit; the skb layout changes from:
        __u32              mark;              /* 172  4 */
        sk_buff_data_t     transport_header;  /* 176  4 */
        sk_buff_data_t     network_header;    /* 180  4 */
        sk_buff_data_t     mac_header;        /* 184  4 */
        sk_buff_data_t     tail;              /* 188  4 */
        /* --- cacheline 3 boundary (192 bytes) --- */
        sk_buff_data_t     end;               /* 192  4 */
        /* XXX 4 bytes hole, try to pack */
  to:
        __u32              mark;              /* 172  4 */
        __u16              vlan_tci;          /* 176  2 */
        /* XXX 2 bytes hole, try to pack */
        sk_buff_data_t     transport_header;  /* 180  4 */
        sk_buff_data_t     network_header;    /* 184  4 */
  Signed-off-by: Patrick McHardy <kaber@trash.net>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* tipc: Optimization to multicast name lookup algorithm (Allan Stephens, 2008-07-14; 1 file changed, -13/+30)
  This patch simplifies and speeds up TIPC's algorithm for identifying on-node and off-node destinations that overlap a multicast name sequence range. Rather than traversing the list of all known name publications within the cluster, it now traverses the (potentially much shorter) list of name publications made by the node itself, and determines if any off-node destinations exist by comparing the sizes of the two lists. (Since the node list must be a subset of the cluster list, a difference in sizes means that at least one off-node destination must exist.)
  Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* tipc: Add missing locks when inspecting node list & link list (Allan Stephens, 2008-07-14; 1 file changed, -5/+24)
  This patch ensures that TIPC configuration commands that display info about neighboring nodes and their links take the spinlocks that protect the node list and link lists from changing while the lists are being traversed.
  Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>