path: root/drivers/net/ethernet/intel/ixgbe
Commit log (subject, author, date, files changed, lines -removed/+added):
...
* ixgbe: Drop real_adapter from l2 fwd acceleration structure (Alexander Duyck, 2018-04-25, 2 files, -13/+16)
  This patch drops the real_adapter member from the fwd_adapter structure.
  The general idea behind the change is that the real_adapter is carrying
  unnecessary data since we could always just grab the adapter structure
  from netdev_priv(macvlan->lowerdev) if we really needed to get at it.
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
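  A hedged sketch of the lookup the commit relies on (the helper name and
  context are illustrative, not the driver's exact code): the adapter can
  be recovered on demand from the macvlan's lower device, so nothing needs
  to be cached in the fwd_adapter structure.

      #include <linux/netdevice.h>
      #include <linux/if_macvlan.h>

      /* 'vdev' is the offloaded macvlan's net_device (assumed parameter) */
      static struct ixgbe_adapter *fwd_adapter_to_adapter(struct net_device *vdev)
      {
              struct macvlan_dev *vlan = netdev_priv(vdev);

              /* the lower device is the ixgbe netdev, whose private area
               * holds the ixgbe_adapter structure */
              return netdev_priv(vlan->lowerdev);
      }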
* ixgbe/fm10k: Only support macvlan offload for types that support destination filtering (Alexander Duyck, 2018-04-25, 1 file, -0/+7)
  Both the ixgbe and fm10k drivers support destination filtering. Rather
  than adding a ton of complexity to support either source or passthru
  mode, we can just avoid offloading them for now. Doing this we avoid
  leaking packets into interfaces that aren't meant to receive them.
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe/fm10k: Drop tracking stats for macvlan broadcast/multicast (Alexander Duyck, 2018-04-25, 1 file, -4/+3)
  Drop dead code now that we shouldn't be receiving broadcast or multicast
  frames on the queues associated with the macvlan netdev.
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* macvlan: Use software path for offloaded local, broadcast, and multicast traffic (Alexander Duyck, 2018-04-25, 1 file, -29/+10)
  This change makes it so that we use a software path for packets that are
  going to be locally switched between two macvlan interfaces on the same
  device. In addition we resort to software replication of broadcast and
  multicast packets instead of offloading that to hardware.
  The general idea is that using the device for east/west traffic local to
  the system is extremely inefficient. We can only support up to whatever
  the PCIe limit is for any given device, so this caps us at somewhere
  around 20G for devices supported by ixgbe. This is compounded even
  further when you take broadcast and multicast into account, as a single
  10G port can come to a crawl as a packet is replicated up to 60+ times
  in some cases. In order to get away from that I am implementing changes
  so that we handle broadcast/multicast replication and east/west local
  traffic all in software.
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* macvlan: Rename fwd_priv to accel_priv and add accessor function (Alexander Duyck, 2018-04-25, 1 file, -6/+4)
  This change renames the fwd_priv member to accel_priv as this more
  accurately reflects the actual purpose of this value. In addition I am
  adding an accessor which will allow us to further abstract this in the
  future if needed.
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: Drop support for macvlan specific unicast lists (Alexander Duyck, 2018-04-25, 1 file, -31/+0)
  Drop the code for handling macvlan specific unicast lists. It isn't
  needed since we don't make any effort to maintain it when we bring the
  interface up, and it takes the slow path anyway.
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* xdp: transition into using xdp_frame for ndo_xdp_xmit (Jesper Dangaard Brouer, 2018-04-17, 1 file, -11/+13)
  Change the ndo_xdp_xmit API to take a struct xdp_frame instead of a
  struct xdp_buff. This brings xdp_return_frame and ndo_xdp_xmit in sync.
  This builds towards changing the API further to become a bulk API,
  because xdp_buff is not a queue-able object while xdp_frame is.
  V4: Adjust for commit 59655a5b6c83 ("tuntap: XDP_TX can use native XDP")
  V7: Adjust for commit d9314c474d4f ("i40e: add support for XDP_REDIRECT")
  Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
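  A minimal sketch of the call direction this API change implies (assumed
  context, not a specific driver's code): the redirect path converts its
  xdp_buff into a queueable xdp_frame before handing it to ndo_xdp_xmit().

      /* convert the per-packet xdp_buff into a queueable xdp_frame */
      struct xdp_frame *xdpf = convert_to_xdp_frame(&xdp);

      if (unlikely(!xdpf))
              return -EOVERFLOW;

      /* the new signature takes the frame, not the buff */
      err = dev->netdev_ops->ndo_xdp_xmit(dev, xdpf);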
* xdp: transition into using xdp_frame for return API (Jesper Dangaard Brouer, 2018-04-17, 2 files, -9/+12)
  Changing the xdp_return_frame() API to take a struct xdp_frame as
  argument seems like a natural choice, but there are some subtle
  performance details here that need extra care.
  De-referencing the xdp_frame on a remote CPU during DMA-TX completion
  changes its cache line to the "Shared" state. Later, when the page is
  reused for RX, this xdp_frame cache line is written, which changes the
  state to "Modified".
  This situation already happens (naturally) for virtio_net, tun and
  cpumap, as the xdp_frame pointer is the queued object. In tun and
  cpumap, the ptr_ring is used for efficiently transferring cache lines
  (with pointers) between CPUs. Thus, the only option is to de-reference
  the xdp_frame.
  Only the ixgbe driver had an optimization that could avoid
  de-referencing the xdp_frame. The driver already has a TX-ring queue,
  which (in case of remote DMA-TX completion) has to be transferred
  between CPUs anyhow. In this data area we stored a struct xdp_mem_info
  and a data pointer, which allowed us to avoid de-referencing the
  xdp_frame.
  To compensate for this, a prefetchw is used to tell the cache coherency
  protocol about our access pattern. My benchmarks show that this
  prefetchw is enough to compensate for the change in the ixgbe driver.
  V7: Adjust for commit d9314c474d4f ("i40e: add support for XDP_REDIRECT")
  V8: Adjust for commit bd658dda4237 ("net/mlx5e: Separate dma base address and offset in dma_sync call")
  Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
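  A sketch of the compensation described above, with assumed field names:
  prefetchw() requests the xdp_frame cache line in an exclusive state at
  DMA-TX completion time, so the later write (when the page is reused for
  RX) does not pay an extra coherency transition.

      #include <linux/prefetch.h>

      /* tx_buffer->xdpf is an assumed field holding the queued frame */
      struct xdp_frame *xdpf = tx_buffer->xdpf;

      prefetchw(xdpf);          /* warm the line for the upcoming write */
      xdp_return_frame(xdpf);   /* the new return API takes the frame   */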
* xdp: rhashtable with allocator ID to pointer mapping (Jesper Dangaard Brouer, 2018-04-17, 1 file, -1/+8)
  Use the IDA infrastructure for getting a cyclic increasing ID number,
  which is used for keeping track of each registered allocator per
  RX-queue xdp_rxq_info. Instead of using the IDR infrastructure, which
  uses a radix tree, use a dynamic rhashtable for creating the ID to
  pointer lookup table, because this is faster.
  The problem being solved here is that the xdp_rxq_info pointer (stored
  in xdp_buff) cannot be used directly, as the guaranteed lifetime is too
  short. The info is needed on a (potentially) remote CPU during DMA-TX
  completion time. In an xdp_frame the xdp_mem_info is stored, when it got
  converted from an xdp_buff, which is sufficient for the simple page
  refcnt based recycle schemes.
  For more advanced allocators there is a need to store a pointer to the
  registered allocator. Thus, there is a need to guard the lifetime or
  validity of the allocator pointer, which is done through this rhashtable
  ID map to pointer. The removal and validity of the allocator and helper
  struct xdp_mem_allocator is guarded by RCU. The allocator will be
  created by the driver, and registered with xdp_rxq_info_reg_mem_model().
  It is up for debate who is responsible for freeing the allocator pointer
  or invoking the allocator destructor function. In any case, this must
  happen via RCU freeing.
  V4: Per req of Jason Wang - Use xdp_rxq_info_reg_mem_model() in all
  drivers implementing XDP_REDIRECT, even though it's not strictly
  necessary when allocator==NULL for type MEM_TYPE_PAGE_SHARED (given it's
  zero).
  V6: Per req of Alex Duyck - Introduce rhashtable_lookup() call in later
  patch
  V8: Address sparse should-be-static warnings (from kbuild test robot)
  Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
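  A rough sketch of the ID-to-allocator registration described above; the
  struct layout, table parameters and helper name are assumptions for
  illustration (the real implementation lives in net/core/xdp.c).

      struct xdp_mem_allocator {
              u32 mem_id;               /* key handed out by the IDA   */
              void *allocator;          /* advanced allocator pointer  */
              struct rhash_head node;   /* rhashtable linkage          */
              struct rcu_head rcu;      /* freed via RCU               */
      };

      static const struct rhashtable_params mem_id_rht_params = {
              .key_offset  = offsetof(struct xdp_mem_allocator, mem_id),
              .key_len     = sizeof(u32),
              .head_offset = offsetof(struct xdp_mem_allocator, node),
      };

      static DEFINE_IDA(mem_id_pool);

      static int mem_allocator_register(struct rhashtable *mem_id_ht,
                                        struct xdp_mem_allocator *xa)
      {
              /* increasing ID per registered allocator */
              int id = ida_simple_get(&mem_id_pool, 1, 0, GFP_KERNEL);

              if (id < 0)
                      return id;
              xa->mem_id = id;
              return rhashtable_insert_fast(mem_id_ht, &xa->node,
                                            mem_id_rht_params);
      }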
* ixgbe: use xdp_return_frame API (Jesper Dangaard Brouer, 2018-04-17, 2 files, -2/+5)
  Extend struct ixgbe_tx_buffer to store the xdp_mem_info. Notice that
  this could be optimized further by putting this into a union in the
  struct ixgbe_tx_buffer, but this patchset works towards removing this
  again. Thus, this is not done.
  Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* ethernet: Use octal not symbolic permissions (Joe Perches, 2018-03-26, 1 file, -1/+1)
  Prefer the direct use of octal for permissions. Done with
  checkpatch -f --types=SYMBOLIC_PERMS --fix-inplace and some typing.
  Miscellanea:
  o Whitespace neatening around these conversions.
  Signed-off-by: Joe Perches <joe@perches.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
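  An illustrative before/after for this kind of conversion (not a specific
  hunk from this driver); S_IRUGO | S_IWUSR and 0644 encode the same mode:

      -       module_param(debug, int, S_IRUGO | S_IWUSR);
      +       module_param(debug, int, 0644);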
* ixgbe: tweak page counting for XDP_REDIRECT (Björn Töpel, 2018-03-23, 1 file, -3/+4)
  The current page counting scheme assumes that the reference count cannot
  decrease until the received frame is sent to the upper layers of the
  networking stack. This assumption does not hold for the XDP_REDIRECT
  action, since a page (pointed out by xdp_buff) can have its reference
  count decreased via the xdp_do_redirect call. To work around that, we
  now start off with a large page count and then don't allow a refcount
  less than two.
  Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
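  A heavily hedged sketch of the scheme described above (constants and
  field names are illustrative, not the exact driver change): the page is
  seeded with a large reference count up front, and the bias is topped up
  before it can drain to a level where a redirect's dropped reference
  could free the page underneath the driver.

      /* at allocation time: take a large up-front reference */
      page_ref_add(page, USHRT_MAX - 1);
      rx_buffer->pagecnt_bias = USHRT_MAX;

      /* at reuse time: never let the bias reach zero, refill instead */
      if (unlikely(rx_buffer->pagecnt_bias == 1)) {
              page_ref_add(page, USHRT_MAX - 1);
              rx_buffer->pagecnt_bias = USHRT_MAX;
      }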
* ixgbe: enable TSO with IPsec offload (Shannon Nelson, 2018-03-23, 2 files, -11/+29)
  Fix things up to support TSO offload in conjunction with IPsec hw
  offload. This raises throughput with IPsec offload on to nearly line
  rate.
  Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: no need for esp trailer if GSO (Shannon Nelson, 2018-03-23, 1 file, -16/+21)
  There is no need to calculate the trailer length if we're doing a
  GSO/TSO, as there is no trailer added to the packet data. Also, don't
  bother clearing the flags field as it was already cleared earlier.
  Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: remove unneeded ipsec test in TX path (Shannon Nelson, 2018-03-23, 1 file, -4/+2)
  Since the ipsec data fields will be zero anyway in the non-ipsec case,
  we can remove the conditional jump.
  Suggested-by: Alexander Duyck <alexander.duyck@gmail.com>
  Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: no need for ipsec csum feature check (Shannon Nelson, 2018-03-23, 1 file, -6/+0)
  With commit f8aa2696b4af ("esp: check the NETIF_F_HW_ESP_TX_CSUM bit
  before segmenting") we no longer need to protect ourselves from checksum
  offload requests on IPsec packets, so we can remove the check in our
  .ndo_features_check callback.
  Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: fix read-modify-write in x550 phy setup (Paul Greenwalt, 2018-03-23, 1 file, -2/+2)
  Replaced an assignment operation with an OR operation. The variable
  assignment was overwriting the value read from the PHY register. The OR
  operation sets only the intended register bits. The bits that were being
  overwritten are reserved, so the assignment had no functional impact.
  Reported-by: Shannon Nelson <shannon.nelson@oracle.com>
  Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: add status reg reads to ixgbe_check_remove (Paul Greenwalt, 2018-03-23, 2 files, -11/+21)
  Add status register reads and delay between reads to ixgbe_check_remove.
  Registers can read 0xFFFFFFFF during PCI reset, which causes the driver
  to remove the adapter. The additional status register reads can reduce
  the chance of this race condition. If the status register is not
  0xFFFFFFFF, then ixgbe_check_remove returns the value of the register
  being read.
  Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
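  A sketch of the behaviour described above, using assumed names and
  constants rather than the driver's exact code: the STATUS register is
  re-read a few times with a short delay, so a transient all-ones read
  during PCI reset is not mistaken for a removed adapter.

      #include <linux/delay.h>

      #define IXGBE_FAILED_READ_REG  0xffffffffU   /* all-ones sentinel */

      static u32 check_remove_sketch(struct ixgbe_hw *hw, u32 reg)
      {
              u32 value;
              int i;

              for (i = 0; i < 5; i++) {
                      value = readl(hw->hw_addr + IXGBE_STATUS);
                      if (value != IXGBE_FAILED_READ_REG)
                              /* device answered: return the requested reg */
                              return readl(hw->hw_addr + reg);
                      mdelay(3);
              }
              /* status stayed all-ones: treat the adapter as removed */
              return IXGBE_FAILED_READ_REG;
      }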
* intel: add SPDX identifiers to all the Intel drivers (Jeff Kirsher, 2018-03-23, 18 files, -0/+18)
  Add the SPDX identifiers to all the Intel wired LAN driver files, as
  outlined in Documentation/process/license-rules.rst.
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
  Tested-by: Aaron Brown <aaron.f.brown@intel.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* ixgbe: fix disabling hide VLAN on VF reset (Paul Greenwalt, 2018-03-12, 1 file, -1/+5)
  If port VLAN is enabled, set PFQDE.HIDE_VLAN during VF reset. Setting
  only PFQDE.PFQDE during VF reset was clearing PFQDE.HIDE_VLAN.
  Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: Add receive length error counter (Tonghao Zhang, 2018-03-12, 1 file, -0/+1)
  The ixgbe driver enables the RLEC counter and rx_errors uses it. We can
  export the counter directly via ethtool -S ethX.
  Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: remove unneeded ipsec state free callback (Shannon Nelson, 2018-03-12, 1 file, -13/+0)
  With commit 7f05b467a735 ("xfrm: check for xdo_dev_state_free") we no
  longer need to add an empty callback function to the driver, so now
  let's remove the useless code.
  Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: fix ipsec trailer length (Shannon Nelson, 2018-03-12, 1 file, -1/+23)
  Fix up the Tx trailer length calculation. We can't believe the trailer
  len from the xstate information because it was calculated before the
  packet was put together and padding added. This bit of code finds the
  padding value in the trailer, adds it to the authentication length, and
  saves it so later we can put it into the Tx descriptor to tell the
  device where to stop the checksum calculation.
  Fixes: 592594704761 ("ixgbe: process the Tx ipsec offload")
  Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: check for 128-bit authentication (Shannon Nelson, 2018-03-12, 2 files, -5/+12)
  Make sure the Security Association is using a 128-bit authentication,
  since that's the only size that the hardware offload supports.
  Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net (David S. Miller, 2018-03-06, 1 file, -0/+8)
  All of the conflicts were cases of overlapping changes. In
  net/core/devlink.c, we have to take care that the resource size_params
  have become a struct member rather than a pointer to such an object.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* ixgbe: fix crash in build_skb Rx code path (Emil Tantilov, 2018-02-26, 1 file, -0/+8)
  Add check for build_skb enabled ring in ixgbe_dma_sync_frag(). In that
  case &skb_shinfo(skb)->frags[0] may not always be set, which can lead to
  a crash. Instead we derive the page offset from skb->data.
  Fixes: 42073d91a214 ("ixgbe: Have the CPU take ownership of the buffers sooner")
  CC: stable <stable@vger.kernel.org>
  Reported-by: Ambarish Soman <asoman@redhat.com>
  Suggested-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* ixgbe: prevent ptp_rx_hang from running when in FILTER_ALL mode (Jacob Keller, 2018-02-26, 1 file, -1/+2)
  On hardware which supports timestamping all packets, the timestamps are
  recorded in the packet buffer, and the driver no longer uses or reads
  the registers. This makes the logic for checking and clearing Rx
  timestamp hangs meaningless.
  If we run the ixgbe_ptp_rx_hang() function in this case, then the driver
  will continuously spam the log output with "Clearing Rx timestamp hang".
  These messages are spurious, and confusing to end users.
  The original code in commit a9763f3cb54c ("ixgbe: Update PTP to support
  X550EM_x devices", 2015-12-03) did have a flag
  PTP_RX_TIMESTAMP_IN_REGISTER which was intended to be used to avoid the
  Rx timestamp hang check, however it did not actually check the flag
  before calling the function. Do so now in order to stop the checks and
  prevent the spurious log messages.
  Fixes: a9763f3cb54c ("ixgbe: Update PTP to support X550EM_x devices", 2015-12-03)
  Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: Avoid to write the RETA table when unnecessary (Tonghao Zhang, 2018-02-26, 1 file, -2/+2)
  If indir == 0 in ixgbe_set_rxfh(), it is unnecessary to write the HW,
  because the redirection table has not changed.
  Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: remove redundant initialization of 'pool' (Colin Ian King, 2018-02-26, 1 file, -1/+0)
  Variable pool is being assigned zero, and then in the following for-loop
  it is being set to zero again. Remove the redundant first assignment.
  Cleans up clang warning:
  drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c:61:2: warning: Value stored
  to 'pool' is never read
  Signed-off-by: Colin Ian King <colin.king@canonical.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: don't set RXDCTL.RLPML for 82599 (Emil Tantilov, 2018-01-26, 1 file, -2/+6)
  Commit 2de6aa3a666e ("ixgbe: Add support for padding packet") uses
  RXDCTL.RLPML to limit the maximum frame size on Rx when using build_skb.
  Unfortunately that register does not work on 82599. Added an explicit
  check to avoid setting this register on the 82599 MAC. Extended the
  comment related to the setting of RXDCTL.RLPML to better explain its
  purpose.
  Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
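  A sketch of the added guard, assuming the existing build_skb Rx setup
  path; the surrounding code and macro names are simplified for
  illustration and may not match the driver exactly.

      if (ring_uses_build_skb(ring) &&
          hw->mac.type != ixgbe_mac_82599EB) {
              /* Limit the maximum frame size on Rx when build_skb is in
               * use.  RXDCTL.RLPML does not work on 82599, so the limit
               * is simply not programmed on that MAC. */
              rxdctl |= IXGBE_MAX_FRAME_BUILD_SKB | IXGBE_RXDCTL_RLPML_EN;
      }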
* ixgbe: Fix && vs || typo (Dan Carpenter, 2018-01-26, 1 file, -1/+1)
  "offset" can't be both 0x0 and 0xFFFF, so presumably || was intended
  instead of &&. That matches with how this check is done in other
  functions.
  Fixes: 73834aec7199 ("ixgbe: extend firmware version support")
  Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
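  The shape of the fix being described (illustrative, not the exact hunk):
  a value cannot equal both sentinels at once, so the AND made the check
  dead code.

      -       if (offset == 0x0 && offset == 0xFFFF)
      +       if (offset == 0x0 || offset == 0xFFFF)
                      return false;   /* assumed handling of an unset offset */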
* ixgbe: add support for reporting 5G link speed (Paul Greenwalt, 2018-01-26, 1 file, -0/+3)
  Since 5G link speed is supported by some devices, add reporting of 5G
  link speed.
  Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: Don't report unsupported timestamping filters for X550 (Miroslav Lichvar, 2018-01-26, 1 file, -18/+19)
  On X550, the current code enables timestamping of all packets for any
  filter, which means ethtool should not report any PTP-specific filters
  as unsupported.
  Signed-off-by: Miroslav Lichvar <mlichvar@redhat.com>
  Acked-by: Richard Cochran <richardcochran@gmail.com>
  Acked-by: Jacob Keller <jacob.e.keller@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: use ARRAY_SIZE for array sizing calculation on array buf (Colin Ian King, 2018-01-26, 1 file, -1/+1)
  Use the ARRAY_SIZE macro on array buf to determine the size of the
  array. Improvement suggested by coccinelle.
  Signed-off-by: Colin Ian King <colin.king@canonical.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
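  An illustrative before/after for this kind of cleanup (the array name is
  taken from the subject; the surrounding code is assumed):

      -       for (i = 0; i < sizeof(buf) / sizeof(buf[0]); i++)
      +       for (i = 0; i < ARRAY_SIZE(buf); i++)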
* ixgbe: use tc_cls_can_offload_and_chain0() (Jakub Kicinski, 2018-01-25, 1 file, -4/+1)
  Make use of tc_cls_can_offload_and_chain0() to set the extack message in
  case the ethtool TC offload flag is not set or the chain is unsupported.
  Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
  Reviewed-by: Simon Horman <simon.horman@netronome.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
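  A sketch of the pattern this helper enables (the callback context is
  assumed, not ixgbe's exact classifier code): one call checks the ethtool
  hw-tc-offload flag and that chain 0 is used, and fills the extack
  message when the offload has to be refused.

      static int setup_tc_cls_u32_sketch(struct net_device *dev,
                                         struct tc_cls_u32_offload *cls_u32)
      {
              if (!tc_cls_can_offload_and_chain0(dev, &cls_u32->common))
                      return -EOPNOTSUPP;

              /* ... program the hardware filter as before ... */
              return 0;
      }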
* ixgbe: register ipsec offload with the xfrm subsystem (Shannon Nelson, 2018-01-23, 2 files, -0/+23)
  With all the support code in place we can now link in the ipsec offload
  operations and set the ESP feature flag for the XFRM subsystem to see.
  Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: ipsec offload stats (Shannon Nelson, 2018-01-23, 4 files, -1/+10)
  Add a simple statistic to count the ipsec offloads.
  Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: process the Tx ipsec offload (Shannon Nelson, 2018-01-23, 5 files, -9/+112)
  If the skb has a security association referenced, set up the Tx
  descriptor with the ipsec offload bits. While we're here, we fix an
  oddly named field in the context descriptor struct.
  Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: process the Rx ipsec offload (Shannon Nelson, 2018-01-23, 3 files, -2/+115)
  If the chip sees and decrypts an ipsec offload, set up the skb sp
  pointer with the related SA info. Since the chip is rude enough to keep
  to itself the table index it used for the decryption, we have to do our
  own table lookup, using the hash for speed.
  Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
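  A conceptual sketch of the Rx-side lookup mentioned above, built on the
  kernel's generic hashtable helpers; the table, structure and field names
  are assumptions rather than the driver's actual ipsec tables.

      #include <linux/hashtable.h>

      static DEFINE_HASHTABLE(rx_sa_ht, 10);

      struct rx_sa_entry {
              struct hlist_node hlist;
              __be32 spi;        /* lookup key from the received packet */
              u16 hw_index;      /* table slot the hardware used        */
      };

      static struct rx_sa_entry *rx_sa_lookup(__be32 spi)
      {
              struct rx_sa_entry *sa;

              hash_for_each_possible_rcu(rx_sa_ht, sa, hlist, (__force u32)spi)
                      if (sa->spi == spi)
                              return sa;
              return NULL;
      }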
* ixgbe: restore offloaded SAs after a reset (Shannon Nelson, 2018-01-23, 3 files, -0/+44)
  On a chip reset most of the table contents are lost, so they must be
  restored. This scans the driver's ipsec tables and restores both the
  filled and empty table slots to their pre-reset values.
  Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: add ipsec offload add and remove SA (Shannon Nelson, 2018-01-23, 3 files, -1/+399)
  Add the functions for setting up and removing offloaded SAs (Security
  Associations) with the x540 hardware. We set up the callback structure
  but we don't yet set the hardware feature bit, to be sure the XFRM
  service won't actually try to use us for an offload yet.
  The software tables are made up to mimic the hardware tables to make it
  easier to track what's in the hardware, and the SA table index is used
  for the XFRM offload handle. However, there is a hashing field in the Rx
  SA tracking that will be used to facilitate faster table searches in the
  Rx fast path.
  Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: add ipsec data structures (Shannon Nelson, 2018-01-23, 2 files, -0/+45)
  Set up the data structures to be used by the ipsec offload.
  Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: add ipsec engine start and stop routines (Shannon Nelson, 2018-01-23, 1 file, -0/+142)
  Add in the code for running and stopping the hardware ipsec
  encryption/decryption engine. It is good to keep the engine off when not
  in use in order to save on the power draw.
  Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: add ipsec register access routines (Shannon Nelson, 2018-01-23, 5 files, -0/+222)
  Add a few routines to make access to the ipsec registers just a little
  easier, and throw in the beginnings of an initialization.
  Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: clean up ipsec defines (Shannon Nelson, 2018-01-23, 1 file, -13/+7)
  Clean up the ipsec/macsec descriptor bit definitions to match the rest
  of the defines and file organization. Also recognise the bit-definition
  overlap in the error mask macro.
  Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: Fix kernel-doc format warnings (Tony Nguyen, 2018-01-12, 12 files, -39/+99)
  Recent checks added for formatting kernel-doc comments are causing
  warnings if W= is run with a non-zero value. This patch fixes function
  comments to resolve warnings when W=1 is used.
  Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: Fix handling of macvlan Tx offload (Alexander Duyck, 2018-01-12, 2 files, -10/+14)
  This update makes it so that we report the actual number of Tx queues
  via real_num_tx_queues but are still restricted to RSS on only the first
  pool by setting num_tc equal to 1. Doing this locks us into only having
  the ability to setup XPS on the queues in that pool, and only those
  queues should be used for transmitting anything other than macvlan
  traffic.
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: avoid bringing rings up/down as macvlans are added/removed (Alexander Duyck, 2018-01-12, 2 files, -58/+72)
  This change makes it so that instead of bringing rings up/down for
  various events, we just update the netdev pointer for the Rx ring and
  set or clear the MAC filter for the interface. By doing it this way we
  can avoid a number of races and issues in the code, as things were
  getting messy with the macvlan clean-up racing with the interface
  clean-up to bring the rings down on shutdown.
  With this change we opt to leave the rings owned by the PF interface for
  both Tx and Rx and just direct the packets, once they are received, to
  the macvlan netdev.
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe: Do not manipulate macvlan Tx queues when performing macvlan offload (Alexander Duyck, 2018-01-12, 2 files, -108/+25)
  We should not be stopping/starting the upper device's Tx queues when
  handling a macvlan offload. Instead we should be stopping and starting
  traffic on our own queues.
  In order to prevent us from doing this I am updating the code so that we
  no longer change the queue configuration on the upper device, nor do we
  update the queue_index on our own device. Instead we can just use the
  queue index for our local device and not update the netdev in the case
  of the transmit rings.
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* ixgbe/fm10k: Record macvlan stats instead of Rx queue for macvlan offloaded rings (Alexander Duyck, 2018-01-12, 1 file, -2/+8)
  We shouldn't be recording the Rx queue on macvlan offloaded frames since
  the macvlan is normally brought up as a single queue device, and it will
  trigger warnings for RPS if we have recorded queue IDs larger than the
  "real_num_rx_queues" value recorded for the device.
  Instead we should be recording the macvlan statistics since we are
  bypassing the normal macvlan statistics that would have been generated
  by the receive path.
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>