This fixes:
drivers/net/sfc/mcdi_mac.c: In function ‘efx_mcdi_set_mac’:
drivers/net/sfc/mcdi_mac.c:36:2: warning: case value ‘3’ not in enumerated type ‘enum efx_fc_type’
Signed-off-by: David S. Miller <davem@davemloft.net>
We need to keep the TX queues stopped throughout a reset, without
triggering the TX watchdog and regardless of the link state. The
proper way to do this is to use netif_device_{detach,attach}() just as
we do around suspend/resume, rather than the current bodge of faking
link-down.
Since we also need to do this during an offline self-test and we
perform a reset during that, add these function calls outside of
efx_reset_down() and efx_reset_up().
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
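A minimal sketch of that pattern, with argument lists simplified and error
handling omitted (the detach/attach calls are the standard netdev ones;
everything else only illustrates the flow described above):

    /* Keep TX stopped for the whole reset without faking link-down:
     * a detached device neither transmits nor triggers the TX watchdog.
     */
    static void efx_reset_with_detach(struct efx_nic *efx)
    {
        netif_device_detach(efx->net_dev);      /* core stops all TX queues */

        efx_reset_down(efx, RESET_TYPE_ALL);    /* argument lists simplified */
        /* ... hardware reset happens here ... */
        efx_reset_up(efx, RESET_TYPE_ALL, true);

        netif_device_attach(efx->net_dev);      /* TX restarts if carrier is up */
    }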
This option appears to have been broken by commit
8313aca38b3937947fffebca6e34bac8e24300c8 ('sfc: Allocate each channel
separately, along with its RX and TX queues').
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Merge from master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6
Conflicts:
        drivers/net/bnx2x/bnx2x_ethtool.c
During self-tests we use efx_process_channel_now() to handle
completion and other events synchronously. This disables interrupts
and NAPI processing for the channel in question, but it may still be
interrupted by another channel. A single socket may receive packets
from multiple net devices or even multiple channels of the same net
device, so this can result in deadlock on a socket lock.
Receiving packets in process context will also result in incorrect
classification by the network cgroup classifier.
Therefore, we must only use efx_process_channel_now() in the offline
loopback tests (which never deliver packets up the stack) and not for
the online interrupt and event tests.
For the interrupt test, there is no reason to process events. We
only care that an interrupt is raised.
For the event test, we want to know whether events have been received,
and there may be many events ahead of the one we inject. Therefore
remove efx_channel::magic_count and instead test whether
efx_channel::eventq_read_ptr advances. This is currently an event
queue index and might wrap around to exactly the same value, resulting
in a false negative. Therefore move the masking to efx_event() and
efx_nic_eventq_read_ack() so that it cannot wrap within the time of
the test.
The event test also tries to diagnose failures by checking whether an
event was delivered without causing an interrupt. Add and use a
helper function that only does this.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
If the TX queues are running during loopback self tests, host
traffic gets looped back which causes the test to fail. Avoid
restarting the TX queues after the port reset so that any packets
sent by the host get held back until after the tests have completed.
[bwh: Also wake all TX queues at the end of self-tests.]
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
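The idea reduces to holding the core TX queues for the duration of the
tests; a hedged sketch (run_loopback_selftests() is a hypothetical stand-in
for the driver's self-test path):

    /* Hold back host traffic so it cannot be looped back into the test,
     * then release everything once the tests have finished.
     */
    netif_tx_stop_all_queues(efx->net_dev);
    run_loopback_selftests(efx);                /* hypothetical helper */
    netif_tx_wake_all_queues(efx->net_dev);     /* wake all TX queues at the end */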
The phy, mac, and board information structures should be const.
Since these tables contain function pointers, this improves security
(at least theoretically).
Compile tested only.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Acked-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
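For illustration only: a const operations table ends up in read-only memory,
so its function pointers cannot be patched at run time. The structure and
names below are invented for the example, not the driver's actual tables.

    struct example_board_ops {
        int  (*init)(struct efx_nic *efx);
        void (*fini)(struct efx_nic *efx);
    };

    /* Lands in .rodata rather than writable data. */
    static const struct example_board_ops example_board = {
        .init = example_board_init,
        .fini = example_board_fini,
    };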
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
The TSO code already supports IPv6 on VLAN, so enable it.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
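The change is essentially a feature-flag one; a hedged sketch of the kind of
line involved (the exact flag plumbing in the driver may differ):

    /* Advertise TSO for IPv6 on VLAN devices stacked on this port, so the
     * existing TSO path also segments IPv6 frames arriving via a VLAN.
     */
    net_dev->vlan_features |= NETIF_F_TSO | NETIF_F_TSO6;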
If SR-IOV is enabled by firmware, even if it is not enabled in the PCI
capability, TX pushes using write-combining may be corrupted.
We want to know whether it is enabled before mapping the NIC
registers, and even if PCI extended capabilities are not accessible.
Therefore, we look for the MSI capability, which is removed if SR-IOV
is enabled.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
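A sketch of that probe-time check using the standard PCI helper (the
wc_unsafe field name is invented for the example):

    /* The MSI capability disappears when firmware enables SR-IOV, so its
     * absence tells us, before the BAR is mapped, that write-combined TX
     * pushes must not be used.
     */
    efx->wc_unsafe = !pci_find_capability(efx->pci_dev, PCI_CAP_ID_MSI);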
Based on work by Neil Turton <nturton@solarflare.com> and
Kieran Mansley <kmansley@solarflare.com>.
The BIU has now been verified to handle 3- and 4-dword writes within a
single 128-bit register correctly. This means we can enable write-
combining and only insert write barriers between writes to distinct
registers.
This has been observed to save about 0.5 us when pushing a TX
descriptor to an empty TX queue.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
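Roughly, the mapping and the barrier rule look like this (register offsets
and variable names are placeholders; the real driver goes through its own
I/O wrappers):

    /* Map the BAR write-combined so the 3- or 4-dword writes making up
     * one 128-bit register may be merged into a single burst.
     */
    void __iomem *membase = ioremap_wc(pci_resource_start(pci_dev, bar),
                                       pci_resource_len(pci_dev, bar));

    __raw_writel(desc_dword0, membase + TX_DESC_REG);
    __raw_writel(desc_dword1, membase + TX_DESC_REG + 4);
    __raw_writel(desc_dword2, membase + TX_DESC_REG + 8);
    wmb();  /* only needed between writes to distinct registers */
    __raw_writel(doorbell, membase + TX_DOORBELL_REG);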
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Use the existing filter management functions to insert TCP/IPv4 and
UDP/IPv4 4-tuple filters for Receive Flow Steering.
For each channel, track how many RFS filters are being added during
processing of received packets and scan the corresponding number of
table entries for filters that may be reclaimed. Do this in batches
to reduce lock overhead.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
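The hook behind this is ndo_rx_flow_steer (under CONFIG_RFS_ACCEL); a
trimmed sketch of its shape, with the header parsing and filter bookkeeping
reduced to comments and a hypothetical helper:

    static int efx_filter_rfs(struct net_device *net_dev,
                              const struct sk_buff *skb,
                              u16 rxq_index, u32 flow_id)
    {
        /* 1. Parse the IPv4 + TCP/UDP headers out of skb.
         * 2. Build a 4-tuple RX filter spec directed at rxq_index.
         * 3. Insert it via the existing filter management code.
         * 4. Count it against the channel so the per-channel scan knows
         *    how many table entries to check for expiry later.
         */
        return steer_flow_to_queue(net_dev, skb, rxq_index);   /* hypothetical */
    }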
Implement the ndo_setup_tc() operation with 2 traffic classes.
Current Solarstorm controllers do not implement TX queue priority, but
they do allow queues to be 'paced' with an enforced delay between
packets. Paced and unpaced queues are scheduled in round-robin within
two separate hardware bins (paced queues with a large delay may be
placed into a third bin temporarily, but we won't use that). If there
are queues in both bins, the TX scheduler will alternate between them.
If we make high-priority queues unpaced and best-effort queues paced,
and high-priority queues are mostly empty, a single high-priority queue
can then instantly take 50% of the packet rate regardless of how many
of the best-effort queues have descriptors outstanding.
We do not actually want an enforced delay between packets on best-
effort queues, so we set the pace value to a reserved value that
actually results in a delay of 0.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
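A sketch of the 2-class hook using the generic netdev TC helpers (queue
counts and offsets are illustrative, and n_tx_channels is assumed to be the
per-class queue count):

    static int efx_setup_tc(struct net_device *net_dev, u8 num_tc)
    {
        struct efx_nic *efx = netdev_priv(net_dev);
        unsigned int tc;

        if (num_tc > 2)                 /* two usable hardware bins */
            return -EINVAL;

        netdev_set_num_tc(net_dev, num_tc);
        for (tc = 0; tc < num_tc; tc++)
            netdev_set_tc_queue(net_dev, tc, efx->n_tx_channels,
                                tc * efx->n_tx_channels);
        return 0;
    }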
efx_channel_get_{rx,tx}_queue() currently return NULL if the channel
isn't used for traffic in that direction. In most cases this is a
bug, but some callers rely on it as an existence test.
Add existence test functions efx_channel_has_{rx_queue,tx_queues}()
and use them as appropriate.
Change efx_channel_get_{rx,tx}_queue() to assert that the requested
queue exists.
Remove now-redundant initialisation from efx_set_channels().
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
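The existence tests are small inline predicates along these lines (a sketch;
the exact comparisons depend on how channels are numbered in the driver):

    static inline bool efx_channel_has_rx_queue(struct efx_channel *channel)
    {
        /* RX-capable channels come first in the channel numbering. */
        return channel->channel < channel->efx->n_rx_channels;
    }

    static inline bool efx_channel_has_tx_queues(struct efx_channel *channel)
    {
        /* TX-capable channels start at the (possibly non-zero) offset
         * used when TX has its own dedicated channels.
         */
        return channel->channel >= channel->efx->tx_channel_offset;
    }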
efx_hard_start_xmit() needs to implement a mapping which is the
inverse of tx_queue::core_txq. Move the initialisation of
tx_queue::core_txq next to efx_hard_start_xmit() to make the
connection more obvious.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Commit a4900ac ("sfc: Create multiple TX queues") accidentally
disabled the rss_cpus module parameter.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Long before this driver went into mainline, it had support for
multiple TX queues per port, with lockless TX enabled. Since Linux
did not know anything of this, filling up any hardware TX queue would
stop the core TX queue and multiple hardware TX queues could fill up
before the scheduler reacted. Thus it was necessary to keep a count
of how many TX queues were stopped and to wake the core TX queue only
when all had free space again.
The driver also previously (ab)used the per-hardware-queue stopped
flag as a counter to deal with various things that can inhibit TX, but
it no longer does that.
Remove the per-channel tx_stop_count, tx_stop_lock and
per-hardware-queue stopped count and just use the networking core
queue state directly.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
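With the private counters gone, the hot paths talk straight to the core
queue state; roughly (efx_tx_queue_space() is a hypothetical
space-remaining helper):

    /* Transmit path: stop the core queue when the hardware ring is
     * nearly full ...
     */
    if (efx_tx_queue_space(tx_queue) < MAX_SKB_FRAGS + 1)
        netif_tx_stop_queue(tx_queue->core_txq);

    /* ... completion path: wake it once enough descriptors are free. */
    if (netif_tx_queue_stopped(tx_queue->core_txq) &&
        efx_tx_queue_space(tx_queue) > MAX_SKB_FRAGS + 1)
        netif_tx_wake_queue(tx_queue->core_txq);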
Merge from master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6
Conflicts:
        drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
        net/llc/af_llc.c
Call netif_napi_{add,del}() on the NAPI contexts in the new and
old channels, respectively.
Since efx_init_napi() cannot fail, make its return type void.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
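Per-channel NAPI setup and teardown then reduce to the core calls; a sketch
(poll routine name, weight and the surrounding checks are simplified):

    static void efx_init_napi_channel(struct efx_channel *channel)
    {
        netif_napi_add(channel->napi_dev, &channel->napi_str,
                       efx_poll, 64);           /* weight illustrative */
    }

    static void efx_fini_napi_channel(struct efx_channel *channel)
    {
        netif_napi_del(&channel->napi_str);
    }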
If we are using a legacy interrupt, our IRQ may be shared and our
interrupt handler may be called even though interrupts are disabled on
the NIC. When we change ring sizes, we reallocate the event queue and
the interrupt handler may use an invalid pointer when called for
another device's interrupt.
Maintain a legacy_irq_enabled flag and test that at the top of the
interrupt handler. Note that this problem results from the need to
work around broken INT_ISR0 reads, and does not affect the legacy
interrupt handler for Falcon A1.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
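The guard at the top of the shared handler is conceptually just this
(handler name and surrounding logic simplified):

    static irqreturn_t efx_legacy_interrupt(int irq, void *dev_id)
    {
        struct efx_nic *efx = dev_id;

        /* Shared IRQ line: if we have disabled interrupts on the NIC
         * (e.g. while reallocating event queues), any call here is for
         * another device and our possibly stale event queue pointer
         * must not be dereferenced.
         */
        if (unlikely(!efx->legacy_irq_enabled))
            return IRQ_NONE;

        /* ... normal INT_ISR0 processing ... */
        return IRQ_HANDLED;
    }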
For some reason we failed to make this change when perm_addr was
introduced.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
We only have direct access to MDIO on Falcon, so move this out of
struct efx_nic.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
We only have direct access to SPI on Falcon, so move all this state
out of struct efx_nic.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Make local functions and variables static. Do some rearrangement
of the string table stuff to put it where it gets used.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Acked-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This requires some reorganisation of channel setup and teardown to
ensure that we can always roll back a failed change.
Based on work by Steve Hodgson <shodgson@solarflare.com>
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
- Allow the ring size to be specified in non-power-of-two sizes (for
  instance, to limit the number of receive buffers).
- Automatically size the event queue.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This will allow for reallocation of channel structures and rings.
Change module parameter separate_tx_channels to be read-only, since we
now require its value to be constant.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
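Read-only here just means the sysfs permission bits drop the write flag;
illustratively (the parameter's exact type in the driver may differ):

    /* 0444: visible under /sys/module/sfc/parameters/ but not writable,
     * since channel allocation now depends on the value staying fixed
     * after module load.
     */
    static unsigned int separate_tx_channels;
    module_param(separate_tx_channels, uint, 0444);
    MODULE_PARM_DESC(separate_tx_channels,
                     "Use separate channels for TX and RX");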
In preparation for changes to the way channels and queue structures
are allocated, revise the macros and functions used to look up and
iterate over them.
- Replace efx_for_each_tx_queue() with iteration over channels then TX
queues
- Replace efx_for_each_rx_queue() with iteration over channels then RX
queues (with one exception, shortly to be removed)
- Introduce efx_get_{channel,rx_queue,tx_queue}() functions to look up
channels and queues by index
- Introduce efx_channel_get_{rx,tx}_queue() functions to look up a
channel's queues
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
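The iterators follow the usual nested-loop macro pattern; a simplified
sketch (field names and bounds are illustrative of the idea, not the
driver's exact layout):

    /* Walk every channel, then every TX queue owned by one channel. */
    #define efx_for_each_channel(_channel, _efx)                        \
        for (_channel = (_efx)->channel;                                \
             _channel < (_efx)->channel + (_efx)->n_channels;           \
             _channel++)

    #define efx_for_each_channel_tx_queue(_tx_queue, _channel)          \
        for (_tx_queue = (_channel)->tx_queue;                          \
             _tx_queue < (_channel)->tx_queue + EFX_TXQ_TYPES;          \
             _tx_queue++)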
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
rx_over_errors appears to be intended as a count of packets that
overflow a packet buffer in the NIC. Given that we implement a
cut-through receive path, this should always be 0.
rx_dropped appears to be the correct counter for packets dropped due
to lack of host buffers.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
There is a small possibility that a reader gets incorrect values on
32-bit arches: SNMP applications could catch incorrect counters when a
32-bit high part is changed by another stats consumer/provider.
One way to solve this is to add an rtnl_link_stats64 parameter to all
ndo_get_stats64() methods, and also add such a parameter to
dev_get_stats().
The rule is that we are not allowed to use dev->stats64 as temporary
storage for 64-bit stats; we must use a caller-provided area (usually
on the stack) instead.
Old drivers (providing only the get_stats() method) need no changes.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
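On the driver side this gives the familiar shape below (a sketch; the
counter sources named mac_stats.* are placeholders for wherever the driver
keeps its 64-bit totals):

    static struct rtnl_link_stats64 *
    efx_net_stats(struct net_device *net_dev, struct rtnl_link_stats64 *stats)
    {
        struct efx_nic *efx = netdev_priv(net_dev);

        /* Fill the caller-provided block rather than a shared,
         * device-owned structure, so each concurrent reader gets a
         * consistent 64-bit snapshot.
         */
        stats->rx_packets = efx->mac_stats.rx_packets;
        stats->tx_packets = efx->mac_stats.tx_packets;
        /* ... remaining counters ... */
        return stats;
    }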
Allow ethtool to query the number of RX rings, the fields used in RX
flow hashing and the hash indirection table.
Allow ethtool to update the RX flow hash indirection table.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
We will use this hash key for Toeplitz IPv4 hashing too.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Replace EFX_ERR() with netif_err(), EFX_INFO() with netif_info(),
EFX_LOG() with netif_dbg() and EFX_TRACE() and EFX_REGDUMP() with
netif_vdbg().
Replace EFX_ERR_RL(), EFX_INFO_RL() and EFX_LOG_RL() with the above,
guarded by explicit calls to net_ratelimit().
Implement the ethtool operations to get and set message level flags,
and add a 'debug' module parameter for the initial value.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
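After the conversion, a call site and the message-level plumbing look
roughly like this (the default message bits and the log text are
illustrative):

    static int debug = -1;          /* -1 means "use the default bits" */
    module_param(debug, int, 0);
    MODULE_PARM_DESC(debug, "Bitmapped debugging message enable value");

    /* in probe: */
    efx->msg_enable = netif_msg_init(debug, NETIF_MSG_DRV | NETIF_MSG_LINK);

    /* call sites check msg_enable and prefix the device name for us: */
    netif_err(efx, drv, efx->net_dev, "hardware reset failed\n");
    if (net_ratelimit())
        netif_dbg(efx, rx_err, efx->net_dev, "dropping corrupt RX packet\n");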
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
rx_errors is defined as 'bad packets received', but we are currently
including various overflow errors as well.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Insert a structure at the start of the shared page that
tracks the dma mapping refcnt. DMA into the next cache
line of the (shared) page (plus EFX_PAGE_IP_ALIGN).
When recycling a page, check the page refcnt. If the
page is otherwise unused, then resurrect the other
receive buffer that previously referenced the page.
Be careful not to overflow the receive ring, since we
can now resurrect n receive buffers in a row.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
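Illustratively, the bookkeeping placed at the start of each shared page
looks something like this (names approximate):

    /* Lives in the first bytes of the shared RX page; packet data is
     * DMAed starting from the next cache line (plus EFX_PAGE_IP_ALIGN),
     * so the NIC never writes over this state.
     */
    struct efx_rx_page_state {
        unsigned int refcnt;        /* receive buffers still using the page */
        dma_addr_t   dma_addr;      /* bus address of the whole page */
    };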
Ensure that efx_fast_push_rx_descriptors() only runs
from efx_process_channel() [NAPI], or when napi_disable()
has been executed.
Reimplement the slow fill by sending an event to the
channel, so that NAPI runs, and hanging the subsequent
fast fill off the event handler. Replace the sfc_refill
workqueue and delayed work items with a timer. We do
not need to stop this timer in efx_flush_all() because
it's safe to send the event always; receiving it will
be delayed until NAPI is restarted.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
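The replacement timer is the standard kernel one; a sketch using the timer
API of that era (helper and field names are approximate):

    /* Arm a retry when the fast fill could not allocate enough buffers. */
    static void efx_schedule_slow_fill(struct efx_rx_queue *rx_queue)
    {
        mod_timer(&rx_queue->slow_fill, jiffies + msecs_to_jiffies(100));
    }

    /* Timer callback: just nudge the channel with an event so the refill
     * itself is finished from NAPI context, never from timer context.
     */
    static void efx_rx_slow_fill(unsigned long context)
    {
        struct efx_rx_queue *rx_queue = (struct efx_rx_queue *)context;

        efx_nic_generate_fill_event(rx_queue->channel); /* name approximate */
    }

    /* at queue init: */
    setup_timer(&rx_queue->slow_fill, efx_rx_slow_fill,
                (unsigned long)rx_queue);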
Under certain conditions a PHY may backpressure Falcon B0
in such a way that flushes timeout. In normal circumstances
the phy poller would fix the PHY, and the flush could complete.
But efx_nic_flush_queues() is always called after efx_stop_all(),
so the poller has been stopped. Even if this weren't the case,
how long would we have to wait for the poller to fix this? And
several callers of efx_nic_flush_queues() are about to reset
the device anyway - so we don't need to do anything.
Work around this bug by scheduling a reset. Ensure that the
MAC is never rewired back into the datapath before the reset
runs (we already ignore all rx events anyway).
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
efx_pm_freeze() sets efx->state = STATE_FINI, which means
efx_reset_work() will abort any scheduled resets.
efx_pm_thaw() should reschedule efx_reset_work() again,
since a freeze/thaw will not have reset the hardware.
This bug was spotted by inspection - there is no real world example of
this happening.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge from master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6
This fixes a regression introduced by commit
eb9f6744cbfa97674c13263802259b5aa0034594 "sfc: Implement ethtool
reset operation".
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Cc: stable@kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>