path: root/drivers/infiniband/hw/ehca/ehca_reqs.c
* IB/ehca: Generate flush status CQ entries (Alexander Schmidt, 2008-09-20, 1 file changed, -27/+184)

  When a QP goes into error state, it is required that CQ entries with a flush error status are delivered to the application for any outstanding work requests. eHCA does not do this in hardware, so this patch adds software flush CQE generation to the ehca driver.

  Whenever a QP gets into error state, it is added to the QP error list of its respective CQ. If the error QP list of a CQ is not empty, poll_cq() generates flush CQEs before polling the actual CQ.

  Signed-off-by: Alexander Schmidt <alexs@linux.vnet.ibm.com>
  Signed-off-by: Roland Dreier <rolandd@cisco.com>
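  A rough sketch of the approach the commit describes; the list and helper names here (qp_err_list, qp_has_outstanding_wqes, poll_hardware_cq, consume_wqe) are invented for illustration and are not ehca's actual identifiers:

      /* Drain software flush CQEs for error-state QPs before touching
       * the hardware CQ -- illustrative sketch only. */
      static int sketch_poll_cq(struct ehca_cq *cq, int num_entries,
                                struct ib_wc *wc)
      {
              struct ehca_qp *qp;
              int nr = 0;

              list_for_each_entry(qp, &cq->qp_err_list, err_list_node) {
                      while (nr < num_entries && qp_has_outstanding_wqes(qp)) {
                              wc[nr].status = IB_WC_WR_FLUSH_ERR;
                              wc[nr].qp = &qp->ib_qp;
                              /* ... fill in wr_id etc. from the WQE ... */
                              consume_wqe(qp);
                              nr++;
                      }
              }

              /* Only then poll the hardware CQ for real completions. */
              if (nr < num_entries)
                      nr += poll_hardware_cq(cq, num_entries - nr, &wc[nr]);

              return nr;
      }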
* IB/ehca: Discard double CQE for one WR (Alexander Schmidt, 2008-08-12, 1 file changed, -11/+43)

  Under rare circumstances, the ehca hardware might erroneously generate two CQEs for the same WQE, which is not compliant with the IB spec and will cause unpredictable errors like memory being freed twice. To avoid this problem, the driver needs to detect the second CQE and discard it.

  For this purpose, introduce an array holding as many elements as the SQ of the QP, called sq_map. Each sq_map entry stores a "reported" flag for one WQE in the SQ. When a work request is posted to the SQ, the respective "reported" flag is set to zero. After the arrival of a CQE, the flag is set to 1, which makes it possible to detect the occurrence of a second CQE.

  The mapping between WQE/CQE and the corresponding sq_map element is implemented by replacing the lowest 16 bits of the wr_id with the index in the queue map. The original 16 bits are stored in the sq_map entry and are restored when the CQE is passed to the application.

  Signed-off-by: Alexander Schmidt <alexs@linux.vnet.ibm.com>
  Signed-off-by: Roland Dreier <rolandd@cisco.com>
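  The wr_id packing can be pictured as follows; the macro and field names (SKETCH_MAP_MASK, app_wr_id, reported, sq_next_index) are illustrative, not the driver's:

      #define SKETCH_MAP_BITS 16
      #define SKETCH_MAP_MASK ((1ULL << SKETCH_MAP_BITS) - 1)

      /* post_send: stash the low 16 bits of wr_id, replace them with
       * the queue-map index, and clear the "reported" flag */
      idx = sq_next_index(my_qp);
      my_qp->sq_map[idx].app_wr_id = wr->wr_id & SKETCH_MAP_MASK;
      my_qp->sq_map[idx].reported  = 0;
      wqe->wr_id = (wr->wr_id & ~SKETCH_MAP_MASK) | idx;

      /* poll_cq: a set flag means this is the second CQE for the WQE,
       * so discard it; otherwise restore the application's wr_id */
      idx = cqe->wr_id & SKETCH_MAP_MASK;
      if (my_qp->sq_map[idx].reported)
              goto repoll;            /* duplicate CQE, read the next one */
      my_qp->sq_map[idx].reported = 1;
      wc->wr_id = (cqe->wr_id & ~SKETCH_MAP_MASK) | my_qp->sq_map[idx].app_wr_id;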
* IB/ehca: Check idr_find() return value (Alexander Schmidt, 2008-08-12, 1 file changed, -1/+3)

  The idr_find() function may fail when trying to get the QP that is associated with a CQE, e.g. when a QP has been destroyed between the generation of a CQE and the poll request for it. In consequence, the return value of idr_find() must be checked and the CQE must be discarded when the QP cannot be found.

  Signed-off-by: Alexander Schmidt <alexs@linux.vnet.ibm.com>
  Signed-off-by: Roland Dreier <rolandd@cisco.com>
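  The shape of the fix, paraphrased; read_next_cqe() and get_qp_token() are invented helper names, and the "repoll" label comes from the two adjacent commits:

      repoll:
              cqe = read_next_cqe(my_cq);
              if (!cqe)
                      return -EAGAIN;         /* CQ empty */

              my_qp = idr_find(&ehca_qp_idr, get_qp_token(cqe));
              if (unlikely(!my_qp))
                      /* QP destroyed between CQE generation and this
                       * poll: discard the CQE and try the next one */
                      goto repoll;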
* IB/ehca: Repoll CQ on invalid opcode (Alexander Schmidt, 2008-08-12, 1 file changed, -1/+1)

  When the ehca driver detects an invalid opcode in a CQE, it currently passes the CQE to the application and returns with success. This patch changes the CQE handling to discard CQEs with invalid opcodes and to continue reading the next CQE from the CQ.

  Signed-off-by: Alexander Schmidt <alexs@linux.vnet.ibm.com>
  Signed-off-by: Roland Dreier <rolandd@cisco.com>
* IB/ehca: Rename goto label in ehca_poll_cq_one() (Alexander Schmidt, 2008-08-12, 1 file changed, -3/+3)

  Rename the "poll_cq_one_read_cqe" goto label to what it actually does, namely "repoll".

  Signed-off-by: Alexander Schmidt <alexs@linux.vnet.ibm.com>
  Signed-off-by: Roland Dreier <rolandd@cisco.com>
* powerpc: Move include files to arch/powerpc/include/asm (Stephen Rothwell, 2008-08-04, 1 file changed, -1/+1)

  from include/asm-powerpc. This is the result of

      mkdir arch/powerpc/include/asm
      git mv include/asm-powerpc/* arch/powerpc/include/asm

  Followed by a few documentation/comment fixups and a couple of places where <asm-powerpc/...> was being used explicitly. Of the latter only one was outside the arch code and it is a driver only built for powerpc.

  Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* IB/ehca: Reject receive work requests if QP is in RESET state (Joachim Fenkes, 2008-07-14, 1 file changed, -2/+10)

  Signed-off-by: Joachim Fenkes <fenkes@de.ibm.com>
  Signed-off-by: Roland Dreier <rolandd@cisco.com>
* RDMA/core: Add memory management extensions support (Steve Wise, 2008-07-14, 1 file changed, -1/+1)

  This patch adds support for the IB "base memory management extension" (BMME) and the equivalent iWARP operations (which the iWARP verbs mandate all devices must implement). The new operations are:

  - Allocate an ib_mr for use in fast register work requests.

  - Allocate/free physical buffer lists for use in fast register work requests. This allows device drivers to allocate this memory as needed for use in posting send requests (eg via dma_alloc_coherent).

  - New send queue work requests:
    * send with remote invalidate
    * fast register memory region
    * local invalidate memory region
    * RDMA read with invalidate local memory region (iWARP only)

  Consumer interface details:

  - A new device capability flag IB_DEVICE_MEM_MGT_EXTENSIONS is added to indicate device support for these features.

  - New send work request opcodes IB_WR_FAST_REG_MR, IB_WR_LOCAL_INV, IB_WR_RDMA_READ_WITH_INV are added.

  - A new consumer API function, ib_alloc_mr(), is added to allocate fast register memory regions.

  - New consumer API functions, ib_alloc_fast_reg_page_list() and ib_free_fast_reg_page_list(), are added to allocate and free device-specific memory for fast registration page lists.

  - A new consumer API function, ib_update_fast_reg_key(), is added to allow the key portion of the R_Key and L_Key of a fast registration MR to be updated. Consumers call this if desired before posting an IB_WR_FAST_REG_MR work request.

  Consumers can use this as follows:

  - MR is allocated with ib_alloc_mr().
  - Page list memory is allocated with ib_alloc_fast_reg_page_list().
  - MR R_Key/L_Key "key" field is updated with ib_update_fast_reg_key().
  - MR is made VALID and bound to a specific page list via ib_post_send(IB_WR_FAST_REG_MR).
  - MR is made INVALID via ib_post_send(IB_WR_LOCAL_INV), ib_post_send(IB_WR_RDMA_READ_WITH_INV) or an incoming send with invalidate operation.
  - MR is deallocated with ib_dereg_mr().
  - Page lists are deallocated via ib_free_fast_reg_page_list().

  Applications can allocate a fast register MR once, and then can repeatedly bind the MR to different physical block lists (PBLs) via posting work requests to a send queue (SQ). For each outstanding MR-to-PBL binding in the SQ pipe, a fast_reg_page_list needs to be allocated (the fast_reg_page_list is owned by the low-level driver from the time the consumer posts a work request until the request completes). Thus pipelining can be achieved while still allowing device-specific page_list processing.

  The 32-bit fast register memory key/STag is composed of a 24-bit index and an 8-bit key. The application can change the key each time it fast registers, thus allowing more control over the peer's use of the key/STag (ie it can effectively be changed each time the rkey is rebound to a page list).

  Signed-off-by: Steve Wise <swise@opengridcomputing.com>
  Signed-off-by: Roland Dreier <rolandd@cisco.com>
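  A hedged sketch of the consumer flow just listed. The function names follow the commit text above; the fast-register WR fields (abbreviated to a comment here) and the exact signatures are assumptions, not a definitive rendering of the merged API:

      struct ib_mr *mr;
      struct ib_fast_reg_page_list *pl;
      struct ib_send_wr wr, *bad_wr;

      mr = ib_alloc_mr(pd, max_pages);            /* allocate the MR once */
      pl = ib_alloc_fast_reg_page_list(device, max_pages);

      /* ... fill pl->page_list[] with the DMA addresses to register ... */

      ib_update_fast_reg_key(mr, next_key);       /* vary the 8-bit key */

      memset(&wr, 0, sizeof(wr));
      wr.opcode = IB_WR_FAST_REG_MR;              /* MR becomes VALID, bound
                                                   * to pl; the WR's fast_reg
                                                   * fields (not shown) carry
                                                   * mr, pl, length, access */
      ib_post_send(qp, &wr, &bad_wr);

      /* ... advertise mr->rkey to the peer, do RDMA ... */

      memset(&wr, 0, sizeof(wr));
      wr.opcode = IB_WR_LOCAL_INV;                /* MR becomes INVALID */
      wr.ex.invalidate_rkey = mr->rkey;
      ib_post_send(qp, &wr, &bad_wr);

      ib_free_fast_reg_page_list(pl);
      ib_dereg_mr(mr);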
* IB/ehca: Reject send WRs only for RESET, INIT and RTR state (Joachim Fenkes, 2008-06-06, 1 file changed, -2/+4)

  Signed-off-by: Joachim Fenkes <fenkes@de.ibm.com>
  Signed-off-by: Roland Dreier <rolandd@cisco.com>
* IB/ehca: Move high-volume debug output to higher debug levels (Joachim Fenkes, 2008-04-23, 1 file changed, -24/+22)

  Signed-off-by: Joachim Fenkes <fenkes@de.ibm.com>
  Signed-off-by: Roland Dreier <rolandd@cisco.com>
* IB/ehca: Prevent posting of SQ WQEs if QP not in RTS (Joachim Fenkes, 2008-04-23, 1 file changed, -0/+5)

  ...as required by IB Spec, C10-29.

  Signed-off-by: Joachim Fenkes <fenkes@de.ibm.com>
  Signed-off-by: Roland Dreier <rolandd@cisco.com>
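  Combined with the 2008-06-06 relaxation above ("Reject send WRs only for RESET, INIT and RTR state"), the net behavior amounts to a check of this shape; the field name and error handling are assumptions, not ehca's actual code:

      /* at the top of the post_send path -- illustrative only */
      if (unlikely(my_qp->state < IB_QPS_RTS)) {
              /* IB spec C10-29: SQ work requests may not be posted in
               * RESET, INIT or RTR (enum ib_qp_state orders those
               * states before IB_QPS_RTS) */
              return -EINVAL;
      }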
* IB/core: Add support for "send with invalidate" work requests (Roland Dreier, 2008-04-16, 1 file changed, -1/+1)

  Add a new IB_WR_SEND_WITH_INV send opcode that can be used to mark a "send with invalidate" work request as defined in the iWARP verbs and the InfiniBand base memory management extensions. Also put "imm_data" and a new "invalidate_rkey" member in a new "ex" union in struct ib_send_wr. The invalidate_rkey member can be used to pass in an R_Key/STag to be invalidated. Add this new union to struct ib_uverbs_send_wr. Add code to copy the invalidate_rkey field in ib_uverbs_post_send().

  Fix up low-level drivers to deal with the change to struct ib_send_wr, and just remove the imm_data initialization from net/sunrpc/xprtrdma/, since that code never does any send with immediate operations.

  Also, move the existing IB_DEVICE_SEND_W_INV flag to a new bit, since the iWARP drivers currently in the tree set the bit. The amso1100 driver at least will silently fail to honor the IB_SEND_INVALIDATE bit if passed in as part of userspace send requests (since it does not implement kernel bypass work request queueing). Remove the flag from all existing drivers that set it until we know which ones are OK.

  The value chosen for the new flag is not consecutive to avoid clashing with flags defined in the XRC patches, which are not merged yet but which are already in use and are likely to be merged soon.

  This resurrects a patch sent long ago by Mikkel Hagen <mhagen@iol.unh.edu>.

  Signed-off-by: Roland Dreier <rolandd@cisco.com>
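  A minimal example of posting such a work request with the new union; IB_WR_SEND_WITH_INV and ex.invalidate_rkey are named in the commit text, while the sge setup and peer_rkey are assumed context:

      struct ib_send_wr wr, *bad_wr;

      memset(&wr, 0, sizeof(wr));
      wr.opcode             = IB_WR_SEND_WITH_INV;
      wr.sg_list            = &sge;           /* payload, set up elsewhere */
      wr.num_sge            = 1;
      wr.send_flags         = IB_SEND_SIGNALED;
      wr.ex.invalidate_rkey = peer_rkey;      /* R_Key/STag the responder
                                               * should invalidate */

      ib_post_send(qp, &wr, &bad_wr);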
* IB/ehca: Prevent sending UD packets to QP0 (Joachim Fenkes, 2008-02-04, 1 file changed, -0/+4)

  The IB spec doesn't allow packets to QP0 sent on any other VL than VL15. Hardware doesn't filter those packets on the send side, so we need to do this in the driver and firmware. As eHCA doesn't support QP0, we can just filter out all traffic going to QP0, regardless of SL or VL.

  Signed-off-by: Joachim Fenkes <fenkes@de.ibm.com>
  Signed-off-by: Roland Dreier <rolandd@cisco.com>
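  The filter this implies is a simple check in the UD send path; its exact placement and error reporting are assumptions:

      /* UD branch of post_send -- sketch */
      if (unlikely(wr->wr.ud.remote_qpn == 0))
              /* QP0 traffic belongs on VL15, which eHCA cannot
               * guarantee, so reject it outright */
              return -EINVAL;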
* IB/ehca: Prevent RDMA-related connection failures on some eHCA2 hardware (Joachim Fenkes, 2008-01-25, 1 file changed, -32/+80)

  Some HW revisions of eHCA2 may cause an RC connection to break if they received RDMA Reads over that connection before. This can be prevented by assuring that, after the first RDMA Read, the QP receives a new RDMA Read every few million link packets.

  Add code to the driver that inserts an empty (size 0) RDMA Read into the message stream every now and then if the consumer doesn't post them frequently enough.

  Signed-off-by: Joachim Fenkes <fenkes@de.ibm.com>
  Signed-off-by: Roland Dreier <rolandd@cisco.com>
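  One way to picture the workaround; the counter name, threshold constant and helper are invented for illustration:

      /* in the RC send path -- illustrative sketch only */
      if (my_qp->message_count > SKETCH_RDMA_READ_THRESHOLD &&
          wr->opcode != IB_WR_RDMA_READ) {
              /* consumer hasn't posted an RDMA Read recently enough:
               * slip in an empty (size 0) RDMA Read of our own */
              post_empty_rdma_read(my_qp);
              my_qp->message_count = 0;
      }
      my_qp->message_count++;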
* IB/ehca: Print return codes as signed decimal integers (Joachim Fenkes, 2007-10-09, 1 file changed, -1/+1)

  ...because -12 is easier to read than FFFFFFF4.

  Signed-off-by: Joachim Fenkes <fenkes@de.ibm.com>
  Signed-off-by: Roland Dreier <rolandd@cisco.com>
* IB/ehca: Fix warnings issued by checkpatch.pl (Hoang-Nam Nguyen, 2007-07-17, 1 file changed, -6/+9)

  Run the existing ehca code through checkpatch.pl and clean up the worst of the coding style violations.

  Signed-off-by: Joachim Fenkes <fenkes@de.ibm.com>
  Signed-off-by: Roland Dreier <rolandd@cisco.com>
* IB/ehca: Improve latency by unlocking after triggering the hardware (Hoang-Nam Nguyen, 2007-07-09, 1 file changed, -3/+2)

  Kick the hardware before unlocking the send/receive queue to overlap processing a little more.

  Signed-off-by: Joachim Fenkes <fenkes@de.ibm.com>
  Signed-off-by: Roland Dreier <rolandd@cisco.com>
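  The reordering amounts to ringing the doorbell while still holding the queue lock, so the hardware starts on the WQEs while the driver runs its epilogue; treat hipz_update_sqa() and spinlock_s as assumed names for the doorbell helper and send-queue lock:

      spin_lock_irqsave(&my_qp->spinlock_s, flags);
      /* ... write the WQEs into the send queue ... */
      hipz_update_sqa(my_qp, wqe_count);      /* kick the hardware first */
      spin_unlock_irqrestore(&my_qp->spinlock_s, flags);
      /* hardware is already processing during the unlocked epilogue */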
* IB/ehca: Return QP pointer in poll_cq() (Joachim Fenkes, 2007-07-09, 1 file changed, -3/+8)

  Also add two unlikely() statements.

  Signed-off-by: Joachim Fenkes <fenkes@de.ibm.com>
  Signed-off-by: Roland Dreier <rolandd@cisco.com>
* IB/ehca: Lock renaming, static initializers (Joachim Fenkes, 2007-07-09, 1 file changed, -12/+12)

  - Rename all spinlock flags to "flags", matching the vast majority of kernel code.
  - Move hcall_lock into the only file it's used in.
  - Replace spin_lock_init() and friends with static initializers for global variables.

  Signed-off-by: Joachim Fenkes <fenkes@de.ibm.com>
  Signed-off-by: Roland Dreier <rolandd@cisco.com>
* IB/ehca: add Shared Receive Queue support (Joachim Fenkes, 2007-07-09, 1 file changed, -12/+35)

  Support SRQs on eHCA2. Since an SRQ is a QP for eHCA2, a lot of code (structures, create, destroy, post_recv) can be shared between QP and SRQ.

  Signed-off-by: Joachim Fenkes <fenkes@de.ibm.com>
  Signed-off-by: Roland Dreier <rolandd@cisco.com>
* IB/ehca: Refactor "maybe missed event" code (Joachim Fenkes, 2007-07-09, 1 file changed, -1/+1)

  Refactor the ehca changes from commit ed23a727 ("IB: Return "maybe missed event" hint from ib_req_notify_cq()") so the queue arithmetic is done in slightly fewer lines. Also, move the spinlock flags into the block they're used in.

  Signed-off-by: Joachim Fenkes <fenkes@de.ibm.com>
  Signed-off-by: Roland Dreier <rolandd@cisco.com>
* IB: Return "maybe missed event" hint from ib_req_notify_cq()Roland Dreier2007-05-061-3/+11
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The semantics defined by the InfiniBand specification say that completion events are only generated when a completions is added to a completion queue (CQ) after completion notification is requested. In other words, this means that the following race is possible: while (CQ is not empty) ib_poll_cq(CQ); // new completion is added after while loop is exited ib_req_notify_cq(CQ); // no event is generated for the existing completion To close this race, the IB spec recommends doing another poll of the CQ after requesting notification. However, it is not always possible to arrange code this way (for example, we have found that NAPI for IPoIB cannot poll after requesting notification). Also, some hardware (eg Mellanox HCAs) actually will generate an event for completions added before the call to ib_req_notify_cq() -- which is allowed by the spec, since there's no way for any upper-layer consumer to know exactly when a completion was really added -- so the extra poll of the CQ is just a waste. Motivated by this, we add a new flag "IB_CQ_REPORT_MISSED_EVENTS" for ib_req_notify_cq() so that it can return a hint about whether the a completion may have been added before the request for notification. The return value of ib_req_notify_cq() is extended so: < 0 means an error occurred while requesting notification == 0 means notification was requested successfully, and if IB_CQ_REPORT_MISSED_EVENTS was passed in, then no events were missed and it is safe to wait for another event. > 0 is only returned if IB_CQ_REPORT_MISSED_EVENTS was passed in. It means that the consumer must poll the CQ again to make sure it is empty to avoid the race described above. We add a flag to enable this behavior rather than turning it on unconditionally, because checking for missed events may incur significant overhead for some low-level drivers, and consumers that don't care about the results of this test shouldn't be forced to pay for the test. Signed-off-by: Roland Dreier <rolandd@cisco.com>
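  With the flag, a consumer can close the race without an unconditional extra poll; handle_completion() stands in for the consumer's own handler:

      /* poll until empty, then rearm; repoll only when the hint (> 0)
       * says a completion may have slipped in meanwhile */
      do {
              while (ib_poll_cq(cq, 1, &wc) > 0)
                      handle_completion(&wc);
      } while (ib_req_notify_cq(cq, IB_CQ_NEXT_COMP |
                                    IB_CQ_REPORT_MISSED_EVENTS) > 0);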
* IB: Return qp pointer as part of ib_wc (Michael S. Tsirkin, 2007-02-04, 1 file changed, -1/+1)

  struct ib_wc currently only includes the local QP number: this matches the IB spec, but seems mostly useless. The following patch replaces this with the pointer to the qp itself, and updates all low level drivers and all users.

  This has the following advantages:
  - Ability to get a per-qp context through wc->qp->qp_context
  - Existing drivers already have the qp pointer ready in poll cq, so this change actually saves a tiny bit (extra memory read) on the data path (for ehca it would actually be expensive to find the QP pointer when polling a CQ, but ehca does not support SRQ so we can leave wc->qp as NULL for ehca)
  - Users that need the QP number can still get it through wc->qp->qp_num

  Use case: In IPoIB connected mode code, I have a common CQ shared by multiple QPs. To track connection usage, I need a way to get at some per-QP context upon the completion, and I would like to avoid allocating a context object per work request just to stick a QP pointer into it. With this code, I can just use wc->qp->qp_context.

  Signed-off-by: Michael S. Tsirkin <mst@mellanox.co.il>
  Signed-off-by: Roland Dreier <rolandd@cisco.com>
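  The shared-CQ use case then looks roughly like this; struct my_conn and its handler are illustrative, and the NULL check covers drivers (like ehca, per the note above) that leave wc->qp unset:

      struct ib_wc wc;

      while (ib_poll_cq(cq, 1, &wc) > 0) {
              struct my_conn *conn = wc.qp ? wc.qp->qp_context : NULL;

              if (conn)
                      handle_conn_completion(conn, &wc);
      }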
* IB/ehca: Add driver for IBM eHCA InfiniBand adapters (Heiko J Schick, 2006-09-22, 1 file changed, -0/+653)

  Add a driver for IBM GX bus InfiniBand adapters, which are usable with some pSeries/System p systems.

  Signed-off-by: Heiko J Schick <schickhj.ibm.com>
  Signed-off-by: Roland Dreier <rolandd@cisco.com>