path: root/drivers/dma/ioatdma.h
* I/OAT: I/OAT version 3.0 support (Maciej Sosnowski, 2008-07-22, 1 file changed, -1/+4)
  This patch adds support for the Intel I/OAT DMA engine ver. 3 (aka the CB3
  device) to the ioatdma and dca modules. The main features of I/OAT ver. 3 are:

  * 8 single-channel DMA devices (8 channels total)
  * 8 DCA providers, each of which can accept 2 requesters
  * 8-bit TAG values and 32-bit extended APIC IDs

  Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* I/OAT: tcp_dma_copybreak default value dependent on I/OAT version (Maciej Sosnowski, 2008-07-22, 1 file changed, -0/+15)
  I/OAT DMA performance tuning showed different optimal values of
  tcp_dma_copybreak for different I/OAT versions (4096 for 1.2 and 2048 for
  2.0). This patch lets the ioatdma driver set the tcp_dma_copybreak value
  according to these results.

  [dan.j.williams@intel.com: remove some ifdefs]
  Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
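  The version-dependent default can be pictured roughly as below. This is only
  an illustrative sketch under assumptions: the helper name and the
  IOAT*_TCP_DMA_COPYBREAK constants are invented here, and it presumes the
  NET_DMA-era sysctl_tcp_dma_copybreak tunable and the header's IOAT_VER_*
  version macros.

      /* Illustrative sketch only; the helper and constant names are
       * hypothetical, not the actual ioatdma code. The idea: pick the
       * "copy instead of offload below this size" threshold from the
       * detected hardware version.
       */
      #define IOAT1_TCP_DMA_COPYBREAK  4096    /* measured best for I/OAT 1.2 */
      #define IOAT2_TCP_DMA_COPYBREAK  2048    /* measured best for I/OAT 2.0 */

      static void ioat_set_tcp_copybreak(u8 version)
      {
              if (version >= IOAT_VER_2_0)
                      sysctl_tcp_dma_copybreak = IOAT2_TCP_DMA_COPYBREAK;
              else
                      sysctl_tcp_dma_copybreak = IOAT1_TCP_DMA_COPYBREAK;
      }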
* I/OAT: Add watchdog/reset functionality to ioatdma (Maciej Sosnowski, 2008-07-22, 1 file changed, -1/+9)
  Due to occasional DMA channel hangs observed for I/OAT versions 1.2 and 2.0,
  a watchdog has been introduced that checks every 2 seconds whether all
  channels are progressing normally. If a stuck channel is detected, the
  driver resets it. The reset is done in two parts: the second part is
  scheduled by the first one to reinitialize the channel after the restart.

  Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
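  The watchdog pattern described above can be sketched with the kernel's
  standard delayed-work API. The structure and field names below are
  hypothetical, for illustration only, not the driver's actual implementation.

      /* Hypothetical sketch of a periodic channel watchdog; real field and
       * function names differ.
       */
      #define WATCHDOG_DELAY  (HZ * 2)        /* check every 2 seconds */

      static void ioat_dma_chan_watchdog(struct work_struct *work)
      {
              struct ioatdma_device *device =
                      container_of(work, struct ioatdma_device, work.work);
              int i;

              for (i = 0; i < device->common.chancnt; i++) {
                      /* Compare each channel's completion address with the
                       * value recorded on the previous pass; if it has not
                       * advanced while descriptors are still outstanding,
                       * treat the channel as hung and kick off the two-part
                       * reset (part one stops the channel, part two is
                       * scheduled to reinitialize it after the restart).
                       */
              }

              schedule_delayed_work(&device->work, WATCHDOG_DELAY);
      }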
* I/OAT: fixups from code comments (Shannon Nelson, 2007-12-17, 1 file changed, -1/+1)
  A few fixups from Andrew's code comments:

  - removed "static inline" forward-declares
  - changed use of min() to min_t()
  - removed some unnecessary NULL initializations
  - removed a couple of BUG() calls

  Fixes this:

      drivers/dma/ioat_dma.c: In function `ioat1_tx_submit':
      drivers/dma/ioat_dma.c:177: sorry, unimplemented: inlining failed in
        call to '__ioat1_dma_memcpy_issue_pending': function body not available
      drivers/dma/ioat_dma.c:268: sorry, unimplemented: called from here

  Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
  Cc: "Williams, Dan J" <dan.j.williams@intel.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* I/OAT: Add support for version 2 of ioatdma device (Shannon Nelson, 2007-11-14, 1 file changed, -20/+12)
  Add support for version 2 of the ioatdma device. This device handles the
  descriptor chain and DCA services slightly differently:

  - Instead of moving the dma descriptors between a busy and an idle chain,
    this new version uses a single circular chain so that we don't have to
    rewrite the next_descriptor pointers as we add new requests, and the
    device doesn't need to re-read the last descriptor.
  - The new device has the DCA tags defined internally instead of needing
    them defined statically.

  Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
  Cc: "Williams, Dan J" <dan.j.williams@intel.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
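  For a rough picture of the difference, a circular chain reduces to
  ring-buffer index arithmetic. The structure below is purely illustrative
  (names and fields are invented, not the driver's), but it shows why the
  hardware next-descriptor links never need rewriting: they are fixed when the
  ring is allocated, and only the software head/tail indices move.

      /* Illustrative ring sketch only; not the driver's real data structure. */
      struct ioat_ring_sketch {
              struct ioat_dma_descriptor *ring;  /* ring[i] links to ring[(i + 1) % size], set once */
              u16 size;                          /* number of descriptors, power of two */
              u16 head;                          /* next slot software will fill */
              u16 tail;                          /* oldest slot not yet completed by hardware */
      };

      /* Slots still free for new requests (one slot is kept empty so a full
       * ring can be told apart from an empty one).
       */
      static inline u16 ioat_ring_space(const struct ioat_ring_sketch *r)
      {
              return r->size - (u16)(r->head - r->tail) - 1;
      }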
* I/OAT: Tighten descriptor setup performance (Shannon Nelson, 2007-10-18, 1 file changed, -3/+3)
  The change to the async_tx interface cost this driver some performance by
  spreading the descriptor setup across several functions, including multiple
  passes over the new descriptor chain. Here we bring the work back into one
  primary function and only do one pass.

  [akpm@linux-foundation.org: cleanups, uninline]
  Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
  Cc: Dan Williams <dan.j.williams@intel.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* I/OAT: clean up error handling and some print messages (Shannon Nelson, 2007-10-18, 1 file changed, -0/+2)
  Make better use of dev_err(), and catch an error where the transaction
  creation might fail.

  Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
  Cc: Dan Williams <dan.j.williams@intel.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* I/OAT: Add DCA services (Shannon Nelson, 2007-10-16, 1 file changed, -0/+3)
  Add code to connect to the DCA driver and provide cpu tags for use by
  drivers that would like to use Direct Cache Access hints.

  [Adrian Bunk] Several Kconfig cleanup items
  [Andrew Morton, Chris Leech] Fix for using cpu_physical_id() even when
    built for uni-processor

  Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
  Acked-by: David S. Miller <davem@davemloft.net>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* I/OAT: Add support for MSI and MSI-X (Shannon Nelson, 2007-10-16, 1 file changed, -0/+12)
  Add support for MSI and MSI-X interrupt handling, including the ability to
  choose the desired interrupt method.

  Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
  Acked-by: David S. Miller <davem@davemloft.net>
  [bunk@kernel.org: drivers/dma/ioat_dma.c: make 3 functions static]
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
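  Interrupt-method selection typically follows the usual PCI fallback order:
  try MSI-X first, then MSI, then legacy INTx. A hedged sketch of that general
  pattern (the ioat_* function name is invented; pci_enable_msix() and
  pci_enable_msi() are the interrupt-setup interfaces of that kernel era):

      /* Sketch of the usual MSI-X -> MSI -> INTx fallback; illustrative only. */
      static int ioat_setup_interrupts_sketch(struct pci_dev *pdev,
                                              struct msix_entry *msix, int nvec)
      {
              int i;

              for (i = 0; i < nvec; i++)
                      msix[i].entry = i;

              if (!pci_enable_msix(pdev, msix, nvec))
                      return 0;       /* one vector per DMA channel */

              if (!pci_enable_msi(pdev))
                      return 1;       /* single MSI vector, shared handler */

              return 2;               /* fall back to legacy INTx */
      }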
* I/OAT: Split PCI startup from DMA handling code (Shannon Nelson, 2007-10-16, 1 file changed, -6/+14)
  Split the general PCI startup from the DMA handling code in order to
  prepare for adding support for DCA services and future versions of the
  ioatdma device.

  [Rusty Russell] Removal of __unsafe() usage.

  Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
  Acked-by: David S. Miller <davem@davemloft.net>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* [IOAT]: Remove redundant struct member to avoid descriptor cache miss (Shannon Nelson, 2007-08-14, 1 file changed, -2/+1)
  The layout for struct ioat_desc_sw is non-optimal and causes an extra
  cacheline to be touched for every descriptor processed. By tightening up
  the struct layout and removing one item, we pull in the fields that get
  used in the speedpath and get a little better performance.

  Before:
  -------
  struct ioat_desc_sw {
        struct ioat_dma_descriptor *    hw;           /*   0     8 */
        struct list_head                node;         /*   8    16 */
        int                             tx_cnt;       /*  24     4 */

        /* XXX 4 bytes hole, try to pack */

        dma_addr_t                      src;          /*  32     8 */
        __u32                           src_len;      /*  40     4 */

        /* XXX 4 bytes hole, try to pack */

        dma_addr_t                      dst;          /*  48     8 */
        __u32                           dst_len;      /*  56     4 */

        /* XXX 4 bytes hole, try to pack */

        /* --- cacheline 1 boundary (64 bytes) --- */
        struct dma_async_tx_descriptor  async_tx;     /*  64   144 */
        /* --- cacheline 3 boundary (192 bytes) was 16 bytes ago --- */

        /* size: 208, cachelines: 4 */
        /* sum members: 196, holes: 3, sum holes: 12 */
        /* last cacheline: 16 bytes */
  };    /* definitions: 1 */

  After:
  ------
  struct ioat_desc_sw {
        struct ioat_dma_descriptor *    hw;           /*   0     8 */
        struct list_head                node;         /*   8    16 */
        int                             tx_cnt;       /*  24     4 */
        __u32                           len;          /*  28     4 */
        dma_addr_t                      src;          /*  32     8 */
        dma_addr_t                      dst;          /*  40     8 */
        struct dma_async_tx_descriptor  async_tx;     /*  48   144 */
        /* --- cacheline 3 boundary (192 bytes) --- */

        /* size: 192, cachelines: 3 */
  };    /* definitions: 1 */

  Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* dmaengine: make clients responsible for managing channels (Dan Williams, 2007-07-13, 1 file changed, -3/+0)
  The current implementation assumes that a channel will only be used by one
  client at a time. In order to enable channel sharing, the dmaengine core is
  changed to a model where clients subscribe to channel-available-events.
  Instead of tracking how many channels a client wants and how many it has
  received, the core just broadcasts the available channels and lets the
  clients optionally take a reference. The core learns about the clients'
  needs at dma_event_callback time.

  In support of multiple operation types, clients can specify a capability
  mask to only be notified of channels that satisfy a certain set of
  capabilities.

  Changelog:
  * removed DMA_TX_ARRAY_INIT, no longer needed
  * dma_client_chan_free -> dma_chan_release: switch to global reference
    counting only at device unregistration time; before, it was also
    happening at client unregistration time
  * clients now return dma_state_client to dmaengine (ack, dup, nak)
  * checkpatch.pl fixes
  * fixup merge with git-ioat

  Cc: Chris Leech <christopher.leech@intel.com>
  Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
  Acked-by: David S. Miller <davem@davemloft.net>
* dmaengine: refactor dmaengine around dma_async_tx_descriptor (Dan Williams, 2007-07-13, 1 file changed, -7/+6)
  The current dmaengine interface defines multiple routines per operation,
  i.e. dma_async_memcpy_buf_to_buf, dma_async_memcpy_buf_to_page etc. Adding
  more operation types (xor, crc, etc) to this model would result in an
  unmanageable number of method permutations.

      Are we really going to add a set of hooks for each DMA engine
      whizbang feature?
        - Jeff Garzik

  The descriptor creation process is refactored using the new common
  dma_async_tx_descriptor structure. Instead of per-driver
  do_<operation>_<dest>_to_<src> methods, drivers integrate
  dma_async_tx_descriptor into their private software descriptor and then
  define a 'prep' routine per operation. The prep routine allocates a
  descriptor and ensures that the tx_set_src, tx_set_dest, tx_submit routines
  are valid. Descriptor creation and submission becomes:

      struct dma_device *dev;
      struct dma_chan *chan;
      struct dma_async_tx_descriptor *tx;

      tx = dev->device_prep_dma_<operation>(chan, len, int_flag)
      tx->tx_set_src(dma_addr_t, tx, index /* for multi-source ops */)
      tx->tx_set_dest(dma_addr_t, tx, index)
      tx->tx_submit(tx)

  In addition to the refactoring, dma_async_tx_descriptor also lays the
  groundwork for defining cross-channel-operation dependencies and a callback
  facility for asynchronous notification of operation completion.

  Changelog:
  * drop dma mapping methods, suggested by Chris Leech
  * fix ioat_dma_dependency_added, also caught by Andrew Morton
  * fix dma_sync_wait, change from Andrew Morton
  * uninline large functions, change from Andrew Morton
  * add tx->callback = NULL to dmaengine calls to interoperate with
    async_tx calls
  * hookup ioat_tx_submit
  * convert channel capabilities to a 'cpumask_t like' bitmap
  * removed DMA_TX_ARRAY_INIT, no longer needed
  * checkpatch.pl fixes
  * make set_src, set_dest, and tx_submit descriptor specific methods
  * fixup git-ioat merge
  * move group_list and phys to dma_async_tx_descriptor

  Cc: Jeff Garzik <jeff@garzik.org>
  Cc: Chris Leech <christopher.leech@intel.com>
  Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
  Acked-by: David S. Miller <davem@davemloft.net>
* [PATCH] drivers/dma trivial annotations (Al Viro, 2006-10-10, 1 file changed, -2/+2)
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [I/OAT]: Move PCI_DEVICE_ID_INTEL_IOAT to linux/pci_ids.h (David S. Miller, 2006-06-17, 1 file changed, -2/+1)
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [I/OAT]: Driver for the Intel(R) I/OAT DMA engine (Chris Leech, 2006-06-17, 1 file changed, -0/+126)
  Adds a new ioatdma driver.

  Signed-off-by: Chris Leech <christopher.leech@intel.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>