path: root/arch/arm/mm/dma-mapping.c
* ARM: dma: replace ISA_DMA_THRESHOLD with a variable
  Russell King, 2011-07-12 (1 file, -3/+32)

  ISA_DMA_THRESHOLD has been unused by non-arch code, so let's now get rid
  of it from ARM by replacing it with arm_dma_zone_mask. Move
  dma_supported() and dma_set_mask() out of line, and have dma_supported()
  check this new variable instead.

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* Merge branches 'fixes', 'pgt-next' and 'versatile' into devel
  Russell King, 2011-03-20 (1 file, -1/+10)

* ARM: pgtable: add pud-level code
  Russell King, 2011-02-21 (1 file, -1/+10)

  Add pud_offset() et al. between the pgd and pmd code in preparation for
  using pgtable-nopud.h rather than 4level-fixup.h. This incorporates a fix
  from Jamie Iles <jamie@jamieiles.com> for uaccess_with_memcpy.c.

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* ARM: 6622/1: fix dma_unmap_sg() documentation
  Linus Walleij, 2011-01-12 (1 file, -1/+1)

  The kerneldoc for this function is at odds with the DMA-API document; the
  latter is authoritative, so fix the kerneldoc to match it.

  Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* Merge branch 'misc' into devel
  Russell King, 2011-01-06 (1 file, -5/+23)

  Conflicts:
    arch/arm/Kconfig
    arch/arm/common/Makefile
    arch/arm/kernel/Makefile
    arch/arm/kernel/smp.c

* Merge branch 'smp' into misc
  Russell King, 2011-01-06 (1 file, -1/+1)

  Conflicts:
    arch/arm/kernel/entry-armv.S
    arch/arm/mm/ioremap.c

* ARM: DMA: add support for DMA debugging
  Russell King, 2011-01-06 (1 file, -3/+21)

  Add ARM support for the DMA debug infrastructure, which allows DMA API
  usage to be debugged.

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* ARM: DMA: Replace page_to_dma()/dma_to_page() with pfn_to_dma()/dma_to_pfn()
  Russell King, 2011-01-03 (1 file, -2/+2)

  Replace the page_to_dma() and dma_to_page() macros with their PFN
  equivalents. This allows us to map parts of memory which do not have a
  struct page allocated to them to bus addresses. This will be used
  internally by dma_alloc_coherent()/dma_alloc_writecombine().

  Build tested on Versatile, OMAP1, IOP13xx and KS8695.

  Tested-by: Janusz Krzysztofik <jkrzyszt@tis.icnet.pl>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

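  For reference, the PFN-based helpers look roughly like this (a sketch;
  __pfn_to_bus()/__bus_to_pfn() name the platform's bus translation and are
  an assumption here):

    static inline dma_addr_t pfn_to_dma(struct device *dev, unsigned long pfn)
    {
            return (dma_addr_t)__pfn_to_bus(pfn);
    }

    static inline unsigned long dma_to_pfn(struct device *dev, dma_addr_t addr)
    {
            return __bus_to_pfn(addr);
    }

  Working in PFNs rather than struct page pointers is what allows memory
  without a struct page to be given a bus address.
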
* ARM: get rid of kmap_high_l1_vipt()
  Nicolas Pitre, 2010-12-19 (1 file, -3/+4)

  Since commit 3e4d3af501 "mm: stack based kmap_atomic()", it is no longer
  necessary to carry an ad hoc version of kmap_atomic() added in commit
  7e5a69e83b "ARM: 6007/1: fix highmem with VIPT cache and DMA" to cope
  with reentrancy. In fact, it is now actively wrong to rely on fixed kmap
  type indices (namely KM_L1_CACHE) as kmap_atomic() totally ignores them
  now and a concurrent instance of it may reuse any slot for any purpose.

  Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>

* ARM: Fix DMA coherent allocator alignment
  Russell King, 2010-11-07 (1 file, -1/+1)

  An off-by-one bug meant that the DMA coherent allocator was aligning to
  one more bit than it should, causing it to run out of available memory
  quicker. Fix this.

  Reported-by: Petr Štetiar <ynezz@true.cz>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

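  The off-by-one is visible by comparing fls(size) with fls(size - 1) for a
  power-of-two size (a standalone illustration of the failure mode, not the
  kernel's code):

    #include <stdio.h>

    /* find-last-set: 1-based index of the highest set bit, 0 for 0 */
    static int fls(unsigned int x)
    {
            return x ? 32 - __builtin_clz(x) : 0;
    }

    int main(void)
    {
            unsigned int size = 4096;  /* one page */

            printf("fls(size)     -> align %u\n", 1u << fls(size));      /* 8192: one bit too many */
            printf("fls(size - 1) -> align %u\n", 1u << fls(size - 1));  /* 4096: correct */
            return 0;
    }

  Rounding every allocation up by one extra bit wastes coherent-area address
  space, which is why the allocator ran out early.
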
* ARM: 6379/1: Assume new page cache pages have dirty D-cache
  Catalin Marinas, 2010-09-19 (1 file, -0/+6)

  There are places in Linux where writes to newly allocated page cache
  pages happen without a subsequent call to flush_dcache_page() (several
  PIO drivers including USB HCD). This patch changes the meaning of
  PG_arch_1 to be PG_dcache_clean and always flushes the D-cache for a
  newly mapped page in update_mmu_cache().

  The patch also sets the PG_arch_1 bit in the DMA cache maintenance
  function to avoid additional cache flushing in update_mmu_cache().

  Tested-by: Rabin Vincent <rabin.vincent@stericsson.com>
  Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* ARM: Ensure PTE modifications via dma_alloc_coherent are visible
  Russell King, 2010-09-08 (1 file, -0/+2)

  Dave Hylands reports:

  | We've observed a problem with dma_alloc_writecombine when the system
  | is under heavy load (heavy bus traffic). We've managed to reduce the
  | problem to the following snippet, which is run from a kthread in a
  | continuous loop:
  |
  |   void *virtAddr;
  |   dma_addr_t physAddr;
  |   unsigned int numBytes = 256;
  |
  |   for (;;) {
  |       virtAddr = dma_alloc_writecombine(NULL,
  |               numBytes, &physAddr, GFP_KERNEL);
  |       if (virtAddr == NULL) {
  |           printk(KERN_ERR "Running out of memory\n");
  |           break;
  |       }
  |
  |       /* access DMA memory allocated */
  |       tmp = virtAddr;
  |       *tmp = 0x77;
  |
  |       /* free DMA memory */
  |       dma_free_writecombine(NULL,
  |               numBytes, virtAddr, physAddr);
  |
  |       ...sleep here...
  |   }
  |
  | By itself, the code will run forever with no issues. However, as we
  | increase our bus traffic (typically using DMA) then the *tmp = 0x77
  | line will eventually cause a page fault. If we add a small delay (a
  | few microseconds) before the *tmp = 0x77, then we don't see a page
  | fault, even under heavy load.

  A dsb() is required after modifying the PTE entries to ensure that they
  will always be visible. Add this dsb().

  Reported-by: Dave Hylands <dhylands@gmail.com>
  Tested-by: Dave Hylands <dhylands@gmail.com>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

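  A minimal sketch of the fix's shape (the loop is illustrative, not the
  exact allocator code; set_pte_ext(), mk_pte() and dsb() are the real ARM
  primitives):

    /* populate the remapped PTEs for the coherent buffer */
    for (i = 0; i < nr_pages; i++)
            set_pte_ext(pte + i, mk_pte(page + i, prot), 0);

    dsb();  /* ensure the PTE writes are visible before the mapping is used */
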
* ARM: DMA coherent allocator: align remapped addresses
  Russell King, 2010-07-27 (1 file, -1/+14)

  The DMA coherent remap area is used to provide an uncached mapping of
  memory for coherency with DMA engines. Currently, we look for any free
  hole which our allocation will fit in with page alignment. However, this
  can lead to fragmentation of the area, and allows small allocations to
  cross L1 entry boundaries. This is undesirable as we want to move towards
  allocating sections of memory.

  Align allocations according to the size, limiting the alignment between
  the page and section sizes.

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

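  The alignment rule amounts to taking the allocation's own order, clamped
  between a page and a section (a standalone sketch of the arithmetic; the
  PAGE_SHIFT/SECTION_SHIFT values are the usual ARM ones, assumed here):

    #include <stdio.h>

    #define PAGE_SHIFT    12   /* 4 KiB pages */
    #define SECTION_SHIFT 20   /* 1 MiB sections */

    static int fls(unsigned int x) { return x ? 32 - __builtin_clz(x) : 0; }

    /* align to the allocation's own size, clamped to [page, section] */
    static unsigned int dma_align(unsigned int size)
    {
            int bit = fls(size - 1);

            if (bit < PAGE_SHIFT)
                    bit = PAGE_SHIFT;
            if (bit > SECTION_SHIFT)
                    bit = SECTION_SHIFT;
            return 1u << bit;
    }

    int main(void)
    {
            printf("%u\n", dma_align(4096));     /* 4096: page alignment */
            printf("%u\n", dma_align(65536));    /* 65536: its own size */
            printf("%u\n", dma_align(4 << 20));  /* 1048576: capped at a section */
            return 0;
    }
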
* ARM: 6186/1: Avoid the CONSISTENT_DMA_SIZE warning on noMMU builds
  Catalin Marinas, 2010-07-01 (1 file, -9/+9)

  This macro is not defined when !CONFIG_MMU, so this patch moves the
  CONSISTENT_* definitions to the CONFIG_MMU section.

  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* ARM: 6007/1: fix highmem with VIPT cache and DMA
  Nicolas Pitre, 2010-04-14 (1 file, -0/+5)

  The VIVT cache of a highmem page is always flushed before the page is
  unmapped. This cache flush is explicit through flush_cache_kmaps() in
  flush_all_zero_pkmaps(), or through __cpuc_flush_dcache_area() in
  kunmap_atomic(). There is also an implicit flush of those highmem pages
  that were part of a process that just terminated, making those pages
  free, as the whole VIVT cache has to be flushed on every task switch.
  Hence unmapped highmem pages need no cache maintenance in that case.

  However, unmapped pages may still be cached with a VIPT cache because the
  cache is tagged with physical addresses. There is no need for a whole
  cache flush during task switching for that reason, and despite the
  explicit cache flushes in flush_all_zero_pkmaps() and kunmap_atomic(),
  some highmem pages that were mapped in user space end up still cached
  even when they become unmapped.

  So we do have to perform cache maintenance on those unmapped highmem
  pages in the context of DMA when using a VIPT cache. Unfortunately, it is
  not possible to perform that cache maintenance using physical addresses,
  as all the L1 cache maintenance coprocessor functions accept virtual
  addresses only. Therefore we have no choice but to set up a temporary
  virtual mapping for that purpose.

  And of course the explicit cache flushing when unmapping a highmem page
  on a system with a VIPT cache can now go, which should increase
  performance.

  While at it, because the code in __flush_dcache_page() has to be modified
  anyway, let's also make sure the mapped highmem pages are pinned with
  kmap_high_get() for the duration of the cache maintenance operation.
  Because kunmap() unmaps highmem pages lazily, it was reported by Gary
  King <GKing@nvidia.com> that those pages ended up being unmapped during
  cache maintenance on SMP, causing segmentation faults.

  Signed-off-by: Nicolas Pitre <nico@marvell.com>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

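  Schematically, the per-page maintenance can then pin an existing mapping
  or build a temporary one (a sketch: kmap_high_get()/kunmap_high() are the
  real pinning primitives, while cache_op() and the make/tear-down helpers
  are hypothetical stand-ins):

    void *vaddr = kmap_high_get(page);      /* pin an existing kmap, if any */

    if (vaddr) {
            cache_op(vaddr, PAGE_SIZE);     /* L1 maintenance needs a vaddr */
            kunmap_high(page);
    } else {
            /* no mapping exists: create a temporary virtual mapping, since
             * the L1 maintenance instructions accept virtual addresses only */
            vaddr = make_temp_mapping(page);        /* hypothetical helper */
            cache_op(vaddr, PAGE_SIZE);
            tear_down_temp_mapping(page);           /* hypothetical helper */
    }
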
* include cleanup: Update gfp.h and slab.h includes to prepare for breaking
  implicit slab.h inclusion from percpu.h
  Tejun Heo, 2010-03-30 (1 file, -1/+1)

  percpu.h is included by sched.h and module.h and thus ends up being
  included when building most .c files. percpu.h includes slab.h which in
  turn includes gfp.h making everything defined by the two files
  universally available and complicating inclusion dependencies.

  percpu.h -> slab.h dependency is about to be removed. Prepare for this
  change by updating users of gfp and slab facilities to include those
  headers directly instead of assuming availability. As this conversion
  needs to touch a large number of source files, the following script was
  used as the basis of the conversion.

    http://userweb.kernel.org/~tj/misc/slabh-sweep.py

  The script does the following.

  * Scan files for gfp and slab usages and update includes such that only
    the necessary includes are there, i.e. if only gfp is used, gfp.h; if
    slab is used, slab.h.

  * When the script inserts a new include, it looks at the include blocks
    and tries to put the new include such that its order conforms to its
    surroundings. It's put in the include block which contains core kernel
    includes, in the same order that the rest are ordered - alphabetical,
    Christmas tree, rev-Xmas-tree, or at the end if there doesn't seem to
    be any matching order.

  * If the script can't find a place to put a new include (mostly because
    the file doesn't have a fitting include block), it prints out an error
    message indicating which .h file needs to be added to the file.

  The conversion was done in the following steps.

  1. The initial automatic conversion of all .c files updated slightly
     over 4000 files, deleting around 700 includes and adding ~480 gfp.h
     and ~3000 slab.h inclusions. The script emitted errors for ~400
     files.

  2. Each error was manually checked. Some didn't need the inclusion, some
     needed manual addition, while adding it to the implementation .h or
     embedding .c file was more appropriate for others. This step added
     inclusions to around 150 files.

  3. The script was run again and the output was compared to the edits
     from #2 to make sure no file was left behind.

  4. Several build tests were done and a couple of problems were fixed,
     e.g. lib/decompress_*.c used malloc/free() wrappers around slab APIs,
     requiring slab.h to be added manually.

  5. The script was run on all .h files, but without automatically editing
     them, as sprinkling gfp.h and slab.h inclusions around .h files could
     easily lead to inclusion dependency hell. Most gfp.h inclusion
     directives were ignored as stuff from gfp.h was usually widely
     available and often used in preprocessor macros. Each slab.h
     inclusion directive was examined and added manually as necessary.

  6. percpu.h was updated not to include slab.h.

  7. Build tests were done on the following configurations and failures
     were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
     distributed build env didn't work with gcov compiles) and a few more
     options had to be turned off depending on archs to make things build
     (like ipr on powerpc/64 which failed due to missing writeq).

     * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
     * powerpc and powerpc64 SMP allmodconfig
     * sparc and sparc64 SMP allmodconfig
     * ia64 SMP allmodconfig
     * s390 SMP allmodconfig
     * alpha SMP allmodconfig
     * um on x86_64 SMP allmodconfig

  8. percpu.h modifications were reverted so that it could be applied as a
     separate patch and serve as bisection point.

  Given the fact that I had only a couple of failures from tests on step 6,
  I'm fairly confident about the coverage of this conversion patch. If
  there is a breakage, it's likely to be something in one of the arch
  headers which should be easily discoverable on most builds of the
  specific arch.

  Signed-off-by: Tejun Heo <tj@kernel.org>
  Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
  Cc: Ingo Molnar <mingo@redhat.com>
  Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>

* Merge branch 'misc2' into devel
  Russell King, 2010-02-25 (1 file, -3/+0)

* ARM: 5927/1: Make delimiters of DMA area globally visible
  Fenkart/Bostandzhyan, 2010-02-15 (1 file, -3/+0)

  Adds the DMA area to the 'virtual memory map' startup message.

  Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com>
  Signed-off-by: Andreas Fenkart <andreas.fenkart@streamunlimited.com>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* ARM: dma-mapping: fix for speculative prefetching
  Russell King, 2010-02-15 (1 file, -38/+30)

  ARMv6 and ARMv7 CPUs can perform speculative prefetching, which makes DMA
  cache coherency handling slightly more interesting. Rather than being
  able to rely upon the CPU not accessing the DMA buffer until DMA has
  completed, we now must expect that the cache could be loaded with
  possibly stale data from the DMA buffer.

  Where DMA involves data being transferred to the device, we clean the
  cache before handing it over for DMA, otherwise we invalidate the buffer
  to get rid of potential writebacks. On DMA completion, if data was
  transferred from the device, we invalidate the buffer to get rid of any
  stale speculative prefetches.

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>

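  The resulting policy, keyed on transfer direction (a sketch; clean_range()
  and invalidate_range() stand in for the real per-CPU cache operations):

    /* CPU -> device handover (map time). */
    static void handover_to_device(void *buf, size_t len,
                                   enum dma_data_direction dir)
    {
            if (dir == DMA_FROM_DEVICE)
                    /* drop dirty lines that could be written back
                     * on top of freshly DMA'd data */
                    invalidate_range(buf, len);
            else
                    /* push CPU writes to RAM before the device reads */
                    clean_range(buf, len);
    }

    /* Device -> CPU handover (completion/unmap time). */
    static void handover_to_cpu(void *buf, size_t len,
                                enum dma_data_direction dir)
    {
            if (dir != DMA_TO_DEVICE)
                    /* discard lines speculatively prefetched mid-DMA */
                    invalidate_range(buf, len);
    }
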
* ARM: dma-mapping: provide per-cpu type map/unmap functions
  Russell King, 2010-02-15 (1 file, -17/+12)

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>

* ARM: dma-mapping: simplify dma_cache_maint_page
  Russell King, 2010-02-15 (1 file, -24/+18)

  dma_cache_maint_contiguous is now simple enough to live inside
  dma_cache_maint_page, so move it there.

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>

* ARM: dma-mapping: move selection of page ops out of dma_cache_maint_contiguous
  Russell King, 2010-02-15 (1 file, -29/+30)

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>

* ARM: dma-mapping: push buffer ownership down into dma-mapping.c
  Russell King, 2010-02-15 (1 file, -4/+30)

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>

* ARM: dma-mapping: introduce the idea of buffer ownership
  Russell King, 2010-02-15 (1 file, -5/+8)

  The DMA API has the notion of buffer ownership; make it explicit in the
  ARM implementation of this API. This gives us a set of hooks to allow us
  to deal with CPU cache issues arising from non-cache-coherent DMA.

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
  Tested-By: Jamie Iles <jamie@jamieiles.com>

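  From a driver's perspective, ownership changes hands at the existing API
  boundaries; for example (standard DMA API calls, shown for illustration
  with a hypothetical dev/buf/len):

    dma_addr_t handle;

    /* CPU owns buf; hand it to the device for a transmit. */
    handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

    /* ... the device owns the buffer while DMA is in flight;
     * the CPU must not touch it ... */

    /* Transfer complete: ownership returns to the CPU. */
    dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);
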
* ARM: dma-mapping: switch ARMv7 DMA mappings to retain 'memory' attribute
  Russell King, 2009-11-24 (1 file, -2/+2)

  On ARMv7, it is invalid to map the same physical address multiple times
  with different memory types. Since system RAM is already mapped as
  'memory', subsequent remapping of it must retain this attribute. However,
  DMA memory maps it as "strongly ordered". Fix this by introducing
  'pgprot_dmacoherent()' which provides the necessary page table bits for
  DMA mappings.

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  Acked-by: Greg Ungerer <gerg@uclinux.org>
  Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>

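  The helper modifies the memory-type bits of an existing protection value
  rather than composing one from scratch; roughly (the L_PTE_MT_* names are
  recalled from this era of the ARM code and should be treated as an
  assumption):

    /* keep the mapping 'normal memory', but uncached/bufferable, instead
     * of remapping system RAM as strongly ordered */
    #define pgprot_dmacoherent(prot) \
            __pgprot_modify(prot, L_PTE_MT_MASK, L_PTE_MT_BUFFERABLE)
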
* ARM: dma-mapping: get rid of setting/clearing the reserved page bit
  Russell King, 2009-11-24 (1 file, -20/+3)

  It's unnecessary; x86 doesn't do it, and ALSA doesn't require it anymore.

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  Acked-by: Greg Ungerer <gerg@uclinux.org>

* ARM: dma-mapping: Factor out noMMU dma buffer allocation code
  Russell King, 2009-11-24 (1 file, -30/+15)

  This entirely separates the DMA coherent buffer remapping code from the
  allocation code, and gets rid of the duplicate copy in the !MMU section.

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  Acked-by: Greg Ungerer <gerg@uclinux.org>

* ARM: dma-mapping: clean up coherent arch dma allocation
  Russell King, 2009-11-24 (1 file, -19/+12)

  IXP23xx added support for coherent DMA architectures via an exception in
  dma_alloc_coherent(). This is a subset of what goes on in __dma_alloc(),
  and there is no reason why dma_alloc_writecombine() should not be given
  the same treatment (except, maybe, that IXP23xx doesn't use it). We can
  deal with this better by moving the arch_is_coherent() test inside
  __dma_alloc() and killing the code duplication.

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  Acked-by: Greg Ungerer <gerg@uclinux.org>

* ARM: dma-mapping: move consistent_init into CONFIG_MMU section
  Russell King, 2009-11-24 (1 file, -40/+38)

  No point wrapping the contents of this function with #ifdef CONFIG_MMU
  when we can place it and the core_initcall() entirely within the existing
  conditional block.

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  Acked-by: Greg Ungerer <gerg@uclinux.org>

* ARM: dma-mapping: factor dma_free_coherent() common code
  Russell King, 2009-11-24 (1 file, -73/+68)

  We effectively have three implementations of dma_free_coherent() mixed up
  in the code: the incoherent MMU, coherent MMU and noMMU versions. The
  coherent MMU and noMMU versions are actually functionally identical. The
  incoherent MMU version is almost the same, but with the additional step
  of unmapping the secondary mapping. Separate out this additional step
  into __dma_free_remap() and simplify the resulting dma_free_coherent()
  code.

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  Acked-by: Greg Ungerer <gerg@uclinux.org>

* ARM: dma-mapping: fix nommu dma_alloc_coherent()
  Russell King, 2009-11-24 (1 file, -16/+9)

  The nommu version of dma_alloc_coherent was using kmalloc/kfree to manage
  the memory. dma_alloc_coherent() is expected to work with a granularity
  of a page, so this is wrong. Fix it by using the helper functions now
  provided.

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  Acked-by: Greg Ungerer <gerg@uclinux.org>

* ARM: dma-mapping: fix coherent arch dma_alloc_coherent()
  Russell King, 2009-11-24 (1 file, -8/+10)

  The coherent architecture dma_alloc_coherent was using kmalloc/kfree to
  manage the memory. dma_alloc_coherent() is expected to work with a
  granularity of a page, so this is wrong. Fix it by using the helper
  functions now provided.

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  Acked-by: Greg Ungerer <gerg@uclinux.org>

* ARM: dma-mapping: functions to allocate/free a coherent buffer
  Russell King, 2009-11-24 (1 file, -44/+66)

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  Acked-by: Greg Ungerer <gerg@uclinux.org>

* ARM: dma-mapping: split out vmregion code from dma coherent mapping code
  Russell King, 2009-11-24 (1 file, -119/+13)

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  Acked-by: Greg Ungerer <gerg@uclinux.org>

* ARM: Use GFP_DMA only for masks _less_ than 32-bit
  Russell King, 2009-10-25 (1 file, -2/+2)

  We were using GFP_DMA for masks other than 0xffffffff, which is wrong
  when some masks are initialized to 0xffffffffffffffff. This caused such
  masks to obtain memory from the precious DMA pool.

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

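  The test reduces to a strict comparison when deciding whether an
  allocation must come from the DMA zone (a sketch of the logic, not the
  verbatim diff):

    u64 mask = 0xffffffffffffffffULL;  /* a 64-bit mask must NOT imply GFP_DMA */
    gfp_t gfp = GFP_KERNEL;

    if (mask < 0xffffffffULL)          /* only masks below 32 bits need the DMA zone */
            gfp |= GFP_DMA;
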
* nommu: Add noMMU support to the DMA API
  Catalin Marinas, 2009-07-24 (1 file, -22/+72)

  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

* [ARM] introduce dma_cache_maint_page()
  Nicolas Pitre, 2009-03-15 (1 file, -1/+71)

  This is a helper to be used by the DMA mapping API to handle cache
  maintenance for memory identified by a page structure instead of a
  virtual address.

  Those pages may or may not be highmem pages, and when they're highmem
  pages, they may or may not be virtually mapped. When they're not mapped
  then there is no L1 cache to worry about. But even in that case the L2
  cache must be processed, since unmapped highmem pages can still be L2
  cached.

  Signed-off-by: Nicolas Pitre <nico@marvell.com>

* [ARM] Fix virtual to physical translation macro corner cases
  Russell King, 2009-03-12 (1 file, -8/+12)

  The current use of these macros works well when the conversion is
  entirely linear. In this case, we can be assured that the following holds
  true:

    __va(p + s) - s = __va(p)

  However, this is not always the case, especially when there is a
  non-linear conversion (eg, when there is a 3.5GB hole in memory). In this
  case, if 's' is the size of the region (eg, PAGE_SIZE) and 'p' is the
  final page, the above is most definitely not true.

  So, we must ensure that __va() and __pa() are only used with valid kernel
  direct mapped RAM addresses. This patch tweaks the code to achieve this.

  Tested-by: Charles Moschel <fred99@carolina.rr.com>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

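  A standalone illustration of why the identity breaks once the translation
  is non-linear (the memory layout here is invented for the example):

    #include <stdio.h>
    #include <stdint.h>

    /* Invented layout: bank 0 is phys 0x00000000-0x0fffffff; bank 1 starts
     * at phys 0x10000000 but is mapped with a different offset, i.e. the
     * virt<->phys translation is piecewise, not linear. */
    static uint32_t va(uint32_t p)
    {
            if (p < 0x10000000u)
                    return p + 0xc0000000u;   /* bank 0 */
            return p + 0xc8000000u;           /* bank 1: non-linear step */
    }

    int main(void)
    {
            uint32_t p = 0x0ffff000u, s = 0x1000u;  /* final page of bank 0 */

            /* In the linear case va(p + s) - s == va(p); here it is not: */
            printf("va(p)       = %#x\n", va(p));          /* 0xcffff000 */
            printf("va(p+s) - s = %#x\n", va(p + s) - s);  /* 0xd7fff000 */
            return 0;
    }
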
* NOMMU: Rename ARM's struct vm_region
  David Howells, 2009-01-08 (1 file, -14/+14)

  Rename ARM's struct vm_region so that I can introduce my own global
  version for NOMMU. It's feasible that the ARM version may wish to use my
  global one instead.

  The NOMMU vm_region struct defines areas of the physical memory map that
  are under mmap. This may include chunks of RAM or regions of memory
  mapped devices, such as flash. It is also used to retain copies of file
  content so that shareable private memory mappings of files can be made.
  As such, it may be compatible with what is described in the banner
  comment for ARM's vm_region struct.

  Signed-off-by: David Howells <dhowells@redhat.com>

* [ARM] dma: don't touch cache on dma_*_for_cpu()
  Russell King, 2008-09-30 (1 file, -6/+2)

  As per the dma_unmap_* calls, we don't touch the cache when a DMA buffer
  transitions from device to CPU ownership. Presently, no problems have
  been identified with speculative cache prefetching, which in itself is a
  new feature in later architectures. We may have to revisit the DMA API
  later for these architectures anyway.

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* [ARM] dma: Reduce to one dma_sync_sg_* implementation
  Russell King, 2008-09-29 (1 file, -2/+8)

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* [ARM] dma: Reduce to one dma_map_sg()/dma_unmap_sg() implementation
  Russell King, 2008-09-25 (1 file, -8/+16)

  No point having two of these; dma_map_page() can do all the work for us.

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* [ARM] Update dma_map_sg()/dma_unmap_sg() API
  Russell King, 2008-09-25 (1 file, -0/+92)

  Update the ARM DMA scatter-gather APIs for the scatterlist changes.

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* [ARM] dma: rename consistent.c to dma-mapping.c
  Russell King, 2008-09-25 (1 file, -0/+514)

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>