path: root/include/asm-generic/mm_hooks.h
author Dave Hansen <dave@linux.vnet.ibm.com> 2013-01-22 13:24:33 -0800
committer H. Peter Anvin <hpa@linux.intel.com> 2013-01-25 16:33:23 -0800
commit d765653445129b7c476758040e3079480775f80a (patch)
tree b79e3e051de83e6326ad8d3bc08ad3c1c0eb1544 /include/asm-generic/mm_hooks.h
parent f3c4fbb68e93b10c781c0cc462a9d80770244da6 (diff)
x86, mm: Create slow_virt_to_phys()
This is necessary because __pa() does not work on some kinds of memory, like vmalloc() or the alloc_remap() areas on 32-bit NUMA systems. We have some functions to do conversions _like_ this in the vmalloc() code (like vmalloc_to_page()), but they do not work on sizes other than 4k pages. We would potentially need to be able to handle all the page sizes that we use for the kernel linear mapping (4k, 2M, 1G).

In practice, on 32-bit NUMA systems, the percpu areas get stuck in the alloc_remap() area. Any __pa() call on them will break and basically return garbage.

This patch introduces a new function, slow_virt_to_phys(), which walks the kernel page tables on x86 and should do precisely the same logical thing as __pa(), but actually work on a wider range of memory. It should work on the normal linear mapping, vmalloc(), kmap(), etc...

Signed-off-by: Dave Hansen <dave@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/r/20130122212433.4D1FCA62@kernel.stglabs.ibm.com
Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Diffstat (limited to 'include/asm-generic/mm_hooks.h')
0 files changed, 0 insertions, 0 deletions