author     Michael Ellerman <mpe@ellerman.id.au>  2019-05-14 23:00:58 +1000
committer  Michael Ellerman <mpe@ellerman.id.au>  2019-05-15 11:13:35 +1000
commit     7338874c337f01dc84597a5500a588732725ffc6
tree       a8f2771b8ccaabcfcb59aa3916b48d15036c14b5  /arch/powerpc
parent     397d2300b08cdee052053e362018cdb6dd65eea2
powerpc/mm: Fix crashes with hugepages & 4K pages
The recent commit to clean up ifdefs in the hugepage initialisation led
to crashes when using 4K pages, as reported by Sachin:
BUG: Kernel NULL pointer dereference at 0x0000001c
Faulting instruction address: 0xc000000001d1e58c
Oops: Kernel access of bad area, sig: 11 [#1]
LE PAGE_SIZE=4K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
...
CPU: 3 PID: 4635 Comm: futex_wake04 Tainted: G W O 5.1.0-next-20190507-autotest #1
NIP: c000000001d1e58c LR: c000000001d1e54c CTR: 0000000000000000
REGS: c000000004937890 TRAP: 0300
MSR: 8000000000009033 <SF,EE,ME,IR,DR,RI,LE> CR: 22424822 XER: 00000000
CFAR: c00000000183e9e0 DAR: 000000000000001c DSISR: 40000000 IRQMASK: 0
...
NIP kmem_cache_alloc+0xbc/0x5a0
LR kmem_cache_alloc+0x7c/0x5a0
Call Trace:
huge_pte_alloc+0x580/0x950
hugetlb_fault+0x9a0/0x1250
handle_mm_fault+0x490/0x4a0
__do_page_fault+0x77c/0x1f00
do_page_fault+0x28/0x50
handle_page_fault+0x18/0x38
This is caused by us trying to allocate from a NULL kmem cache in
__hugepte_alloc(). The kmem cache is NULL because it was never
allocated in hugetlbpage_init(), because add_huge_page_size() returned
an error.
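For context, the allocation happens along these lines (a paraphrased
sketch of the shape of __hugepte_alloc(), not a verbatim quote of the
kernel source):

	cachep = PGT_CACHE(pdshift - pshift);	/* NULL: cache never created */
	new = kmem_cache_alloc(cachep, pgtable_gfp_flags(mm, GFP_KERNEL));

kmem_cache_alloc() reads fields of the cache descriptor before
allocating, so a NULL cachep faults at a small offset into the struct,
consistent with the DAR value 0x0000001c in the oops above.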
The reason add_huge_page_size() returned an error is a simple typo: we
are calling check_and_get_huge_psize(size) when we should be passing
shift instead.
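The mix-up is easy to see numerically. Here is a minimal userspace
model; the lookup table and names are illustrative, not the kernel's
mmu_psize_defs[]:

#include <stdio.h>

/* Toy version of the shift -> mmu_psize lookup: returns an index
 * for a supported page shift, -1 otherwise. */
static int shift_to_psize(unsigned long long shift)
{
	switch (shift) {
	case 12: return 0;	/* 4K  */
	case 16: return 1;	/* 64K */
	case 24: return 2;	/* 16M */
	case 34: return 3;	/* 16G */
	default: return -1;
	}
}

int main(void)
{
	unsigned long long size = 1ULL << 24;		/* 16M hugepage */
	unsigned int shift = __builtin_ctzll(size);	/* 24, like __ffs(size) */

	/* The typo passed the byte count where a shift was expected: */
	printf("lookup(size)  = %d\n", shift_to_psize(size));	/* -1: error path */
	printf("lookup(shift) = %d\n", shift_to_psize(shift));	/*  2: 16M found  */
	return 0;
}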
The fact that we're able to trigger this path when the kmem caches are
NULL is a separate bug, i.e. we should not advertise any hugepage sizes
if we haven't set up the required caches for them.
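A minimal sketch of the guard that invariant implies, reusing the names
from hugetlbpage_init(); the placement is hypothetical, not the actual
follow-up fix:

	/* Only advertise a hugepage size once its backing cache exists;
	 * sizes that fit in a single PMD/PUD entry need no extra cache. */
	if (pdshift > shift && !PGT_CACHE(pdshift - shift))
		continue;
	hugetlb_add_hstate(shift - PAGE_SHIFT);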
This was only seen with 4K pages; with 64K pages we don't need to
allocate any extra kmem caches because the 16M hugepage just occupies
a single entry at the PMD level.
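The arithmetic behind that, using assumed 64K hash page-table geometry
(illustrative values, not quoted from this commit):

#include <stdio.h>

int main(void)
{
	unsigned int page_shift = 16;		/* 64K base pages */
	unsigned int pte_index_size = 8;	/* 2^8 PTEs per page-table page */
	unsigned int pmd_shift = page_shift + pte_index_size;	/* 24 */

	/* Each PMD entry maps 2^24 bytes = 16M, so a 16M hugepage is
	 * exactly one PMD entry and needs no hugepd kmem cache. With
	 * 4K base pages a PMD entry maps less than 16M, hence the
	 * extra caches that were missing here. */
	printf("PMD entry maps %lu MB\n", (1UL << pmd_shift) >> 20);
	return 0;
}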
Fixes: 723f268f19da ("powerpc/mm: cleanup ifdef mess in add_huge_page_size()")
Reported-by: Sachin Sant <sachinp@linux.ibm.com>
Tested-by: Sachin Sant <sachinp@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Diffstat (limited to 'arch/powerpc')
-rw-r--r--	arch/powerpc/mm/hugetlbpage.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index c5c9ff2d7afc..b5d92dc32844 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -556,7 +556,7 @@ static int __init add_huge_page_size(unsigned long long size)
 	if (size <= PAGE_SIZE || !is_power_of_2(size))
 		return -EINVAL;
 
-	mmu_psize = check_and_get_huge_psize(size);
+	mmu_psize = check_and_get_huge_psize(shift);
 	if (mmu_psize < 0)
 		return -EINVAL;
 