path: root/arch/powerpc/mm
Commit message   Author   Age   Files   Lines
...
| * powerpc/mm: Avoid unnecessary test and reduce code sizeChristophe Leroy2018-06-041-9/+4
| | The value of no_selective_tlbil, and hence whether steal_all_contexts() or steal_context_up() is used, depends on the subarch and does not change at runtime. Only the 8xx uses steal_all_contexts(), and CONFIG_PPC_8xx is exclusive of other processors. This patch replaces the test of the no_selective_tlbil global variable with a test of the CONFIG_PPC_8xx selection. This avoids the runtime test and removes unnecessary code. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * powerpc/mm: constify FIRST_CONTEXT in mmu_context_nohashChristophe Leroy2018-06-041-19/+18
| | | | | | | | | | | | | | | | First context is now 1 for all supported platforms, so it can be made a constant. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * powerpc/mm/hash: hard disable irq in the SLB insert pathAneesh Kumar K.V2018-06-031-0/+13
| | When inserting SLB entries for an EA above 512TB, we need to hard disable irqs. This makes sure we don't take a PMU interrupt that could touch a user space address via a stack dump while the insert is in progress. Also add a comment explaining why we don't need a context synchronizing isync with slbmte. Fixes: f384796c4 ("powerpc/mm: Add support for handling > 512TB address in SLB miss") Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * powerpc/mm/hugetlb: Update hugetlb related locksAneesh Kumar K.V2018-06-032-15/+30
| | With the split pmd page table lock enabled, we don't use mm->page_table_lock when updating pmd entries. This patch updates the hugetlb path to use the right lock when inserting huge page directory entries into the page table. For example, if we are using hugepd and inserting a hugepd entry at the pmd level, we use pmd_lockptr, which depending on config can be a split pmd lock. For updating the huge page directory entries themselves we use mm->page_table_lock; the helper huge_pte_lockptr() exists for that. Fixes: 675d99529 ("powerpc/book3s64: Enable split pmd ptlock") Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * powerpc/64s: Fix compiler store ordering to SLB shadow areaNicholas Piggin2018-06-031-4/+4
| | | | | | | | | | | | | | | | | | | | | | | | | | | | The stores to update the SLB shadow area must be made as they appear in the C code, so that the hypervisor does not see an entry with mismatched vsid and esid. Use WRITE_ONCE for this. GCC has been observed to elide the first store to esid in the update, which means that if the hypervisor interrupts the guest after storing to vsid, it could see an entry with old esid and new vsid, which may possibly result in memory corruption. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
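A minimal sketch of the store-ordering idea in plain C; the struct layout and function name are illustrative, not the exact mainline code. WRITE_ONCE() keeps the compiler from eliding or reordering these stores relative to each other, so the hypervisor can never observe a new vsid paired with a stale, still-valid esid.

        #include <linux/compiler.h>     /* WRITE_ONCE() */

        struct shadow_slb_entry {       /* illustrative layout */
                unsigned long esid;
                unsigned long vsid;
        };

        static void shadow_slb_set(struct shadow_slb_entry *e,
                                   unsigned long esid, unsigned long vsid)
        {
                WRITE_ONCE(e->esid, 0);         /* 1: invalidate the entry first */
                WRITE_ONCE(e->vsid, vsid);      /* 2: publish the new vsid */
                WRITE_ONCE(e->esid, esid);      /* 3: re-validate with the new esid */
        }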
| * powerpc/64s/radix: flush remote CPUs out of single-threaded mm_cpumaskNicholas Piggin2018-06-031-27/+121
| | When a single-threaded process has a non-local mm_cpumask, try to use that to flush the TLBs out of the other CPUs in the cpumask. An IPI is used for clearing remote CPUs for a few reasons:
| | - An IPI can end lazy TLB use of the mm, which is required to prevent TLB entries being created on the remote CPU. The alternative is to drop lazy TLB switching completely, which costs 7.5% in a context switch ping-pong test between a process and a kernel idle thread.
| | - An IPI can have remote CPUs flush the entire PID, but the local CPU can flush a specific VA. tlbie would require over-flushing of the local CPU (where the process is running).
| | - A single threaded process that is migrated to a different CPU is likely to have a relatively small mm_cpumask, so IPI is reasonable.
| | No other thread can concurrently switch to this mm, because it must have been given a reference to mm_users by the current thread before it can use_mm. mm_users can be asynchronously incremented (by mm_activate or mmget_not_zero), but those users must use remote mm access and can't use_mm or access user address space. Existing code makes this assumption already, for example sparc64 has reset mm_cpumask using this condition since the start of history, see arch/sparc/kernel/smp_64.c.
| | This reduces tlbies for a kernel compile workload from 0.90M to 0.12M; tlbiels are increased significantly due to the PID flushing used to clean up remote CPUs and due to increased local flushes (PID flushes take 128 tlbiels vs 1 tlbie). Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
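A hedged sketch of the IPI approach described above; the function names and the exact flush performed on each target CPU are illustrative, not the literal patch:

        #include <linux/smp.h>
        #include <linux/mm.h>

        /* Runs on each remote CPU still set in mm_cpumask(mm). */
        static void exit_lazy_flush_one(void *arg)
        {
                struct mm_struct *mm = arg;

                /* Stop using mm lazily and flush this CPU's TLB entries for
                 * the mm's PID (tlbiel loop); details omitted in this sketch. */
                (void)mm;
        }

        /* Called by the single remaining user thread before it falls back to
         * purely local flushes. */
        static void exit_lazy_flush_remote(struct mm_struct *mm)
        {
                smp_call_function_many(mm_cpumask(mm), exit_lazy_flush_one,
                                       mm, 1 /* wait for completion */);
        }

smp_call_function_many() skips the calling CPU, which matches the description: remote CPUs take the IPI and flush their whole PID, while the local CPU keeps flushing specific addresses.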
| * powerpc/64s/radix: optimise pte_updateNicholas Piggin2018-06-032-3/+14
| | | | | | | | | | | | | | | | | | | | | | | | | | | | Implementing pte_update with pte_xchg (which uses cmpxchg) is inefficient. A single larx/stcx. works fine, no need for the less efficient cmpxchg sequence. Then remove the memory barriers from the operation. There is a requirement for TLB flushing to load mm_cpumask after the store that reduces pte permissions, which is moved into the TLB flush code. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
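For context, the generic shape being replaced is a compare-and-exchange retry loop like the hedged sketch below; on radix a single larx/stcx. sequence, without the full barriers implied by cmpxchg, is sufficient.

        #include <linux/atomic.h>
        #include <linux/compiler.h>

        /* Illustrative only: a cmpxchg-style read-modify-write of a pte-sized word. */
        static unsigned long pte_update_cas(unsigned long *ptep,
                                            unsigned long clr, unsigned long set)
        {
                unsigned long old, tmp;

                do {
                        old = READ_ONCE(*ptep);
                        tmp = (old & ~clr) | set;
                } while (cmpxchg(ptep, old, tmp) != old);

                return old;
        }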
| * powerpc/64s/radix: avoid ptesync after set_pte and ptep_set_access_flagsNicholas Piggin2018-06-031-1/+1
| | The ISA suggests ptesync after setting a pte, to prevent a table walk initiated by a subsequent access from missing that store and causing a spurious fault. This is an architectural allowance that permits an implementation's page table walker to be incoherent with the store queue. However there is no correctness problem in taking a spurious fault in userspace -- the kernel copes with these at any time, so the updated pte will be found eventually. Spurious kernel faults on vmap memory must be avoided, so a ptesync is put into flush_cache_vmap. On POWER9 so far I have not found a measurable window where this can result in more minor faults, so as an optimisation, remove the costly ptesync from pte updates. If an implementation benefits from ptesync, it would be better to add it back in update_mmu_cache, so it's not done for things like fork(2). fork --fork --exec benchmark improved 5.2% (12400->13100). Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * powerpc/64s/radix: prefetch user address in update_mmu_cacheNicholas Piggin2018-06-032-2/+5
| | | | | | | | | | | | | | | | | | Prefetch the faulting address in update_mmu_cache to give the page table walker perhaps 100 cycles head start as locks are dropped and the interrupt completed. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
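Roughly what the change amounts to (a sketch, not the exact hunk): issue a prefetch for the faulting user address before the fault path drops its locks and returns from the interrupt.

        #include <linux/prefetch.h>
        #include <linux/mm.h>

        void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
                              pte_t *ptep)
        {
                /* Start the cache fill / table walk early; by the time the
                 * interrupted context retries the access, the data may already
                 * be on its way in. */
                prefetch((void *)address);

                /* ... existing hash preload / radix no-op logic ... */
        }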
| * powerpc/64s/radix: do not flush TLB when relaxing accessNicholas Piggin2018-06-031-1/+6
| | Radix flushes the TLB when updating ptes to increase permissiveness of protection (increase access authority). Book3S does not require TLB flushing in this case, and it is not done on hash. This patch avoids the flush for radix. From Power ISA v3.0B, p.1090: "Setting a Reference or Change Bit or Upgrading Access Authority (PTE Subject to Atomic Hardware Updates): If the only change being made to a valid PTE that is subject to atomic hardware updates is to set the Reference or Change bit to 1 or to add access authorities, a simpler sequence suffices because the translation hardware will refetch the PTE if an access is attempted for which the only problems were reference and/or change bits needing to be set or insufficient access authority." The nest MMU on POWER9 does not re-fetch the PTE after such an access attempt before faulting, so address spaces with a coprocessor attached will continue to flush in these cases. This reduces tlbies for a kernel compile workload from 1.28M to 0.95M, and tlbiels from 20.17M to 19.68M. fork --fork --exec benchmark improved 2.77% (12000->12300). Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * powerpc/mm/radix: Change pte relax sequence to handle nest MMU hangAneesh Kumar K.V2018-06-033-7/+10
| | When relaxing access (read -> read_write update), the pte needs to be marked invalid to handle a nest MMU bug. We also need to do a tlb flush after the pte is marked invalid, before updating the pte with the new access bits. We also move the tlb flush to the platform specific __ptep_set_access_flags. This will help us get rid of unnecessary tlb flushes on BOOK3S 64 later; we don't do that in this patch. This also helps in avoiding multiple tlbies with a coprocessor attached. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * powerpc/mm: Change function prototypeAneesh Kumar K.V2018-06-033-8/+24
| | In a later patch, we use the vma and psize to do the tlb flush. Do the prototype update in a separate patch to make the review easy. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * powerpc/mm/radix: Move function from radix.h to pgtable-radix.cAneesh Kumar K.V2018-06-031-0/+22
| | In a later patch we will update this function in a way that requires it to be moved to pgtable-radix.c. Keeping the function in radix.h results in compile errors as below.
| | ./arch/powerpc/include/asm/book3s/64/radix.h: In function ‘radix__ptep_set_access_flags’:
| | ./arch/powerpc/include/asm/book3s/64/radix.h:196:28: error: dereferencing pointer to incomplete type ‘struct vm_area_struct’ struct mm_struct *mm = vma->vm_mm;
| | ./arch/powerpc/include/asm/book3s/64/radix.h:204:6: error: implicit declaration of function ‘atomic_read’; did you mean ‘__atomic_load’? [-Werror=implicit-function-declaration] atomic_read(&mm->context.copros) > 0) {
| | ./arch/powerpc/include/asm/book3s/64/radix.h:204:21: error: dereferencing pointer to incomplete type ‘struct mm_struct’ atomic_read(&mm->context.copros) > 0) {
| | Instead of fixing the header dependencies, we move the function to pgtable-radix.c. The function is also now too large to be a static inline. Doing the move in a separate patch helps in review. No functional change in this patch, only code movement. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * powerpc/mm/hugetlb: Update huge_ptep_set_access_flags to call __ptep_set_access_flags directlyAneesh Kumar K.V2018-06-031-2/+31
| | In a later patch, we want to update __ptep_set_access_flags to take a page size argument. This makes ptep_set_access_flags only work with mmu_virtual_psize. To simplify the code, make huge_ptep_set_access_flags directly call __ptep_set_access_flags so that we can compute the hugetlb page size in the hugetlb function. Now that ptep_set_access_flags won't be called for hugetlb, remove the is_vm_hugetlb_page() check and add the assert of pte lock unconditionally. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * powerpc: Fix build by disabling attribute-alias warning for SYSCALL_DEFINExChristophe Leroy2018-06-031-0/+4
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | GCC 8.1 emits warnings such as the following. As arch/powerpc code is built with -Werror, this breaks the build with GCC 8.1. In file included from arch/powerpc/kernel/pci_64.c:23: ./include/linux/syscalls.h:233:18: error: 'sys_pciconfig_iobase' alias between functions of incompatible types 'long int(long int, long unsigned int, long unsigned int)' and 'long int(long int, long int, long int)' [-Werror=attribute-alias] asmlinkage long sys##name(__MAP(x,__SC_DECL,__VA_ARGS__)) \ ^~~ ./include/linux/syscalls.h:222:2: note: in expansion of macro '__SYSCALL_DEFINEx' __SYSCALL_DEFINEx(x, sname, __VA_ARGS__) This patch inhibits those warnings. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> [mpe: Trim change log] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
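The general shape of such a workaround is to silence the warning only around the syscall definitions. The specific pragma list and the subpage_prot example below are illustrative of the approach, not a quote of the patch:

        #include <linux/syscalls.h>

        #pragma GCC diagnostic push
        #pragma GCC diagnostic ignored "-Wpragmas"          /* older GCCs don't know the next option */
        #pragma GCC diagnostic ignored "-Wattribute-alias"

        SYSCALL_DEFINE3(subpage_prot, unsigned long, addr,
                        unsigned long, len, u32 __user *, map)
        {
                /* ... syscall body unchanged ... */
                return 0;
        }

        #pragma GCC diagnostic pop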
| * Merge branch 'topic/ppc-kvm' into nextMichael Ellerman2018-06-032-0/+208
| |\ | | | | | | | | | Merge in some commits we're sharing with the kvm-ppc tree.
| | * powerpc: Export tm_enable()/tm_disable/tm_abort() APIsSimon Guo2018-05-241-0/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | This patch exports tm_enable()/tm_disable/tm_abort() APIs, which will be used for PR KVM transactional memory logic. Signed-off-by: Simon Guo <wei.guo.simon@gmail.com> Reviewed-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| | * powerpc/mm/radix: implement LPID based TLB flushes to be used by KVMNicholas Piggin2018-05-171-0/+207
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Implement a local TLB flush for invalidating an LPID with variants for process or partition scope. And a global TLB flush for invalidating a partition scoped page of an LPID. These will be used by KVM in subsequent patches. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | powerpc/sparse: Fix plain integer as NULL pointer warningMathieu Malaterre2018-05-252-6/+6
| | | Trivial fix to remove the following sparse warnings:
| | | arch/powerpc/kernel/module_32.c:112:74: warning: Using plain integer as NULL pointer
| | | arch/powerpc/kernel/module_32.c:117:74: warning: Using plain integer as NULL pointer
| | | drivers/macintosh/via-pmu.c:1155:28: warning: Using plain integer as NULL pointer
| | | drivers/macintosh/via-pmu.c:1230:20: warning: Using plain integer as NULL pointer
| | | drivers/macintosh/via-pmu.c:1385:36: warning: Using plain integer as NULL pointer
| | | drivers/macintosh/via-pmu.c:1752:23: warning: Using plain integer as NULL pointer
| | | drivers/macintosh/via-pmu.c:2084:19: warning: Using plain integer as NULL pointer
| | | drivers/macintosh/via-pmu.c:2110:32: warning: Using plain integer as NULL pointer
| | | drivers/macintosh/via-pmu.c:2167:19: warning: Using plain integer as NULL pointer
| | | drivers/macintosh/via-pmu.c:2183:19: warning: Using plain integer as NULL pointer
| | | drivers/macintosh/via-pmu.c:277:20: warning: Using plain integer as NULL pointer
| | | arch/powerpc/platforms/powermac/setup.c:155:67: warning: Using plain integer as NULL pointer
| | | arch/powerpc/platforms/powermac/setup.c:247:27: warning: Using plain integer as NULL pointer
| | | arch/powerpc/platforms/powermac/setup.c:249:27: warning: Using plain integer as NULL pointer
| | | arch/powerpc/platforms/powermac/setup.c:252:37: warning: Using plain integer as NULL pointer
| | | arch/powerpc/mm/tlb_hash32.c:127:21: warning: Using plain integer as NULL pointer
| | | arch/powerpc/mm/tlb_hash32.c:148:21: warning: Using plain integer as NULL pointer
| | | arch/powerpc/mm/tlb_hash32.c:44:21: warning: Using plain integer as NULL pointer
| | | arch/powerpc/mm/tlb_hash32.c:57:21: warning: Using plain integer as NULL pointer
| | | arch/powerpc/mm/tlb_hash32.c:87:21: warning: Using plain integer as NULL pointer
| | | arch/powerpc/kernel/btext.c:160:31: warning: Using plain integer as NULL pointer
| | | arch/powerpc/kernel/btext.c:167:22: warning: Using plain integer as NULL pointer
| | | arch/powerpc/kernel/btext.c:274:21: warning: Using plain integer as NULL pointer
| | | arch/powerpc/kernel/btext.c:285:31: warning: Using plain integer as NULL pointer
| | | arch/powerpc/include/asm/hugetlb.h:204:16: warning: Using plain integer as NULL pointer
| | | arch/powerpc/mm/ppc_mmu_32.c:170:21: warning: Using plain integer as NULL pointer
| | | arch/powerpc/platforms/powermac/pci.c:1227:23: warning: Using plain integer as NULL pointer
| | | arch/powerpc/platforms/powermac/pci.c:65:24: warning: Using plain integer as NULL pointer
| | | Also use `--fix` command line option from `script/checkpatch --strict` to remove the following:
| | | CHECK: Comparison to NULL could be written "!dispDeviceBase" #72: FILE: arch/powerpc/kernel/btext.c:160: + if (dispDeviceBase == NULL)
| | | CHECK: Comparison to NULL could be written "!vbase" #80: FILE: arch/powerpc/kernel/btext.c:167: + if (vbase == NULL)
| | | CHECK: Comparison to NULL could be written "!base" #89: FILE: arch/powerpc/kernel/btext.c:274: + if (base == NULL)
| | | CHECK: Comparison to NULL could be written "!dispDeviceBase" #98: FILE: arch/powerpc/kernel/btext.c:285: + if (dispDeviceBase == NULL)
| | | CHECK: Comparison to NULL could be written "strstr" #117: FILE: arch/powerpc/kernel/module_32.c:117: + if (strstr(secstrings + sechdrs[i].sh_name, ".debug") != NULL)
| | | CHECK: Comparison to NULL could be written "!Hash" #130: FILE: arch/powerpc/mm/ppc_mmu_32.c:170: + if (Hash == NULL)
| | | CHECK: Comparison to NULL could be written "Hash" #143: FILE: arch/powerpc/mm/tlb_hash32.c:44: + if (Hash != NULL) {
| | | CHECK: Comparison to NULL could be written "!Hash" #152: FILE: arch/powerpc/mm/tlb_hash32.c:57: + if (Hash == NULL) {
| | | CHECK: Comparison to NULL could be written "!Hash" #161: FILE: arch/powerpc/mm/tlb_hash32.c:87: + if (Hash == NULL) {
| | | CHECK: Comparison to NULL could be written "!Hash" #170: FILE: arch/powerpc/mm/tlb_hash32.c:127: + if (Hash == NULL) {
| | | CHECK: Comparison to NULL could be written "!Hash" #179: FILE: arch/powerpc/mm/tlb_hash32.c:148: + if (Hash == NULL) {
| | | ERROR: space required after that ';' (ctx:VxV) #192: FILE: arch/powerpc/platforms/powermac/pci.c:65: + for (; node != NULL;node = node->sibling) {
| | | CHECK: Comparison to NULL could be written "node" #192: FILE: arch/powerpc/platforms/powermac/pci.c:65: + for (; node != NULL;node = node->sibling) {
| | | CHECK: Comparison to NULL could be written "!region" #201: FILE: arch/powerpc/platforms/powermac/pci.c:1227: + if (region == NULL)
| | | CHECK: Comparison to NULL could be written "of_get_property" #214: FILE: arch/powerpc/platforms/powermac/setup.c:155: + if (of_get_property(np, "cache-unified", NULL) != NULL && dc) {
| | | CHECK: Comparison to NULL could be written "!np" #223: FILE: arch/powerpc/platforms/powermac/setup.c:247: + if (np == NULL)
| | | CHECK: Comparison to NULL could be written "np" #226: FILE: arch/powerpc/platforms/powermac/setup.c:249: + if (np != NULL) {
| | | CHECK: Comparison to NULL could be written "l2cr" #230: FILE: arch/powerpc/platforms/powermac/setup.c:252: + if (l2cr != NULL) {
| | | CHECK: Comparison to NULL could be written "via" #243: FILE: drivers/macintosh/via-pmu.c:277: + if (via != NULL)
| | | CHECK: Comparison to NULL could be written "current_req" #252: FILE: drivers/macintosh/via-pmu.c:1155: + if (current_req != NULL) {
| | | CHECK: Comparison to NULL could be written "!req" #261: FILE: drivers/macintosh/via-pmu.c:1230: + if (req == NULL || pmu_state != idle
| | | CHECK: Comparison to NULL could be written "!req" #270: FILE: drivers/macintosh/via-pmu.c:1385: + if (req == NULL) {
| | | CHECK: Comparison to NULL could be written "!pp" #288: FILE: drivers/macintosh/via-pmu.c:2084: + if (pp == NULL)
| | | CHECK: Comparison to NULL could be written "!pp" #297: FILE: drivers/macintosh/via-pmu.c:2110: + if (count < 1 || pp == NULL)
| | | CHECK: Comparison to NULL could be written "!pp" #306: FILE: drivers/macintosh/via-pmu.c:2167: + if (pp == NULL)
| | | CHECK: Comparison to NULL could be written "pp" #315: FILE: drivers/macintosh/via-pmu.c:2183: + if (pp != NULL) {
| | | Link: https://github.com/linuxppc/linux/issues/37 Signed-off-by: Mathieu Malaterre <malat@debian.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | powerpc/mm: Only read faulting instruction when necessary in do_page_fault()Christophe Leroy2018-05-251-16/+34
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Commit a7a9dcd882a67 ("powerpc: Avoid taking a data miss on every userspace instruction miss") has shown that limiting the read of faulting instruction to likely cases improves performance. This patch goes further into this direction by limiting the read of the faulting instruction to the only cases where it is likely needed. On an MPC885, with the same benchmark app as in the commit referred above, we see a reduction of about 3900 dTLB misses (approx 3%): Before the patch: Performance counter stats for './fault 500' (10 runs): 683033312 cpu-cycles ( +- 0.03% ) 134538 dTLB-load-misses ( +- 0.03% ) 46099 iTLB-load-misses ( +- 0.02% ) 19681 faults ( +- 0.02% ) 5.389747878 seconds time elapsed ( +- 0.06% ) With the patch: Performance counter stats for './fault 500' (10 runs): 682112862 cpu-cycles ( +- 0.03% ) 130619 dTLB-load-misses ( +- 0.03% ) 46073 iTLB-load-misses ( +- 0.05% ) 19681 faults ( +- 0.01% ) 5.381342641 seconds time elapsed ( +- 0.07% ) The proper work of the huge stack expansion was tested with the following app: int main(int argc, char **argv) { char buf[1024 * 1025]; sprintf(buf, "Hello world !\n"); printf(buf); exit(0); } Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Reviewed-by: Nicholas Piggin <npiggin@gmail.com> [mpe: Add include of pagemap.h to fix build errors] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | powerpc/mm: Use instruction symbolic names in store_updates_sp()Christophe Leroy2018-05-251-13/+13
| | | | | | | | | | | | | | | | | | | | | | | | Use symbolic names defined in asm/ppc-opcode.h instead of hardcoded values. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | powerpc/mm: Use page fragments for allocation page table at PMD levelAneesh Kumar K.V2018-05-154-6/+1
| | | | | | | | | | | | | | | Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | powerpc/mm: Implement helpers for pagetable fragment support at PMD levelAneesh Kumar K.V2018-05-154-6/+119
| | | | | | | | | | | | | | | Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | powerpc/book3s64/mm: Simplify the rcu callback for page table freeAneesh Kumar K.V2018-05-151-19/+26
| | | Instead of encoding the shift in the table address, use an enumerated index value. This allows us to do different things in the callback for pte and pmd. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | powerpc/mm/book3s64/4k: Switch 4k pagesize config to use pagetable fragmentAneesh Kumar K.V2018-05-152-13/+8
| | | The 4K config uses one full page at level 4 of the pagetable. Add support for single fragment allocation in the pagetable fragment code and use that for the 4K config. This makes both 4k and 64k use the same code path. Later we will switch pmd to use the page table fragment code. This is done only for 64-bit platforms which use page table fragment support. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | powerpc/mm/nohash: Remove pte fragment dependency from nohashAneesh Kumar K.V2018-05-152-114/+114
| | | Now that we have removed 64K page size support, the RCU page table free can be much simpler for nohash. Make a copy of the rcu callback in the pgalloc.h header, similar to nohash 32. We could possibly merge the 32 and 64 bit versions there, but that is for a later patch. We also move the book3s specific handler to pgtable_book3s64.c. This will be updated in a later patch to handle split pmd ptlock. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | powerpc/mm: Rename pte fragment functionsAneesh Kumar K.V2018-05-151-4/+5
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | We rename the alloc and get_from_cache to indicate they operate on pte fragments. In later patch we will add pmd fragment support. No functional change in this patch. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | powerpc/mm: Use pmd_lockptr instead of opencoding itAneesh Kumar K.V2018-05-153-6/+8
| | | In a later patch we switch the pmd lock from mm->page_table_lock to a split pmd ptlock. To avoid compilation issues, use the pmd_lockptr helper. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
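The pattern, as a small hedged sketch (the surrounding function is illustrative; pmd_lockptr() is the generic mm helper):

        #include <linux/mm.h>

        static void set_pmd_entry(struct mm_struct *mm, pmd_t *pmdp, pmd_t val)
        {
                /* pmd_lockptr() returns the split pmd ptlock when it is enabled,
                 * and falls back to &mm->page_table_lock otherwise. */
                spinlock_t *ptl = pmd_lockptr(mm, pmdp);

                spin_lock(ptl);
                *pmdp = val;            /* illustrative update done under the right lock */
                spin_unlock(ptl);
        }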
| * | powerpc/mm/book3s64: Move book3s64 code to pgtable-book3s64Aneesh Kumar K.V2018-05-152-56/+54
| | | | | | | | | | | | | | | | | | | | | Only code movement and avoid #ifdef. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * | powerpc/syscalls: Switch trivial cases to SYSCALL_DEFINEAl Viro2018-05-101-1/+3
| | | | | | | | | | | | | | | Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
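The conversion pattern for such trivial cases, sketched on a hypothetical syscall (example_op and do_example_op are made-up names): SYSCALL_DEFINEn generates the sys_* entry point, argument casts and syscall metadata.

        #include <linux/syscalls.h>

        static long do_example_op(unsigned long addr, unsigned long len)
        {
                return 0;       /* placeholder for the real work */
        }

        /*
         * Before:  long sys_example_op(unsigned long addr, unsigned long len)
         * After:   the macro below generates the sys_example_op wrapper.
         */
        SYSCALL_DEFINE2(example_op, unsigned long, addr, unsigned long, len)
        {
                return do_example_op(addr, len);
        }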
| * | powerpc/fadump: Do not use hugepages when fadump is activeHari Bathini2018-05-032-2/+11
| |/ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | FADump capture kernel boots in restricted memory environment preserving the context of previous kernel to save vmcore. Supporting hugepages in such environment makes things unnecessarily complicated, as hugepages need memory set aside for them. This means most of the capture kernel's memory is used in supporting hugepages. In most cases, this results in out-of-memory issues while booting FADump capture kernel. But hugepages are not of much use in capture kernel whose only job is to save vmcore. So, disabling hugepages support, when fadump is active, is a reliable solution for the out of memory issues. Introducing a flag variable to disable HugeTLB support when fadump is active. Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com> Reviewed-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* | Merge branch 'siginfo-linus' of ↵Linus Torvalds2018-06-041-0/+1
|\ \ | |/ |/| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace Pull siginfo updates from Eric Biederman: "This set of changes close the known issues with setting si_code to an invalid value, and with not fully initializing struct siginfo. There remains work to do on nds32, arc, unicore32, powerpc, arm, arm64, ia64 and x86 to get the code that generates siginfo into a simpler and more maintainable state. Most of that work involves refactoring the signal handling code and thus careful code review. Also not included is the work to shrink the in kernel version of struct siginfo. That depends on getting the number of places that directly manipulate struct siginfo under control, as it requires the introduction of struct kernel_siginfo for the in kernel things. Overall this set of changes looks like it is making good progress, and with a little luck I will be wrapping up the siginfo work next development cycle" * 'siginfo-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace: (46 commits) signal/sh: Stop gcc warning about an impossible case in do_divide_error signal/mips: Report FPE_FLTUNK for undiagnosed floating point exceptions signal/um: More carefully relay signals in relay_signal. signal: Extend siginfo_layout with SIL_FAULT_{MCEERR|BNDERR|PKUERR} signal: Remove unncessary #ifdef SEGV_PKUERR in 32bit compat code signal/signalfd: Add support for SIGSYS signal/signalfd: Remove __put_user from signalfd_copyinfo signal/xtensa: Use force_sig_fault where appropriate signal/xtensa: Consistenly use SIGBUS in do_unaligned_user signal/um: Use force_sig_fault where appropriate signal/sparc: Use force_sig_fault where appropriate signal/sparc: Use send_sig_fault where appropriate signal/sh: Use force_sig_fault where appropriate signal/s390: Use force_sig_fault where appropriate signal/riscv: Replace do_trap_siginfo with force_sig_fault signal/riscv: Use force_sig_fault where appropriate signal/parisc: Use force_sig_fault where appropriate signal/parisc: Use force_sig_mceerr where appropriate signal/openrisc: Use force_sig_fault where appropriate signal/nios2: Use force_sig_fault where appropriate ...
| * signal: Ensure every siginfo we send has all bits initializedEric W. Biederman2018-04-251-0/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Call clear_siginfo to ensure every stack allocated siginfo is properly initialized before being passed to the signal sending functions. Note: It is not safe to depend on C initializers to initialize struct siginfo on the stack because C is allowed to skip holes when initializing a structure. The initialization of struct siginfo in tracehook_report_syscall_exit was moved from the helper user_single_step_siginfo into tracehook_report_syscall_exit itself, to make it clear that the local variable siginfo gets fully initialized. In a few cases the scope of struct siginfo has been reduced to make it clear that siginfo siginfo is not used on other paths in the function in which it is declared. Instances of using memset to initialize siginfo have been replaced with calls clear_siginfo for clarity. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
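A minimal sketch of the pattern the series enforces (the fault-handler function shown is illustrative, and the force_sig_info() signature is the one from this era of the kernel): zero the whole siginfo before filling individual fields, so padding and unused union members never carry stale stack data.

        #include <linux/sched/signal.h>

        static void send_segv(struct task_struct *tsk, void __user *addr)
        {
                struct siginfo info;

                clear_siginfo(&info);           /* instead of relying on a C initializer */
                info.si_signo = SIGSEGV;
                info.si_code  = SEGV_MAPERR;
                info.si_addr  = addr;
                force_sig_info(SIGSEGV, &info, tsk);
        }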
* | powerpc/mm: Flush cache on memory hot(un)plugBalbir Singh2018-04-241-0/+2
|/ This patch adds support for flushing potentially dirty cache lines when memory is hot-plugged/hot-un-plugged. The support is currently limited to 64 bit systems. The bug was exposed when mappings for a device were actually hot-unplugged and plugged back in later. A similar issue was observed during the development of memtrace, but memtrace does its own flushing of the region via a custom routine. These patches do a flush on both hotplug and unplug to clear any stale data in the cache w.r.t. mappings; there is a small race window where a clean cache line may be created again just prior to tearing down the mapping. The patches were tested by disabling the flush routines in memtrace and doing I/O on the trace file. The system immediately checkstops (quite reliably if, prior to the hot-unplug of the memtrace region, we memset the regions we are about to hot unplug). After these patches no custom flushing is needed in the memtrace code. Fixes: 9d5171a8f248 ("powerpc/powernv: Enable removal of memory for in memory tracing") Cc: stable@vger.kernel.org # v4.14+ Signed-off-by: Balbir Singh <bsingharora@gmail.com> Acked-by: Reza Arbab <arbab@linux.ibm.com> Reviewed-by: Rashmica Gupta <rashmica.g@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
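A hedged sketch of the kind of flush added around hot-(un)plug; the helper name and call site are illustrative, and powerpc's flush_dcache_range() is assumed as the flush primitive:

        #include <asm/cacheflush.h>
        #include <asm/page.h>
        #include <linux/pfn.h>

        static void flush_dcache_for_hotplug(unsigned long start_pfn,
                                             unsigned long nr_pages)
        {
                unsigned long start = (unsigned long)__va(PFN_PHYS(start_pfn));

                /* Push out any lines covering the linear mapping of the memory
                 * block being added or removed, so nothing stale survives. */
                flush_dcache_range(start, start + (nr_pages << PAGE_SHIFT));
        }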
* Merge tag 'powerpc-4.17-2' of ↵Linus Torvalds2018-04-152-3/+3
|\ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux Pull powerpc fixes from Michael Ellerman: - Fix crashes when loading modules built with a different CONFIG_RELOCATABLE value by adding CONFIG_RELOCATABLE to vermagic. - Fix busy loops in the OPAL NVRAM driver if we get certain error conditions from firmware. - Remove tlbie trace points from KVM code that's called in real mode, because it causes crashes. - Fix checkstops caused by invalid tlbiel on Power9 Radix. - Ensure the set of CPU features we "know" are always enabled is actually the minimal set when we build with support for firmware supplied CPU features. Thanks to: Aneesh Kumar K.V, Anshuman Khandual, Nicholas Piggin. * tag 'powerpc-4.17-2' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: powerpc/64s: Fix CPU_FTRS_ALWAYS vs DT CPU features powerpc/mm/radix: Fix checkstops caused by invalid tlbiel KVM: PPC: Book3S HV: trace_tlbie must not be called in realmode powerpc/8xx: Fix build with hugetlbfs enabled powerpc/powernv: Fix OPAL NVRAM driver OPAL_BUSY loops powerpc/powernv: define a standard delay for OPAL_BUSY type retry loops powerpc/fscr: Enable interrupts earlier before calling get_user() powerpc/64s: Fix section mismatch warnings from setup_rfi_flush() powerpc/modules: Fix crashes by adding CONFIG_RELOCATABLE to vermagic
| * powerpc/mm/radix: Fix checkstops caused by invalid tlbielMichael Ellerman2018-04-121-3/+2
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | In tlbiel_radix_set_isa300() we use the PPC_TLBIEL() macro to construct tlbiel instructions. The instruction takes 5 fields, two of which are registers, and the others are constants. But because it's constructed with inline asm the compiler doesn't know that. We got the constraint wrong on the 'r' field, using "r" tells the compiler to put the value in a register. The value we then get in the macro is the *register number*, not the value of the field. That means when we mask the register number with 0x1 we get 0 or 1 depending on which register the compiler happens to put the constant in, eg: li r10,1 tlbiel r8,r9,2,0,0 li r7,1 tlbiel r10,r6,0,0,1 If we're unlucky we might generate an invalid instruction form, for example RIC=0, PRS=1 and R=0, tlbiel r8,r7,0,1,0, this has been observed to cause machine checks: Oops: Machine check, sig: 7 [#1] CPU: 24 PID: 0 Comm: swapper NIP: 00000000000385f4 LR: 000000000100ed00 CTR: 000000000000007f REGS: c00000000110bb40 TRAP: 0200 MSR: 9000000000201003 <SF,HV,ME,RI,LE> CR: 48002222 XER: 20040000 CFAR: 00000000000385d0 DAR: 0000000000001c00 DSISR: 00000200 SOFTE: 1 If the machine check happens early in boot while we have MSR_ME=0 it will escalate into a checkstop and kill the box entirely. To fix it we could change the inline asm constraint to "i" which tells the compiler the value is a constant. But a better fix is to just pass a literal 1 into the macro, which bypasses any problems with inline asm constraints. Fixes: d4748276ae14 ("powerpc/64s: Improve local TLB flush for boot and MCE on POWER9") Cc: stable@vger.kernel.org # v4.16+ Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
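The underlying gcc behaviour, in a stand-alone illustration (this is not the kernel's PPC_TLBIEL() macro): with an "r" constraint the substituted operand text is the register number holding the value, while an "i" constraint or a literal substitutes the constant itself.

        static inline void constraint_demo(void)
        {
                int one = 1;

                /* "%0" becomes the register *number* holding 'one' (e.g. "9"),
                 * so this assembles as "li 10,9" -- not what was intended. */
                asm volatile("li 10,%0" : : "r"(one) : "r10");

                /* "%0" becomes the constant 1 itself: "li 10,1". */
                asm volatile("li 10,%0" : : "i"(1) : "r10");
        }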
| * powerpc/8xx: Fix build with hugetlbfs enabledAneesh Kumar K.V2018-04-111-0/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 8xx uses the slice code when hugetlbfs is enabled. We missed a header include on 8xx which resulted in the below build failure: config: mpc885_ads_defconfig + CONFIG_HUGETLBFS arch/powerpc/mm/slice.c: In function 'slice_get_unmapped_area': arch/powerpc/mm/slice.c:655:2: error: implicit declaration of function 'need_extra_context' arch/powerpc/mm/slice.c:656:3: error: implicit declaration of function 'alloc_extended_context' on PPC64 the mmu_context.h was included via linux/pkeys.h Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* | exec: pass stack rlimit into mm layout functionsKees Cook2018-04-111-12/+16
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Patch series "exec: Pin stack limit during exec". Attempts to solve problems with the stack limit changing during exec continue to be frustrated[1][2]. In addition to the specific issues around the Stack Clash family of flaws, Andy Lutomirski pointed out[3] other places during exec where the stack limit is used and is assumed to be unchanging. Given the many places it gets used and the fact that it can be manipulated/raced via setrlimit() and prlimit(), I think the only way to handle this is to move away from the "current" view of the stack limit and instead attach it to the bprm, and plumb this down into the functions that need to know the stack limits. This series implements the approach. [1] 04e35f4495dd ("exec: avoid RLIMIT_STACK races with prlimit()") [2] 779f4e1c6c7c ("Revert "exec: avoid RLIMIT_STACK races with prlimit()"") [3] to security@kernel.org, "Subject: existing rlimit races?" This patch (of 3): Since it is possible that the stack rlimit can change externally during exec (either via another thread calling setrlimit() or another process calling prlimit()), provide a way to pass the rlimit down into the per-architecture mm layout functions so that the rlimit can stay in the bprm structure instead of sitting in the signal structure until exec is finalized. Link: http://lkml.kernel.org/r/1518638796-20819-2-git-send-email-keescook@chromium.org Signed-off-by: Kees Cook <keescook@chromium.org> Cc: Michal Hocko <mhocko@kernel.org> Cc: Ben Hutchings <ben@decadent.org.uk> Cc: Willy Tarreau <w@1wt.eu> Cc: Hugh Dickins <hughd@google.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: "Jason A. Donenfeld" <Jason@zx2c4.com> Cc: Rik van Riel <riel@redhat.com> Cc: Laura Abbott <labbott@redhat.com> Cc: Greg KH <greg@kroah.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Ben Hutchings <ben.hutchings@codethink.co.uk> Cc: Brad Spengler <spender@grsecurity.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | mm, migrate: remove reason argument from new_page_tMichal Hocko2018-04-111-2/+1
|/ | | | | | | | | | | | | | | | | | | | | | | | | | No allocation callback is using this argument anymore. new_page_node used to use this parameter to convey node_id resp. migration error up to move_pages code (do_move_page_to_node_array). The error status never made it into the final status field and we have a better way to communicate node id to the status field now. All other allocation callbacks simply ignored the argument so we can drop it finally. [mhocko@suse.com: fix migration callback] Link: http://lkml.kernel.org/r/20180105085259.GH2801@dhcp22.suse.cz [akpm@linux-foundation.org: fix alloc_misplaced_dst_page()] [mhocko@kernel.org: fix build] Link: http://lkml.kernel.org/r/20180103091134.GB11319@dhcp22.suse.cz Link: http://lkml.kernel.org/r/20180103082555.14592-3-mhocko@kernel.org Signed-off-by: Michal Hocko <mhocko@suse.com> Reviewed-by: Zi Yan <zi.yan@cs.rutgers.edu> Cc: Andrea Reale <ar@linux.vnet.ibm.com> Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Merge tag 'powerpc-4.17-1' of ↵Linus Torvalds2018-04-0724-445/+662
|\ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux Pull powerpc updates from Michael Ellerman: "Notable changes: - Support for 4PB user address space on 64-bit, opt-in via mmap(). - Removal of POWER4 support, which was accidentally broken in 2016 and no one noticed, and blocked use of some modern instructions. - Workarounds so that the hypervisor can enable Transactional Memory on Power9. - A series to disable the DAWR (Data Address Watchpoint Register) on Power9. - More information displayed in the meltdown/spectre_v1/v2 sysfs files. - A vpermxor (Power8 Altivec) implementation for the raid6 Q Syndrome. - A big series to make the allocation of our pacas (per cpu area), kernel page tables, and per-cpu stacks NUMA aware when using the Radix MMU on Power9. And as usual many fixes, reworks and cleanups. Thanks to: Aaro Koskinen, Alexandre Belloni, Alexey Kardashevskiy, Alistair Popple, Andy Shevchenko, Aneesh Kumar K.V, Anshuman Khandual, Balbir Singh, Benjamin Herrenschmidt, Christophe Leroy, Christophe Lombard, Cyril Bur, Daniel Axtens, Dave Young, Finn Thain, Frederic Barrat, Gustavo Romero, Horia Geantă, Jonathan Neuschäfer, Kees Cook, Larry Finger, Laurent Dufour, Laurent Vivier, Logan Gunthorpe, Madhavan Srinivasan, Mark Greer, Mark Hairgrove, Markus Elfring, Mathieu Malaterre, Matt Brown, Matt Evans, Mauricio Faria de Oliveira, Michael Neuling, Naveen N. Rao, Nicholas Piggin, Paul Mackerras, Philippe Bergheaud, Ram Pai, Rob Herring, Sam Bobroff, Segher Boessenkool, Simon Guo, Simon Horman, Stewart Smith, Sukadev Bhattiprolu, Suraj Jitindar Singh, Thiago Jung Bauermann, Vaibhav Jain, Vaidyanathan Srinivasan, Vasant Hegde, Wei Yongjun" * tag 'powerpc-4.17-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (207 commits) powerpc/64s/idle: Fix restore of AMOR on POWER9 after deep sleep powerpc/64s: Fix POWER9 DD2.2 and above in cputable features powerpc/64s: Fix pkey support in dt_cpu_ftrs, add CPU_FTR_PKEY bit powerpc/64s: Fix dt_cpu_ftrs to have restore_cpu clear unwanted LPCR bits Revert "powerpc/64s/idle: POWER9 ESL=0 stop avoid save/restore overhead" powerpc: iomap.c: introduce io{read|write}64_{lo_hi|hi_lo} powerpc: io.h: move iomap.h include so that it can use readq/writeq defs cxl: Fix possible deadlock when processing page faults from cxllib powerpc/hw_breakpoint: Only disable hw breakpoint if cpu supports it powerpc/mm/radix: Update command line parsing for disable_radix powerpc/mm/radix: Parse disable_radix commandline correctly. powerpc/mm/hugetlb: initialize the pagetable cache correctly for hugetlb powerpc/mm/radix: Update pte fragment count from 16 to 256 on radix powerpc/mm/keys: Update documentation and remove unnecessary check powerpc/64s/idle: POWER9 ESL=0 stop avoid save/restore overhead powerpc/64s/idle: Consolidate power9_offline_stop()/power9_idle_stop() powerpc/powernv: Always stop secondaries before reboot/shutdown powerpc: hard disable irqs in smp_send_stop loop powerpc: use NMI IPI for smp_send_stop powerpc/powernv: Fix SMT4 forcing idle code ...
| * powerpc/mm/radix: Parse disable_radix commandline correctly.Aneesh Kumar K.V2018-04-041-1/+1
| | The kernel parameter disable_radix takes different options: disable_radix=yes|no|1|0, or just disable_radix. When using the latter format we get the below error: `Malformed early option 'disable_radix'` Fixes: 1fd6c0220710 ("powerpc/mm: Add a CONFIG option to choose if radix is used by default") Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
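A hedged sketch of what accepting both forms looks like with the early_param machinery; the handler and variable names are illustrative, and kstrtobool() already understands yes/no/1/0/on/off:

        #include <linux/init.h>
        #include <linux/kernel.h>
        #include <linux/string.h>

        static bool disable_radix;

        static int __init parse_disable_radix(char *p)
        {
                bool val;

                if (!p || !*p)                  /* bare "disable_radix" on the command line */
                        val = true;
                else if (kstrtobool(p, &val))   /* "disable_radix=yes|no|1|0" */
                        return -EINVAL;

                disable_radix = val;
                return 0;
        }
        early_param("disable_radix", parse_disable_radix);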
| * powerpc/mm/hugetlb: initialize the pagetable cache correctly for hugetlbAneesh Kumar K.V2018-04-041-5/+13
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | With 64k page size, we have hugetlb pte entries at the pmd and pud level for book3s64. We don't need to create a separate page table cache for that. With 4k we need to make sure hugepd page table cache for 16M is placed at PUD level and 16G at the PGD level. Simplify all these by not using HUGEPD_PD_SHIFT which is confusing for book3s64. Without this patch, with 64k page size we create pagetable caches with shift value 10 and 7 which are not used at all. Fixes: 419df06eea5b ("powerpc: Reduce the PTE_INDEX_SIZE") Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * powerpc/mm/radix: Update pte fragment count from 16 to 256 on radixAneesh Kumar K.V2018-04-041-6/+2
| | With the split PTL (page table lock) config, we allocate the level 4 (leaf) page table using the pte fragment framework instead of a slab cache like the other levels. This was done to enable us to have a split page table lock at level 4 of the page table. We use page->ptl backing all the level 4 pte fragments for the lock. Currently with Radix, we use only 16 fragments out of the allocated page. In radix each fragment is 256 bytes, which means we use only 4k out of the allocated 64K page, wasting 60k of the allocated memory. This was done earlier to keep it closer to hash. This patch updates the pte fragment count to 256, thereby using the full 64K page and reducing the memory usage. Performance tests show really low impact even with THP disabled. With THP disabled we will be contending even less on the level 4 ptl and hence the impact should be further low.
| | 256 threads, 10 runs of ./ebizzy -m -n 1000 -s 131072 -S 100:
| | without patch: median = 15678.5, stdev = 42.1209
| | with patch: median = 15354, stdev = 194.743
| | This is with THP disabled. With THP enabled the impact of the patch will be less. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * powerpc/mm/keys: Update documentation and remove unnecessary checkAneesh Kumar K.V2018-04-042-23/+16
| | | | | | | | | | | | | | | | Adds more code comments. We also remove an unnecessary pkey check after we check for pkey error in this patch. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * powerpc/mm/radix: Fix always false comparison against MMU_NO_CONTEXTMathieu Malaterre2018-04-011-1/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | In commit 9690c1574268 ("powerpc/mm/radix: Fix always false comparison against MMU_NO_CONTEXT") an issue was discovered where `mm->context.id` was being truncated to an `unsigned int`, while the PID is actually an `unsigned long`. Update the earlier patch by fixing one remaining occurrence. Discovered during a compilation with W=1: arch/powerpc/mm/tlb-radix.c:702:19: error: comparison is always false due to limited range of data type [-Werror=type-limits] Signed-off-by: Mathieu Malaterre <malat@debian.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * powerpc/64s: Remove POWER4 supportNicholas Piggin2018-04-011-4/+5
| | | | | | | | | | | | | | | | | | | | POWER4 has been broken since at least the change 49d09bf2a6 ("powerpc/64s: Optimise MSR handling in exception handling"), which requires mtmsrd L=1 support. This was introduced in ISA v2.01, and POWER4 supports ISA v2.00. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * powerpc/mm/32: Remove the reserved memory hackJonathan Neuschäfer2018-04-013-8/+1
| | | | | | | | | | | | | | | | This hack, introduced in commit c5df7f775148 ("powerpc: allow ioremap within reserved memory regions") is now unnecessary. Signed-off-by: Jonathan Neuschäfer <j.neuschaefer@gmx.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * powerpc/mm/32: Use page_is_ram to check for RAMJonathan Neuschäfer2018-04-011-0/+1
| | On systems where there is MMIO space between different blocks of RAM in the physical address space, __ioremap_caller did not allow mapping these MMIO areas, because they were below the end of RAM and thus considered RAM as well. Use the memblock-based page_is_ram function, which returns false for such MMIO holes. v2: Keep the check for p < virt_to_phys(high_memory). On 32-bit systems with high memory (memory above physical address 4GiB), the high memory is expected to be available through ioremap. The high_memory variable marks the end of low memory; comparing against it means that only ioremap requests for low RAM will be denied. Reported by Michael Ellerman. Signed-off-by: Jonathan Neuschäfer <j.neuschaefer@gmx.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * powerpc/mm: Use memblock API for PPC32 page_is_ramJonathan Neuschäfer2018-04-011-4/+0
| | | | | | | | | | | | | | | | | | To support accurate checking for different blocks of memory on PPC32, use the same memblock-based approach that's already used on PPC64 also on PPC32. Signed-off-by: Jonathan Neuschäfer <j.neuschaefer@gmx.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| * powerpc/mm: Simplify page_is_ram by using memblock_is_memoryJonathan Neuschäfer2018-04-011-7/+1
| | | | | | | | | | | | | | Instead of open-coding the search in page_is_ram, call memblock_is_memory. Signed-off-by: Jonathan Neuschäfer <j.neuschaefer@gmx.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
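A sketch of the resulting helper (close to what such a simplification typically looks like; the exact helper names may differ):

        #include <linux/memblock.h>
        #include <linux/pfn.h>

        int page_is_ram(unsigned long pfn)
        {
                /* A pfn is RAM iff memblock has its physical address registered
                 * as a memory region; no open-coded search over the regions. */
                return memblock_is_memory(__pfn_to_phys(pfn));
        }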