path: root/arch/arm/include/asm/mmu_context.h
Commit message    Author    Age    Files    Lines
* sched/headers: Prepare to remove the <linux/mm_types.h> dependency from <linux/sched.h>  [Ingo Molnar, 2017-03-02, 1 file, -0/+2]

    Update code that relied on sched.h including various MM types for
    them.

    This will allow us to remove the <linux/mm_types.h> include from
    <linux/sched.h>.

    Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Mike Galbraith <efault@gmx.de>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
* ARM: Hide finish_arch_post_lock_switch() from modules  [Steven Rostedt, 2016-05-13, 1 file, -0/+2]

    The introduction of switch_mm_irqs_off() brought back an old bug
    regarding the use of preempt_enable_no_resched:

    As part of:

      62b94a08da1b ("sched/preempt: Take away preempt_enable_no_resched() from modules")

    the definition of preempt_enable_no_resched() is only available in
    built-in code, not in loadable modules, so we can't generally use
    it from header files.

    However, the ARM version of finish_arch_post_lock_switch() calls
    preempt_enable_no_resched() and is defined as a static inline
    function in asm/mmu_context.h. This in turn means we cannot include
    asm/mmu_context.h from modules.

    With today's tip tree, asm/mmu_context.h gets included from
    linux/mmu_context.h, which is normally the exact pattern one would
    expect, but unfortunately, linux/mmu_context.h can be included from
    the vhost driver that is a loadable module, now causing this
    compile time error with modular configs:

      In file included from ../include/linux/mmu_context.h:4:0,
                       from ../drivers/vhost/vhost.c:18:
      ../arch/arm/include/asm/mmu_context.h: In function 'finish_arch_post_lock_switch':
      ../arch/arm/include/asm/mmu_context.h:88:3: error: implicit declaration of function 'preempt_enable_no_resched' [-Werror=implicit-function-declaration]
        preempt_enable_no_resched();

    Andy already tried to fix the bug by including linux/preempt.h from
    asm/mmu_context.h, but that didn't help. Arnd suggested reordering
    the header files, which wasn't popular, so let's use this
    workaround instead:

    The finish_arch_post_lock_switch() definition is now also hidden
    behind an #ifndef MODULE guard, so we don't see anything
    referencing preempt_enable_no_resched() from a header file.

    I've built a few hundred randconfig kernels with this, and did not
    see any new problems.

    Tested-by: Guenter Roeck <linux@roeck-us.net>
    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    Signed-off-by: Arnd Bergmann <arnd@arndb.de>
    Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
    Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
    Cc: Andy Lutomirski <luto@amacapital.net>
    Cc: Andy Lutomirski <luto@kernel.org>
    Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
    Cc: Borislav Petkov <bp@suse.de>
    Cc: Frederic Weisbecker <fweisbec@gmail.com>
    Cc: Jiri Olsa <jolsa@redhat.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Mel Gorman <mgorman@techsingularity.net>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Russell King - ARM Linux <linux@armlinux.org.uk>
    Cc: Stephane Eranian <eranian@google.com>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Vince Weaver <vincent.weaver@maine.edu>
    Cc: linux-arm-kernel@lists.infradead.org
    Fixes: f98db6013c55 ("sched/core: Add switch_mm_irqs_off() and use it in the scheduler")
    Link: http://lkml.kernel.org/r/1463146234-161304-1-git-send-email-arnd@arndb.de
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
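    A minimal sketch of that guard (illustrative, not the verbatim
    header; the real finish_arch_post_lock_switch() also performs the
    deferred mm-switch work):

      #ifndef MODULE
      /* preempt_enable_no_resched() is only available to built-in code,
       * so the whole definition is compiled out for loadable modules */
      #define finish_arch_post_lock_switch finish_arch_post_lock_switch
      static inline void finish_arch_post_lock_switch(void)
      {
              /* ... deferred mm-switch work elided ... */
              preempt_enable_no_resched();
      }
      #endif /* !MODULE */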
* sched/core, ARM: Include linux/preempt.h from asm/mmu_context.h  [Andy Lutomirski, 2016-04-28, 1 file, -0/+1]

    arm's mmu_context.h uses preempt_enable_no_resched() but doesn't
    include anything that would pull in the declaration. If I start
    including <asm/mmu_context.h> from <linux/mmu_context.h> without
    this, the build breaks.

    Signed-off-by: Andy Lutomirski <luto@kernel.org>
    Reviewed-by: Borislav Petkov <bp@suse.de>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Russell King <linux@arm.linux.org.uk>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Link: http://lkml.kernel.org/r/5b95730a70f2dafe12d4fbf38d20eb7330d67ba3.1461688545.git.luto@kernel.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
* ARM: 8531/1: turn init_new_context into an inline function  [Arnd Bergmann, 2016-02-22, 1 file, -2/+12]

    Almost all architectures define init_new_context() as a function,
    but on ARM, it's a macro and that causes a compiler warning when
    its return code is not used:

      drivers/firmware/efi/arm-runtime.c: In function 'efi_virtmap_init':
      arch/arm/include/asm/mmu_context.h:88:34: warning: statement with no effect [-Wunused-value]
       #define init_new_context(tsk,mm) 0
      drivers/firmware/efi/arm-runtime.c:47:2: note: in expansion of macro 'init_new_context'
        init_new_context(NULL, &efi_mm);

    This changes the definition into an inline function, which gcc does
    not warn about.

    Signed-off-by: Arnd Bergmann <arnd@arndb.de>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
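    The change is mechanical; a minimal before/after sketch
    (simplified: the real ARM inline also initializes per-mm context
    state):

      /* before: a bare macro, so an ignored result triggers -Wunused-value */
      #define init_new_context(tsk,mm)  0

      /* after: an inline function; gcc does not warn when the return
       * value of a function call is unused */
      static inline int
      init_new_context(struct task_struct *tsk, struct mm_struct *mm)
      {
              return 0;
      }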
* ARM: wire up UEFI init and runtime support  [Ard Biesheuvel, 2015-12-13, 1 file, -1/+1]

    This adds support to the kernel proper for booting via UEFI. It
    shares most of the code with arm64, so this patch mostly just wires
    it up for use with ARM.

    Note that this does not include the EFI stub, it is added in a
    subsequent patch.

    Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
    Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
* ARM: 7790/1: Fix deferred mm switch on VIVT processors  [Catalin Marinas, 2013-07-26, 1 file, -4/+16]

    As of commit b9d4d42ad9 ("ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW
    on pre-ARMv6 CPUs"), the mm switching on VIVT processors is done in
    the finish_arch_post_lock_switch() function to avoid whole cache
    flushing with interrupts disabled. The need for deferred mm switch
    is stored as a thread flag (TIF_SWITCH_MM). However, with
    preemption enabled, we can have another thread switch before
    finish_arch_post_lock_switch(). If the new thread has the same mm
    as the previous 'next' thread, the scheduler will not call
    switch_mm() and the TIF_SWITCH_MM flag won't be set for the new
    thread.

    This patch moves the switch pending flag to the mm_context_t
    structure since this is specific to the mm rather than thread.

    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Reported-by: Marc Kleine-Budde <mkl@pengutronix.de>
    Tested-by: Marc Kleine-Budde <mkl@pengutronix.de>
    Cc: <stable@vger.kernel.org> # 3.5+
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
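    A rough sketch of the resulting shape, assuming a switch_pending
    field as described (simplified; the real code also guards against
    preemption while clearing the flag):

      typedef struct {
              unsigned int switch_pending;  /* owned by the mm, not the thread */
              /* ... other context fields ... */
      } mm_context_t;

      static inline void finish_arch_post_lock_switch(void)
      {
              struct mm_struct *mm = current->mm;

              if (mm && mm->context.switch_pending) {
                      /* interrupts are enabled here, so the full VIVT
                       * cache flush in cpu_switch_mm() is tolerable */
                      mm->context.switch_pending = 0;
                      cpu_switch_mm(mm->pgd, mm);
              }
      }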
* ARM: 7769/1: Cortex-A15: fix erratum 798181 implementation  [Marc Zyngier, 2013-06-24, 1 file, -1/+9]

    Looking into the active_asids array is not enough, as we also need
    to look into the reserved_asids array (they both represent
    processes that are currently running).

    Also, not holding the ASID allocator lock is racy, as another CPU
    could schedule that process and trigger a rollover, making the
    erratum workaround miss an IPI.

    Exposing this outside of context.c is a little ugly on the side, so
    let's define a new entry point that the erratum workaround can call
    to obtain the cpumask.

    Cc: <stable@vger.kernel.org> # 3.9
    Acked-by: Will Deacon <will.deacon@arm.com>
    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
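    The new entry point might look roughly like this (a sketch: the
    names follow the active_asids/reserved_asids arrays mentioned
    above, but the details are approximated):

      void a15_erratum_get_cpumask(int this_cpu, struct mm_struct *mm,
                                   cpumask_t *mask)
      {
              int cpu;
              unsigned long flags;
              u64 context_id, asid;

              raw_spin_lock_irqsave(&cpu_asid_lock, flags);
              context_id = mm->context.id.counter;
              for_each_online_cpu(cpu) {
                      if (cpu == this_cpu)
                              continue;
                      /* a process counts as running if its ASID is
                       * either active on that CPU or was reserved
                       * there at rollover */
                      asid = per_cpu(active_asids, cpu).counter;
                      if (asid == 0)
                              asid = per_cpu(reserved_asids, cpu);
                      if (context_id == asid)
                              cpumask_set_cpu(cpu, mask);
              }
              raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
      }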
* ARM: 7757/1: mm: don't flush icache in switch_mm with hardware broadcasting  [Will Deacon, 2013-06-17, 1 file, -4/+9]

    When scheduling an mm on a CPU where it hasn't previously been
    used, we flush the icache on that CPU so that any code loaded
    previously on a different core can be safely executed. For cores
    with hardware broadcasting of cache maintenance operations, this is
    clearly unnecessary, since the inner-shareable invalidation in
    __sync_icache_dcache will affect all CPUs.

    This patch conditionalises the icache flush in switch_mm based on
    cache_ops_need_broadcast().

    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Reported-by: Albin Tonnerre <albin.tonnerre@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
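    The condition in switch_mm() would take roughly this shape (a
    sketch; cache_ops_need_broadcast() returns true when maintenance
    must be broadcast in software, i.e. the hardware does not broadcast
    it):

      /* inside switch_mm(), before this CPU is marked in mm_cpumask(next) */
      unsigned int cpu = smp_processor_id();

      if (cache_ops_need_broadcast() &&
          !cpumask_empty(mm_cpumask(next)) &&
          !cpumask_test_cpu(cpu, mm_cpumask(next)))
              __flush_icache_all();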
* ARM: 7684/1: errata: Workaround for Cortex-A15 erratum 798181 (TLBI/DSB operations)  [Catalin Marinas, 2013-04-03, 1 file, -0/+2]

    On Cortex-A15 (r0p0..r3p2) the TLBI/DSB are not adequately shooting
    down all use of the old entries. This patch implements the erratum
    workaround which consists of:

    1. Dummy TLBIMVAIS and DSB on the CPU doing the TLBI operation.
    2. Send IPI to the CPUs that are running the same mm (and ASID) as
       the one being invalidated (or all the online CPUs for global
       pages).
    3. CPU receiving the IPI executes a DMB and CLREX (part of the
       exception return code already).

    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
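    The three steps can be sketched as follows (illustrative only; the
    actual workaround lives in the TLB maintenance paths, and the IPI
    targeting was later tightened by the 7769/1 fix listed above):

      static void dummy_flush_tlb_a15_erratum(void)
      {
              /* step 1: dummy TLBIMVAIS (invalidate by MVA, inner
               * shareable) followed by a DSB on the issuing CPU */
              asm("mcr p15, 0, %0, c8, c3, 1" : : "r" (0));
              dsb();
      }

      static void ipi_flush_tlb_a15_erratum(void *arg)
      {
              /* step 3: the receiving CPU executes a DMB; the CLREX is
               * already part of the exception return path */
              dmb();
      }

      static void broadcast_tlb_mm_a15_erratum(struct mm_struct *mm)
      {
              preempt_disable();
              dummy_flush_tlb_a15_erratum();
              /* step 2: IPI every other CPU running this mm */
              smp_call_function_many(mm_cpumask(mm),
                                     ipi_flush_tlb_a15_erratum, NULL, 1);
              preempt_enable();
      }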
* ARM: 7659/1: mm: make mm->context.id an atomic64_t variable  [Will Deacon, 2013-03-03, 1 file, -1/+1]

    mm->context.id is updated under asid_lock when a new ASID is
    allocated to an mm_struct. However, it is also read without the
    lock when a task is being scheduled and checking whether or not the
    current ASID generation is up-to-date.

    If two threads of the same process are being scheduled in parallel
    and the bottom bits of the generation in their mm->context.id match
    the current generation (that is, the mm_struct has not been used
    for ~2^24 rollovers) then the non-atomic, lockless access to
    mm->context.id may yield the incorrect ASID.

    This patch fixes this issue by making mm->context.id an atomic64_t,
    ensuring that the generation is always read consistently. For code
    that only requires access to the ASID bits (e.g. TLB flushing by
    mm), the value is accessed directly, which GCC converts to an ldrb.

    Cc: <stable@vger.kernel.org> # 3.8
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
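    The two access patterns might look like this (a sketch with
    approximate names):

      #define ASID_BITS   8
      #define ASID_MASK   ((~0ULL) << ASID_BITS)

      static atomic64_t asid_generation;  /* allocator's current generation */

      /* scheduler path: a single atomic read, so ASID and generation
       * are always observed consistently against a parallel update */
      static inline bool asid_is_stale(struct mm_struct *mm)
      {
              return (atomic64_read(&mm->context.id) ^
                      atomic64_read(&asid_generation)) >> ASID_BITS;
      }

      /* TLB-flush path: only the low ASID bits matter, so the counter
       * field is read directly and GCC can emit a single ldrb */
      #define ASID(mm)    ((unsigned int)((mm)->context.id.counter & ~ASID_MASK))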
* ARM: 7582/2: rename kvm_seq to vmalloc_seq so to avoid confusion with KVM  [Nicolas Pitre, 2012-11-26, 1 file, -3/+3]

    The kvm_seq value has nothing whatsoever to do with this other KVM.
    Given that KVM support on ARM is imminent, it's best to rename
    kvm_seq into something else to clearly identify what it is about,
    i.e. a sequence number for vmalloc section mappings.

    Signed-off-by: Nicolas Pitre <nico@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* ARM: mm: remove IPI broadcasting on ASID rollover  [Will Deacon, 2012-11-05, 1 file, -79/+3]

    ASIDs are allocated to MMU contexts based on a rolling counter.
    This means that after 255 allocations we must invalidate all
    existing ASIDs via an expensive IPI mechanism to synchronise all of
    the online CPUs and ensure that all tasks execute with an ASID from
    the new generation.

    This patch changes the rollover behaviour so that we rely instead
    on the hardware broadcasting of the TLB invalidation to avoid the
    IPI calls. This works by keeping track of the active ASID on each
    core, which is then reserved in the case of a rollover so that
    currently scheduled tasks can continue to run. For cores without
    hardware TLB broadcasting, we keep track of pending flushes in a
    cpumask, so cores can flush their local TLB before scheduling a new
    mm.

    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Tested-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
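    The switch-in path under this scheme might be sketched like so
    (heavily simplified, with approximate names; the real allocator
    handles rollover inside new_context() and uses finer-grained
    synchronisation):

      static void check_and_switch_context(struct mm_struct *mm,
                                           unsigned int cpu)
      {
              unsigned long flags;

              raw_spin_lock_irqsave(&cpu_asid_lock, flags);
              /* stale generation? allocate a new ASID, possibly rolling
               * over; ASIDs live on other cores are reserved rather
               * than invalidated via IPI */
              if ((mm->context.id ^ cpu_last_asid) >> ASID_BITS)
                      new_context(mm, cpu);

              per_cpu(active_asids, cpu) = mm->context.id;

              /* cores without hardware TLB broadcast must flush locally
               * before running the new mm */
              if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending))
                      local_flush_tlb_all();
              raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);

              cpu_switch_mm(mm->pgd, mm);
      }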
* ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW on pre-ARMv6 CPUs  [Catalin Marinas, 2012-04-17, 1 file, -5/+26]

    This patch removes the __ARCH_WANT_INTERRUPTS_ON_CTXSW definition
    for ARMv5 and earlier processors. On such processors, the context
    switch requires a full cache flush. To avoid high interrupt
    latencies, this patch defers the mm switching to the post-lock
    switch hook if the interrupts are disabled.

    Reviewed-by: Will Deacon <will.deacon@arm.com>
    Tested-by: Will Deacon <will.deacon@arm.com>
    Reviewed-by: Frank Rowand <frank.rowand@am.sony.com>
    Tested-by: Marc Zyngier <Marc.Zyngier@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* ARM: Remove current_mm per-cpu variable  [Catalin Marinas, 2012-04-17, 1 file, -7/+0]

    The current_mm variable was used to store the new mm between the
    switch_mm() and switch_to() calls where an IPI to reset the context
    could have set the wrong mm. Since the interrupts are disabled
    during context switch, there is no need for this variable,
    current->active_mm already points to the current mm when interrupts
    are re-enabled.

    Reviewed-by: Will Deacon <will.deacon@arm.com>
    Tested-by: Will Deacon <will.deacon@arm.com>
    Reviewed-by: Frank Rowand <frank.rowand@am.sony.com>
    Tested-by: Marc Zyngier <Marc.Zyngier@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW on ASID-capable CPUs  [Catalin Marinas, 2012-04-17, 1 file, -16/+56]

    Since the ASIDs must be unique to an mm across all the CPUs in a
    system, the __new_context() function needs to broadcast a context
    reset event to all the CPUs during ASID allocation if a roll-over
    occurred. Such IPIs cannot be issued with interrupts disabled and
    ARM had to define __ARCH_WANT_INTERRUPTS_ON_CTXSW.

    This patch changes the check_context() function to
    check_and_switch_context() called from switch_mm(). In case of
    ASID-capable CPUs (ARMv6 onwards), if a new ASID is needed and the
    interrupts are disabled, it defers the __new_context() and
    cpu_switch_mm() calls to the post-lock switch hook where the
    interrupts are enabled. Setting the reserved TTBR0 was also moved
    to check_and_switch_context() from cpu_v7_switch_mm().

    Reviewed-by: Will Deacon <will.deacon@arm.com>
    Tested-by: Will Deacon <will.deacon@arm.com>
    Reviewed-by: Frank Rowand <frank.rowand@am.sony.com>
    Tested-by: Marc Zyngier <Marc.Zyngier@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* ARM: 7294/1: vectors: use gate_vma for vectors user mapping  [Will Deacon, 2012-03-24, 1 file, -28/+1]

    The current user mapping for the vectors page is inserted as a
    'horrible hack vma' into each task via arch_setup_additional_pages.
    This causes problems with the MM subsystem and vm_normal_page, as
    described here:

      https://lkml.org/lkml/2012/1/14/55

    Following the suggestion from Hugh in the above thread, this patch
    uses the gate_vma for the vectors user mapping, therefore
    consolidating the horrible hack VMAs into one.

    Acked-and-Tested-by: Nicolas Pitre <nico@linaro.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
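    As a sketch, the gate_vma approach amounts to one statically
    allocated VMA that the core MM queries through the gate-area hooks
    (illustrative; the real code fills in the fields at init time
    rather than with a static initializer):

      static struct vm_area_struct gate_vma = {
              .vm_start = 0xffff0000,
              .vm_end   = 0xffff0000 + PAGE_SIZE,
              .vm_flags = VM_READ | VM_EXEC | VM_MAYREAD | VM_MAYEXEC,
      };

      /* one shared VMA instead of a per-task 'horrible hack vma' */
      struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
      {
              return &gate_vma;
      }

      int in_gate_area(struct mm_struct *mm, unsigned long addr)
      {
              return addr >= gate_vma.vm_start && addr < gate_vma.vm_end;
      }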
* ARM: add a vma entry for the user accessible vector page  [Nicolas Pitre, 2010-10-01, 1 file, -1/+28]

    The kernel makes the high vector page visible to user space. This
    page contains (amongst others) small code segments that can be
    executed in user space. Make this page visible through ptrace and
    /proc/<pid>/mem in order to let gdb perform code parsing needed for
    proper unwinding.

    For example, the ERESTART_RESTARTBLOCK handler actually has a stack
    frame -- it returns to a PC value stored on the user's stack. To
    unwind after a "sleep" system call was interrupted twice, GDB would
    have to recognize this situation and understand that stack frame
    layout -- which it currently cannot do.

    We could fix this by hard-coding addresses in the vector page range
    into GDB, but that isn't really portable as not all of those
    addresses are guaranteed to remain stable across kernel releases.
    And having the gdb process make an exception for this page and get
    content from its own address space for it looks strange, and it is
    not future proof either.

    Being located above PAGE_OFFSET, this vma cannot be deleted by user
    space code.

    Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
* ARM: 5905/1: ARM: Global ASID allocation on SMP  [Catalin Marinas, 2010-02-15, 1 file, -0/+15]

    The current ASID allocation algorithm doesn't ensure the
    notification of the other CPUs when the ASID rolls over. This may
    lead to two processes using the same ASID (but different
    generation) or multiple threads of the same process using different
    ASIDs.

    This patch adds the broadcasting of the ASID rollover event to the
    other CPUs. To avoid a race on multiple CPUs modifying
    "cpu_last_asid" during the handling of the broadcast, the ASID
    numbering now starts at "smp_processor_id() + 1". At rollover, the
    cpu_last_asid will be set to NR_CPUS.

    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* cpumask: use mm_cpumask() wrapper: arm  [Rusty Russell, 2009-09-24, 1 file, -3/+4]

    Makes code futureproof against the impending change to
    mm->cpu_vm_mask.

    It's also a chance to use the new cpumask_ ops which take a pointer
    (the older ones are deprecated, but there's no hurry for arch
    code).

    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
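    The conversion is a pattern swap; a representative example (sketch,
    as it would appear inside switch_mm()):

      /* before: direct field access with the deprecated cpumask ops */
      cpu_set(cpu, next->cpu_vm_mask);

      /* after: the mm_cpumask() accessor plus the pointer-taking ops,
       * insulating callers from how cpu_vm_mask is stored */
      cpumask_set_cpu(cpu, mm_cpumask(next));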
* nommu: Remove the context.id from asm-offsets.c when !MMU  [Catalin Marinas, 2009-07-24, 1 file, -0/+2]

    There is no MMU context switching on MMU-less systems.

    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* [ARM] Remove linux/sched.h from asm/cacheflush.h and asm/uaccess.h  [Russell King, 2008-11-29, 1 file, -0/+1]

    ... and fix those drivers that were incorrectly relying upon that
    include.

    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* [ARM] cachetype: move definitions to separate header  [Russell King, 2008-09-01, 1 file, -0/+1]

    Rather than pollute asm/cacheflush.h with the cache type
    definitions, move them to asm/cachetype.h, and include this new
    header where necessary.

    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* [ARM] move include/asm-arm to arch/arm/include/asm  [Russell King, 2008-08-02, 1 file, -0/+117]

    Move platform independent header files to arch/arm/include/asm,
    leaving those in asm/arch* and asm/plat* alone.

    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>