The current ASID allocation algorithm does not ensure that the other CPUs
are notified when the ASID rolls over. This may lead to two processes
using the same ASID (but different generations), or to multiple threads of
the same process using different ASIDs.

This patch adds broadcasting of the ASID rollover event to the other CPUs.
To avoid a race with multiple CPUs modifying "cpu_last_asid" while the
broadcast is being handled, the ASID numbering now starts at
"smp_processor_id() + 1". At rollover, cpu_last_asid is set to NR_CPUS.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
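
A minimal sketch of the scheme described above, assuming illustrative names
such as cpu_asid_lock, ASID_MASK and reset_context(); this is not the actual
kernel diff:

```c
#include <linux/sched.h>
#include <linux/smp.h>
#include <linux/spinlock.h>
#include <asm/mmu_context.h>
#include <asm/tlbflush.h>

#define ASID_BITS	8
#define ASID_MASK	(~0UL << ASID_BITS)

static DEFINE_SPINLOCK(cpu_asid_lock);
unsigned int cpu_last_asid;

/* Runs on every other CPU via the rollover broadcast (IPI). */
static void reset_context(void *info)
{
	struct mm_struct *mm = current->active_mm;
	/*
	 * Each CPU takes a distinct value in 1..NR_CPUS above the new
	 * generation, so the concurrent handlers never touch cpu_last_asid.
	 */
	unsigned int asid = cpu_last_asid + smp_processor_id() + 1;

	flush_tlb_all();
	mm->context.id = asid;
	cpu_switch_mm(mm->pgd, mm);
}

void __new_context(struct mm_struct *mm)
{
	unsigned int asid;

	spin_lock(&cpu_asid_lock);
	asid = ++cpu_last_asid;
	if (unlikely((asid & ~ASID_MASK) == 0)) {
		/* Rollover: take this CPU's ASID, flush, then notify the
		 * other CPUs before any reused values are handed out. */
		asid = cpu_last_asid + smp_processor_id() + 1;
		flush_tlb_all();
		smp_call_function(reset_context, NULL, 1);
		cpu_last_asid += NR_CPUS;
	}
	mm->context.id = asid;
	spin_unlock(&cpu_asid_lock);
}
```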
|
Errata 411920 indicates that any "invalidate entire instruction cache"
operation can fail if the right conditions are present. This is not limited
to the operations in flush.c; it affects such operations elsewhere as well.
Place the workaround in the already existing __flush_icache_all() function
instead.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
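
A sketch of the centralised helper with the workaround folded in; the config
symbol and assembler routine name follow common usage for this erratum, but
treat the snippet as illustrative rather than the exact header:

```c
/* Illustrative version of __flush_icache_all() with the 411920 workaround. */
static inline void __flush_icache_all(void)
{
#ifdef CONFIG_ARM_ERRATA_411920
	/*
	 * A single "invalidate entire I-cache" operation may not take
	 * effect under the erratum conditions, so call the dedicated
	 * assembler routine that applies the documented workaround.
	 */
	extern void v6_icache_inval_all(void);
	v6_icache_inval_all();
#else
	asm("mcr	p15, 0, %0, c7, c5, 0	@ invalidate entire I-cache"
	    : : "r" (0));
#endif
}
```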
|
Make the code future-proof against the impending change to mm->cpu_vm_mask.
It is also a chance to use the new cpumask_* operations, which take a pointer
(the older ones are deprecated, but there is no hurry for arch code).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
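
A small before/after sketch of the kind of change involved; the wrapper
functions are made up for illustration, only the cpumask accessors matter:

```c
#include <linux/cpumask.h>
#include <linux/mm_types.h>
#include <linux/sched.h>

/* Before: the old macros poke mm->cpu_vm_mask directly. */
static void mark_mm_on_cpu_old(struct mm_struct *mm, int cpu)
{
	cpu_set(cpu, mm->cpu_vm_mask);		/* deprecated accessor */
}

/* After: pointer-based cpumask_* ops through the mm_cpumask() wrapper,
 * which keeps working if cpu_vm_mask later becomes a pointer. */
static void mark_mm_on_cpu_new(struct mm_struct *mm, int cpu)
{
	cpumask_set_cpu(cpu, mm_cpumask(mm));
}
```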
|
Close a hole in the ASID version switch, particularly the following
scenario:
    CPU0  MM  PID                    CPU1  MM  PID

          idle
                                           A   pid(A)
                                           A   idle(lazy tlb)

              * new asid version triggered by B *

          B   pid(B)
          A   pid(A)

              * MM A gets new asid version *

                                           A   idle(lazy tlb)
                                           A   pid(A)

              * CPU1 doesn't see the new ASID *
The result is that CPU1 continues running with the hardware set for the
original (stale) ASID value, while mm->context.id contains the new ASID
value. Consequently, the next MM fault on CPU1 updates the page table
entries, but flush_tlb_page() fails because it uses the wrong ASID.

There is a related case where a threaded application is allocated a new
ASID on one CPU while another of its threads is running on a different CPU.
That scenario is not fixed by this commit.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
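
A condensed sketch of the context-switch path that makes the hole visible:
the ASID check only runs when switch_mm() actually changes mm, so a CPU
sitting in lazy TLB on the same mm never re-reads mm->context.id. Names
follow the ARM code of that era, but the real switch_mm() also manages
mm_cpumask() and cache state:

```c
#include <linux/mm_types.h>
#include <linux/sched.h>
#include <asm/proc-fns.h>

#define ASID_BITS	8
extern unsigned int cpu_last_asid;
extern void __new_context(struct mm_struct *mm);

static inline void check_context(struct mm_struct *mm)
{
	/* Allocate a new ASID only if this mm's generation is stale. */
	if (unlikely((mm->context.id ^ cpu_last_asid) >> ASID_BITS))
		__new_context(mm);
}

static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
			     struct task_struct *tsk)
{
	if (prev != next) {
		check_context(next);
		cpu_switch_mm(next->pgd, next);
	}
	/*
	 * Lazy TLB: when prev == next (the idle task borrowed this mm),
	 * nothing happens here, so the hardware CONTEXTIDR keeps whatever
	 * ASID was programmed before the rollover on the other CPU.
	 */
}
```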
|
ARMv7 can have VIPT, PIPT or ASID-tagged VIVT I-cache. This patch
adds the necessary invalidation of the I-cache when the ASID numbers
are re-used.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
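
A sketch of where the invalidation sits: in the ASID rollover path, after the
TLB flush, the whole I-cache and branch predictor are invalidated when the
I-cache is ASID-tagged VIVT. The wrapper function is made up; the cache-type
test follows the usual ARM header naming:

```c
#include <asm/cachetype.h>
#include <asm/tlbflush.h>

/* Illustrative fragment of the rollover path in __new_context(). */
static void flush_on_asid_reuse(void)
{
	flush_tlb_all();
	if (icache_is_vivt_asid_tagged()) {
		/* Old lines tagged with a now-recycled ASID must not hit. */
		asm("mcr	p15, 0, %0, c7, c5, 0	@ invalidate I-cache\n"
		    "mcr	p15, 0, %0, c7, c5, 6	@ flush BTAC/BTB\n"
		    : : "r" (0));
	}
}
```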
|
On newer architectures (ARMv6, ARMv7), the depth of the prefetch and branch
prediction is implementation-defined, and there is a small risk of wrong ASID
tagging if TTBR0 is changed before the new context ID is set. The recommended
solution is to program a reserved ASID for the duration of the TTBR change.
This patch reserves ASID 0.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
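
A sketch of the recommended sequence, written as ARMv7-flavoured inline
assembly rather than the real cpu_*_switch_mm code; the C wrapper is made up
for illustration:

```c
/* Program the reserved ASID, switch TTBR0, then install the real ASID. */
static inline void switch_ttbr0_with_reserved_asid(unsigned long ttbr0,
						   unsigned int context_id)
{
	unsigned int reserved = 0;	/* ASID 0 stays reserved for this window */

	asm volatile(
	"mcr	p15, 0, %0, c13, c0, 1	@ CONTEXTIDR = reserved ASID\n\t"
	"isb\n\t"
	"mcr	p15, 0, %1, c2, c0, 0	@ TTBR0 = new translation table\n\t"
	"isb\n\t"
	"mcr	p15, 0, %2, c13, c0, 1	@ CONTEXTIDR = new context ID\n\t"
	"isb"
	: : "r" (reserved), "r" (ttbr0), "r" (context_id) : "memory");
}
```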
|
Rename mmu.c to context.c - it's the ARMv6 ASID context handling
code rather than generic "mmu" handling code.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>