author | Ralf Baechle <ralf@linux-mips.org> | 2007-10-08 16:38:37 +0100 |
---|---|---|
committer | Ralf Baechle <ralf@linux-mips.org> | 2007-10-29 19:35:37 +0000 |
commit | a76ab5c10d99bdf458067cb495e72c0ee5f09909 (patch) | |
tree | 6623ea1ee8c9a273afe1a05069f54b6ca745b91b /arch/mips/mm | |
parent | a370605594bc9f375d2912096f01643c46b4b709 (diff) | |
download | blackbird-op-linux-a76ab5c10d99bdf458067cb495e72c0ee5f09909.tar.gz blackbird-op-linux-a76ab5c10d99bdf458067cb495e72c0ee5f09909.zip |
[MIPS] MT: Fix bug in multithreaded kernels.
When GDB writes a breakpoint into the address area of an inferior process, the
kernel needs to invalidate the modified memory in the inferior. This is done
by calling flush_cache_page, which in turn calls r4k_flush_cache_page and, for
a VSMP or SMTC kernel, local_r4k_flush_cache_page via r4k_on_each_cpu().
As the VSMP and SMTC SMP kernels for the 34K run on a single set of shared
caches, it is possible to get away without interprocessor function calls.
This optimization is implemented in r4k_on_each_cpu, so
local_r4k_flush_cache_page is only ever called on the local CPU.
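For context, the shortcut that avoids the interprocessor call lives in the
r4k_on_each_cpu() helper in c-r4k.c. The sketch below is a simplified
paraphrase of that helper as of this era (not a verbatim quote): on
CONFIG_MIPS_MT_SMP/CONFIG_MIPS_MT_SMTC kernels the smp_call_function() step is
compiled out and only the local call remains.

	/*
	 * Simplified paraphrase of r4k_on_each_cpu() (c-r4k.c, ~2.6.23).
	 * On MT kernels the caches are shared, so the cross-CPU call is
	 * skipped and the flush function runs only on the local CPU.
	 */
	static inline void r4k_on_each_cpu(void (*func) (void *info), void *info,
	                                   int retry, int wait)
	{
		preempt_disable();
	#if !defined(CONFIG_MIPS_MT_SMP) && !defined(CONFIG_MIPS_MT_SMTC)
		/* Ordinary SMP: also run the flush on all other CPUs. */
		smp_call_function(func, info, retry, wait);
	#endif
		func(info);
		preempt_enable();
	}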
This is where the following code in local_r4k_flush_cache_page() strikes:
	/*
	 * If ownes no valid ASID yet, cannot possibly have gotten
	 * this page into the cache.
	 */
	if (cpu_context(smp_processor_id(), mm) == 0)
		return;
On VSMP and SMTC kernels there is a separate cpu_context() value for each CPU (TC).
So if the CPU executing local_r4k_flush_cache_page has not accessed the mm
but one of the other CPUs has, there may still be data to be flushed from the
cache, yet local_r4k_flush_cache_page will falsely return early, leaving the
I-cache inconsistent for the breakpoint.
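To make the failure mode concrete outside the kernel, here is a small
standalone C program (purely illustrative; the array and its values are
hypothetical, and only the names mirror the kernel identifiers). It models a
two-CPU system in which only CPU 1 ever ran the mm: the old check, performed
from CPU 0, sees no ASID and skips the flush, while a check across all CPUs
does not.

	#include <stdio.h>

	#define NR_CPUS 2

	/* Toy per-CPU context for one mm; 0 means "no ASID on that CPU". */
	static unsigned long cpu_context[NR_CPUS] = { 0, 0x101 }; /* only CPU 1 used the mm */

	/* Old logic: only consult the CPU that is executing the flush. */
	static int old_check(int this_cpu)
	{
		return cpu_context[this_cpu] != 0;
	}

	/* New logic: the mm is live if any CPU holds an ASID for it. */
	static int any_cpu_check(void)
	{
		int i;

		for (i = 0; i < NR_CPUS; i++)
			if (cpu_context[i])
				return 1;
		return 0;
	}

	int main(void)
	{
		int this_cpu = 0; /* CPU running local_r4k_flush_cache_page() */

		printf("old check flushes: %s\n", old_check(this_cpu) ? "yes" : "no");
		printf("any-CPU check flushes: %s\n", any_cpu_check() ? "yes" : "no");
		return 0;
	}

The old check answers "no" and returns early even though CPU 1 may have pulled
the page into the shared cache; this is exactly the case the has_valid_asid()
helper in the patch below is meant to catch.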
While the issue was discovered with GDB, it also exists in
local_r4k_flush_cache_range() and local_r4k_flush_cache_mm().
Fixed by introducing a new function has_valid_asid() which on MT kernels
returns true if an mm is active on any processor in the system.
This is relatively expensive since cache misses have to be assumed for the
memory accesses in that loop, but it seems the most viable solution for
2.6.23 and older -stable kernels.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Diffstat (limited to 'arch/mips/mm')
-rw-r--r-- | arch/mips/mm/c-r4k.c | 21 |
1 file changed, 18 insertions, 3 deletions
diff --git a/arch/mips/mm/c-r4k.c b/arch/mips/mm/c-r4k.c
index d7088331fb0f..6806d58211b2 100644
--- a/arch/mips/mm/c-r4k.c
+++ b/arch/mips/mm/c-r4k.c
@@ -345,11 +345,26 @@ static void r4k___flush_cache_all(void)
 	r4k_on_each_cpu(local_r4k___flush_cache_all, NULL, 1, 1);
 }
 
+static inline int has_valid_asid(const struct mm_struct *mm)
+{
+#if defined(CONFIG_MIPS_MT_SMP) || defined(CONFIG_MIPS_MT_SMTC)
+	int i;
+
+	for_each_online_cpu(i)
+		if (cpu_context(i, mm))
+			return 1;
+
+	return 0;
+#else
+	return cpu_context(smp_processor_id(), mm);
+#endif
+}
+
 static inline void local_r4k_flush_cache_range(void * args)
 {
 	struct vm_area_struct *vma = args;
 
-	if (!(cpu_context(smp_processor_id(), vma->vm_mm)))
+	if (!(has_valid_asid(vma->vm_mm)))
 		return;
 
 	r4k_blast_dcache();
@@ -368,7 +383,7 @@ static inline void local_r4k_flush_cache_mm(void * args)
 {
 	struct mm_struct *mm = args;
 
-	if (!cpu_context(smp_processor_id(), mm))
+	if (!has_valid_asid(mm))
 		return;
 
 	/*
@@ -420,7 +435,7 @@ static inline void local_r4k_flush_cache_page(void *args)
 	 * If ownes no valid ASID yet, cannot possibly have gotten
 	 * this page into the cache.
 	 */
-	if (cpu_context(smp_processor_id(), mm) == 0)
+	if (!has_valid_asid(mm))
 		return;
 
 	addr &= PAGE_MASK;