| author | Benjamin Herrenschmidt <benh@kernel.crashing.org> | 2012-03-07 16:48:45 +1100 |
|---|---|---|
| committer | Benjamin Herrenschmidt <benh@kernel.crashing.org> | 2012-03-09 10:55:08 +1100 |
| commit | a546498f3bf9aac311c66f965186373aee2ca0b0 (patch) | |
| tree | 86fb9a778aba26df3810acd8e52a921a2d84489b | /arch/powerpc/include/asm/hw_irq.h |
| parent | 1b70117924a4f254840ed70fbe3020d4519a1a9a (diff) | |
powerpc: Call do_page_fault() with interrupts off
We currently turn interrupts back to their previous state before
calling do_page_fault(). This is annoying when debugging, as a bad
fault may already have lost some processor state by the time we get
into the debugger.
We also end up calling some generic code, such as notify_page_fault(),
with interrupts enabled, which could be unexpected.
This changes our code to behave more like other architectures and
makes the assembly entry code call into do_page_fault() with
interrupts disabled. They are conditionally re-enabled from
within do_page_fault() in the same spot x86 does it (a sketch of
the resulting flow follows the notes below).
While there, add the might_sleep() test in the case of a successful
trylock of the mmap semaphore, again like x86.
Also fix a bug in the existing assembly where r12 (_MSR) could get
clobbered by C calls (the DTL accounting in the exception common
macro and DISABLE_INTS) in some cases.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---
v2. Add the r12 clobber fix
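
The fault.c side of this change is not shown here (the diffstat below is
limited to hw_irq.h), so the following is only a minimal sketch of the
do_page_fault() flow described above. It uses the arch_irq_disabled_regs()
helper added by the hunk below; the surrounding details (mm->mmap_sem, the
slow-path handling) are assumptions based on a 3.3-era tree, not the exact
patch.

```c
/*
 * Illustrative sketch only -- not the actual fault.c hunk.  It shows the
 * pattern described above: enter with interrupts disabled, conditionally
 * re-enable them, and add might_sleep() on a successful trylock.
 */
int do_page_fault(struct pt_regs *regs, unsigned long address,
		  unsigned long error_code)
{
	struct mm_struct *mm = current->mm;

	/*
	 * The assembly entry code now calls us with interrupts disabled.
	 * Re-enable them only if the interrupted context had them
	 * enabled, in the same spot x86 does it.
	 */
	if (!arch_irq_disabled_regs(regs))
		local_irq_enable();

	if (!down_read_trylock(&mm->mmap_sem)) {
		/* slow path: validate the faulting address, then block */
		down_read(&mm->mmap_sem);
	} else {
		/*
		 * The trylock succeeded, so we missed the might_sleep()
		 * that down_read() would have done; add it back so
		 * atomic-context misuse is still caught.
		 */
		might_sleep();
	}

	/* ... find_vma(), permission checks, handle_mm_fault(), etc. ... */
	return 0;
}
```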
Diffstat (limited to 'arch/powerpc/include/asm/hw_irq.h')
-rw-r--r-- | arch/powerpc/include/asm/hw_irq.h | 10 |
1 files changed, 10 insertions, 0 deletions
```diff
diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index bb712c9488b3..531ba00fcbab 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -79,6 +79,11 @@ static inline bool arch_irqs_disabled(void)
 	get_paca()->hard_enabled = 0;	\
 } while(0)
 
+static inline bool arch_irq_disabled_regs(struct pt_regs *regs)
+{
+	return !regs->softe;
+}
+
 #else /* CONFIG_PPC64 */
 
 #define SET_MSR_EE(x)	mtmsr(x)
@@ -139,6 +144,11 @@ static inline bool arch_irqs_disabled(void)
 
 #define hard_irq_disable()		arch_local_irq_disable()
 
+static inline bool arch_irq_disabled_regs(struct pt_regs *regs)
+{
+	return !(regs->msr & MSR_EE);
+}
+
 #endif /* CONFIG_PPC64 */
 
 #define ARCH_IRQ_INIT_FLAGS	IRQ_NOREQUEST
```
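
Note the asymmetry between the two variants: on 64-bit, arch_irq_disabled_regs()
reports the lazily-tracked soft-disable state saved in regs->softe rather than
the hard MSR_EE bit, while the 32-bit version, which has no soft-disable
tracking, falls back to testing MSR_EE in the saved MSR. Either way it answers
the question the sketch above relies on: did the interrupted context logically
have interrupts enabled?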