path: root/arch/powerpc/kernel/exceptions-64s.S
author      Paul Mackerras <paulus@samba.org>  2009-08-18 08:25:32 +1000
committer   Paul Mackerras <paulus@samba.org>  2009-08-18 14:48:47 +1000
commit      20002ded4d937ca87aca6253b874920a96a763c4 (patch)
tree        1253f60bc20c95598ae1d1e1578d9aac9ea7a6b0  /arch/powerpc/kernel/exceptions-64s.S
parent      9c1e105238c474d19905af504f2e7f42d4f71f9e (diff)
download    blackbird-op-linux-20002ded4d937ca87aca6253b874920a96a763c4.tar.gz
            blackbird-op-linux-20002ded4d937ca87aca6253b874920a96a763c4.zip
perf_counter: powerpc: Add callchain support
This adds support for tracing callchains for powerpc, both 32-bit and 64-bit, and both in the kernel and userspace, from PMU interrupt context.

The first three entries stored for each callchain are the NIP (next instruction pointer), LR (link register), and the contents of the LR save area in the second stack frame (the first is ignored because the ABI convention on powerpc is that functions save their return address in their caller's stack frame). Because leaf functions don't have to save their return address (LR value) and don't have to establish a stack frame, it's possible for either or both of LR and the second stack frame's LR save area to have valid return addresses in them. This is basically impossible to disambiguate without either reading the code or looking at auxiliary information such as CFI tables. Since we don't want to do either of those things at interrupt time, we store both LR and the second stack frame's LR save area.

Once we get past the second stack frame, there is no ambiguity; all return addresses we get are reliable. For kernel traces, we check whether they are valid kernel instruction addresses and store zero instead if they are not (rather than omitting them, which would make it impossible for userspace to know which was which). We also store zero instead of the second stack frame's LR save area value if it is the same as LR.

For kernel traces, we check for interrupt frames, and for user traces, we check for signal frames. In each case, since we're starting a new trace, we store a PERF_CONTEXT_KERNEL/USER marker so that userspace knows that the next three entries are NIP, LR and the second stack frame for the interrupted context.

We read user memory with __get_user_inatomic. On 64-bit, if this PMU interrupt occurred while interrupts are soft-disabled, and there is no MMU hash table entry for the page, we will get an -EFAULT return from __get_user_inatomic even if there is a valid Linux PTE for the page, since hash_page isn't reentrant. Thus we have code here to read the Linux PTE and access the page via the kernel linear mapping. Since 64-bit doesn't use (or need) highmem there is no need to do kmap_atomic. On 32-bit, we don't do soft interrupt disabling, so this complication doesn't occur and there is no need to fall back to reading the Linux PTE, since hash_page (or the TLB miss handler) will get called automatically if necessary.

Note that we cannot get PMU interrupts in the interval during context switch between switch_mm (which switches the user address space) and switch_to (which actually changes current to the new process). On 64-bit this is because interrupts are hard-disabled in switch_mm and stay hard-disabled until they are soft-enabled later, after switch_to has returned. So there is no possibility of trying to do a user stack trace when the user address space is not current's address space.

Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
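For illustration, a minimal sketch of the kernel-side walk described above, assuming the powerpc helpers validate_sp(), STACK_FRAME_LR_SAVE and kernel_text_address(); callchain_store() is a local helper defined here, interrupt-frame detection is omitted, and this is not the literal patch contents:

/*
 * Simplified sketch of the kernel callchain walk: store the context
 * marker, NIP, LR and the second frame's LR save word, zeroing the
 * ambiguous first two return addresses when they look bogus.
 */
#include <linux/perf_counter.h>
#include <linux/kernel.h>
#include <linux/sched.h>
#include <asm/ptrace.h>
#include <asm/processor.h>

static void callchain_store(struct perf_callchain_entry *entry, u64 ip)
{
	if (entry->nr < PERF_MAX_STACK_DEPTH)
		entry->ip[entry->nr++] = ip;
}

static void kernel_callchain_sketch(struct pt_regs *regs,
				    struct perf_callchain_entry *entry)
{
	unsigned long lr = regs->link;
	unsigned long sp = regs->gpr[1];
	unsigned long next_sp, next_ip;
	int level = 0;

	/* New trace: mark the context, then store NIP. */
	callchain_store(entry, PERF_CONTEXT_KERNEL);
	callchain_store(entry, regs->nip);

	while (validate_sp(sp, current, STACK_FRAME_OVERHEAD)) {
		unsigned long *fp = (unsigned long *)sp;

		next_sp = fp[0];		/* back chain to the next frame */
		if (level == 0)
			next_ip = lr;		/* second entry: LR itself */
		else
			next_ip = fp[STACK_FRAME_LR_SAVE];

		/*
		 * LR and the second frame's LR save area are ambiguous:
		 * store 0 for duplicate or non-kernel-text values rather
		 * than dropping them, so userspace can tell which is which.
		 */
		if ((level == 1 && next_ip == lr) ||
		    (level <= 1 && !kernel_text_address(next_ip)))
			next_ip = 0;

		callchain_store(entry, next_ip);
		level++;
		sp = next_sp;
	}
}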
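Similarly, a sketch of the 64-bit user stack read: the fast path uses __get_user_inatomic(), and when that faults (for example, no hash table entry while soft-disabled) it falls back to a slow path; read_user_stack_slow() is assumed here to walk the Linux page tables and read via the kernel linear mapping, and again this is illustrative rather than the literal patch contents:

#include <linux/uaccess.h>
#include <asm/processor.h>

/* Assumed helper: read nb bytes at ptr by looking up the Linux PTE. */
static int read_user_stack_slow(void __user *ptr, void *buf, int nb);

static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret)
{
	/* Reject addresses outside the user region or not 8-byte aligned. */
	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned long) ||
	    ((unsigned long)ptr & 7))
		return -EFAULT;

	/* Fast path: succeeds whenever a hash table entry exists. */
	if (!__get_user_inatomic(*ret, ptr))
		return 0;

	/* Slow path: hash_page isn't reentrant, so consult the Linux PTE. */
	return read_user_stack_slow(ptr, ret, sizeof(unsigned long));
}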
Diffstat (limited to 'arch/powerpc/kernel/exceptions-64s.S')
0 files changed, 0 insertions, 0 deletions