path: root/arch/powerpc/kernel/entry_64.S
Commit history (most recent first); each entry shows the commit subject, author, date, and diffstat for this file (files changed, -removed/+added lines).
* [POWERPC] irqtrace support for 64-bit powerpc (Benjamin Herrenschmidt, 2008-04-18, 1 file, -2/+25)

  This adds the low level irq tracing hooks to the powerpc architecture
  needed to enable full lockdep functionality. This is partly based on
  Johannes Berg's initial version. I removed the asm trampoline that
  isn't needed (thus improving performance) and modified all sorts of
  bits and pieces, reworking most of the assembly, etc...

  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
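As an aside, the shape of these hooks can be modelled in a few lines of ordinary C. This is only an illustrative sketch: the trace_hardirqs_on()/trace_hardirqs_off() stubs stand in for lockdep's real hooks, and the irqs_enabled flag stands in for the EE bit in the MSR.

    #include <stdbool.h>
    #include <stdio.h>

    static bool irqs_enabled = true;

    /* Stand-ins for lockdep's tracing hooks. */
    static void trace_hardirqs_off(void) { printf("lockdep: irqs going off\n"); }
    static void trace_hardirqs_on(void)  { printf("lockdep: irqs going on\n"); }

    static void local_irq_disable_model(void)
    {
        irqs_enabled = false;     /* actually disable first... */
        trace_hardirqs_off();     /* ...then record the state change */
    }

    static void local_irq_enable_model(void)
    {
        trace_hardirqs_on();      /* record first, so lockdep stays consistent... */
        irqs_enabled = true;      /* ...then actually enable */
    }

    int main(void)
    {
        local_irq_disable_model();
        local_irq_enable_model();
        printf("enabled=%d\n", irqs_enabled);
        return 0;
    }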
* [POWERPC] Move stackframe definitions to common header (Benjamin Herrenschmidt, 2008-04-18, 1 file, -1/+2)

  This moves various definitions used all over the place to parse stack
  frames to ptrace.h so only one definition is needed.

  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] Add 1TB workaround for PA6T (Olof Johansson, 2007-10-17, 1 file, -0/+6)

  PA6T has a bug where the slbie instruction does not honor the large
  segment bit. As a result, we have to always use slbia when switching
  context.

  We don't have to worry about changing the slbie's during fault
  processing, since they should never be replacing one VSID with another
  using the same ESID. I.e. there's no risk of inserting duplicate
  entries due to a failed slbie of the old entry. So as long as we clear
  it out on context switch we should be fine.

  Signed-off-by: Olof Johansson <olof@lixom.net>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] Use 1TB segments (Paul Mackerras, 2007-10-12, 1 file, -1/+13)

  This makes the kernel use 1TB segments for all kernel mappings and for
  user addresses of 1TB and above, on machines which support them
  (currently POWER5+, POWER6 and PA6T).

  We detect that the machine supports 1TB segments by looking at the
  ibm,processor-segment-sizes property in the device tree.

  We don't currently use 1TB segments for user addresses < 1T, since
  that would effectively prevent 32-bit processes from using huge pages
  unless we also had a way to revert to using 256MB segments. That would
  be possible but would involve extra complications (such as keeping
  track of which segment size was used when HPTEs were inserted) and is
  not addressed here.

  Parts of this patch were originally written by Ben Herrenschmidt.

  Signed-off-by: Paul Mackerras <paulus@samba.org>
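For illustration, the detection step could look roughly like the stand-alone sketch below. The supports_1tb_segments() helper and the flat list of segment-size shifts are assumptions made for the example, not the kernel's actual device-tree parsing code.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical helper: look for a 1TB (2^40 byte) segment size in the list. */
    static bool supports_1tb_segments(const unsigned int *shifts, int n)
    {
        for (int i = 0; i < n; i++)
            if (shifts[i] == 40)           /* a shift of 40 means 1TB segments */
                return true;
        return false;
    }

    int main(void)
    {
        unsigned int prop[] = { 28, 40 };  /* e.g. 256MB and 1TB segments supported */
        printf("1TB segments: %s\n",
               supports_1tb_segments(prop, 2) ? "yes" : "no");
        return 0;
    }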
* [POWERPC] Remove barriers from the SLB shadow buffer update (Michael Neuling, 2007-09-19, 1 file, -4/+4)

  After talking to an IBM POWER hypervisor (PHYP) design and development
  guy, there seems to be no need for memory barriers when updating the
  SLB shadow buffer provided we only update it from the current CPU,
  which we do. Also, these guys see no need in the future for these
  barriers.

  Signed-off-by: Michael Neuling <mikey@neuling.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] Fixes for the SLB shadow buffer code (Michael Neuling, 2007-08-03, 1 file, -0/+3)

  On a machine with hardware 64kB pages and a kernel configured for a
  64kB base page size, we need to change the vmalloc segment from 64kB
  pages to 4kB pages if some driver creates a non-cacheable mapping in
  the vmalloc area. However, we never updated the SLB shadow buffer.
  This fixes it. Thanks to paulus for finding this.

  Also added some write barriers to ensure the shadow buffer contents
  are always consistent.

  Signed-off-by: Michael Neuling <mikey@neuling.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* remove unused TIF_NOTIFY_RESUME flag (Stephane Eranian, 2007-07-31, 1 file, -1/+0)

  Remove unused TIF_NOTIFY_RESUME flag for all processor architectures.
  The flag was not used except on IA-64, where the patch replaces it
  with TIF_PERFMON_WORK.

  Signed-off-by: stephane eranian <eranian@hpl.hp.com>
  Cc: <linux-arch@vger.kernel.org>
  Cc: "Luck, Tony" <tony.luck@intel.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* [POWERPC] Clear RI bit in MSR before restoring r13 when returning to userspace (Paul Mackerras, 2007-02-07, 1 file, -26/+33)

  Some instruction tracing tools use the RI (recoverable interrupt) bit
  in the MSR to indicate when it's safe to single-step. Currently we
  clear RI after restoring r13 when returning to userspace. However, if
  we single-step past the point where r13 is restored, we'll corrupt r13
  in the exception entry code and not restore it. This moves the
  clearing of RI to just before r13 is restored so this doesn't happen.

  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] Fix manual assembly WARN_ON() in enter_rtas(). (David Woodhouse, 2007-01-09, 1 file, -8/+5)

  When we switched over to the generic BUG mechanism we forgot to change
  the assembly code which open-codes a WARN_ON() in enter_rtas(), so the
  bug table got corrupted. This patch provides an EMIT_BUG_ENTRY macro
  for use in assembly code, and uses it in entry_64.S.

  Tested with CONFIG_DEBUG_BUGVERBOSE on ppc64 but not without -- I
  tried to turn it off but it wouldn't go away; I suspect Aunt Tillie
  probably needed it.

  This version gets __FILE__ and __LINE__ right in the assembly version
  -- rather than saying include/asm-powerpc/bug.h line 21 every time
  which is a little suboptimal.

  Signed-off-by: David Woodhouse <dwmw2@infradead.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] iSeries: Eliminate "exceeds stub group size" warnings (Stephen Rothwell, 2006-12-04, 1 file, -1/+3)

  Commit 3ccfc65c5004e5fe5cfbffe43b8acc686680b53e missed the same fixes
  for legacy iSeries specific code, so make some more symbols no longer
  global.

  Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [PATCH] Remove occurrences of PPC_MULTIPLATFORM in head_64.S (s.hauer@pengutronix.de, 2006-11-13, 1 file, -4/+0)

  Since iSeries is merged to MULTIPLATFORM, there is no way to build a
  64bit kernel without MULTIPLATFORM, so PPC_MULTIPLATFORM can be
  removed in 64bit-only files.

  Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
  Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] Make sure interrupt enable gets restored properly (Paul Mackerras, 2006-10-18, 1 file, -0/+4)

  The lazy IRQ disable patch missed a couple of places where the
  interrupt enable flags need to be restored correctly. First, we
  weren't restoring the paca->hard_enabled flag on interrupt exit.
  Instead of saving it on entry, we compute it from the MSR_EE bit in
  the MSR we are restoring at exit. Secondly, the MMU hash miss code was
  clearing both paca->soft_enabled and paca->hard_enabled but not
  restoring them in the case where hash_page was able to resolve the
  miss from the Linux page tables.

  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] Lazy interrupt disabling for 64-bit machines (Paul Mackerras, 2006-10-16, 1 file, -21/+18)

  This implements a lazy strategy for disabling interrupts. This means
  that local_irq_disable() et al. just clear the 'interrupts are
  enabled' flag in the paca. If an interrupt comes along, the interrupt
  entry code notices that interrupts are supposed to be disabled, and
  clears the EE bit in SRR1, clears the 'interrupts are hard-enabled'
  flag in the paca, and returns. This means that interrupts only
  actually get disabled in the processor when an interrupt comes along.

  When interrupts are enabled by local_irq_enable() et al., the code
  sets the interrupts-enabled flag in the paca, and then checks whether
  interrupts got hard-disabled. If so, it also sets the EE bit in the
  MSR to hard-enable the interrupts.

  This has the potential to improve performance, and also makes it
  easier to make a kernel that can boot on iSeries and on other 64-bit
  machines, since this lazy-disable strategy is very similar to the
  soft-disable strategy that iSeries already uses.

  This version renames paca->proc_enabled to paca->soft_enabled, and
  changes a couple of soft-disables in the kexec code to hard-disables,
  which should fix the crash that Michael Ellerman saw. This doesn't yet
  use a reserved CR field for the soft_enabled and hard_enabled flags.
  This applies on top of Stephen Rothwell's patches to make it possible
  to build a combined iSeries/other kernel.

  Signed-off-by: Paul Mackerras <paulus@samba.org>
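The soft/hard split described above can be modelled in a few lines of stand-alone C. This is a sketch of the idea only, reusing the soft_enabled/hard_enabled names from the commit text; it is not the kernel implementation.

    #include <stdbool.h>
    #include <stdio.h>

    struct paca_model {
        bool soft_enabled;   /* what local_irq_disable()/enable() track */
        bool hard_enabled;   /* whether EE is actually set in the MSR */
    };

    static struct paca_model paca = { .soft_enabled = true, .hard_enabled = true };

    static void local_irq_disable_model(void)
    {
        paca.soft_enabled = false;          /* cheap: just note the intent */
    }

    static void interrupt_arrives_model(void)
    {
        if (!paca.soft_enabled) {
            paca.hard_enabled = false;      /* only now really turn interrupts off */
            printf("interrupt deferred, hard-disabled\n");
            return;
        }
        printf("interrupt handled\n");
    }

    static void local_irq_enable_model(void)
    {
        paca.soft_enabled = true;
        if (!paca.hard_enabled) {
            paca.hard_enabled = true;       /* set EE again; deferred work gets done */
            printf("hard-enabled again\n");
        }
    }

    int main(void)
    {
        local_irq_disable_model();
        interrupt_arrives_model();
        local_irq_enable_model();
        return 0;
    }

The point of the model: the common disable/enable pair touches only a flag, and the expensive processor-state change happens at most once, when an interrupt actually arrives while soft-disabled.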
* [POWERPC] implement BEGIN/END_FW_FTR_SECTION (Stephen Rothwell, 2006-10-03, 1 file, -6/+12)

  and use it in all the obvious places in assembler code.

  Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
* [POWERPC] SLB shadow buffer cleanup (Michael Neuling, 2006-08-25, 1 file, -9/+4)

  Cleanup some of the #define magic as suggested by Milton.

  Signed-off-by: Michael Neuling <mikey@neuling.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] Implement SLB shadow buffer (Michael Neuling, 2006-08-08, 1 file, -0/+13)

  This adds a shadow buffer for the SLBs and registers it with PHYP.

  Only the bolted SLB entries (top 3) are shadowed.

  The SLB shadow buffer tells the hypervisor what the kernel needs to
  have in the SLB for the kernel to be able to function. The hypervisor
  can use this information to speed up partition context switches.

  Signed-off-by: Michael Neuling <mikey@neuling.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
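As a rough orientation, a shadow of the bolted entries could be pictured like the C sketch below. The field names, the three-entry size and the shadow_update() helper are illustrative assumptions for this example; the real layout is whatever the hypervisor interface defines.

    #include <stdint.h>
    #include <stdio.h>

    #define SLB_NUM_BOLTED 3                /* only the bolted entries are shadowed */

    struct slb_shadow_model {
        uint32_t persistent;                /* how many bolted entries are valid */
        uint32_t buffer_length;             /* size of the structure, for the hypervisor */
        struct {
            uint64_t esid;                  /* effective segment id (plus valid bit) */
            uint64_t vsid;                  /* virtual segment id and flags */
        } save_area[SLB_NUM_BOLTED];
    };

    /* Keep the shadow in step whenever the kernel rewrites a bolted slot. */
    static void shadow_update(struct slb_shadow_model *s, int slot,
                              uint64_t esid, uint64_t vsid)
    {
        s->save_area[slot].esid = 0;        /* never leave a half-updated pair visible */
        s->save_area[slot].vsid = vsid;
        s->save_area[slot].esid = esid;
    }

    int main(void)
    {
        struct slb_shadow_model s = { .persistent = SLB_NUM_BOLTED,
                                      .buffer_length = sizeof(s) };
        shadow_update(&s, 0, 0x1, 0x2);
        printf("slot 0: esid=%#llx vsid=%#llx\n",
               (unsigned long long)s.save_area[0].esid,
               (unsigned long long)s.save_area[0].vsid);
        return 0;
    }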
* Remove obsolete #include <linux/config.h> (Jörn Engel, 2006-06-30, 1 file, -1/+0)

  Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
  Signed-off-by: Adrian Bunk <bunk@stusta.de>
* [POWERPC] system call micro optimisation (Anton Blanchard, 2006-06-15, 1 file, -1/+1)

  In the syscall path we currently have:

      crclr   so
      mfcr    r9

  If we shift the crclr up we can avoid a stall on some CPUs.

  Signed-off-by: Anton Blanchard <anton@samba.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [PATCH] powerpc: Workaround for pSeries RTAS bug (Mike Kravetz, 2006-03-28, 1 file, -0/+6)

  A bug in the RTAS services incorrectly interprets some bits in the CR
  when called from the OS. Specifically, bits in CR4. The result could
  be a firmware crash that also takes down the partition. A firmware fix
  is in the works. We have seen this situation when performing DLPAR
  operations. As a temporary workaround, clear the CR in enter_rtas().
  Note that enter_rtas() will not set any bits in CR4 before calling
  RTAS.

  Also note that the 32 bit version of enter_rtas() should have the same
  work around even though the chances of hitting the bug are much
  smaller due to the lack of DLPAR on 32 bit kernels. However, my
  assembly skills are a bit rusty and the 32 bit code doesn't seem to
  follow the conventions for where things should be saved. In addition,
  I don't have a system to test 32 bit kernels. Help creating and at
  least touch testing the same workaround for 32 bit would be
  appreciated.

  Signed-off-by: Mike Kravetz <kravetz@us.ibm.com>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* Merge ../linux-2.6 (Paul Mackerras, 2006-03-09, 1 file, -73/+21)
* powerpc: Fix various syscall/signal/swapcontext bugs (Paul Mackerras, 2006-03-08, 1 file, -73/+21)

  A careful reading of the recent changes to the system call entry/exit
  paths revealed several problems, plus some things that could be
  simplified and improved:

  * 32-bit wasn't testing the _TIF_NOERROR bit in the syscall fast exit
    path, so it was only doing anything with it once it saw some other
    bit being set. In other words, the noerror behaviour would apply to
    the next system call where we had to reschedule or deliver a signal,
    which is not necessarily the current system call.

  * 32-bit wasn't doing the call to ptrace_notify in the syscall exit
    path when the _TIF_SINGLESTEP bit was set.

  * _TIF_RESTOREALL was in both _TIF_USER_WORK_MASK and
    _TIF_PERSYSCALL_MASK, which is odd since _TIF_RESTOREALL is only set
    by system calls. I took it out of _TIF_USER_WORK_MASK.

  * On 64-bit, _TIF_RESTOREALL wasn't causing the non-volatile registers
    to be restored (unless perhaps a signal was delivered or the syscall
    was traced or single-stepped). Thus the non-volatile registers
    weren't restored on exit from a signal handler. We probably got away
    with it mostly because signal handlers written in C wouldn't alter
    the non-volatile registers.

  * On 32-bit I simplified the code and made it more like 64-bit by
    making the syscall exit path jump to ret_from_except to handle
    preemption and signal delivery.

  * 32-bit was calling do_signal unnecessarily when _TIF_RESTOREALL was
    set - but I think because of that 32-bit was actually restoring the
    non-volatile registers on exit from a signal handler.

  * I changed the order of enabling interrupts and saving the
    non-volatile registers before calling do_syscall_trace_leave; now we
    enable interrupts first.

  Signed-off-by: Paul Mackerras <paulus@samba.org>
* powerpc: Implement accurate task and CPU time accounting (Paul Mackerras, 2006-02-24, 1 file, -2/+5)

  This implements accurate task and cpu time accounting for 64-bit
  powerpc kernels. Instead of accounting a whole jiffy of time to a task
  on a timer interrupt because that task happened to be running at the
  time, we now account time in units of timebase ticks according to the
  actual time spent by the task in user mode and kernel mode. We also
  count the time spent processing hardware and software interrupts
  accurately.

  This is conditional on CONFIG_VIRT_CPU_ACCOUNTING. If that is not set,
  we do tick-based approximate accounting as before.

  To get this accurate information, we read either the PURR (processor
  utilization of resources register) on POWER5 machines, or the timebase
  on other machines, on:

  * each entry to the kernel from usermode
  * each exit to usermode
  * transitions between process context, hard irq context and soft irq
    context in kernel mode
  * context switches.

  On POWER5 systems with shared-processor logical partitioning we also
  read both the PURR and the timebase at each timer interrupt and
  context switch in order to determine how much time has been taken by
  the hypervisor to run other partitions ("steal" time). Unfortunately,
  since we need values of the PURR on both threads at the same time to
  accurately calculate the steal time, and since we can only calculate
  steal time on a per-core basis, the apportioning of the steal time
  between idle time (time which we ceded to the hypervisor in the idle
  loop) and actual stolen time is somewhat approximate at the moment.

  This is all based quite heavily on what s390 does, and it uses the
  generic interfaces that were added by the s390 developers, i.e.
  account_system_time(), account_user_time(), etc.

  This patch doesn't add any new interfaces between the kernel and
  userspace, and doesn't change the units in which time is reported to
  userspace by things such as /proc/stat, /proc/<pid>/stat, getrusage(),
  times(), etc. Internally the various task and cpu times are stored in
  timebase units, but they are converted to USER_HZ units (1/100th of a
  second) when reported to userspace. Some precision is therefore lost
  but there should not be any accumulating error, since the internal
  accumulation is at full precision.

  Signed-off-by: Paul Mackerras <paulus@samba.org>
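The core accounting idea (timestamp every transition and credit the elapsed delta to the bucket just left) can be sketched in stand-alone C. get_timebase() below is a fake stand-in for reading the timebase or PURR, so treat this as a model of the bookkeeping, not the kernel code.

    #include <stdint.h>
    #include <stdio.h>

    static uint64_t fake_timebase;                 /* stand-in for mftb/PURR reads */
    static uint64_t get_timebase(void) { return fake_timebase; }

    static uint64_t user_ticks, system_ticks, last_stamp;

    /* Close the interval that just ended and credit it to the given bucket. */
    static void account(uint64_t *bucket)
    {
        uint64_t now = get_timebase();
        *bucket += now - last_stamp;
        last_stamp = now;
    }

    int main(void)
    {
        last_stamp = get_timebase();

        fake_timebase += 100;     /* ...time passes in user mode... */
        account(&user_ticks);     /* kernel entry: credit the user interval */

        fake_timebase += 30;      /* ...time passes in the kernel... */
        account(&system_ticks);   /* return to user: credit the kernel interval */

        printf("user=%llu system=%llu ticks\n",
               (unsigned long long)user_ticks, (unsigned long long)system_ticks);
        return 0;
    }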
* [PATCH] powerpc: trivial: modify comments to refer to new location of files (Jon Mason, 2006-02-10, 1 file, -3/+1)

  This patch removes all self references and fixes references to files
  in the now defunct arch/ppc64 tree. I think this accomplishes
  everything wanted, though there might be a few references I missed.

  Signed-off-by: Jon Mason <jdmason@us.ibm.com>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [PATCH] TIF_RESTORE_SIGMASK support for arch/powerpc (David Woodhouse, 2006-01-18, 1 file, -1/+1)

  Implement the TIF_RESTORE_SIGMASK flag in the new arch/powerpc kernel,
  for both 32-bit and 64-bit system call paths.

  Signed-off-by: David Woodhouse <dwmw2@infradead.org>
  Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Cc: Paul Mackerras <paulus@samba.org>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] powerpc: Remove lppaca structure from the PACA (David Gibson, 2006-01-13, 1 file, -1/+2)

  At present the lppaca - the structure shared with the iSeries
  hypervisor and phyp - is contained within the PACA, our own low-level
  per-cpu structure. This doesn't have to be so; the patch below removes
  it, making a separate array of lppaca structures.

  This saves approximately 500*NR_CPUS bytes of image size and kernel
  memory, because we don't need an alignment gap between the Linux and
  hypervisor portions of every PACA. On the other hand it means an extra
  level of dereference in many accesses to the lppaca.

  The patch also gets rid of several places where we assign the paca
  address to a local variable for no particular reason.

  Signed-off-by: David Gibson <dwg@au1.ibm.com>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
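A before/after sketch of this data-structure change, in plain C with placeholder types and sizes (the real lppaca layout and its alignment requirements are defined elsewhere):

    #include <stdio.h>

    #define NR_CPUS 4

    struct lppaca { char hypervisor_area[640]; };   /* size is a placeholder */

    /* Before: embedded in each paca, which forces alignment padding inside it. */
    struct paca_before {
        struct lppaca lppaca;
        long linux_private_stuff;
    };

    /* After: a separate aligned array; the paca keeps only a pointer. */
    struct lppaca lppaca_array[NR_CPUS];

    struct paca_after {
        struct lppaca *lppaca_ptr;    /* one extra dereference on each access */
        long linux_private_stuff;
    };

    int main(void)
    {
        struct paca_after p = { .lppaca_ptr = &lppaca_array[0] };

        printf("paca before: %zu bytes, after: %zu bytes (lppaca at %p)\n",
               sizeof(struct paca_before), sizeof(struct paca_after),
               (void *)p.lppaca_ptr);
        return 0;
    }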
* [PATCH] powerpc: Cleanup LOADADDR etc. asm macros (David Gibson, 2006-01-13, 1 file, -7/+5)

  This patch consolidates the variety of macros used for loading 32 or
  64-bit constants in assembler (LOADADDR, LOADBASE, SET_REG_TO_*). The
  idea is to make the set of macros consistent across 32 and 64 bit and
  to make it more obvious which is the appropriate one to use in a given
  situation. The new macros and their semantics are described in the
  comments in ppc_asm.h.

  In the process, we change several places that were unnecessarily using
  immediate loads on ppc64 to use the GOT/TOC. Likewise we cleanup a
  couple of places where we were clumsily subtracting PAGE_OFFSET with
  asm instructions to use assemble-time arithmetic or the toreal() macro
  instead.

  Signed-off-by: David Gibson <dwg@au1.ibm.com>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [PATCH] powerpc: Separate usage of KERNELBASE and PAGE_OFFSET (Michael Ellerman, 2006-01-09, 1 file, -2/+2)

  This patch separates usage of KERNELBASE and PAGE_OFFSET. I haven't
  looked at any of the PPC32 code, if we ever want to support Kdump on
  PPC we'll have to do another audit, ditto for iSeries.

  This patch makes PAGE_OFFSET the constant, it'll always be 0xC * 1
  gazillion for 64-bit.

  To get a physical address from a virtual one you subtract PAGE_OFFSET,
  _not_ KERNELBASE.

  KERNELBASE is the virtual address of the start of the kernel, it's
  often the same as PAGE_OFFSET, but _might not be_.

  If you want to know something's offset from the start of the kernel
  you should subtract KERNELBASE.

  Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
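A minimal sketch of the two conversions the commit spells out, with placeholder constants; on a relocated (e.g. kdump) kernel KERNELBASE would differ from PAGE_OFFSET, which is exactly the distinction being drawn.

    #include <stdio.h>

    #define PAGE_OFFSET 0xC000000000000000UL  /* start of the linear mapping */
    #define KERNELBASE  0xC000000000000000UL  /* start of the kernel image; may differ */

    /* Physical address of a linear-mapping virtual address: subtract PAGE_OFFSET. */
    static unsigned long to_phys(unsigned long vaddr)
    {
        return vaddr - PAGE_OFFSET;
    }

    /* Offset of an address from the start of the kernel: subtract KERNELBASE. */
    static unsigned long kernel_offset(unsigned long vaddr)
    {
        return vaddr - KERNELBASE;
    }

    int main(void)
    {
        unsigned long v = PAGE_OFFSET + 0x123456UL;
        printf("phys = %#lx, offset from kernel start = %#lx\n",
               to_phys(v), kernel_offset(v));
        return 0;
    }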
* [PATCH] ppc64 syscall_exit_work: call the save_nvgprs function, not its descriptor. (David Woodhouse, 2006-01-09, 1 file, -1/+1)

  On Tue, 2005-11-15 at 18:52 +0000, David Woodhouse wrote:
  > This cleanup patch speeds up the null syscall path on ppc64 by about
  > 3%, and brings the ppc32 and ppc64 code slightly closer together.

  Needs this unless your binutils, like mine, are clever enough to
  notice my stupidity and fix it up automatically... Spotted by Paul.

  Signed-off-by: David Woodhouse <dwmw2@infradead.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [PATCH] syscall entry/exit revamp (David Woodhouse, 2006-01-09, 1 file, -99/+115)

  This cleanup patch speeds up the null syscall path on ppc64 by about
  3%, and brings the ppc32 and ppc64 code slightly closer together.

  The ppc64 code was checking current_thread_info()->flags twice in the
  syscall exit path; once for TIF_SYSCALL_T_OR_A before disabling
  interrupts, and then again for TIF_SIGPENDING|TIF_NEED_RESCHED etc
  after disabling interrupts. Now we do the same as ppc32 -- check the
  flags only once in the fast path, and re-enable interrupts if
  necessary in the ptrace case.

  The patch abolishes the 'syscall_noerror' member of struct thread_info
  and replaces it with a TIF_NOERROR bit in the flags, which is handled
  in the slow path. This shortens the syscall entry code, which no
  longer needs to clear syscall_noerror.

  The patch adds a TIF_SAVE_NVGPRS flag which causes the syscall exit
  slow path to save the non-volatile GPRs into a signal frame. This
  removes the need for the assembly wrappers around sys_sigsuspend(),
  sys_rt_sigsuspend(), et al which existed solely to save those
  registers in advance. It also means I don't have to add new wrappers
  for ppoll() and pselect(), which is what I was supposed to be doing
  when I got distracted into this...

  Finally, it unifies the ppc64 and ppc32 methods of handling syscall
  exit directly into a signal handler (as required by sigsuspend et al)
  by introducing a TIF_RESTOREALL flag which causes _all_ the registers
  to be reloaded from the pt_regs by taking the ret_from_exception path,
  instead of the normal syscall exit path which stomps on the
  callee-saved GPRs.

  It appears to pass an LTP test run on ppc64, and passes basic testing
  on ppc32 too. Brief tests of ptrace functionality with strace and gdb
  also appear OK. I wouldn't send it to Linus for 2.6.15 just yet
  though :)

  Signed-off-by: David Woodhouse <dwmw2@infradead.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
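The flag handling described above can be caricatured in stand-alone C: check the thread flags once in the fast path and push the rare cases into a single slow path. The flag values and the printed actions are illustrative only, not the kernel's definitions.

    #include <stdio.h>

    #define TIF_NOERROR      (1u << 0)  /* don't set the error indication */
    #define TIF_SAVE_NVGPRS  (1u << 1)  /* save non-volatile GPRs into the frame */
    #define TIF_RESTOREALL   (1u << 2)  /* reload every register from pt_regs */

    static void syscall_exit(unsigned int flags, long retval)
    {
        if (!flags) {                       /* fast path: nothing pending */
            printf("fast exit, ret=%ld\n", retval);
            return;
        }
        /* slow path: handle the per-syscall work bits */
        if (flags & TIF_SAVE_NVGPRS)
            printf("saving non-volatile GPRs\n");
        if (flags & TIF_NOERROR)
            printf("suppressing error indication\n");
        if (flags & TIF_RESTOREALL)
            printf("taking ret_from_exception: restore all registers\n");
        else
            printf("normal exit, ret=%ld\n", retval);
    }

    int main(void)
    {
        syscall_exit(0, 0);                                /* common case */
        syscall_exit(TIF_RESTOREALL | TIF_SAVE_NVGPRS, 0); /* sigsuspend-style case */
        return 0;
    }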
* powerpc: correct register usage in 64-bit syscall exit path (Paul Mackerras, 2005-12-20, 1 file, -2/+2)

  Since we don't restore the volatile registers in the syscall exit
  path, we need to make sure we don't leak any potentially interesting
  values from the kernel to userspace. This was already the case for all
  except r11. This makes it use r11 for an MSR value, so r11 will have
  an (uninteresting) MSR value in it on return to userspace.

  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [PATCH] audit_sysctl_exit can only be used with CONF_AUDIT_SYSCTL (Horms, 2005-11-01, 1 file, -1/+1)

  This section of code calls .audit_syscall_exit, but is inside
  CONFIG_AUDIT, so it will fail to build if CONFIG_AUDITSYSCALL is not
  defined. After discussion with David Woodhouse, change the ifdef to
  CONFIG_AUDITSYSCALL.

  Signed-off-by: Horms <horms@verge.net.au>
  Acked-by: David Woodhouse <dwmw2@infradead.org>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* powerpc: change sys32_ to compat_sys_ (Stephen Rothwell, 2005-10-18, 1 file, -5/+5)

  This allows us to get rid of one type of entry in systbl.S. In passing
  we remove the duplicate compat_sys_getdents and compat_sys_utimes for
  which there are generic versions.

  Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
* powerpc: Introduce entry_{32,64}.S, misc_{32,64}.S, systbl.S (Paul Mackerras, 2005-10-10, 1 file, -0/+842)

  The system call table has been consolidated into systbl.S. We have
  separate 32-bit and 64-bit versions of entry.S and misc.S since the
  code is mostly sufficiently different to be not worth merging. There
  are some common bits that will be extracted in future.

  Signed-off-by: Paul Mackerras <paulus@samba.org>