path: root/arch/powerpc/include
* Merge branch 'kvm-updates/2.6.35' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds, 2010-05-21, 15 files, -117/+490)
      * 'kvm-updates/2.6.35' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (269 commits)
        KVM: x86: Add missing locking to arch specific vcpu ioctls
        KVM: PPC: Add missing vcpu_load()/vcpu_put() in vcpu ioctls
        KVM: MMU: Segregate shadow pages with different cr0.wp
        KVM: x86: Check LMA bit before set_efer
        KVM: Don't allow lmsw to clear cr0.pe
        KVM: Add cpuid.txt file
        KVM: x86: Tell the guest we'll warn it about tsc stability
        x86, paravirt: don't compute pvclock adjustments if we trust the tsc
        x86: KVM guest: Try using new kvm clock msrs
        KVM: x86: export paravirtual cpuid flags in KVM_GET_SUPPORTED_CPUID
        KVM: x86: add new KVMCLOCK cpuid feature
        KVM: x86: change msr numbers for kvmclock
        x86, paravirt: Add a global synchronization point for pvclock
        x86, paravirt: Enable pvclock flags in vcpu_time_info structure
        KVM: x86: Inject #GP with the right rip on efer writes
        KVM: SVM: Don't allow nested guest to VMMCALL into host
        KVM: x86: Fix exception reinjection forced to true
        KVM: Fix wallclock version writing race
        KVM: MMU: Don't read pdptrs with mmu spinlock held in mmu_alloc_roots
        KVM: VMX: enable VMXON check with SMX enabled (Intel TXT)
        ...
| * KVM: PPC: Enable native paired singles  (Alexander Graf, 2010-05-17, 1 file, -0/+1)
      When we're on a paired single capable host, we can just always enable paired singles and expose them to the guest directly. This approach breaks when multiple VMs run and access PS concurrently, but this should suffice until we get a proper framework for it in Linux.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: PPC: Improve split mode  (Alexander Graf, 2010-05-17, 1 file, -5/+4)
      When in split mode, instruction relocation and data relocation are not equal. So far we implemented this mode by reserving a special pseudo-VSID for the two cases and flushing all PTEs when going into split mode, which is slow. Unfortunately 32bit Linux and Mac OS X use split mode extensively. So to not slow things down too much, I came up with a different idea: mark the split mode with a bit in the VSID and then treat it like any other segment. This means we can just flush the shadow segment cache, but keep the PTEs intact. I verified that this works with ppc32 Linux and Mac OS X 10.4 guests and does speed them up.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
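      A purely illustrative sketch of the trick described above; the flag name and bit position are assumptions, not the values the patch actually uses:

          /* Tag split-mode translations in the shadow VSID instead of flushing all PTEs. */
          #define VSID_SPLIT_MODE   (1ULL << 56)   /* assumed spare bit in the shadow VSID */

          static u64 shadow_vsid(u64 vsid, bool split_mode)
          {
                  /* Split-mode mappings get their own VSID "namespace"; only the
                   * shadow segment cache needs flushing when the mode changes. */
                  return split_mode ? (vsid | VSID_SPLIT_MODE) : vsid;
          }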
| * KVM: PPC: Convert u64 -> ulong  (Alexander Graf, 2010-05-17, 2 files, -5/+5)
      There are some pieces in the code that I overlooked that still use u64s instead of longs. This slows down 32 bit hosts unnecessarily, so let's just move them to ulong.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: PPC: Add SVCPU to Book3S_32  (Alexander Graf, 2010-05-17, 1 file, -0/+3)
      We need to keep the pointer to the shadow vcpu somewhere accessible from within really early interrupt code. The best fit I found was the thread struct, as that resides in an SPRG. So let's put a pointer to the shadow vcpu in the thread struct and add an asm-offset so we can find it.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: PPC: Extract MMU init  (Alexander Graf, 2010-05-17, 1 file, -0/+1)
      The host shadow mmu code needs to get initialized. It needs to fetch a segment it can use to put shadow PTEs into. That initialization code was in generic code, which is icky. Let's move it over to the respective MMU file.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: PPC: Use now shadowed vcpu fields  (Alexander Graf, 2010-05-17, 2 files, -11/+3)
      The shadow vcpu now contains some fields we don't use from the vcpu anymore. Access to them happens using inline functions that happily use the shadow vcpu fields. So let's now ifdef them out to booke only and add asm-offsets.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
| * PPC: Add STLU  (Alexander Graf, 2010-05-17, 1 file, -0/+2)
      For assembly code there are several "long" load and store defines already. The one that's missing is the typical stack store, stdu/stwu. So let's add that define as well, making my KVM code happy.
      CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Avi Kivity <avi@redhat.com>
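      A minimal sketch of what such a width-agnostic define typically looks like (asm-compat.h style; treat the exact spelling as an assumption, not the patch's verbatim diff):

          #ifdef __powerpc64__
          #define PPC_STLU  stringify_in_c(stdu)   /* 64-bit: store doubleword with update */
          #else
          #define PPC_STLU  stringify_in_c(stwu)   /* 32-bit: store word with update */
          #endif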
| * KVM: PPC: Use CONFIG_PPC_BOOK3S define  (Alexander Graf, 2010-05-17, 1 file, -4/+4)
      Upstream recently added a new name for PPC64: Book3S_64. So instead of using CONFIG_PPC64 we should use CONFIG_PPC_BOOK3S consistently. That makes understanding the code easier (I hope).
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: PPC: Use KVM_BOOK3S_HANDLER  (Alexander Graf, 2010-05-17, 2 files, -3/+3)
      So far we had a lot of conditional code on CONFIG_KVM_BOOK3S_64_HANDLER. As we're moving towards common code between 32 and 64 bits, most of these ifdefs can be moved to a more generic define, CONFIG_KVM_BOOK3S_HANDLER. This patch adds the new generic config option and moves the ifdefs over.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: PPC: Improve indirect svcpu accessors  (Alexander Graf, 2010-05-17, 3 files, -78/+195)
      We already have some inline functions we use to access vcpu or svcpu structs, depending on whether we're on booke or book3s. Since we just put a few more registers into the svcpu, we also need to make sure the respective callbacks are available and get used. So this patch moves direct use of the fields that now live in the svcpu struct over to inline function calls. While at it, it also moves the definition of those inline functions to the respective header files for booke and book3s, greatly improving readability.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
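      A hedged illustration of the accessor pattern being described; the helper below is simplified and not necessarily the exact code the patch adds:

          /* book3s flavour: the register lives in the shadow vcpu */
          static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
          {
                  return to_svcpu(vcpu)->gpr[num];
          }

          /* the booke flavour of the same accessor would read vcpu->arch.gpr[num] instead */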
| * KVM: PPC: Add fields to shadow vcpu  (Alexander Graf, 2010-05-17, 1 file, -0/+21)
      After a lot of thought on how to make the entry / exit code easier, I figured it'd be clever to put even more register state into the shadow vcpu. That way we have more registers available to use, making the code easier to read. So this patch adds a few new fields to that shadow vcpu. Later on we will remove the originals from the vcpu and paca.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: PPC: Add kvm_book3s_32.h  (Alexander Graf, 2010-05-17, 1 file, -0/+42)
      In analogy to the 64 bit specific header file, this is the 32 bit counterpart. With this in place we can just always call to_svcpu and be assured we get the right pointer anywhere.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: PPC: Add kvm_book3s_64.h  (Alexander Graf, 2010-05-17, 1 file, -0/+28)
      In the process of generalizing as much code as possible, I also moved the shadow vcpu code together to a generic book3s file. Unfortunately the location of the shadow vcpu is different on 32 and 64 bit, so we need a wrapper function to tell us where it is. That sounded like a perfect fit for a subarch specific header file. Here we can put anything that needs to be different between those two.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
| * PPC: Split context init/destroy functions  (Alexander Graf, 2010-05-17, 1 file, -0/+2)
      We need to reserve a context from KVM to make sure we have our own segment space. While we did that split for Book3S_64 already, 32 bit is still outstanding. So let's split it now.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: PPC: Name generic 64-bit code generic  (Alexander Graf, 2010-05-17, 3 files, -2/+2)
      We have quite some code that can be used by Book3S_32 and Book3S_64 alike, so let's call it "Book3S" instead of "Book3S_64", so we can later on use it from the 32 bit port too.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: PPC: Make bools bitfields  (Alexander Graf, 2010-05-17, 2 files, -17/+17)
      Bool defaults to at least byte width. We usually only want to waste a single bit on this. So let's move all the bool values to bitfields, potentially saving memory.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
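      A generic illustration of the change (the field names are made up for the example):

          struct vcpu_flags {
                  bool paired_single_active : 1;   /* one bit instead of at least one byte */
                  bool osi_needed           : 1;
                  bool dcr_needed           : 1;
                  /* further flags can share the same storage unit */
          };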
| * KVM: PPC: Use ULL for big numbers  (Alexander Graf, 2010-05-17, 1 file, -6/+6)
      Some constants were bigger than ints. Let's mark them as such so we don't accidentally truncate them.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: PPC: Add OSI hypercall interface  (Alexander Graf, 2010-05-17, 2 files, -0/+7)
      MOL uses its own hypercall interface to call back into userspace when the guest wants to do something. So let's implement that as an exit reason, specify it with a CAP and only really use it when userspace wants us to. The only user of it so far is MOL.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: PPC: Implement alignment interrupt  (Alexander Graf, 2010-05-17, 1 file, -0/+2)
      Mac OS X has some applications - namely the Finder - that require alignment interrupts to work properly. So we need to implement them. But the spec differs between the 970 and the 750: while the 750 requires the DSISR and DAR fields to reflect some instruction bits (DSISR) and the fault address (DAR), the 970 declares this an optional feature. So we need to reconstruct DSISR and DAR manually.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: PPC: Book3S_32 guest MMU fixes  (Alexander Graf, 2010-05-17, 1 file, -0/+1)
      This patch makes the VSID of mapped pages always reflect all the special cases we have, like split mode. It also changes the tlbie mask to 0x0ffff000 according to the spec. The mask we used before was incorrect.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: PPC: Make DSISR 32 bits wide  (Alexander Graf, 2010-05-17, 2 files, -2/+2)
      DSISR is only defined as 32 bits wide. So let's reflect that in the structs too.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: PPC: Allow userspace to unset the IRQ line  (Alexander Graf, 2010-05-17, 2 files, -0/+5)
      Userspace can tell us that it wants to trigger an interrupt. But so far it can't tell us that it wants to stop triggering one. So let's interpret the parameter to the ioctl that we have anyway to tell us whether we want to raise or lower the interrupt line.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      v2 -> v3:
        - Add CAP for unset irq
      Signed-off-by: Avi Kivity <avi@redhat.com>
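      A hedged sketch of how the existing ioctl argument can be interpreted; the constant and helper names are assumptions based on the description:

          /* in the KVM_INTERRUPT vcpu ioctl handler */
          if (irq->irq == KVM_INTERRUPT_UNSET)            /* userspace lowers the line */
                  kvmppc_core_dequeue_external(vcpu, irq);
          else                                            /* userspace raises the line */
                  kvmppc_core_queue_external(vcpu, irq);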
| * KVM: PPC: Ensure split mode works  (Alexander Graf, 2010-05-17, 1 file, -4/+5)
      On PowerPC we can go into MMU Split Mode. That means that either data relocation is on but instruction relocation is off, or vice versa. That mode didn't work properly: we weren't always flushing entries when going into a new split mode, potentially mapping different code or data than we're supposed to.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: PPC: Implement Paired Single emulation  (Alexander Graf, 2010-04-25, 1 file, -0/+1)
      The one big thing about the Gekko is paired singles. Paired singles are an extension to the instruction set that adds 32 single precision floating point registers (qprs), some SPRs to modify the behavior of paired single operations, and instructions to deal with qprs. Unfortunately, it also changes the semantics of existing operations that affect single values in FPRs. In most cases they get mirrored to the corresponding QPR. Thanks to that we need to emulate all FPU operations and all the new paired single operations too. In order to achieve that, we use the just introduced FPU call helpers to call the real FPU whenever the guest wants to modify an FPR. Additionally we also fix up the QPR values along the way. That way we can execute paired single FPU operations without implementing a soft fpu.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: PPC: Add helpers to modify ppc fields  (Alexander Graf, 2010-04-25, 1 file, -0/+33)
      The PowerPC specification always lists bits from MSB to LSB. That is really confusing when you're trying to write C code, because it fits pretty badly with the normal (1 << xx) schemes. So I came up with some nice wrappers that allow getting and setting fields in a u64 with bit numbers exactly as given in the spec. That makes the code in KVM easier to compare with the spec.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
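      A self-contained sketch of the idea, using the spec's numbering where bit 0 is the MSB of a 64-bit value; the helper names are illustrative, not necessarily the ones the patch introduces:

          #include <stdint.h>

          /* Extract bits msb..lsb of a 64-bit value, numbered 0 (MSB) to 63 (LSB) as in the PowerPC spec. */
          static inline uint64_t ppc_get_field(uint64_t value, int msb, int lsb)
          {
                  int bits = lsb - msb + 1;
                  uint64_t mask = (bits == 64) ? ~0ULL : ((1ULL << bits) - 1);

                  return (value >> (63 - lsb)) & mask;
          }

          /* Replace bits msb..lsb of *value with field, using the same numbering. */
          static inline void ppc_set_field(uint64_t *value, int msb, int lsb, uint64_t field)
          {
                  int bits = lsb - msb + 1;
                  uint64_t mask = ((bits == 64) ? ~0ULL : ((1ULL << bits) - 1)) << (63 - lsb);

                  *value = (*value & ~mask) | ((field << (63 - lsb)) & mask);
          }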
| * KVM: PPC: Add helpers to call FPU instructions  (Alexander Graf, 2010-04-25, 1 file, -0/+85)
      To emulate paired single instructions, we need to be able to call FPU operations from within the kernel. Since we don't want gcc to spill arbitrary FPU code everywhere, we tell it to use a soft fpu. Since we know we can really call the FPU in safe areas, let's also add some calls that we can later use to actually execute real world FPU operations on the host's FPU.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: PPC: Make ext giveup non-static  (Alexander Graf, 2010-04-25, 1 file, -0/+1)
      We need to call the ext giveup handlers from code outside of book3s.c. So let's make it non-static.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: PPC: Make software load/store return eaddr  (Alexander Graf, 2010-04-25, 1 file, -2/+2)
      The Book3S KVM implementation contains some helper functions to load and store data from and to virtual addresses. Unfortunately, this helper used to keep the physical address it so nicely found out for us to itself. So let's change that and make it return the physical address it resolved.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: PPC: Add Gekko SPRs  (Alexander Graf, 2010-04-25, 2 files, -0/+11)
      The Gekko has some SPR values that differ from other PPC core values and also some additional ones. Let's add support for them in our mfspr/mtspr emulator.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: PPC: Add hidden flag for paired singles  (Alexander Graf, 2010-04-25, 1 file, -0/+1)
      The Gekko implements an extension called paired singles. When the guest wants to use that extension, we need to make sure we're not running the host FPU, because all FPU instructions need to get emulated to accommodate the additional operations that occur. This patch adds an hflag to track if we're in paired single mode or not.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: PPC: Add AGAIN type for emulation return  (Alexander Graf, 2010-04-25, 1 file, -0/+1)
      Emulation of an instruction can have different outcomes. It can succeed, fail, require MMIO, do funky BookE stuff - or it can just realize something's odd and will be fixed the next time around. Exactly that is what EMULATE_AGAIN means. Using that flag we can now tell the caller that nothing happened, but we still want to go back to the guest and see what happens next time we come around.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: PPC: Teach MMIO Signedness  (Alexander Graf, 2010-04-25, 2 files, -0/+4)
      The guest I was trying to get to run uses the LHA and LHAU instructions. Those instructions basically do a load, but also sign extend the result. Since we need to fill our registers by hand when doing MMIO, we also need to sign extend manually. This patch implements sign extended MMIO and the LHA(U) instructions.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
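      A hedged sketch of the completion path: after userspace has satisfied the MMIO read, the raw value is sign-extended before it is written back to the target register (the flag and field names are assumptions):

          /* gpr holds the raw value read from the device, run->mmio.len its width in bytes */
          if (vcpu->arch.mmio_sign_extend) {      /* set by the lha/lhau emulation */
                  switch (run->mmio.len) {
                  case 1: gpr = (s8)gpr;  break;
                  case 2: gpr = (s16)gpr; break;
                  case 4: gpr = (s32)gpr; break;
                  }
          }
          kvmppc_set_gpr(vcpu, vcpu->arch.io_gpr, gpr);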
| * KVM: PPC: Enable MMIO to do 64 bits, fprs and qprs  (Alexander Graf, 2010-04-25, 2 files, -1/+8)
      Right now MMIO access can only happen for GPRs and is at most 32 bit wide. That's actually enough for almost all types of hardware out there. Unfortunately, the guest I was using used FPU writes to MMIO regions, so it ended up writing 64 bit MMIOs using FPRs and QPRs. So let's add code to handle those odd cases too.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: PPC: Make fpscr 64-bit  (Alexander Graf, 2010-04-25, 1 file, -1/+1)
      Modern PowerPCs have a 64 bit wide FPSCR register. Let's accommodate that and make it 64 bits in our vcpu struct too.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: PPC: Add QPR registers  (Alexander Graf, 2010-04-25, 1 file, -0/+5)
      The Gekko has GPRs, SPRs and FPRs like normal PowerPC cores, but it also has QPRs, which are basically single precision only FPU registers that get used when in paired single mode. The following patches depend on them being around, so let's add the definitions early.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
* | Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc  (Linus Torvalds, 2010-05-21, 17 files, -17/+213)
      * 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc: (92 commits)
        powerpc: Remove unused 'protect4gb' boot parameter
        powerpc: Build-in e1000e for pseries & ppc64_defconfig
        powerpc/pseries: Make request_ras_irqs() available to other pseries code
        powerpc/numa: Use ibm,architecture-vec-5 to detect form 1 affinity
        powerpc/numa: Set a smaller value for RECLAIM_DISTANCE to enable zone reclaim
        powerpc: Use smt_snooze_delay=-1 to always busy loop
        powerpc: Remove check of ibm,smt-snooze-delay OF property
        powerpc/kdump: Fix race in kdump shutdown
        powerpc/kexec: Fix race in kexec shutdown
        powerpc/kexec: Speedup kexec hash PTE tear down
        powerpc/pseries: Add hcall to read 4 ptes at a time in real mode
        powerpc: Use more accurate limit for first segment memory allocations
        powerpc/kdump: Use chip->shutdown to disable IRQs
        powerpc/kdump: CPUs assume the context of the oopsing CPU
        powerpc/crashdump: Do not fail on NULL pointer dereferencing
        powerpc/eeh: Fix oops when probing in early boot
        powerpc/pci: Check devices status property when scanning OF tree
        powerpc/vio: Switch VIO Bus PM to use generic helpers
        powerpc: Avoid bad relocations in iSeries code
        powerpc: Use common cpu_die (fixes SMP+SUSPEND build)
        ...
| * | powerpc/numa: Set a smaller value for RECLAIM_DISTANCE to enable zone reclaim  (Anton Blanchard, 2010-05-21, 1 file, -0/+10)
      I noticed /proc/sys/vm/zone_reclaim_mode was 0 on a ppc64 NUMA box. It gets enabled via this:

          /*
           * If another node is sufficiently far away then it is better
           * to reclaim pages in a zone before going off node.
           */
          if (distance > RECLAIM_DISTANCE)
                  zone_reclaim_mode = 1;

      Since we use the default value of 20 for REMOTE_DISTANCE and 20 for RECLAIM_DISTANCE it never kicks in. The local to remote bandwidth ratios can be quite large on System p machines, so it makes sense for us to reclaim clean pagecache locally before going off node. The patch below sets a smaller value for RECLAIM_DISTANCE and thus enables zone reclaim.
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
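      The override itself then amounts to a one-line define in the powerpc topology header; a sketch (the concrete value is not shown in this log, 10 is an assumption):

          /* arch/powerpc/include/asm/topology.h (sketch) */
          #define RECLAIM_DISTANCE 10   /* below the remote distance of 20, so zone reclaim gets enabled */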
| * | powerpc/kexec: Fix race in kexec shutdown  (Michael Neuling, 2010-05-21, 2 files, -0/+5)
      In kexec_prepare_cpus, the primary CPU IPIs the secondary CPUs to kexec_smp_down(). kexec_smp_down() calls kexec_smp_wait(), which sets the hw_cpu_id() to -1. The primary does this while leaving IRQs on, which means the primary can take a timer interrupt which can lead to it IPIing one of the secondary CPUs (say, for a scheduler re-balance), but since the secondary CPU now has a hw_cpu_id = -1, we IPI CPU -1... Kaboom!
      We are hitting this case regularly on POWER7 machines. There is also a second race, where the primary will tear down the MMU mappings before knowing the secondaries have entered real mode. Also, the secondaries are clearing out any pending IPIs before guaranteeing that no more will be received.
      This changes kexec_prepare_cpus() so that we turn off IRQs in the primary CPU much earlier. It adds a paca flag to say that the secondaries have entered the kexec_smp_down() IPI and turned off IRQs, rather than overloading hw_cpu_id with -1. This new paca flag is again used to indicate when the secondaries have entered real mode. It also ensures that all CPUs have their IRQs off before we clear out any pending IPI requests (in kexec_cpu_down()) to ensure there are no trailing IPIs left unacknowledged.
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
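      A hedged sketch of the per-CPU state flag described above; the names follow the description but should be treated as assumptions:

          /* values for the new paca flag (sketch) */
          #define KEXEC_STATE_NONE        0
          #define KEXEC_STATE_IRQS_OFF    1   /* secondary took the kexec IPI and disabled IRQs */
          #define KEXEC_STATE_REAL_MODE   2   /* secondary has dropped to real mode */

          /* in struct paca_struct */
          u8 kexec_state;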
| * | powerpc/pseries: Add hcall to read 4 ptes at a time in real mode  (Michael Neuling, 2010-05-21, 1 file, -0/+1)
      This adds plpar_pte_read_4_raw(), which can be used to read 4 PTEs from PHYP at a time, while in real mode. It also creates a new hcall9 which can be used in real mode. It's the same as plpar_hcall9 but minus the tracing hcall statistics, which may require variables outside the RMO.
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | Merge commit 'origin/master' into next  (Benjamin Herrenschmidt, 2010-05-07, 1 file, -1/+14)
| * | | powerpc/cpumask: Convert mpic driver to new cpumask API  (Benjamin Herrenschmidt, 2010-05-06, 1 file, -3/+0)
      Convert to the new cpumask API. irq_choose_cpu can be simplified by using cpumask_next and cpumask_first. smp_mpic_message_pass was doing open coded cpumask manipulation and passing an int for a cpumask into mpic_send_ipi. Since mpic_send_ipi is only used locally, make it static and convert it to take a cpumask. This allows us to clean up the mess in smp_mpic_message_pass.
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
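      A hedged sketch of the simplified round-robin selection using the new cpumask API (details such as locking are trimmed):

          static int irq_choose_cpu(const struct cpumask *mask)
          {
                  static int irq_rover;   /* round-robin cursor over online CPUs */
                  int cpuid;

                  if (cpumask_equal(mask, cpu_all_mask)) {
                          /* "any CPU" interrupts are spread round-robin */
                          irq_rover = cpumask_next(irq_rover, cpu_online_mask);
                          if (irq_rover >= nr_cpu_ids)
                                  irq_rover = cpumask_first(cpu_online_mask);
                          cpuid = irq_rover;
                  } else {
                          cpuid = cpumask_first_and(mask, cpu_online_mask);
                          if (cpuid >= nr_cpu_ids)
                                  cpuid = cpumask_first(cpu_online_mask);
                  }

                  return get_hard_smp_processor_id(cpuid);
          }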
| * | | powerpc/cpumask: Convert NUMA code to new cpumask API  (Anton Blanchard, 2010-05-06, 2 files, -2/+2)
      Convert NUMA code to the new cpumask API. We defer the node-to-cpumask setup until after bootmem allocation completes, so we can dynamically allocate the cpumasks.
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | | powerpc/cpumask: Dynamically allocate cpu_sibling_map and cpu_core_map cpumasks  (Anton Blanchard, 2010-05-06, 2 files, -5/+15)
      Dynamically allocate cpu_sibling_map and cpu_core_map cpumasks. We don't need to set_cpu_online() the boot cpu in smp_prepare_boot_cpu, init/main.c does it for us. We also postpone setting of the boot cpu in cpu_sibling_map and cpu_core_map until the memory allocator is available (smp_prepare_cpus), similar to x86.
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | | powerpc/cpumask: Convert fixup_irqs to new cpumask API  (Anton Blanchard, 2010-05-06, 1 file, -1/+1)
      Use new cpumask_* functions, and dynamically allocate cpumask in fixup_irqs.
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | | powerpc/mm: Track backing pages allocated by vmemmap_populate()  (Mark Nelson, 2010-05-06, 1 file, -0/+6)
      We need to keep track of the backing pages that get allocated by vmemmap_populate() so that when we use kdump, the dump-capture kernel knows where these pages are.
      We use a simple linked list of structures that contain the physical address of the backing page and the corresponding virtual address to track the backing pages. To save space, we just use a pointer to the next struct vmemmap_backing. We can also do this because we never remove nodes. We call the pointer "list" to be compatible with changes made to the crash utility.
      vmemmap_populate() is called either at boot-time or on a memory hotplug operation. We don't have to worry about the boot-time calls because they will be inherently single-threaded, and for a memory hotplug operation vmemmap_populate() is called through:

          sparse_add_one_section()
                  |
                  V
          kmalloc_section_memmap()
                  |
                  V
          sparse_mem_map_populate()
                  |
                  V
          vmemmap_populate()

      and in sparse_add_one_section() we're protected by pgdat_resize_lock(). So we don't need a spinlock to protect the vmemmap_list.
      We allocate space for the vmemmap_backing structs by allocating whole pages in vmemmap_list_alloc() and then handing out chunks of this to vmemmap_list_populate(). This means that we waste at most just under one page, but it keeps the code simple.
      Signed-off-by: Mark Nelson <markn@au1.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
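      A sketch of the tracking structure as described; the field names follow the text but are best treated as assumptions:

          /* one node per backing page allocated by vmemmap_populate(); nodes are never freed */
          struct vmemmap_backing {
                  struct vmemmap_backing *list;   /* next entry; named "list" for crash-utility compatibility */
                  unsigned long phys;             /* physical address of the backing page */
                  unsigned long virt_addr;        /* virtual address that the page backs */
          };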
| * | | powerpc: Correct parport interrupt parsing  (Martyn Welch, 2010-05-06, 1 file, -3/+8)
      Currently the parsing of the device tree in arch/powerpc/include/asm/parport.h assumes that the interrupt provided in the parallel port node is a valid virtual irq. The values for the interrupts provided in the device tree should have meaning in the context of the driver for the specific interrupt controller to which the interrupt is connected, and irq_of_parse_and_map() should be used to determine the correct virtual irq.
      Signed-off-by: Martyn Welch <martyn.welch@ge.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
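      A hedged sketch of the corrected lookup, assuming np points at the parallel-port device node:

          /* translate the device-tree interrupt specifier into a virtual irq before using it */
          int virq = irq_of_parse_and_map(np, 0);

          if (!virq)
                  virq = PARPORT_IRQ_NONE;   /* no usable mapping: fall back to a polled port */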
| * | | powerpc/4xx: Simple platform for the ISS 4xx simulator  (Torez Smith, 2010-05-05, 1 file, -0/+3)
      This is a trivial 4xx platform that uses the new simple bsp from Josh and is handy to use in simulators such as ISS or even Mambo, which don't properly implement most of the actual devices in the SoC but really only the core.
      Signed-off-by: Torez Smith <lnxtorez@linux.vnet.ibm.com>
      Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
      Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
| * | | powerpc/476: add machine check handler for 47x core  (Dave Kleikamp, 2010-05-05, 1 file, -0/+1)
      The 47x core's MCSR varies from 44x, so it needs its own machine check handler.
      Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
      Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
| * | | powerpc/47x: Base ppc476 support  (Dave Kleikamp, 2010-05-05, 6 files, -2/+85)
      This patch adds the base support for the 476 processor. The code was primarily written by Ben Herrenschmidt and Torez Smith, but I've been maintaining it for a while. The goal is to have a single binary that will run on 44x and 47x, but we still have some details to work out. The biggest is that the L1 cache line size differs on the two platforms, but it's currently a compile-time option.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Torez Smith <lnxtorez@linux.vnet.ibm.com>
      Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
      Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>