path: root/llvm/lib/Target/PowerPC/PPCFrameLowering.cpp
Commit history for this file, most recent first. Each entry shows the commit message, author, date, files changed, and lines removed/added.
* Revert "r225811 - Revert "r225808 - [PowerPC] Add StackMap/PatchPoint support""  (Hal Finkel, 2015-01-14, 1 file, -0/+1)
  This re-applies r225808, fixed to avoid problems with SDAG dependencies along with the preceding fix to ScheduleDAGSDNodes::RegDefIter::InitNodeNumDefs. These problems caused the original regression tests to assert/segfault on many (but not all) systems.

  Original commit message:

  This commit does two things:

  1. Refactors PPCFastISel to use more of the common infrastructure for call lowering (this lets us take advantage of this common code for lowering some common intrinsics, stackmap/patchpoint among them).

  2. Adds support for stackmap/patchpoint lowering. For the most part, this is very similar to the support in the AArch64 target, with the obvious differences (different registers, NOP instructions, etc.). The test cases are adapted from the AArch64 test cases.

  One difference of note is that the patchpoint call sequence takes 24 bytes, so you can't use less than that (on AArch64 you can go down to 16). Also, as noted in the docs, we take the patchpoint address to be the actual code address (assuming the call is local in the TOC-sharing sense), which should yield higher performance than generating the full cross-DSO indirect-call sequence and is likely just as useful for JITed code (if not, we'll change it).

  StackMaps and Patchpoints are still marked as experimental, and so this support is doubly experimental. So go ahead and experiment!

  llvm-svn: 225909
* Revert "r225808 - [PowerPC] Add StackMap/PatchPoint support"  (Hal Finkel, 2015-01-13, 1 file, -1/+0)
  Reverting this while I investigate buildbot failures (segfaulting in GetCostForDef at ScheduleDAGRRList.cpp:314).

  llvm-svn: 225811
* [PowerPC] Add StackMap/PatchPoint support  (Hal Finkel, 2015-01-13, 1 file, -0/+1)
  This commit does two things:

  1. Refactors PPCFastISel to use more of the common infrastructure for call lowering (this lets us take advantage of this common code for lowering some common intrinsics, stackmap/patchpoint among them).

  2. Adds support for stackmap/patchpoint lowering. For the most part, this is very similar to the support in the AArch64 target, with the obvious differences (different registers, NOP instructions, etc.). The test cases are adapted from the AArch64 test cases.

  One difference of note is that the patchpoint call sequence takes 24 bytes, so you can't use less than that (on AArch64 you can go down to 16). Also, as noted in the docs, we take the patchpoint address to be the actual code address (assuming the call is local in the TOC-sharing sense), which should yield higher performance than generating the full cross-DSO indirect-call sequence and is likely just as useful for JITed code (if not, we'll change it).

  StackMaps and Patchpoints are still marked as experimental, and so this support is doubly experimental. So go ahead and experiment!

  llvm-svn: 225808
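For reference, the 24-byte minimum mentioned above corresponds to six 4-byte PowerPC instructions for the patchpoint call sequence. A small self-contained sketch of the size check a stackmap/patchpoint client might apply; the constants and helper name are illustrative, not LLVM API:

    #include <cstdint>

    // Illustrative constants: the PPC64 patchpoint call sequence occupies
    // 24 bytes (six 4-byte instructions); AArch64 can go as low as 16.
    constexpr uint32_t PPC64PatchPointCallSize = 24;
    constexpr uint32_t PPCInstrSize = 4;

    // Hypothetical check that a requested patchpoint size can hold the call
    // sequence and is a whole number of instructions.
    bool isValidPPC64PatchPointSize(uint32_t NumBytes) {
      return NumBytes >= PPC64PatchPointCallSize && NumBytes % PPCInstrSize == 0;
    }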
* [PowerPC] Split the blr definition into BLR and BLR8  (Hal Finkel, 2015-01-13, 1 file, -1/+3)
  We really need a separate 64-bit version of this instruction so that it can be marked as clobbering LR8 (instead of just LR). No change in functionality (although the verifier might be slightly happier); however, it is required for stackmap/patchpoint support. Thus, this will be covered by stackmap test cases once those are added.

  llvm-svn: 225804
* Fully fix Bug #22115.  (Justin Hibbits, 2015-01-10, 1 file, -6/+46)
  Summary: In the previous commit, the register was saved, but space was not allocated. This resulted in the parameter save area potentially clobbering r30, leading to nasty results.

  Test Plan: Tests updated

  Reviewers: hfinkel

  Subscribers: llvm-commits

  Differential Revision: http://reviews.llvm.org/D6906

  llvm-svn: 225573
* Add saving and restoring of r30 to the prologue and epilogue, respectively  (Justin Hibbits, 2015-01-08, 1 file, -0/+16)
  Summary: The PIC additions didn't update the prologue and epilogue code to save and restore r30 (PIC base register). This does that.

  Test Plan: Tests updated.

  Reviewers: hfinkel

  Reviewed By: hfinkel

  Subscribers: llvm-commits

  Differential Revision: http://reviews.llvm.org/D6876

  llvm-svn: 225450
* [PowerPC] Fix unwind info with dynamic stack realignment  (Jay Foad, 2014-12-01, 1 file, -12/+27)
  Summary: PowerPC DWARF unwind info defined CFA as SP + offset even in a function where the stack had been dynamically realigned. This clearly doesn't work because the offset from SP to CFA is not a constant. Fix it by defining CFA as BP instead.

  This was causing the AddressSanitizer null_deref test to fail 50% of the time, depending on whether SP happened to be 32-byte aligned on entry to a particular function or not.

  Reviewers: willschm, uweigand, hfinkel

  Reviewed By: hfinkel

  Subscribers: llvm-commits

  Differential Revision: http://reviews.llvm.org/D6410

  llvm-svn: 222996
* Have MachineFunction cache a pointer to the subtarget to make lookups shorter/easier and have the DAG use that to do the same lookup  (Eric Christopher, 2014-08-05, 1 file, -28/+20)
  This can be used in the future for TargetMachine based caching lookups from the MachineFunction easily.

  Update the MIPS subtarget switching machinery to update this pointer at the same time it runs.

  llvm-svn: 214838
* Remove the TargetMachine forwards for TargetSubtargetInfo-based information and update all callers  (Eric Christopher, 2014-08-04, 1 file, -20/+28)
  No functional change.

  llvm-svn: 214781
* [PowerPC] ELFv2 explicit CFI for CR fields  (Ulrich Weigand, 2014-07-21, 1 file, -1/+9)
  This is a minor improvement in the ELFv2 ABI. In ELFv1, DWARF CFI would represent a saved CR word (holding CR fields CR2, CR3, and CR4) using just a single CFI record referring to CR2. In ELFv2 instead, each of the CR fields is represented by its own CFI record. The advantage is that the compiler can now choose to save just a single (or two) CR fields instead of all of them, if those are the only ones that actually need saving. That can lead to more efficient code using mf(o)crf instead of the (slow) mfcr instruction.

  Note that this patch does not (yet) implement this more efficient code generation, but it does implement the part that is required to be ABI compliant: creating multiple CFI records if multiple CR fields are saved.

  Reviewed by Hal Finkel.

  llvm-svn: 213492
* [PowerPC] ELFv2 stack space reduction  (Ulrich Weigand, 2014-07-20, 1 file, -1/+2)
  The ELFv2 ABI reduces the amount of stack required to implement an ABI-compliant function call in two ways:
    * the "linkage area" is reduced from 48 bytes to 32 bytes by eliminating two unused doublewords
    * the 64-byte "parameter save area" is now optional and need not be present in certain cases (it remains mandatory in functions with variable arguments, and functions that have any parameter that is passed on the stack)

  The following patch implements these required changes:
    - reducing the linkage area, and associated relocation of the TOC save slot, in getLinkageSize / getTOCSaveOffset (this requires updating all callers of these routines to pass in the isELFv2ABI flag).
    - (partially) handling the case where the parameter save area is optional

  This latter part requires some extra explanation: Currently, we still always allocate the parameter save area when *calling* a function. That is certainly always compliant with the ABI, but may cause code to allocate stack unnecessarily. This can be addressed by a follow-on optimization patch.

  On the *callee* side, in LowerFormalArguments, we *must* track correctly whether the ABI guarantees that the caller has allocated the parameter save area for our use, and the patch does so. However, there is one complication: the code that handles incoming "byval" arguments will currently *always* write to the parameter save area, because it has to force incoming register arguments to the stack since it must return an *address* to implement the byval semantics. To fix this, the patch changes the LowerFormalArguments code to write arguments to a freshly allocated stack slot on the function's own stack frame instead of the argument save area in those cases where that area is not present.

  Reviewed by Hal Finkel.

  llvm-svn: 213490
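As a rough illustration of the sizes described above, here is a minimal sketch of the 64-bit linkage-area computation; the function name and signature are illustrative, not the actual getLinkageSize interface:

    // Hedged sketch of the 64-bit ELF linkage-area sizing.
    unsigned linkageSizeELF64(bool IsELFv2ABI) {
      // ELFv1: back chain, CR save, LR save, two reserved doublewords,
      //        TOC save = 6 doublewords = 48 bytes.
      // ELFv2: back chain, CR save, LR save, TOC save
      //        = 4 doublewords = 32 bytes.
      return IsELFv2ABI ? 32 : 48;
    }

    // The parameter save area, when present, is 64 bytes (8 doublewords);
    // under ELFv2 it may be omitted for functions that are not variadic
    // and pass no arguments on the stack.
    constexpr unsigned ParamSaveAreaSize = 64;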
* [PowerPC] 32-bit ELF PIC support  (Hal Finkel, 2014-07-18, 1 file, -6/+13)
  This adds initial support for PPC32 ELF PIC (Position Independent Code; the -fPIC variety), thus rectifying a long-standing deficiency in the PowerPC backend.

  Patch by Justin Hibbits!

  llvm-svn: 213427
* [PowerPC] Refactor getMinCallFrameSize / getMinCallArgumentsSize  (Ulrich Weigand, 2014-06-23, 1 file, -24/+0)
  As of r211495, the only remaining users of getMinCallFrameSize are in core ABI code (LowerFormalParameter / LowerCall). This is actually a good thing, since the details of the parameter save area are ABI specific. With the new ELFv2 ABI in particular, the rules defining the size of the save area will become significantly more complex, so it wouldn't make sense to implement those outside ABI code that has all required information.

  In preparation, this patch eliminates the getMinCallFrameSize (and associated getMinCallArgumentsSize) routines, and inlines them into all callers. Note that since nearly all call arguments are constant, this allows simplifying the inlined copies to a single line everywhere.

  No change in generated code expected.

  llvm-svn: 211497
* [PowerPC] Allow stack frames without parameter save area  (Ulrich Weigand, 2014-06-23, 1 file, -3/+3)
  The PPCFrameLowering::determineFrameLayout routine currently ensures that every function that allocates a stack frame provides space for the parameter save area (via PPCFrameLowering::getMinCallFrameSize).

  This is actually not necessary. There may be functions that never call another routine but still allocate a frame; those do not require the parameter save area. In the future, with the ELFv2 ABI, even some routines that do call other functions do not need to allocate the parameter save area.

  While it is not a bug to allocate the parameter area when it is not needed, it is better to avoid it to save stack space. Note that when any particular function call requires the parameter save area, this space will already have been included by ABI code in the size the CALLSEQ_START insn is annotated with, and therefore included in the size returned by MFI->getMaxCallFrameSize().

  This means that determineFrameLayout simply does not need to care about the parameter save area. (It still needs to ensure that every frame provides the linkage area.) This is implemented by this patch.

  Note that this exposed a bug in the new fast-isel code where the parameter area was *not* included in the CALLSEQ_START size; this is also fixed. A couple of test cases needed to be adapted for the new (smaller) stack frame size those tests now see.

  llvm-svn: 211495
* Move PPCFrameLowering into PPCSubtarget from PPCTargetMachine  (Eric Christopher, 2014-06-12, 1 file, -0/+186)
  Use the initializeSubtargetDependencies code to obtain an initialized subtarget and migrate a couple of subtarget-using functions to the .cpp file to avoid circular includes.

  llvm-svn: 210822
* None of these targets actually define their own CFI_INSTRUCTION opcode so there's no reason to use the target namespace for it rather than TargetOpcode  (Eric Christopher, 2014-04-29, 1 file, -7/+8)
  llvm-svn: 207475
* 80-column, tab characters, comment fixups.  (Eric Christopher, 2014-04-29, 1 file, -43/+44)
  llvm-svn: 207473
* Replace PROLOG_LABEL with a new CFI_INSTRUCTION.  (Rafael Espindola, 2014-03-07, 1 file, -28/+28)
  The old system was fairly convoluted:
    * A temporary label was created.
    * A single PROLOG_LABEL was created with it.
    * A few MCCFIInstructions were created with the same label.

  The semantics were that the cfi instructions were mapped to the PROLOG_LABEL via the temporary label. The output position was that of the PROLOG_LABEL. The temporary label itself was used only for doing the mapping.

  The new CFI_INSTRUCTION has a 1:1 mapping to MCCFIInstructions and points to one by holding an index into the CFI instructions of this function.

  I did consider removing MMI.getFrameInstructions completely and having CFI_INSTRUCTION own a MCCFIInstruction, but MCCFIInstructions have non-trivial constructors and destructors and are somewhat big, so this setup is probably better.

  The net result is that we don't create temporary labels that are never used.

  llvm-svn: 203204
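A minimal sketch of the pattern this enables when emitting prologue CFI. It assumes the usual in-scope frame-lowering objects (MF, MBB, MBBI, dl, TII), and the helper that records MCCFIInstructions has moved between MachineModuleInfo and MachineFunction across LLVM versions, so treat the exact spelling as approximate:

    // Record the MCCFIInstruction in the function's CFI list and emit a
    // CFI_INSTRUCTION that refers to it by index.
    unsigned CFIIndex = MF.addFrameInst(
        MCCFIInstruction::createDefCfaOffset(nullptr, -FrameSize));
    BuildMI(MBB, MBBI, dl, TII.get(TargetOpcode::CFI_INSTRUCTION))
        .addCFIIndex(CFIIndex);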
* [PowerPC] More refactoring prior to real PPC emitPrologue/Epilogue changes.  (Bill Schmidt, 2013-08-20, 1 file, -271/+194)
  (Patch committed on behalf of Mark Minich, whose log entry follows.)

  This is a continuation of the refactorings performed in svn rev 188573 (see that rev's comments for more detail).

  This is my stage 2 refactoring: I combined the emitPrologue() & emitEpilogue() PPC32 & PPC64 code into a single flow, simplifying a lot of the code since in essence the PPC32 & PPC64 code generation logic is the same, only the instruction forms are different (in most cases). This simplification is necessary because my functional changes (yet to come) add significant complexity, and without the simplification of my stage 2 refactoring, the overall complexity of both emitPrologue() & emitEpilogue() would have become almost intractable for most mortal programmers (like me).

  This submission was intended to be a pure refactoring (no functional changes whatsoever). However, in the process of combining the PPC32 & PPC64 flows, I spotted a difference that I believe is a bug (see svn rev 186478 line 863, or svn rev 188573 line 888): This line appears to be restoring the BP with the original FP content, not the original BP content. When I merged the 32-bit and 64-bit code, I used the corresponding code from the 64-bit flow, which I believe uses the correct offset (BPOffset) for this operation.

  llvm-svn: 188741
* [PowerPC] Preparatory refactoring for making prologue and epilogue safe on PPC32 SVR4 ABI  (Bill Schmidt, 2013-08-16, 1 file, -85/+102)
  [Patch and following text by Mark Minich; committing on his behalf.]

  There are FIXME's in PowerPC/PPCFrameLowering.cpp, method PPCFrameLowering::emitPrologue() related to "negative offsets of R1" on PPC32 SVR4. They're true, but the real issue is that on PPC32 SVR4 (and any ABI without a Red Zone), no spills may be made until after the stackframe is claimed, which also includes the LR spill which is at a positive offset. The same problem exists in emitEpilogue(), though there's no FIXME for it. I intend to fix this issue, making LLVM-compiled code finally safe for use on SVR4/EABI/e500 32-bit platforms (including in particular, OS-free embedded systems & kernel code, where interrupts may share the same stack as user code).

  In preparation for making these changes, to make the diffs for the functional changes less cluttered, I am providing the non-functional refactorings in two stages:

  Stage 1 does some minor fluffy refactorings to pull multiple method calls up into a single bool, creating named bools for repeated uses of obscure logic, moving some code up earlier because either stage 2 or my final version will require it earlier, and rewording/adding some comments. My stage 1 changes can be characterized as primarily fluffy cleanup, the purpose of which may be unclear until the stage 2 or final changes are made.

  My stage 2 refactorings combine the separate PPC32 & PPC64 logic, which is currently performed by largely duplicate code, into a single flow, with the differences handled by a group of constants initialized early in the methods.

  This submission is for my stage 1 changes. There should be no functional changes whatsoever; this is a pure refactoring.

  llvm-svn: 188573
* PPC: Support dynamic allocas with large alignment  (Hal Finkel, 2013-07-18, 1 file, -1/+5)
  Support for dynamic stack alignments in the PPC backend has been unfinished, in part because it depends on dynamic stack realignment (which I only just recently implemented fully). Now we can also support dynamic allocas with higher than the default target stack alignment (16 bytes).

  In order to round up the requested size to the maximum requested alignment, we need an additional register to hold the rounded-up size. We're already using one scavenged register to hold the previous stack-pointer value (which needs to be stored with the signal-safe stdux update), and so when we have dynamic allocas and a large alignment, we allocate two emergency spill slots for the scavenger.

  llvm-svn: 186562
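The rounding itself is the usual power-of-two align-up computation; a small self-contained sketch (not the backend's actual code) of what rounding the requested size up to the requested alignment means:

    #include <cassert>
    #include <cstdint>

    // Round Size up to the next multiple of Align, where Align is a power of
    // two (e.g. a 100-byte dynamic alloca with 64-byte alignment reserves 128).
    uint64_t alignTo(uint64_t Size, uint64_t Align) {
      assert(Align != 0 && (Align & (Align - 1)) == 0 && "Align must be a power of 2");
      return (Size + Align - 1) & ~(Align - 1);
    }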
* PPC: Add base-pointer support to builtin setjmp/longjmp  (Hal Finkel, 2013-07-17, 1 file, -15/+23)
  First, this changes the base-pointer implementation to remove an unnecessary complication (and one that is incompatible with how builtin SjLj is implemented): instead of using r31 as the base pointer when it is not needed as a frame pointer, now the base pointer will always be r30 when needed.

  Second, we introduce another pseudo register, BP, which is used just like the FP pseudo register to refer to the base register before we know for certain what register it will be.

  Third, we now save BP into the jmp_buf, and restore r30 from that slot in longjmp. If the function that called setjmp did not use a base pointer, then r30 will be overwritten by the setjmp-calling-function's restore code. FP restoration (which is restored into r31) works the same way.

  llvm-svn: 186545
* PPC: Implement base pointer and stack realignment  (Hal Finkel, 2013-07-17, 1 file, -30/+143)
  This builds on some frame-lowering code that has existed since 2005 (r24224) but was disabled in 2008 (r48188) because it needed base pointer support to function correctly. This implementation follows the strategy suggested by Dale Johannesen in r48188 where the following comment was added:

  "This does not currently work, because the delta between old and new stack pointers is added to offsets that reference incoming parameters after the prolog is generated, and the code that does that doesn't handle a variable delta. You don't want to do that anyway; a better approach is to reserve another register that retains to the incoming stack pointer, and reference parameters relative to that."

  And now we do exactly that. If we don't need a frame pointer, then we use r31 as a base pointer. If we do need a frame pointer, then we use r30 as a base pointer. The base pointer retains the value of the stack pointer before it was decremented in the prologue. We then use the base pointer to resolve all negative frame indices. The basic scheme follows that for base pointers in the X86 backend.

  We use a base pointer when we need to dynamically realign the incoming stack pointer. This currently applies only to static objects (dynamic allocas with large alignments, and base-pointer support in SjLj lowering will come in future commits).

  llvm-svn: 186478
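Schematically, the register used to resolve a frame index under this scheme looks something like the following. This is a purely illustrative helper; the real logic lives in PPCRegisterInfo/PPCFrameLowering and depends on the full frame layout:

    enum class PPCReg { R1 /* stack pointer */, R30, R31 };

    // Illustrative selection of the register used to resolve frame indices:
    // realigned frames use a base pointer (r31, or r30 when r31 is already the
    // frame pointer) for negative (incoming/fixed) offsets; otherwise the
    // frame pointer or the stack pointer is used.
    PPCReg frameIndexBaseReg(bool NeedsRealignment, bool HasFP, bool NegativeOffset) {
      if (NeedsRealignment && NegativeOffset)
        return HasFP ? PPCReg::R30 : PPCReg::R31;  // base pointer
      return HasFP ? PPCReg::R31 : PPCReg::R1;     // frame pointer or SP
    }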
* Use SmallVectorImpl& instead of SmallVector to avoid repeating small vector size.  (Craig Topper, 2013-07-14, 1 file, -2/+2)
  llvm-svn: 186274
* [PowerPC] Use mtocrf when available  (Ulrich Weigand, 2013-07-03, 1 file, -2/+2)
  Just as with mfocrf, it is also preferable to use mtocrf instead of mtcrf when only a single CR register is to be written. Current code however always emits mtcrf. This probably does not matter when using an external assembler, since the GNU assembler will in fact automatically replace mtcrf with mtocrf when possible. It does create inefficient code with the integrated assembler, however.

  To fix this, this patch adds MTOCRF/MTOCRF8 instruction patterns and uses those instead of MTCRF/MTCRF8 everywhere. Just as done in the MFOCRF patch committed as 185556, these patterns will be converted back to MTCRF if MTOCRF is not available on the machine. As a side effect, this allows us to modify the MTCRF pattern to accept the full range of mask operands for the benefit of the asm parser.

  llvm-svn: 185561
* PPC: Ignore spill/restore requests for VRSAVE (except on Darwin)  (Hal Finkel, 2013-06-28, 1 file, -0/+12)
  This fixes PR16418, which reports that a function calling __builtin_unwind_init() asserts. The cause is that this generates a spill/restore for VRSAVE, and we support that only on Darwin (because VRSAVE is only really used on Darwin).

  The test case checks only that we don't crash. We can add correctness checks once someone verifies what behavior the function is supposed to have.

  llvm-svn: 185235
* Use pointers to the MCAsmInfo and MCRegInfo.  (Bill Wendling, 2013-06-18, 1 file, -6/+6)
  Someone may want to do something crazy, like replace these objects if they change or something. No functionality change intended.

  llvm-svn: 184175
* Remove addFrameMove.  (Rafael Espindola, 2013-05-16, 1 file, -19/+19)
  Now that we have good testing, remove addFrameMove and create cfi instructions directly.

  llvm-svn: 182052
* [PowerPC] Use true offset value in "memrix" machine operands  (Ulrich Weigand, 2013-05-16, 1 file, -5/+5)
  This is the second part of the change to always return "true" offset values from getPreIndexedAddressParts, tackling the case of "memrix" type operands.

  This is about instructions like LD/STD that only have a 14-bit field to encode immediate offsets, which are implicitly extended by two zero bits by the machine, so that in effect we can access 16-bit offsets as long as they are a multiple of 4.

  The PowerPC back end currently handles such instructions by carrying the 14-bit value (as it will get encoded into the actual machine instructions) in the machine operand fields for such instructions. This means that those values are in fact not the true offset, but rather the offset divided by 4 (and then truncated to an unsigned 14-bit value).

  Like in the case fixed in r182012, this makes common code operations on such offset values not work as expected. Furthermore, there doesn't really appear to be any strong reason why we should encode machine operands this way.

  This patch therefore changes the encoding of "memrix" type machine operands to simply contain the "true" offset value as a signed immediate value, while enforcing the rules that it must fit in a 16-bit signed value and must also be a multiple of 4. This change must be made simultaneously in all places that access machine operands of this type. However, just about all those changes make the code simpler; in many cases we can now just share the same code for memri and memrix operands.

  llvm-svn: 182032
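The invariant on the new operand encoding is simple to state in code; a small self-contained check, illustrative rather than the backend's actual verifier:

    #include <cstdint>

    // A "memrix" displacement as now carried in the machine operand: the true
    // byte offset, which must fit in a signed 16-bit field and be a multiple
    // of 4 (the hardware DS field encodes offset / 4 in 14 bits).
    bool isValidMemRIXOffset(int64_t Offset) {
      return Offset >= -32768 && Offset <= 32767 && (Offset & 3) == 0;
    }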
* Removed dead code.  (Rafael Espindola, 2013-05-16, 1 file, -8/+4)
  llvm-svn: 181975
* PPC32: Fix stack collision between FP and CR save areas.  (Bill Schmidt, 2013-05-14, 1 file, -0/+1)
  The changes to CR spill handling missed a case for 32-bit PowerPC. The code in PPCFrameLowering::processFunctionBeforeFrameFinalized() checks whether CR spill has occurred using a flag in the function info. This flag is only set by storeRegToStackSlot and loadRegFromStackSlot. spillCalleeSavedRegisters does not call storeRegToStackSlot, but instead produces MI directly. Thus we don't see that the CR is spilled when assigning frame offsets, and the CR spill ends up colliding with some other location (generally the FP slot).

  This patch sets the flag in spillCalleeSavedRegisters for PPC32 so that the CR spill is properly detected and gets its own slot in the stack frame.

  llvm-svn: 181800
* Change getFrameMoves to return a const reference.  (Rafael Espindola, 2013-05-11, 1 file, -9/+7)
  To add a frame move, there is now a dedicated addFrameMove which also takes care of constructing the move itself.

  llvm-svn: 181657
* Fix PPC64 CR spill location for callee-saved registers  (Hal Finkel, 2013-04-15, 1 file, -35/+41)
  This fixes an ABI bug for non-Darwin PPC64. For the callee-saved condition registers, the spill location is specified relative to the stack pointer (SP + 8). However, this is not relative to the SP after the new stack frame is established, but instead relative to the caller's stack pointer (it is stored into the linkage area of the parent's stack frame).

  So, like with the link register, we don't directly spill the CRs with other callee-saved registers, but just mark them to be spilled during prologue generation.

  In practice, this reverts r179457 for PPC64 (but leaves it in place for PPC32).

  llvm-svn: 179500
* Mark all PPC CR registers to be spilled as live-in and tag MFCR appropriately  (Hal Finkel, 2013-04-13, 1 file, -5/+14)
  Leaving MFCR as having unmodeled side effects is not enough to prevent unwanted instruction reordering post-RA. We could probably apply a stronger barrier attribute, but there is a better way: add all (not just the first) CR to be spilled as live-in to the entry block, and add all CRs to the MFCR instruction as implicitly killed.

  Unfortunately, I don't have a small test case.

  llvm-svn: 179465
* Spill and restore PPC CR registers using the FP when we have one  (Hal Finkel, 2013-04-13, 1 file, -6/+14)
  For functions that need to spill CRs, and have dynamic stack allocations, the value of the SP during the restore is not what it was during the save, and so we need to use the FP in these cases (as for all of the other spills and restores, but the CR restore has a special code path because its reserved slot, like the link register, is specified directly relative to the adjusted SP).

  llvm-svn: 179457
* Cleanup PPC CR-spill kill flags and 32- vs. 64-bit instructions  (Hal Finkel, 2013-03-28, 1 file, -6/+6)
  There were a few places where kill flags were not being set correctly, and where 32-bit instruction variants were being used with 64-bit registers. After r178180, this code was being triggered, causing llc to assert.

  llvm-svn: 178220
* PPC: Use HWEncoding and TRI->getEncodingValue  (Hal Finkel, 2013-03-26, 1 file, -5/+7)
  As pointed out by Jakob, we don't need to maintain a separate register-numbering table. Instead we should let TableGen generate the table for us from the information (already present) in PPCRegisterInfo.td. TRI->getEncodingValue is now used to access register-encoding values.

  No functionality change intended.

  llvm-svn: 178067
* Use multiple virtual registers in PPC CR spilling  (Hal Finkel, 2013-03-26, 1 file, -0/+7)
  Now that the register scavenger can support multiple spill slots, and PEI can use virtual-register-based scavenging for multiple simultaneous registers, we can use a virtual register for the transfer register in the CR spilling code.

  This should eliminate the last place (outside of the prologue/epilogue) where we depend on the unconditional availability of the r0 register. We will soon be able to allocate it (in a somewhat restricted sense) as a GPR.

  llvm-svn: 178060
* Note in PPCFunctionInfo VRSAVE spills  (Hal Finkel, 2013-03-23, 1 file, -2/+7)
  In preparation for using the new register scavenger capability for providing more than one register simultaneously, specifically note functions that have spilled VRSAVE (currently, this can happen only in functions that use the setjmp intrinsic). As with CR spilling, such functions will need to provide two emergency spill slots to the scavenger.

  No functionality change intended.

  llvm-svn: 177832
* Allow the register scavenger to spill multiple registers  (Hal Finkel, 2013-03-22, 1 file, -1/+1)
  This patch lets the register scavenger make use of multiple spill slots in order to guarantee that it will be able to provide multiple registers simultaneously.

  To support this, the RS's API has changed slightly: setScavengingFrameIndex / getScavengingFrameIndex have been replaced by addScavengingFrameIndex / isScavengingFrameIndex / getScavengingFrameIndices.

  In forthcoming commits, the PowerPC backend will use this capability in order to implement the spilling of condition registers, and some special-purpose registers, without relying on r0 being reserved. In some cases, spilling these registers requires two GPRs: one for addressing and one to hold the value being transferred.

  llvm-svn: 177774
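A hedged sketch of how a target might reserve two scavenging slots with the new API, assuming the usual in-scope objects (MFI, RS) inside processFunctionBeforeFrameFinalized; the slot size and alignment of 8 assume 64-bit GPRs, and the CreateStackObject parameters approximate that era's interface:

    // Reserve two emergency spill slots so the scavenger can hand out two
    // registers at once (one for addressing, one for the value in flight).
    if (RS) {
      int FI1 = MFI->CreateStackObject(8, 8, /*isSpillSlot=*/true);
      int FI2 = MFI->CreateStackObject(8, 8, /*isSpillSlot=*/true);
      RS->addScavengingFrameIndex(FI1);
      RS->addScavengingFrameIndex(FI2);
    }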
* Correct PPC FRAMEADDR lowering using a pseudo-register  (Hal Finkel, 2013-03-21, 1 file, -0/+28)
  The old code used to lower FRAMEADDR tried to replicate the logic in the real frame-lowering code that determines whether or not the frame pointer (r31) will be used. When it seemed as though the frame pointer would not be used, the stack pointer (r1) was used instead. Unfortunately, because the stack size is not yet known, this does not work. Instead, this change introduces new always-reserved pseudo-registers (FP and FP8) that are replaced during prologue insertion with the real frame-pointer register (either r1 or r31).

  It is important that this intrinsic always return a valid frame address because it is used by Clang to store the frame address as part of code generation for __builtin_setjmp.

  llvm-svn: 177653
* Improve PPC VR (Altivec) register spilling  (Hal Finkel, 2013-03-17, 1 file, -1/+6)
  This change cleans up two issues with Altivec register spilling:

  1. The spilling code was inefficient (using two instructions, an add and a load, when just one would do)

  2. The code assumed that r0 would always be available (true for now, but this will change)

  The new code handles VR spilling just like GPR spills but forced into r+r mode. As a result, when any VR spills are present, we must now always allocate the register-scavenger spill slot.

  llvm-svn: 177231
* Allocate the RS spill slot for any PPC function with spills and a large stack frame  (Hal Finkel, 2013-03-15, 1 file, -31/+57)
  For spills into a large stack frame, the FI-elimination code uses the register scavenger to obtain a free GPR for use with an r+r-addressed load or store. When there are no available GPRs, the scavenger gets one by using its spill slot. Previously, we were not always allocating that spill slot and the RS would assert when the spill slot was needed.

  I don't currently have a small test that triggered the assert, but I've created a small regression test that verifies that the spill slot is now added when the stack frame is sufficiently large.

  llvm-svn: 177140
* Provide the register scavenger to processFunctionBeforeFrameFinalized  (Hal Finkel, 2013-03-14, 1 file, -2/+2)
  Add the current PEI register scavenger as a parameter to the processFunctionBeforeFrameFinalized callback.

  This change is necessary in order to allow the PowerPC target code to set the register scavenger frame index after the save-area offset adjustments performed by processFunctionBeforeFrameFinalized. Only after these adjustments have been made is it possible to estimate the size of the stack frame.

  llvm-svn: 177108
* Not all PPC functions with a frame pointer need a RS spill slot  (Hal Finkel, 2013-03-14, 1 file, -1/+1)
  We used to add a spill slot for the register scavenger whenever the function has a frame pointer. This is unnecessarily conservative: We may need the spill slot for dynamic stack allocations, and functions with dynamic stack allocations always have a FP, but we might also have a FP for other reasons (such as the user explicitly disabling frame-pointer elimination), and we don't necessarily need a spill slot for those functions.

  The structsinregs test needed adjustment because it disables FP elimination.

  llvm-svn: 177106
* Fix PR15332 (patch by Florian Zeitz).  (Bill Schmidt, 2013-02-26, 1 file, -4/+5)
  There's no need to generate a stack frame for PPC32 SVR4 when there are no local variables assigned to the stack, i.e., when no red zone is needed. (PPC64 supports a red zone, but PPC32 does not.)

  llvm-svn: 176124
* Fix PR14364.  (Bill Schmidt, 2013-02-24, 1 file, -1/+12)
  This removes a const_cast hack from PPCRegisterInfo::hasReservedSpillSlot(). The proper place to save the frame index for the CR spill slot is in the PPCFunctionInfo object, not the PPCRegisterInfo object.

  No new test cases, as this just reimplements existing functionality. Existing tests such as test/CodeGen/PowerPC/crsave.ll are sufficient.

  llvm-svn: 175998
* Move the eliminateCallFramePseudoInstr method from TargetRegisterInfo to TargetFrameLowering, where it belongs.  (Eli Bendersky, 2013-02-21, 1 file, -0/+41)
  Incidentally, this allows us to delete some duplicated (and slightly different!) code in TRI.

  There are potentially other layering problems that can be cleaned up as a result, or in a similar manner.

  The refactoring was OK'd by Anton Korobeynikov on llvmdev.

  Note: this touches the target interfaces, so out-of-tree targets may be affected.

  llvm-svn: 175788
* Avoid using MRI::liveout_iterator for computing VRSAVEs.  (Jakob Stoklund Olesen, 2013-02-05, 1 file, -6/+15)
  The liveout lists are about to be removed from MRI; this is the only place they were used after register allocation.

  Get the live out V registers directly from the return instructions instead.

  llvm-svn: 174399
* Move all of the header files which are involved in modelling the LLVM IR into their new header subdirectory: include/llvm/IR  (Chandler Carruth, 2013-01-02, 1 file, -1/+1)
  This matches the directory structure of lib, and begins to correct a long standing point of file layout clutter in LLVM.

  There are still more header files to move here, but I wanted to handle them in separate commits to make tracking what files make sense at each layer easier. The only really questionable files here are the target intrinsic tablegen files. But that's a battle I'd rather not fight today.

  I've updated both CMake and Makefile build systems (I think, and my tests think, but I may have missed something). I've also re-sorted the includes throughout the project.

  I'll be committing updates to Clang, DragonEgg, and Polly momentarily.

  llvm-svn: 171366