path: root/llvm/test/CodeGen/PowerPC
Commit message  Author  Age  Files  Lines
* Disambiguate function names in some CodeGen tests.  Stephen Lin  2013-07-18  1  -28/+28
    Some tests were using function names that also were names of instructions and/or doing other unusual things that were making the test not amenable to otherwise scriptable pattern matching. No functionality change.
    llvm-svn: 186621
* PPC: Support dynamic allocas with large alignment  Hal Finkel  2013-07-18  1  -0/+39
    Support for dynamic stack alignments in the PPC backend has been unfinished, in part because it depends on dynamic stack realignment (which I only just recently implemented fully). Now we can also support dynamic allocas with higher than the default target stack alignment (16 bytes).
    In order to round up the requested size to the maximum requested alignment, we need an additional register to hold the rounded-up size. We're already using one scavenged register to hold the previous stack-pointer value (which needs to be stored with the signal-safe stdux update), and so when we have dynamic allocas and a large alignment, we allocate two emergency spill slots for the scavenger.
    llvm-svn: 186562
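    A minimal sketch (illustrative only, not the commit's actual test) of the kind of IR this enables; the names are hypothetical:
        define void @alloc_aligned(i64 %n) {
        entry:
          ; dynamic size and 64-byte alignment, above the 16-byte default
          %buf = alloca i8, i64 %n, align 64
          call void @use(i8* %buf)
          ret void
        }
        declare void @use(i8*)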
* PPC: Add base-pointer support to builtin setjmp/longjmp  Hal Finkel  2013-07-17  2  -11/+58
    First, this changes the base-pointer implementation to remove an unnecessary complication (and one that is incompatible with how builtin SjLj is implemented): instead of using r31 as the base pointer when it is not needed as a frame pointer, now the base pointer will always be r30 when needed.
    Second, we introduce another pseudo register, BP, which is used just like the FP pseudo register to refer to the base register before we know for certain what register it will be.
    Third, we now save BP into the jmp_buf, and restore r30 from that slot in longjmp. If the function that called setjmp did not use a base pointer, then r30 will be overwritten by the setjmp-calling-function's restore code. FP restoration (which is restored into r31) works the same way.
    llvm-svn: 186545
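    For context, builtin setjmp reaches the back end as the SjLj intrinsic; a hypothetical use looks like this (a sketch, not taken from the commit; the buffer name and size are illustrative):
        @buf = global [40 x i8] zeroinitializer
        define i32 @f() {
        entry:
          ; with this change, BP (r30) is saved into the buffer alongside FP and SP
          %r = call i32 @llvm.eh.sjlj.setjmp(i8* getelementptr inbounds ([40 x i8]* @buf, i64 0, i64 0))
          ret i32 %r
        }
        declare i32 @llvm.eh.sjlj.setjmp(i8*)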
* PPC: Add CTR-register clobber to builtin setjmp  Hal Finkel  2013-07-17  1  -0/+50
    Because the builtin longjmp implementation uses a CTR-based indirect jump, when the control flow arrives at the builtin setjmp call, the CTR register has necessarily been clobbered. Correspondingly, this adds CTR to the list of implicit definitions of the builtin setjmp pseudo instruction.
    We don't need to add CTR to the implicit definitions of builtin longjmp because, even though it does clobber the CTR register, the control flow cannot return to inside the loop unless there is also a builtin setjmp call.
    llvm-svn: 186488
* PPC: Implement base pointer and stack realignment  Hal Finkel  2013-07-17  1  -0/+151
    This builds on some frame-lowering code that has existed since 2005 (r24224) but was disabled in 2008 (r48188) because it needed base pointer support to function correctly. This implementation follows the strategy suggested by Dale Johannesen in r48188 where the following comment was added:
        This does not currently work, because the delta between old and new stack pointers is added to offsets that reference incoming parameters after the prolog is generated, and the code that does that doesn't handle a variable delta. You don't want to do that anyway; a better approach is to reserve another register that retains to the incoming stack pointer, and reference parameters relative to that.
    And now we do exactly that. If we don't need a frame pointer, then we use r31 as a base pointer. If we do need a frame pointer, then we use r30 as a base pointer. The base pointer retains the value of the stack pointer before it was decremented in the prologue. We then use the base pointer to resolve all negative frame indices.
    The basic scheme follows that for base pointers in the X86 backend. We use a base pointer when we need to dynamically realign the incoming stack pointer. This currently applies only to static objects (dynamic allocas with large alignments, and base-pointer support in SjLj lowering, will come in future commits).
    llvm-svn: 186478
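    A sketch of the kind of function that now triggers dynamic realignment (illustrative; not the commit's test):
        define void @realign() {
        entry:
          ; a static object whose alignment exceeds the ABI stack alignment
          ; forces the prologue to realign SP and set up a base pointer
          %x = alloca [8 x i32], align 128
          call void @escape([8 x i32]* %x)
          ret void
        }
        declare void @escape([8 x i32]*)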
* [APFloat] PR16573: Avoid losing mantissa bits in ppc_fp128 to double truncation  Ulrich Weigand  2013-07-16  1  -0/+11
    When truncating to a format with fewer mantissa bits, APFloat::convert will perform a right shift of the mantissa by the difference of the precision of the two formats. Usually, this will result in just the mantissa bits needed for the target format.
    One special situation is if the input number is denormal. In this case, the right shift may discard significant bits. This is usually not a problem, since truncating a denormal usually results in zero (underflow) after normalization anyway, since the result format's exponent range is usually smaller than the source format's.
    However, there is one case where the latter property does not hold: when truncating from ppc_fp128 to double. In particular, truncating a ppc_fp128 whose first double of the pair is denormal should result in just that first double, not zero. The current code however performs an excessive right shift, resulting in lost result bits. This is then caught in the APFloat::normalize call performed by APFloat::convert and causes an assertion failure.
    This patch checks for the scenario of truncating a denormal, and attempts to (possibly partially) replace the initial mantissa right shift by decrementing the exponent, if doing so will still result in a valid *target format* exponent.
    Index: test/CodeGen/PowerPC/pr16573.ll
    ===================================================================
    --- test/CodeGen/PowerPC/pr16573.ll	(revision 0)
    +++ test/CodeGen/PowerPC/pr16573.ll	(revision 0)
    @@ -0,0 +1,11 @@
    +; RUN: llc < %s | FileCheck %s
    +
    +target triple = "powerpc64-unknown-linux-gnu"
    +
    +define double @test() {
    +  %1 = fptrunc ppc_fp128 0xM818F2887B9295809800000000032D000 to double
    +  ret double %1
    +}
    +
    +; CHECK: .quad -9111018957755033591
    +
    Index: lib/Support/APFloat.cpp
    ===================================================================
    --- lib/Support/APFloat.cpp	(revision 185817)
    +++ lib/Support/APFloat.cpp	(working copy)
    @@ -1956,6 +1956,23 @@
         X86SpecialNan = true;
       }
    +  // If this is a truncation of a denormal number, and the target semantics
    +  // has larger exponent range than the source semantics (this can happen
    +  // when truncating from PowerPC double-double to double format), the
    +  // right shift could lose result mantissa bits. Adjust exponent instead
    +  // of performing excessive shift.
    +  if (shift < 0 && isFiniteNonZero()) {
    +    int exponentChange = significandMSB() + 1 - fromSemantics.precision;
    +    if (exponent + exponentChange < toSemantics.minExponent)
    +      exponentChange = toSemantics.minExponent - exponent;
    +    if (exponentChange < shift)
    +      exponentChange = shift;
    +    if (exponentChange < 0) {
    +      shift -= exponentChange;
    +      exponent += exponentChange;
    +    }
    +  }
    +
       // If this is a truncation, perform the shift before we narrow the storage.
       if (shift < 0 && (isFiniteNonZero() || category==fcNaN))
         lostFraction = shiftRight(significandParts(), oldPartCount, -shift);
    llvm-svn: 186409
* Fix register subclass handling in PPCInstrInfo::insertSelect  Hal Finkel  2013-07-15  1  -0/+50
    PPCInstrInfo::insertSelect and PPCInstrInfo::canInsertSelect were computing the common subclass of the true and false inputs, and then selecting either the 32-bit or the 64-bit isel variant based on the result of calling PPC::GPRCRegClass.hasSubClassEq(RC) and PPC::G8RCRegClass.hasSubClassEq(RC) (where RC is the common subclass). Unfortunately, this is not quite right: if we have something like this:
        %vreg8<def> = SELECT_CC_I8 %vreg4<kill>, %vreg7<kill>, %vreg6<kill>, 76;
          G8RC_and_G8RC_NOX0:%vreg8 CRRC:%vreg4 G8RC_NOX0:%vreg7,%vreg6
    then the common subclass of G8RC_and_G8RC_NOX0 and G8RC_NOX0 is G8RC_NOX0, and G8RC_NOX0 is not a subclass of G8RC (because it also contains the ZERO8 pseudo-register). As a result, we also need to check the common subclass against GPRC_NOR0 and G8RC_NOX0 explicitly.
    This had not been a problem for clients of insertSelect that called canInsertSelect first (because it had a compensating mistake), but insertSelect is also used by the PPC pseudo-instruction expander, and this error was causing a problem in that context. This problem was found by csmith.
    llvm-svn: 186343
* Remove invalid assert in DAGTypeLegalizer::RemapValue  Hal Finkel  2013-07-15  1  -0/+57
    There is a comment at the top of DAGTypeLegalizer::PerformExpensiveChecks which, in part, says:
        // Note that these invariants may not hold momentarily when processing a node:
        // the node being processed may be put in a map before being marked Processed.
    Unfortunately, this assert would be valid only if the above-mentioned invariant held unconditionally. This was causing llc to assert when, in fact, everything was fine.
    Thanks to Richard Sandiford for investigating this issue! Fixes PR16562.
    llvm-svn: 186338
* Mass update to CodeGen tests to use CHECK-LABEL for labels corresponding to function definitions, for more informative error messages.  Stephen Lin  2013-07-14  21  -128/+128
    No functionality change and all updated tests passed locally. This update was done with the following bash script:
        find test/CodeGen -name "*.ll" | \
        while read NAME; do
          echo "$NAME"
          if ! grep -q "^; *RUN: *llc.*debug" $NAME; then
            TEMP=`mktemp -t temp`
            cp $NAME $TEMP
            sed -n "s/^define [^@]*@\([A-Za-z0-9_]*\)(.*$/\1/p" < $NAME | \
            while read FUNC; do
              sed -i '' "s/;\(.*\)\([A-Za-z0-9_-]*\):\( *\)$FUNC: *\$/;\1\2-LABEL:\3$FUNC:/g" $TEMP
            done
            sed -i '' "s/;\(.*\)-LABEL-LABEL:/;\1-LABEL:/" $TEMP
            sed -i '' "s/;\(.*\)-NEXT-LABEL:/;\1-NEXT:/" $TEMP
            sed -i '' "s/;\(.*\)-NOT-LABEL:/;\1-NOT:/" $TEMP
            sed -i '' "s/;\(.*\)-DAG-LABEL:/;\1-DAG:/" $TEMP
            mv $TEMP $NAME
          fi
        done
    llvm-svn: 186280
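    The effect of the conversion, on a hypothetical test, is just a directive change; with CHECK-LABEL, FileCheck resets its matching scope at each function label instead of letting a failure in one function match text from another:
        ; Before:  ; CHECK: foo:
        ; After:   ; CHECK-LABEL: foo:
        define i32 @foo(i32 %a) {
          ret i32 %a
        }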
* Convert Windows to Unix line endings, no functionality change.  Stephen Lin  2013-07-13  1  -41/+41
    llvm-svn: 186264
* Convert CodeGen/*/*.ll tests to use the new CHECK-LABEL for easier debugging.  Stephen Lin  2013-07-13  22  -68/+68
    No functionality change and all tests pass after conversion. This was done with the following sed invocation to catch label lines demarking function boundaries:
        sed -i '' "s/^;\( *\)\([A-Z0-9_]*\):\( *\)test\([A-Za-z0-9_-]*\):\( *\)$/;\1\2-LABEL:\3test\4:\5/g" test/CodeGen/*/*.ll
    which was written conservatively to avoid false positives rather than false negatives. I scanned through all the changes and everything looks correct.
    llvm-svn: 186258
* Start using CHECK-LABEL in some tests.  Stephen Lin  2013-07-12  1  -5/+5
    llvm-svn: 186163
* PPC: Add some missing V_SET0 patterns  Hal Finkel  2013-07-11  1  -0/+18
    We had patterns to match v4i32 immAllZerosV -> V_SET0, but not patterns for v8i16 (which occurs in the test case) or v16i8. The same was true for V_SETALLONES (so I added the associated patterns for those as well).
    Another bug found by llvm-stress.
    llvm-svn: 186108
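    A minimal sketch of an input that needed the new pattern (assumes Altivec is enabled; not the actual test):
        define <8 x i16> @zerovec() {
        entry:
          ; selected via the new v8i16 immAllZerosV -> V_SET0 pattern
          ret <8 x i16> zeroinitializer
        }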
* PPCDAGToDAGISel::isRunOfOnes should return false on zero  Hal Finkel  2013-07-11  1  -0/+27
    This fixes a bug (found by csmith) at -O0 where we attempt to create a RLWIMI with an out-of-range operand. Most uses of the isRunOfOnes function are guarded by a condition that the value is not zero. This was not true in two places, and in both places a zero input would result in an out-of-range MB value (= 32).
    To fix this, isRunOfOnes returns false on a zero input (and I've removed one now-redundant guard).
    llvm-svn: 186101
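    Illustrative IR for the failure mode (a hypothetical sketch; at -O0 a zero mask like this can survive to instruction selection because no IR-level folding runs first):
        define i32 @f(i32 %x, i32 %y) {
        entry:
          ; a zero mask is not a run of ones; isRunOfOnes must return false
          ; rather than yield an out-of-range MB encoding for rlwimi
          %a = and i32 %x, 0
          %b = or i32 %a, %y
          ret i32 %b
        }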
* RegScavenger should not exclude undef uses  Hal Finkel  2013-07-11  1  -0/+48
    When computing currently-live registers, the register scavenger excludes undef uses. As a result, undef uses are ignored when computing the restore points of registers spilled into the emergency slots. While the register scavenger normally excludes the registers used by the current instruction from consideration when scavenging, it must not exclude undef uses. Otherwise, we might end up requiring more emergency spill slots than we have (in the case where the undef use *is* the currently-spilled register).
    Another bug found by llvm-stress.
    llvm-svn: 186067
* WidenVecRes_BUILD_VECTOR must use the first operand's type  Hal Finkel  2013-07-09  1  -0/+23
    Because integer BUILD_VECTOR operands may have a larger type than the result's vector element type, and all operands must have the same type, when widening a BUILD_VECTOR node by adding UNDEFs, we cannot use the vector element type, but rather must use the type of the existing operands.
    Another bug found by llvm-stress.
    llvm-svn: 185960
* [PowerPC] Better fix for PR16556.  Bill Schmidt  2013-07-09  1  -0/+41
    A more complete example of the bug in PR16556 was recently provided, showing that the previous fix was not sufficient. The previous fix is reverted herein.
    The real problem is that ReplaceNodeResults() uses LowerFP_TO_INT as custom lowering for FP_TO_SINT during type legalization, without checking whether the input type is handled by that routine. LowerFP_TO_INT requires the input to be f32 or f64, so we fail when the input is ppcf128.
    I'm leaving the test case from the initial fix (r185821) in place, and adding the new test as another crash-only check.
    llvm-svn: 185959
* AArch64/PowerPC/SystemZ/X86: Fix the interface, usage, and all in-tree implementations of TargetLoweringBase::isFMAFasterThanMulAndAdd  Stephen Lin  2013-07-09  1  -0/+56
    This resolves the following issues with fmuladd (i.e. optional FMA) intrinsics:
    1. On X86(-64) targets, ISD::FMA nodes are formed when lowering fmuladd intrinsics even if the subtarget does not support FMA instructions, leading to laughably bad code generation in some situations.
    2. On AArch64 targets, ISD::FMA nodes are formed for operations on fp128, resulting in a call to a software fp128 FMA implementation.
    3. On PowerPC targets, FMAs are not generated from fmuladd intrinsics on types like v2f32, v8f32, v4f64, etc., even though they promote, split, scalarize, etc. to types that support hardware FMAs.
    The function has also been slightly renamed for consistency and to force a merge/build conflict for any out-of-tree target implementing it. To resolve, see comments and fixed in-tree examples.
    llvm-svn: 185956
* Don't crash in SE dealing with ashr x, -1  Hal Finkel  2013-07-09  1  -0/+18
    ScalarEvolution::getSignedRange uses ComputeNumSignBits from ValueTracking on ashr instructions. ComputeNumSignBits can return zero, but this case was not handled correctly by the code in getSignedRange, which was calling:
        APInt::getSignedMinValue(BitWidth).ashr(NS - 1)
    with NS = 0, resulting in an assertion failure in APInt::ashr. Now, we just return the conservative result (as with NS == 1).
    Another bug found by llvm-stress.
    llvm-svn: 185955
* DAGCombine tryFoldToZero cannot create illegal types after type legalization  Hal Finkel  2013-07-09  1  -0/+17
    When folding sub x, x (and other similar constructs), where x is a vector, the result is a vector of zeros. After type legalization, make sure that the input zero elements have a legal type. This type may be larger than the result's vector element type.
    This was another bug found by llvm-stress.
    llvm-svn: 185949
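    One possible shape of the folded input, as a hedged sketch (the vector type is illustrative; llvm-stress found the original case):
        define <2 x i64> @f(<2 x i64> %x) {
        entry:
          ; folds to a zero vector; after type legalization the zero
          ; elements must be built with a legal element type
          %r = sub <2 x i64> %x, %x
          ret <2 x i64> %r
        }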
* [PowerPC] Revert r185476 and fix up TLS variant kinds  Ulrich Weigand  2013-07-09  3  -3/+3
    In the commit message to r185476 I wrote:
        > The PowerPC-specific modifiers VK_PPC_TLSGD and VK_PPC_TLSLD
        > correspond exactly to the generic modifiers VK_TLSGD and VK_TLSLD.
        > This causes some confusion with the asm parser, since VK_PPC_TLSGD
        > is output as @tlsgd, which is then read back in as VK_TLSGD.
        >
        > To avoid this confusion, this patch removes the PowerPC-specific
        > modifiers and uses the generic modifiers throughout. (The only
        > drawback is that the generic modifiers are printed in upper case
        > while the usual convention on PowerPC is to use lower-case modifiers.
        > But this is just a cosmetic issue.)
    This was unfortunately incorrect; there is in fact another, serious drawback to using the default VK_TLSLD/VK_TLSGD variant kinds: using these causes ELFObjectWriter::RelocNeedsGOT to return true, which in turn causes the ELFObjectWriter to emit an undefined reference to _GLOBAL_OFFSET_TABLE_. This is a problem on powerpc64, because it uses the TOC instead of the GOT, and the linker does not provide _GLOBAL_OFFSET_TABLE_, so the symbol remains undefined. This means shared libraries using TLS built with the integrated assembler are currently broken.
    While the whole RelocNeedsGOT / _GLOBAL_OFFSET_TABLE_ situation probably ought to be properly fixed at some point, for now I'm simply reverting the r185476 commit. Now this in turn exposes the breakage of handling @tlsgd/@tlsld in the asm parser that this check-in was originally intended to fix. To avoid this regression, I'm also adding a different fix for this problem: while common code now parses @tlsgd as VK_TLSGD, a special hack in the asm parser translates this code to the platform-specific VK_PPC_TLSGD that the back-end now expects. While this is not really pretty, it's self-contained and shouldn't hurt anything else for now. Once the underlying problem is fixed, this hack can be reverted again.
    llvm-svn: 185945
* PPC: Allocate RS spill slot for unaligned i64 load/store  Hal Finkel  2013-07-09  1  -0/+119
    This fixes another bug found by llvm-stress! If we happen to be doing an i64 load or store into a stack slot that has less than a 4-byte alignment, then the frame-index elimination may need to use an indexed load or store instruction (because the offset may not be a multiple of 4, a requirement of the STD/LD instructions). The extra register needed to hold the offset comes from the register scavenger, and it is possible that the scavenger will need to use an emergency spill slot. As a result, we need to make sure that a spill slot is allocated when doing an i64 load/store into a less-than-4-byte-aligned stack slot.
    Because test cases for things like this tend to be fairly fragile, I've concatenated a few small bugpoint-reduced test cases together to form the regression test.
    llvm-svn: 185907
* [PowerPC] Always use "assembler dialect" 1  Ulrich Weigand  2013-07-08  1  -0/+18
    A setting in MCAsmInfo defines the "assembler dialect" to use. This is used by common code to choose between alternatives in a multi-alternative GNU inline asm statement like the following:
        __asm__ ("{sfe|subfe} %0,%1,%2" : "=r" (out) : "r" (in1), "r" (in2));
    The meaning of these dialects is platform specific, and GCC defines those for PowerPC to use dialect 0 for old-style (POWER) mnemonics and 1 for new-style (PowerPC) mnemonics, like in the example above.
    To be compatible with inline asm used with GCC, LLVM ought to do the same. Specifically, this means we should always use assembler dialect 1 since old-style mnemonics really aren't supported on any current platform. However, the current LLVM back-end uses:
        AssemblerDialect = 1;           // New-Style mnemonics.
    in PPCMCAsmInfoDarwin, and
        AssemblerDialect = 0;           // Old-Style mnemonics.
    in PPCLinuxMCAsmInfo. The Linux setting really isn't correct; we should be using new-style mnemonics everywhere. This is changed by this commit.
    Unfortunately, the setting of this variable is overloaded in the back-end to decide whether or not we are on a Darwin target. This is done in PPCInstPrinter (the "SyntaxVariant" is initialized from the MCAsmInfo AssemblerDialect setting), and also in PPCMCExpr. Setting AssemblerDialect to 1 for both Darwin and Linux no longer allows us to make this distinction.
    Instead, this patch uses the MCSubtargetInfo passed to createPPCMCInstPrinter to distinguish Darwin targets, and ignores the SyntaxVariant parameter. As to PPCMCExpr, this patch adds an explicit isDarwin argument that needs to be passed in by the caller when creating a target MCExpr. (To do so this patch implicitly also reverts commit 184441.)
    llvm-svn: 185858
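    For reference, the same multi-alternative asm string at the LLVM IR level looks like this (a hedged sketch, not taken from the commit; function and value names are hypothetical):
        define i32 @sub_extended(i32 %in1, i32 %in2) {
        entry:
          ; with dialect 1, the second alternative ("subfe") is emitted
          %out = call i32 asm "{sfe|subfe} $0,$1,$2", "=r,r,r"(i32 %in1, i32 %in2)
          ret i32 %out
        }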
* PPC: Mark vector CC action for SETO and SETONE as Expand  Hal Finkel  2013-07-08  1  -0/+17
    Another bug found by llvm-stress! This fixes hitting
        llvm_unreachable("Invalid integer vector compare condition");
    at the end of getVCmpInst in PPCISelDAGToDAG.
    llvm-svn: 185855
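    A sketch of IR that reaches the previously-unhandled condition codes (illustrative only; assumes Altivec):
        define <4 x i32> @cmp_one(<4 x float> %a, <4 x float> %b) {
        entry:
          ; SETONE has no single Altivec compare; it is now expanded
          %c = fcmp one <4 x float> %a, %b
          %r = sext <4 x i1> %c to <4 x i32>
          ret <4 x i32> %r
        }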
* PPC: Mark vector FREM as Expand by default  Hal Finkel  2013-07-08  1  -0/+8
    Another bug found by llvm-stress! This fixes crashing with:
        LLVM ERROR: Cannot select: v4f32 = frem ...
    llvm-svn: 185840
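    The crashing construct, as a minimal sketch (not the commit's test); with the Expand action this becomes scalar fmodf library calls:
        define <4 x float> @vrem(<4 x float> %a, <4 x float> %b) {
        entry:
          %r = frem <4 x float> %a, %b
          ret <4 x float> %r
        }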
* [PowerPC] Fix PR16556 (handle undef ppcf128 in LowerFP_TO_INT).  Bill Schmidt  2013-07-08  1  -0/+20
    PPCTargetLowering::LowerFP_TO_INT() expects its source operand to be either an f32 or f64, but this is not checked. A long double (ppcf128) operand will normally be custom-lowered to a conversion to f64 in this context. However, this isn't the case for an UNDEF node.
    This patch recognizes a ppcf128 as a legal source operand for FP_TO_INT only if it's an undef, in which case it creates an undef of the target type. At some point we might want to do a wholesale custom lowering of ISD::UNDEF when the type is ppcf128, but it's not really clear that's a great idea, and probably more work than it's worth for a situation that only arises in the case of a programming error. At this point I think simple is best.
    The test case comes from PR16556, and is a crash-test only.
    llvm-svn: 185821
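    A crash-only sketch of the construct (illustrative; the actual test comes from PR16556):
        define i32 @f() {
        entry:
          ; an UNDEF ppc_fp128 reaches FP_TO_SINT without the usual custom
          ; lowering to f64; it is now folded to undef of the result type
          %i = fptosi ppc_fp128 undef to i32
          ret i32 %i
        }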
* Fix PromoteIntRes_BUILD_VECTOR crash with i1 vectors  Hal Finkel  2013-07-08  1  -0/+39
    This fixes a bug (found by llvm-stress) in DAGTypeLegalizer::PromoteIntRes_BUILD_VECTOR where it assumed that the result type would always be larger than the original operands. This is not always true, however, with boolean vectors. For example, promoting a node of type v8i1 (where the operands will be of type i32, the type to which i1 is promoted) will yield a node with a result vector element type of i16 (and operands of type i32). As a result, we cannot blindly assume that we can ANY_EXTEND the operands to the result type.
    llvm-svn: 185794
* [PowerPC] Use mtocrf when available  Ulrich Weigand  2013-07-03  1  -10/+10
    Just as with mfocrf, it is also preferable to use mtocrf instead of mtcrf when only a single CR register is to be written. Current code however always emits mtcrf. This probably does not matter when using an external assembler, since the GNU assembler will in fact automatically replace mtcrf with mtocrf when possible. It does create inefficient code with the integrated assembler, however.
    To fix this, this patch adds MTOCRF/MTOCRF8 instruction patterns and uses those instead of MTCRF/MTCRF8 everywhere. Just as done in the MFOCRF patch committed as 185556, these patterns will be converted back to MTCRF if MTOCRF is not available on the machine. As a side effect, this allows to modify the MTCRF pattern to accept the full range of mask operands for the benefit of the asm parser.
    llvm-svn: 185561
* [PowerPC] Remove VK_PPC_TLSGD and VK_PPC_TLSLD  Ulrich Weigand  2013-07-02  3  -3/+3
    The PowerPC-specific modifiers VK_PPC_TLSGD and VK_PPC_TLSLD correspond exactly to the generic modifiers VK_TLSGD and VK_TLSLD. This causes some confusion with the asm parser, since VK_PPC_TLSGD is output as @tlsgd, which is then read back in as VK_TLSGD.
    To avoid this confusion, this patch removes the PowerPC-specific modifiers and uses the generic modifiers throughout. (The only drawback is that the generic modifiers are printed in upper case while the usual convention on PowerPC is to use lower-case modifiers. But this is just a cosmetic issue.)
    llvm-svn: 185476
* Cleanup PPC Altivec registers in CSR lists and improve VRSAVE handling  Hal Finkel  2013-07-02  1  -0/+34
    There are a couple of (small) related changes here:
    1. The printed name of the VRSAVE register has been changed from VRsave to vrsave in order to match the name accepted by GNU binutils.
    2. Support for parsing vrsave has been added to the asm parser (it seems that there was no test case specifically covering this code, so I've added one).
    3. The list of Altivec registers, which was common to all calling conventions, has been separated out. This allows us to define the base CSR lists, and then lists for each ABI with Altivec included. This allows SjLj, for example, to work correctly on non-Altivec targets without using unnatural definitions of the NoRegs CSR list.
    4. VRSAVE is now always reserved on non-Darwin targets and all Altivec registers are reserved when Altivec is disabled.
    With these changes, it is now possible to compile a function containing __builtin_unwind_init() on Linux/PPC64 with debugging information. This did not work previously because GNU binutils assumes that all .cfi_offset offsets will be 8-byte aligned on PPC64 (and errors out if you provide a non-8-byte-aligned offset). This is not true for the vrsave register, however, because this register is used only on Darwin; GCC does not bother printing a .cfi_offset entry for it (even though there is a slot in the stack frame for it as specified by the ABI). This change allows us to do the same: we will also not print .cfi_offset directives for vrsave.
    llvm-svn: 185409
* Index: test/CodeGen/PowerPC/reloc-align.ll  Bill Schmidt  2013-07-01  1  -0/+34
    ===================================================================
    --- test/CodeGen/PowerPC/reloc-align.ll	(revision 0)
    +++ test/CodeGen/PowerPC/reloc-align.ll	(revision 0)
    @@ -0,0 +1,34 @@
    +; RUN: llc -mcpu=pwr7 -O1 < %s | FileCheck %s
    +
    +; This test verifies that the peephole optimization of address accesses
    +; does not produce a load or store with a relocation that can't be
    +; satisfied for a given instruction encoding. Reduced from a test supplied
    +; by Hal Finkel.
    +
    +target datalayout = "E-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-f128:128:128-v128:128:128-n32:64"
    +target triple = "powerpc64-unknown-linux-gnu"
    +
    +%struct.S1 = type { [8 x i8] }
    +
    +@main.l_1554 = internal global { i8, i8, i8, i8, i8, i8, i8, i8 } { i8 -1, i8 -6, i8 57, i8 62, i8 -48, i8 0, i8 58, i8 80 }, align 1
    +
    +; Function Attrs: nounwind readonly
    +define signext i32 @main() #0 {
    +entry:
    +  %call = tail call fastcc signext i32 @func_90(%struct.S1* byval bitcast ({ i8, i8, i8, i8, i8, i8, i8, i8 }* @main.l_1554 to %struct.S1*))
    +; CHECK-NOT: ld {{[0-9]+}}, main.l_1554@toc@l
    +  ret i32 %call
    +}
    +
    +; Function Attrs: nounwind readonly
    +define internal fastcc signext i32 @func_90(%struct.S1* byval nocapture %p_91) #0 {
    +entry:
    +  %0 = bitcast %struct.S1* %p_91 to i64*
    +  %bf.load = load i64* %0, align 1
    +  %bf.shl = shl i64 %bf.load, 26
    +  %bf.ashr = ashr i64 %bf.shl, 54
    +  %bf.cast = trunc i64 %bf.ashr to i32
    +  ret i32 %bf.cast
    +}
    +
    +attributes #0 = { nounwind readonly "less-precise-fpmad"="false" "no-frame-pointer-elim"="true" "no-frame-pointer-elim-non-leaf"="true" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "unsafe-fp-math"="false" "use-soft-float"="false" }
    Index: lib/Target/PowerPC/PPCAsmPrinter.cpp
    ===================================================================
    --- lib/Target/PowerPC/PPCAsmPrinter.cpp	(revision 185327)
    +++ lib/Target/PowerPC/PPCAsmPrinter.cpp	(working copy)
    @@ -679,7 +679,26 @@ void PPCAsmPrinter::EmitInstruction(const MachineI
           OutStreamer.EmitRawText(StringRef("\tmsync"));
           return;
         }
    +    break;
    +  case PPC::LD:
    +  case PPC::STD:
    +  case PPC::LWA: {
    +    // Verify alignment is legal, so we don't create relocations
    +    // that can't be supported.
    +    // FIXME: This test is currently disabled for Darwin. The test
    +    // suite shows a handful of test cases that fail this check for
    +    // Darwin. Those need to be investigated before this sanity test
    +    // can be enabled for those subtargets.
    +    if (!Subtarget.isDarwin()) {
    +      unsigned OpNum = (MI->getOpcode() == PPC::STD) ? 2 : 1;
    +      const MachineOperand &MO = MI->getOperand(OpNum);
    +      if (MO.isGlobal() && MO.getGlobal()->getAlignment() < 4)
    +        llvm_unreachable("Global must be word-aligned for LD, STD, LWA!");
    +    }
    +    // Now process the instruction normally.
    +    break;
    +  }
       }
       LowerPPCMachineInstrToMCInst(MI, TmpInst, *this);
       OutStreamer.EmitInstruction(TmpInst);
    Index: lib/Target/PowerPC/PPCISelDAGToDAG.cpp
    ===================================================================
    --- lib/Target/PowerPC/PPCISelDAGToDAG.cpp	(revision 185327)
    +++ lib/Target/PowerPC/PPCISelDAGToDAG.cpp	(working copy)
    @@ -1530,6 +1530,14 @@ void PPCDAGToDAGISel::PostprocessISelDAG() {
         if (GlobalAddressSDNode *GA = dyn_cast<GlobalAddressSDNode>(ImmOpnd)) {
           SDLoc dl(GA);
           const GlobalValue *GV = GA->getGlobal();
    +      // We can't perform this optimization for data whose alignment
    +      // is insufficient for the instruction encoding.
    +      if (GV->getAlignment() < 4 &&
    +          (StorageOpcode == PPC::LD || StorageOpcode == PPC::STD ||
    +           StorageOpcode == PPC::LWA)) {
    +        DEBUG(dbgs() << "Rejected this candidate for alignment.\n\n");
    +        continue;
    +      }
           ImmOpnd = CurDAG->getTargetGlobalAddress(GV, dl, MVT::i64, 0, Flags);
         } else if (ConstantPoolSDNode *CP = dyn_cast<ConstantPoolSDNode>(ImmOpnd)) {
    llvm-svn: 185380
* Fix PR16508.  Cameron Zwarich  2013-07-01  1  -0/+28
    When phis get lowered, destination copies are inserted using an iterator that is determined once for all phis in the block, which BuildMI interprets as a request to insert an instruction directly before the iterator. In the case of a cyclic phi, source copies may also be inserted directly before this iterator, which can cause source copies to be inserted before destination copies. The fix is to keep an iterator to the last phi and then advance it while lowering each phi in order to insert destination copies directly after the phis.
    llvm-svn: 185363
* Don't form PPC CTR loops for over-sized exit counts  Hal Finkel  2013-07-01  1  -0/+23
    Although you can't generate this from C on PPC64, if you have a loop using a 64-bit counter on PPC32 then you can't form a CTR-based loop for it. This had been causing the PPCCTRLoops pass to assert.
    Thanks to Joerg Sonnenberger for providing a test case!
    llvm-svn: 185361
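    A sketch of such a loop (illustrative): on PPC32 the i64 trip count cannot live in the 32-bit CTR, so the pass must skip it rather than assert.
        define void @loop64(i64 %n) {
        entry:
          br label %body
        body:
          %i = phi i64 [ 0, %entry ], [ %inc, %body ]
          call void @work()
          %inc = add i64 %i, 1
          %done = icmp eq i64 %inc, %n
          br i1 %done, label %exit, label %body
        exit:
          ret void
        }
        declare void @work()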
* PPC: Ignore spill/restore requests for VRSAVE (except on Darwin)  Hal Finkel  2013-06-28  1  -0/+15
    This fixes PR16418, which reports that a function calling __builtin_unwind_init() asserts. The cause is that this generates a spill/restore for VRSAVE, and we support that only on Darwin (because VRSAVE is only really used on Darwin).
    The test case checks only that we don't crash. We can add correctness checks once someone verifies what behavior the function is supposed to have.
    llvm-svn: 185235
* Fix CodeGen/PowerPC/stack-protector.ll on OpenBSD  Hal Finkel  2013-06-28  1  -2/+3
    On OpenBSD, the stack-smash protection transform uses "__guard_local" and "__stack_smash_handler" instead of "__stack_chk_guard" and "__stack_chk_fail". However, CodeGen/PowerPC/stack-protector.ll doesn't specify a target OS, so on OpenBSD it fails.
    Add -mtriple=ppc32-unknown-linux to make the test host-OS agnostic. While there, convert to FileCheck.
    Patch by Matthew Dempsky.
    llvm-svn: 185206
* Fix a PPC rlwimi instruction-selection bug  Hal Finkel  2013-06-28  1  -0/+44
    Under certain (evidently rare) circumstances, this code used to convert OR(a, AND(x, y)) into OR(a, x). This was incorrect. While there, I've added a comment to the code immediately above.
    llvm-svn: 185201
* [PowerPC] Disable fast-isel for existing -O0 tests for PowerPC.  Bill Schmidt  2013-06-13  10  -34/+31
    This is a preliminary patch for fast instruction selection on PowerPC. Code generation can differ between DAG isel and fast isel. Existing tests that specify -O0 were written to expect DAG isel. Make this explicit by adding -fast-isel=false to the tests.
    In some cases specifying -fast-isel=false produces different code even when there isn't a fast instruction selector specified. This is because TM.Options.EnableFastISel = 1 at -O0 whether or not a FastISel object exists. Thus disabling fast isel can actually produce less conservative code. Because of this, some of the expected code generation in the -O0 tests needs to be adjusted.
    In particular, handling of function arguments is less conservative with -fast-isel=false (see isOnlyUsedInEntryBlock() in SelectionDAGBuilder.cpp). This results in fewer stack accesses and, in some cases, reduced stack size as uselessly loaded values are no longer stored back to spill locations in the stack.
    No functional change with this patch; test case adjustments only.
    llvm-svn: 183939
* Disallow i64 div/rem in PPC32 counter loops  Hal Finkel  2013-06-07  1  -0/+93
    On PPC32, [su]div,rem on i64 types are transformed into runtime library function calls. As a result, they are not allowed in counter-based loops (the counter-loops verification pass caught this error; this change fixes PR16169).
    llvm-svn: 183581
* Change how we iterate over relocations on ELF.  Rafael Espindola  2013-05-30  5  -6/+6
    For COFF and MachO, sections semantically have relocations that apply to them. That is not the case on ELF.
    In relocatable objects (.o), a section with relocations in ELF has offsets to another section where the relocations should be applied. In dynamic objects and executables, relocations don't have an offset, they have a virtual address. The section sh_info may or may not point to another section, but that is not actually used for resolving the relocations.
    This patch exposes that in the ObjectFile API. It has the following advantages:
    * Most (all?) clients can handle this more efficiently. They will normally walk all relocations, so doing an effort to iterate in a particular order doesn't save time.
    * llvm-readobj now prints relocations in the same way the native readelf does.
    * Probably most important, relocations that don't point to any section are now visible. This is the case of relocations in the rela.dyn section. See the updated relocation-executable.test for example.
    llvm-svn: 182908
* Prefer to duplicate PPC Altivec loads when expanding unaligned loads  Hal Finkel  2013-05-26  1  -12/+13
    When expanding unaligned Altivec loads, we use the decremented offset trick to prevent page faults. Unfortunately, if we have a sequence of consecutive unaligned loads, this leads to suboptimal code generation because the 'extra' load from the first unaligned load can be combined with the base load from the second (but only if the decremented offset trick is not used for the first). Search up and down the chain, through loads and token factors, looking for consecutive loads, and if one is found, don't use the offset reduction trick. These duplicate loads are later combined to yield the desired sequence (in the future, we might want a more-powerful chain search, but that will require some changes to allow the combiner routines to access the AA object).
    This should complete the initial implementation of the optimized unaligned Altivec load expansion. There is some refactoring that should be done, but that will happen when the unaligned store expansion is added.
    llvm-svn: 182719
* PPC: Combine duplicate (offset) lvsl Altivec intrinsics  Hal Finkel  2013-05-25  1  -0/+6
    The lvsl permutation control instruction is a function only of the alignment of the pointer operand (relative to the 16-byte natural alignment of Altivec vectors). As a result, multiple lvsl intrinsics where the operands differ by a multiple of 16 can be combined.
    llvm-svn: 182708
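    A sketch of the combinable pattern (illustrative; the helper @use is hypothetical): %p and %p + 16 have the same alignment mod 16, so the two masks are identical and the second lvsl can reuse the first.
        define void @dup_lvsl(i8* %p) {
        entry:
          %m0 = call <16 x i8> @llvm.ppc.altivec.lvsl(i8* %p)
          %q = getelementptr i8* %p, i64 16
          %m1 = call <16 x i8> @llvm.ppc.altivec.lvsl(i8* %q)
          call void @use(<16 x i8> %m0, <16 x i8> %m1)
          ret void
        }
        declare <16 x i8> @llvm.ppc.altivec.lvsl(i8*)
        declare void @use(<16 x i8>, <16 x i8>)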
* PPC: Initial support for permutation-based unaligned Altivec loads  Hal Finkel  2013-05-24  1  -0/+45
    Altivec only directly supports aligned loads, but the loads have a strange property: If given an unaligned address, they truncate the address to the next lower aligned address, and load from there. This property, along with an extra load and some special-purpose permutation-control instructions that generate the appropriate permutations from the original unaligned address, allows efficient lowering of unaligned loads.
    This code uses the trick explained in the Apple Velocity Engine optimization overview document to prevent the needed extra load from possibly causing a page fault if the original address happens to be aligned.
    As noted in the FIXMEs, there are several additional optimizations that can be performed to reduce the cost of these loads even more. These will be implemented in future commits.
    llvm-svn: 182691
* Check InlineAsm clobbers in PPCCTRLoops  Hal Finkel  2013-05-18  1  -0/+38
    We don't need to reject all inline asm as using the counter register (most does not). Only those that explicitly clobber the counter register need to prevent the transformation.
    llvm-svn: 182191
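    As a sketch (hypothetical asm bodies), only the second call below should block CTR-loop formation for a loop containing it:
        define void @asm_clobbers(i32 %x) {
        entry:
          ; no CTR clobber listed: a CTR loop may still be formed around this
          call void asm sideeffect "nop", ""()
          ; explicitly clobbers CTR: must prevent the transformation
          call void asm sideeffect "mtctr $0", "r,~{ctr}"(i32 %x)
          ret void
        }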
* Fix cpu on test CodeGen/PowerPC/ctrloop-fp64.ll  Hal Finkel  2013-05-16  1  -1/+1
    We need ppc instead of generic to override native features on ppc machines.
    llvm-svn: 182049
* Create a new preheader in PPCCTRLoops to avoid counter register clobbers  Hal Finkel  2013-05-16  1  -0/+32
    Some IR-level instructions (such as FP <-> i64 conversions) are not chained w.r.t. the mtctr intrinsic and yet may become function calls that clobber the counter register. At the selection-DAG level, these might be reordered with the mtctr intrinsic, causing miscompiles.
    To avoid this situation, if an existing preheader has instructions that might use the counter register, create a new preheader for the mtctr intrinsic. This extra block will be remerged with the old preheader at the MI level, but will prevent unwanted reordering at the selection-DAG level.
    llvm-svn: 182045
* [PowerPC] Use true offset value in "memrix" machine operands  Ulrich Weigand  2013-05-16  1  -0/+14
    This is the second part of the change to always return "true" offset values from getPreIndexedAddressParts, tackling the case of "memrix" type operands. This is about instructions like LD/STD that only have a 14-bit field to encode immediate offsets, which are implicitly extended by two zero bits by the machine, so that in effect we can access 16-bit offsets as long as they are a multiple of 4.
    The PowerPC back end currently handles such instructions by carrying the 14-bit value (as it will get encoded into the actual machine instructions) in the machine operand fields for such instructions. This means that those values are in fact not the true offset, but rather the offset divided by 4 (and then truncated to an unsigned 14-bit value).
    Like in the case fixed in r182012, this makes common code operations on such offset values not work as expected. Furthermore, there doesn't really appear to be any strong reason why we should encode machine operands this way.
    This patch therefore changes the encoding of "memrix" type machine operands to simply contain the "true" offset value as a signed immediate value, while enforcing the rules that it must fit in a 16-bit signed value and must also be a multiple of 4. This change must be made simultaneously in all places that access machine operands of this type. However, just about all those changes make the code simpler; in many cases we can now just share the same code for memri and memrix operands.
    llvm-svn: 182032
* PPC32 cannot form counter loops around i64 FP conversions  Hal Finkel  2013-05-16  1  -0/+28
    On PPC32, i64 FP conversions are implemented using runtime calls (which clobber the counter register). These must be excluded.
    llvm-svn: 182023
* Use new CHECK-DAG support to stabilize CodeGen/PowerPC/recipest.ll  Bill Schmidt  2013-05-16  1  -16/+16
    While testing some experimental code to add vector-scalar registers to PowerPC, I noticed that a couple of independent instructions were flipped by the scheduler. The new CHECK-DAG support is perfect for avoiding this problem.
    llvm-svn: 182020
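    For example (hypothetical check lines, not from the test), CHECK-DAG accepts either scheduling of two independent instructions where plain CHECK would pin one order:
        ; CHECK-DAG: frsqrte
        ; CHECK-DAG: fmuls
        ; CHECK: blr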
* [PowerPC] Report true displacement value from getPreIndexedAddressParts  Ulrich Weigand  2013-05-16  1  -0/+19
    DAGCombiner::CombineToPreIndexedLoadStore calls a target routine to decompose a memory address into a base/offset pair. It expects the offset (if constant) to be the true displacement value in order to perform optional additional optimizations; in particular, to convert other uses of the original pointer into uses of the new base pointer after pre-increment.
    The PowerPC implementation of getPreIndexedAddressParts, however, simply calls SelectAddressRegImm, which returns a TargetConstant. This value is appropriate for encoding into the instruction, but it is not always usable as the true displacement value:
    - Its type is always MVT::i32, even on 64-bit, where addresses ought to be i64 ... this causes the optimization to simply always fail on 64-bit due to this line in DAGCombiner:
        // FIXME: In some cases, we can be smarter about this.
        if (Op1.getValueType() != Offset.getValueType()) {
    - Its value is truncated to an unsigned 16-bit value if negative. This causes the above optimization to generate wrong code.
    This patch fixes both problems by simply returning the true displacement value (in its original type). This doesn't affect any other user of the displacement.
    llvm-svn: 182012
* Extend test to check the .cfi instructions.  Rafael Espindola  2013-05-16  1  -1/+15
    I am about to refactor the calls to addFrameMove, and some of the ppc ones were not being tested.
    llvm-svn: 182009