path: root/llvm/lib
* [PowerPC] Fix calls to non-function objects (Hal Finkel, 2015-01-12, 1 file, -5/+22)

  Looking at r225438 inspired me to see how the PowerPC backend handled the
  situation (calling a bitcasted TLS global), and it turns out we also produced
  an error (cannot select ...). What it means to "call" something that is not a
  function is implementation and platform specific, but in the name of doing
  something (besides crashing), this makes sure we do what GCC does (treat all
  such calls as calls through a function pointer -- meaning that the pointer is
  assumed, as is the convention on PPC, to point to a function descriptor
  structure holding the actual code address along with the function's TOC
  pointer and environment pointer). As GCC does, we now do the same for calling
  regular (non-TLS) non-function globals too.

  I'm not sure whether this is the most useful way to define the behavior, but
  at least we won't be alone.

  llvm-svn: 225617
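  For illustration, a minimal IR sketch (typed-pointer syntax of the era;
  the global name is hypothetical, not from the commit's tests) of the kind
  of call this handles:

      @tls_thing = thread_local global i32 0

      define void @call_non_function() {
      entry:
        ; Calling a bitcasted TLS (or regular) global: previously "cannot
        ; select", now lowered like an indirect call through a PPC64
        ; function descriptor.
        call void bitcast (i32* @tls_thing to void ()*)()
        ret void
      }
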
* [X86][SSE] Minor fix to VPBLENDW AVX2 commutation (Simon Pilgrim, 2015-01-11, 1 file, -3/+3)

  D6015 / rL221313 enabled commutation for SSE immediate blend instructions,
  but due to a typo the AVX2 VPBLENDW ymm instructions weren't flagged as
  commutative along with the others in the tables, though they were still
  being commuted in code and tested for.

  llvm-svn: 225612
* Revert most of r225597 (David Majnemer, 2015-01-11, 4 files, -71/+44)

  We can't rely on a DataLayout-enlightened constant folder.

  llvm-svn: 225599
* X86: Properly decode shuffle masks when the constant pool type is weird (David Majnemer, 2015-01-11, 4 files, -66/+80)

  It's possible for the constant pool entry for the shuffle mask to come from a
  completely different operation. This occurs when Constants have the same bit
  pattern but have different types.

  Make DecodePSHUFBMask tolerant of types which, after a bitcast, are
  appropriately sized vector types.

  This fixes PR22188.

  llvm-svn: 225597
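  A hedged sketch of the aliasing in question: the identity byte-shuffle mask
  expressed as a <2 x i64> constant with the same bit pattern (the values and
  function name are illustrative, not taken from the commit):

      declare <16 x i8> @llvm.x86.ssse3.pshuf.b.128(<16 x i8>, <16 x i8>)

      define <16 x i8> @reuse_mask(<16 x i8> %a) {
        ; Bytes 0..15 (little-endian) stored as two i64s; the decoder must
        ; see through the bitcast to recover the per-byte shuffle indices.
        %r = call <16 x i8> @llvm.x86.ssse3.pshuf.b.128(<16 x i8> %a,
               <16 x i8> bitcast (<2 x i64> <i64 506097522914230528,
                                             i64 1084818905618843912> to <16 x i8>))
        ret <16 x i8> %r
      }
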
* X86: teach X86TargetLowering about L,M,O constraints (Saleem Abdulrasool, 2015-01-11, 1 file, -0/+25)

  Teach the ISelLowering for X86 about the L,M,O target-specific constraints.
  Although, for the moment, clang performs constraint validation and prevents
  passing along inline asm which may have immediate constant constraints
  violated, the backend should be able to cope with the invalid inline asm a
  bit better.

  llvm-svn: 225596
* ARM: add support for segment base relocations (SBREL) (Saleem Abdulrasool, 2015-01-11, 3 files, -0/+9)

  This adds support for parsing and emitting the SBREL relocation variant for
  the ARM target. Handling this relocation variant is necessary for supporting
  the full ARM ELF specification. Addresses PR22128.

  llvm-svn: 225595
* Fix PR22179 (Sanjoy Das, 2015-01-10, 1 file, -42/+33)

  We were incorrectly inferring nsw for certain SCEVs. We can be more
  aggressive here (see Richard Smith's comment on
  http://llvm.org/bugs/show_bug.cgi?id=22179) but this change just focuses on
  correctness.

  Differential Revision: http://reviews.llvm.org/D6914

  llvm-svn: 225591
* Revert r225500, it leads to infinite loops (Joerg Sonnenberger, 2015-01-10, 1 file, -9/+15)

  llvm-svn: 225590
* [X86][SSE] Improved (v)insertps shuffle matching (Simon Pilgrim, 2015-01-10, 1 file, -42/+82)

  In the current code we only attempt to match against insertps if we have
  exactly one element from the second input vector, irrespective of how much
  of the shuffle result is zeroable.

  This patch checks to see if there is a single non-zeroable element from
  either input that requires insertion. It also supports matching of cases
  where only one of the inputs needs to be referenced.

  We also split insertps shuffle matching off into a new
  lowerVectorShuffleAsInsertPS function.

  Differential Revision: http://reviews.llvm.org/D6879

  llvm-svn: 225589
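  For illustration, a shuffle of the shape this matching targets; a
  hypothetical example, not taken from the commit's tests:

      define <4 x float> @single_insert(<4 x float> %a, <4 x float> %b) {
        ; Exactly one lane (index 4 = element 0 of %b) comes from the second
        ; input; a single insertps can place it into %a.
        %s = shufflevector <4 x float> %a, <4 x float> %b,
                           <4 x i32> <i32 0, i32 1, i32 4, i32 3>
        ret <4 x float> %s
      }
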
* [PowerPC] Mark zext of a small scalar load as free (Hal Finkel, 2015-01-10, 2 files, -0/+22)

  This initial implementation of PPCTargetLowering::isZExtFree marks as free
  zexts of small scalar loads (that are not sign-extending). This callback is
  used by SelectionDAGBuilder's RegsForValue::getCopyToRegs, and thus to
  determine whether a zext or an anyext is used to lower illegally-typed PHIs.
  Because later truncates of zero-extended values are nops, this allows for
  the elimination of later unnecessary truncations.

  Fixes the initial complaint associated with PR22120.

  llvm-svn: 225584
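  A minimal IR sketch of the pattern now considered free (typed-pointer
  syntax of the era; the names are illustrative):

      define i64 @zext_of_load(i32* %p) {
        ; On PPC the i32 load already zeroes the upper bits of the 64-bit
        ; register, so the zext costs nothing and later truncs back to i32
        ; become nops.
        %v = load i32* %p
        %z = zext i32 %v to i64
        ret i64 %z
      }
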
* Remove some whitespace (Justin Hibbits, 2015-01-10, 1 file, -1/+1)

  llvm-svn: 225583
* Fully fix Bug #22115 (Justin Hibbits, 2015-01-10, 2 files, -6/+53)

  Summary: In the previous commit, the register was saved, but space was not
  allocated. This resulted in the parameter save area potentially clobbering
  r30, leading to nasty results.

  Test Plan: Tests updated

  Reviewers: hfinkel

  Subscribers: llvm-commits

  Differential Revision: http://reviews.llvm.org/D6906

  llvm-svn: 225573
* Fix undefined behavior (shift of negative value) in RuntimeDyldMachOAArch64::encodeAddend (Alexey Samsonov, 2015-01-10, 1 file, -2/+2)

  Test Plan: regression test suite with/without UBSan.

  Reviewers: lhames, ributzka

  Subscribers: aemerson, llvm-commits

  Differential Revision: http://reviews.llvm.org/D6908

  llvm-svn: 225568
* [PowerPC] Readjust the loop unrolling threshold (Hal Finkel, 2015-01-10, 2 files, -4/+4)

  Now that the way the partial unrolling threshold for small loops is used to
  compute the unrolling factor has been corrected, a slightly smaller
  threshold is preferable. This is expected; other targets may need to re-tune
  as well.

  llvm-svn: 225566
* [LoopUnroll] Fix the partial unrolling threshold for small loop sizes (Hal Finkel, 2015-01-10, 1 file, -5/+12)

  When we compute the size of a loop, we include the branch on the backedge
  and the comparison feeding the conditional branch. Under normal
  circumstances, these don't get replicated with the rest of the loop body
  when we unroll.

  This led to the somewhat surprising behavior that really small loops would
  not get unrolled enough -- they could be unrolled more and the resulting
  loop would be below the threshold, because we were assuming they'd take
  (LoopSize * UnrollingFactor) instructions after unrolling, instead of
  (((LoopSize-2) * UnrollingFactor)+2) instructions. This fixes that
  computation.

  llvm-svn: 225565
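  A worked instance of the corrected estimate (the numbers are illustrative):
  consider a loop of size 4 being unrolled by a factor of 8.

      old estimate:  LoopSize * UnrollingFactor           = 4 * 8         = 32
      corrected:     ((LoopSize - 2) * UnrollingFactor)+2 = ((4-2) * 8)+2 = 18

  With a threshold anywhere between 18 and 32, the unroll that the old
  formula rejected is now accepted.
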
* Use the DiagnosticHandler to print diagnostics when reading bitcode (Rafael Espindola, 2015-01-10, 4 files, -309/+355)

  The bitcode reading interface used std::error_code to report an error to the
  callers, and it was the caller's job to print diagnostics. This is not ideal
  for error handling or diagnostic reporting:

  * For error handling, all that the callers care about is 3 possibilities:
    * It worked
    * The bitcode file is corrupted/invalid.
    * The file is not bitcode at all.

  * For diagnostics, it is user friendly to include far more information
    about the invalid case so the user can find out what is wrong with the
    bitcode file. This comes up, for example, when a developer introduces a
    bug while extending the format.

  The compromise we had was to have a lot of error codes.

  With this patch we use the DiagnosticHandler to communicate with the human
  and std::error_code to communicate with the caller. This allows us to have
  far fewer error codes and adds the infrastructure to print better
  diagnostics. This is so because the diagnostics are printed when the issue
  is found. The code that detected the problem is alive on the stack and can
  pass down as much context as needed. As an example the patch updates
  test/Bitcode/invalid.ll.

  Using a DiagnosticHandler also moves the fatal/non-fatal error decision to
  the caller. A simple one like llvm-dis can just use fatal errors. The gold
  plugin needs a bit more complex treatment because of being passed
  non-bitcode files. A hypothetical interactive tool would make all bitcode
  errors non-fatal.

  llvm-svn: 225562
* Fix the JIT event listeners and replace the associated tests (Andrew Kaylor, 2015-01-09, 1 file, -67/+68)

  The changes to EventListenerCommon.h were contributed by Arch Robison.

  This fixes bug 22095.

  http://reviews.llvm.org/D6905

  llvm-svn: 225554
* Update comment (Michael Zolotukhin, 2015-01-09, 1 file, -2/+2)

  llvm-svn: 225553
* SimplifyCFG: check uses of constant-foldable instrs in switch destinations (PR20210) (Hans Wennborg, 2015-01-09, 1 file, -6/+15)

  The previous code assumed that such instructions could not have any uses
  outside CaseDest, with the motivation that the instruction could not
  dominate CommonDest because CommonDest has phi nodes in it. That simply
  isn't true; e.g., CommonDest could have an edge back to itself.

  llvm-svn: 225552
* [X86][SSE] Avoid vector byte shuffles with zero by using pshufb to create zeros (Simon Pilgrim, 2015-01-09, 1 file, -12/+30)

  pshufb can shuffle in zero bytes as well as bytes from a source vector -- we
  can use this to avoid having to shuffle 2 vectors and ORing the result when
  the used inputs from a vector are all zeroable.

  Differential Revision: http://reviews.llvm.org/D6878

  llvm-svn: 225551
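  The underlying instruction behavior: pshufb writes a zero to any destination
  byte whose control byte has its high bit set. A hypothetical sketch (mask
  and names chosen for illustration):

      declare <16 x i8> @llvm.x86.ssse3.pshuf.b.128(<16 x i8>, <16 x i8>)

      define <16 x i8> @shuffle_with_zeros(<16 x i8> %a) {
        ; Control bytes with bit 7 set (-128 here) produce 0x00 in that lane,
        ; so zeroable lanes need no second source vector and no OR.
        %r = call <16 x i8> @llvm.x86.ssse3.pshuf.b.128(<16 x i8> %a,
               <16 x i8> <i8 -128, i8 -128, i8 2, i8 3, i8 4, i8 5, i8 6,
                          i8 7, i8 8, i8 9, i8 10, i8 11, i8 12, i8 13,
                          i8 -128, i8 -128>)
        ret <16 x i8> %r
      }
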
* Remove duplicating code. NFC (Michael Zolotukhin, 2015-01-09, 1 file, -2/+2)

  The removed condition is checked in the previous loop.

  llvm-svn: 225542
* Re-reapply r221924: "[GVN] Perform Scalar PRE on gep indices that feed loads before doing Load PRE" (Tim Northover, 2015-01-09, 1 file, -161/+175)

  It's not really expected to stick around; last time it provoked a weird LTO
  build failure that I can't reproduce now, and the bot logs are long gone.
  I'll re-revert it if the failures recur.

  Original description: Perform Scalar PRE on gep indices that feed loads
  before doing Load PRE.

  llvm-svn: 225536
* Recommit r224935 with a fix for the ObjC++/AArch64 bug that that revision introduced (Lang Hames, 2015-01-09, 10 files, -87/+76)

  A test case for the bug was already committed in r225385.

  Patch by Rafael Espindola.

  llvm-svn: 225534
* Revert "Bitcode: Move the DEBUG_LOC record to DEBUG_LOC_OLD"Duncan P. N. Exon Smith2015-01-092-2/+2
| | | | | | | | | | | | | | | This reverts commit r225498 (but leaves r225499, which was a worthy cleanup). My plan was to change `DEBUG_LOC` to store the `MDNode` directly rather than its operands (patch was to go out this morning), but on reflection it's not clear that it's strictly better. (I had missed that the current code is unlikely to emit the `MDNode` at all.) Conflicts: lib/Bitcode/Reader/BitcodeReader.cpp (due to r225499) llvm-svn: 225531
* [mips] Add support for accessing $gp as a named register (Daniel Sanders, 2015-01-09, 2 files, -0/+24)

  Summary: Mips Linux uses $gp to hold a pointer to the thread info structure
  and accesses it with a named register. This patch makes that work in LLVM.

  The N32 ABI doesn't quite work yet since the frontend generates incorrect
  IR for this case. It neglects to truncate the 64-bit GPR to a 32-bit value
  before converting to a pointer. Given correct IR (as in the testcase in
  this patch), it works correctly.

  Reviewers: sstankovic, vmedic, atanasyan

  Reviewed By: atanasyan

  Subscribers: llvm-commits

  Differential Revision: http://reviews.llvm.org/D6893

  llvm-svn: 225529
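  A minimal sketch of reading a named register via the generic intrinsic;
  the register-name string "$28" is an assumption here (the commit's testcase
  is authoritative for the accepted spelling):

      declare i32 @llvm.read_register.i32(metadata)

      define i32 @read_gp() {
        ; Reads the named register; on Mips Linux $gp (GPR 28) holds the
        ; thread info pointer.
        %gp = call i32 @llvm.read_register.i32(metadata !0)
        ret i32 %gp
      }

      !0 = !{!"$28"}
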
* remove names from comments; NFC (Sanjay Patel, 2015-01-09, 1 file, -40/+35)

  llvm-svn: 225526
* fix typos; NFC (Sanjay Patel, 2015-01-09, 1 file, -3/+3)

  llvm-svn: 225525
* fix typo; NFC (Sanjay Patel, 2015-01-09, 1 file, -1/+1)

  llvm-svn: 225524
* more efficient use of a dyn_cast; no functional change intended (Sanjay Patel, 2015-01-09, 1 file, -3/+3)

  llvm-svn: 225523
* [PowerPC] Enable late partial unrolling on the POWER7 (Hal Finkel, 2015-01-09, 3 files, -0/+8)

  The P7 benefits from not having really small loops, so that we either have
  multiple dispatch groups in the loop and/or the ability to form more-full
  dispatch groups during scheduling. Setting the partial unrolling threshold
  to 44 seems good, empirically, for the P7. Compared to using no late
  partial unrolling, this yields the following test-suite speedups:

  SingleSource/Benchmarks/Adobe-C++/simple_types_constant_folding
      -66.3253% +/- 24.1975%
  SingleSource/Benchmarks/Misc-C++/oopack_v1p8
      -44.0169% +/- 29.4881%
  SingleSource/Benchmarks/Misc/pi
      -27.8351% +/- 12.2712%
  SingleSource/Benchmarks/Stanford/Bubblesort
      -30.9898% +/- 22.4647%

  I've speculatively added a similar setting for the P8. Also, I've noticed
  that the unroller does not quite calculate the unrolling factor correctly
  for really tiny loops because it neglects to account for the fact that not
  every loop body replicant contains an ending branch and counter increment.
  I'll fix that later.

  llvm-svn: 225522
* [mips] Add comment which explains why we need to change the assembler options before and after inline asm blocks. NFC (Toma Tabacu, 2015-01-09, 1 file, -0/+6)

  llvm-svn: 225521
* Assumption that "VectorizedValue" will always be an Instruction is not correct (Suyog Sarda, 2015-01-09, 1 file, -2/+1)

  It can be a constant or a vector argument. For example:

      define i32 @hadd(<4 x i32> %a) #0 {
      entry:
        %vecext = extractelement <4 x i32> %a, i32 0
        %vecext1 = extractelement <4 x i32> %a, i32 1
        %add = add i32 %vecext, %vecext1
        %vecext2 = extractelement <4 x i32> %a, i32 2
        %add3 = add i32 %add, %vecext2
        %vecext4 = extractelement <4 x i32> %a, i32 3
        %add5 = add i32 %add3, %vecext4
        ret i32 %add5
      }

  llvm-svn: 225517
* ARM: add support for R_ARM_ABS16 (Saleem Abdulrasool, 2015-01-09, 1 file, -0/+8)

  Add support for R_ARM_ABS16 relocation mapping. Addresses PR22156.

  llvm-svn: 225510
* ARM: add support for R_ARM_ABS8 relocations (Saleem Abdulrasool, 2015-01-09, 1 file, -0/+8)

  Add support for R_ARM_ABS8 relocation. Addresses PR22126.

  llvm-svn: 225507
* RegisterCoalescer: Fix removeCopyByCommutingDef with subreg liveness (Matthias Braun, 2015-01-09, 1 file, -1/+3)

  The code that eliminated additional coalescable copies in
  removeCopyByCommutingDef() used MergeValueNumberInto(), which internally may
  merge A into B or B into A. In this case A and B had different Def points,
  so we have to reset ValNo.Def to the intended one after merging.

  llvm-svn: 225503
* RegisterCoalescer: Some cleanup in removeCopyByCommutingDef(), NFC (Matthias Braun, 2015-01-09, 1 file, -15/+19)

  llvm-svn: 225502
* RegisterCoalescer: No need to set kill flags, they are recomputed later anyway (Matthias Braun, 2015-01-09, 1 file, -2/+0)

  llvm-svn: 225501
* RegisterCoalescer: Turn some impossible conditions into asserts (Matthias Braun, 2015-01-09, 1 file, -15/+9)

  llvm-svn: 225500
* Bitcode: Share logic for last instruction, NFC (Duncan P. N. Exon Smith, 2015-01-09, 1 file, -14/+10)

  Share logic for getting the last instruction emitted.

  llvm-svn: 225499
* Bitcode: Move the DEBUG_LOC record to DEBUG_LOC_OLD (Duncan P. N. Exon Smith, 2015-01-09, 2 files, -2/+2)

  Prepare to simplify the `DebugLoc` record.

  llvm-svn: 225498
* [PowerPC] Add a flag for experimenting with subreg liveness tracking (Hal Finkel, 2015-01-09, 2 files, -0/+10)

  This cannot yet be enabled by default; it causes ~50 miscompiles in the
  test suite.

  llvm-svn: 225497
* [PowerPC] Fold [sz]ext with fp_to_int lowering where possible (Hal Finkel, 2015-01-09, 2 files, -4/+61)

  On modern cores with lfiw[az]x, we can fold a sign or zero extension from
  i32 to i64 into the load necessary for an i64 -> fp conversion.

  llvm-svn: 225493
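  A hedged IR sketch of the foldable pattern (typed-pointer syntax of the
  era; the names are illustrative, not from the commit's tests):

      define double @int_load_to_fp(i32* %p) {
        ; On cores with lfiwax, the sext folds into the load; only the
        ; integer-to-FP conversion itself remains.
        %i = load i32* %p
        %e = sext i32 %i to i64
        %f = sitofp i64 %e to double
        ret double %f
      }
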
* [DAGCombine] Remainder of fix to r225380 (More FMA folding opportunities) (Hal Finkel, 2015-01-09, 1 file, -10/+24)

  As pointed out by Aditya (and Owen), when we elide an FP extend to form an
  FMA, we need to extend the incoming operands so that the resulting node
  will really be legal. This is currently enabled only for PowerPC, and it
  happens to work there regardless, but this should fix the functionality for
  everyone else should anyone else wish to use it.

  llvm-svn: 225492
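  Roughly, the shape of IR involved in this combine; a hedged sketch
  (fast-math flags are assumed to permit fusion, and the names are
  illustrative):

      define double @fma_from_ext(float %x, float %y, double %z) {
        ; fadd (fpext (fmul x, y)), z may become fma (fpext x), (fpext y), z;
        ; the fix ensures the operands are extended, not just the fmul result.
        %m = fmul fast float %x, %y
        %e = fpext float %m to double
        %r = fadd fast double %e, %z
        ret double %r
      }
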
* [x86] Add a flag to control the vector shuffle legality predicates that complement the new vector shuffle lowering code path (Chandler Carruth, 2015-01-09, 1 file, -0/+23)

  This flag, naturally, is *off* because we've not tested or evaluated the
  results of this at all. However, the flag will make it much easier to
  evaluate whether we can be this aggressive and whether there are missing
  vector shuffle lowering optimizations.

  llvm-svn: 225491
* Cleanup ValueHandle to no longer keep a PointerIntPair for the Value* (Chandler Carruth, 2015-01-09, 1 file, -12/+12)

  This was used previously for metadata but is no longer needed there. Not
  doing this simplifies ValueHandle and will make it easier to fix things
  like AssertingVH's DenseMapInfo.

  llvm-svn: 225487
* Partial fix to r225380 (More FMA folding opportunities) (Hal Finkel, 2015-01-09, 1 file, -96/+95)

  As pointed out by Aditya (and Owen), there are two things wrong with this
  code. First, it adds patterns which elide FP extends when forming FMAs, and
  that might not be profitable on all targets (it belongs behind the
  pre-existing aggressive-FMA-formation flag). This is fixed by this change.

  Second, the resulting nodes might have operands of different types (the
  extensions need to be re-added). That will be fixed in the follow-up
  commit.

  llvm-svn: 225485
* [REFACTOR] Push logic from MemDepPrinter into getNonLocalPointerDependency (Philip Reames, 2015-01-09, 2 files, -35/+24)

  Previously, MemDepPrinter handled volatile and unordered accesses without
  involving MemoryDependencyAnalysis. By making a slight tweak to the
  documented interface -- which is respected by both callers -- we can move
  this responsibility to MDA for the benefit of any future callers. This is
  basically just cleanup.

  In the future, we may decide to extend MDA's non-local dependency analysis
  to return useful results for ordered or volatile loads. I believe (but have
  not really checked in detail) that local dependency analysis does get
  useful results for ordered, but not volatile, loads.

  llvm-svn: 225483
* [Refactor] Have getNonLocalPointerDependency take the query instruction (Philip Reames, 2015-01-09, 3 files, -11/+50)

  Previously, MemoryDependenceAnalysis::getNonLocalPointerDependency was
  taking a list of properties about the instruction being queried. Since I'm
  about to need one more property to be passed down through the
  infrastructure -- I need to know a query instruction is non-volatile in an
  inner helper -- fix the interface once and for all.

  I also added some assertions and behaviour clarifications around volatile
  and ordered field accesses. At the moment, this is mostly to document
  expected behaviour. The only non-standard instructions which can currently
  reach this are atomic, but unordered, loads and stores. Neither ordered nor
  volatile accesses can reach here.

  The call in GVN is protected by an isSimple check when it first considers
  the load. The calls in MemDepPrinter are protected by isUnordered checks.
  Both utilities also check isVolatile for loads and stores.

  llvm-svn: 225481
* Utils: Keep distinct MDNodes distinct in MapMetadata() (Duncan P. N. Exon Smith, 2015-01-08, 1 file, -0/+14)

  Create new copies of distinct `MDNode`s instead of following the uniquing
  `MDNode` logic.

  Just like self-references (or other cycles), `MapMetadata()` creates a new
  node. In practice most calls use `RF_NoModuleLevelChanges`, in which case
  nothing is duplicated anyway.

  Part of PR22111.

  llvm-svn: 225476
* IR: Add 'distinct' MDNodes to bitcode and assembly (Duncan P. N. Exon Smith, 2015-01-08, 7 files, -6/+26)

  Propagate whether `MDNode`s are 'distinct' through the other types of IR
  (assembly and bitcode). This adds the `distinct` keyword to assembly.

  Currently, no one actually calls `MDNode::getDistinct()`, so these nodes
  only get created for:

  * self-references, which are never uniqued, and
  * nodes whose operands are replaced that hit a uniquing collision.

  The concept of distinct nodes is still not quite first-class, since
  distinct-ness doesn't yet survive across `MapMetadata()`.

  Part of PR22111.

  llvm-svn: 225474
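  In assembly, the new keyword separates nodes that would otherwise be
  uniqued together; a small illustrative example (node contents are
  hypothetical):

      !named = !{!0, !1, !2}

      !0 = !{!"payload"}           ; uniqued: any other !{!"payload"} is this node
      !1 = distinct !{!"payload"}  ; distinct: separate despite equal operands
      !2 = distinct !{!"payload"}  ; another separate node
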