path: root/llvm/test/Transforms
Commit message | Author | Date | Files | Lines
* [InstCombine] change tests to show a more obvious transform possibility (Sanjay Patel, 2016-06-02, 1 file, -63/+62)
| | | | | | | | | | | | The original tests were intended to show a missing transform that would be solved by D20774: http://reviews.llvm.org/D20774 But it's not clear that the transform for the simpler tests is a win for all targets. Make the tests show a larger pattern that should be a win regardless of the cost of bitcast instructions. llvm-svn: 271603
* transform obscured FP sign bit ops into a fabs/fneg using TLI hook (Sanjay Patel, 2016-06-02, 2 files, -67/+0)
| | | | | | | | | | | | | | | | | | | This is effectively a revert of: http://reviews.llvm.org/rL249702 - [InstCombine] transform masking off of an FP sign bit into a fabs() intrinsic call (PR24886) and: http://reviews.llvm.org/rL249701 - [ValueTracking] teach computeKnownBits that a fabs() clears sign bits and a reimplementation as a DAG combine for targets that have IEEE754-compliant fabs/fneg instructions. This is intended to resolve the objections raised on the dev list: http://lists.llvm.org/pipermail/llvm-dev/2016-April/098154.html and: https://llvm.org/bugs/show_bug.cgi?id=24886#c4 In the interest of patch minimalism, I've only partly enabled AArch64. PowerPC, MIPS, x86 and others can enable later. Differential Revision: http://reviews.llvm.org/D19391 llvm-svn: 271573
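  As an illustration (a hypothetical example, not taken from the commit), this is the kind of sign-bit-masking pattern the new DAG combine is meant to recognize and lower to fabs on targets with IEEE754-compliant instructions:

  ```
  ; Hypothetical IR: clearing the sign bit of a double through an integer bitcast.
  ; On targets with IEEE754-safe fabs, the backend can now lower this to fabs(%x).
  define double @clear_sign_bit(double %x) {
    %i = bitcast double %x to i64
    %m = and i64 %i, 9223372036854775807   ; 0x7FFFFFFFFFFFFFFF keeps everything but the sign bit
    %f = bitcast i64 %m to double
    ret double %f
  }
  ```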
* [profile] value profiling bug fix -- missing icall targets in profile-use (Xinliang David Li, 2016-06-02, 1 file, -0/+11)
| | | | | | | | | | | | | | | | Inline virtual functions have linkonce_odr linkage (emitted in comdat on supporting targets). If the vtable for the class is not emitted in the defining module, the function won't be address-taken, so its address is not recorded. At the mercy of the linker, if the per-func prf_data from this module (in comdat) is picked at link time, we will lose the mapping from the function address to its hash value. This leads to missing icall promotion. The second test case (currently disabled) in compiler-rt (r271528), instrprof-icall-prom.test, demonstrates the bug. The first profile-use subtest is fine due to a linker order difference. With this change, no missing icall targets are found in an instrumented clang's raw profile. llvm-svn: 271532
* make icall pass name consistent /NFC (Xinliang David Li, 2016-06-02, 2 files, -4/+4)
| | | | llvm-svn: 271467
* [MemorySSA] Port to new pass manager (Geoff Berry, 2016-06-01, 16 files, -16/+32)
| | | | | | | | | | | | | | | | | Add support for the new pass manager to MemorySSA pass. Change MemorySSA to be computed eagerly upon construction. Change MemorySSAWalker to be owned by the MemorySSA object that creates it. Reviewers: dberlin, george.burgess.iv Subscribers: mcrosier, llvm-commits Differential Revision: http://reviews.llvm.org/D19664 llvm-svn: 271432
* Revert "Claim NoAlias if two GEPs index different fields of the same struct"Daniel Berlin2016-06-011-77/+81
| | | | | | This reverts commit 2d5d6493f43eb68493a3852b8c226ac9fafdc7eb. llvm-svn: 271422
* Claim NoAlias if two GEPs index different fields of the same struct (Daniel Berlin, 2016-06-01, 1 file, -81/+77)
| | | | | | | | | | | | Patch by Taewook Oh. Summary: Patch for Bug 27478. Make BasicAliasAnalysis claim NoAlias if two GEPs index different fields of the same structure. Reviewers: hfinkel, dberlin Subscribers: dberlin, mcrosier, llvm-commits Differential Revision: http://reviews.llvm.org/D20665 llvm-svn: 271415
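  A minimal hypothetical example of the pattern in question (note that this change was reverted in the entry above): the two GEPs address distinct fields of the same struct, so BasicAA could answer NoAlias for the stores:

  ```
  %struct.Pair = type { i32, i32 }

  ; Hypothetical test: %p and %q index different fields of %s, so they cannot alias.
  define void @store_fields(%struct.Pair* %s) {
    %p = getelementptr inbounds %struct.Pair, %struct.Pair* %s, i64 0, i32 0
    %q = getelementptr inbounds %struct.Pair, %struct.Pair* %s, i64 0, i32 1
    store i32 1, i32* %p
    store i32 2, i32* %q
    ret void
  }
  ```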
* [LV] For some IVs, use vector phis instead of widening in the loop body (Michael Kuperstein, 2016-06-01, 8 files, -20/+85)
| | | | | | | | | | | | | Previously, whenever we needed a vector IV, we would create it on the fly, by splatting the scalar IV and adding a step vector. Instead, we can create a real vector IV. This tends to save a couple of instructions per iteration. This only changes the behavior for the most basic case - integer primary IVs with a constant step. Differential Revision: http://reviews.llvm.org/D20315 llvm-svn: 271410
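  A rough sketch (hand-written, not generated by the vectorizer) of what the new output looks like for a primary integer IV with a constant step: the vector IV is a real phi stepped by a constant vector, instead of being re-splatted from the scalar IV each iteration:

  ```
  ; Hypothetical vectorized loop body with a true <4 x i32> induction phi.
  define void @vec_iv(i32* %a) {
  entry:
    br label %vector.body

  vector.body:
    %index = phi i64 [ 0, %entry ], [ %index.next, %vector.body ]
    %vec.ind = phi <4 x i32> [ <i32 0, i32 1, i32 2, i32 3>, %entry ], [ %vec.ind.next, %vector.body ]
    %gep = getelementptr inbounds i32, i32* %a, i64 %index
    %vptr = bitcast i32* %gep to <4 x i32>*
    store <4 x i32> %vec.ind, <4 x i32>* %vptr, align 4
    %vec.ind.next = add <4 x i32> %vec.ind, <i32 4, i32 4, i32 4, i32 4>
    %index.next = add i64 %index, 4
    %done = icmp eq i64 %index.next, 1024
    br i1 %done, label %exit, label %vector.body

  exit:
    ret void
  }
  ```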
* [SLP] Pass in correct alignment when querying memory access cost (Guozhi Wei, 2016-05-31, 2 files, -0/+31)
| | | | | | | | This patch fixes bug https://llvm.org/bugs/show_bug.cgi?id=27897. When querying memory access cost, SLP currently always passes in an alignment value of 1 (unaligned), so it gets a very high cost for scalar memory access and wrongly vectorizes memory loads in the test case. It can be fixed by simply giving the correct alignment. llvm-svn: 271333
* Fix a crash in MergeFunctions related to ordering of weak/strong functions (Erik Eckstein, 2016-05-31, 1 file, -0/+47)
| | | | | | | | | The assumption, made in insert(), that weak functions are always inserted after strong functions is only true in the first round of adding functions. In subsequent rounds this is no longer guaranteed, because we might remove a strong function from the tree (because it's modified) and add it later, when an equivalent weak function already exists in the tree. This change removes the assert in insert() and explicitly enforces a weak->strong order. It also removes the need for two separate loops in runOnModule(). llvm-svn: 271299
* [IndVars] Eliminate op.with.overflow when possible (re-apply) (Sanjoy Das, 2016-05-29, 1 file, -0/+137)
| | | | | | | | | | | | | | | | | | | Summary: If we can prove that an op.with.overflow intrinsic does not overflow, we can get rid of the intrinsic, and replace it with non-wrapping arithmetic. This was first checked in at r265913 but reverted in r265950 because it exposed some issues around how SCEV handled post-inc add recurrences. Those issues have now been fixed. Reviewers: atrick, regehr Subscribers: sanjoy, mcrosier, llvm-commits Differential Revision: http://reviews.llvm.org/D18685 llvm-svn: 271153
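  A hypothetical sketch of the kind of input this handles: the exit test bounds the IV, so SCEV can prove the increment never overflows and the intrinsic can be replaced with ordinary non-wrapping arithmetic:

  ```
  declare { i32, i1 } @llvm.sadd.with.overflow.i32(i32, i32)

  ; Hypothetical test: %iv stays in [0, 100), so %iv + 1 provably does not overflow
  ; and the with.overflow call can become a plain "add nsw" with a known-false flag.
  define void @bounded_iv() {
  entry:
    br label %loop

  loop:
    %iv = phi i32 [ 0, %entry ], [ %iv.next, %loop ]
    %res = call { i32, i1 } @llvm.sadd.with.overflow.i32(i32 %iv, i32 1)
    %iv.next = extractvalue { i32, i1 } %res, 0
    %ovf = extractvalue { i32, i1 } %res, 1   ; would feed an overflow check that is now provably false
    %keep.going = icmp slt i32 %iv.next, 100
    br i1 %keep.going, label %loop, label %exit

  exit:
    ret void
  }
  ```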
* [SCEV] Don't always add no-wrap flags to post-inc add recs (Sanjoy Das, 2016-05-29, 1 file, -1/+1)
  Fixes PR27315.

  The post-inc version of an add recurrence needs to "follow the same rules" as a normal add or subtract expression. Otherwise we miscompile programs like

  ```
  int main() {
    int a = 0;
    unsigned a_u = 0;
    volatile long last_value;
    do {
      a_u += 3;
      last_value = (long) ((int) a_u);
      if (will_add_overflow(a, 3)) {
        // Leave, and don't actually do the increment, so no UB.
        printf("last_value = %ld\n", last_value);
        exit(0);
      }
      a += 3;
    } while (a != 46);
    return 0;
  }
  ```

  This patch changes SCEV to put no-wrap flags on post-inc add recurrences only when the poison from a potential overflow will go on to cause undefined behavior. To avoid regressing performance too much, I've assumed that infinite loops without side effects are undefined behavior, to prove poison<->UB equivalence in more cases. This isn't ideal, but it is not new to LLVM as a whole, and far better than the situation I'm trying to fix.

  llvm-svn: 271151
* [X86][SSE] (Reapplied) Replace (V)PMOVSX and (V)PMOVZX integer extension intrinsics with generic IR (llvm) (Simon Pilgrim, 2016-05-28, 2 files, -207/+0)
| | | | | | | | | | | This patch removes the (V)PMOVSX and (V)PMOVZX sign/zero extension intrinsics and auto-upgrades them to SEXT/ZEXT calls instead. We already did this for SSE41 PMOVSX some time ago, so much of that implementation can be reused. Reapplied now that the companion patch (D20684), which removes/auto-upgrades the clang intrinsics, has been committed. Differential Revision: http://reviews.llvm.org/D20686 llvm-svn: 271131
* [InstCombine] add tests to show bitcast interference (Sanjay Patel, 2016-05-28, 1 file, -0/+90)
| | | | llvm-svn: 271125
* regenerate checks (Sanjay Patel, 2016-05-28, 1 file, -42/+52)
| | | | llvm-svn: 271117
* join RUN lines; NFC (Sanjay Patel, 2016-05-28, 1 file, -2/+1)
| | | | llvm-svn: 271115
* Bring back r271090 in a way that doesn't depend on r271089. (Sean Silva, 2016-05-28, 1 file, -0/+3)
| | | | llvm-svn: 271092
* Revert r271089 and r271090. (Sean Silva, 2016-05-28, 1 file, -3/+0)
| | | | | | | | | | | | | | It was triggering an msan bot. Revert "[IRPGO] Set the function entry count metadata." This reverts commit r271090. Revert "[IRPGO] Centralize the function attribute inliner hint logic. NFC." This reverts commit r271089. llvm-svn: 271091
* [IRPGO] Set the function entry count metadata. (Sean Silva, 2016-05-28, 1 file, -0/+3)
| | | | llvm-svn: 271090
* [PM] Port the Sample FDO to new PM (part-2) (Xinliang David Li, 2016-05-27, 17 files, -0/+27)
| | | | llvm-svn: 271072
* The patch refactors the unroll pass. (Evgeny Stupachenko, 2016-05-27, 1 file, -1/+1)
| | | | | | | | | | | | | | | | Summary: Unroll factor (Count) calculations are moved to a new function. Early exits on pragma- and "-unroll-count"-defined factors are added. A new type of unrolling, "Force", is introduced (previously used implicitly). A new unroll preference, "AllowRemainder", is introduced and set to "true" by default (it should be set to false for architectures that suffer from it). Reviewers: hfinkel, mzolotukhin, zzheng Differential Revision: http://reviews.llvm.org/D19553 From: Evgeny Stupachenko <evstupac@gmail.com> llvm-svn: 271071
* [GVN] Preserve !range metadata when PRE'ing loads (Sanjoy Das, 2016-05-27, 1 file, -0/+24)
| | | | | | | | | | Reviewers: dberlin, reames, george.burgess.iv Subscribers: mcrosier, llvm-commits Differential Revision: http://reviews.llvm.org/D20743 llvm-svn: 271034
* Move test to X86 directory: I think it depends on X86 TTI. (Tim Northover, 2016-05-27, 1 file, -0/+0)
| | | | llvm-svn: 271019
* Vectorizer: track non-fast FP instructions through phis when finding reductions. (Tim Northover, 2016-05-27, 1 file, -0/+75)
| | | | | | | | When we traced through a phi node looking for floating-point reductions, we forgot whether we'd ever seen an instruction without fast-math flags (that would block vectorization). This propagates it through to the end. llvm-svn: 271015
* Remove sample profile dependency on instcombine, which is not an analysis pass. (Dehao Chen, 2016-05-27, 4 files, -4/+4)
| | | | | | | | | | | | Summary: This patch removes dependency from sample profile pass to instcombine pass. Reviewers: davidxl, dnovillo Subscribers: llvm-commits Differential Revision: http://reviews.llvm.org/D20501 llvm-svn: 271009
* [RewriteStatepointsForGC] All constants should have a null base pointer (Igor Laevsky, 2016-05-27, 4 files, -3/+147)
| | | | | | | | | | | | | | | | | Currently we consider that each constant has itself as a base value, i.e. "base(const) = const". This introduces a couple of problems when we are trying to avoid reporting constants in statepoint live sets: 1. When querying "base( phi(const1, const2) )" we will get "phi(const1, const2)" as a base pointer. Since it's not a constant we will record it in a stack map. However, in practice we don't want this to happen (constants are never relocated). 2. base( phi(const, gc ptr) ) = phi( const, base(gc ptr) ). This particular case imposes a challenge on our runtime - we don't expect to see constant base pointers other than null. These problems can be avoided by treating all constants as if they were derived from the null pointer base. I.e. in the first case we will not include the constant pointer in the stack map at all, and in the second case we will get "phi(null, base(gc ptr))" as a base pointer, which is a lot more convenient. Differential Revision: http://reviews.llvm.org/D20584 llvm-svn: 270993
* Revert: r270973 - [X86][SSE] Replace (V)PMOVSX and (V)PMOVZX integer extension intrinsics with generic IR (llvm) (Simon Pilgrim, 2016-05-27, 2 files, -0/+207)
| | | | | | llvm-svn: 270976
* [X86][SSE] Replace (V)PMOVSX and (V)PMOVZX integer extension intrinsics with generic IR (llvm) (Simon Pilgrim, 2016-05-27, 2 files, -207/+0)
| | | | | | | | | This patch removes the (V)PMOVSX and (V)PMOVZX sign/zero extension intrinsics and auto-upgrades them to SEXT/ZEXT calls instead. We already did this for SSE41 PMOVSX some time ago, so much of that implementation can be reused. A companion patch (D20684) removes/auto-upgrades the clang intrinsics. Differential Revision: http://reviews.llvm.org/D20686 llvm-svn: 270973
* Form objc_storeStrong in the presence of bitcasts. (Pete Cooper, 2016-05-27, 1 file, -0/+26)
  objc_storeStrong can be formed from a sequence such as

    %0 = tail call i8* @objc_retain(i8* %p) nounwind
    %tmp = load i8*, i8** @x, align 8
    store i8* %0, i8** @x, align 8
    tail call void @objc_release(i8* %tmp) nounwind

  The code was already looking through bitcasts for most of the values involved, but had missed one case where the pointer operand for the store was a bitcast. Ultimately the pointers for the load and the store have to be the same value, after stripping casts.

  llvm-svn: 270955
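  A hypothetical variant of that sequence showing the previously missed shape, where the pointer operand of the store is itself a bitcast (names and types here are illustrative only):

  ```
  %SomeClass = type opaque
  @x = global %SomeClass* null, align 8

  declare i8* @objc_retain(i8*)
  declare void @objc_release(i8*)

  ; Hypothetical example: the load/store pointer is a bitcast of @x; after this change
  ; the ARC optimizer can still fold the sequence into objc_storeStrong.
  define void @store_through_bitcast(i8* %p) {
    %retained = tail call i8* @objc_retain(i8* %p)
    %casted = bitcast %SomeClass** @x to i8**
    %old = load i8*, i8** %casted, align 8
    store i8* %retained, i8** %casted, align 8
    tail call void @objc_release(i8* %old)
    ret void
  }
  ```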
* [LoopUnrollAnalyzer] Bail out instead of dying with assert when facing huge index. (Michael Zolotukhin, 2016-05-27, 1 file, -0/+21)
| | | | | | This fixes PR27902. llvm-svn: 270946
* Attach profile summary in IR based instrumentation pass. (Easwaran Raman, 2016-05-26, 1 file, -1/+3)
| | | | | | Differential revision: http://reviews.llvm.org/D20655 llvm-svn: 270933
* [LoopUnrollAnalyzer] Fix a crash in analyzeLoopUnrollCost. (Michael Zolotukhin, 2016-05-26, 1 file, -0/+30)
| | | | | | | | The condition might be simplified to a Constant, but it doesn't have to be a ConstantInt, so we should dyn_cast instead of cast. This fixes PR27886. llvm-svn: 270924
* [MemCpyOpt] Don't perform callslot optimization across may-throw calls (David Majnemer, 2016-05-26, 2 files, -1/+35)
| | | | | | | | | An exception could prevent a store from occurring but MemCpyOpt's callslot optimization would fire anyway, causing the store to occur. This fixes PR27849. llvm-svn: 270892
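  A hypothetical illustration of the unsafe case: if @may_throw unwinds, the memcpy (and hence the write to %dest) never happens, so rewriting @init to store directly into %dest would be observable:

  ```
  %struct.S = type { [16 x i8] }

  declare void @init(%struct.S* sret)
  declare void @may_throw()
  declare void @llvm.memcpy.p0i8.p0i8.i64(i8* nocapture, i8* nocapture readonly, i64, i32, i1)

  ; Hypothetical test: call-slot forwarding of %tmp into %dest must not fire here,
  ; because @may_throw sits between filling %tmp and copying it into %dest.
  define void @no_callslot(%struct.S* %dest) {
    %tmp = alloca %struct.S
    call void @init(%struct.S* sret %tmp)
    call void @may_throw()
    %d = bitcast %struct.S* %dest to i8*
    %s = bitcast %struct.S* %tmp to i8*
    call void @llvm.memcpy.p0i8.p0i8.i64(i8* %d, i8* %s, i64 16, i32 1, i1 false)
    ret void
  }
  ```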
* [BBVectorize] Don't vectorize selects with a scalar condition and vector operands. (Michael Kuperstein, 2016-05-26, 1 file, -0/+33)
| | | | | | | This fixes PR27879. Differential Revision: http://reviews.llvm.org/D20659 llvm-svn: 270888
* [CaptureTracking] Volatile operations capture their memory location (David Majnemer, 2016-05-26, 1 file, -0/+8)
| | | | | | | | The memory location that corresponds to a volatile operation is very special: it is observed by the machine in ways which we cannot reason about. Differential Revision: http://reviews.llvm.org/D20555 llvm-svn: 270879
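  A minimal hypothetical example of what "capture" means here: the address of %slot feeds a volatile store, so capture tracking must treat %slot as escaping rather than as a purely local location:

  ```
  ; Hypothetical test: the volatile access makes the address of %slot observable
  ; to the machine, so PointerMayBeCaptured has to answer conservatively.
  define void @volatile_capture() {
    %slot = alloca i32
    store volatile i32 0, i32* %slot
    ret void
  }
  ```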
* [InstCombine] Catch more bswap cases missed due to zext and truncs. (Chad Rosier, 2016-05-26, 1 file, -0/+38)
| | | | | | | Fixes PR27824. Differential Revision: http://reviews.llvm.org/D20591. llvm-svn: 270853
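  A hypothetical example of the general shape being matched: an i16 byte swap written with a zext to i32, shifts, and a trunc back, which can be collapsed to a call to llvm.bswap.i16 (whether this exact form is one of the newly handled cases is an assumption):

  ```
  ; Hypothetical pattern: the two bytes of %x are swapped via a widened integer,
  ; so InstCombine can fold this to @llvm.bswap.i16(%x).
  define i16 @swap16(i16 %x) {
    %e  = zext i16 %x to i32
    %hi = shl i32 %e, 8
    %lo = lshr i32 %e, 8
    %or = or i32 %hi, %lo
    %r  = trunc i32 %or to i16
    ret i16 %r
  }
  ```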
* [MergedLoadStoreMotion] Don't transform across may-throw calls (David Majnemer, 2016-05-26, 2 files, -1/+58)
| | | | | | | | | | | It is unsafe to hoist a load before a function call which may throw; the throw might prevent a pointer dereference. Likewise, it is unsafe to sink a store after a call which may throw. The caller might be able to observe the difference. This fixes PR27858. llvm-svn: 270828
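  A hypothetical sketch of the hoisting half of the problem: merging the two loads of %p into the entry block would move one of them ahead of @may_throw, which the original program never allows:

  ```
  declare void @may_throw()

  ; Hypothetical test: the load in %then only executes if @may_throw returns normally,
  ; so hoisting it (merged with the load in %else) above the call is unsafe.
  define i32 @no_hoist(i1 %c, i32* %p) {
  entry:
    br i1 %c, label %then, label %else

  then:
    call void @may_throw()
    %a = load i32, i32* %p
    br label %merge

  else:
    %b = load i32, i32* %p
    br label %merge

  merge:
    %r = phi i32 [ %a, %then ], [ %b, %else ]
    ret i32 %r
  }
  ```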
* [ConstantFold] Fix incorrect index rewrites for GEPs (Adam Nemet, 2016-05-26, 1 file, -0/+13)
| | | | | | | | | | | | | | | | | | | | | | | | | | | Summary: If an index for a vector or array type is out of range, GEP constant folding tries to factor it into preceding dimensions. The code however does not consider addressing of structure field padding, which should not qualify as an out-of-range index. As demonstrated by the testcase, this can occur if the indexing is performed on a vector type and the preceding index is an array type. SROA generates GEPs involving padding bytes, for example, as it slices an alloca. My fix disables this folding if the element type is a vector type. I believe that this is the only way we can end up with padding. (We have no access to DataLayout, so I am not sure if there is a robust way of checking for the presence of padding.) Reviewers: majnemer Subscribers: llvm-commits, Gerolf Differential Revision: http://reviews.llvm.org/D20663 llvm-svn: 270826
* MemorySSA: Revert r269678 and r268068; replace with special casing in MemorySSA. (Peter Collingbourne, 2016-05-26, 3 files, -0/+39)
| | | | | | | | | | | | | It turns out that too many passes are relying on alias analysis results for control dependencies. Until we fix that by introducing a more accurate modelling of control dependencies, special case assume in MemorySSA instead. Also introduce tests to ensure we don't regress the FunctionAttrs or LICM passes. Differential Revision: http://reviews.llvm.org/D20658 llvm-svn: 270823
* [IRCE] Optimize conjunctions of range checks (Sanjoy Das, 2016-05-26, 1 file, -0/+99)
  After this change, we do the expected thing for cases like

  ```
  Check0Passed = /* range check IRCE can optimize */
  Check1Passed = /* range check IRCE can optimize */
  if (!(Check0Passed && Check1Passed))
    throw_Exception();
  ```

  llvm-svn: 270804
* [PM] Port PartiallyInlineLibCalls to the new pass manager. (Davide Italiano, 2016-05-25, 1 file, -0/+1)
| | | | llvm-svn: 270798
* Look for a loop's starting location in the llvm.loop metadata (Hal Finkel, 2016-05-25, 1 file, -0/+74)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | Getting accurate locations for loops is important, because those locations are used by the frontend to generate optimization remarks. Currently, optimization remarks for loops often appear on the wrong line, often the first line of the loop body instead of the loop itself. This is confusing because that line might itself be another loop, or might be somewhere else completely if the body was an inlined function call. This happens because of the way we find the loop's starting location. First, we look for a preheader, and if we find one, and its terminator has a debug location, then we use that. Otherwise, we look for a location on an instruction in the loop header. The fallback heuristic is not bad, but will almost always find the beginning of the body, and not the loop statement itself. The preheader location search often fails because there's often not a preheader, and even when there is a preheader, depending on how it was formed, it sometimes carries the location of some preceding code. I don't see any good theoretical way to fix this problem. On the other hand, this seems like a straightforward solution: Put the debug location in the loop's llvm.loop metadata. A companion Clang patch will cause Clang to insert llvm.loop metadata with appropriate locations when generating debugging information. With these changes, our loop remarks have much more accurate locations. Differential Revision: http://reviews.llvm.org/D19738 llvm-svn: 270771
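  A hypothetical fragment (debug-info boilerplate omitted, so not a complete module) showing the encoding this change looks for: a DILocation attached to the loop's !llvm.loop metadata, pointing at the loop statement itself:

  ```
  ; Hypothetical sketch: the loop ID metadata carries the loop statement's
  ; source location for use by optimization remarks.
  for.body:
    ; ... loop body ...
    br i1 %exitcond, label %for.end, label %for.body, !llvm.loop !1

  !1 = distinct !{!1, !2}
  !2 = !DILocation(line: 5, column: 3, scope: !3)   ; !3 would be the enclosing DISubprogram
  ```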
* [TLI] Also cover Linux 64 libfunc (stat64, ...) prototype checking. (Ahmed Bougacha, 2016-05-25, 2 files, -1/+63)
| | | | | | My script missed those in r270750. llvm-svn: 270763
* [TLI] Fix NumParams==0 prototype checking typo. (Ahmed Bougacha, 2016-05-25, 2 files, -27/+1651)
| | | | | | | | | | | | | There was a typo in r267758. It caused invalid accesses when given something like "void @free(...)", as NumParams == 0, and we then try to look at the 0th parameter. Turns out, most of these were untested; add both attribute and missing-prototype checks for all libc libfuncs. Differential Revision: http://reviews.llvm.org/D20543 llvm-svn: 270750
* [IR] Copy comdats in GlobalObject::copyAttributesFrom (Reid Kleckner, 2016-05-25, 1 file, -0/+14)
| | | | | | | | | | | | | | This is probably correct for all uses except cross-module IR linking, where we need to move the comdat from the source module to the destination module. Fixes PR27870. Reviewers: majnemer Differential Revision: http://reviews.llvm.org/D20631 llvm-svn: 270743
* [x86] avoid code explosion from LoopVectorizer for gather loop (PR27826) (Sanjay Patel, 2016-05-25, 1 file, -0/+41)
| | | | | | | | | | | | | | By making pointer extraction from a vector more expensive in the cost model, we avoid the vectorization of a loop that is very likely to be memory-bound: https://llvm.org/bugs/show_bug.cgi?id=27826 There are still bugs related to this, so we may need a more general solution to avoid vectorizing obviously memory-bound loops when we don't have HW gather support. Differential Revision: http://reviews.llvm.org/D20601 llvm-svn: 270729
* [X86] Remove the llvm.x86.sse2.storel.dq intrinsic. It hasn't been used in a long time. (Craig Topper, 2016-05-25, 1 file, -13/+0)
| | | | | | llvm-svn: 270677
* [FunctionAttrs] Volatile loads should disable readonly (David Majnemer, 2016-05-25, 1 file, -0/+8)
| | | | | | | | A volatile load has side effects beyond what callers expect readonly to signify. For example, it is not safe to reorder two function calls which each perform a volatile load to the same memory location. llvm-svn: 270671
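  A minimal hypothetical example: the function below only reads memory, but because the load is volatile it must not be inferred readonly after this change:

  ```
  @g = global i32 0

  ; Hypothetical test: a volatile load is an observable side effect,
  ; so FunctionAttrs should not mark this function readonly.
  define i32 @reads_volatile() {
    %v = load volatile i32, i32* @g, align 4
    ret i32 %v
  }
  ```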
* [PM] Port BDCE to the new pass manager. (Davide Italiano, 2016-05-25, 1 file, -0/+1)
| | | | llvm-svn: 270647
* Re-enable "[LoopUnroll] Enable advanced unrolling analysis by default" one more time. (Michael Zolotukhin, 2016-05-24, 1 file, -1/+1)
| | | | | This reverts commit r270577. llvm-svn: 270630