path: root/llvm/test/Transforms
Commit message    Author    Age    Files    Lines
* Revert r321377, it causes a regression in https://reviews.llvm.org/P8055.Guozhi Wei2017-12-281-231/+0
| | | | llvm-svn: 321528
* [RewriteStatepoints] Fix incorrect assertionMax Kazantsev2017-12-281-0/+38
| | | | | | | | | | | | | | | | | | | | `RewriteStatepointsForGC` iterates over function blocks and their predecessors in order of declaration. One of the outcomes of this is that callsites are placed in arbitrary order which has nothing to do with traversal order. On the other hand, function `recomputeLiveInValues` asserts that bases are added to `Info.PointerToBase` before their derived pointers are updated. But if call sites are processed in an order different from RPOT, this is not necessarily true. We cannot guarantee that the base was placed there before every pointer derived from it. All we can guarantee is that this base was marked as a known base by this point. This patch replaces the assertion that the base was added to the map with an assertion that the base was marked as a known base. Differential Revision: https://reviews.llvm.org/D41593 llvm-svn: 321517
* [InstCombine] Check for isa<Instruction> before using cast<>Simon Pilgrim2017-12-281-0/+13
| | | | | | | | Protects against casts from constexpr etc. Reduced from oss-fuzz #4788 test case llvm-svn: 321515
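
A minimal sketch of the idiom behind this fix, not the committed InstCombine change itself (the helper name is invented): cast<Instruction>() asserts when the value is, for example, a ConstantExpr, so either test with isa<> first or fold the check and the cast into a single dyn_cast<>.

#include "llvm/IR/Instructions.h"

using namespace llvm;

// Hypothetical helper: returns whether the operand is a single-use
// Instruction. cast<Instruction>(Op) would assert on a constant expression;
// dyn_cast<> degrades gracefully by returning nullptr instead.
static bool isOneUseInstruction(Value *Op) {
  if (auto *I = dyn_cast<Instruction>(Op))
    return I->hasOneUse();
  return false;
}
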
* Revert "[memcpyopt] Teach memcpyopt to optimize across basic blocks"Reid Kleckner2017-12-284-243/+0
| | | | | | | | This reverts r321138. It seems there are still underlying issues with memdep. PR35519 seems to still be present if debug info is enabled. We end up losing a memcpy. Somehow during store to memset merging, we insert the memset after the memcpy or fail to update the memdep analysis to account for the newly inserted memset of a pair. Reduced test case:

#include <assert.h>
#include <stdio.h>
#include <string>
#include <utility>
#include <vector>

void do_push_back(
    std::vector<std::pair<std::string, std::vector<std::string>>>* crls) {
  crls->push_back(std::make_pair(std::string(), std::vector<std::string>()));
}

int __attribute__((optnone)) main() {
  // Put some data in the vector and then remove it so we take the push_back
  // fast path.
  std::vector<std::pair<std::string, std::vector<std::string>>> crl_set;
  crl_set.push_back({"asdf", {}});
  crl_set.pop_back();

  printf("first word in vector storage: %p\n", *(void**)crl_set.data());

  // Do the push_back which may fail to initialize the data.
  do_push_back(&crl_set);
  auto* first = &crl_set.back().first;
  printf("first word in vector storage (should be zero): %p\n",
         *(void**)crl_set.data());
  assert(first->empty());
  puts("ok");
}

Compile with libc++, enable optimizations, and enable debug info:

$ clang++ -stdlib=libc++ -g -O2 t.cpp -o t.exe -Wl,-rpath=llvm/build/lib

This program will assert with this change. llvm-svn: 321510
* [InstCombine] add tests for min/max folds (PR35717); NFCSanjay Patel2017-12-271-0/+155
| | | | llvm-svn: 321500
* [InstCombine] Gracefully handle out of range extractelement indicesSimon Pilgrim2017-12-271-0/+11
| | | | | | | | InstSimplify is responsible for handling these, but we shouldn't just assert here. Reduced from oss-fuzz #4808 test case llvm-svn: 321489
* [instcombine] add powi(x, 2) -> x * xPhilip Reames2017-12-271-0/+5
| | | | llvm-svn: 321468
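
A hedged sketch of the new fold, not the committed code (the helper name and the surrounding plumbing are invented): when the exponent operand of llvm.powi is the constant 2, the call can be replaced by a single multiply.

#include "llvm/IR/Constants.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/IntrinsicInst.h"

using namespace llvm;

// Hypothetical helper: fold powi(x, 2) into x * x; returns the replacement
// value, or nullptr if the pattern does not match.
static Value *foldPowiSquare(IntrinsicInst &II, IRBuilder<> &B) {
  if (II.getIntrinsicID() != Intrinsic::powi)
    return nullptr;
  auto *Exp = dyn_cast<ConstantInt>(II.getArgOperand(1));
  if (!Exp || !Exp->equalsInt(2))
    return nullptr;
  Value *X = II.getArgOperand(0);
  return B.CreateFMul(X, X); // the real transform would also propagate FMF
}
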
* [Unroll][DebugInfo] Propagate loop body's debug location to epilog preheaderZhaoshi Zheng2017-12-262-2/+132
| | | | | | | NewExit and the epilog PreHeader should have the same debug location as the original loop body, instead of the original loop exit. llvm-svn: 321465
* [InstCombine] fix miscompile of frem with 0.0 operand (PR34870)Sanjay Patel2017-12-261-2/+3
| | | | | | | We might want to select NAN here or do this transform with fast-math, but this should at least fix the miscompile. llvm-svn: 321461
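
For context, an illustration rather than the reduced case from the report: LLVM's frem follows fmod semantics, so a 0.0 divisor yields NaN, which is why folding it to an ordinary finite value was a miscompile.

#include <cmath>
#include <cstdio>

int main() {
  // fmod with a zero divisor is a domain error and returns NaN, mirroring
  // what 'frem x, 0.0' must produce; it cannot be folded to 0.0 or to x.
  double r = std::fmod(4.0, 0.0);
  std::printf("fmod(4.0, 0.0) = %f (isnan=%d)\n", r, std::isnan(r));
  return 0;
}
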
* [InstCombine] add test for frem with 0.0 (PR34870); NFCSanjay Patel2017-12-261-0/+13
| | | | llvm-svn: 321460
* [ValueTracking] ignore FP signed-zero when detecting a casted-to-integer fmin/fmax patternSanjay Patel2017-12-261-5/+18
| | | | | | | | | | | | | | | | | This is a preliminary step for the patch discussed in D41136 (and denoted here with the FIXME comment). When we match an FP min/max that is cast to integer, any intermediate difference between +0.0 and -0.0 should be muted by the conversion (either fptosi or fptoui) of the result. Thus, we can enable 'nsz' for the purpose of matching fmin/fmax. Note that there's probably room to generalize this more, possibly by fixing the current calls to the weak version of isKnownNonZero() in matchSelectPattern() to the more powerful recursive version. Differential Revision: https://reviews.llvm.org/D41333 llvm-svn: 321456
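
An illustration of the reasoning in plain C++, not the ValueTracking change itself: after a conversion to integer, +0.0 and -0.0 produce the same result, so any sign-of-zero difference between the fmin/fmax operands is invisible in the final value.

#include <cmath>
#include <cstdio>

int main() {
  double pz = +0.0, nz = -0.0;
  // The zeros differ in their sign bit before the cast...
  std::printf("signbit(+0.0)=%d signbit(-0.0)=%d\n",
              std::signbit(pz), std::signbit(nz));
  // ...but both convert to 0, so assuming 'nsz' cannot change the result.
  std::printf("(long long)+0.0=%lld (long long)-0.0=%lld\n",
              (long long)pz, (long long)nz);
  return 0;
}
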
* [InstSimplify] Check for in range extraction index before calling APInt::getZExtValue()Simon Pilgrim2017-12-261-0/+13
| | | | | | | | Reduced from oss-fuzz #4768 test case llvm-svn: 321454
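
A small sketch of the guarded pattern (the helper is hypothetical): APInt::getZExtValue() asserts when the value needs more than 64 bits, so the range check has to come before it.

#include "llvm/ADT/APInt.h"

using namespace llvm;

// Hypothetical helper: extract the index only when it is in range for a
// vector of NumElts elements. An oversized APInt simply fails the uge()
// comparison instead of tripping the assertion inside getZExtValue().
static bool getInRangeIndex(const APInt &Idx, unsigned NumElts, uint64_t &Out) {
  if (Idx.uge(NumElts))
    return false;
  Out = Idx.getZExtValue();
  return true;
}
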
* [CallSiteSplitting] Remove isOrHeader restriction.Florian Hahn2017-12-232-0/+157
| | | | | | | | | | | By following the single predecessors of the predecessors of the call site, we do not need to restrict the control flow. Reviewed By: junbuml, davide Differential Revision: https://reviews.llvm.org/D40729 llvm-svn: 321413
* [SimplifyCFG] Don't do if-conversion if there is a long dependence chainGuozhi Wei2017-12-221-0/+231
| | | | | | | | | | If, after if-conversion, most of the instructions in the new BB form a long and slow dependence chain, the result may be slower than the original cmp/branch even when the branch has a high miss rate: if-conversion turns the control dependence into a data dependence, and because control dependences can be speculated, the second part could otherwise execute in parallel with the first part on a modern out-of-order processor. This patch checks for such a long dependence chain and gives up on if-conversion if it finds one. Differential Revision: https://reviews.llvm.org/D39352 llvm-svn: 321377
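
A source-level illustration of the trade-off described in the commit above (not LLVM code): in the branchy form the compare is a control dependence that an out-of-order core can predict and speculate past, so the arithmetic below can start early; after if-conversion the select feeds the chain as a data dependence and every later operation must wait for it.

int branchy(int a, int b, int c) {
  int t;
  if (a > 0)        // predictable branch: the chain below can run speculatively
    t = b;
  else
    t = c;
  return ((t * 3 + 1) * 5 + 2) * 7 + 3;  // long serial dependence chain
}

int if_converted(int a, int b, int c) {
  int t = (a > 0) ? b : c;               // select: now the head of the chain
  return ((t * 3 + 1) * 5 + 2) * 7 + 3;
}
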
* [InlineCost] Find more free binary operationsHaicheng Wu2017-12-221-0/+291
| | | | | | | | | | | | Currently, the inline cost model considers a binary operator free only if both of its operands are constants. Some simple cases are missed, such as a + 0, a - a, etc. This patch modifies visitBinaryOperator() to call SimplifyBinOp() without going through simplifyInstruction(), to get rid of the constant restriction. Thus, visitAnd() and visitOr() are not needed. Differential Revision: https://reviews.llvm.org/D41494 llvm-svn: 321366
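
A hedged sketch of the idea, using the SimplifyBinOp() entry point the message refers to (spelled simplifyBinOp in later LLVM releases; the helper and its plumbing are invented): let InstSimplify decide whether the operator folds away, which catches a + 0, a - a, and similar cases where only one operand is a constant.

#include "llvm/Analysis/InstructionSimplify.h"
#include "llvm/IR/Instructions.h"

using namespace llvm;

// Hypothetical helper: a binary operator is considered free for inlining
// purposes if InstSimplify can fold it to an existing value or constant.
static bool binOpSimplifiesAway(BinaryOperator &BO, const SimplifyQuery &Q) {
  Value *LHS = BO.getOperand(0), *RHS = BO.getOperand(1);
  return SimplifyBinOp(BO.getOpcode(), LHS, RHS, Q) != nullptr;
}
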
* inline-fp.ll was moved in r321332; delete it properly.Eli Friedman2017-12-221-137/+0
| | | | llvm-svn: 321333
* [Inliner] Restrict soft-float inlining penalty.Eli Friedman2017-12-221-0/+113
| | | | | | | | | | | | | | The penalty is currently getting applied in a bunch of places where it doesn't make sense, like bitcasts (which are free) and calls (which were getting the call penalty applied twice). Instead, just apply the penalty to binary operators and floating-point casts. While I'm here, also fix getFPOpCost() to do the right thing in more cases, so we don't have to dig into function attributes. Differential Revision: https://reviews.llvm.org/D41522 llvm-svn: 321332
* [ICP] Expose unconditional call promotion interfaceMatthew Simpson2017-12-206-20/+18
| | | | | | | | | | | | | | | | | | | | | | | This patch modifies the indirect call promotion utilities by exposing and using an unconditional call promotion interface. The unconditional promotion interface (i.e., call promotion without creating an if-then-else) can be used if it's known that an indirect call has only one possible callee. The existing conditional promotion interface uses this unconditional interface to promote an indirect call after it has been versioned and placed within the "then" block. A consequence of unconditional promotion is that the fix-up operations for phi nodes in the normal destination of invoke instructions are changed. This is necessary because the existing implementation assumed that an invoke had been versioned, creating a "merge" block where a return value bitcast could be placed. In the new implementation, the edge between a promoted invoke's parent block and its normal destination is split if needed to add a bitcast for the return value. If the invoke is also versioned, the phi node merging the return value of the promoted and original invoke instructions is placed in the "merge" block. Differential Revision: https://reviews.llvm.org/D40751 llvm-svn: 321210
* [PGO] Function section hotness prefix should look at all blocksTeresa Johnson2017-12-201-10/+37
| | | | | | | | | | | | | | | | | | | | Summary: The function section prefix for PGO based layout (e.g. hot/unlikely) should look at the hotness of all blocks not just the entry BB. A function with a cold entry but a very hot loop should be placed in the hot section, for example, so that it is located close to other hot functions it may call. For SamplePGO it was already looking at the branch weights on calls, and I made that code conditional on whether this is SamplePGO since it was essentially a noop for instrumentation PGO anyway. Reviewers: davidxl Subscribers: eraman, llvm-commits Differential Revision: https://reviews.llvm.org/D41395 llvm-svn: 321197
* [InstCombine] Add debug location to new caller.Florian Hahn2017-12-201-4/+19
| | | | | | | | | | Reviewers: rnk, aprantl, majnemer Reviewed By: aprantl Differential Revision: https://reviews.llvm.org/D414 llvm-svn: 321191
* Revert r320548:[SLP] Vectorize jumbled memory loadsMohammad Shahid2017-12-205-384/+52
| | | | llvm-svn: 321181
* [LV] Remove unnecessary DoExtraAnalysis guard (silent bug)Florian Hahn2017-12-201-0/+27
| | | | | | | | | | | | | | canVectorize only checks whether the loop has a normalized pre-header when DoExtraAnalysis is true. This doesn't make sense to me because reporting analysis information shouldn't alter legality checks. This is probably the result of a last-minute minor change before committing (?). Patch by Diego Caballero. Reviewed By: fhahn Differential Revision: https://reviews.llvm.org/D40973 llvm-svn: 321172
* [memcpyopt] Teach memcpyopt to optimize across basic blocksDan Gohman2017-12-204-0/+243
| | | | | | | | | | | | | | | | | This teaches memcpyopt to make a non-local memdep query when a local query indicates that the dependency is non-local. This notably allows it to eliminate many more llvm.memcpy calls in common Rust code, often by 20-30%. This is r319482 and r319483, along with fixes for PR35519: fix the optimization that merges stores into memsets to preserve cached memdep info, and fix memdep's non-local caching strategy to not assume that larger queries are always more conservative than smaller ones. Fixes PR28958 and PR35519. Differential Revision: https://reviews.llvm.org/D40802 llvm-svn: 321138
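
A source-level illustration of the kind of pattern this enables (not the Rust code mentioned above): the copy through the temporary can now be forwarded even when the second memcpy sits in a different basic block.

#include <cstring>

// The second memcpy's only dependency is the first one, but it lives in
// another block; with non-local memdep queries, memcpyopt can rewrite it to
// copy straight from 'src', after which the temporary has no readers left.
void forward_copy(char *dst, const char *src, bool flush) {
  char tmp[64];
  std::memcpy(tmp, src, sizeof(tmp));
  if (flush)
    std::memcpy(dst, tmp, sizeof(tmp));
}
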
* [InlineCost] Skip volatile loads when looking for repeated loadsHaicheng Wu2017-12-191-0/+18
| | | | | | This is a follow-up fix of r320814. A test case is also added. llvm-svn: 321075
* [JumpThreading] Restrict PRE across instructions that don't pass control to successorsMax Kazantsev2017-12-191-0/+103
| | | | | | | | | | | | | | PRE in JumpThreading should not be able to hoist copies of non-speculable loads across instructions that don't always transfer execution to their successors, otherwise it may introduce an unsafe load which otherwise would not be executed. The same problem for GVN was fixed as rL316975. Differential Revision: https://reviews.llvm.org/D40347 llvm-svn: 321063
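
A source-level illustration of the hazard (hypothetical, not the test case added here): the final load of *p is partially redundant, but placing a copy of it in the non-loading predecessor would move it above guard(), so it would execute even on runs where guard() never returns.

// guard() may call exit() or throw, i.e. it does not always transfer
// execution to its successor.
extern void guard(int *p);

int read_after_guard(int *p, bool have_cached) {
  int v = 0;
  if (have_cached)
    v = *p;              // the load already happens on this path
  guard(p);              // control may never reach the next line
  return v + *p;         // PRE must not hoist a copy of this load above guard()
}
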
* [Analysis] Generate more precise TBAA tags when one access encloses the otherIvan A. Kosarev2017-12-182-42/+80
| | | | | | | | | | | | | | There are cases when two tags with different base types denote accesses to the same direct or indirect member of a structure type. Currently, merging of such tags results in a tag that represents an access to an object that has the type of that member. This patch changes this so that if one of the accesses encloses the other, then the generic tag is the one of the enclosed access. Differential Revision: https://reviews.llvm.org/D39557 llvm-svn: 321019
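
A rough source-level picture of the situation described above (an illustration, not the test IR): the same integer field can be reached through tags whose base types differ — here the enclosing struct and the nested struct — and when one access encloses the other, the merged tag should be that of the enclosed (narrower) access.

struct Inner { int x; };
struct Outer { Inner in; int pad; };

// Both loads touch the same 'int'; one access path is rooted at Outer, the
// other at Inner, so their TBAA tags have different base types even though
// the memory is identical.
int read_both_ways(Outer *o) {
  Inner *i = &o->in;
  return o->in.x + i->x;
}
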
* [PGO] Fix handling of cold entry count for instrumented PGOTeresa Johnson2017-12-181-2/+2
| | | | | | | | | | | | | | | | | | | | | Summary: In r277849, getEntryCount was changed to return None when the entry count was 0, specifically for SamplePGO where it means no samples were recorded. However, for instrumentation PGO a 0 entry count should be returned directly, since it does mean that the function was completely cold. Otherwise we end up treating these functions conservatively in isFunctionEntryCold() and isColdBB(). Instead, for SamplePGO use -1 when there are no samples, and change getEntryCount to return None when the value is -1. Reviewers: danielcdh, davidxl Subscribers: llvm-commits Differential Revision: https://reviews.llvm.org/D41307 llvm-svn: 321018
* [PGO] add MST min edge selection heuristic to ensure non-zero entry countXinliang David Li2017-12-189-14/+35
| | | | | | Differential Revision: http://reviews.llvm.org/D41059 llvm-svn: 320998
* [TargetLibraryInfo] Discard library functions with incorrectly sized integersIgor Laevsky2017-12-181-0/+16
| | | | | | Differential Revision: https://reviews.llvm.org/D41184 llvm-svn: 320964
* [SROA] Disable non-whole-alloca splits by defaultHiroshi Inoue2017-12-182-57/+16
| | | | | | | This patch introduces a switch, off by default, to control splitting of non-whole-alloca slices. The splitting will be turned back on by default once the issue reported in PR35657 is fixed. llvm-svn: 320958
* [CGP] Fix the handling of select instructions in complex addressing modeSerguei Katkov2017-12-181-0/+21
| | | | | | | | | | | | | | | When we put a value into the select placeholder, we must pass it through the simplification tracker, because the value might already have been simplified and erased. This is a fix for PR35658. Reviewers: john.brawn, uabelho Reviewed By: john.brawn Subscribers: llvm-commits Differential Revision: https://reviews.llvm.org/D41251 llvm-svn: 320956
* [InstCombine] Regenerate FMUL/FMA combine tests with update_test_checks.pySimon Pilgrim2017-12-162-100/+186
| | | | llvm-svn: 320922
* [InstCombine] canonicalize shifty abs(): ashr+add+xor --> cmp+neg+selSanjay Patel2017-12-161-12/+12
| | | | | | | | We want to do this for 2 reasons:
1. Value tracking does not recognize the ashr variant, so it would fail to match for cases like D39766.
2. DAGCombiner does better at producing optimal codegen when we have the cmp+sel pattern.

More detail about what happens in the backend:
1. DAGCombiner has a generic transform for all targets to convert the scalar cmp+sel variant of abs into the shift variant. That is the opposite of this IR canonicalization.
2. DAGCombiner has a generic transform for all targets to convert the vector cmp+sel variant of abs into either an ABS node or the shift variant. That is again the opposite of this IR canonicalization.
3. DAGCombiner has a generic transform for all targets to convert the exact shift variants produced by #1 or #2 into an ISD::ABS node. Note: It would be an efficiency improvement if we had #1 go directly to an ABS node when that's legal/custom.
4. The pattern matching above is incomplete, so it is possible to escape the intended/optimal codegen in a variety of ways.
   a. For #2, the vector path is missing the case for setlt with a '1' constant.
   b. For #3, we are missing a match for commuted versions of the shift variants.
5. Therefore, this IR canonicalization can only help get us to the optimal codegen. The version of cmp+sel produced by this patch will be recognized in the DAG and converted to an ABS node when possible or the shift sequence when not.
6. In the following examples with this patch applied, we may get conditional moves rather than the shift produced by the generic DAGCombiner transforms. The conditional move is created using a target-specific decision for any given target. Whether it is optimal or not for a particular subtarget may be up for debate.

define i32 @abs_shifty(i32 %x) {
  %signbit = ashr i32 %x, 31
  %add = add i32 %signbit, %x
  %abs = xor i32 %signbit, %add
  ret i32 %abs
}

define i32 @abs_cmpsubsel(i32 %x) {
  %cmp = icmp slt i32 %x, zeroinitializer
  %sub = sub i32 zeroinitializer, %x
  %abs = select i1 %cmp, i32 %sub, i32 %x
  ret i32 %abs
}

define <4 x i32> @abs_shifty_vec(<4 x i32> %x) {
  %signbit = ashr <4 x i32> %x, <i32 31, i32 31, i32 31, i32 31>
  %add = add <4 x i32> %signbit, %x
  %abs = xor <4 x i32> %signbit, %add
  ret <4 x i32> %abs
}

define <4 x i32> @abs_cmpsubsel_vec(<4 x i32> %x) {
  %cmp = icmp slt <4 x i32> %x, zeroinitializer
  %sub = sub <4 x i32> zeroinitializer, %x
  %abs = select <4 x i1> %cmp, <4 x i32> %sub, <4 x i32> %x
  ret <4 x i32> %abs
}

> $ ./opt -instcombine shiftyabs.ll -S | ./llc -o - -mtriple=x86_64 -mattr=avx
> abs_shifty:
>   movl %edi, %eax
>   negl %eax
>   cmovll %edi, %eax
>   retq
>
> abs_cmpsubsel:
>   movl %edi, %eax
>   negl %eax
>   cmovll %edi, %eax
>   retq
>
> abs_shifty_vec:
>   vpabsd %xmm0, %xmm0
>   retq
>
> abs_cmpsubsel_vec:
>   vpabsd %xmm0, %xmm0
>   retq
>
> $ ./opt -instcombine shiftyabs.ll -S | ./llc -o - -mtriple=aarch64
> abs_shifty:
>   cmp w0, #0 // =0
>   cneg w0, w0, mi
>   ret
>
> abs_cmpsubsel:
>   cmp w0, #0 // =0
>   cneg w0, w0, mi
>   ret
>
> abs_shifty_vec:
>   abs v0.4s, v0.4s
>   ret
>
> abs_cmpsubsel_vec:
>   abs v0.4s, v0.4s
>   ret
>
> $ ./opt -instcombine shiftyabs.ll -S | ./llc -o - -mtriple=powerpc64le
> abs_shifty:
>   srawi 4, 3, 31
>   add 3, 3, 4
>   xor 3, 3, 4
>   blr
>
> abs_cmpsubsel:
>   srawi 4, 3, 31
>   add 3, 3, 4
>   xor 3, 3, 4
>   blr
>
> abs_shifty_vec:
>   vspltisw 3, -16
>   vspltisw 4, 15
>   vsubuwm 3, 4, 3
>   vsraw 3, 2, 3
>   vadduwm 2, 2, 3
>   xxlxor 34, 34, 35
>   blr
>
> abs_cmpsubsel_vec:
>   vspltisw 3, -16
>   vspltisw 4, 15
>   vsubuwm 3, 4, 3
>   vsraw 3, 2, 3
>   vadduwm 2, 2, 3
>   xxlxor 34, 34, 35
>   blr

Differential Revision: https://reviews.llvm.org/D40984 llvm-svn: 320921
* Move Transforms/LoopVectorize/consecutive-ptr-cg-bug.ll into the X86 subdirectoryHal Finkel2017-12-161-0/+0
| | | | | | | | This test depends on X86's TTI; move into the X86 subdirectory. llvm-svn: 320914
* [LV] Extend InstWidening with CM_Widen_RecursiveHal Finkel2017-12-161-0/+68
| | | | | | | | | | | | | | | | | | | | Changes to the original scalar loop during LV code gen cause the return value of Legal->isConsecutivePtr() to be inconsistent with the return value during the legal/cost phases (further analysis and information on the bug are in D39346). This patch is an alternative fix to PR34965 following the CM_Widen approach proposed by Ayal and Gil in D39346. It extends the InstWidening enum with CM_Widen_Reverse to properly record the widening decision for consecutive reverse memory accesses and, consequently, gets rid of the Legal->isConsecutivePtr() call in LV code gen. I think this is a simpler/cleaner solution to PR34965 than the one in D39346. Fixes PR34965. Patch by Diego Caballero, thanks! Differential Revision: https://reviews.llvm.org/D40742 llvm-svn: 320913
* [LTO] Make processing of combined module more consistentVitaly Buka2017-12-162-2/+2
| | | | | | | | | | | | | | | | Summary: 1. Use stream 0 only for the combined module. Previously, if the combined module was not processed, ThinLTO used that stream for its own output. However, small changes in the input could trigger processing of the combined module and shuffle the outputs, making life harder for llvm::LTO users. 2. Always process the combined module and write its output to stream 0. Processing an empty combined module is cheap and allows llvm::LTO users to avoid implementing processing which is already done in llvm::LTO. Subscribers: mehdi_amini, inglorion, eraman, hiraditya Differential Revision: https://reviews.llvm.org/D41267 llvm-svn: 320905
* [SimplifyLibCalls] Inline calls to cabs when it's safe to do soHal Finkel2017-12-162-0/+124
| | | | | | | | | | When unsafe algebra is allowed, calls to cabs(r) can be replaced by: sqrt(creal(r)*creal(r) + cimag(r)*cimag(r)) Patch by Paul Walker, thanks! Differential Revision: https://reviews.llvm.org/D40069 llvm-svn: 320901
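
The replacement spelled out in C++ (a sketch of the formula above, not the SimplifyLibCalls code): it is only valid under unsafe/fast math because the expanded form can overflow or underflow where a careful cabs() implementation, which behaves like hypot(), would not.

#include <cmath>
#include <complex>

// cabs(r) -> sqrt(creal(r)*creal(r) + cimag(r)*cimag(r))
double cabs_expanded(const std::complex<double> &r) {
  double re = r.real(), im = r.imag();
  return std::sqrt(re * re + im * im);
}
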
* [ThinLTO] Enable importing of aliases as copy of aliaseeTeresa Johnson2017-12-161-5/+6
| | | | | | | | | | | | | | | | | | | | | | Summary: This implements a missing feature to allow importing of aliases, which was previously disabled because an alias cannot be available_externally. We instead import an alias as a copy of its aliasee. Some additional work was required in the IndexBitcodeWriter for the distributed build case, to ensure that the aliasee has a value id in the distributed index file (i.e. even when it is not being imported directly). This is a performance win in code that has many aliases, e.g. C++ applications that have many constructor and destructor aliases. Reviewers: pcc Subscribers: mehdi_amini, inglorion, eraman, llvm-commits Differential Revision: https://reviews.llvm.org/D40747 llvm-svn: 320895
* Re-commit : [LICM] Allow sinking when foldable in loopJun Bum Lim2017-12-151-0/+150
| | | | | | | | | | | | | | | | | | | | | | This recommits r320823, which was reverted due to a test failure in sink-foldable.ll and an unused variable. Added "REQUIRES: aarch64-registered-target" to the test and removed the unused variable. Original commit message: Continue trying to sink an instruction if its users in the loop are foldable. This will allow the instruction to be folded in the loop by decoupling it from the user outside of the loop. Reviewers: hfinkel, majnemer, davidxl, efriedma, danielcdh, bmakam, mcrosier Reviewed By: hfinkel Subscribers: javed.absar, bmakam, mcrosier, llvm-commits Differential Revision: https://reviews.llvm.org/D37076 llvm-svn: 320858
* Revert "Re-commit : [LICM] Allow sinking when foldable in loop"Jun Bum Lim2017-12-151-150/+0
| | | | | | This reverts commit r320833. llvm-svn: 320836
* Re-commit : [LICM] Allow sinking when foldable in loopJun Bum Lim2017-12-151-0/+150
| | | | | | | | | | | | | | | | | | | | This recommits r320823 after fixing a test failure. Original commit message: Continue trying to sink an instruction if its users in the loop are foldable. This will allow the instruction to be folded in the loop by decoupling it from the user outside of the loop. Reviewers: hfinkel, majnemer, davidxl, efriedma, danielcdh, bmakam, mcrosier Reviewed By: hfinkel Subscribers: javed.absar, bmakam, mcrosier, llvm-commits Differential Revision: https://reviews.llvm.org/D37076 llvm-svn: 320833
* Revert "[LICM] Allow sinking when foldable in loop"Jun Bum Lim2017-12-151-147/+0
| | | | | | This reverts commit r320823. llvm-svn: 320828
* [LICM] Allow sinking when foldable in loopJun Bum Lim2017-12-151-0/+147
| | | | | | | | | | | | | | | | Summary: Continue trying to sink an instruction if its users in the loop are foldable. This will allow the instruction to be folded in the loop by decoupling it from the user outside of the loop. Reviewers: hfinkel, majnemer, davidxl, efriedma, danielcdh, bmakam, mcrosier Reviewed By: hfinkel Subscribers: javed.absar, bmakam, mcrosier, llvm-commits Differential Revision: https://reviews.llvm.org/D37076 llvm-svn: 320823
* [InlineCost] Find repeated loads in the calleeHaicheng Wu2017-12-151-0/+186
| | | | | | | | | | The SROA analysis in InlineCost can figure out that some stores can be removed after inlining, and then the repeated loads clobbered by these stores are also free. This patch finds these clobbered loads and adjusts the inline cost accordingly. Differential Revision: https://reviews.llvm.org/D33946 llvm-svn: 320814
* [PM] port Rewrite Statepoints For GC to the new pass manager.Fedor Sergeev2017-12-1543-0/+43
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Summary: The port is nearly straightforward. The only complication is related to the analyses handling, since one of the analyses used in this module pass is domtree, which is a function analysis. That requires asking for the results of each function and disallows a single interface for run-on-module pass action. Decided to copy-paste the main body of this pass. Most of its code is requesting analyses anyway, so not that much of a copy-paste. The rest of the code movement is to transform all the implementation helper functions like stripNonValidData into non-member statics. Extended all the related LLVM tests with new-pass-manager use. No failures. Reviewers: sanjoy, anna, reames Reviewed By: anna Subscribers: skatkov, llvm-commits Differential Revision: https://reviews.llvm.org/D41162 llvm-svn: 320796
* [SCEV] Fix the movement of insertion point in expander. PR35406.Serguei Katkov2017-12-151-0/+88
| | | | | | | | | | | We cannot move the insertion point up to the header if the SCEV contains div/rem operations, because they may end up above the check for a zero denominator. Reviewers: sanjoy, mkazantsev, sebpop Reviewed By: sebpop Subscribers: llvm-commits Differential Revision: https://reviews.llvm.org/D41229 llvm-svn: 320789
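
A source-level illustration of the hazard (not the PR35406 test): the division is only reachable after the zero check, so an expander that moves its insertion point up to the loop header could end up evaluating it ahead of that guard.

// Hoisting 'num / den' above the guard would reintroduce the possible
// division by zero that the check exists to prevent.
int guarded_div(int num, int den) {
  if (den == 0)
    return 0;
  return num / den;
}
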
* [SimplifyCFG] don't sink common insts too soon (PR34603)Sanjay Patel2017-12-143-6/+4
| | | | | | | | | | | | This should solve: https://bugs.llvm.org/show_bug.cgi?id=34603 ...by preventing SimplifyCFG from altering redundant instructions before early-cse has a chance to run. It changes the default (canonical-forming) behavior of SimplifyCFG, so we're only doing the sinking transform later in the optimization pipeline. Differential Revision: https://reviews.llvm.org/D38566 llvm-svn: 320749
* [SLPVectorizer] Don't ignore scalar extraction instructions of aggregate valueGuozhi Wei2017-12-142-0/+38
| | | | | | | | | In SLPVectorizer, the vector build instructions (insertvalue for aggregate types) are passed to BoUpSLP.buildTree and treated as the UserIgnoreList, so later, during cost estimation, the cost of these instructions is not counted. For an aggregate value, later uses are more likely to happen in scalar registers, either as individual scalars or as a whole for a function call or return value. Ignoring the scalar extraction instructions may cause overly aggressive vectorization of aggregate values and slow down performance. So, for vectorization of aggregate values, the scalar extraction instructions are required in cost estimation. Differential Revision: https://reviews.llvm.org/D41139 llvm-svn: 320736
* [ScalarEvolution] Fix base condition in isNormalAddRecPHI.Bjorn Pettersson2017-12-141-0/+63
| | | | | | | | | | | | | | | | | | | | | | | | | | | | Summary: The function is meant to recurse until it comes upon the phi it's looking for. However, with the current condition, it will recurse until it finds anything _but_ the phi. The function will even fail for simple cases like: %i = phi i32 [ %inc, %loop ], ... ... %inc = add i32 %i, 1 because the base condition will not happen when the phi is recursed to, and the recursion will end with a 'false' result since the previous instruction is a phi. Reviewers: sanjoy, atrick Reviewed By: sanjoy Subscribers: Ka-Ka, bjope, llvm-commits Committing on behalf of: Bevin Hansson (bevinh) Differential Revision: https://reviews.llvm.org/D40946 llvm-svn: 320700
* [InlineCost] Tracking Values through PHI NodesHaicheng Wu2017-12-141-0/+504
| | | | | | | | | | | This patch fixes this FIXME in visitPHI(): FIXME: We should potentially be tracking values through phi nodes, especially when they collapse to a single value due to deleted CFG edges during inlining. Differential Revision: https://reviews.llvm.org/D38594 llvm-svn: 320699