path: root/llvm/test/Transforms
...
* [Unroll] Do NOT unroll a loop with small runtime upperbound (Zhaoshi Zheng, 2019-09-26, 1 file, -0/+70)
  For a runtime loop, if we can compute an upperbound for its trip count,
  don't unroll if:
  1. the loop is not guaranteed to run either zero or upperbound
     iterations; and
  2. the trip count upperbound is less than UnrollMaxUpperBound,
  unless the user or TTI asked to do so. If unrolling, limit the unroll
  factor to the loop's trip count upperbound.

  Differential Revision: https://reviews.llvm.org/D62989
  Change-Id: I6083c46a9d98b2e22cd855e60523fdc5a4929c73
  llvm-svn: 373017
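  As an illustration, a minimal hypothetical loop of the kind this affects
  (not taken from the patch's test file): the mask bounds %n by 3, so SCEV
  can compute a trip count upperbound of 3, which is below
  UnrollMaxUpperBound at its default, and the loop is not guaranteed to
  run either zero or exactly three iterations, so it is no longer unrolled
  past that bound.
  ```
  define void @small_upperbound(i32* %p, i32 %len) {
  entry:
    %n = and i32 %len, 3                              ; trip count <= 3
    %nonzero = icmp ne i32 %n, 0
    br i1 %nonzero, label %loop, label %exit
  loop:
    %i = phi i32 [ 0, %entry ], [ %i.next, %loop ]
    %addr = getelementptr inbounds i32, i32* %p, i32 %i
    store i32 0, i32* %addr
    %i.next = add nuw nsw i32 %i, 1
    %cont = icmp ult i32 %i.next, %n
    br i1 %cont, label %loop, label %exit
  exit:
    ret void
  }
  ```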
* [InstCombine][NFC] Add tests for shift-by-signext (Roman Lebedev, 2019-09-26, 1 file, -0/+105)
  llvm-svn: 373013
* [InstCombine][NFC] Regenerate load-cmp.ll test (Roman Lebedev, 2019-09-26, 1 file, -25/+25)
  llvm-svn: 373012
* [NFC] Precommit tests for D68089 (David Bolvansky, 2019-09-26, 1 file, -0/+79)
  llvm-svn: 373006
* [InstCombine] Use m_Zero instead of isNullValue() when checking if a GEP index is all zeroes, to prevent an infinite loop (Craig Topper, 2019-09-26, 1 file, -0/+17)
  The test case here previously caused an infinite loop. Only one element
  of the GEP result is used, so SimplifyDemandedVectorElts would replace
  the other lanes in each index with undef, leading to the first index
  becoming <0, undef, undef, undef>. But there is a GEP transform that
  tries to replace an index into a zero-sized type with a zero index, and
  its zero-index check only recognized ConstantInt 0 or
  ConstantAggregateZero, so it turned the index back into zeroinitializer,
  resulting in a loop. The fix is to use m_Zero() to also allow a vector
  of zeroes and undefs.

  Differential Revision: https://reviews.llvm.org/D67977
  llvm-svn: 373000
* [LoopInfo] Limit the iterations to check whether a loop has dedicated exits for extremely large cases (Wei Mi, 2019-09-26, 1 file, -0/+102)
  We had a case where a single loop had 4000 exits and the average number
  of predecessors per exit was more than 1000, and we found that compiling
  the case spent a significant amount of time on checking whether the loop
  has dedicated exits. This patch adds a limit on the iterations of that
  check. With the patch, the time to compile our testcase was reduced from
  1000s to 200s (clang release build).

  Differential Revision: https://reviews.llvm.org/D67359
  llvm-svn: 372990
* Handle successor's PHI node correctly when flattening CFG merges two if-regions (Jakub Kuderski, 2019-09-26, 1 file, -0/+30)
  Summary:
  FlattenCFG merges two 'if' basic blocks by inserting one basic block
  into the other. The inserted basic block can have a successor that
  contains a PHI node whose incoming basic block is the inserted block.
  Since the existing code did not handle this, the operand became a
  badref:

    if (cond1)
      statement
    if (cond2)
      statement
    successor - contains PHI node whose predecessor is cond2

  -->

    if (cond1 || cond2)
      statement
    (BB for cond2 was deleted)
    successor - contains PHI node whose predecessor is cond2 --> bad ref!

  Author: Jaebaek Seo

  Reviewers: asbirlea, kuhar, tstellar, chandlerc, davide, dexonsmith
  Reviewed By: kuhar
  Subscribers: hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D68032
  llvm-svn: 372989
* [InstCombine] Don't assume CmpInst has been visited in getFlippedStrictnessPredicateAndConstant (Bjorn Pettersson, 2019-09-26, 1 file, -0/+36)
  Summary:
  Remove an assumption (assert) that the CmpInst has already been
  simplified in getFlippedStrictnessPredicateAndConstant. The solution is
  to simply bail out instead of hitting the assertion; we assume that any
  profitable rewrite will happen in the next iteration of InstCombine.

  The reason we can't assume the CmpInst has already been simplified is
  that the worklist does not guarantee such an ordering.

  Solves https://bugs.llvm.org/show_bug.cgi?id=43376

  Reviewers: spatel, lebedev.ri
  Reviewed By: lebedev.ri
  Subscribers: hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D68022
  llvm-svn: 372972
* [SLPVectorizer][X86] Add SSE common check prefix to let us merge SSE2+SLM checks (Simon Pilgrim, 2019-09-26, 2 files, -150/+4)
  llvm-svn: 372955
* [CostModel][X86] Fix SLM <2 x i64> icmp costs (Simon Pilgrim, 2019-09-26, 2 files, -32/+64)
  SLM is 2x slower for <2 x i64> comparison ops than for other vector
  types, so we should account for this as we do for SLM <2 x i64>
  add/sub/mul costs. This should remove some of the SLM codegen diffs in
  D43582.

  llvm-svn: 372954
* [InstCombine] foldUnsignedUnderflowCheck(): one last pattern with 'sub' (PR43251) (Roman Lebedev, 2019-09-25, 1 file, -8/+4)
  https://rise4fun.com/Alive/0j9

  llvm-svn: 372930
* [NFC][InstCombine] Tests for 'base u<= offset && (base - offset) != 0' pattern (PR43251) (Roman Lebedev, 2019-09-25, 1 file, -0/+33)
  llvm-svn: 372929
* [InstSimplify] Handle more 'A </>/>=/<= B &&/|| (A - B) !=/== 0' patterns (PR43251) (Roman Lebedev, 2019-09-25, 2 files, -28/+10)
  https://rise4fun.com/Alive/sl9s
  https://rise4fun.com/Alive/2plN
  https://bugs.llvm.org/show_bug.cgi?id=43251

  llvm-svn: 372928
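  A sketch of the shape of these patterns (hypothetical function name):
  here `%a u> %b` already implies `%a - %b != 0`, so the conjunction
  simplifies to the first compare.
  ```
  define i1 @ugt_and_sub_nonzero(i8 %a, i8 %b) {
    %cmp = icmp ugt i8 %a, %b
    %sub = sub i8 %a, %b
    %nz = icmp ne i8 %sub, 0
    %r = and i1 %cmp, %nz       ; simplifies to just %cmp
    ret i1 %r
  }
  ```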
* [NFC][InstSimplify] More exhaustive test coverage for 'A </>/>=/<= B &&/|| (A - B) !=/== 0' pattern (PR43251) (Roman Lebedev, 2019-09-25, 1 file, -68/+245)
  llvm-svn: 372927
* [InstSimplify] Match 1.0 and 0.0 for both operands in SimplifyFMAMul (Florian Hahn, 2019-09-25, 1 file, -3/+2)
  Because we do not constant fold multiplications in SimplifyFMAMul, we
  match 1.0 and 0.0 for both operands, as multiplying by them is
  guaranteed to produce an exact result (if it is allowed to do so).

  Note that it is not enough to just swap the operands to ensure a
  constant is on the RHS, as we want to also cover the case with
  2 constants.

  Reviewers: lebedev.ri, spatel, reames, scanon
  Reviewed By: lebedev.ri, reames
  Differential Revision: https://reviews.llvm.org/D67553
  llvm-svn: 372915
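  A hedged sketch of the effect (hypothetical functions): multiplying by
  1.0 is always exact, so no fast-math flags are needed for that case,
  while the 0.0 case is only "allowed to do so" with flags such as
  nnan/nsz.
  ```
  declare float @llvm.fma.f32(float, float, float)

  ; The multiplication by 1.0 is exact, so this can simplify to
  ; fadd float %x, %y even though 1.0 is the first operand.
  define float @fma_times_one(float %x, float %y) {
    %r = call float @llvm.fma.f32(float 1.0, float %x, float %y)
    ret float %r
  }

  ; With nnan/nsz, %x * 0.0 is known to produce 0.0, so only the addend
  ; remains.
  define float @fma_times_zero(float %x, float %y) {
    %r = call nnan nsz float @llvm.fma.f32(float %x, float 0.0, float %y)
    ret float %r
  }
  ```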
* [InstCombine] Fold (A - B) u>=/u< A --> B u>/u<= A iff B != 0 (Roman Lebedev, 2019-09-25, 2 files, -8/+8)
  https://rise4fun.com/Alive/KtL

  This also shows that the fold added in D67412 / r372257 was too
  specific: the new fold handles those test cases more generically, so I
  deleted the now-dead code.

  This is yet again motivated by D67122 "[UBSan][clang][compiler-rt]
  Applying non-zero offset to nullptr is undefined behaviour".

  llvm-svn: 372912
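  A sketch of the fold with the side-condition made visible (hypothetical
  function; the `or 1` makes %b provably non-zero):
  ```
  define i1 @sub_underflow_check(i8 %a, i8 %b.arg) {
    %b = or i8 %b.arg, 1          ; now known: %b != 0
    %sub = sub i8 %a, %b
    %r = icmp ult i8 %sub, %a     ; folds to: icmp ule i8 %b, %a
    ret i1 %r
  }
  ```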
* [NFC][InstCombine] Add tests for (X - Y) < X --> Y <= X iff Y != 0 (Roman Lebedev, 2019-09-25, 1 file, -0/+111)
  https://rise4fun.com/Alive/KtL

  This should go to InstCombiner::foldICmpBinOp(), next to "Convert
  sub-with-unsigned-overflow comparisons into a comparison of args."

  llvm-svn: 372911
* [InstCombine] Limit FMul constant folding for fma simplifications. (Florian Hahn, 2019-09-25, 1 file, -10/+70)
  As @reames pointed out post-commit, rL371518 added additional rounding
  in some cases when doing constant folding of the multiplication. This
  breaks a guarantee llvm.fma makes and must be avoided.

  This patch reapplies rL371518, but splits off the simplifications not
  requiring rounding from SimplifyFMulInst as SimplifyFMAFMul.

  Reviewers: spatel, lebedev.ri, reames, scanon
  Reviewed By: reames
  Differential Revision: https://reviews.llvm.org/D67434
  llvm-svn: 372899
* [GCRelocate] Add a peephole to canonicalize base pointer relocation (Philip Reames, 2019-09-24, 1 file, -0/+11)
  If we generate the gc.relocate, and then later prove two arguments to
  the statepoint are equivalent, we should canonicalize the gc.relocate to
  the form we would have produced if this had been known before rewriting.

  llvm-svn: 372771
* [InstCombine] (a+b) < a && (a+b) != 0 -> (0-b) < a iff a/b != 0 (PR43259) (Roman Lebedev, 2019-09-24, 1 file, -34/+27)
  Summary:
  This is again motivated by the D67122 sanitizer check enhancement. That
  patch seemingly worsens `-fsanitize=pointer-overflow` overhead from 25%
  to 50%, which strongly implies missing folds.

  For
  ```
  #include <cassert>
  char* test(char& base, signed long offset) {
    __builtin_assume(offset < 0);
    return &base + offset;
  }
  ```
  we produce https://godbolt.org/z/r40U47, and again those two icmp's can
  be merged:
  ```
  Name: 0
  Pre: C != 0
    %adjusted = add i8 %base, C
    %not_null = icmp ne i8 %adjusted, 0
    %no_underflow = icmp ult i8 %adjusted, %base
    %r = and i1 %not_null, %no_underflow
  =>
    %neg_offset = sub i8 0, C
    %r = icmp ugt i8 %base, %neg_offset
  ```
  https://rise4fun.com/Alive/ALap
  https://rise4fun.com/Alive/slnN

  There are 3 other variants of this pattern; I believe they will all go
  into InstSimplify.

  https://bugs.llvm.org/show_bug.cgi?id=43259

  Reviewers: spatel, xbolva00, nikic
  Reviewed By: spatel
  Subscribers: efriedma, hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D67849
  llvm-svn: 372768
* [InstCombine] (a+b) <= a && (a+b) != 0 -> (0-b) < a (PR43259) (Roman Lebedev, 2019-09-24, 1 file, -26/+21)
  Summary:
  This is again motivated by the D67122 sanitizer check enhancement. That
  patch seemingly worsens `-fsanitize=pointer-overflow` overhead from 25%
  to 50%, which strongly implies missing folds.

  This pattern isn't exactly what we get there (strict vs. non-strict
  predicate), but this pattern does not require known-bits analysis, so it
  is best to handle it first.
  ```
  Name: 0
    %adjusted = add i8 %base, %offset
    %not_null = icmp ne i8 %adjusted, 0
    %no_underflow = icmp ule i8 %adjusted, %base
    %r = and i1 %not_null, %no_underflow
  =>
    %neg_offset = sub i8 0, %offset
    %r = icmp ugt i8 %base, %neg_offset
  ```
  https://rise4fun.com/Alive/knp

  There are 3 other variants of this pattern; they all will go into
  InstSimplify: https://rise4fun.com/Alive/bIDZ

  https://bugs.llvm.org/show_bug.cgi?id=43259

  Reviewers: spatel, xbolva00, nikic
  Reviewed By: spatel
  Subscribers: hiraditya, majnemer, vsk, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D67846
  llvm-svn: 372767
* [Debuginfo] dbg.value points to undef value after Induction Variable Simplification (Alexey Lapshin, 2019-09-24, 2 files, -0/+182)
  The Induction Variable Simplification pass does not update dbg.value
  intrinsics.

  Before:
    %add = add nuw nsw i32 %ArgIndex.06, 1
    call void @llvm.dbg.value(metadata i32 %add, metadata !17, metadata !DIExpression())

  After:
    %indvars.iv.next = add nuw nsw i64 %indvars.iv, 1
    call void @llvm.dbg.value(metadata i64 undef, metadata !17, metadata !DIExpression())

  There should be:
    %indvars.iv.next = add nuw nsw i64 %indvars.iv, 1
    call void @llvm.dbg.value(metadata i64 %indvars.iv.next, metadata !17, metadata !DIExpression())

  Differential Revision: https://reviews.llvm.org/D67770
  llvm-svn: 372703
* [LV] Forced vectorization with runtime checks and OptForSize (Sjoerd Meijer, 2019-09-24, 1 file, -1/+31)
  When vectorisation is forced with a pragma while we optimise for minimum
  size and runtime memory checks need to be emitted, allow this code
  growth instead of running into an assert as we currently do. This is the
  result of D65197 and D66803, and was a use-case not really considered
  before. When this now happens, we emit an optimisation remark warning
  about the code-size expansion, which can be avoided by not forcing
  vectorisation or possibly by source-code modifications.

  Differential Revision: https://reviews.llvm.org/D67764
  llvm-svn: 372694
* [InstCombine] Fold a shifty implementation of clamp-to-allones. (Huihui Zhang, 2019-09-24, 1 file, -35/+24)
  Summary:
  Fold or(ashr(subNSW(Y, X), ScalarSizeInBits(Y)-1), X) into X s> Y ? -1 : X.
  https://rise4fun.com/Alive/d8Ab

  clamp255 is a common operator in image processing and can be implemented
  in a shifty way: "(255 - X) >> 31 | X & 255". Folding the shift into the
  select enables more optimization, e.g., vmin generation for the ARM
  target.

  Reviewers: lebedev.ri, efriedma, spatel, kparzysz, bcahoon
  Reviewed By: lebedev.ri
  Subscribers: kristof.beyls, hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D67800
  llvm-svn: 372678
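  The pattern in IR form, as a sketch for i32 (hypothetical function;
  ScalarSizeInBits(Y)-1 is 31 here):
  ```
  define i32 @clamp_to_allones(i32 %x, i32 %y) {
    %sub = sub nsw i32 %y, %x
    %sign = ashr i32 %sub, 31     ; -1 iff %x s> %y, else 0
    %r = or i32 %sign, %x         ; folds to: %x s> %y ? -1 : %x
    ret i32 %r
  }
  ```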
* [InstCombine] Fold a shifty implementation of clamp-to-zero. (Huihui Zhang, 2019-09-24, 1 file, -32/+22)
  Summary:
  Fold and(ashr(subNSW(Y, X), ScalarSizeInBits(Y)-1), X) into X s> Y ? X : 0.
  https://rise4fun.com/Alive/lFH

  Folding the shift into the select enables more optimization, e.g., vmax
  generation for the ARM target.

  Reviewers: lebedev.ri, efriedma, spatel, kparzysz, bcahoon
  Reviewed By: lebedev.ri
  Subscribers: xbolva00, andreadb, craig.topper, RKSimon, kristof.beyls, hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D67799
  llvm-svn: 372676
* [NFC][InstCombine] Add tests for shifty implementation of clamping. (Huihui Zhang, 2019-09-23, 2 files, -0/+473)
  Summary:
  Clamping a negative value to zero and clamping a positive value to
  allOnes are common operations in image saturation. Add tests for shifty
  implementations of clamping, as preparatory work for folding:

  and(ashr(subNSW(Y, X), ScalarSizeInBits(Y)-1), X) --> X s> 0 ? X : 0;
  or(ashr(subNSW(Y, X), ScalarSizeInBits(Y)-1), X) --> X s> Y ? allOnes : X.

  Reviewers: lebedev.ri, efriedma, spatel, kparzysz, bcahoon
  Subscribers: llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D67798
  llvm-svn: 372671
* HotColdSplitting: invalidate the AssumptionCache on split (Saleem Abdulrasool, 2019-09-23, 1 file, -0/+38)
  When a cold path is outlined, the value tracking in the assumption cache
  may be invalidated due to the code motion. We would previously trip an
  assertion in subsequent passes (but required the passes to happen in a
  single run, as the assumption cache is shared across the passes).
  Invalidating the cache ensures that we get the correct information when
  needed with the legacy pass manager as well.

  llvm-svn: 372667
* [SampleFDO] Treat names in profile as not cold only when profile symbol list is available (Wei Mi, 2019-09-23, 1 file, -9/+19)
  In rL372232, we treated names showing up in the profile as not cold when
  profile-sample-accurate is enabled. This caused a 70k size regression in
  Chrome/Android. This patch adds a guard and only enables the change when
  a profile symbol list is available, i.e., it keeps the old behavior when
  the profile symbol list is not available.

  Differential Revision: https://reviews.llvm.org/D67931
  llvm-svn: 372665
* [InstCombine] Annotate strndup calls with dereferenceable_or_null (David Bolvansky, 2019-09-23, 2 files, -2/+2)
  "Implementations are free to malloc() a buffer containing either
  (size + 1) bytes or (strnlen(s, size) + 1) bytes. Applications should
  not assume that strndup() will allocate (size + 1) bytes when strlen(s)
  is smaller than size."

  llvm-svn: 372647
* [SLC] Convert some strndup calls to strdup calls (David Bolvansky, 2019-09-23, 2 files, -7/+74)
  Summary:
  Motivation:
  - If we can fold it to strdup, we should (strndup does more things than
    strdup).
  - Annotation mechanism (works well for strdup).

  strdup and strndup are part of C 20 (currently POSIX functions), so we
  should optimize them.

  Reviewers: efriedma, jdoerfert
  Reviewed By: jdoerfert
  Subscribers: lebedev.ri, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D67679
  llvm-svn: 372636
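  A hedged sketch of the kind of call this converts (hypothetical global
  and function; it applies when the bound is a constant that provably
  covers the whole string):
  ```
  @s = private constant [6 x i8] c"hello\00"

  declare i8* @strndup(i8*, i64)

  define i8* @dup() {
    %p = getelementptr inbounds [6 x i8], [6 x i8]* @s, i64 0, i64 0
    ; strlen(%p) == 5 < 32, so this behaves exactly like strdup(%p)
    %r = call i8* @strndup(i8* %p, i64 32)
    ret i8* %r
  }
  ```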
* [InstCombine] dropRedundantMaskingOfLeftShiftInput(): pat. c/d/e with mask (PR42563) (Roman Lebedev, 2019-09-23, 3 files, -18/+18)
  Summary:
  If we have a pattern `(x & (-1 >> maskNbits)) << shiftNbits`, we already
  know (have a fold) that will drop the `& (-1 >> maskNbits)` mask iff
  `(shiftNbits-maskNbits) s>= 0` (i.e. `shiftNbits u>= maskNbits`).

  So even if `(shiftNbits-maskNbits) s< 0`, we can still fold; we will
  just need to apply a **constant** mask afterwards:
  ```
  Name: c, normal+mask
    %t0 = lshr i32 -1, C1
    %t1 = and i32 %t0, %x
    %r = shl i32 %t1, C2
  =>
    %n0 = shl i32 %x, C2
    %n1 = i32 ((-(C2-C1))+32)
    %n2 = zext i32 %n1 to i64
    %n3 = lshr i64 -1, %n2
    %n4 = trunc i64 %n3 to i32
    %r = and i32 %n0, %n4
  ```
  https://rise4fun.com/Alive/gslRa

  Naturally, the old `%masked` will have to be one-use. This is not valid
  for pattern f, where "masking" is done via `ashr`.

  https://bugs.llvm.org/show_bug.cgi?id=42563

  Reviewers: spatel, nikic, xbolva00
  Reviewed By: spatel
  Subscribers: hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D67725
  llvm-svn: 372630
* [InstCombine] dropRedundantMaskingOfLeftShiftInput(): pat. a/b with mask (PR42563) (Roman Lebedev, 2019-09-23, 2 files, -12/+12)
  Summary:
  And this is **finally** the interesting part of that fold!

  If we have a pattern `(x & (~(-1 << maskNbits))) << shiftNbits`, we
  already know (have a fold) that will drop the `& (~(-1 << maskNbits))`
  mask iff `(maskNbits+shiftNbits) u>= bitwidth(x)`. But that is actually
  ignorant; there is a more general fold here:

  In this pattern, `(maskNbits+shiftNbits)` actually correlates with the
  number of low bits that will remain in the final value. So even if
  `(maskNbits+shiftNbits) u< bitwidth(x)`, we can still fold; we will just
  need to apply a **constant** mask afterwards:
  ```
  Name: a, normal+mask
    %onebit = shl i32 -1, C1
    %mask = xor i32 %onebit, -1
    %masked = and i32 %mask, %x
    %r = shl i32 %masked, C2
  =>
    %n0 = shl i32 %x, C2
    %n1 = add i32 C1, C2
    %n2 = zext i32 %n1 to i64
    %n3 = shl i64 -1, %n2
    %n4 = xor i64 %n3, -1
    %n5 = trunc i64 %n4 to i32
    %r = and i32 %n0, %n5
  ```
  https://rise4fun.com/Alive/F5R

  Naturally, the old `%masked` will have to be one-use. A similar fold
  exists for patterns c/d/e; that patch will be posted later.

  https://bugs.llvm.org/show_bug.cgi?id=42563

  Reviewers: spatel, nikic, xbolva00
  Reviewed By: spatel
  Subscribers: hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D67677
  llvm-svn: 372629
* [SLP] Fix for PR31847: Assertion failed: (isLoopInvariant(Operands[i], L) && "SCEVAddRecExpr operand is not loop-invariant!") (Alexey Bataev, 2019-09-23, 20 files, -1263/+300)
  Summary:
  Initially the SLP vectorizer replaced all going-to-be-vectorized
  instructions with Undef values. This may break ScalarEvolution and may
  cause a crash. The SLP vectorizer is reworked so that it no longer
  replaces vectorized instructions with UndefValue; instead, vectorized
  instructions are marked for deletion inside the BoUpSLP class and
  deleted upon class destruction.

  Reviewers: mzolotukhin, mkuper, hfinkel, RKSimon, davide, spatel
  Subscribers: RKSimon, Gerolf, anemet, hans, majnemer, llvm-commits, sanjoy
  Differential Revision: https://reviews.llvm.org/D29641
  llvm-svn: 372626
* [InstCombine] allow icmp+binop folds before min/max bailout (PR43310) (Sanjay Patel, 2019-09-22, 2 files, -14/+8)
  This has the potential to uncover missed analysis/folds as shown in the
  min/max code comment/test, but fewer restrictions on icmp folds should
  be better in general to solve cases like:
  https://bugs.llvm.org/show_bug.cgi?id=43310

  llvm-svn: 372510
* [InstCombine] add tests for icmp fold hindered by min/max; NFC (Sanjay Patel, 2019-09-22, 1 file, -0/+36)
  llvm-svn: 372509
* [Cost][X86] Add v2i64 truncation costs (Simon Pilgrim, 2019-09-22, 1 file, -16/+64)
  We are missing costs for a lot of truncation cases; I'm hoping to
  address all the 'zero cost' cases in trunc.ll.

  I thought this was a vector widening side effect, but even before this
  we had some interesting LV decisions (notably over indvars) being made
  due to these zero costs.

  llvm-svn: 372498
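  One of the truncations in question, as a sketch (hypothetical function):
  the cost model previously reported this <2 x i64> -> <2 x i32> trunc as
  free.
  ```
  define <2 x i32> @trunc_v2i64_v2i32(<2 x i64> %v) {
    %t = trunc <2 x i64> %v to <2 x i32>
    ret <2 x i32> %t
  }
  ```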
* [InstSimplify] simplifyUnsignedRangeCheck(): X >= Y && Y == 0 --> Y == 0 (Roman Lebedev, 2019-09-21, 1 file, -3/+1)
  https://rise4fun.com/Alive/v9Y4

  llvm-svn: 372491
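  The fold as a sketch (hypothetical function): when %y == 0 holds,
  %x u>= %y is vacuously true, so the conjunction collapses to the
  equality test.
  ```
  define i1 @range_check(i8 %x, i8 %y) {
    %uge = icmp uge i8 %x, %y
    %is0 = icmp eq i8 %y, 0
    %r = and i1 %uge, %is0        ; simplifies to %is0
    ret i1 %r
  }
  ```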
* [NFC][InstSimplify] Add exhaustive test coverage for simplifyUnsignedRangeCheck() (Roman Lebedev, 2019-09-21, 1 file, -0/+132)
  One case is not handled.

  llvm-svn: 372489
* SROA: Check Total Bits of vector type (Suyog Sarda, 2019-09-21, 1 file, -0/+24)
  While promoting an alloca instruction of vector type, check the total
  size in bits of its slices too. If they don't match, don't promote the
  alloca instruction.

  Bug: https://bugs.llvm.org/show_bug.cgi?id=42585
  llvm-svn: 372480
* [Attributor] Implement "norecurse" function attribute deduction (Hideto Ueno, 2019-09-21, 2 files, -48/+103)
  Summary:
  This patch introduces `norecurse` function attribute deduction.

  `norecurse` will be deduced if the following conditions hold:
  * The size of the SCC to which the function belongs equals 1.
  * The function has no self-recursion.
  * We have `norecurse` for all call sites.

  To avoid a large change, the SCC is calculated using scc_iterator in the
  InfoCache initialization for now.

  Reviewers: jdoerfert, sstefan1
  Reviewed By: jdoerfert
  Subscribers: hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D67751
  llvm-svn: 372475
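  A minimal sketch of the deduction (hypothetical module): @callee forms a
  singleton SCC, does not call itself, and its only call site is in a
  function already known to be norecurse, so the Attributor can mark
  @callee norecurse as well.
  ```
  define internal void @callee() {
    ret void
  }

  define void @caller() norecurse {
    call void @callee()
    ret void
  }
  ```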
* [Inliner] Remove incorrect early exit during switch cost computation (Teresa Johnson, 2019-09-20, 1 file, -0/+160)
  Summary:
  CallAnalyzer::visitSwitchInst has an early exit for when the estimated
  lower bound of the switch cost will put the overall cost of the inline
  above the threshold. However, this code does not correctly estimate the
  lower bound for switches that can be transformed into bit tests, leading
  to unnecessary lost inlines, and also to differing behavior with
  optimization remarks enabled.

  First, the early exit is controlled by whether ComputeFullInlineCost is
  enabled or not, and that in turn is disabled by default but enabled when
  enabling -pass-remarks=missed. This by itself wouldn't lead to a
  problem, except that, as described below, the lower bound can be above
  the real lower bound, so we can sometimes get different inline decisions
  with inline remarks enabled, which is problematic.

  The early exit was added along with a new switch cost model in D31085,
  due to a concern one reviewer raised about compile time for large
  switches: https://reviews.llvm.org/D31085?id=94559#inline-276200

  However, the code just below there calls
  getEstimatedNumberOfCaseClusters, which in turn immediately calls the
  BasicTTIImpl getEstimatedNumberOfCaseClusters, which in the worst case
  does a linear scan of the cases to get the high and low values. The bit
  test handling in particular is guarded by whether the number of cases
  fits into the max bit width. There is no suggestion that anyone measured
  a compile time issue; it appears to be theoretical.

  The problem is that the reviewer's comment about the lower bound
  calculation is incorrect, specifically in the case of a switch that can
  be lowered to a bit test. This isn't followed up on the comment thread,
  but the author does add a FIXME to that effect above the early exit when
  they subsequently revised the patch.

  As a result, we were incorrectly exiting early and not inlining
  functions with switch statements that would be lowered to bit tests in
  cases where we were nearing the threshold. Combined with the fact that
  this early exit was skipped with opt remarks enabled, this caused
  different inlining decisions to be made when -pass-remarks=missed is
  enabled to debug the missing inline.

  Remove the early exit for the above reasons. I also copied over an
  existing AArch64 inlining test to X86 and adjusted the threshold so that
  the bit test inline only occurs with the fix in this patch.

  Reviewers: davidxl
  Subscribers: eraman, kristof.beyls, haicheng, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D67716
  llvm-svn: 372440
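  For reference, a sketch of a switch that is lowered to a bit test
  (hypothetical function): all case values fit in a word-sized mask, so
  the switch's real cost is far below the per-case lower bound the removed
  early exit assumed.
  ```
  define i1 @is_special(i32 %x) {
  entry:
    switch i32 %x, label %no [
      i32 0,  label %yes
      i32 3,  label %yes
      i32 11, label %yes
      i32 30, label %yes
    ]
  yes:
    ret i1 true
  no:
    ret i1 false
  }
  ```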
* [NFC][InstCombine] Fixup newly-added tests (Roman Lebedev, 2019-09-20, 2 files, -8/+8)
  llvm-svn: 372413
* [InstCombine] Tests for (a+b)<=a && (a+b)!=0 fold (PR43259) (Roman Lebedev, 2019-09-20, 2 files, -4/+189)
  https://rise4fun.com/Alive/knp
  https://rise4fun.com/Alive/ALap

  llvm-svn: 372402
* [SLPVectorizer] add tests for bogus reductions; NFC (Sanjay Patel, 2019-09-20, 1 file, -0/+334)
  https://bugs.llvm.org/show_bug.cgi?id=42708
  https://bugs.llvm.org/show_bug.cgi?id=43146

  llvm-svn: 372393
* [ObjC][ARC] Skip debug instructions when computing the insert point of objc_release calls (Akira Hatanaka, 2019-09-19, 1 file, -0/+39)
  This fixes a bug where the presence of debug instructions would cause
  the ARC optimizer to change the order of retain and release calls.

  rdar://problem/55319419
  llvm-svn: 372352
* Don't use invalidated iterators in FlattenCFGPass (Jakub Kuderski, 2019-09-19, 1 file, -0/+30)
  Summary:
  FlattenCFG may erase unnecessary blocks, which also invalidates
  iterators to those erased blocks. Before this patch,
  `iterativelyFlattenCFG` could try to increment a BB iterator after that
  BB has been removed and crash.

  This patch makes FlattenCFGPass use `WeakVH` to skip over erased blocks.

  Reviewers: dblaikie, tstellar, davide, sanjoy, asbirlea, grosser
  Reviewed By: asbirlea
  Subscribers: hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D67672
  llvm-svn: 372347
* [InstCombine] Simplify @llvm.usub.with.overflow+non-zero check (PR43251) (Roman Lebedev, 2019-09-19, 1 file, -18/+18)
  Summary:
  This is again motivated by the D67122 sanitizer check enhancement. That
  patch seemingly worsens `-fsanitize=pointer-overflow` overhead from 25%
  to 50%, which strongly implies missing folds.

  In this particular case, given
  ```
  char* test(char& base, unsigned long offset) {
    return &base - offset;
  }
  ```
  it will end up producing something like https://godbolt.org/z/luGEju,
  which after optimizations reduces down to roughly
  ```
  declare void @use64(i64)
  define i1 @test(i8* dereferenceable(1) %base, i64 %offset) {
    %base_int = ptrtoint i8* %base to i64
    %adjusted = sub i64 %base_int, %offset
    call void @use64(i64 %adjusted)
    %not_null = icmp ne i64 %adjusted, 0
    %no_underflow = icmp ule i64 %adjusted, %base_int
    %no_underflow_and_not_null = and i1 %not_null, %no_underflow
    ret i1 %no_underflow_and_not_null
  }
  ```
  Without D67122 there was no `%not_null`, and in this particular case we
  can "get rid of it" by merging the two checks: here we are checking
  `Base u>= Offset && (Base u- Offset) != 0`, but that is simply
  `Base u> Offset`.

  Alive proofs: https://rise4fun.com/Alive/QOs

  The `@llvm.usub.with.overflow` pattern itself is not handled here
  because this is the main pattern that we currently consider canonical.

  https://bugs.llvm.org/show_bug.cgi?id=43251

  Reviewers: spatel, nikic, xbolva00, majnemer
  Reviewed By: xbolva00, majnemer
  Subscribers: vsk, majnemer, xbolva00, hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D67356
  llvm-svn: 372341
* [Float2Int] avoid crashing on unreachable code (PR38502) (Sanjay Patel, 2019-09-19, 1 file, -0/+22)
  In the example from https://bugs.llvm.org/show_bug.cgi?id=38502, we hit
  infinite looping/crashing because we have non-standard IR: an
  instruction operand is used before being defined. This and other unusual
  constructs are allowed in unreachable blocks, so avoid the problem by
  using DominatorTree to step around landmines.

  Differential Revision: https://reviews.llvm.org/D67766
  llvm-svn: 372339
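  A sketch of the kind of non-standard-but-verifier-legal IR involved
  (a hypothetical reduction, not the PR's testcase): in an unreachable
  block, an operand may be used before it is defined.
  ```
  define void @f() {
  entry:
    ret void

  dead:                           ; no predecessors, unreachable
    %x = fadd float %x, 1.0       ; self-referential use: legal only here
    br label %dead
  }
  ```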
* [Float2Int] auto-generate complete test checks; NFC (Sanjay Patel, 2019-09-19, 1 file, -177/+213)
  llvm-svn: 372324
* [Unroll] Add an option to control complete unrolling (Serguei Katkov, 2019-09-19, 1 file, -0/+35)
  Add the ability to specify the maximum full unroll count for
  LoopUnrollPass in the pass options.

  Reviewers: fhahn, fedor.sergeev
  Reviewed By: fedor.sergeev
  Subscribers: hiraditya, zzheng, dmgreen, llvm-commits
  Differential Revision: https://reviews.llvm.org/D67701
  llvm-svn: 372305
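  Presumably this is exercised through the new pass manager's pass-options
  syntax; a hypothetical RUN line follows (treat the exact
  "full-unroll-max" spelling as an assumption):
  ```
  ; RUN: opt -passes='loop-unroll<full-unroll-max=8>' -S %s
  ; A loop with a constant trip count of 100 would normally be fully
  ; unrolled; the option above caps complete unrolling.
  define void @f(i32* %p) {
  entry:
    br label %loop
  loop:
    %i = phi i32 [ 0, %entry ], [ %i.next, %loop ]
    %addr = getelementptr inbounds i32, i32* %p, i32 %i
    store i32 0, i32* %addr
    %i.next = add nuw nsw i32 %i, 1
    %cmp = icmp ult i32 %i.next, 100
    br i1 %cmp, label %loop, label %exit
  exit:
    ret void
  }
  ```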