path: root/llvm/lib/Transforms/InstCombine
Commit message (Author, Age, Files changed, Lines -/+)
...
* [InstCombine] Canonicalize const arg for saturating adds (Nikita Popov, 2018-11-28, 1 file, -0/+6)
  If a saturating add intrinsic has one constant argument, make sure it is on the RHS. This will simplify further transformations.
  This change is part of https://reviews.llvm.org/D54534.
  llvm-svn: 347769
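  A minimal sketch of the canonicalization (hypothetical values), using one of the saturating add intrinsics:

    %r = call i32 @llvm.uadd.sat.i32(i32 42, i32 %x)
      =>
    %r = call i32 @llvm.uadd.sat.i32(i32 %x, i32 42)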
* InstCombine: add comment explaining malloc deletion. NFC. (Tim Northover, 2018-11-27, 1 file, -3/+10)
  I tried to change this, not quite realising the logic behind what we were doing. Hopefully this comment will help the next person to come along.
  llvm-svn: 347653
* [InstCombine] add helper function to reduce code duplication; NFC (Sanjay Patel, 2018-11-26, 1 file, -24/+19)
  llvm-svn: 347604
* [InstCombine] Determine demanded and known bits for funnel shifts (Nikita Popov, 2018-11-24, 1 file, -0/+24)
  Support funnel shifts in InstCombine demanded bits simplification. If the shift amount is constant, we can determine both the demanded bits of the operands and the known bits of the result.
  If one of the operands has no demanded bits, it will be replaced by undef and the funnel shift will be simplified into a simple shift due to the simplifications added in D54778.
  Differential Revision: https://reviews.llvm.org/D54869
  llvm-svn: 347515
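  An illustrative case (hypothetical values): with i8 operands, fshl(%x, %y, 3) computes (%x << 3) | (%y >> 5), so a user that only demands the top five bits leaves %y with no demanded bits:

    %f = call i8 @llvm.fshl.i8(i8 %x, i8 %y, i8 3)
    %r = and i8 %f, -8        ; bits 2..0 (the only bits %y feeds) are not demanded
      =>
    %s = shl i8 %x, 3
    %r = and i8 %s, -8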
* [InstCombine] Simplify funnel shift with zero/undef operand to shift (Nikita Popov, 2018-11-23, 1 file, -0/+23)
  The following simplifications are implemented:
  * `fshl(X, 0, C) -> shl X, C%BW`
  * `fshl(X, undef, C) -> shl X, C%BW` (assuming undef = 0)
  * `fshl(0, X, C) -> lshr X, BW-C%BW`
  * `fshl(undef, X, C) -> lshr X, BW-C%BW` (assuming undef = 0)
  * `fshr(X, 0, C) -> shl X, BW-C%BW`
  * `fshr(X, undef, C) -> shl X, BW-C%BW` (assuming undef = 0)
  * `fshr(0, X, C) -> lshr X, C%BW`
  * `fshr(undef, X, C) -> lshr X, C%BW` (assuming undef = 0)
  The simplification is only performed if the shift amount C is constant, because we can explicitly compute C%BW and BW-C%BW in this case.
  Differential Revision: https://reviews.llvm.org/D54778
  llvm-svn: 347505
* [InstCombine] Set debug loc on `mergeStoreIntoSuccessor` phi (Vedant Kumar, 2018-11-19, 1 file, -2/+6)
  Assigning a merged debug location to the `mergeStoreIntoSuccessor` phi improves backtrace quality.
  Fixes llvm.org/PR38083.
  llvm-svn: 347257
* [IR] Add hasNPredecessors, hasNPredecessorsOrMore to BasicBlock (Vedant Kumar, 2018-11-19, 1 file, -1/+1)
  Add methods to BasicBlock which make it easier to efficiently check whether a block has N (or more) predecessors. This can be more efficient than using pred_size(), which is a linear-time operation.
  We might consider adding similar methods for successors. I haven't done so in this patch because succ_size() is already O(1).
  With this patch applied, I measured a 0.065% compile-time reduction in user time for running `opt -O3` on the sqlite3 amalgamation (30 trials). The change in mergeStoreIntoSuccessor alone saves 45 million linked-list iterations in a stage2 Release build of llc.
  See llvm.org/PR39702 for a harder but more general way of achieving similar results.
  Differential Revision: https://reviews.llvm.org/D54686
  llvm-svn: 347256
* [InstCombine] fix rotate narrowing bug for non-pow-2 types (Sanjay Patel, 2018-11-15, 1 file, -2/+7)
  llvm-svn: 346968
* [InstCombine] Remove a couple of asserts based on incorrect assumptions (Mandeep Singh Grang, 2018-11-14, 1 file, -11/+4)
  Summary: These asserts are based on the assumption that the order of true/false operands in a select and those in the compare would always be the same. This fixes PR39595.
  Reviewers: craig.topper, spatel, dmgreen
  Reviewed By: craig.topper
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D54359
  llvm-svn: 346874
* [InstCombine] fix formatting for matchBSwap(); NFC (Sanjay Patel, 2018-11-14, 2 files, -7/+9)
  We should have a similar function for matching rotate and/or funnel shift, so tidy up the related existing call.
  llvm-svn: 346871
* [InstCombine] fold funnel shift amount based on demanded bits (Sanjay Patel, 2018-11-13, 1 file, -0/+14)
  The shift amount of a funnel shift is modulo the scalar bitwidth:
  http://llvm.org/docs/LangRef.html#llvm-fshl-intrinsic
  ...so we can use demanded bits analysis on that operand to simplify it when we have a power-of-2 bitwidth.
  This is another step towards canonicalizing {shift/shift/or} to the intrinsics in IR.
  Differential Revision: https://reviews.llvm.org/D54478
  llvm-svn: 346814
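  A minimal sketch of the idea above (hypothetical values): for a 32-bit funnel shift, only the low five bits of the amount are demanded, so an explicit mask on the amount is redundant:

    %m = and i32 %sh, 31
    %r = call i32 @llvm.fshl.i32(i32 %x, i32 %y, i32 %m)
      =>
    %r = call i32 @llvm.fshl.i32(i32 %x, i32 %y, i32 %sh)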
* [InstCombine] canonicalize rotate patterns with cmp/select (Sanjay Patel, 2018-11-13, 1 file, -0/+63)
  The cmp+branch variant of this pattern is shown in:
  https://bugs.llvm.org/show_bug.cgi?id=34924
  ...and as discussed there, we probably can't transform that without a rotate intrinsic. We do have that now via funnel shift, but we're not quite ready to canonicalize IR to that form yet.
  The case with 'select' should already be transformed though, so that's this patch. The sequence with negation followed by masking is what we use in the backend and partly in clang (though that part should be updated).
  https://rise4fun.com/Alive/TplC

    %cmp = icmp eq i32 %shamt, 0
    %sub = sub i32 32, %shamt
    %shr = lshr i32 %x, %shamt
    %shl = shl i32 %x, %sub
    %or = or i32 %shr, %shl
    %r = select i1 %cmp, i32 %x, i32 %or
      =>
    %neg = sub i32 0, %shamt
    %masked = and i32 %shamt, 31
    %maskedneg = and i32 %neg, 31
    %shl2 = lshr i32 %x, %masked
    %shr2 = shl i32 %x, %maskedneg
    %r = or i32 %shl2, %shr2

  llvm-svn: 346807
* [InstCombine] narrow width of rotate patterns, part 3 (Sanjay Patel, 2018-11-12, 1 file, -0/+5)
  This is a longer variant of the pattern handled in rL346713; this one includes zexts.
  Eventually, we should canonicalize all rotate patterns to the funnel shift intrinsics, but we need a bit more infrastructure to make sure the vectorizers handle those intrinsics as well as the shift+logic ops.
  https://rise4fun.com/Alive/FMn
  Name: narrow rotateright

    %neg = sub i8 0, %shamt
    %rshamt = and i8 %shamt, 7
    %rshamtconv = zext i8 %rshamt to i32
    %lshamt = and i8 %neg, 7
    %lshamtconv = zext i8 %lshamt to i32
    %conv = zext i8 %x to i32
    %shr = lshr i32 %conv, %rshamtconv
    %shl = shl i32 %conv, %lshamtconv
    %or = or i32 %shl, %shr
    %r = trunc i32 %or to i8
      =>
    %maskedShAmt2 = and i8 %shamt, 7
    %negShAmt2 = sub i8 0, %shamt
    %maskedNegShAmt2 = and i8 %negShAmt2, 7
    %shl2 = lshr i8 %x, %maskedShAmt2
    %shr2 = shl i8 %x, %maskedNegShAmt2
    %r = or i8 %shl2, %shr2

  llvm-svn: 346716
* [InstCombine] narrow width of rotate patterns, part 2 (PR39624) (Sanjay Patel, 2018-11-12, 1 file, -0/+8)
  The sub-pattern for the shift amount in a rotate can take on several different forms, and there's apparently no way to canonicalize those without seeing the entire rotate sequence.
  This is the form noted in:
  https://bugs.llvm.org/show_bug.cgi?id=39624
  https://rise4fun.com/Alive/qnT

    %zx = zext i8 %x to i32
    %maskedShAmt = and i32 %shAmt, 7
    %shl = shl i32 %zx, %maskedShAmt
    %negShAmt = sub i32 0, %shAmt
    %maskedNegShAmt = and i32 %negShAmt, 7
    %shr = lshr i32 %zx, %maskedNegShAmt
    %rot = or i32 %shl, %shr
    %r = trunc i32 %rot to i8
      =>
    %truncShAmt = trunc i32 %shAmt to i8
    %maskedShAmt2 = and i8 %truncShAmt, 7
    %shl2 = shl i8 %x, %maskedShAmt2
    %negShAmt2 = sub i8 0, %truncShAmt
    %maskedNegShAmt2 = and i8 %negShAmt2, 7
    %shr2 = lshr i8 %x, %maskedNegShAmt2
    %r = or i8 %shl2, %shr2

  llvm-svn: 346713
* [InstCombine] refactor code for matching shift amount of a rotate; NFC (Sanjay Patel, 2018-11-12, 1 file, -12/+17)
  As shown in existing test cases and with:
  https://bugs.llvm.org/show_bug.cgi?id=39624
  ...we're missing at least 2 more patterns for rotate narrowing.
  llvm-svn: 346711
* [GC][InstCombine] Fix a potential iteration issue (Philip Reames, 2018-11-12, 1 file, -1/+4)
  Noticed via inspection. Appears to be largely innocuous in practice, but a slight code change could have resulted in either visit-order-dependent missed optimizations or infinite loops. May be a minor compile-time problem today.
  llvm-svn: 346698
* [InstCombine] simplify code for merging stores; NFCI (Sanjay Patel, 2018-11-10, 2 files, -51/+28)
  llvm-svn: 346596
* InstCombine: Avoid introducing poison values when lowering llvm.amdgcn.[us]bfe (Tom Stellard, 2018-11-08, 1 file, -13/+5)
  Summary: When the 3rd argument to these intrinsics is zero, lowering them to shift instructions produces poison values, since we end up with shift amounts equal to the number of bits in the shifted value. This means we can only lower these intrinsics if we can prove that the 3rd argument is not zero.
  Reviewers: arsenm
  Reviewed By: arsenm
  Subscribers: bnieuwenhuizen, jvesely, wdng, nhaehnle, llvm-commits
  Differential Revision: https://reviews.llvm.org/D53739
  llvm-svn: 346422
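  A sketch of the hazard, assuming the usual two-shift expansion of an unsigned bitfield extract, (x << (32-offset-width)) >> (32-width):

    %r = call i32 @llvm.amdgcn.ubfe.i32(i32 %x, i32 8, i32 0)   ; width (3rd arg) == 0
    ; a shift-based expansion would be:
    %t = shl i32 %x, 24       ; 32 - 8 - 0
    %r2 = lshr i32 %t, 32     ; 32 - 0: shift amount == bit width -> poison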
* [InstCombine] propagate FMF for fcmp+fabs folds (Sanjay Patel, 2018-11-07, 1 file, -7/+13)
  By morphing the instruction rather than deleting it and creating a new one, we retain fast-math-flags and potentially other metadata (profile info?).
  llvm-svn: 346331
* [InstCombine] peek through fabs() when checking isnan() (Sanjay Patel, 2018-11-07, 1 file, -1/+6)
  That should be the end of the missing cases for this fold. See earlier patches in this series:
  rL346321
  rL346324
  llvm-svn: 346327
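  A minimal sketch of the fold above (hypothetical values): fabs() never changes whether a value is a NaN, so an isnan-style compare can look through it:

    %f = call double @llvm.fabs.f64(double %x)
    %r = fcmp uno double %f, 0.0
      =>
    %r = fcmp uno double %x, 0.0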
* [InstCombine] add folds for fcmp Pred fabs(X), 0.0 (Sanjay Patel, 2018-11-07, 1 file, -0/+8)
  Similar to rL346321, we had folds for the ordered versions of these compares already, so add the unordered siblings for completeness.
  llvm-svn: 346324
* [InstCombine] add fold for fabs(X) u< 0.0 (Sanjay Patel, 2018-11-07, 1 file, -0/+5)
  The sibling fold for 'oge' --> 'ord' was already here, but this half was missing.
  The result of fabs() must be positive or nan, so asking if the result is negative or nan is the same as asking if the result is nan.
  This is another step towards fixing:
  https://bugs.llvm.org/show_bug.cgi?id=39475
  llvm-svn: 346321
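  The same reasoning expressed in IR (a sketch with hypothetical values):

    %f = call float @llvm.fabs.f32(float %x)
    %r = fcmp ult float %f, 0.0     ; "negative or nan" of a non-negative-or-nan value
      =>
    %r = fcmp uno float %x, 0.0     ; i.e. isnan(%x)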
* fix typos aggressively; NFC (Sanjay Patel, 2018-11-07, 1 file, -1/+1)
  llvm-svn: 346316
* [InstCombine] do not shrink switch conditions to illegal types (PR29009) (Sanjay Patel, 2018-11-07, 1 file, -3/+5)
  This patch makes the shrinking of switch conditions, introduced by rL274233, less aggressive.
  Note that we have 2 new bugs to track potential follow-ups that might have solved PR29009 in different ways:
  https://bugs.llvm.org/show_bug.cgi?id=39569
  https://bugs.llvm.org/show_bug.cgi?id=39578
  Patch by: @dendibakh (Denis Bakhvalov)
  Differential Revision: https://reviews.llvm.org/D54115
  llvm-svn: 346315
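  A sketch of the kind of narrowing that is now restricted (hypothetical example): a switch whose case values all fit in three bits could previously be rewritten to switch on an illegal type such as i3:

    switch i32 %x, label %def [
      i32 1, label %a
      i32 5, label %b
    ]
    ; previously could become:
    %t = trunc i32 %x to i3
    switch i3 %t, label %def [ ... ]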
* [IR] add optional parameter for copying IR flags to compare instructions (Sanjay Patel, 2018-11-07, 1 file, -29/+14)
  As shown, this is used to eliminate redundant code in InstCombine, and there are more cases where we should be using this pattern, but we're currently unintentionally dropping flags.
  llvm-svn: 346282
* [InstCombine] allow vector types for fcmp+fpext fold (Sanjay Patel, 2018-11-06, 1 file, -9/+9)
  llvm-svn: 346245
* [InstCombine] propagate fast-math-flags when folding fcmp+fpext, part 2 (Sanjay Patel, 2018-11-06, 1 file, -3/+6)
  llvm-svn: 346242
* [InstCombine] rearrange code for fcmp+fpext; NFCI (Sanjay Patel, 2018-11-06, 1 file, -29/+27)
  llvm-svn: 346241
* [InstCombine] propagate fast-math-flags when folding fcmp+fpext (Sanjay Patel, 2018-11-06, 1 file, -5/+7)
  llvm-svn: 346240
* [InstCombine] propagate fast-math-flags when folding fcmp+fneg, part 2 (Sanjay Patel, 2018-11-06, 1 file, -2/+3)
  llvm-svn: 346238
* [InstCombine] reduce code; NFC (Sanjay Patel, 2018-11-06, 1 file, -1/+1)
  llvm-svn: 346235
* [InstCombine] propagate fast-math-flags when folding fcmp+fneg (Sanjay Patel, 2018-11-06, 1 file, -11/+16)
  This is another part of solving PR39475:
  https://bugs.llvm.org/show_bug.cgi?id=39475
  This might be enough to fix that particular issue, but as noted with the FIXME, we're still dropping FMF on other folds around here.
  llvm-svn: 346234
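  A sketch of one such fold with the flags kept (hypothetical values; the patch may cover other fneg shapes too): comparing two negated values is the same as comparing the originals with the predicate swapped, and the 'fast' flag should survive the rewrite:

    %nx = fsub fast double -0.0, %x
    %ny = fsub fast double -0.0, %y
    %r = fcmp fast olt double %nx, %ny
      =>
    %r = fcmp fast ogt double %x, %y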
* [InstCombine] Ensure nested shifts are in range (OSS-Fuzz #9880) (Simon Pilgrim, 2018-11-06, 1 file, -5/+6)
  llvm-svn: 346225
* [InstSimplify] fold select (fcmp X, Y), X, Y (Sanjay Patel, 2018-11-05, 1 file, -50/+0)
  This is NFCI for InstCombine because it calls InstSimplify, so I left the tests for this transform there.
  As noted in the code comment, we can allow this fold more often by using FMF and/or value tracking.
  llvm-svn: 346169
* [InstCombine] canonicalize -0.0 to +0.0 in fcmp (Sanjay Patel, 2018-11-05, 1 file, -0/+7)
  As stated in IEEE-754 and discussed in:
  https://bugs.llvm.org/show_bug.cgi?id=38086
  ...the sign of zero does not affect any FP compare predicate.
  Known regressions were fixed with:
  rL346097 (D54001)
  rL346143
  The transform will help reduce pattern-matching complexity to solve:
  https://bugs.llvm.org/show_bug.cgi?id=39475
  ...as well as improve CSE and codegen (a zero constant is almost always easier to produce than 0x80..00).
  llvm-svn: 346147
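  A minimal sketch of the canonicalization (hypothetical values):

    %r = fcmp olt float %x, -0.0
      =>
    %r = fcmp olt float %x, 0.0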
* [InstCombine] loosen FP 0.0 constraint for fcmp+select substitution (Sanjay Patel, 2018-11-05, 1 file, -4/+13)
  It looks like we correctly removed edge cases with 0.0 from D50714, but we were a bit conservative because getBinOpIdentity() doesn't distinguish between +0.0 and -0.0, and 'nsz' is effectively always true for fcmp (see discussion in: https://bugs.llvm.org/show_bug.cgi?id=38086).
  Without this change, we would get regressions by canonicalizing to +0.0 in all fcmp, and that's a step towards solving:
  https://bugs.llvm.org/show_bug.cgi?id=39475
  llvm-svn: 346143
* [InstCombine] Combine nested min/max intrinsics with constants (Volkan Keles, 2018-10-31, 1 file, -1/+35)
  Reviewers: arsenm, spatel
  Reviewed By: spatel
  Subscribers: lebedev.ri, wdng, llvm-commits
  Differential Revision: https://reviews.llvm.org/D53774
  llvm-svn: 345751
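  A sketch of the shape of this combine, assuming it applies to the llvm.minnum/llvm.maxnum intrinsics: nested calls with two constants collapse because min(min(x, C1), C2) == min(x, min(C1, C2)):

    %m1 = call float @llvm.minnum.f32(float %x, float 1.0)
    %m2 = call float @llvm.minnum.f32(float %m1, float 2.0)
      =>
    %m2 = call float @llvm.minnum.f32(float %x, float 1.0)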
* [InstCombine] refactor fabs+fcmp fold; NFC (Sanjay Patel, 2018-10-31, 1 file, -39/+45)
  Also, remove/replace/minimize/enhance the tests for this fold. The code drops FMF, so it needs more tests and at least 1 fix.
  llvm-svn: 345734
* [InstCombine] add assertion that InstSimplify has folded a fabs+fcmp; NFC (Sanjay Patel, 2018-10-31, 1 file, -2/+5)
  The 'OLT' case was updated at rL266175, so I assume it was just an oversight that 'UGE' was not included, because that patch handled both predicates in InstSimplify.
  llvm-svn: 345727
* [InstSimplify] fold 'fcmp nnan oge X, 0.0' when X is not negative (Sanjay Patel, 2018-10-31, 1 file, -2/+3)
  This re-raises some of the open questions about how to apply and use fast-math-flags in IR from PR38086:
  https://bugs.llvm.org/show_bug.cgi?id=38086
  ...but given the current implementation (no FMF on casts), this is likely the only way to predicate the transform.
  This is part of solving PR39475:
  https://bugs.llvm.org/show_bug.cgi?id=39475
  Differential Revision: https://reviews.llvm.org/D53874
  llvm-svn: 345725
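  One possible instance (a sketch; the source of the non-negativity is hypothetical): a value produced by uitofp can never be ordered-less-than zero, so with nnan the compare is a tautology:

    %x = uitofp i32 %i to float
    %r = fcmp nnan oge float %x, 0.0   ; folds to 'true'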
* [InstCombine] use 'match' to reduce code; NFC (Sanjay Patel, 2018-10-30, 1 file, -92/+90)
  llvm-svn: 345647
* [InstCombine] Teach the move free before null test opti how to deal with noop casts (Quentin Colombet, 2018-10-30, 1 file, -12/+36)
  InstCombine features an optimization that essentially replaces:

    if (a)
      free(a)

  into:

    free(a)

  Right now, this optimization is gated by the minsize attribute, and therefore we only perform it if we can prove that we are going to be able to eliminate the branch and the destination block.
  However, when casts are involved the optimization would fail to apply, because it was not smart enough to realize that it is possible to also move the casts away from the destination block, and that this is harmless for performance since they are just noops. E.g.,

    foo(int *a)
      if (a)
        free((char*)a)

  wouldn't be optimized by InstCombine, because:
  - We would refuse to hoist the `bitcast i32* %a to i8*` in the source block.
  - We would fail to see that `bitcast i32* %a to i8*` and %a are the same value.
  This patch fixes both these problems:
  - It teaches the pattern matching of the comparison how to look through casts.
  - It checks whether the additional instructions in the destination block can be hoisted and are harmless performance-wise.
  - It hoists all the code of the destination block into the source block.
  Differential Revision: D53356
  llvm-svn: 345644
* [InstCombine] use getFltSemantics() instead of duplicating it; NFC (Sanjay Patel, 2018-10-30, 1 file, -19/+3)
  llvm-svn: 345613
* [InstCombine] try to turn shuffle into insertelement (Sanjay Patel, 2018-10-30, 1 file, -0/+70)
  shuffle (insert ?, Scalar, IndexC), V1, Mask --> insert V1, Scalar, IndexC'
  The motivating case is at least a couple of steps away: I noticed that SLPVectorizer does not analyze shuffles as well as sequences of insert/extract in PR34724:
  https://bugs.llvm.org/show_bug.cgi?id=34724
  ...so SLP may fail to vectorize when source code has shuffles to start with or instcombine has converted insert/extract to shuffles.
  Independent of that, an insertelement is always a simpler op for IR analysis vs. a shuffle, so we should transform to insert when possible.
  I don't think there's any codegen concern here - if a target can't insert a scalar directly to some fixed element in a vector (x86?), then this should get expanded to the insert+shuffle that we started with.
  Differential Revision: https://reviews.llvm.org/D53507
  llvm-svn: 345607
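  A concrete sketch of the transform (hypothetical values): the mask picks exactly one lane from the insert's vector, and that lane is the inserted scalar, so the whole shuffle is an insert into %v:

    %i = insertelement <4 x float> undef, float %s, i32 0
    %r = shufflevector <4 x float> %i, <4 x float> %v, <4 x i32> <i32 4, i32 5, i32 0, i32 7>
      =>
    %r = insertelement <4 x float> %v, float %s, i32 2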
* [FPEnv] Last BinaryOperator::isFNeg(...) to m_FNeg(...) changes (Cameron McInally, 2018-10-25, 1 file, -2/+3)
  Replacing BinaryOperator::isFNeg(...) to avoid regressions when we separate FNeg from the FSub IR instruction.
  Differential Revision: https://reviews.llvm.org/D53650
  llvm-svn: 345295
* Add -instcombine-code-sinking option (Gabor Buella, 2018-10-25, 1 file, -1/+5)
  Reviewers: craig.topper, andrew.w.kaylor, efriedma
  Reviewed By: craig.topper, andrew.w.kaylor, efriedma
  Differential Revision: https://reviews.llvm.org/D52709
  llvm-svn: 345248
* [InstCombine] try harder to form select from logic ops (2nd try) (Sanjay Patel, 2018-10-24, 2 files, -30/+45)
  The original patch was committed here:
  rL344609
  ...and reverted:
  rL344612
  ...because it did not properly check/test data types before calling ComputeNumSignBits().
  The tests that caused bot failures for the previous commit are over-reaching front-end tests that run the entire -O optimizer pipeline:
  Clang :: CodeGen/builtins-systemz-zvector.c
  Clang :: CodeGen/builtins-systemz-zvector2.c
  I've added a negative test here to ensure coverage for that case. The new early exit check also tests the type of the 'B' parameter, so we don't waste time on matching if either value is unsuitable.
  Original commit message:
  This is part of solving PR37549:
  https://bugs.llvm.org/show_bug.cgi?id=37549
  The patterns shown here are a special case of something that we already convert to select. Using ComputeNumSignBits() catches that case (but not the more complicated motivating patterns yet).
  The backend has hooks/logic to convert back to logic ops if that's better for the target.
  llvm-svn: 345149
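  For orientation, one classic instance of the bitwise-select idiom this family of folds targets (a hedged sketch; the exact patterns handled by the patch may differ): when %a is known to be all-ones or all-zero, e.g. a sign-splat, the and/or network acts as a select:

    %a = ashr i32 %x, 31          ; all-ones if %x < 0, else zero
    %nota = xor i32 %a, -1
    %t1 = and i32 %a, %tval
    %t2 = and i32 %nota, %fval
    %r = or i32 %t1, %t2
    ; behaves as:
    %c = icmp slt i32 %x, 0
    %r2 = select i1 %c, i32 %tval, i32 %fval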
* [InstCombine] use 'match' to simplify code (Sanjay Patel, 2018-10-23, 1 file, -1/+1)
  There's probably some vector-with-undef-element pattern that shows an improvement, so this is probably not quite 'NFC'.
  This is the last step towards removing the fake binop queries for not/neg. Ie, there are no more uses of those functions in trunk. Fneg should follow.
  llvm-svn: 345050
* [InstCombine] use 'match' to handle vectors and simplify code (Sanjay Patel, 2018-10-23, 1 file, -2/+3)
  This is another step towards completely removing the fake binop queries for not/neg/fneg.
  llvm-svn: 345036
* [InstCombine] swap select profile metadata when swapping select ops (Sanjay Patel, 2018-10-23, 1 file, -0/+1)
  llvm-svn: 345034
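  A sketch of why the swap matters (hypothetical weights): if the select's arms are exchanged (with the condition inverted), the branch_weights must be exchanged too, or the profile data describes the wrong arm:

    %r = select i1 %c, i32 %a, i32 %b, !prof !0    ; !0 = !{!"branch_weights", i32 10, i32 1}
      =>
    %not = xor i1 %c, true
    %r = select i1 %not, i32 %b, i32 %a, !prof !1  ; !1 = !{!"branch_weights", i32 1, i32 10}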