path: root/llvm/lib/Transforms/InstCombine/InstCombineAddSub.cpp
Commit message (Author, Date; Files, Lines changed)
* [InstCombine] 'C-(C2-X) --> X+(C-C2)' constant-fold (Roman Lebedev, 2019-05-31; 1 file, -1/+6)
  It looks like this fold was already partially happening, indirectly via some other folds, but with a one-use limitation. No other fold here has that restriction. https://rise4fun.com/Alive/ftR llvm-svn: 362217
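  A minimal IR sketch of the fold, with illustrative constants C = 10 and C2 = 3 (chosen here for concreteness; `opt -instcombine` should turn the first form into the second):
  ```
  ; before: C - (C2 - X) == 10 - (3 - %x)
  define i32 @src(i32 %x) {
    %s = sub i32 3, %x
    %r = sub i32 10, %s
    ret i32 %r
  }

  ; after: X + (C - C2) == %x + 7
  define i32 @tgt(i32 %x) {
    %r = add i32 %x, 7
    ret i32 %r
  }
  ```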
* [InstCombine] 'add (sub C1, X), C2 --> sub (add C1, C2), X' constant-fold (Roman Lebedev, 2019-05-31; 1 file, -1/+8)
  https://rise4fun.com/Alive/qJQ llvm-svn: 362216
* [NFC][InstCombine] Add FIXME for one-use check on constant negation transforms. (Cameron McInally, 2019-05-20; 1 file, -0/+2)
  llvm-svn: 361197
* [InstCombine] Add visitFNeg(...) visitor for unary FNeg (Cameron McInally, 2019-05-20; 1 file, -13/+27)
  Also, break out a helper function, namely foldFNegIntoConstant(...), which performs transforms common between visitFNeg(...) and visitFSub(...). Differential Revision: https://reviews.llvm.org/D61693 llvm-svn: 361188
* Add InstCombine::visitFNeg(...) (Cameron McInally, 2019-05-10; 1 file, -0/+9)
  Differential Revision: https://reviews.llvm.org/D61784 llvm-svn: 360461
* [InstCombine] Add new combine to add folding (Robert Lougher, 2019-05-07; 1 file, -1/+5)
  (X | C1) + C2 --> (X | C1) ^ C1 iff (C1 == -C2)
  I verified the correctness using Alive: https://rise4fun.com/Alive/YNV
  This transform enables the following transform that already exists in instcombine:
  (X | Y) ^ Y --> X & ~Y
  As a result, the full expected transform is:
  (X | C1) + C2 --> X & ~C1 iff (C1 == -C2)
  There already exists the transform in the sub case:
  (X | Y) - Y --> X & ~Y
  However this does not trigger in the case where Y is constant due to an earlier transform:
  X - (-C) --> X + C
  With this new add fold, both the add and sub constant cases are handled.
  Patch by Chris Dawson. Differential Revision: https://reviews.llvm.org/D61517 llvm-svn: 360185
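  A small IR illustration with C1 = 8 and C2 = -8 (constants picked here only to satisfy C1 == -C2): since the `or` forces bit 3 on, adding -8 clears exactly that bit.
  ```
  ; before: (%x | 8) + (-8)
  define i32 @src(i32 %x) {
    %o = or i32 %x, 8
    %r = add i32 %o, -8
    ret i32 %r
  }

  ; after the full chain of folds: %x & ~8 (note -9 == ~8)
  define i32 @tgt(i32 %x) {
    %r = and i32 %x, -9
    ret i32 %r
  }
  ```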
* [InstCombine] prevent possible miscompile with negate+sdiv of vector op (Sanjay Patel, 2019-04-09; 1 file, -3/+6)
  // 0 - (X sdiv C) -> (X sdiv -C) provided the negation doesn't overflow.
  This fold has been around for many years and nobody noticed the potential vector miscompile from overflow until recently... So it seems unlikely that there's much demand for a vector sdiv optimization on arbitrary vector constants, so just limit the matching to splat constants to avoid the possible bug. Differential Revision: https://reviews.llvm.org/D60426 llvm-svn: 358005
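  A sketch of why non-splat constants were risky: negating an element that is INT_MIN wraps, so the rewritten divisor is wrong for that lane (i8 values chosen here for brevity):
  ```
  ; fine to fold: splat divisor, and -2 does not overflow
  define <2 x i8> @neg_sdiv_splat(<2 x i8> %x) {
    %d = sdiv <2 x i8> %x, <i8 2, i8 2>
    %r = sub <2 x i8> zeroinitializer, %d
    ret <2 x i8> %r   ; may become: sdiv %x, <i8 -2, i8 -2>
  }

  ; must NOT fold: -(-128) wraps back to -128 in i8, so
  ; "sdiv %x, <i8 -128, i8 -2>" would not be a correct rewrite
  define <2 x i8> @neg_sdiv_nonsplat(<2 x i8> %x) {
    %d = sdiv <2 x i8> %x, <i8 -128, i8 2>
    %r = sub <2 x i8> zeroinitializer, %d
    ret <2 x i8> %r
  }
  ```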
* [InstCombine] sdiv exact flag fixup. (Chen Zheng, 2019-04-08; 1 file, -2/+5)
  Differential Revision: https://reviews.llvm.org/D60396 llvm-svn: 357904
* [InstCombine] form uaddsat from add+umin (PR14613) (Sanjay Patel, 2019-03-26; 1 file, -0/+25)
  This is the last step towards solving the examples shown in: https://bugs.llvm.org/show_bug.cgi?id=14613
  With this change, x86 should end up with psubus instructions when those are available. All known codegen issues with expanding the saturating intrinsics were resolved with: D59006 / rL356855
  We also have some early evidence in D58872 that using the intrinsics will lead to better perf. If some target regresses from this, custom lowering of the intrinsics (as in the above for x86) may be needed. llvm-svn: 357012
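  One shape of the add+umin pattern, sketched with the icmp+select spelling of umin that IR of this era used (42 and ~42 are illustrative constants; the matcher may cover more variants):
  ```
  ; umin(%x, ~42) + 42 saturates the unsigned add at UINT_MAX
  define i32 @src(i32 %x) {
    %c = icmp ult i32 %x, -43          ; -43 == ~42
    %m = select i1 %c, i32 %x, i32 -43
    %r = add i32 %m, 42
    ret i32 %r
  }
  ; may fold to: %r = call i32 @llvm.uadd.sat.i32(i32 %x, i32 42)
  ```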
* [InstCombine] fold adds of constants separated by sext/zext (Sanjay Patel, 2019-02-28; 1 file, -8/+44)
  This is part of a transform that may be done in the backend: D13757 ...but it should always be beneficial to fold this sooner in IR for all targets. https://rise4fun.com/Alive/vaiW

  Name: sext add nsw
    %add = add nsw i8 %i, C0
    %ext = sext i8 %add to i32
    %r = add i32 %ext, C1
  =>
    %s = sext i8 %i to i32
    %r = add i32 %s, sext(C0)+C1

  Name: zext add nuw
    %add = add nuw i8 %i, C0
    %ext = zext i8 %add to i16
    %r = add i16 %ext, C1
  =>
    %s = zext i8 %i to i16
    %r = add i16 %s, zext(C0)+C1

  llvm-svn: 355118
* [InstCombine] canonicalize add/sub with bool (Sanjay Patel, 2019-02-24; 1 file, -0/+6)
  add A, sext(B) --> sub A, zext(B)
  We have to choose 1 of these forms, so I'm opting for the zext because that's easier for value tracking. The backend should be prepared for this change after: D57401 rL353433
  This is also a preliminary step towards reducing the amount of bit hackery that we do in IR to optimize icmp/select. That should be waiting to happen at a later optimization stage.
  The seeming regression in the fuzzer test was discussed in: D58359
  We were only managing that fold in instcombine by luck, and other passes should be able to deal with that better anyway. llvm-svn: 354748
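  Why the two forms are equal: for an i1 %b, sext gives 0 or -1 while zext gives 0 or 1, so adding the sext is the same as subtracting the zext. A minimal sketch:
  ```
  define i32 @src(i32 %a, i1 %b) {
    %s = sext i1 %b to i32   ; 0 or -1
    %r = add i32 %a, %s
    ret i32 %r
  }

  ; canonical form:
  define i32 @tgt(i32 %a, i1 %b) {
    %z = zext i1 %b to i32   ; 0 or 1
    %r = sub i32 %a, %z
    ret i32 %r
  }
  ```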
* Update the file headers across all of the LLVM projects in the monorepo (Chandler Carruth, 2019-01-19; 1 file, -4/+3)
  to reflect the new license. We understand that people may be surprised that we're moving the header entirely to discuss the new license. We checked this carefully with the Foundation's lawyer and we believe this is the correct approach.
  Essentially, all code in the project is now made available by the LLVM project under our new license, so you will see that the license headers include that license only. Some of our contributors have contributed code under our old license, and accordingly, we have retained a copy of our old license notice in the top-level files in each project and repository. llvm-svn: 351636
* [InstCombine] Don't undo 0 - (X * Y) canonicalization when combining subs. (Florian Hahn, 2019-01-15; 1 file, -4/+3)
  Otherwise instcombine gets stuck in a cycle. The canonicalization was added in D55961. This patch fixes https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=12400 llvm-svn: 351187
* [InstCombine] name change: foldShuffledBinop -> foldVectorBinop; NFC (Sanjay Patel, 2018-10-03; 1 file, -4/+4)
  With D50992, this function will deal with more than shuffles, and I have another potential per-element fold that could live here. llvm-svn: 343692
* [InstCombine] Fold ~A - Min/Max(~A, O) -> Max/Min(A, ~O) - A (David Green, 2018-10-02; 1 file, -0/+33)
  This is an attempt to get out of a local minimum that instcombine currently gets stuck in. We essentially combine two optimisations at once, ~a - ~b = b - a and min(~a, ~b) = ~max(a, b), only doing the transform if the result is at least neutral. This involves using IsFreeToInvert, which has been expanded a little to include selects that can be easily inverted.
  This is trying to fix PR35875, using the ideas from Sanjay. It is a large improvement to one of our rgb to cmy kernels. Differential Revision: https://reviews.llvm.org/D52177 llvm-svn: 343569
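  A sketch of the smin case, with min written as icmp+select as it appears in IR (the smax/umin/umax variants are analogous). Both sides give 0 when ~a <= o, and -1 - a - o otherwise:
  ```
  ; ~%a - smin(~%a, %o) --> smax(%a, ~%o) - %a
  define i32 @src(i32 %a, i32 %o) {
    %na  = xor i32 %a, -1                 ; ~a
    %c   = icmp slt i32 %na, %o
    %min = select i1 %c, i32 %na, i32 %o
    %r   = sub i32 %na, %min
    ret i32 %r
  }
  ; may become:
  ;   %no  = xor i32 %o, -1               ; ~o
  ;   %c   = icmp sgt i32 %a, %no
  ;   %max = select i1 %c, i32 %a, i32 %no
  ;   %r   = sub i32 %max, %a
  ```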
* [InstCombine] Support (sub (sext x), (sext y)) --> (sext (sub x, y)) and (sub (zext x), (zext y)) --> (zext (sub x, y)) (Craig Topper, 2018-09-15; 1 file, -0/+3)
  Summary: If the sub doesn't overflow in the original type we can move it above the sext/zext. This is similar to what we do for add. The overflow checking for sub is currently weaker than add, so the test cases are constructed for what is supported.
  Reviewers: spatel
  Reviewed By: spatel
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D52075 llvm-svn: 342335
* [InstCombine] refactor mul narrowing folds; NFCI (Sanjay Patel, 2018-09-14; 1 file, -40/+1)
  Similar to rL342278: the test diffs are all cosmetic due to the change in value naming, but I'm including that to show that the new code does perform these folds rather than something else in instcombine.
  D52075 should be able to use this code too rather than duplicating all of the logic. llvm-svn: 342292
* [InstCombine] add/use overflowing math helper functions; NFC (Sanjay Patel, 2018-09-14; 1 file, -3/+1)
  The mul case can already be refactored to use this similar to rL342278. The sub case is proposed in D52075. llvm-svn: 342289
* [InstCombine] refactor add narrowing folds; NFCI (Sanjay Patel, 2018-09-14; 1 file, -71/+43)
  The test diffs are all cosmetic due to the change in value naming, but I'm including that to show that the new code does perform these folds rather than something else in instcombine. llvm-svn: 342278
* [InstCombine] Use dyn_cast instead of match(m_Constant). NFC (Craig Topper, 2018-09-11; 1 file, -4/+2)
  llvm-svn: 341962
* [InstCombine] Extend (add (sext x), cst) --> (sext (add x, cst')) and (add (zext x), cst) --> (zext (add x, cst')) to work for vectors (Craig Topper, 2018-08-28; 1 file, -2/+4)
  Differential Revision: https://reviews.llvm.org/D51236 llvm-svn: 340796
* [InstCombine] Remove unused method FAddCombine::createFDiv(). NFC (Andrea Di Biagio, 2018-08-17; 1 file, -8/+0)
  This commit fixes a (gcc 7.3.0) [-Wunused-function] warning caused by the presence of the unused method FAddCombine::createFDiv(). The last use of that method was removed at r339519. llvm-svn: 340014
* [InstCombine] fix/enhance fadd/fsub factorization (Sanjay Patel, 2018-08-12; 1 file, -87/+45)
  (X * Z) + (Y * Z) --> (X + Y) * Z
  (X * Z) - (Y * Z) --> (X - Y) * Z
  (X / Z) + (Y / Z) --> (X + Y) / Z
  (X / Z) - (Y / Z) --> (X - Y) / Z
  The existing code that implemented these folds failed to optimize vectors, and it transformed code with multiple uses when it should not have. llvm-svn: 339519
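  A sketch of the first factorization. Fast-math flags are required since FP reassociation is not exact; `fast` is used here for simplicity, and the precise FMF gating is an assumption best checked against the patch itself:
  ```
  define float @src(float %x, float %y, float %z) {
    %a = fmul fast float %x, %z
    %b = fmul fast float %y, %z
    %r = fadd fast float %a, %b
    ret float %r
  }
  ; may become:
  ;   %s = fadd fast float %x, %y
  ;   %r = fmul fast float %s, %z
  ```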
* [InstCombine] allow fsub+fmul FMF folds for vectors (Sanjay Patel, 2018-08-09; 1 file, -0/+11)
  llvm-svn: 339368
* [InstCombine] reduce code duplication; NFC (Sanjay Patel, 2018-08-09; 1 file, -9/+7)
  llvm-svn: 339349
* [InstCombine] fold fadd+fsub with common operand (Sanjay Patel, 2018-08-08; 1 file, -0/+5)
  This is a sibling to the simplify from: https://reviews.llvm.org/rL339174 llvm-svn: 339267
* [InstCombine] fold fsub+fsub with common operand (Sanjay Patel, 2018-08-08; 1 file, -0/+8)
  This is a sibling to the simplify from: rL339171 llvm-svn: 339266
* [InstCombine] fold fneg into constant operand of fmul/fdiv (Sanjay Patel, 2018-08-08; 1 file, -2/+15)
  This accounts for the missing IR fold noted in D50195. We don't need any fast-math to enable the negation transform. FP negation can always be folded into an fmul/fdiv constant to eliminate the fneg.
  I've limited this to one-use to ensure that we are eliminating an instruction rather than replacing fneg by a potentially expensive fdiv or fmul. Differential Revision: https://reviews.llvm.org/D50417 llvm-svn: 339248
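  A minimal sketch, using the `fsub -0.0, X` spelling of fneg that predates the unary fneg instruction (the 4.0 constant is illustrative). The fold is exact because the sign of an IEEE product is the XOR of the operand signs, so no FMF are needed:
  ```
  define float @src(float %x) {
    %m = fmul float %x, 4.0
    %r = fsub float -0.0, %m   ; fneg of the one-use fmul
    ret float %r
  }
  ; may become: %r = fmul float %x, -4.0
  ```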
* Remove trailing space (Fangrui Song, 2018-07-30; 1 file, -1/+1)
  sed -Ei 's/[[:space:]]+$//' include/**/*.{def,h,td} lib/**/*.{cpp,h} llvm-svn: 338293
* [InstCombine] try to fold 'add+sub' to 'not+add' (Sanjay Patel, 2018-07-29; 1 file, -0/+8)
  These are reassociated versions of the same pattern and similar transforms as in rL338200 and rL338118.
  The motivation is identical to those commits: patterns with add/sub combos can be improved using 'not' ops. This is better for analysis and may lead to follow-on transforms because 'xor' and 'add' are commutative/associative. It can also help codegen. llvm-svn: 338221
* [InstCombine] try to fold 'sub' to 'not' (Sanjay Patel, 2018-07-28; 1 file, -1/+7)
  https://rise4fun.com/Alive/jDd
  Patterns with add/sub combos can be improved using 'not' ops. This is better for analysis and may lead to follow-on transforms because 'xor' and 'add' are commutative/associative. It can also help codegen. llvm-svn: 338200
* [InstCombine] return when SimplifyAssociativeOrCommutative makes a change (Sanjay Patel, 2018-07-13; 1 file, -3/+8)
  This bug was created by rL335258 because we used to always call instsimplify after trying the associative folds. After that change it became possible for subsequent folds to encounter unsimplified code (and potentially assert because of it).
  Instead of carrying changed state through instcombine, we can just return immediately. This allows instsimplify to run, so we can continue assuming that easy folds have already occurred. llvm-svn: 336965
* [InstCombine] (A + 1) + (B ^ -1) --> A - B (Gil Rapaport, 2018-06-26; 1 file, -0/+5)
  Turn canonicalized subtraction back into (-1 - B) and combine it with (A + 1) into (A - B). This is similar to the folding already done for (B ^ -1) + Const into (-1 + Const) - B. Differential Revision: https://reviews.llvm.org/D48535 llvm-svn: 335579
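  The arithmetic behind the fold: B ^ -1 is ~B, which equals -B - 1 in two's complement, so (A + 1) + ~B = A + 1 - B - 1 = A - B. A minimal sketch:
  ```
  define i32 @src(i32 %a, i32 %b) {
    %a1 = add i32 %a, 1
    %nb = xor i32 %b, -1   ; ~b == -b - 1
    %r  = add i32 %a1, %nb
    ret i32 %r
  }
  ; may fold to: %r = sub i32 %a, %b
  ```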
* [InstCombine] simplify binops before trying other folds (Sanjay Patel, 2018-06-21; 1 file, -14/+16)
  This is outwardly NFC from what I can tell, but it should be more efficient to simplify first (despite the name, SimplifyAssociativeOrCommutative does not actually simplify as InstSimplify does - it creates/morphs instructions).
  This should make it easier to refactor duplicated code that runs for all binops. llvm-svn: 335258
* [InstCombine] fold another shifty abs pattern to cmp+sel (PR36036) (Sanjay Patel, 2018-06-06; 1 file, -0/+21)
  The bug report: https://bugs.llvm.org/show_bug.cgi?id=36036 ...requests a DAG change for this, but an IR canonicalization probably handles most cases. If we still want to match this pattern in the backend, there's a proposal for that too: D47831
  Alive proofs including nsw/nuw cases that were first noted in: D46988 https://rise4fun.com/Alive/Kmp
  This patch is largely copied from the existing code that was initially added with: D40984 ...but I didn't see much gain from trying to share code. llvm-svn: 334137
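  One member of the shift-based abs family, sketched for orientation; whether this exact variant is the one the commit matches is an assumption (the Alive link above has the precise patterns). For i32, %x ashr 31 is 0 for non-negative %x and -1 for negative %x:
  ```
  define i32 @src(i32 %x) {
    %s = ashr i32 %x, 31
    %a = add i32 %x, %s
    %r = xor i32 %a, %s    ; (x + s) ^ s == abs(x)
    ret i32 %r
  }
  ; canonical cmp+sel form:
  ;   %c = icmp slt i32 %x, 0
  ;   %n = sub i32 0, %x
  ;   %r = select i1 %c, i32 %n, i32 %x
  ```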
* [InstCombine] PR37603: low bit mask canonicalization (Roman Lebedev, 2018-06-06; 1 file, -0/+27)
  Summary: This is PR37603 (https://bugs.llvm.org/show_bug.cgi?id=37603). https://godbolt.org/g/VCMNpS https://rise4fun.com/Alive/idM
  When doing bit manipulations, it is quite common to calculate some bit mask, and apply it to some value via `and`. The typical C code looks like:
  ```
  int mask_signed_add(int nbits) {
    return (1 << nbits) - 1;
  }
  ```
  which is translated into (with `-O3`)
  ```
  define dso_local i32 @mask_signed_add(i32) local_unnamed_addr #0 {
    %2 = shl i32 1, %0
    %3 = add nsw i32 %2, -1
    ret i32 %3
  }
  ```
  But there is a second, less readable variant:
  ```
  int mask_signed_xor(int nbits) {
    return ~(-(1 << nbits));
  }
  ```
  which is translated into (with `-O3`)
  ```
  define dso_local i32 @mask_signed_xor(i32) local_unnamed_addr #0 {
    %2 = shl i32 -1, %0
    %3 = xor i32 %2, -1
    ret i32 %3
  }
  ```
  Since we created such a mask, it is quite likely that we will use it in `and` next. And then we may get rid of the `not` op by folding it into `andn`. But now that I have actually looked (https://godbolt.org/g/VTUDmU), _some_ backend changes will be needed too. We clearly lose `bzhi` recognition.
  Reviewers: spatel, craig.topper, RKSimon
  Reviewed By: spatel
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D47428 llvm-svn: 334127
* [InstCombine] improve sub with bool folds (Sanjay Patel, 2018-06-03; 1 file, -13/+14)
  There's a patchwork of existing transforms trying to handle these cases, but as seen in the changed test, we weren't catching them all. llvm-svn: 333845
* [InstCombine] call simplify before trying vector folds (Sanjay Patel, 2018-06-02; 1 file, -15/+12)
  As noted in the review thread for rL333782, we could have made a bug harder to hit if we were simplifying instructions before trying other folds. The shuffle transform in question isn't ever a simplification; it's just a canonicalization. So I've renamed that to make that clearer.
  This is NFCI at this point, but I've regenerated the test file to show the cosmetic value naming difference of using instcombine's RAUW vs. the builder.
  Possible follow-ups:
  1. Move reassociation folds after simplifies too.
  2. Refactor common code; we shouldn't have so much repetition.
  llvm-svn: 333820
* [InstCombine] don't negate constant expression with fsub (PR37605) (Sanjay Patel, 2018-05-30; 1 file, -1/+3)
  X + (-C) would be transformed back into X - C, so infinite loop: https://bugs.llvm.org/show_bug.cgi?id=37605 llvm-svn: 333610
* [InstCombine] Negate ABS/NABS patterns by swapping the select operands to remove the negation (Craig Topper, 2018-05-23; 1 file, -0/+16)
  Differential Revision: https://reviews.llvm.org/D47236 llvm-svn: 333101
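  A sketch of the ABS case: since abs is an icmp+select, its negation can be expressed by swapping the select arms instead of appending a `sub` (the NABS direction is symmetric):
  ```
  ; 0 - abs(%a)
  define i32 @src(i32 %a) {
    %c   = icmp slt i32 %a, 0
    %n   = sub i32 0, %a
    %abs = select i1 %c, i32 %n, i32 %a
    %r   = sub i32 0, %abs
    ret i32 %r
  }
  ; may become just the swapped select (nabs):
  ;   %r = select i1 %c, i32 %a, i32 %n
  ```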
* [InstCombine] Moving overflow computation logic from InstCombine to ValueTracking; NFC (Omer Paparo Bivas, 2018-05-10; 1 file, -42/+0)
  Differential Revision: https://reviews.llvm.org/D46704
  Change-Id: Ifabcbe431a2169743b3cc310f2a34fd706f13f02 llvm-svn: 332026
* Remove \brief commands from doxygen comments. (Adrian Prantl, 2018-05-01; 1 file, -2/+2)
  We've been running doxygen with the autobrief option for a couple of years now. This makes the \brief markers into our comments redundant. Since they are a visual distraction and we don't want to encourage more \brief markers in new code either, this patch removes them all.
  Patch produced by: for i in $(git grep -l '\\brief'); do perl -pi -e 's/\\brief //g' $i & done
  Differential Revision: https://reviews.llvm.org/D46290 llvm-svn: 331272
* [PatternMatch] Stabilize the matching order of commutative matchers (Roman Lebedev, 2018-04-27; 1 file, -15/+4)
  Summary: Currently, we
  1. match `LHS` matcher to the `first` operand of binary operator,
  2. and then match `RHS` matcher to the `second` operand of binary operator.
  If that does not match, we swap the `LHS` and `RHS` matchers:
  1. match `RHS` matcher to the `first` operand of binary operator,
  2. and then match `LHS` matcher to the `second` operand of binary operator.
  This works ok. But it complicates writing of commutative matchers, where one would like to match (`m_Value()`) the value on one side, and use (`m_Specific()`) it on the other side. This is additionally complicated by the fact that `m_Specific()` stores the `Value *`, not `Value **`, so it won't work at all out of the box.
  The last problem is trivially solved by adding a new `m_c_Specific()` that stores the `Value **`, not `Value *`. I'm choosing to add a new matcher, not change the existing one, because I guess all the current users are ok with the existing behavior, and this additional pointer indirection may have performance drawbacks. Also, I'm storing a pointer, not a reference, because for some mysterious-to-me reason it did not work with the reference.
  The first one appears trivial, too. Currently, we
  1. match `LHS` matcher to the `first` operand of binary operator,
  2. and then match `RHS` matcher to the `second` operand of binary operator.
  If that does not match, we swap the ~~`LHS` and `RHS` matchers~~ **operands**:
  1. match ~~`RHS`~~ **`LHS`** matcher to the ~~`first`~~ **`second`** operand of binary operator,
  2. and then match ~~`LHS`~~ **`RHS`** matcher to the ~~`second`~~ **`first`** operand of binary operator.
  Surprisingly, `$ ninja check-llvm` still passes with this. But I expect the bots will disagree..
  The motivational unittest is included. I'd like to use this in D45664.
  Reviewers: spatel, craig.topper, arsenm, RKSimon
  Reviewed By: craig.topper
  Subscribers: xbolva00, wdng, llvm-commits
  Differential Revision: https://reviews.llvm.org/D45828 llvm-svn: 331085
* [InstCombine] Simplify Add with remainder expressions as operands. (Sanjoy Das, 2018-04-26; 1 file, -0/+109)
  Summary: Simplify integer add expression X % C0 + ((X / C0) % C1) * C0 to X % (C0 * C1). This is a common pattern seen in code generated by the XLA GPU backend.
  Add test cases for this new optimization.
  Patch by Bixia Zheng!
  Reviewers: sanjoy
  Reviewed By: sanjoy
  Subscribers: efriedma, craig.topper, lebedev.ri, llvm-commits, jlebar
  Differential Revision: https://reviews.llvm.org/D45976 llvm-svn: 330992
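  Intuition: the two terms reconstruct the low "digits" of X in a mixed radix, so their sum is exactly X mod (C0 * C1). A sketch with illustrative constants C0 = 4, C1 = 8 and unsigned ops (e.g. X = 100 gives 0 + 1*4 = 4 = 100 mod 32):
  ```
  ; %x % 4 + ((%x / 4) % 8) * 4
  define i32 @src(i32 %x) {
    %r0 = urem i32 %x, 4
    %d  = udiv i32 %x, 4
    %r1 = urem i32 %d, 8
    %m  = mul i32 %r1, 4
    %r  = add i32 %r0, %m
    ret i32 %r
  }
  ; may fold to: %r = urem i32 %x, 32   ; 32 == 4 * 8
  ```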
* [InstCombine] simplify fneg+fadd folds; NFC (Sanjay Patel, 2018-04-16; 1 file, -8/+7)
  Two cleanups:
  1. As noted in D45453, we had tests that don't need FMF that were misplaced in the 'fast-math.ll' test file.
  2. This removes the final uses of dyn_castFNegVal, so that can be deleted. We use 'match' now.
  llvm-svn: 330126
* [InstCombine] Enable Add/Sub simplifications with only 'reassoc' FMF (Warren Ristow, 2018-04-14; 1 file, -3/+4)
  These simplifications were previously enabled only with isFast(), but that is more restrictive than required. Since r317488, FMF has 'reassoc' to control these cases at a finer level. llvm-svn: 330089
* [InstCombine] limit X - cast(-Y) --> X + cast(Y) with hasOneUse() (Sanjay Patel, 2018-04-11; 1 file, -10/+10)
  llvm-svn: 329821
* [InstCombine] limit nsz: -(X - Y) --> Y - X to hasOneUse() (Sanjay Patel, 2018-04-06; 1 file, -12/+9)
  As noted in the post-commit discussion for r329350, we shouldn't generally assume that fsub is the same cost as fneg. llvm-svn: 329429
* [InstCombine] FP: Z - (X - Y) --> Z + (Y - X) (Sanjay Patel, 2018-04-05; 1 file, -2/+11)
  This restores what was lost with rL73243 but without re-introducing the bug that was present in the old code. Note that we already have these transforms if the ops are marked 'fast' (and I assume that's happening somewhere in the code added with rL170471), but we clearly don't need all of 'fast' for these transforms. llvm-svn: 329362
* [InstCombine] nsz: -(X - Y) --> Y - X (Sanjay Patel, 2018-04-05; 1 file, -4/+11)
  This restores part of the fold that was removed with rL73243 (PR4374). llvm-svn: 329350