path: root/llvm/lib/Transforms/InstCombine/InstCombineAndOrXor.cpp
Commit message | Author | Age | Files | Lines
...
* [InstCombine] Optimize redundant 'signed truncation check pattern'.
  Roman Lebedev | 2018-08-13 | 1 file | -0/+128

  Summary:
  This comes with `Implicit Conversion Sanitizer - integer sign change` (D50250):
  ```
  signed char test(unsigned int x) { return x; }
  ```
  `clang++ -fsanitize=implicit-conversion -S -emit-llvm -o - /tmp/test.cpp -O3`
  * Old: {F6904292}
  * With this patch: {F6904294}

  General pattern:
    X & Y

  Where `Y` is checking that all the high bits (covered by a mask `4294967168`)
  are uniform, i.e. `%arg & 4294967168` can be either `4294967168` or `0`.
  The pattern can be one of:
    %t = add i32 %arg, 128
    %r = icmp ult i32 %t, 256
  Or
    %t0 = shl i32 %arg, 24
    %t1 = ashr i32 %t0, 24
    %r = icmp eq i32 %t1, %arg
  Or
    %t0 = trunc i32 %arg to i8
    %t1 = sext i8 %t0 to i32
    %r = icmp eq i32 %t1, %arg
  This pattern is a signed truncation check.

  And `X` is checking that some bit in that same mask is zero, i.e. it can be one of:
    %r = icmp sgt i32 %arg, -1
  Or
    %t = and i32 %arg, 2147483648
    %r = icmp eq i32 %t, 0

  Since we are checking that all the bits in that mask are the same, and a
  particular bit is zero, what we are really checking is that all the masked
  bits are zero. So this should be transformed to:
    %r = icmp ult i32 %arg, 128

  https://rise4fun.com/Alive/3Ou

  Reviewers: spatel, craig.topper
  Reviewed By: spatel
  Subscribers: RKSimon, erichkeane, vsk, llvm-commits
  Differential Revision: https://reviews.llvm.org/D50465

  llvm-svn: 339610
* [InstCombine] Fix typo in comment. NFC
  Craig Topper | 2018-08-13 | 1 file | -1/+1

  llvm-svn: 339532
* [InstCombine] Replace call to haveNoCommonBitsSet in visitXor with just the special case that doesn't use computeKnownBits.
  Craig Topper | 2018-08-13 | 1 file | -2/+8

  Summary:
  computeKnownBits is expensive. The cases that would be detected by the
  computeKnownBits portion of haveNoCommonBitsSet were already handled by the
  earlier call to SimplifyDemandedInstructionBits.

  Reviewers: spatel, lebedev.ri
  Reviewed By: lebedev.ri
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D50604

  llvm-svn: 339531
* [InstCombine] De Morgan: sink 'not' into 'xor' (PR38446)
  Roman Lebedev | 2018-08-08 | 1 file | -0/+29

  Summary:
  https://rise4fun.com/Alive/IT3

  Comes up in the ugliest `signed int` -> `signed char` case of
  `-fsanitize=implicit-conversion` (https://reviews.llvm.org/D50250).
  Previously, we were stuck with `not`: {F6867736}
  But now we are able to completely get rid of it: {F6867737}
  (FIXME: why are we losing the metadata? that seems wrong/strange.)

  Here, we only want to do this if we will be able to completely get rid of
  that 'not'.

  Reviewers: spatel, craig.topper
  Reviewed By: spatel
  Subscribers: vsk, erichkeane, llvm-commits
  Differential Revision: https://reviews.llvm.org/D50301

  llvm-svn: 339243
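A minimal IR sketch of the pattern in question (function and value names are hypothetical; per the note above, the real transform only fires when the outer 'not' can then be eliminated entirely):
```
define i32 @not_of_xor(i32 %x, i32 %y) {
  ; not(x ^ y) can be rewritten as (not x) ^ y
  %xor = xor i32 %x, %y
  %not = xor i32 %xor, -1
  ret i32 %not
}
```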
* [InstCombine] simplify code for A & (A ^ B) --> A & ~B
  Sanjay Patel | 2018-07-31 | 1 file | -25/+7

  This fold was written in an odd way and tried to avoid an endless loop by
  bailing out on all constants instead of the supposedly problematic case of -1.
  But (X & -1) should always be simplified before we reach here, so I'm not
  sure how that is a problem.

  There were no tests for the commuted patterns, so I added those at rL338364.

  llvm-svn: 338367
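A hypothetical IR instance of the fold named in the subject line:
```
define i32 @and_of_xor(i32 %a, i32 %b) {
  ; A & (A ^ B) --> A & ~B
  %x = xor i32 %a, %b
  %r = and i32 %a, %x
  ret i32 %r
}
```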
* [InstCombine] not(sub X, Y) --> add (not X), Y
  Sanjay Patel | 2018-07-27 | 1 file | -0/+4

  The tests with constants show a missing optimization.
  Analysis for adds is better than for subs, so this can also help with other
  transforms. And codegen is better with adds for targets like x86
  (destructive ops, no sub-from).

  https://rise4fun.com/Alive/llK

  llvm-svn: 338118
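As a sketch (hypothetical names), the fold rewrites this shape:
```
define i32 @not_of_sub(i32 %x, i32 %y) {
  ; not(x - y) --> (not x) + y, since ~v == -v - 1
  %s = sub i32 %x, %y
  %r = xor i32 %s, -1
  ret i32 %r
}
```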
* [InstCombine] return when SimplifyAssociativeOrCommutative makes a change
  Sanjay Patel | 2018-07-13 | 1 file | -6/+12

  This bug was created by rL335258 because we used to always call instsimplify
  after trying the associative folds. After that change it became possible for
  subsequent folds to encounter unsimplified code (and potentially assert
  because of it).

  Instead of carrying changed state through instcombine, we can just return
  immediately. This allows instsimplify to run, so we can continue assuming
  that easy folds have already occurred.

  llvm-svn: 336965
* [InstCombine] simplify binops before trying other folds
  Sanjay Patel | 2018-06-21 | 1 file | -9/+12

  This is outwardly NFC from what I can tell, but it should be more efficient
  to simplify first (despite the name, SimplifyAssociativeOrCommutative does
  not actually simplify as InstSimplify does - it creates/morphs instructions).

  This should make it easier to refactor duplicated code that runs for all
  binops.

  llvm-svn: 335258
* [InstCombine] fold another shifty abs pattern to cmp+sel (PR36036)
  Sanjay Patel | 2018-06-06 | 1 file | -1/+1

  The bug report https://bugs.llvm.org/show_bug.cgi?id=36036 requests a DAG
  change for this, but an IR canonicalization probably handles most cases.
  If we still want to match this pattern in the backend, there's a proposal
  for that too: D47831.

  Alive proofs including nsw/nuw cases that were first noted in D46988:
  https://rise4fun.com/Alive/Kmp

  This patch is largely copied from the existing code that was initially added
  with D40984, but I didn't see much gain from trying to share code.

  llvm-svn: 334137
* Move Analysis/Utils/Local.h back to Transforms
  David Blaikie | 2018-06-04 | 1 file | -1/+1

  Review feedback from r328165. Split out just the one function from the file
  that's used by Analysis. (As chandlerc pointed out, the original change only
  moved the header and not the implementation anyway - which was fine for the
  one function that was used, since it's a template/inlined in the header, but
  not in general.)

  llvm-svn: 333954
* [InstCombine] call simplify before trying vector folds
  Sanjay Patel | 2018-06-02 | 1 file | -12/+9

  As noted in the review thread for rL333782, we could have made a bug harder
  to hit if we were simplifying instructions before trying other folds. The
  shuffle transform in question isn't ever a simplification; it's just a
  canonicalization. So I've renamed that to make it clearer.

  This is NFCI at this point, but I've regenerated the test file to show the
  cosmetic value-naming difference of using instcombine's RAUW vs. the builder.

  Possible follow-ups:
  1. Move reassociation folds after simplifies too.
  2. Refactor common code; we shouldn't have so much repetition.

  llvm-svn: 333820
* Revert rL333106 / D46814: [InstCombine] Fold unfolded masked merge pattern with variable mask!
  Roman Lebedev | 2018-05-31 | 1 file | -36/+0

  In post-commit review, Eric Christopher notes that many new MSan warnings
  are being observed with this patch. The probable reason is: if 'y' is undef
  here, we could evaluate it twice and get different results. We can't
  increase the number of uses of a value.

  llvm-svn: 333631
* [InstCombine] Fold unfolded masked merge pattern with variable mask!
  Roman Lebedev | 2018-05-23 | 1 file | -0/+36

  Summary:
  Finally fixes [[ https://bugs.llvm.org/show_bug.cgi?id=6773 | PR6773 ]].

  Now that the backend is all done, we can finally fold it!

  The canonical unfolded masked merge pattern is
  ```
  (x & m) | (y & ~m)
  ```
  There is a second, equivalent variant:
  ```
  (x | ~m) & (y | m)
  ```
  Only one of them (the or-of-ands, I think) is canonical.
  And if the mask is not a constant, we should fold it to:
  ```
  ((x ^ y) & M) ^ y
  ```

  https://rise4fun.com/Alive/ndQw

  Reviewers: spatel, craig.topper
  Reviewed By: spatel
  Subscribers: nicholas, RKSimon, llvm-commits
  Differential Revision: https://reviews.llvm.org/D46814

  llvm-svn: 333106
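An illustrative IR form of the unfolded pattern (hypothetical function name; not taken from the patch itself); after this fold it would become ((x ^ y) & m) ^ y:
```
define i32 @masked_merge(i32 %x, i32 %y, i32 %m) {
  ; (x & m) | (y & ~m) -- selects bits of %x where %m is set, %y elsewhere
  %nm = xor i32 %m, -1
  %a  = and i32 %x, %m
  %b  = and i32 %y, %nm
  %r  = or i32 %a, %b
  ret i32 %r
}
```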
* [InstCombine] Propagate the nsw/nuw flags from the add in the 'shifty' abs pattern to the sub in the select version.
  Craig Topper | 2018-05-17 | 1 file | -1/+5

  According to Alive this is valid. I'm hoping to use this to make an
  assumption that the sign bit is zero after this sequence. The only way it
  wouldn't be is if the input was INT_MIN, but by preserving the flags we can
  make doing this to INT_MIN UB.

  The nuw flag is weird because it creates such a contradiction that the
  original number would have to be positive, meaning we could remove the
  select entirely, but we don't get that far.

  Differential Revision: https://reviews.llvm.org/D46988

  llvm-svn: 332623
* Remove \brief commands from doxygen comments.
  Adrian Prantl | 2018-05-01 | 1 file | -1/+1

  We've been running doxygen with the autobrief option for a couple of years
  now. This makes the \brief markers in our comments redundant. Since they are
  a visual distraction and we don't want to encourage more \brief markers in
  new code either, this patch removes them all.

  Patch produced by

    for i in $(git grep -l '\\brief'); do perl -pi -e 's/\\brief //g' $i & done

  Differential Revision: https://reviews.llvm.org/D46290

  llvm-svn: 331272
* [InstCombine] Adjusting bswap pattern matching to hold for And/Shift mixed case
  Omer Paparo Bivas | 2018-05-01 | 1 file | -1/+12

  Differential Revision: https://reviews.llvm.org/D45731
  Change-Id: I85d4226504e954933c41598327c91b2d08192a9d

  llvm-svn: 331257
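For context, a sketch of one shift-and-mask bswap idiom of the kind such matching aims to recognize (illustrative only, not taken from the patch):
```
define i32 @manual_bswap(i32 %x) {
  ; byte-reverse %x using shifts, masks, and ors
  %b0 = shl i32 %x, 24
  %t1 = shl i32 %x, 8
  %b1 = and i32 %t1, 16711680   ; 0x00FF0000
  %t2 = lshr i32 %x, 8
  %b2 = and i32 %t2, 65280      ; 0x0000FF00
  %b3 = lshr i32 %x, 24
  %o1 = or i32 %b0, %b1
  %o2 = or i32 %o1, %b2
  %r  = or i32 %o2, %b3         ; equivalent to llvm.bswap.i32(%x)
  ret i32 %r
}
```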
* [InstCombine] Unfold masked merge with constant mask
  Roman Lebedev | 2018-04-30 | 1 file | -1/+15

  Summary:
  As discussed in D45733, we want to do this in InstCombine.

  https://rise4fun.com/Alive/LGk

  Reviewers: spatel, craig.topper
  Reviewed By: spatel
  Subscribers: chandlerc, xbolva00, llvm-commits
  Differential Revision: https://reviews.llvm.org/D45867

  llvm-svn: 331205
* [InstCombine] Canonicalize variable mask in masked merge
  Roman Lebedev | 2018-04-28 | 1 file | -0/+33

  Summary:
  Masked merge has the pattern `((x ^ y) & M) ^ y`.
  But there is no difference between `((x ^ y) & M) ^ y` and
  `((x ^ y) & ~M) ^ x`, so we should canonicalize the pattern to the
  non-inverted mask.

  https://rise4fun.com/Alive/Yol

  Reviewers: spatel, craig.topper
  Reviewed By: spatel
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D45664

  llvm-svn: 331112
* [PatternMatch] Stabilize the matching order of commutative matchers
  Roman Lebedev | 2018-04-27 | 1 file | -16/+10

  Summary:
  Currently, we
  1. match the `LHS` matcher to the `first` operand of the binary operator,
  2. and then match the `RHS` matcher to the `second` operand.
  If that does not match, we swap the `LHS` and `RHS` matchers:
  1. match the `RHS` matcher to the `first` operand,
  2. and then match the `LHS` matcher to the `second` operand.

  This works ok. But it complicates writing commutative matchers, where one
  would like to match (`m_Value()`) the value on one side, and use
  (`m_Specific()`) it on the other side.

  This is additionally complicated by the fact that `m_Specific()` stores the
  `Value *`, not `Value **`, so it won't work at all out of the box.

  The last problem is trivially solved by adding a new `m_c_Specific()` that
  stores the `Value **`, not `Value *`. I'm choosing to add a new matcher, not
  change the existing one, because I guess all the current users are ok with
  the existing behavior, and this additional pointer indirection may have
  performance drawbacks. Also, I'm storing a pointer, not a reference, because
  for some mysterious-to-me reason it did not work with the reference.

  The first problem appears trivial, too. Currently, we
  1. match the `LHS` matcher to the `first` operand of the binary operator,
  2. and then match the `RHS` matcher to the `second` operand.
  If that does not match, we swap the ~~`LHS` and `RHS` matchers~~ **operands**:
  1. match the ~~`RHS`~~ **`LHS`** matcher to the ~~`first`~~ **`second`** operand,
  2. and then match the ~~`LHS`~~ **`RHS`** matcher to the ~~`second`~~ **`first`** operand.

  Surprisingly, `$ ninja check-llvm` still passes with this. But I expect the
  bots will disagree...

  The motivational unittest is included.
  I'd like to use this in D45664.

  Reviewers: spatel, craig.topper, arsenm, RKSimon
  Reviewed By: craig.topper
  Subscribers: xbolva00, wdng, llvm-commits
  Differential Revision: https://reviews.llvm.org/D45828

  llvm-svn: 331085
* [PatternMatch] allow undef elements when matching a vector zero
  Sanjay Patel | 2018-04-22 | 1 file | -1/+0

  This is the last step in getting constant pattern matchers to allow undef
  elements in constant vectors.

  I'm adding a dedicated m_ZeroInt() function and building m_Zero() from that.
  In most cases, calling code can be updated to use m_ZeroInt() directly when
  there's no need to match pointers, but I'm leaving that efficiency
  optimization as a follow-up step because it's not always clear when that's ok.

  There are just enough icmp folds in InstSimplify that can be used for integer
  or pointer types that we probably still want a generic m_Zero() for those
  cases. Otherwise, we could eliminate it (and possibly add an m_NullPtr() as
  an alias for isa<ConstantPointerNull>()).

  We're conservatively returning a full zero vector (zeroinitializer) in
  InstSimplify/InstCombine on some of these folds (see diffs in InstSimplify),
  but I'm not sure if that's actually necessary in all cases. We may be able
  to propagate an undef lane instead. One test where this happens is marked
  with 'TODO'.

  llvm-svn: 330550
* [InstCombine] Simplify 'xor' to 'or' if no common bits are set.
  Roman Lebedev | 2018-04-15 | 1 file | -0/+4

  Summary:
  In order to get the whole fold as specified in
  [[ https://bugs.llvm.org/show_bug.cgi?id=6773 | PR6773 ]], let's first handle
  the simple straightforward things. Let's start with the `and` -> `or`
  simplification.

  The one obvious thing missing here: the constant mask is not handled.
  I have an idea how to handle it, but it will require some thinking, and is
  not strictly required here, so I've left that for later.

  https://rise4fun.com/Alive/Pkmg

  Reviewers: spatel, craig.topper, eli.friedman, jingyue
  Reviewed By: spatel
  Subscribers: llvm-commits
  Was reviewed as part of https://reviews.llvm.org/D45631

  llvm-svn: 330103
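A sketch of the kind of input this enables (hypothetical names): the operands provably share no set bits, so the xor is really an or:
```
define i32 @xor_no_common_bits(i32 %a, i32 %b) {
  %lo = and i32 %a, 15    ; only the low 4 bits can be set
  %hi = and i32 %b, -16   ; only the upper bits can be set
  %r  = xor i32 %lo, %hi  ; no common bits --> same as 'or i32 %lo, %hi'
  ret i32 %r
}
```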
* Eliminate a bitwise 'not' op of 'not' min/max by inverting the min/max.
  Artur Gainullin | 2018-04-11 | 1 file | -0/+30

  Bitwise 'not' of the min/max could be eliminated in the pattern:

    %notx = xor i32 %x, -1
    %cmp1 = icmp sgt[slt/ugt/ult] i32 %notx, %y
    %smax = select i1 %cmp1, i32 %notx, i32 %y
    %res = xor i32 %smax, -1

  https://rise4fun.com/Alive/lCN

  Reviewers: spatel
  Reviewed by: spatel
  Subscribers: a.elovikov, llvm-commits
  Differential Revision: https://reviews.llvm.org/D45317

  llvm-svn: 329791
* [PatternMatch] allow undef elements when matching vector FP +0.0
  Sanjay Patel | 2018-03-25 | 1 file | -2/+2

  This continues the FP constant pattern-matching improvements from:
  https://reviews.llvm.org/rL327627
  https://reviews.llvm.org/rL327339
  https://reviews.llvm.org/rL327307

  Several integer constant matchers also have this ability. I'm separating
  matching of integer/pointer null from FP positive zero and
  renaming/commenting to make the functionality clearer.

  llvm-svn: 328461
* [InstCombine] add folds for xor-of-icmp signbit tests (PR36682)
  Sanjay Patel | 2018-03-22 | 1 file | -0/+28

  This is a retry of r328119, which was reverted at r328145 because it could
  crash by trying to combine icmps with different operand types. This version
  has a check for that and additional tests.

  Original commit message:

  This is part of solving:
  https://bugs.llvm.org/show_bug.cgi?id=36682

  There's also a leftover improvement from the long-ago-closed:
  https://bugs.llvm.org/show_bug.cgi?id=5438

  https://rise4fun.com/Alive/dC1

  llvm-svn: 328197
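A minimal sketch of one such signbit-test fold (hypothetical function name): xor'ing two sign tests compares the sign bits, which is itself a sign test of the xor:
```
define i1 @xor_of_signbit_tests(i32 %x, i32 %y) {
  ; (x < 0) ^ (y < 0) --> (x ^ y) < 0
  %a = icmp slt i32 %x, 0
  %b = icmp slt i32 %y, 0
  %r = xor i1 %a, %b
  ret i1 %r
}
```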
* Fix a couple of layering violations in Transforms
  David Blaikie | 2018-03-21 | 1 file | -1/+1

  Remove #include of Transforms/Scalar.h from Transforms/Utils to fix layering.
  Transforms depends on Transforms/Utils, not the other way around. So remove
  the header and the "createStripGCRelocatesPass" function declaration (and
  definition) that is unused and motivated this dependency.

  Move Transforms/Utils/Local.h into Analysis because it's used by
  Analysis/MemoryBuiltins.cpp.

  llvm-svn: 328165
* Revert r328119 "[InstCombine] add folds for xor-of-icmp signbit tests (PR36682)"
  Reid Kleckner | 2018-03-21 | 1 file | -30/+0

  This asserts when compiling safe_numerics_unittest.cpp in Chromium with MSan.

  llvm-svn: 328145
* [InstCombine] add folds for xor-of-icmp signbit tests (PR36682)
  Sanjay Patel | 2018-03-21 | 1 file | -0/+30

  This is part of solving:
  https://bugs.llvm.org/show_bug.cgi?id=36682

  There's also a leftover improvement from the long-ago-closed:
  https://bugs.llvm.org/show_bug.cgi?id=5438

  https://rise4fun.com/Alive/dC1

  llvm-svn: 328119
* Simplify more cases of logical ops of masked icmps.
  Hiroshi Yamauchi | 2018-03-13 | 1 file | -17/+199

  Summary:
  For example,
    ((X & 255) != 0) && ((X & 15) == 8) -> ((X & 15) == 8)
    ((X & 7) != 0) && ((X & 15) == 8) -> false

  Reviewers: davidxl
  Reviewed By: davidxl
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D43835

  llvm-svn: 327450
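The first example from the summary, written out as runnable IR (hypothetical function name):
```
define i1 @masked_icmps(i32 %x) {
  ; ((x & 255) != 0) && ((x & 15) == 8) --> ((x & 15) == 8),
  ; because (x & 15) == 8 already implies (x & 255) != 0
  %m1 = and i32 %x, 255
  %c1 = icmp ne i32 %m1, 0
  %m2 = and i32 %x, 15
  %c2 = icmp eq i32 %m2, 8
  %r  = and i1 %c1, %c2
  ret i1 %r
}
```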
* [InstCombine] Replace calls to getNumUses with hasNUses or hasNUsesOrMore
  Craig Topper | 2018-03-12 | 1 file | -3/+3

  getNumUses is a linear-time operation. It traverses the user linked list to
  the end and counts as it goes. Since we are only interested in small constant
  counts, we should use hasNUses or hasNUsesOrMore, which terminate the
  traversal as soon as they can provide the answer.

  There are still two other locations in InstCombine, but changing those would
  force a rebase of D44266, which, if accepted, would remove them.

  Differential Revision: https://reviews.llvm.org/D44398

  llvm-svn: 327315
* [InstCombine] move constant check into foldBinOpIntoSelectOrPhi; NFCI
  Sanjay Patel | 2018-02-28 | 1 file | -9/+6

  Also, rename 'foldOpWithConstantIntoOperand' because that's annoyingly vague.
  The constant check is redundant in some cases, but it allows removing
  duplication for most of the calls.

  llvm-svn: 326329
* [InstCombine] Add constant vector support for ~(C >> Y) --> ~C >> Y
  Simon Pilgrim | 2018-02-10 | 1 file | -5/+7

  Includes adding an m_NonNegative constant pattern matcher.

  llvm-svn: 324825
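A scalar sketch of the ashr form of this fold (constants are illustrative; the arithmetic shift replicates the sign bit, so complementing before or after the shift is equivalent):
```
define i32 @not_of_ashr(i32 %y) {
  ; ~(-42 >> y) --> ~(-42) >> y == 41 >> y (arithmetic shift)
  %s = ashr i32 -42, %y
  %r = xor i32 %s, -1
  ret i32 %r
}
```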
* [InstCombine] narrow masked zexted binops (PR35792)
  Sanjay Patel | 2018-01-25 | 1 file | -0/+70

  This is guarded by shouldChangeType(), so the tests show that we don't do
  the fold if the narrower type is not legal. Note that there is a proposal
  (D42424) that would change the results for the specific cases shown in these
  tests. That difference is also discussed in PR35792:
  https://bugs.llvm.org/show_bug.cgi?id=35792

  Alive proofs for the cases handled here as well as the bitwise logic binops
  that we should already do better on:
  https://rise4fun.com/Alive/c97
  https://rise4fun.com/Alive/Lc5E
  https://rise4fun.com/Alive/kdf

  llvm-svn: 323437
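A sketch of the masked zexted binop shape (hypothetical names); when i8 is a desirable type per shouldChangeType(), the add can be performed in i8 and zexted afterward:
```
define i32 @narrow_masked_add(i8 %a, i8 %b) {
  %za = zext i8 %a to i32
  %zb = zext i8 %b to i32
  %s  = add i32 %za, %zb
  %r  = and i32 %s, 255   ; --> zext (add i8 %a, %b) to i32
  ret i32 %r
}
```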
* [InstCombine] canonicalize shifty abs(): ashr+add+xor --> cmp+neg+sel
  Sanjay Patel | 2017-12-16 | 1 file | -0/+20

  We want to do this for 2 reasons:
  1. Value tracking does not recognize the ashr variant, so it would fail to
     match for cases like D39766.
  2. DAGCombiner does better at producing optimal codegen when we have the
     cmp+sel pattern.

  More detail about what happens in the backend:
  1. DAGCombiner has a generic transform for all targets to convert the scalar
     cmp+sel variant of abs into the shift variant. That is the opposite of
     this IR canonicalization.
  2. DAGCombiner has a generic transform for all targets to convert the vector
     cmp+sel variant of abs into either an ABS node or the shift variant. That
     is again the opposite of this IR canonicalization.
  3. DAGCombiner has a generic transform for all targets to convert the exact
     shift variants produced by #1 or #2 into an ISD::ABS node. Note: it would
     be an efficiency improvement if we had #1 go directly to an ABS node when
     that's legal/custom.
  4. The pattern matching above is incomplete, so it is possible to escape the
     intended/optimal codegen in a variety of ways.
     a. For #2, the vector path is missing the case for setlt with a '1'
        constant.
     b. For #3, we are missing a match for commuted versions of the shift
        variants.
  5. Therefore, this IR canonicalization can only help get us to the optimal
     codegen. The version of cmp+sel produced by this patch will be recognized
     in the DAG and converted to an ABS node when possible, or to the shift
     sequence when not.
  6. In the following examples with this patch applied, we may get conditional
     moves rather than the shift produced by the generic DAGCombiner
     transforms. The conditional move is created using a target-specific
     decision for any given target. Whether it is optimal or not for a
     particular subtarget may be up for debate.

  define i32 @abs_shifty(i32 %x) {
    %signbit = ashr i32 %x, 31
    %add = add i32 %signbit, %x
    %abs = xor i32 %signbit, %add
    ret i32 %abs
  }

  define i32 @abs_cmpsubsel(i32 %x) {
    %cmp = icmp slt i32 %x, zeroinitializer
    %sub = sub i32 zeroinitializer, %x
    %abs = select i1 %cmp, i32 %sub, i32 %x
    ret i32 %abs
  }

  define <4 x i32> @abs_shifty_vec(<4 x i32> %x) {
    %signbit = ashr <4 x i32> %x, <i32 31, i32 31, i32 31, i32 31>
    %add = add <4 x i32> %signbit, %x
    %abs = xor <4 x i32> %signbit, %add
    ret <4 x i32> %abs
  }

  define <4 x i32> @abs_cmpsubsel_vec(<4 x i32> %x) {
    %cmp = icmp slt <4 x i32> %x, zeroinitializer
    %sub = sub <4 x i32> zeroinitializer, %x
    %abs = select <4 x i1> %cmp, <4 x i32> %sub, <4 x i32> %x
    ret <4 x i32> %abs
  }

  > $ ./opt -instcombine shiftyabs.ll -S | ./llc -o - -mtriple=x86_64 -mattr=avx
  > abs_shifty:
  >   movl %edi, %eax
  >   negl %eax
  >   cmovll %edi, %eax
  >   retq
  >
  > abs_cmpsubsel:
  >   movl %edi, %eax
  >   negl %eax
  >   cmovll %edi, %eax
  >   retq
  >
  > abs_shifty_vec:
  >   vpabsd %xmm0, %xmm0
  >   retq
  >
  > abs_cmpsubsel_vec:
  >   vpabsd %xmm0, %xmm0
  >   retq
  >
  > $ ./opt -instcombine shiftyabs.ll -S | ./llc -o - -mtriple=aarch64
  > abs_shifty:
  >   cmp w0, #0 // =0
  >   cneg w0, w0, mi
  >   ret
  >
  > abs_cmpsubsel:
  >   cmp w0, #0 // =0
  >   cneg w0, w0, mi
  >   ret
  >
  > abs_shifty_vec:
  >   abs v0.4s, v0.4s
  >   ret
  >
  > abs_cmpsubsel_vec:
  >   abs v0.4s, v0.4s
  >   ret
  >
  > $ ./opt -instcombine shiftyabs.ll -S | ./llc -o - -mtriple=powerpc64le
  > abs_shifty:
  >   srawi 4, 3, 31
  >   add 3, 3, 4
  >   xor 3, 3, 4
  >   blr
  >
  > abs_cmpsubsel:
  >   srawi 4, 3, 31
  >   add 3, 3, 4
  >   xor 3, 3, 4
  >   blr
  >
  > abs_shifty_vec:
  >   vspltisw 3, -16
  >   vspltisw 4, 15
  >   vsubuwm 3, 4, 3
  >   vsraw 3, 2, 3
  >   vadduwm 2, 2, 3
  >   xxlxor 34, 34, 35
  >   blr
  >
  > abs_cmpsubsel_vec:
  >   vspltisw 3, -16
  >   vspltisw 4, 15
  >   vsubuwm 3, 4, 3
  >   vsraw 3, 2, 3
  >   vadduwm 2, 2, 3
  >   xxlxor 34, 34, 35
  >   blr

  Differential Revision: https://reviews.llvm.org/D40984

  llvm-svn: 320921
* [ValueTracking, InstCombine] canonicalize fcmp ord/uno with non-NAN ops to null constants
  Sanjay Patel | 2017-09-05 | 1 file | -15/+6

  This is a preliminary step towards solving the remaining part of PR27145 -
  IR for isfinite():
  https://bugs.llvm.org/show_bug.cgi?id=27145

  In order to solve that one more generally, we need to add matching for
  and/or of fcmp ord/uno with a constant operand.

  But while looking at those patterns, I realized we were missing a
  canonicalization for nonzero constants. Rather than limiting to just folds
  for constants, we're adding a general value tracking method for this based
  on an existing DAG helper.

  By transforming everything to 0.0, we can simplify the existing code in
  foldLogicOfFCmps() and pick up missing vector folds.

  Differential Revision: https://reviews.llvm.org/D37427

  llvm-svn: 312591
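A sketch of the canonicalization (constant is illustrative): an fcmp ord/uno against a known non-NaN operand only tests the other operand for NaN, so it becomes a compare against 0.0:
```
define i1 @ord_nonnan_operand(float %x) {
  ; fcmp ord %x, 42.0 --> fcmp ord %x, 0.0  (both just test %x for NaN)
  %r = fcmp ord float %x, 42.0
  ret i1 %r
}
```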
* [InstCombine] replace unnecessary fcmp fold with assert
  Sanjay Patel | 2017-09-02 | 1 file | -6/+3

  See https://reviews.llvm.org/rL312411 for related InstSimplify tests.

  llvm-svn: 312421
* [InstCombine] combine foldAndOfFCmps and foldOrOfFcmps; NFCI
  Sanjay Patel | 2017-09-02 | 1 file | -75/+30

  In addition to removing chunks of duplicated code, we don't want these two
  to diverge. If there's a fold for one, there should be a fold of the other
  via DeMorgan's Laws.

  llvm-svn: 312420
* [InstCombine] fix misnamed locals and use them to reduce code; NFCI
  Sanjay Patel | 2017-09-02 | 1 file | -34/+34

  We had these locals:
    Value *Op0RHS = LHS->getOperand(1);
    Value *Op1LHS = RHS->getOperand(0);
  ...so we confusingly transposed the meaning of left/right and op0/op1.

  llvm-svn: 312418
* [InstCombine] remove unnecessary code; NFC
  Sanjay Patel | 2017-09-02 | 1 file | -3/+0

  llvm-svn: 312416
* [InstCombine] move related functions next to each other; NFC
  Sanjay Patel | 2017-09-02 | 1 file | -51/+51

  This makes it easier to see that they're almost duplicates.
  As with the similar icmp functions, there should be identical folds for
  both logic ops because those are DeMorganized variants.

  llvm-svn: 312415
* [InstCombine] Don't require the compare types to be the same in getMaskedTypeForICmpPair.
  Craig Topper | 2017-09-01 | 1 file | -3/+2

  A future patch will make the code look through truncates feeding the
  compare, so the compares might be different types, but the pre-truncated
  types might be the same.

  This should be safe because we still require the same Value* to be used,
  truncated or not, in both compares. So that serves to ensure the types are
  the same.

  llvm-svn: 312381
* [InstCombine] When converting decomposeBitTestICmp's APInt return to ConstantInt, make sure we use the type from the Value* that was also returned from decomposeBitTestICmp.
  Craig Topper | 2017-09-01 | 1 file | -2/+2

  Previously we used the type from the LHS of the compare, but a future patch
  will change decomposeBitTestICmp to look through truncates, so it will
  return a pre-truncated Value* and the type needs to match that.

  llvm-svn: 312380
* [InstCombine] Remove check for sext of vector icmp from shouldOptimizeCast
  Craig Topper | 2017-08-22 | 1 file | -6/+0

  Looks like for 'and' and 'or' we end up performing at least some of the
  transformations this is blocking in a roundabout way anyway.

  For 'and sext(cmp1), sext(cmp2)' we end up later turning it into
  'select cmp1, sext(cmp2), 0'. Then we optimize that back to
  'sext (and cmp1, cmp2)'. This is the same result we would have gotten if
  shouldOptimizeCast hadn't blocked it. We do something analogous for 'or'.

  With this patch we allow that transformation to happen directly in
  foldCastedBitwiseLogic. And we now support the same thing for 'xor'. This
  is definitely opening up many other cases, but since we already went around
  it for some cases hopefully it's ok.

  Differential Revision: https://reviews.llvm.org/D36213

  llvm-svn: 311508
* [InstCombine] Move the checks for pointer types in getMaskedTypeForICmpPair earlier in the function
  Craig Topper | 2017-08-21 | 1 file | -12/+6

  I don't think there's any reason to have them scattered about and on all 4
  operands. We already have an early check that both compares must be the
  same type. And within a given compare the LHS and RHS must have the same
  type. Beyond that, I don't think there's any way this function returns
  anything valid for pointer types. So let's just return early and be done
  with it.

  Differential Revision: https://reviews.llvm.org/D36561

  llvm-svn: 311383
* Recommit r310869, "[InstSimplify][InstCombine] Modify the interface of decomposeBitTestICmp and use it in InstSimplify"
  Craig Topper | 2017-08-14 | 1 file | -3/+15

  This recommits r310869 with the moved files and no extra changes.

  Original commit message:

  This addresses a fixme in InstSimplify about using decomposeBitTest. This
  also fixes InstSimplify to handle ugt and ult compares too.

  I've modified the interface a little to return only the APInt version of
  the mask that InstSimplify needs. InstCombine now has a small wrapper
  routine to create a Constant out of it. I've also dropped the returning of
  0 since InstSimplify doesn't need that, so InstCombine creates a zero
  constant itself.

  I also had to make decomposeBitTest support vectors since InstSimplify
  needs that.

  As InstSimplify can't use something from the Transforms library, I've moved
  the CmpInstAnalysis code to the Analysis library.

  Differential Revision: https://reviews.llvm.org/D36593

  llvm-svn: 310889
* Revert r310869, "[InstSimplify][InstCombine] Modify the interface of decomposeBitTestICmp and use it in InstSimplify"
  Craig Topper | 2017-08-14 | 1 file | -15/+3

  Failed to add the two files that moved. And then added an extra change I
  didn't mean to while trying to fix that. Reverting everything.

  llvm-svn: 310873
* [InstSimplify][InstCombine] Modify the interface of decomposeBitTestICmp and use it in InstSimplify
  Craig Topper | 2017-08-14 | 1 file | -3/+15

  This addresses a fixme in InstSimplify about using decomposeBitTest. This
  also fixes InstSimplify to handle ugt and ult compares too.

  I've modified the interface a little to return only the APInt version of
  the mask that InstSimplify needs. InstCombine now has a small wrapper
  routine to create a Constant out of it. I've also dropped the returning of
  0 since InstSimplify doesn't need that, so InstCombine creates a zero
  constant itself.

  I also had to make decomposeBitTest support vectors since InstSimplify
  needs that.

  As InstSimplify can't use something from the Transforms library, I've moved
  the CmpInstAnalysis code to the Analysis library.

  Differential Revision: https://reviews.llvm.org/D36593

  llvm-svn: 310869
* [InstCombine] Simplify and inline FoldOrWithConstants/FoldXorWithConstants
  Craig Topper | 2017-08-14 | 1 file | -85/+19

  Summary:
  These functions were overly complicated. The body of this function was
  rechecking for an And operation to find the constant, but we already knew
  we were looking at two Ands ORed together and the pieces are in variables.
  We already had earlier nearby code that checked for ConstantInts. So just
  inline the remaining parts into the earlier code.

  Next step is to use m_APInt instead of ConstantInt.

  Reviewers: spatel, efriedma, davide, majnemer
  Reviewed By: spatel
  Subscribers: zzheng, llvm-commits
  Differential Revision: https://reviews.llvm.org/D36439

  llvm-svn: 310806
* [InstCombine] Make (X|C1)^C2 -> X^(C1^C2) iff X&~C1 == 0 work for splat vectors
  Craig Topper | 2017-08-10 | 1 file | -23/+18

  This also corrects the description to match what was actually implemented.
  The old comment said X^(C1|C2), but it implemented X^((C1|C2)&~(C1&C2)).
  I believe ((C1|C2)&~(C1&C2)) is equivalent to (C1^C2).

  Differential Revision: https://reviews.llvm.org/D36505

  llvm-svn: 310658
* [InstCombine] Fix a crash in getSelectCondition if we happen to have two inverse vectors of i1 constants.
  Craig Topper | 2017-08-10 | 1 file | -2/+3

  We used to try to truncate the constant vector to vXi1, but if it's already
  i1 this would fail. Instead we now use IRBuilder::getZExtOrTrunc, which
  should check the type and only create a trunc if needed. I believe this
  should trigger constant folding in the IRBuilder and ultimately do the same
  thing, just with the additional type check.

  llvm-svn: 310639
* [InstCombine] Use regular dyn_cast instead of a matcher for a simple case. NFC
  Craig Topper | 2017-08-09 | 1 file | -2/+2

  llvm-svn: 310446