path: root/llvm/lib/Transforms/InstCombine
...
* [InstCombine] Handle atomic memset in the same way as regular memset (Daniel Neilson, 2018-05-11, 2 files, -4/+6)

  Summary: This change adds handling of the atomic memset intrinsic to the code path that simplifies the regular memset. In practice this means that we will now also expand a small constant-length atomic memset into a single unordered atomic store.

  Reviewers: apilipenko, skatkov, mkazantsev, anna, reames
  Reviewed By: reames
  Subscribers: reames, llvm-commits
  Differential Revision: https://reviews.llvm.org/D46660
  llvm-svn: 332132
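
  For illustration, a minimal IR sketch of the new expansion (pointer name and sizes are hypothetical; the element-wise atomic intrinsic is the one documented in the LangRef):

    ; before: constant-length (4-byte) atomic memset of zero, element size 4
    call void @llvm.memset.element.unordered.atomic.p0i8.i32(i8* align 4 %p, i8 0, i32 4, i32 4)
    ; after: a single unordered atomic store
    %bc = bitcast i8* %p to i32*
    store atomic i32 0, i32* %bc unordered, align 4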
* [InstCombine] Unify handling of atomic memtransfer with non-atomic memtransfer (Daniel Neilson, 2018-05-11, 2 files, -112/+32)

  Summary:
  This change reworks the handling of atomic memcpy within the instcombine pass. Previously, a constant-length atomic memcpy would be lowered into loads & stores as long as no more than 16 load/store pairs are created. This is quite different from the lowering done for a non-atomic memcpy, which only ever lowers into a single load/store pair of no more than 8 bytes. Larger constant-sized memcpy calls are expanded to load/stores in later passes, such as SelectionDAG lowering.

  In this change the behaviour for atomic memcpy is unified with non-atomic memcpy; atomic memcpy is now treated in the same way as non-atomic memcpy has always been. We leave it to later passes to lower longer-length atomic memcpy calls.

  Due to the structure of the pass's handling of memtransfer intrinsics, this change also gives us handling of atomic memmove that we did not previously have.

  Reviewers: apilipenko, skatkov, mkazantsev, anna, reames
  Reviewed By: reames
  Subscribers: reames, llvm-commits
  Differential Revision: https://reviews.llvm.org/D46658
  llvm-svn: 332093
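
  A matching sketch for the memcpy case (names hypothetical): a short constant-length atomic memcpy becomes one unordered atomic load/store pair, mirroring the non-atomic lowering.

    ; before: 4-byte atomic memcpy, element size 4
    call void @llvm.memcpy.element.unordered.atomic.p0i8.p0i8.i32(i8* align 4 %d, i8* align 4 %s, i32 4, i32 4)
    ; after: one unordered atomic load/store pair
    %sb = bitcast i8* %s to i32*
    %db = bitcast i8* %d to i32*
    %v = load atomic i32, i32* %sb unordered, align 4
    store atomic i32 %v, i32* %db unordered, align 4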
* [InstCombine] Replace an 'if' that should always be true with an assert. (Craig Topper, 2018-05-10, 1 file, -7/+6)

  The bitwidth of the operation should always be wider than the result width of the truncate since we don't recurse through any width changing operations.

  llvm-svn: 332055
* [InstCombine] add folds for minnum(-a, -b) --> -maxnum(a, b) (Sanjay Patel, 2018-05-10, 1 file, -0/+16)

  This is similar to what we do for integer min/max with 'not' ops (rL321882).

  This should fix:
  https://bugs.llvm.org/show_bug.cgi?id=37404
  https://bugs.llvm.org/show_bug.cgi?id=37405

  llvm-svn: 332031
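
  A sketch of the fold in IR (value names hypothetical; fneg is spelled as an fsub from -0.0 here):

    %na = fsub float -0.000000e+00, %a
    %nb = fsub float -0.000000e+00, %b
    %r  = call float @llvm.minnum.f32(float %na, float %nb)
    ; -->
    %mx = call float @llvm.maxnum.f32(float %a, float %b)
    %r  = fsub float -0.000000e+00, %mx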
* [InstCombine] Moving overflow computation logic from InstCombine to ValueTracking; NFC (Omer Paparo Bivas, 2018-05-10, 3 files, -86/+31)

  Differential Revision: https://reviews.llvm.org/D46704
  Change-Id: Ifabcbe431a2169743b3cc310f2a34fd706f13f02
  llvm-svn: 332026
* [InstCombine] Only propagate known leading zeros from udiv input to output. (Benjamin Kramer, 2018-05-10, 1 file, -2/+7)

  Put in a conservatively correct estimate for now. Avoids miscompiling clang in FDO mode. This is really tricky to trigger in reality, as basically all interesting cases will be folded away by computeKnownBits earlier; I was unable to find a reasonably small test case.

  llvm-svn: 331975
* [InstCombine] Reorder an if condition to put a cheap check in front of a computeKnownBits call. NFC (Craig Topper, 2018-05-10, 1 file, -3/+3)

  llvm-svn: 331948
* [InstCombine] Use APInt::getBitsSetFrom to shorten a line and fix an 80-column violation. NFC (Craig Topper, 2018-05-10, 1 file, -2/+2)

  Fix a similar line in the same function.

  llvm-svn: 331947
* [InstCombine] fix a signedness warning which broke -Werror builds (Philip Reames, 2018-05-10, 1 file, -1/+1)

  llvm-svn: 331944
* [InstCombine] Widen guards with conditions between (Philip Reames, 2018-05-09, 1 file, -1/+22)

  The previous handling for guard widening in InstCombine was extremely restrictive. In particular, it didn't handle the common case where we had two guards separated by a single icmp. Handle this by scanning through a small fixed window of instructions to find the next guard if needed.

  Differential Revision: https://reviews.llvm.org/D46203
  llvm-svn: 331935
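
  A hedged sketch of the widening (condition names hypothetical): two guards separated by an icmp collapse into one guard on the conjunction.

    call void (i1, ...) @llvm.experimental.guard(i1 %c1) [ "deopt"() ]
    %c2 = icmp ult i32 %a, %b
    call void (i1, ...) @llvm.experimental.guard(i1 %c2) [ "deopt"() ]
    ; -->
    %c2 = icmp ult i32 %a, %b
    %wide = and i1 %c1, %c2
    call void (i1, ...) @llvm.experimental.guard(i1 %wide) [ "deopt"() ]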
* [InstCombine] Teach SimplifyDemandedBits that udiv doesn't demand low dividend bits that are zero in the divisor (Benjamin Kramer, 2018-05-09, 1 file, -0/+16)

  This is safe as long as the udiv is not exact. The pattern is not common in C++ code, but comes up all the time in code generated by XLA's GPU backend.

  Differential Revision: https://reviews.llvm.org/D46647
  llvm-svn: 331933
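
  A consequence of the new demanded-bits rule, as a sketch (divisor 8 has three low zero bits, so those dividend bits are dead):

    %t = or i32 %x, 7      ; touches only bits that are zero in the divisor
    %r = udiv i32 %t, 8    ; not 'exact', so the low 3 bits of %t are ignored
    ; -->
    %r = udiv i32 %x, 8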
* Fix a bunch of places where operator-> was used directly on the return from dyn_cast. (Craig Topper, 2018-05-05, 1 file, -1/+1)

  Inspired by r331508, I did a grep and found these. Mostly just change from dyn_cast to cast. Some cases also showed a dyn_cast result being converted to bool, so those I changed to isa.

  llvm-svn: 331577
* [InstCombine] refine select-of-constants to bitwise ops (Sanjay Patel, 2018-05-03, 1 file, -57/+34)

  Add logic for the special case when a cmp+select can clearly be reduced to just a bitwise logic instruction, and remove an over-reaching chunk of general purpose bit magic. The primary goal is to remove cases where we are not improving the IR instruction count when doing these select transforms, and in all cases here that is true.

  In the motivating 3-way compare tests, there are further improvements because we can combine/propagate select values (not sure if that belongs in instcombine, but it's there for now).

  DAGCombiner has folds to turn some of these selects into bit magic, so there should be no difference in the end result in those cases. Not all constant combinations are handled there yet, however, so it is possible that some targets will see more cmov/csel codegen with this change in IR canonicalization.

  Ideally, we'll go further to *not* turn selects into multiple logic/math ops in instcombine, and we'll canonicalize to selects. But we should make sure that this step does not result in regressions first (and if it does, we should fix those in the backend).

  The general direction for this change was discussed here:
  http://lists.llvm.org/pipermail/llvm-dev/2016-September/105373.html
  http://lists.llvm.org/pipermail/llvm-dev/2017-July/114885.html

  Alive proofs for the new bit magic:
  https://rise4fun.com/Alive/XG7

  Differential Revision: https://reviews.llvm.org/D46086
  llvm-svn: 331486
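
  One shape where a cmp+select clearly reduces to pure bit logic (an illustrative instance, not taken from the patch):

    %c = icmp slt i32 %x, 0
    %r = select i1 %c, i32 2, i32 0
    ; -->
    %s = lshr i32 %x, 30    ; bring the sign bit down to bit 1
    %r = and i32 %s, 2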
* Remove @brief commands from doxygen comments, too. (Adrian Prantl, 2018-05-01, 2 files, -4/+4)

  This is a follow-up to r331272.

  We've been running doxygen with the autobrief option for a couple of years now. This makes the \brief markers in our comments redundant. Since they are a visual distraction and we don't want to encourage more \brief markers in new code either, this patch removes them all.

  Patch produced by

    for i in $(git grep -l '\@brief'); do perl -pi -e 's/\@brief //g' $i & done

  https://reviews.llvm.org/D46290
  llvm-svn: 331275
* Remove \brief commands from doxygen comments. (Adrian Prantl, 2018-05-01, 7 files, -39/+39)

  We've been running doxygen with the autobrief option for a couple of years now. This makes the \brief markers in our comments redundant. Since they are a visual distraction and we don't want to encourage more \brief markers in new code either, this patch removes them all.

  Patch produced by

    for i in $(git grep -l '\\brief'); do perl -pi -e 's/\\brief //g' $i & done

  Differential Revision: https://reviews.llvm.org/D46290
  llvm-svn: 331272
* [InstCombine] Adjusting bswap pattern matching to hold for And/Shift mixed case (Omer Paparo Bivas, 2018-05-01, 1 file, -1/+12)

  Differential Revision: https://reviews.llvm.org/D45731
  Change-Id: I85d4226504e954933c41598327c91b2d08192a9d
  llvm-svn: 331257
* [InstCombine] Unfold masked merge with constant mask (Roman Lebedev, 2018-04-30, 1 file, -1/+15)

  Summary:
  As discussed in D45733, we want to do this in InstCombine.
  https://rise4fun.com/Alive/LGk

  Reviewers: spatel, craig.topper
  Reviewed By: spatel
  Subscribers: chandlerc, xbolva00, llvm-commits
  Differential Revision: https://reviews.llvm.org/D45867
  llvm-svn: 331205
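
  The unfold with a constant mask, sketched in IR (M = 255 chosen for illustration; ((x ^ y) & M) ^ y is equivalent to (x & M) | (y & ~M)):

    %n = xor i32 %x, %y
    %a = and i32 %n, 255
    %r = xor i32 %a, %y
    ; -->
    %mx = and i32 %x, 255
    %my = and i32 %y, -256    ; -256 == ~255
    %r  = or i32 %mx, %my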
* [InstCombine] Canonicalize variable mask in masked merge (Roman Lebedev, 2018-04-28, 1 file, -0/+33)

  Summary:
  Masked merge has a pattern of: `((x ^ y) & M) ^ y`. But there is no difference between `((x ^ y) & M) ^ y` and `((x ^ y) & ~M) ^ x`, so we should canonicalize the pattern to the non-inverted mask.
  https://rise4fun.com/Alive/Yol

  Reviewers: spatel, craig.topper
  Reviewed By: spatel
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D45664
  llvm-svn: 331112
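
  The canonicalization in IR form (names hypothetical; the inverted-mask variant rewrites to the non-inverted one):

    %im = xor i32 %m, -1      ; ~M
    %n  = xor i32 %x, %y
    %a  = and i32 %n, %im
    %r  = xor i32 %a, %x
    ; -->
    %n  = xor i32 %x, %y
    %a  = and i32 %n, %m
    %r  = xor i32 %a, %y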
* [PatternMatch] Stabilize the matching order of commutative matchers (Roman Lebedev, 2018-04-27, 2 files, -31/+14)

  Summary:
  Currently, we
  1. match `LHS` matcher to the `first` operand of binary operator,
  2. and then match `RHS` matcher to the `second` operand of binary operator.
  If that does not match, we swap the `LHS` and `RHS` matchers:
  1. match `RHS` matcher to the `first` operand of binary operator,
  2. and then match `LHS` matcher to the `second` operand of binary operator.

  This works ok. But it complicates writing of commutative matchers, where one would like to match (`m_Value()`) the value on one side, and use (`m_Specific()`) it on the other side.

  This is additionally complicated by the fact that `m_Specific()` stores the `Value *`, not `Value **`, so it won't work at all out of the box.

  The last problem is trivially solved by adding a new `m_c_Specific()` that stores the `Value **`, not `Value *`. I'm choosing to add a new matcher, not change the existing one, because I guess all the current users are ok with the existing behavior, and this additional pointer indirection may have performance drawbacks. Also, I'm storing a pointer, not a reference, because for some mysterious-to-me reason it did not work with the reference.

  The first problem appears trivial, too. Currently, we
  1. match `LHS` matcher to the `first` operand of binary operator,
  2. and then match `RHS` matcher to the `second` operand of binary operator.
  If that does not match, we swap the ~~`LHS` and `RHS` matchers~~ **operands**:
  1. match ~~`RHS`~~ **`LHS`** matcher to the ~~`first`~~ **`second`** operand of binary operator,
  2. and then match ~~`LHS`~~ **`RHS`** matcher to the ~~`second`~~ **`first`** operand of binary operator.

  Surprisingly, `$ ninja check-llvm` still passes with this. But I expect the bots will disagree..

  The motivational unittest is included. I'd like to use this in D45664.

  Reviewers: spatel, craig.topper, arsenm, RKSimon
  Reviewed By: craig.topper
  Subscribers: xbolva00, wdng, llvm-commits
  Differential Revision: https://reviews.llvm.org/D45828
  llvm-svn: 331085
* [InstCombine] Simplify Add with remainder expressions as operands. (Sanjoy Das, 2018-04-26, 2 files, -0/+116)

  Summary:
  Simplify the integer add expression X % C0 + ((X / C0) % C1) * C0 to X % (C0 * C1). This is a common pattern seen in code generated by the XLA GPU backend.

  Add test cases for this new optimization.

  Patch by Bixia Zheng!

  Reviewers: sanjoy
  Reviewed By: sanjoy
  Subscribers: efriedma, craig.topper, lebedev.ri, llvm-commits, jlebar
  Differential Revision: https://reviews.llvm.org/D45976
  llvm-svn: 330992
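
  With concrete constants (C0 = 4, C1 = 8; unsigned ops chosen for illustration):

    %r1 = urem i32 %x, 4
    %d  = udiv i32 %x, 4
    %r2 = urem i32 %d, 8
    %m  = mul i32 %r2, 4
    %r  = add i32 %r1, %m
    ; -->
    %r  = urem i32 %x, 32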
* [InstCombine] clean up foldSelectICmpAnd(); NFC (Sanjay Patel, 2018-04-25, 1 file, -46/+39)

  As discussed in D45862, we want to delete parts of this code because it can create more instructions than it removes. But we also want to preserve some folds that are winners, so tidy up what's here to make splitting the good from bad a bit easier.

  llvm-svn: 330841
* InstCombine: Fix layering by not including Scalar.h in InstCombine (David Blaikie, 2018-04-24, 1 file, -1/+6)

  (notionally Scalar.h is part of libLLVMScalarOpts, so it shouldn't be included by InstCombine, which doesn't/shouldn't need to depend on ScalarOpts)

  llvm-svn: 330669
* [PatternMatch] allow undef elements when matching a vector zero (Sanjay Patel, 2018-04-22, 1 file, -1/+0)

  This is the last step in getting constant pattern matchers to allow undef elements in constant vectors.

  I'm adding a dedicated m_ZeroInt() function and building m_Zero() from that. In most cases, calling code can be updated to use m_ZeroInt() directly when there's no need to match pointers, but I'm leaving that efficiency optimization as a follow-up step because it's not always clear when that's ok.

  There are just enough icmp folds in InstSimplify that can be used for integer or pointer types, that we probably still want a generic m_Zero() for those cases. Otherwise, we could eliminate it (and possibly add a m_NullPtr() as an alias for isa<ConstantPointerNull>()).

  We're conservatively returning a full zero vector (zeroinitializer) in InstSimplify/InstCombine on some of these folds (see diffs in InstSimplify), but I'm not sure if that's actually necessary in all cases. We may be able to propagate an undef lane instead. One test where this happens is marked with 'TODO'.

  llvm-svn: 330550
* [DebugInfo] Sink related dbg users when sinking in InstCombine (Bjorn Pettersson, 2018-04-18, 1 file, -1/+12)

  Summary:
  When sinking an instruction in InstCombine we now also sink the DbgInfoIntrinsics that are using the sunken value.

  Example) When sinking the load in this input

    bb.X:
      %0 = load i64, i64* %start, align 4, !dbg !31
      tail call void @llvm.dbg.value(metadata i64 %0, ...)
      br i1 %cond, label %for.end, label %for.body.lr.ph
    for.body.lr.ph:
      br label %for.body

  we now also move the dbg.value, like this

    bb.X:
      br i1 %cond, label %for.end, label %for.body.lr.ph
    for.body.lr.ph:
      %0 = load i64, i64* %start, align 4, !dbg !31
      tail call void @llvm.dbg.value(metadata i64 %0, ...)
      br label %for.body

  In the past we haven't moved the dbg.value, so we got

    bb.X:
      tail call void @llvm.dbg.value(metadata i64 %0, ...)
      br i1 %cond, label %for.end, label %for.body.lr.ph
    for.body.lr.ph:
      %0 = load i64, i64* %start, align 4, !dbg !31
      br label %for.body

  So in the past we got a debug-use before the def of %0. And that dbg.value was also on the path jumping to %for.end, for which %0 never was defined.

  CodeGenPrepare normally comes to the rescue later (when not moving the dbg.value), since it moves dbg.value intrinsics quite brutally, without really analysing if it is correct to move the intrinsic (see PR31878).

  So at the moment this patch isn't expected to have much impact, besides that it is moving the dbg.value already in opt, making the IR look more sane directly. This can be seen as a preparation to (hopefully) make it possible to turn off CodeGenPrepare::placeDbgValues later as a solution to PR31878.

  I also adjusted test/DebugInfo/X86/sdagsplit-1.ll to make the IR in the test case up-to-date with this behavior in InstCombine.

  Reviewers: rnk, vsk, aprantl
  Reviewed By: vsk, aprantl
  Subscribers: mattd, JDevlieghere, llvm-commits
  Tags: #debug-info
  Differential Revision: https://reviews.llvm.org/D45425
  llvm-svn: 330243
* [InstCombine] peek through bitcasted vector/array pointer GEP operand (Sanjay Patel, 2018-04-18, 1 file, -5/+25)

  The bitcast may be interfering with other combines or vectorization as shown in PR16739:
  https://bugs.llvm.org/show_bug.cgi?id=16739

  Most pointer-related optimizations are probably able to look through this bitcast, but removing the bitcast shrinks the IR, so it's at least a size savings.

  Differential Revision: https://reviews.llvm.org/D44833
  llvm-svn: 330237
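
  A sketch of the kind of GEP this lets us rewrite (types and names are hypothetical; both forms address element 2 of the array):

    %bc = bitcast [4 x i32]* %arr to i32*
    %g  = getelementptr inbounds i32, i32* %bc, i64 2
    ; -->
    %g  = getelementptr inbounds [4 x i32], [4 x i32]* %arr, i64 0, i64 2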
* [InstCombine] simplify code in SimplifyAssociativeOrCommutative; NFCI (Sanjay Patel, 2018-04-16, 1 file, -16/+11)

  llvm-svn: 330137
* [InstCombine] simplify getBinOpsForFactorization(); NFC (Sanjay Patel, 2018-04-16, 1 file, -25/+15)

  llvm-svn: 330129
* [InstCombine] simplify fneg+fadd folds; NFC (Sanjay Patel, 2018-04-16, 3 files, -26/+7)

  Two cleanups:
  1. As noted in D45453, we had tests that don't need FMF that were misplaced in the 'fast-math.ll' test file.
  2. This removes the final uses of dyn_castFNegVal, so that can be deleted. We use 'match' now.

  llvm-svn: 330126
* [InstCombine] fix formatting; NFC (Sanjay Patel, 2018-04-16, 1 file, -4/+2)

  llvm-svn: 330124
* [InstCombine] Simplify 'xor' to 'or' if no common bits are set. (Roman Lebedev, 2018-04-15, 1 file, -0/+4)

  Summary:
  In order to get the whole fold as specified in PR6773 (https://bugs.llvm.org/show_bug.cgi?id=6773), let's first handle the simple straight-forward things. Let's start with the `xor` -> `or` simplification.

  The one obvious thing missing here: the constant mask is not handled. I have an idea how to handle it, but it will require some thinking, and is not strictly required here, so I've left that for later.

  https://rise4fun.com/Alive/Pkmg

  Reviewers: spatel, craig.topper, eli.friedman, jingyue
  Reviewed By: spatel
  Subscribers: llvm-commits
  Was reviewed as part of https://reviews.llvm.org/D45631
  llvm-svn: 330103
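
  A sketch of a case where the no-common-bits condition is provable (names hypothetical):

    %a = shl i32 %x, 4     ; low 4 bits known zero
    %b = and i32 %y, 15    ; only low 4 bits can be set
    %r = xor i32 %a, %b
    ; -->
    %r = or i32 %a, %b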
* [InstCombine] simplify more code for distributive property; NFCI (Sanjay Patel, 2018-04-15, 1 file, -38/+20)

  Also, fix capitalization to current style. Follow-up to: rL330096.

  llvm-svn: 330097
* [InstCombine] simplify code for distributive property; NFCI (Sanjay Patel, 2018-04-15, 1 file, -19/+3)

  llvm-svn: 330096
* [InstCombine] Enable Add/Sub simplifications with only 'reassoc' FMF (Warren Ristow, 2018-04-14, 1 file, -3/+4)

  These simplifications were previously enabled only with isFast(), but that is more restrictive than required. Since r317488, FMF has 'reassoc' to control these cases at a finer level.

  llvm-svn: 330089
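
  One pattern of this kind, sketched under the assumption that constant factorization of fadd is among the affected simplifications (the exact gated folds are in the patch):

    %m1 = fmul reassoc float %x, 3.0
    %m2 = fmul reassoc float %x, 5.0
    %r  = fadd reassoc float %m1, %m2
    ; --> previously required the full 'fast' set
    %r  = fmul reassoc float %x, 8.0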
* [InstCombine]: foldSelectICmpAndAnd(): and is commutative (Roman Lebedev, 2018-04-13, 1 file, -24/+20)

  Summary:
  The fold added in D45108 did not account for the fact that the and instruction is commutative: if the mask is a variable, the mask variable and the fold variable may be swapped. I noticed this by accident when looking into PR6773 (https://bugs.llvm.org/show_bug.cgi?id=6773).

  This extends/generalizes that fold, so it is handled too.

  Reviewers: spatel, craig.topper
  Reviewed By: spatel
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D45539
  llvm-svn: 330001
* [X86] Remove the pmuldq/pmuludq intrinsics and replace with native IR. (Craig Topper, 2018-04-13, 2 files, -98/+0)

  This completes the work started in r329604 and r329605 when we changed clang to no longer use the intrinsics. We lost some InstCombine SimplifyDemandedBits optimizations through this change, as we aren't able to fold 'and', bitcast, shuffle very well.

  llvm-svn: 329990
* [InstCombine] limit X - (cast(-Y)) --> X + cast(Y) with hasOneUse() (Sanjay Patel, 2018-04-11, 1 file, -10/+10)

  llvm-svn: 329821
* Eliminate a bitwise 'not' op of 'not' min/max by inverting the min/max. (Artur Gainullin, 2018-04-11, 1 file, -0/+30)

  Bitwise 'not' of the min/max could be eliminated in the pattern:

    %notx = xor i32 %x, -1
    %cmp1 = icmp sgt[slt/ugt/ult] i32 %notx, %y
    %smax = select i1 %cmp1, i32 %notx, i32 %y
    %res = xor i32 %smax, -1

  https://rise4fun.com/Alive/lCN

  Reviewers: spatel
  Reviewed by: spatel
  Subscribers: a.elovikov, llvm-commits
  Differential Revision: https://reviews.llvm.org/D45317
  llvm-svn: 329791
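
  For the sgt (smax) case, the expected replacement is the inverted min with the other operand negated, roughly:

    %noty = xor i32 %y, -1
    %cmp2 = icmp slt i32 %x, %noty
    %res  = select i1 %cmp2, i32 %x, i32 %noty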
* [InstCombine] simplify code that propagates FMF; NFC (Sanjay Patel, 2018-04-07, 1 file, -12/+4)

  llvm-svn: 329503
* [InstCombine] Get rid of select of bittest (PR36950 / PR17564) (Roman Lebedev, 2018-04-07, 1 file, -0/+49)

  Summary:
  See PR36950 (https://bugs.llvm.org/show_bug.cgi?id=36950), PR17564 (https://bugs.llvm.org/show_bug.cgi?id=17564), D45065, D45107.
  https://godbolt.org/g/iAYRup

  Alive proof: https://rise4fun.com/Alive/uiH

  Testing: `ninja check-llvm`

  Reviewers: spatel, craig.topper
  Reviewed By: spatel
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D45108
  llvm-svn: 329492
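
  A minimal bittest shape of the kind addressed (illustrative; the patch handles a more general select-of-bittest, see D45108):

    %t = and i32 %x, 1
    %c = icmp eq i32 %t, 0
    %r = select i1 %c, i32 0, i32 1
    ; -->
    %r = and i32 %x, 1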
* [InstCombine] limit nsz: -(X - Y) --> Y - X to hasOneUse() (Sanjay Patel, 2018-04-06, 1 file, -12/+9)

  As noted in the post-commit discussion for r329350, we shouldn't generally assume that fsub is the same cost as fneg.

  llvm-svn: 329429
* [InstCombine] FP: Z - (X - Y) --> Z + (Y - X) (Sanjay Patel, 2018-04-05, 1 file, -2/+11)

  This restores what was lost with rL73243, but without re-introducing the bug that was present in the old code. Note that we already have these transforms if the ops are marked 'fast' (and I assume that's happening somewhere in the code added with rL170471), but we clearly don't need all of 'fast' for these transforms.

  llvm-svn: 329362
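
  The transform in IR (names hypothetical; the FMF shown is an assumption, since sign-of-zero still matters when X == Y):

    %t = fsub nsz float %x, %y
    %r = fsub nsz float %z, %t
    ; -->
    %n = fsub nsz float %y, %x
    %r = fadd nsz float %z, %n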
* [InstCombine] nsz: -(X - Y) --> Y - X (Sanjay Patel, 2018-04-05, 1 file, -4/+11)

  This restores part of the fold that was removed with rL73243 (PR4374).

  llvm-svn: 329350
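
  A sketch of the restored fold (the outer fneg is written as an fsub from -0.0):

    %t = fsub nsz float %x, %y
    %r = fsub nsz float -0.000000e+00, %t
    ; -->
    %r = fsub nsz float %y, %x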
* [InstCombine] Properly change GEP type when reassociating loop invariant GEP chains (Daniel Neilson, 2018-04-05, 1 file, -3/+36)

  Summary:
  This is a fix to PR37005.

  Essentially, rL328539 ([InstCombine] reassociate loop invariant GEP chains to enable LICM) contains a bug whereby it will convert:

    %src = getelementptr inbounds i8, i8* %base, <2 x i64> %val
    %res = getelementptr inbounds i8, <2 x i8*> %src, i64 %val2

  into:

    %src = getelementptr inbounds i8, i8* %base, i64 %val2
    %res = getelementptr inbounds i8, <2 x i8*> %src, <2 x i64> %val

  by swapping the index operands if the GEPs are in a loop, and %val is loop variant while %val2 is loop invariant.

  This fix recreates new GEP instructions if the index operand swap would result in the type of %src changing from vector to scalar, or vice versa.

  Reviewers: sebpop, spatel
  Reviewed By: sebpop
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D45287
  llvm-svn: 329331
* [InstCombine] use pattern matchers for fsub --> fadd folds (Sanjay Patel, 2018-04-05, 1 file, -4/+9)

  This allows folding for vectors with undef elements.

  llvm-svn: 329316
* [InstCombine] cleanup; NFC (Sanjay Patel, 2018-04-05, 1 file, -15/+12)

  llvm-svn: 329282
* [InstCombine] allow more fmul folds with 'reassoc' (Sanjay Patel, 2018-04-03, 1 file, -64/+64)

  The tests marked with 'FIXME' require loosening the check in SimplifyAssociativeOrCommutative() to optimize completely; that's still checking isFast() in Instruction::isAssociative().

  llvm-svn: 329121
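
  One representative reassoc-gated fmul fold, offered as an assumption about which folds are affected:

    %t = fmul reassoc float %x, 3.0
    %r = fmul reassoc float %t, 5.0
    ; -->
    %r = fmul reassoc float %x, 15.0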
* [InstCombine] Fold compare of int constant against a splatted vector of ints (Daniel Neilson, 2018-04-03, 2 files, -0/+46)

  Summary:
  Folding patterns like:

    %vec = shufflevector <4 x i8> %insvec, <4 x i8> undef, <4 x i32> zeroinitializer
    %cast = bitcast <4 x i8> %vec to i32
    %cond = icmp eq i32 %cast, 0

  into:

    %ext = extractelement <4 x i8> %insvec, i32 0
    %cond = icmp eq i32 %ext, 0

  Combined with existing rules, this allows us to fold patterns like:

    %insvec = insertelement <4 x i8> undef, i8 %val, i32 0
    %vec = shufflevector <4 x i8> %insvec, <4 x i8> undef, <4 x i32> zeroinitializer
    %cast = bitcast <4 x i8> %vec to i32
    %cond = icmp eq i32 %cast, 0

  into:

    %cond = icmp eq i8 %val, 0

  When we construct a splat vector via a shuffle, and bitcast the vector into an integer type for comparison against an integer constant, we can simplify the comparison to compare the splatted value against the integer constant.

  Reviewers: spatel, anna, mkazantsev
  Reviewed By: spatel
  Subscribers: efriedma, rengolin, llvm-commits
  Differential Revision: https://reviews.llvm.org/D44997
  llvm-svn: 329087
* [InstCombine] Don't strip function type casts from musttail calls (Reid Kleckner, 2018-04-02, 1 file, -1/+10)

  Summary:
  The cast simplifications that instcombine does here do not make any attempt to obey the verifier rules for musttail calls. Therefore we have to disable them.

  Reviewers: efriedma, majnemer, pcc
  Subscribers: hiraditya, llvm-commits
  Differential Revision: https://reviews.llvm.org/D45186
  llvm-svn: 329027
* [InstCombine] add folds for icmp + sub (PR36969) (Sanjay Patel, 2018-04-02, 1 file, -2/+7)

  (A - B) >u A --> A <u B
  C <u (C - D) --> C <u D

  https://rise4fun.com/Alive/e7j

  Name: ugt
    %sub = sub i8 %x, %y
    %cmp = icmp ugt i8 %sub, %x
  =>
    %cmp = icmp ult i8 %x, %y

  Name: ult
    %sub = sub i8 %x, %y
    %cmp = icmp ult i8 %x, %sub
  =>
    %cmp = icmp ult i8 %x, %y

  This should fix:
  https://bugs.llvm.org/show_bug.cgi?id=36969

  llvm-svn: 329011
* [InstCombine] improve code comment; NFC (Sanjay Patel, 2018-03-26, 1 file, -2/+2)

  llvm-svn: 328560