path: root/llvm/lib/Transforms/InstCombine/InstCombineCompares.cpp
Commit message | Author | Age | Files | Lines
* [InstCombine] don't unnecessarily generate a constant; NFCI | Sanjay Patel | 2017-10-16 | 1 | -3/+2
    llvm-svn: 315910
* Move folding of icmp with zero after checking for min/max idioms. | Nikolai Bozhenov | 2017-10-16 | 1 | -11/+22
    Summary:
    The following transformation for cmp instruction:

        icmp smin(x, PositiveValue), 0 -> icmp x, 0

    should only be done after checking for min/max to prevent infinite looping
    caused by a reverse canonicalization. That is why this transformation was
    moved to after the min/max check.

    Reviewers: spatel, efriedma
    Subscribers: llvm-commits
    Differential Revision: https://reviews.llvm.org/D38934
    Patch by: Artur Gainullin <artur.gainullin@intel.com>
    llvm-svn: 315895
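    A hedged illustration (not part of the commit message; the names and the
    constant are invented): with the select-based smin idiom, smin(x, 42) is
    negative exactly when x is, so the compare can bypass the min:

        define i1 @smin_slt_zero(i8 %x) {
          %lt  = icmp slt i8 %x, 42
          %min = select i1 %lt, i8 %x, i8 42   ; smin(%x, 42), with 42 > 0
          %cmp = icmp slt i8 %min, 0
          ret i1 %cmp
        }
        ; after the fold: %cmp = icmp slt i8 %x, 0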
* revert r314984: revert r314698 - [InstCombine] remove one-use restriction for icmp (shr exact X, C1), C2 --> icmp X, (C2<<C1) | Sanjay Patel | 2017-10-15 | 1 | -6/+6
    Recommitting r314698. The bug exposed by this change should be fixed with:
    https://reviews.llvm.org/rL315579
    llvm-svn: 315857
* [InstCombine] improve folds for icmp gt/lt (shr X, C1), C2 | Sanjay Patel | 2017-10-05 | 1 | -37/+40
    We can always eliminate the shift in:

        icmp gt/lt (shr X, C1), C2 --> icmp gt/lt X, C'

    This patch was supposed to just be an efficiency improvement because we
    were doing this 3-step process to fold:

        IC: Visiting: %c = icmp ugt i4 %s, 1
        IC: ADD: %s = lshr i4 %x, 1
        IC: ADD: %1 = udiv i4 %x, 2
        IC: Old = %c = icmp ugt i4 %1, 1
            New = <badref> = icmp uge i4 %x, 4
        IC: ADD: %c = icmp uge i4 %x, 4
        IC: ERASE %2 = icmp ugt i4 %1, 1
        IC: Visiting: %c = icmp uge i4 %x, 4
        IC: Old = %c = icmp uge i4 %x, 4
            New = <badref> = icmp ugt i4 %x, 3
        IC: ADD: %c = icmp ugt i4 %x, 3
        IC: ERASE %2 = icmp uge i4 %x, 4
        IC: Visiting: %c = icmp ugt i4 %x, 3
        IC: DCE: %1 = udiv i4 %x, 2
        IC: ERASE %1 = udiv i4 %x, 2
        IC: DCE: %s = lshr i4 %x, 1
        IC: ERASE %s = lshr i4 %x, 1
        IC: Visiting: ret i1 %c

    When we could go directly to canonical icmp form:

        IC: Visiting: %c = icmp ugt i4 %s, 1
        IC: Old = %c = icmp ugt i4 %s, 1
            New = <badref> = icmp ugt i4 %x, 3
        IC: ADD: %c = icmp ugt i4 %x, 3
        IC: ERASE %1 = icmp ugt i4 %s, 1
        IC: ADD: %s = lshr i4 %x, 1
        IC: DCE: %s = lshr i4 %x, 1
        IC: ERASE %s = lshr i4 %x, 1
        IC: Visiting: %c = icmp ugt i4 %x, 3

    ...but then I noticed that the folds were incomplete too:
    https://godbolt.org/g/aB2hLE

    Here are attempts to prove the logic with Alive:
    https://rise4fun.com/Alive/92o

        Name: lshr_ult
        Pre: ((C2 << C1) u>> C1) == C2
        %sh = lshr i8 %x, C1
        %r = icmp ult i8 %sh, C2
        =>
        %r = icmp ult i8 %x, (C2 << C1)

        Name: ashr_slt
        Pre: ((C2 << C1) >> C1) == C2
        %sh = ashr i8 %x, C1
        %r = icmp slt i8 %sh, C2
        =>
        %r = icmp slt i8 %x, (C2 << C1)

        Name: lshr_ugt
        Pre: (((C2+1) << C1) u>> C1) == (C2+1)
        %sh = lshr i8 %x, C1
        %r = icmp ugt i8 %sh, C2
        =>
        %r = icmp ugt i8 %x, ((C2+1) << C1) - 1

        Name: ashr_sgt
        Pre: (C2 != 127) && ((C2+1) << C1 != -128) && (((C2+1) << C1) >> C1) == (C2+1)
        %sh = ashr i8 %x, C1
        %r = icmp sgt i8 %sh, C2
        =>
        %r = icmp sgt i8 %x, ((C2+1) << C1) - 1

        Name: ashr_exact_sgt
        Pre: ((C2 << C1) >> C1) == C2
        %sh = ashr exact i8 %x, C1
        %r = icmp sgt i8 %sh, C2
        =>
        %r = icmp sgt i8 %x, (C2 << C1)

        Name: ashr_exact_slt
        Pre: ((C2 << C1) >> C1) == C2
        %sh = ashr exact i8 %x, C1
        %r = icmp slt i8 %sh, C2
        =>
        %r = icmp slt i8 %x, (C2 << C1)

        Name: lshr_exact_ugt
        Pre: ((C2 << C1) u>> C1) == C2
        %sh = lshr exact i8 %x, C1
        %r = icmp ugt i8 %sh, C2
        =>
        %r = icmp ugt i8 %x, (C2 << C1)

        Name: lshr_exact_ult
        Pre: ((C2 << C1) u>> C1) == C2
        %sh = lshr exact i8 %x, C1
        %r = icmp ult i8 %sh, C2
        =>
        %r = icmp ult i8 %x, (C2 << C1)

    We did something similar for 'shl' in D28406.
    Differential Revision: https://reviews.llvm.org/D38514
    llvm-svn: 315021
* revert r314698 - [InstCombine] remove one-use restriction for icmp (shr exact X, C1), C2 --> icmp X, (C2<<C1) | Sanjay Patel | 2017-10-05 | 1 | -6/+6
    There is a bot failure that appears to be related to this change:
    http://lab.llvm.org:8011/builders/clang-cmake-armv7-a15-selfhost-neon/builds/2117
    ...so reverting to confirm that and attempting to keep the bot green while
    investigating.
    llvm-svn: 314984
* [InstCombine] Improve support for ashr in foldICmpAndShift | Craig Topper | 2017-10-04 | 1 | -9/+12
    We can support ashr similar to lshr, if we know that none of the shifted-in
    bits are used. In that case, SimplifyDemandedBits would normally convert it
    to lshr. But that conversion doesn't happen if the shift has additional
    users.
    Differential Revision: https://reviews.llvm.org/D38521
    llvm-svn: 314945
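    A hedged sketch of the kind of pattern this enables (the constants, names,
    and extra use are invented for illustration): the mask only touches bits
    below the sign-extended region, so the 'and' can be rewritten on %x while
    the multi-use shift stays:

        define i1 @ashr_and_cmp(i8 %x, i8* %p) {
          %s = ashr i8 %x, 3
          store i8 %s, i8* %p       ; extra use: SimplifyDemandedBits won't turn this into lshr
          %a = and i8 %s, 4         ; bit 2 of %s is bit 5 of %x: no shifted-in bits used
          %c = icmp ne i8 %a, 0
          ret i1 %c
        }
        ; the compare chain can become: %a = and i8 %x, 32 ; %c = icmp ne i8 %a, 0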
* [InstCombine] Use isSignBitCheck to simplify an if statement. Directly create new sign bit compares instead of manipulating the constant. NFCI | Craig Topper | 2017-10-03 | 1 | -12/+8
    Since we no longer had the direct constant compares, manipulating the
    constant seemed less clear.
    llvm-svn: 314830
* [InstCombine] Change a bunch of methods to take APInts by reference instead of pointer. | Craig Topper | 2017-10-03 | 1 | -119/+119
    This allows us to remove a bunch of dereferences and only have a few
    dereferences at the call sites.
    llvm-svn: 314762
* [InstCombine] Replace an equality compare of two APInt pointers with a compare of the APInts themselves. | Craig Topper | 2017-10-03 | 1 | -1/+1
    Apparently this works by virtue of the fact that the pointers are pointers
    to the APInts stored inside of the ConstantInt objects. But I really don't
    think we should be relying on that.
    llvm-svn: 314761
* [InstCombine] remove one-use restriction for icmp (shr exact X, C1), C2 --> icmp X, (C2<<C1) | Sanjay Patel | 2017-10-02 | 1 | -6/+6
    llvm-svn: 314698
* [InstCombine] Use APInt for all the math in foldICmpDivConstant | Craig Topper | 2017-10-01 | 1 | -95/+46
    Summary:
    This currently uses ConstantExpr to do its math, but as noted in a TODO it
    can all be done directly on APInt.

    Reviewers: spatel, majnemer
    Reviewed By: majnemer
    Subscribers: llvm-commits
    Differential Revision: https://reviews.llvm.org/D38440
    llvm-svn: 314640
* Revert r314017 '[InstCombine] Simplify check for RHS being a splat constant in foldICmpUsingKnownBits by just checking Op1Min==Op1Max rather than going through m_APInt.' | Craig Topper | 2017-09-27 | 1 | -16/+22
    This reverts r314017 and similar code added in later commits. It seems to
    not work for pointer compares and is causing a bot failure for the last
    several days.
    llvm-svn: 314360
* [InstCombine] Remove one use restriction on the shift for calls to foldICmpAndShift. | Craig Topper | 2017-09-26 | 1 | -3/+3
    If this transformation succeeds, we're going to remove our dependency on
    the shift by rewriting the and. So it doesn't matter how many uses the
    shift has. This distributes the one use check to other transforms in
    foldICmpAndConstConst that do need it.
    Differential Revision: https://reviews.llvm.org/D38206
    llvm-svn: 314233
* [InstCombine] Move an optimization from foldICmpAndConstConst to foldICmpUsingKnownBits | Craig Topper | 2017-09-25 | 1 | -16/+10
    All this optimization cares about is knowing how many low bits of LHS are
    known to be zero and whether that means that the result is 0 or greater
    than the RHS constant. It doesn't matter where the zeros in the low bits
    came from. So we don't need to specifically look for an AND. Instead we can
    use known bits.
    Differential Revision: https://reviews.llvm.org/D38195
    llvm-svn: 314153
* [InstCombine] Teach foldICmpUsingKnownBits to simplify SLE/SGE/ULE/UGE to equality comparisons when the min/max ranges intersect in a single value. | Craig Topper | 2017-09-22 | 1 | -0/+8
    This is the inverse of what we do for SGT/SLT/UGT/ULT.
    llvm-svn: 314032
* [InstCombine] Add constant splat handling to one of the ICMP_SLT/SGT cases in foldICmpUsingKnownBits. | Craig Topper | 2017-09-22 | 1 | -6/+5
    llvm-svn: 314025
* [InstCombine] Move the call to isSignBitCheck into getDemandedBitsLHSMask instead of calling it outside and passing its result through a flag. NFCI | Craig Topper | 2017-09-22 | 1 | -15/+8
    The result of isSignBitCheck isn't used anywhere else, and this allows us
    to share the m_APInt call in the likely case that it isn't a sign bit
    check.
    llvm-svn: 314018
* [InstCombine] Simplify check for RHS being a splat constant in foldICmpUsingKnownBits by just checking Op1Min==Op1Max rather than going through m_APInt. | Craig Topper | 2017-09-22 | 1 | -8/+6
    llvm-svn: 314017
* [InstCombine] Make cases for ICMP_UGT/ICMP_ULT use similar formatting since they use similar code. NFC | Craig Topper | 2017-09-22 | 1 | -6/+3
    llvm-svn: 314016
* [InstCombine] Teach getDemandedBitsLHSMask to handle constant splat vectors | Craig Topper | 2017-09-20 | 1 | -11/+7
    This replaces a ConstantInt dyn_cast with m_APInt.
    Differential Revision: https://reviews.llvm.org/D38100
    llvm-svn: 313840
* [InstCombine] Handle (X & C2) < C1 --> (X & C2) == 0 | Craig Topper | 2017-09-20 | 1 | -6/+13
    We already did (X & C2) > C1 --> (X & C2) != 0, if any bit set in (X & C2)
    will produce a result greater than C1. But there is an equivalent inverse
    condition with <= C1 (which will be canonicalized to < C1+1).
    Differential Revision: https://reviews.llvm.org/D38065
    llvm-svn: 313819
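    A concrete instance (constants invented for illustration, not taken from
    the commit): with C2 = 12, any nonzero value of (X & 12) is at least 4, so
    an unsigned less-than against 4 is really an equality test:

        define i1 @masked_ult(i8 %x) {
          %a = and i8 %x, 12        ; possible values: 0, 4, 8, 12
          %c = icmp ult i8 %a, 4    ; true only when %a == 0
          ret i1 %c
        }
        ; folds to: %c = icmp eq i8 %a, 0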
* [InstCombine] Use APInt::getActiveBits() to avoid creating an APInt from a trailing zero count to do a comparison. NFCI | Craig Topper | 2017-09-20 | 1 | -1/+1
    llvm-svn: 313792
* Test commit | Uriel Korach | 2017-09-10 | 1 | -1/+1
    llvm-svn: 312878
* [ValueTracking, InstCombine] canonicalize fcmp ord/uno with non-NAN ops to null constants | Sanjay Patel | 2017-09-05 | 1 | -0/+13
    This is a preliminary step towards solving the remaining part of PR27145 -
    IR for isfinite():
    https://bugs.llvm.org/show_bug.cgi?id=27145

    In order to solve that one more generally, we need to add matching for
    and/or of fcmp ord/uno with a constant operand. But while looking at those
    patterns, I realized we were missing a canonicalization for nonzero
    constants. Rather than limiting to just folds for constants, we're adding a
    general value tracking method for this based on an existing DAG helper. By
    transforming everything to 0.0, we can simplify the existing code in
    foldLogicOfFCmps() and pick up missing vector folds.
    Differential Revision: https://reviews.llvm.org/D37427
    llvm-svn: 312591
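    A hedged illustration (the constant is arbitrary, not from the commit): an
    ord compare against any constant that can't be NaN only asks whether the
    variable operand is NaN, so the constant can be normalized to 0.0:

        define i1 @ord_nonnan_const(float %x) {
          %c = fcmp ord float %x, 4.200000e+01   ; 42.0 is never NaN; this only tests %x
          ret i1 %c
        }
        ; canonicalizes to: %c = fcmp ord float %x, 0.000000e+00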
* [InstCombine] use local variable to reduce code duplication; NFCI | Sanjay Patel | 2017-09-02 | 1 | -11/+9
    llvm-svn: 312414
* [InstCombine] Remove unused argument. NFC | Craig Topper | 2017-08-23 | 1 | -4/+3
    llvm-svn: 311529
* [InstCombine] Replace a simple matcher with a plain old dyn_cast. NFC | Craig Topper | 2017-08-23 | 1 | -2/+1
    llvm-svn: 311528
* [InstCombine] Remove an unnecessary dyn_cast to Instruction and a switch over two opcodes. Just dyn_cast to the specific instruction classes individually. NFC | Craig Topper | 2017-08-23 | 1 | -23/+16
    Change the helper methods to take the more specific class as well.
    llvm-svn: 311527
* Remove checks for debug info intrinsics in use lists, NFC | Reid Kleckner | 2017-08-14 | 1 | -1/+0
    These haven't done anything since debug info intrinsics stopped appearing
    in Value use lists in 2014.
    llvm-svn: 310892
* [InstCombine] Don't violate dominance when replacing instructions. | Davide Italiano | 2017-07-16 | 1 | -7/+11
    Differential Revision: https://reviews.llvm.org/D35376
    llvm-svn: 308144
* [InstCombine] convert bitwise (in)equality checks to logical ops (PR32401) | Sanjay Patel | 2017-07-14 | 1 | -3/+15
    As discussed in:
    https://bugs.llvm.org/show_bug.cgi?id=32401
    we have a backend transform to undo this:
    https://reviews.llvm.org/rL299542
    when it's likely that the xor version leads to better codegen, but we want
    this form in IR for better analysis and simplification potential.
    llvm-svn: 308031
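    The log doesn't quote the exact pattern; per PR32401, one representative
    shape (an assumption, not copied from the commit) is an or-of-xors
    equality chain:

        define i1 @pair_eq(i32 %a, i32 %b, i32 %c, i32 %d) {
          %xa = xor i32 %a, %b
          %xc = xor i32 %c, %d
          %or = or i32 %xa, %xc
          %r  = icmp eq i32 %or, 0    ; (a^b)|(c^d) == 0 iff a==b and c==d
          ret i1 %r
        }
        ; as logical ops:
        ;   %ea = icmp eq i32 %a, %b
        ;   %ec = icmp eq i32 %c, %d
        ;   %r  = and i1 %ea, %ec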
* Fix invalid cast in instcombine UMul/ZExt idiom | Serge Guelton | 2017-07-10 | 1 | -6/+7
    Fixes https://bugs.llvm.org/show_bug.cgi?id=25454

    Do not assume IRBuilder creates an Instruction where it can create a Value.
    Do not assume idiom operands are constant; leave generalisation to the
    IRBuilder.

    Differential Revision: https://reviews.llvm.org/D35114
    llvm-svn: 307554
* [IR] Add Type::isIntOrIntVectorTy(unsigned) similar to the existing isIntegerTy(unsigned), but also works for vectors. | Craig Topper | 2017-07-09 | 1 | -2/+2
    llvm-svn: 307492
* [InstCombine] Make InstCombine's IRBuilder be passed by reference everywhere | Craig Topper | 2017-07-07 | 1 | -136/+135
    Previously the InstCombiner class contained a pointer to an IR builder that
    had been passed to the constructor. Sometimes this would be passed to
    helper functions as a pointer, and sometimes the pointer would be
    dereferenced to pass it by reference. This patch makes it a reference
    everywhere, including the InstCombiner class itself, so there is more
    consistency. This is a large but mechanical patch. I've done very minimal
    formatting changes on it despite what clang-format wanted to do.
    llvm-svn: 307451
* [InstCombine] Use m_BitReverse pattern match helper. NFCI. | Simon Pilgrim | 2017-07-02 | 1 | -2/+2
    llvm-svn: 306986
* [InstCombine] fix crash when folding cmp+bswap vector | Sanjay Patel | 2017-07-02 | 1 | -5/+9
    We assumed the constant was a scalar when creating the replacement operand.

    Also, improve tests for this fold and move the tests for this fold to their
    own file. I'll move the related and missing tests to this file as a
    follow-up.
    llvm-svn: 306985
* [InstCombine] look through bswap/bitreverse for equality comparisons | Sanjay Patel | 2017-07-02 | 1 | -0/+9
    I noticed this missed bswap optimization in the CGP memcmp() expansion, and
    then I saw that we don't have the fold in InstCombine.
    Differential Revision: https://reviews.llvm.org/D34763
    llvm-svn: 306980
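    A sketch of the fold (types and names invented for illustration): bswap is
    a bijection, so equality of the swapped values is equality of the
    originals, and the same reasoning applies to bitreverse:

        declare i32 @llvm.bswap.i32(i32)

        define i1 @bswap_eq(i32 %x, i32 %y) {
          %bx = call i32 @llvm.bswap.i32(i32 %x)
          %by = call i32 @llvm.bswap.i32(i32 %y)
          %c  = icmp eq i32 %bx, %by
          ret i1 %c
        }
        ; folds to: %c = icmp eq i32 %x, %y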
* Reduce indenting and clean up comparisons around sign bit. | Eric Christopher | 2017-06-30 | 1 | -6/+7
    llvm-svn: 306781
* Reduce the complexity of the signbit/branch test functions. | Eric Christopher | 2017-06-30 | 1 | -3/+3
    llvm-svn: 306779
* [InstCombine] use local variable to reduce code; NFCI | Sanjay Patel | 2017-06-28 | 1 | -18/+14
    llvm-svn: 306560
* [Analysis][Transforms] Use commutable matchers instead of m_CombineOr in a few places. NFC | Craig Topper | 2017-06-24 | 1 | -2/+1
    llvm-svn: 306204
* [InstCombine] Recognize and simplify three way comparison idioms | Anna Thomas | 2017-06-23 | 1 | -4/+92
    Summary:
    Many languages have a three way comparison idiom where comparing two values
    produces not a boolean, but a tri-state value. Typical values (e.g. as used
    in the lcmp/fcmp bytecodes from Java) are -1 for less than, 0 for equality,
    and +1 for greater than.

    We actually do a great job already of converting three way comparisons into
    binary comparisons when the result produced has a single use.
    Unfortunately, such values can have more than one use, and in that case,
    our existing optimizations break down.

    The patch adds a peephole which converts a three-way compare + test idiom
    into a binary comparison on the original inputs. It is focused on replacing
    the test on the result of the three way compare and does nothing about
    removing the three way compare itself. That's left to other optimizations
    (which do actually kick in commonly.)

    We currently recognize one idiom on signed integer compare. In the future,
    we plan to recognize and simplify other comparison idioms on other
    signed/unsigned datatypes such as floats, vectors etc.

    This is a resurrection of Philip Reames' original patch:
    https://reviews.llvm.org/D19452

    Reviewers: majnemer, apilipenko, reames, sanjoy, mkazantsev
    Reviewed by: mkazantsev
    Subscribers: llvm-commits
    Differential Revision: https://reviews.llvm.org/D34278
    llvm-svn: 306100
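    A hedged sketch of one such idiom (the exact pattern the patch matches
    isn't quoted in the log; this select-chain form is an assumption):

        define i1 @three_way_test(i32 %a, i32 %b) {
          %lt   = icmp slt i32 %a, %b
          %gt   = icmp sgt i32 %a, %b
          %gtv  = select i1 %gt, i32 1, i32 0
          %cmp3 = select i1 %lt, i32 -1, i32 %gtv   ; -1 / 0 / +1; may have other uses
          %test = icmp sgt i32 %cmp3, 0             ; the test on the tri-state value
          ret i1 %test
        }
        ; the test alone becomes: %test = icmp sgt i32 %a, %b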
* [InstCombine][InstSimplify] Use APInt::isNullValue/isOneValue to reduce compiled code for comparing APInts with 0 and 1. NFC | Craig Topper | 2017-06-07 | 1 | -31/+35
    These methods are specifically optimized to only count leading zeros
    without an additional uint64_t compare.
    llvm-svn: 304876
* [InstCombine] Fix two asserts that were accidentally checking that an APInt pointer is non-zero instead of checking that the APInt itself is non-zero. | Craig Topper | 2017-06-07 | 1 | -2/+2
    I believe this code used to use APInt references, which would have worked.
    But then they were changed to pointers to allow m_APInt to be used.
    llvm-svn: 304875
* [InstCombine] fix icmp with not op and constant to work with splat vector constant | Sanjay Patel | 2017-06-02 | 1 | -3/+3
    llvm-svn: 304562
* [InstCombine] improve perf by not creating a known non-canonical instruction | Sanjay Patel | 2017-06-02 | 1 | -3/+6
    Op1 (RHS) is a constant, so putting it on the LHS makes us churn through
    visitICmp an extra time to canonicalize it:

        INSTCOMBINE ITERATION #1 on cmpnot
        IC: ADDING: 3 instrs to worklist
        IC: Visiting: %notx = xor i8 %x, -1
        IC: Visiting: %cmp = icmp sgt i8 %notx, 42
        IC: Old = %cmp = icmp sgt i8 %notx, 42
            New = <badref> = icmp sgt i8 -43, %x
        IC: ADD: %cmp = icmp sgt i8 -43, %x
        IC: ERASE %1 = icmp sgt i8 %notx, 42
        IC: ADD: %notx = xor i8 %x, -1
        IC: DCE: %notx = xor i8 %x, -1
        IC: ERASE %notx = xor i8 %x, -1
        IC: Visiting: %cmp = icmp sgt i8 -43, %x
        IC: Mod = %cmp = icmp sgt i8 -43, %x
            New = %cmp = icmp slt i8 %x, -43
        IC: ADD: %cmp = icmp slt i8 %x, -43
        IC: Visiting: %cmp = icmp slt i8 %x, -43
        IC: Visiting: ret i1 %cmp

    If we create the swapped ICmp directly, we go faster:

        INSTCOMBINE ITERATION #1 on cmpnot
        IC: ADDING: 3 instrs to worklist
        IC: Visiting: %notx = xor i8 %x, -1
        IC: Visiting: %cmp = icmp sgt i8 %notx, 42
        IC: Old = %cmp = icmp sgt i8 %notx, 42
            New = <badref> = icmp slt i8 %x, -43
        IC: ADD: %cmp = icmp slt i8 %x, -43
        IC: ERASE %1 = icmp sgt i8 %notx, 42
        IC: ADD: %notx = xor i8 %x, -1
        IC: DCE: %notx = xor i8 %x, -1
        IC: ERASE %notx = xor i8 %x, -1
        IC: Visiting: %cmp = icmp slt i8 %x, -43
        IC: Visiting: ret i1 %cmp

    llvm-svn: 304558
* [InstCombine] Pass the DominatorTree, AssumptionCache, and context instruction to a few calls to isKnownPositive, isKnownNegative, and isKnownNonZero | Craig Topper | 2017-05-26 | 1 | -2/+2
    Every other place in InstCombine that uses these methods in ValueTracking
    already passes this information. This makes the remaining sites consistent.
    Differential Revision: https://reviews.llvm.org/D33567
    llvm-svn: 304018
* [InstCombine] Add an InstCombine specific wrapper around isKnownToBeAPowerOfTwo to shorten code. NFC | Craig Topper | 2017-05-25 | 1 | -1/+1
    We have wrappers for several other ValueTracking methods that take care of
    passing all of the analysis and assumption cache parameters. This extends
    that to isKnownToBeAPowerOfTwo.
    llvm-svn: 303924
* [InstCombine] make icmp-mul fold more efficient | Sanjay Patel | 2017-05-25 | 1 | -5/+7
    There's probably a lot more like this (see also comments in D33338 about
    responsibility), but I suspect we don't usually get a visible
    manifestation.

    Given the recent interest in improving InstCombine efficiency, another
    potential micro-opt that could be repeated several times in this function:
    morph the existing icmp pred/operands instead of creating a new
    instruction.
    llvm-svn: 303860
* [InstCombine] use m_APInt to allow icmp-mul-mul vector fold | Sanjay Patel | 2017-05-24 | 1 | -11/+12
    The swapped operands in the first test are a manifestation of an
    inefficiency for vectors that doesn't exist for scalars, because the
    IRBuilder checks for an all-ones mask for scalars, but not vectors.
    llvm-svn: 303818