path: root/llvm/lib/Transforms/InstCombine/InstCombineAndOrXor.cpp
Commit message (Author, Date, Files, Lines)
* [InstCombine] try to pull 'not' of select into compare operands (Sanjay Patel, 2020-01-07, 1 file, -0/+17)

  not (select ?, (cmp TPred, ?, ?), (cmp FPred, ?, ?)) -->
      select ?, (cmp TPred', ?, ?), (cmp FPred', ?, ?)

  If both sides of the select are cmps, we can remove an instruction. The case
  where only one side is a cmp is deferred to a possible follow-on patch.

  We have a more general 'isFreeToInvert' analysis, but I'm not seeing a way to
  use that more widely without inducing infinite looping (opposing transforms).
  Here, we flip the compare predicates directly, so we should not have any
  danger of creating extra intermediate 'not' ops.

  Alive proofs: https://rise4fun.com/Alive/jKa

  Name: both select values are compares - invert predicates
    %tcmp = icmp sle i32 %x, %y
    %fcmp = icmp ugt i32 %z, %w
    %sel = select i1 %cond, i1 %tcmp, i1 %fcmp
    %not = xor i1 %sel, true
  =>
    %tcmp_not = icmp sgt i32 %x, %y
    %fcmp_not = icmp ule i32 %z, %w
    %not = select i1 %cond, i1 %tcmp_not, i1 %fcmp_not

  Name: false val is compare - invert/not
    %fcmp = icmp ugt i32 %z, %w
    %sel = select i1 %cond, i1 %tcmp, i1 %fcmp
    %not = xor i1 %sel, true
  =>
    %tcmp_not = xor i1 %tcmp, -1
    %fcmp_not = icmp ule i32 %z, %w
    %not = select i1 %cond, i1 %tcmp_not, i1 %fcmp_not

  Differential Revision: https://reviews.llvm.org/D72007
* [InstCombine] conditional sign-extend of high-bit-extract: 'or' pattern (Roman Lebedev, 2019-10-20, 1 file, -0/+4)

  In this pattern, all the "magic" bits that we'd `add` are high sign bits, and
  in the value we'd be adding to, they are all unset (not unexpectedly), so we
  can use an `or` there: https://rise4fun.com/Alive/ups

  It is possible that `haveNoCommonBitsSet()` should be taught about this
  pattern so that we never see the `add` variant, but the reasoning would need
  to be recursive (because of that `select`), so I'm not really sure that would
  be worth it just yet.

  llvm-svn: 375378
* [InstCombine] foldUnsignedUnderflowCheck(): one last pattern with 'sub' (PR43251) (Roman Lebedev, 2019-09-25, 1 file, -0/+10)

  https://rise4fun.com/Alive/0j9

  llvm-svn: 372930
* [InstCombine] Fold (A - B) u>=/u< A --> B u>/u<= A iff B != 0 (Roman Lebedev, 2019-09-25, 1 file, -15/+0)

  https://rise4fun.com/Alive/KtL

  This also shows that the fold added in D67412 / r372257 was too specific: the
  new fold handles those test cases more generically, so the now-dead code is
  deleted.

  This is yet again motivated by D67122 "[UBSan][clang][compiler-rt] Applying
  non-zero offset to nullptr is undefined behaviour".

  llvm-svn: 372912
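  A minimal Alive-style sketch of the u< variant (hypothetical function names;
  the fold's precondition is that %b is known non-zero):
  ```
  define i1 @src(i8 %a, i8 %b) {
    ; precondition: %b != 0
    %sub = sub i8 %a, %b
    %r   = icmp ult i8 %sub, %a   ; (A - B) u< A
    ret i1 %r
  }

  define i1 @tgt(i8 %a, i8 %b) {
    %r = icmp ule i8 %b, %a       ; B u<= A
    ret i1 %r
  }
  ```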
* [InstCombine] (a+b) < a && (a+b) != 0 -> (0-b) < a iff a/b != 0 (PR43259) (Roman Lebedev, 2019-09-24, 1 file, -4/+23)

  Summary:
  This is again motivated by the D67122 sanitizer check enhancement. That patch
  seemingly worsens `-fsanitize=pointer-overflow` overhead from 25% to 50%,
  which strongly implies missing folds.

  For
  ```
  #include <cassert>
  char* test(char& base, signed long offset) {
    __builtin_assume(offset < 0);
    return &base + offset;
  }
  ```
  we produce https://godbolt.org/z/r40U47 and again those two icmp's can be
  merged:
  ```
  Name: 0
  Pre: C != 0
    %adjusted = add i8 %base, C
    %not_null = icmp ne i8 %adjusted, 0
    %no_underflow = icmp ult i8 %adjusted, %base
    %r = and i1 %not_null, %no_underflow
  =>
    %neg_offset = sub i8 0, C
    %r = icmp ugt i8 %base, %neg_offset
  ```
  https://rise4fun.com/Alive/ALap
  https://rise4fun.com/Alive/slnN

  There are 3 other variants of this pattern; I believe they will all go into
  InstSimplify.

  https://bugs.llvm.org/show_bug.cgi?id=43259

  Reviewers: spatel, xbolva00, nikic
  Reviewed By: spatel
  Subscribers: efriedma, hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D67849

  llvm-svn: 372768
* [InstCombine] (a+b) <= a && (a+b) != 0 -> (0-b) < a (PR43259) (Roman Lebedev, 2019-09-24, 1 file, -2/+21)

  Summary:
  This is again motivated by the D67122 sanitizer check enhancement. That patch
  seemingly worsens `-fsanitize=pointer-overflow` overhead from 25% to 50%,
  which strongly implies missing folds.

  This pattern isn't exactly what we get there (strict vs. non-strict
  predicate), but this pattern does not require known-bits analysis, so it is
  best to handle it first.
  ```
  Name: 0
    %adjusted = add i8 %base, %offset
    %not_null = icmp ne i8 %adjusted, 0
    %no_underflow = icmp ule i8 %adjusted, %base
    %r = and i1 %not_null, %no_underflow
  =>
    %neg_offset = sub i8 0, %offset
    %r = icmp ugt i8 %base, %neg_offset
  ```
  https://rise4fun.com/Alive/knp

  There are 3 other variants of this pattern; they will all go into
  InstSimplify: https://rise4fun.com/Alive/bIDZ

  https://bugs.llvm.org/show_bug.cgi?id=43259

  Reviewers: spatel, xbolva00, nikic
  Reviewed By: spatel
  Subscribers: hiraditya, majnemer, vsk, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D67846

  llvm-svn: 372767
* [InstCombine] Fold a shifty implementation of clamp-to-allones (Huihui Zhang, 2019-09-24, 1 file, -0/+15)

  Summary:
  Fold or(ashr(subNSW(Y, X), ScalarSizeInBits(Y)-1), X) into X s> Y ? -1 : X.
  https://rise4fun.com/Alive/d8Ab

  clamp255 is a common operator in image processing; it can be implemented in a
  shifty way as "(255 - X) >> 31 | X & 255". Folding the shift into a select
  enables more optimization, e.g., vmin generation for the ARM target.

  Reviewers: lebedev.ri, efriedma, spatel, kparzysz, bcahoon
  Reviewed By: lebedev.ri
  Subscribers: kristof.beyls, hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D67800

  llvm-svn: 372678
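  A minimal IR sketch of the fold on i32 (hypothetical function names):
  ```
  define i32 @src(i32 %x, i32 %y) {
    %sub  = sub nsw i32 %y, %x
    %sign = ashr i32 %sub, 31            ; all-ones if X s> Y, else zero
    %r    = or i32 %sign, %x
    ret i32 %r
  }

  define i32 @tgt(i32 %x, i32 %y) {
    %cmp = icmp sgt i32 %x, %y
    %r   = select i1 %cmp, i32 -1, i32 %x
    ret i32 %r
  }
  ```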
* [InstCombine] Fold a shifty implementation of clamp-to-zero (Huihui Zhang, 2019-09-24, 1 file, -0/+14)

  Summary:
  Fold and(ashr(subNSW(Y, X), ScalarSizeInBits(Y)-1), X) into X s> Y ? X : 0.
  https://rise4fun.com/Alive/lFH

  Folding the shift into a select enables more optimization, e.g., vmax
  generation for the ARM target.

  Reviewers: lebedev.ri, efriedma, spatel, kparzysz, bcahoon
  Reviewed By: lebedev.ri
  Subscribers: xbolva00, andreadb, craig.topper, RKSimon, kristof.beyls, hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D67799

  llvm-svn: 372676
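  The 'and' twin of the previous fold, sketched the same way (hypothetical
  function names):
  ```
  define i32 @src(i32 %x, i32 %y) {
    %sub  = sub nsw i32 %y, %x
    %sign = ashr i32 %sub, 31            ; all-ones if X s> Y, else zero
    %r    = and i32 %sign, %x
    ret i32 %r
  }

  define i32 @tgt(i32 %x, i32 %y) {
    %cmp = icmp sgt i32 %x, %y
    %r   = select i1 %cmp, i32 %x, i32 0
    ret i32 %r
  }
  ```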
* [InstCombine] foldOrOfICmps(): Acquire SimplifyQuery with set CxtI (Roman Lebedev, 2019-09-23, 1 file, -2/+4)

  Extracted from https://reviews.llvm.org/D67849#inline-610377

  llvm-svn: 372654
* [InstCombine] foldAndOfICmps(): Acquire SimplifyQuery with set CxtI (Roman Lebedev, 2019-09-23, 1 file, -2/+4)

  Extracted from https://reviews.llvm.org/D67849#inline-610377

  llvm-svn: 372653
* [InstCombine] foldUnsignedUnderflowCheck(): s/Subtracted/ZeroCmpOp/ (Roman Lebedev, 2019-09-23, 1 file, -7/+7)

  llvm-svn: 372625
* [InstCombine] Simplify @llvm.usub.with.overflow+non-zero check (PR43251) (Roman Lebedev, 2019-09-19, 1 file, -0/+21)

  Summary:
  This is again motivated by the D67122 sanitizer check enhancement. That patch
  seemingly worsens `-fsanitize=pointer-overflow` overhead from 25% to 50%,
  which strongly implies missing folds.

  In this particular case, given
  ```
  char* test(char& base, unsigned long offset) {
    return &base - offset;
  }
  ```
  it will end up producing something like https://godbolt.org/z/luGEju, which
  after optimizations reduces down to roughly
  ```
  declare void @use64(i64)
  define i1 @test(i8* dereferenceable(1) %base, i64 %offset) {
    %base_int = ptrtoint i8* %base to i64
    %adjusted = sub i64 %base_int, %offset
    call void @use64(i64 %adjusted)
    %not_null = icmp ne i64 %adjusted, 0
    %no_underflow = icmp ule i64 %adjusted, %base_int
    %no_underflow_and_not_null = and i1 %not_null, %no_underflow
    ret i1 %no_underflow_and_not_null
  }
  ```
  Without D67122 there was no `%not_null`, and in this particular case we can
  get rid of it by merging the two checks: we are checking
  `Base u>= Offset && (Base u- Offset) != 0`, but that is simply
  `Base u> Offset`.

  Alive proofs: https://rise4fun.com/Alive/QOs

  The `@llvm.usub.with.overflow` pattern itself is not handled here because
  that is the main pattern, which we currently consider canonical.

  https://bugs.llvm.org/show_bug.cgi?id=43251

  Reviewers: spatel, nikic, xbolva00, majnemer
  Reviewed By: xbolva00, majnemer
  Subscribers: vsk, majnemer, xbolva00, hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D67356

  llvm-svn: 372341
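  Under that merge, the function above should reduce to a single unsigned
  compare; a hypothetical sketch of the expected result:
  ```
  declare void @use64(i64)
  define i1 @test(i8* dereferenceable(1) %base, i64 %offset) {
    %base_int = ptrtoint i8* %base to i64
    %adjusted = sub i64 %base_int, %offset
    call void @use64(i64 %adjusted)
    ; Base u>= Offset && (Base - Offset) != 0  ==>  Base u> Offset
    %no_underflow_and_not_null = icmp ugt i64 %base_int, %offset
    ret i1 %no_underflow_and_not_null
  }
  ```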
* [InstCombine] foldUnsignedUnderflowCheck(): handle last few cases (PR43251) (Roman Lebedev, 2019-09-18, 1 file, -0/+52)

  Summary:
  I don't have a direct motivational case for this, but it would be good to
  have it for completeness/symmetry. This pattern is basically the motivational
  pattern from https://bugs.llvm.org/show_bug.cgi?id=43251, but with a
  different predicate that requires that the offset is non-zero.

  The completeness bit comes from the fact that a similar pattern (offset !=
  zero) will be needed for https://bugs.llvm.org/show_bug.cgi?id=43259, so it
  seems good not to overlook very similar patterns.

  Proofs: https://rise4fun.com/Alive/21b

  Also, there is something odd with `isKnownNonZero()`: if the non-zero
  knowledge was specified as an assumption, it didn't pick it up (PR43267).

  With this, I see no other missing folds for
  https://bugs.llvm.org/show_bug.cgi?id=43251.

  Reviewers: spatel, nikic, xbolva00
  Reviewed By: spatel
  Subscribers: hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D67412

  llvm-svn: 372257
* [InstCombine][NFC] Rename IsFreeToInvert() -> isFreeToInvert() for consistency (Roman Lebedev, 2019-08-13, 1 file, -12/+12)

  As per https://reviews.llvm.org/D65530#inline-592325

  llvm-svn: 368686
* [InstCombine] foldXorOfICmps(): don't give up on non-single-use ICmp's if all users are freely invertible (Roman Lebedev, 2019-08-13, 1 file, -9/+34)

  Summary:
  This is rather unconventional.

  As the comment there says, we don't have many folds for xor-of-icmps; we try
  to turn them into an and-of-icmps, for which we have plenty of folds. But if
  the ICmp we need to invert is not single-use, we give up.

  As discussed in https://reviews.llvm.org/D65148#1603922, we may have a
  non-canonical CLAMP pattern, with bit math and select-of-threshold that we'll
  potentially clamp. As can be seen in
  `canonicalize-clamp-with-select-of-constant-threshold-pattern.ll`, out of all
  8 variations of the pattern, only two are **not** canonicalized into the
  variant with and+icmp instead of bit math. The reason is that the ICmp we
  need to invert is not single-use, so we give up.

  We indeed can't perform this fold at will; the general rule is that we should
  not increase the instruction count in InstCombine. But we wouldn't end up
  increasing the instruction count if we can adapt every other user to the
  inverted value. This way the `not` we create **will** get folded, and in the
  end the instruction count does not increase. For that, of course, we need to
  look at the users of a Value, which is again rather unconventional for
  InstCombine :S

  Thus I'm proposing to be a little more insistent in `foldXorOfICmps()`. The
  alternatives would be to not create that `not`, but add duplicate code to
  manually invert all users; or to add some even less general combine to handle
  some more specific pattern[s].

  Reviewers: spatel, nikic, RKSimon, craig.topper
  Reviewed By: spatel
  Subscribers: hiraditya, jdoerfert, dmgreen, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D65530

  llvm-svn: 368685
* [InstCombine] Teach foldOrOfICmps to allow icmp eq MIN_INT/MAX to be part of a range comparison. Similar for foldAndOfICmps (Craig Topper, 2019-07-24, 1 file, -8/+40)

  We can treat icmp eq X, MIN_UINT as icmp ule X, MIN_UINT and allow it to
  merge with icmp ugt X, C. Similar for the other constants.

  We can do similar for icmp ne X, (U)INT_MIN/MAX in foldAndOfICmps. And we
  already handled UINT_MIN there.

  Fixes PR42691.

  Differential Revision: https://reviews.llvm.org/D65017

  llvm-svn: 366945
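  A hypothetical sketch of the kind of merge this enables (the exact output
  form comes from insertRangeTest): treating x == 0 as x u<= 0 lets it fuse
  with x u> 100 into a single rotated range check:
  ```
  define i1 @src(i32 %x) {
    %c1 = icmp eq i32 %x, 0        ; i.e. x u<= MIN_UINT
    %c2 = icmp ugt i32 %x, 100
    %r  = or i1 %c1, %c2
    ret i1 %r
  }

  define i1 @tgt(i32 %x) {
    %off = add i32 %x, -1          ; rotate so the range is contiguous
    %r   = icmp ugt i32 %off, 99   ; true for x == 0 or x u> 100
    ret i1 %r
  }
  ```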
* [InstCombine] Update comment I missed in r366649. NFC (Craig Topper, 2019-07-21, 1 file, -1/+1)

  llvm-svn: 366658
* [InstCombine] Remove insertRangeTest code that handles the equality case (Craig Topper, 2019-07-21, 1 file, -4/+2)

  For equality, the function called getTrue/getFalse with the VT of the
  comparison input. But getTrue/getFalse need the boolean VT. So if this code
  ever executed, it would assert. I believe these cases are removed by
  InstSimplify so we don't get here.

  So this patch just fixes up an assert to exclude the equality possibility and
  removes the broken code.

  llvm-svn: 366649
* [InstCombine] Don't use AddOne/SubOne to see if two APInts are 1 apart. Use APInt operations instead. NFCI (Craig Topper, 2019-07-21, 1 file, -5/+9)

  AddOne/SubOne create new Constant objects. That seems heavy for comparing
  ConstantInts, which wrap APInts. Just do the math on the APInts and compare
  them.

  llvm-svn: 366648
* Fix parameter name comments using clang-tidy. NFC. (Rui Ueyama, 2019-07-16, 1 file, -1/+1)

  This patch applies clang-tidy's bugprone-argument-comment tool to LLVM,
  clang and lld source trees. Here is how I created this patch:

  $ git clone https://github.com/llvm/llvm-project.git
  $ cd llvm-project
  $ mkdir build
  $ cd build
  $ cmake -GNinja -DCMAKE_BUILD_TYPE=Debug \
      -DLLVM_ENABLE_PROJECTS='clang;lld;clang-tools-extra' \
      -DCMAKE_EXPORT_COMPILE_COMMANDS=On -DLLVM_ENABLE_LLD=On \
      -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ ../llvm
  $ ninja
  $ parallel clang-tidy -checks='-*,bugprone-argument-comment' \
      -config='{CheckOptions: [{key: StrictMode, value: 1}]}' -fix \
      ::: ../llvm/lib/**/*.{cpp,h} ../clang/lib/**/*.{cpp,h} ../lld/**/*.{cpp,h}

  llvm-svn: 366177
* [InstCombine] squash is-not-power-of-2 using ctpop (Sanjay Patel, 2019-06-24, 1 file, -5/+18)

  This is the De Morgan'd 'not' of the pattern handled in D63660 / rL364153.

  This is another intermediate IR step towards solving PR42314:
  https://bugs.llvm.org/show_bug.cgi?id=42314

  We can test whether a value is not a power-of-2 using ctpop(X) > 1, so
  combining that with an is-zero check of the input is the same as testing that
  not exactly 1 bit is set:

    (X == 0) || (ctpop(X) u> 1) --> ctpop(X) != 1

  llvm-svn: 364246
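  A minimal IR sketch of the fold (hypothetical function names):
  ```
  declare i32 @llvm.ctpop.i32(i32)

  define i1 @src(i32 %x) {
    %pop  = call i32 @llvm.ctpop.i32(i32 %x)
    %zero = icmp eq i32 %x, 0
    %many = icmp ugt i32 %pop, 1
    %r    = or i1 %zero, %many
    ret i1 %r
  }

  define i1 @tgt(i32 %x) {
    %pop = call i32 @llvm.ctpop.i32(i32 %x)
    %r   = icmp ne i32 %pop, 1
    ret i1 %r
  }
  ```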
* [InstCombine] squash is-power-of-2 that uses ctpop (Sanjay Patel, 2019-06-23, 1 file, -0/+23)

  This is another intermediate IR step towards solving PR42314:
  https://bugs.llvm.org/show_bug.cgi?id=42314

  We can test whether a value is power-of-2-or-0 using ctpop(X) < 2, so
  combining that with a non-zero check of the input is the same as testing that
  exactly 1 bit is set:

    (X != 0) && (ctpop(X) u< 2) --> ctpop(X) == 1

  Differential Revision: https://reviews.llvm.org/D63660

  llvm-svn: 364153
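  The companion sketch of this fold (hypothetical function names):
  ```
  declare i32 @llvm.ctpop.i32(i32)

  define i1 @src(i32 %x) {
    %pop = call i32 @llvm.ctpop.i32(i32 %x)
    %nz  = icmp ne i32 %x, 0
    %few = icmp ult i32 %pop, 2
    %r   = and i1 %nz, %few
    ret i1 %r
  }

  define i1 @tgt(i32 %x) {
    %pop = call i32 @llvm.ctpop.i32(i32 %x)
    %r   = icmp eq i32 %pop, 1
    ret i1 %r
  }
  ```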
* [InstCombine] try harder to form rotate (funnel shift) (PR20750) (Sanjay Patel, 2019-05-13, 1 file, -0/+7)

  We have a similar match for patterns ending in a truncate. This should be ok
  for all targets because the default expansion would still likely be better,
  since we replace 2 'and' ops with 1.

  Attempt to show the logic equivalence in Alive (which doesn't currently have
  funnel shift in its vocabulary, AFAICT):

    %shamt = zext i8 %i to i32
    %m = and i32 %shamt, 31
    %neg = sub i32 0, %shamt
    %and4 = and i32 %neg, 31
    %shl = shl i32 %v, %m
    %shr = lshr i32 %v, %and4
    %or = or i32 %shr, %shl
  =>
    %a = and i8 %i, 31
    %shamt2 = zext i8 %a to i32
    %neg2 = sub i32 0, %shamt2
    %and4 = and i32 %neg2, 31
    %shl = shl i32 %v, %shamt2
    %shr = lshr i32 %v, %and4
    %or = or i32 %shr, %shl

  https://rise4fun.com/Alive/V9r

  llvm-svn: 360605
* [InstCombine] Don't transform ((C1 OP zext(X)) & C2) -> zext((C1 OP X) & C2) if either zext or OP has another use (Craig Topper, 2019-03-21, 1 file, -1/+5)

  If they have other users we'll just end up increasing the instruction count.

  We might be able to weaken this to only one of them having a single use if we
  can prove that the 'and' will be removed.

  Fixes PR41164.

  Differential Revision: https://reviews.llvm.org/D59630

  llvm-svn: 356690
* [InstCombine] fold logic-of-nan-fcmps (PR41069) (Sanjay Patel, 2019-03-19, 1 file, -0/+52)

  Combine 2 fcmps that are checking for nan-ness:

    and (fcmp ord X, 0), (and (fcmp ord Y, 0), Z) --> and (fcmp ord X, Y), Z
    or (fcmp uno X, 0), (or (fcmp uno Y, 0), Z) --> or (fcmp uno X, Y), Z

  This is an exact match for a minimal reassociation pattern. If we want to
  handle this more generally, that should go in the reassociate pass, which
  would allow removing this code.

  This should fix: https://bugs.llvm.org/show_bug.cgi?id=41069

  llvm-svn: 356471
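  A sketch of the 'ord' variant (hypothetical function names): fcmp ord A, B
  is true iff neither operand is NaN, so two ordered checks against zero
  collapse into one check of both values:
  ```
  define i1 @src(float %x, float %y, i1 %z) {
    %ordx = fcmp ord float %x, 0.0   ; !isnan(x)
    %ordy = fcmp ord float %y, 0.0   ; !isnan(y)
    %t    = and i1 %ordy, %z
    %r    = and i1 %ordx, %t
    ret i1 %r
  }

  define i1 @tgt(float %x, float %y, i1 %z) {
    %ordxy = fcmp ord float %x, %y   ; !isnan(x) && !isnan(y)
    %r     = and i1 %ordxy, %z
    ret i1 %r
  }
  ```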
* [InstCombine] Fix matchRotate bug when one operand is a ConstantExpr shift (Sanjay Patel, 2019-02-11, 1 file, -3/+7)

  This bug seems to be harmless in release builds, but will cause an error in
  UBSAN builds or an assertion failure in debug builds.

  When it gets to this opcode comparison, it assumes both of the operands are
  BinaryOperators, but the prior m_LogicalShift will also match a ConstantExpr.
  The cast<BinaryOperator> will assert in a debug build, or reading an invalid
  value for BinaryOp from memory with
  ((BinaryOperator*)constantExpr)->getOpcode() will cause an error in a UBSAN
  build.

  The test I added will fail without this change in debug/UBSAN builds, but not
  in release.

  Patch by: @AndrewScheidecker (Andrew Scheidecker)

  Differential Revision: https://reviews.llvm.org/D58049

  llvm-svn: 353736
* Update the file headers across all of the LLVM projects in the monorepo (Chandler Carruth, 2019-01-19, 1 file, -4/+3)

  ... to reflect the new license.

  We understand that people may be surprised that we're moving the header
  entirely to discuss the new license. We checked this carefully with the
  Foundation's lawyer and we believe this is the correct approach.

  Essentially, all code in the project is now made available by the LLVM
  project under our new license, so you will see that the license headers
  include that license only. Some of our contributors have contributed code
  under our old license, and accordingly, we have retained a copy of our old
  license notice in the top-level files in each project and repository.

  llvm-svn: 351636
* [InstCombine] canonicalize another raw IR rotate pattern to funnel shift (Sanjay Patel, 2019-01-08, 1 file, -0/+54)

  This is matching the equivalent of the DAG expansion, so it should never end
  up with worse perf than the original code even if the target doesn't have a
  rotate instruction.

  llvm-svn: 350672
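  For reference, one common rotate expansion and the intrinsic it can become;
  a hypothetical sketch, not necessarily the exact pattern this commit matches:
  ```
  declare i32 @llvm.fshl.i32(i32, i32, i32)

  ; rotate-left expansion with masked shift amounts (well-defined even at %y == 0)
  define i32 @src(i32 %x, i32 %y) {
    %ymask = and i32 %y, 31
    %neg   = sub i32 0, %y
    %nmask = and i32 %neg, 31
    %shl   = shl i32 %x, %ymask
    %shr   = lshr i32 %x, %nmask
    %r     = or i32 %shl, %shr
    ret i32 %r
  }

  define i32 @tgt(i32 %x, i32 %y) {
    ; fshl(x, x, y) is rotl(x, y mod 32)
    %r = call i32 @llvm.fshl.i32(i32 %x, i32 %x, i32 %y)
    ret i32 %r
  }
  ```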
* [CmpInstAnalysis] fix function signature for ICmp code to predicate; NFC (Sanjay Patel, 2018-12-04, 1 file, -9/+9)

  The old function underspecified the return type, took an unused parameter,
  and had a misleading name.

  llvm-svn: 348292
* [CmpInstAnalysis] fix formatting; NFC (Sanjay Patel, 2018-12-03, 1 file, -5/+5)

  There are potential improvements to the structure of this API raised by
  D54994, but remove some cosmetic blemishes before making any functional
  changes.

  llvm-svn: 348149
* [InstCombine] fix formatting for matchBSwap(); NFC (Sanjay Patel, 2018-11-14, 1 file, -6/+5)

  We should have a similar function for matching rotate and/or funnel shift, so
  tidy up the related existing call.

  llvm-svn: 346871
* [InstCombine] try harder to form select from logic ops (2nd try) (Sanjay Patel, 2018-10-24, 1 file, -30/+42)

  The original patch was committed here: rL344609
  ...and reverted: rL344612
  ...because it did not properly check/test data types before calling
  ComputeNumSignBits().

  The tests that caused bot failures for the previous commit are over-reaching
  front-end tests that run the entire -O optimizer pipeline:
    Clang :: CodeGen/builtins-systemz-zvector.c
    Clang :: CodeGen/builtins-systemz-zvector2.c

  I've added a negative test here to ensure coverage for that case. The new
  early exit check also tests the type of the 'B' parameter, so we don't waste
  time on matching if either value is unsuitable.

  Original commit message:

  This is part of solving PR37549:
  https://bugs.llvm.org/show_bug.cgi?id=37549

  The patterns shown here are a special case of something that we already
  convert to select. Using ComputeNumSignBits() catches that case (but not the
  more complicated motivating patterns yet).

  The backend has hooks/logic to convert back to logic ops if that's better for
  the target.

  llvm-svn: 345149
* revert rL344609: [InstCombine] try harder to form select from logic ops (Sanjay Patel, 2018-10-16, 1 file, -38/+29)

  I noticed a missing check and added it at rL344610, but there actually are
  codegen tests that will fail without that, so I'll edit those and submit a
  fixed patch with more tests.

  llvm-svn: 344612
* [InstCombine] make sure type is integer before calling ComputeNumSignBits (Sanjay Patel, 2018-10-16, 1 file, -1/+2)

  llvm-svn: 344610
* [InstCombine] try harder to form select from logic ops (Sanjay Patel, 2018-10-16, 1 file, -29/+37)

  This is part of solving PR37549:
  https://bugs.llvm.org/show_bug.cgi?id=37549

  The patterns shown here are a special case of something that we already
  convert to select. Using ComputeNumSignBits() catches that case (but not the
  more complicated motivating patterns yet).

  The backend has hooks/logic to convert back to logic ops if that's better for
  the target.

  llvm-svn: 344609
* [InstCombine] name change: foldShuffledBinop -> foldVectorBinop; NFC (Sanjay Patel, 2018-10-03, 1 file, -3/+3)

  This function will deal with more than shuffles with D50992, and I have
  another potential per-element fold that could live here.

  llvm-svn: 343692
* [InstCombine] Fold (xor (min/max X, Y), -1) -> (max/min ~X, ~Y) when X and Y are freely invertible (Craig Topper, 2018-09-13, 1 file, -0/+11)

  This allows the xor to be removed completely.

  This might help with recommitting r341674, but seems good regardless.

  Coincidentally fixes PR38915.

  Differential Revision: https://reviews.llvm.org/D51964

  llvm-svn: 342163
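  A hypothetical sketch with X freely invertible (already a 'not') and Y a
  constant; at the time, min/max was matched as the icmp+select idiom, so the
  sketch uses that form:
  ```
  ; smin(~a, 42) written as icmp+select
  define i32 @src(i32 %a) {
    %nota = xor i32 %a, -1
    %cmp  = icmp slt i32 %nota, 42
    %min  = select i1 %cmp, i32 %nota, i32 42
    %r    = xor i32 %min, -1          ; ~smin(~a, 42)
    ret i32 %r
  }

  ; ~smin(~a, 42) == smax(a, ~42) == smax(a, -43); both 'not's vanish
  define i32 @tgt(i32 %a) {
    %cmp = icmp sgt i32 %a, -43
    %r   = select i1 %cmp, i32 %a, i32 -43
    ret i32 %r
  }
  ```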
* [InstCombine] Fold (min/max ~X, Y) -> ~(max/min X, ~Y) when Y is freely invertible (Craig Topper, 2018-09-07, 1 file, -5/+11)

  If the ~X wasn't able to simplify above the max/min, we might be able to
  simplify it by moving it below the max/min.

  I had to modify the ~(min/max ~X, Y) transform to prevent getting stuck in a
  loop when we saw the new ~(max/min X, ~Y) before the ~Y had been folded away
  to remove the new not.

  Differential Revision: https://reviews.llvm.org/D51398

  llvm-svn: 341674
* [InstCombine] add xor+not folds (Sanjay Patel, 2018-09-06, 1 file, -0/+16)

  This fold is needed to avoid a regression when we try to recommit rL300977.
  We can't see the most basic win currently because demanded bits changes the
  patterns: https://rise4fun.com/Alive/plpp

  llvm-svn: 341559
* [InstCombine] fix xor-or-xor fold to check uses and handle commutes (Sanjay Patel, 2018-09-04, 1 file, -32/+21)

  I'm probably missing some way to use m_Deferred to remove the code
  duplication, but that can be a follow-up.

  The improvement in demand_shrink_nsw.ll is an example of missing the fold
  because the pattern matching was deficient. I didn't try to follow the bits
  in that test, but Alive says it's correct:
  https://rise4fun.com/Alive/ugc

  llvm-svn: 341426
* [InstCombine] make ((X & C) ^ C) form consistent for vectors (Sanjay Patel, 2018-09-04, 1 file, -4/+2)

  It would be better to create a 'not' here, but that's not possible yet.

  llvm-svn: 341410
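  For context, the underlying identity, shown as a sketch with a hypothetical
  non-splat vector constant (the exact output form InstCombine chooses may
  differ): (X & C) ^ C == (X ^ C) & C == ~X & C.
  ```
  define <2 x i32> @src(<2 x i32> %x) {
    %and = and <2 x i32> %x, <i32 42, i32 15>
    %r   = xor <2 x i32> %and, <i32 42, i32 15>
    ret <2 x i32> %r
  }

  define <2 x i32> @tgt(<2 x i32> %x) {
    %xor = xor <2 x i32> %x, <i32 42, i32 15>
    %r   = and <2 x i32> %xor, <i32 42, i32 15>
    ret <2 x i32> %r
  }
  ```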
* [InstCombine] simplify code for xor folds; NFCI (Sanjay Patel, 2018-09-04, 1 file, -40/+23)

  This is just a cleanup step. The TODO comments show what is wrong with the
  'and' version of the fold. Fixing this should be part of recommitting
  rL300977.

  llvm-svn: 341405
* [InstCombine] simplify xor/not folds; NFCI (Sanjay Patel, 2018-09-03, 1 file, -22/+16)

  llvm-svn: 341336
* [InstCombine] allow add+not --> sub for arbitrary vector constants (Sanjay Patel, 2018-09-03, 1 file, -5/+4)

  llvm-svn: 341335
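  The identity behind the fold, sketched with a hypothetical non-splat
  constant: ~(x + C) == (~C) - x, so the xor disappears into the sub:
  ```
  define <2 x i32> @src(<2 x i32> %x) {
    %add = add <2 x i32> %x, <i32 3, i32 5>
    %r   = xor <2 x i32> %add, <i32 -1, i32 -1>   ; ~(x + C)
    ret <2 x i32> %r
  }

  define <2 x i32> @tgt(<2 x i32> %x) {
    ; ~C = <-4, -6> for C = <3, 5>
    %r = sub <2 x i32> <i32 -4, i32 -6>, %x
    ret <2 x i32> %r
  }
  ```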
* [InstCombine] allow not+sub fold for arbitrary vector constants (Sanjay Patel, 2018-09-02, 1 file, -8/+4)

  The fold was implemented for the general case but with use-limitation, while
  the later constant version, which didn't check uses, was only matching splat
  constants.

  llvm-svn: 341292
* [InstCombine] simplify code for 'or' fold (Sanjay Patel, 2018-09-01, 1 file, -28/+13)

  No outwardly-visible change is intended, so no test. But the code is smaller
  and more efficient. The check for a 'not' op is intended to avoid the
  expensive value tracking call when it should not be necessary, and it might
  prevent infinite looping when we resurrect rL300977.

  llvm-svn: 341280
* [InstCombine] Pull simple checks above a more complicated one. NFCI (Craig Topper, 2018-08-21, 1 file, -4/+2)

  I'm assuming it's easier to make sure the RHS of an XOR is all ones than it
  is to check for the many select patterns we have, so let's check that first.
  Same with the one-use check.

  llvm-svn: 340321
* [InstCombine] Fix IC trying to create a xor of pointer types (Amara Emerson, 2018-08-15, 1 file, -1/+2)

  rdar://42473741

  Differential Revision: https://reviews.llvm.org/D50775

  llvm-svn: 339796
* [InstCombine] Re-land: Optimize redundant 'signed truncation check pattern' (Roman Lebedev, 2018-08-13, 1 file, -0/+127)

  Summary:
  This comes with `Implicit Conversion Sanitizer - integer sign change`
  (D50250):
  ```
  signed char test(unsigned int x) { return x; }
  ```
  `clang++ -fsanitize=implicit-conversion -S -emit-llvm -o - /tmp/test.cpp -O3`
  * Old: {F6904292}
  * With this patch: {F6904294}

  General pattern:
    X & Y

  Where `Y` is checking that all the high bits (covered by a mask
  `4294967168`) are uniform, i.e. `%arg & 4294967168` can be either
  `4294967168` or `0`. The pattern can be one of:
    %t = add i32 %arg, 128
    %r = icmp ult i32 %t, 256
  or
    %t0 = shl i32 %arg, 24
    %t1 = ashr i32 %t0, 24
    %r = icmp eq i32 %t1, %arg
  or
    %t0 = trunc i32 %arg to i8
    %t1 = sext i8 %t0 to i32
    %r = icmp eq i32 %t1, %arg
  This pattern is a signed truncation check.

  And `X` is checking that some bit in that same mask is zero. I.e. it can be
  one of:
    %r = icmp sgt i32 %arg, -1
  or
    %t = and i32 %arg, 2147483648
    %r = icmp eq i32 %t, 0

  Since we are checking that all the bits in that mask are the same, and a
  particular bit is zero, what we are really checking is that all the masked
  bits are zero. So this should be transformed to:
    %r = icmp ult i32 %arg, 128

  The transform itself ended up being rather horrible, even though I omitted
  some cases. Surely there is some infrastructure that can help clean this up
  that I missed?
  https://rise4fun.com/Alive/3Ou

  The initial commit (rL339610) was reverted, since the first assert was being
  triggered. The @positive_with_extra_and test now has coverage for that case.

  Reviewers: spatel, craig.topper
  Reviewed By: spatel
  Subscribers: RKSimon, erichkeane, vsk, llvm-commits
  Differential Revision: https://reviews.llvm.org/D50465

  llvm-svn: 339621
* Revert "[InstCombine] Optimize redundant 'signed truncation check pattern'."Roman Lebedev2018-08-131-128/+0
| | | | | | | | | At least one buildbot was able to actually trigger that assert on the top of the function. Will investigate. This reverts commit r339610. llvm-svn: 339612