path: root/llvm/lib/Transforms/InstCombine/InstCombineAndOrXor.cpp
...
* [InstCombine] Fold ((C1 OP zext(X)) & C2) -> zext((C1 OP X) & C2) (David Majnemer, 2017-01-17, 1 file, -15/+28)
  This further extends r292179 to support additional binary operators beyond subtraction.
  llvm-svn: 292238
* [InstCombine] Fold ((C1-zext(X)) & C2) -> zext((C1-X) & C2) (David Majnemer, 2017-01-17, 1 file, -0/+15)
  This is valid if C2 fits within the bitwidth of X thanks to two's complement modulo arithmetic.
  llvm-svn: 292179
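  Not one of the commit's own test cases, just a minimal sketch of the fold described above; the function names are made up for illustration. Because the mask 15 fits within the i8 bit width, only the low bits of the subtraction matter, so the arithmetic can be done in i8 and extended afterwards:
  ```
  define i32 @sub_zext_and(i8 %x) {
    %zx  = zext i8 %x to i32
    %sub = sub i32 300, %zx
    %and = and i32 %sub, 15        ; C2 = 15 fits in the i8 bit width of %x
    ret i32 %and
  }

  ; conceptually becomes (300 truncates to 44 modulo 256):
  define i32 @sub_zext_and_folded(i8 %x) {
    %sub = sub i8 44, %x
    %and = and i8 %sub, 15
    %zx  = zext i8 %and to i32
    ret i32 %zx
  }
  ```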
* [InstCombine] add a wrapper for a common pair of transforms; NFCI (Sanjay Patel, 2017-01-10, 1 file, -22/+6)
  Some of the callers are artificially limiting this transform to integer types; this should make it easier to incrementally remove that restriction.
  llvm-svn: 291620
* Revert "[InstCombine] New opportunities for FoldAndOfICmp and FoldXorOfICmp"David Majnemer2016-12-211-97/+2
| | | | | | This reverts commit r289813, it caused PR31449. llvm-svn: 290266
* [InstCombine] use commutative matcher for pattern with commutative operators (Sanjay Patel, 2016-12-19, 1 file, -3/+5)
  This is a case that was missed in: https://reviews.llvm.org/rL290067
  ...and it would regress if we fix operand complexity (PR28296).
  llvm-svn: 290127
* Revert @llvm.assume with operator bundles (r289755-r289757) (Daniel Jasper, 2016-12-19, 1 file, -9/+9)
  This creates non-linear behavior in the inliner (see more details in r289755's commit thread).
  llvm-svn: 290086
* [InstCombine] use commutative matchers for patterns with commutative operators (Sanjay Patel, 2016-12-18, 1 file, -20/+35)
  Background/motivation - I was circling back around to: https://llvm.org/bugs/show_bug.cgi?id=28296
  I made a simple patch for that and noticed some regressions, so added test cases for those with rL281055, and this is hopefully the minimal fix for just those cases.
  But as you can see from the surrounding untouched folds, we are missing commuted patterns all over the place, and of course there are no regression tests to cover any of those cases. We could sprinkle "m_c_" dust all over this file and catch most of the missing folds, but then we still wouldn't have test coverage, and we'd still miss some fraction of commuted patterns because they require adjustments to the match order.
  I'm aware of the concern about the potential compile-time performance impact of adding matches like this (currently being discussed on llvm-dev), but I don't think there's any evidence yet to suggest that handling commutative pattern matching more thoroughly is not a worthwhile goal of InstCombine.
  Differential Revision: https://reviews.llvm.org/D24419
  llvm-svn: 290067
* [InstCombine] New opportunities for FoldAndOfICmp and FoldXorOfICmp (Ehsan Amiri, 2016-12-15, 1 file, -2/+97)
  A number of new patterns for simplifying and/xor of icmp:
  (icmp ne %x, 0) ^ (icmp ne %y, 0) => icmp ne %x, %y if the following is true:
  1- (%x = and %a, %mask) and (%y = and %b, %mask)
  2- %mask is a power of 2.
  (icmp eq %x, 0) & (icmp ne %y, 0) => icmp ult %x, %y if the following is true:
  1- (%x = and %a, %mask1) and (%y = and %b, %mask2)
  2- Let %t be the smallest power of 2 where %mask1 & %t != 0. Then for any %s that is a power of 2 and %s & %mask2 != 0, we must have %s <= %t.
  For example if %mask1 = 24 and %mask2 = 16, setting %s = 16 and %t = 8 violates condition (2) above. So this optimization cannot be applied.
  llvm-svn: 289813
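  A sketch (hypothetical function and value names) of the first pattern above, where both values are masked with the same power of 2; note that this change was later reverted in r290266 because of PR31449:
  ```
  define i1 @xor_of_icmps(i32 %a, i32 %b) {
    %x  = and i32 %a, 8             ; same power-of-2 mask on both sides
    %y  = and i32 %b, 8
    %c1 = icmp ne i32 %x, 0
    %c2 = icmp ne i32 %y, 0
    %r  = xor i1 %c1, %c2           ; true iff exactly one of %x, %y is nonzero
    ret i1 %r
  }
  ; with the patch this becomes: icmp ne i32 %x, %y
  ```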
* Remove the AssumptionCache (Hal Finkel, 2016-12-15, 1 file, -9/+9)
  After r289755, the AssumptionCache is no longer needed. Variables affected by assumptions are now found by using the new operand-bundle-based scheme. This new scheme is more computationally efficient, and also we need much less code...
  llvm-svn: 289756
* add and use isBitwiseLogicOp() helper function; NFCI (Sanjay Patel, 2016-11-22, 1 file, -15/+5)
  llvm-svn: 287712
* [InstCombine] add helper function for folding {and,or,xor} (cast X), C; NFCI (Sanjay Patel, 2016-09-12, 1 file, -28/+41)
  llvm-svn: 281187
* [InstCombine] change insertRangeTest() to use APInt instead of Constant; NFCI (Sanjay Patel, 2016-08-31, 1 file, -14/+19)
  This is prep work before changing the callers to also use APInt which will allow folds for splat vectors. Currently, the callers have ConstantInt guards in place, so no functional change intended with this commit.
  llvm-svn: 280282
* [InstCombine] clean up InsertRangeTest; NFCI (Sanjay Patel, 2016-08-31, 1 file, -35/+15)
  It's much less code and easier to read if we don't duplicate everything between the 'Inside' and not 'Inside' cases. As noted with the FIXME, the goal is to make this vector-friendly in a follow-up patch.
  llvm-svn: 280183
* InstCombine: Clean up some trailing whitespace. NFC (Justin Bogner, 2016-08-05, 1 file, -2/+2)
  llvm-svn: 277793
* InstCombine: Replace some never-null pointers with references. NFC (Justin Bogner, 2016-08-05, 1 file, -11/+11)
  llvm-svn: 277792
* [InstCombine] Refactor optimization of zext(or(icmp, icmp)) to enable more aggressive cast-folding (Tobias Grosser, 2016-08-03, 1 file, -2/+1)
  Summary: InstCombine unfolds expressions of the form `zext(or(icmp, icmp))` to `or(zext(icmp), zext(icmp))` such that in a later iteration of InstCombine the exposed `zext(icmp)` instructions can be optimized. We now combine this unfolding and the subsequent `zext(icmp)` optimization to be performed together. Since the unfolding doesn't happen separately anymore, we also again enable the folding of `logic(cast(icmp), cast(icmp))` expressions to `cast(logic(icmp, icmp))` which had been disabled due to its interference with the unfolding transformation. Tested via `make check` and `lnt`.
  Background: For a better understanding of how it came to this change, we subsequently summarize its history. In commit r275989 we've already tried to enable the folding of `logic(cast(icmp), cast(icmp))` to `cast(logic(icmp, icmp))` which had to be reverted in r276106 because it could lead to an endless loop in InstCombine (also see http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20160718/374347.html). The root of this problem is that in `visitZExt()` in InstCombineCasts.cpp there also exists a reverse of the above folding transformation, which unfolds `zext(or(icmp, icmp))` to `or(zext(icmp), zext(icmp))` in order to expose `zext(icmp)` operations which would then possibly be eliminated by subsequent iterations of InstCombine. However, before these `zext(icmp)` would be eliminated, the folding from r275989 could kick in and cause InstCombine to endlessly switch back and forth between the folding and the unfolding transformation. This is the reason why we now combine the `zext`-unfolding and the elimination of the exposed `zext(icmp)` to happen at one go, because this enables us to still allow the cast-folding in `logic(cast(icmp), cast(icmp))` without entering an endless loop again.
  Details on the submitted changes:
  - In `visitZExt()` we combine the unfolding and optimization of `zext` instructions.
  - In `transformZExtICmp()` we have to use `Builder->CreateIntCast()` instead of `CastInst::CreateIntegerCast()` to make sure that the new `CastInst` is inserted in a `BasicBlock`. The new calls to `transformZExtICmp()` that we introduce in `visitZExt()` would otherwise cause according assertions to be triggered (in our case this happened, for example, with lnt for the MultiSource/Applications/sqlite3 and SingleSource/Regression/C++/EH/recursive-throw tests). The subsequent usage of `replaceInstUsesWith()` is necessary to ensure that the new `CastInst` replaces the `ZExtInst` accordingly.
  - In InstCombineAndOrXor.cpp we again allow the folding of casts on `icmp` instructions.
  - The instruction order in the optimized IR for the zext-or-icmp.ll test case is different with the introduced changes.
  - The test cases in zext.ll have been adopted from the reverted commits r275989 and r276105.
  Reviewers: grosser, majnemer, spatel
  Subscribers: eli.friedman, majnemer, llvm-commits
  Differential Revision: https://reviews.llvm.org/D22864
  Contributed-by: Matthias Reisinger <d412vv1n@gmail.com>
  llvm-svn: 277635
* [InstCombine] LogicOpc (zext X), C --> zext (LogicOpc X, C) (PR28476) (Sanjay Patel, 2016-07-21, 1 file, -20/+16)
  The benefits of this change include:
  1. Remove DeMorgan-matching code that was added specifically to work-around the missing transform in http://reviews.llvm.org/rL248634.
  2. Makes the DeMorgan transform work for vectors too.
  3. Fix PR28476: https://llvm.org/bugs/show_bug.cgi?id=28476
  Extending this transform to other casts and other associative operators may be useful too. See https://reviews.llvm.org/D22421 for a prerequisite for doing that though.
  Differential Revision: https://reviews.llvm.org/D22271
  llvm-svn: 276221
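  A minimal sketch of the fold named in the subject line, with made-up function names; the logic op and the constant move below the zext so the narrow op can be simplified further:
  ```
  define i32 @and_zext(i8 %x) {
    %zx = zext i8 %x to i32
    %r  = and i32 %zx, 42          ; LogicOpc (zext X), C
    ret i32 %r
  }

  ; conceptually becomes zext (LogicOpc X, C):
  define i32 @and_zext_folded(i8 %x) {
    %a = and i8 %x, 42
    %r = zext i8 %a to i32
    ret i32 %r
  }
  ```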
* move decomposeBitTestICmp() to Transforms/Utils; NFC (Sanjay Patel, 2016-07-20, 1 file, -47/+0)
  As noted in https://reviews.llvm.org/D22537, we can use this functionality in visitSelectInstWithICmp() and InstSimplify, but currently we have duplicated code.
  llvm-svn: 276140
* Revert "[InstCombine] Enable cast-folding in logic(cast(icmp), cast(icmp))"Benjamin Kramer2016-07-201-8/+2
| | | | | | | | Makes InstCombine infloop when compiling v8. This reverts commit r275989 and r276105. llvm-svn: 276106
* [InstCombine] Enable cast-folding in logic(cast(icmp), cast(icmp)) (Tobias Grosser, 2016-07-19, 1 file, -2/+8)
  Summary: Currently, InstCombine is already able to fold expressions of the form `logic(cast(A), cast(B))` to the simpler form `cast(logic(A, B))`, where logic designates one of `and`/`or`/`xor`. This transformation is implemented in `foldCastedBitwiseLogic()` in InstCombineAndOrXor.cpp. However, this optimization will not be performed if both `A` and `B` are `icmp` instructions.
  The decision to preclude casts of `icmp` instructions originates in r48715 in combination with r261707, and can be best understood by the title of the former one:
  > Transform (zext (or (icmp), (icmp))) to (or (zext (cimp), (zext icmp))) if at least one of the (zext icmp) can be transformed to eliminate an icmp.
  Apparently, it introduced a transformation that is a reverse of the transformation that is done in `foldCastedBitwiseLogic()`. Its purpose is to expose pairs of `zext icmp` that would subsequently be optimized by `transformZExtICmp()` in InstCombineCasts.cpp. Therefore, in order to avoid an endless loop of switching back and forth between these two transformations, the one in `foldCastedBitwiseLogic()` has been restricted to exclude `icmp` instructions, which is mirrored in the responsible check:
  `if ((!isa<ICmpInst>(Cast0Src) || !isa<ICmpInst>(Cast1Src)) && ...`
  This check seems to sort out more cases than necessary because:
  - the reverse transformation is obviously done for `or` instructions only
  - and also not every `zext icmp` pair is necessarily the result of this reverse transformation
  Therefore we now remove this check and replace it by a more fine-grained one in `shouldOptimizeCast()` that now rejects only those `logic(zext(icmp), zext(icmp))` that would be able to be optimized by `transformZExtICmp()`, which also avoids the mentioned endless loop. That means we are now able to also simplify expressions of the form `logic(cast(icmp), cast(icmp))` to `cast(logic(icmp, icmp))` (`cast` being an arbitrary `CastInst`).
  As an example, consider the following IR snippet
  ```
  %1 = icmp sgt i64 %a, %b
  %2 = zext i1 %1 to i8
  %3 = icmp slt i64 %a, %c
  %4 = zext i1 %3 to i8
  %5 = and i8 %2, %4
  ```
  which would now be transformed to
  ```
  %1 = icmp sgt i64 %a, %b
  %2 = icmp slt i64 %a, %c
  %3 = and i1 %1, %2
  %4 = zext i1 %3 to i8
  ```
  This issue became apparent when experimenting with the programming language Julia, which makes use of LLVM. Currently, Julia lowers its `Bool` datatype to LLVM's `i8` (also see https://github.com/JuliaLang/julia/pull/17225). In fact, the above IR example is the lowered form of the Julia snippet `(a > b) & (a < c)`. As shown above, this may introduce `zext` operations, casting between `i1` and `i8`, which could for example hinder ScalarEvolution and Polly on certain code.
  Reviewers: grosser, vtjnash, majnemer
  Subscribers: majnemer, llvm-commits
  Differential Revision: https://reviews.llvm.org/D22511
  Contributed-by: Matthias Reisinger
  llvm-svn: 275989
* [InstCombine] Minor cleanup of cast simplification code [NFC] (Tobias Grosser, 2016-07-19, 1 file, -5/+24)
  Summary: This patch cleans up parts of InstCombine to raise its compliance with the LLVM coding standards and to increase its readability. The changes and the rationale for each are summarized in the following:
  - Rename `ShouldOptimizeCast()` to `shouldOptimizeCast()` since functions should start with a lower case letter.
  - Move `shouldOptimizeCast()` from InstCombineCasts.cpp to InstCombineAndOrXor.cpp since it's only used there.
  - Simplify interface of `shouldOptimizeCast()`.
  - Minor code style adaptions in `shouldOptimizeCast()`.
  - Remove the documentation on the function definition of `shouldOptimizeCast()` since it just repeats the documentation on its declaration. Also enhance the documentation on its declaration with more information describing its intended use and make it doxygen-compliant.
  - Change a comment in `foldCastedBitwiseLogic()` from `fold (logic (cast A), (cast B)) -> (cast (logic A, B))` to `fold logic(cast(A), cast(B)) -> cast(logic(A, B))` since the surrounding comments use this format.
  - Remove comment `Only do this if the casts both really cause code to be generated.` in `foldCastedBitwiseLogic()` since it just repeats parts of the documentation of `shouldOptimizeCast()` and does not help to improve readability.
  - Simplify the interface of `isEliminableCastPair()`.
  - Removed the documentation on the function definition of `isEliminableCastPair()` which only contained obvious statements about its implementation. Instead added more general doxygen-compliant documentation to its declaration.
  - Renamed parameter `DoXform` of `transformZExtIcmp()` to `DoTransform` to make its intention clearer.
  - Moved documentation of `transformZExtIcmp()` from its definition to its declaration and made it doxygen-compliant.
  Reviewers: vtjnash, grosser
  Subscribers: majnemer, llvm-commits
  Differential Revision: https://reviews.llvm.org/D22449
  Contributed-by: Matthias Reisinger
  llvm-svn: 275964
* [InstCombine] extend vector select matching for non-splat constants (Sanjay Patel, 2016-07-13, 1 file, -3/+40)
  In D21740, we discussed trying to make this a more general matcher. However, I didn't see a clean way to handle the regular m_Not cases and these non-splat vector patterns, so I've opted for the direct approach here. If there are other potential uses of areInverseVectorBitmasks(), we could move that helper function to a higher level.
  There is an open question as to which of these forms should be considered the canonical IR:
  %sel = select <4 x i1> <i1 true, i1 false, i1 false, i1 true>, <4 x i32> %a, <4 x i32> %b
  %shuf = shufflevector <4 x i32> %a, <4 x i32> %b, <4 x i32> <i32 0, i32 5, i32 6, i32 3>
  Differential Revision: http://reviews.llvm.org/D22114
  llvm-svn: 275289
* [InstCombine] don't form select from bitcasted logic ops if bitcasts have >1 use (Sanjay Patel, 2016-07-08, 1 file, -2/+2)
  This isn't a sure thing (are 2 extra bitcasts less expensive than a logic op?), but we'll try to err on the conservative side by going with the case that has fewer IR instructions.
  Note: This question came up in http://reviews.llvm.org/D22114, but this part is independent of that patch proposal, so I'm making this small change ahead of that one.
  See also: http://reviews.llvm.org/rL274926
  llvm-svn: 274932
* [InstCombine] don't form select from logic ops if it's unlikely that we'll eliminate any ops (Sanjay Patel, 2016-07-08, 1 file, -17/+22)
  llvm-svn: 274926
* [InstCombine] check for one-use before turning simple logic op into a select (Sanjay Patel, 2016-07-08, 1 file, -2/+2)
  llvm-svn: 274891
* [InstCombine] allow or(sext(A), B) --> A ? -1 : B transform for vectors (Sanjay Patel, 2016-07-08, 1 file, -4/+5)
  llvm-svn: 274883
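  A small sketch (hypothetical names) of the vector form of this transform: each lane of the sext mask is either all-ones or zero, so or'ing it with %b is the same as selecting between -1 and %b per lane.
  ```
  define <2 x i32> @or_sext(<2 x i1> %a, <2 x i32> %b) {
    %s = sext <2 x i1> %a to <2 x i32>
    %r = or <2 x i32> %s, %b
    ret <2 x i32> %r
  }
  ; conceptually becomes:
  ;   select <2 x i1> %a, <2 x i32> <i32 -1, i32 -1>, <2 x i32> %b
  ```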
* [InstCombine] use ConstantExpr::getBitCast() instead of creating useless instruction (Sanjay Patel, 2016-06-30, 1 file, -2/+1)
  llvm-svn: 274229
* [InstCombine] extend matchSelectFromAndOr() to work with i1 scalar types (Sanjay Patel, 2016-06-30, 1 file, -10/+26)
  If the incoming types are i1, then we don't have to pattern match any sext ops.
  Differential Revision: http://reviews.llvm.org/D21740
  llvm-svn: 274228
* [InstCombine] Simplify and correct folding fcmps with the same children (Tim Shen, 2016-06-29, 1 file, -122/+76)
  Summary: Take advantage of FCmpInst::Predicate's bit pattern and handle (fcmp *, x, y) | (fcmp *, x, y) and (fcmp *, x, y) & (fcmp *, x, y) more consistently. Also fold more FCmpInst::FCMP_FALSE and FCmpInst::FCMP_TRUE to constants.
  Currently InstCombine wrongly folds (fcmp ogt, x, y) | (fcmp ord, x, y) to (fcmp ogt, x, y); this patch also fixes that.
  Reviewers: spatel
  Subscribers: llvm-commits, iteratee, echristo
  Differential Revision: http://reviews.llvm.org/D21775
  llvm-svn: 274156
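  A sketch (made-up function name) of the miscompile case called out above. Since ogt implies ord, the union of the two predicates is ord, not ogt, so the correct fold keeps the ord comparison:
  ```
  define i1 @or_of_fcmps(float %x, float %y) {
    %a = fcmp ogt float %x, %y
    %b = fcmp ord float %x, %y      ; true when neither operand is NaN
    %r = or i1 %a, %b
    ret i1 %r
  }
  ; correct result: fcmp ord float %x, %y
  ; (the old code wrongly produced fcmp ogt float %x, %y)
  ```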
* [InstCombine, NFC] Change the generated variable names by creating new instructions (Tim Shen, 2016-06-29, 1 file, -6/+6)
  This removes some noise for D21775's test changes.
  llvm-svn: 274155
* [InstCombine] refactor optional bitcasting in matchSelectFromAndOr() into one code path (NFCI) (Sanjay Patel, 2016-06-24, 1 file, -45/+39)
  Tests to verify that the commuted variants are all exercised were added with: http://reviews.llvm.org/rL273702
  llvm-svn: 273706
* [InstCombine] consolidate commutation variants of matchSelectFromAndOr() in one place; NFCI (Sanjay Patel, 2016-06-24, 1 file, -27/+18)
  By putting all the possible commutations together, we simplify the code. Note that this is NFCI, but I'm adding tests that actually exercise each commutation pattern because we don't have this anywhere else.
  llvm-svn: 273702
* [InstSimplify] analyze (optionally casted) icmps to eliminate obviously false logic (PR27869) (Sanjay Patel, 2016-06-20, 1 file, -10/+0)
  By moving this transform to InstSimplify from InstCombine, we sidestep the problem/question raised by PR27869: https://llvm.org/bugs/show_bug.cgi?id=27869
  ...where InstCombine turns an icmp+zext into a shift causing us to miss the fold.
  Credit to David Majnemer for a draft patch of the changes to InstructionSimplify.cpp.
  Differential Revision: http://reviews.llvm.org/D21512
  llvm-svn: 273200
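  One hypothetical shape of "obviously false" logic that this kind of analysis can remove (the PR27869 case additionally has casts between the icmps and the logic op); the function name is made up:
  ```
  define i1 @obviously_false(i32 %x) {
    %a = icmp eq i32 %x, 1
    %b = icmp eq i32 %x, 2
    %r = and i1 %a, %b              ; %x cannot equal both constants
    ret i1 %r
  }
  ; simplifies to: ret i1 false
  ```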
* [InstCombine] look through bitcasts to find selects (Sanjay Patel, 2016-06-03, 1 file, -18/+49)
  There was concern that creating bitcasts for the simpler potential select pattern:
  define <2 x i64> @vecBitcastOp1(<4 x i1> %cmp, <2 x i64> %a) {
    %a2 = add <2 x i64> %a, %a
    %sext = sext <4 x i1> %cmp to <4 x i32>
    %bc = bitcast <4 x i32> %sext to <2 x i64>
    %and = and <2 x i64> %a2, %bc
    ret <2 x i64> %and
  }
  might lead to worse code for some targets, so this patch is matching the larger patterns seen in the test cases.
  The motivating example for this patch is this IR produced via SSE intrinsics in C:
  define <2 x i64> @gibson(<2 x i64> %a, <2 x i64> %b) {
    %t0 = bitcast <2 x i64> %a to <4 x i32>
    %t1 = bitcast <2 x i64> %b to <4 x i32>
    %cmp = icmp sgt <4 x i32> %t0, %t1
    %sext = sext <4 x i1> %cmp to <4 x i32>
    %t2 = bitcast <4 x i32> %sext to <2 x i64>
    %and = and <2 x i64> %t2, %a
    %neg = xor <4 x i32> %sext, <i32 -1, i32 -1, i32 -1, i32 -1>
    %neg2 = bitcast <4 x i32> %neg to <2 x i64>
    %and2 = and <2 x i64> %neg2, %b
    %or = or <2 x i64> %and, %and2
    ret <2 x i64> %or
  }
  For an AVX target, this is currently:
    vpcmpgtd %xmm1, %xmm0, %xmm2
    vpand %xmm0, %xmm2, %xmm0
    vpandn %xmm1, %xmm2, %xmm1
    vpor %xmm1, %xmm0, %xmm0
    retq
  With this patch, it becomes:
    vpmaxsd %xmm1, %xmm0, %xmm0
  Differential Revision: http://reviews.llvm.org/D20774
  llvm-svn: 271676
* transform obscured FP sign bit ops into a fabs/fneg using TLI hook (Sanjay Patel, 2016-06-02, 1 file, -18/+0)
  This is effectively a revert of:
  http://reviews.llvm.org/rL249702 - [InstCombine] transform masking off of an FP sign bit into a fabs() intrinsic call (PR24886)
  and:
  http://reviews.llvm.org/rL249701 - [ValueTracking] teach computeKnownBits that a fabs() clears sign bits
  and a reimplementation as a DAG combine for targets that have IEEE754-compliant fabs/fneg instructions.
  This is intended to resolve the objections raised on the dev list:
  http://lists.llvm.org/pipermail/llvm-dev/2016-April/098154.html
  and:
  https://llvm.org/bugs/show_bug.cgi?id=24886#c4
  In the interest of patch minimalism, I've only partly enabled AArch64. PowerPC, MIPS, x86 and others can enable later.
  Differential Revision: http://reviews.llvm.org/D19391
  llvm-svn: 271573
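  A sketch (hypothetical function name) of the kind of "obscured" sign-bit operation involved: masking off the sign bit through an integer bitcast, which the DAG combine can now select as fabs on targets with IEEE754-compliant fabs/fneg instructions:
  ```
  define float @clear_sign_bit(float %x) {
    %bc  = bitcast float %x to i32
    %and = and i32 %bc, 2147483647   ; 0x7FFFFFFF clears the sign bit
    %res = bitcast i32 %and to float
    ret float %res
  }
  ```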
* [InstCombine] remove guard for generating a vector select (Sanjay Patel, 2016-06-02, 1 file, -15/+11)
  This is effectively NFC because we already do this transform after r175380: http://reviews.llvm.org/rL175380
  and also via foldBoolSextMaskToSelect(). This change should just make it a bit more efficient to match the pattern.
  The original guard was added in r95058: http://reviews.llvm.org/rL95058
  A sampling of codegen for current in-tree targets shows no problems. This makes sense given that we're already producing the vector selects via the other transforms.
  llvm-svn: 271554
* [InstCombine] move and/sext fold to helper function; NFCI (Sanjay Patel, 2016-05-27, 1 file, -27/+28)
  We need to enhance the pattern matching on these to look through bitcasts.
  llvm-svn: 271051
* [InstCombine] Catch more bswap cases missed due to zext and truncs. (Chad Rosier, 2016-05-26, 1 file, -14/+28)
  Fixes PR27824.
  Differential Revision: http://reviews.llvm.org/D20591.
  llvm-svn: 270853
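  A hedged sketch of the kind of pattern involved (not necessarily the exact PR27824 test case; the function name is made up): a 16-bit byte swap written through a zext/trunc pair that the bswap matcher can now recognize:
  ```
  define i16 @swap16(i16 %x) {
    %z  = zext i16 %x to i32
    %hi = lshr i32 %z, 8
    %lo = shl i32 %z, 8
    %or = or i32 %lo, %hi
    %r  = trunc i32 %or to i16     ; e.g. 0xABCD -> 0xCDAB
    ret i16 %r
  }
  ; recognized as: call i16 @llvm.bswap.i16(i16 %x)
  ```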
* Clarify that we match BSwap in InstCombine and BitReverse in CGP. NFC. (Chad Rosier, 2016-05-25, 1 file, -5/+5)
  Also, rename recognizeBitReverseOrBSwapIdiom to recognizeBSwapOrBitReverseIdiom, so the ordering of the MatchBSwaps and MatchBitReversals arguments is consistent with the function name.
  llvm-svn: 270715
* Typo. NFC. (Chad Rosier, 2016-05-09, 1 file, -1/+1)
  llvm-svn: 268975
* Cleanup redundant expression in InstCombineAndOrXor. (Etienne Bergeron, 2016-04-25, 1 file, -2/+0)
  Summary: The expression is redundant on both sides of operator |. Detected by: http://reviews.llvm.org/D19451
  Reviewers: rnk, majnemer
  Subscribers: cfe-commits
  Differential Revision: http://reviews.llvm.org/D19459
  llvm-svn: 267458
* [InstCombine] transform bitcasted bitwise logic ops with constants (PR26702) (Sanjay Patel, 2016-03-03, 1 file, -7/+28)
  Given that we're not actually reducing the instruction count in the included regression tests, I think we would call this a canonicalization step.
  The motivation comes from the example in PR26702: https://llvm.org/bugs/show_bug.cgi?id=26702
  If we hoist the bitwise logic ahead of the bitcast, the previously unoptimizable example of:
  define <4 x i32> @is_negative(<4 x i32> %x) {
    %lobit = ashr <4 x i32> %x, <i32 31, i32 31, i32 31, i32 31>
    %not = xor <4 x i32> %lobit, <i32 -1, i32 -1, i32 -1, i32 -1>
    %bc = bitcast <4 x i32> %not to <2 x i64>
    %notnot = xor <2 x i64> %bc, <i64 -1, i64 -1>
    %bc2 = bitcast <2 x i64> %notnot to <4 x i32>
    ret <4 x i32> %bc2
  }
  simplifies to the expected:
  define <4 x i32> @is_negative(<4 x i32> %x) {
    %lobit = ashr <4 x i32> %x, <i32 31, i32 31, i32 31, i32 31>
    ret <4 x i32> %lobit
  }
  Differential Revision: http://reviews.llvm.org/D17583
  llvm-svn: 262645
* [InstCombine] enable optimization of casted vector xor instructions (Sanjay Patel, 2016-02-24, 1 file, -18/+8)
  This is part of the payoff for the refactoring in:
  http://reviews.llvm.org/rL261649
  http://reviews.llvm.org/rL261707
  In addition to removing a pile of duplicated code, the xor case was missing the optimization for vector types because it checked "SrcTy->isIntegerTy()" rather than "SrcTy->isIntOrIntVectorTy()" like 'and' and 'or' were already doing.
  This solves part of: https://llvm.org/bugs/show_bug.cgi?id=26702
  llvm-svn: 261750
* [InstCombine] refactor visitOr() to use foldCastedBitwiseLogic() (Sanjay Patel, 2016-02-23, 1 file, -47/+31)
  Note: The 'and' case in foldCastedBitwiseLogic() is inheriting one extra check from the nearly identical 'or' case:
  if ((!isa<ICmpInst>(Cast0Src) || !isa<ICmpInst>(Cast1Src))
  But I'm not sure how to expose that difference in a regression test. Without that check, the 'or' path will infinite loop on test/Transforms/InstCombine/zext-or-icmp.ll because the zext-or-icmp fold is attempting a reverse transform.
  The refactoring should extend to the 'xor' case next to solve part of PR26702.
  llvm-svn: 261707
* [InstCombine] improve readability; NFCI (Sanjay Patel, 2016-02-23, 1 file, -30/+36)
  Less indenting, named local variables, more descriptive names.
  llvm-svn: 261659
* [InstCombine] less indenting; NFC (Sanjay Patel, 2016-02-23, 1 file, -31/+32)
  llvm-svn: 261652
* [InstCombine] add helper function to foldCastedBitwiseLogic(); NFCI (Sanjay Patel, 2016-02-23, 1 file, -29/+40)
  This is a straight cut and paste of the existing code and is intended to be the first step in solving part of PR26702: https://llvm.org/bugs/show_bug.cgi?id=26702
  We should be able to reuse most of this and delete the nearly identical existing code in visitOr(). Then, we can enhance visitXor() to use the same code too.
  llvm-svn: 261649
* function names start with a lowercase letter; NFC (Sanjay Patel, 2016-02-01, 1 file, -27/+27)
  llvm-svn: 259425
* combine clauses with same output; NFCI (Sanjay Patel, 2016-01-18, 1 file, -8/+3)
  llvm-svn: 258062
* use m_OneUse; NFCI (Sanjay Patel, 2016-01-18, 1 file, -4/+2)
  llvm-svn: 258059