path: root/llvm/lib/Transforms/InstCombine
Commit message (Author, Date; Files changed, Lines -/+)
...
* [InstCombine] Clean up some duplicated one-use checks (Craig Topper, 2017-06-19; 1 file, -10/+4)

  Summary: These four patterns repeat the same one-use check twice each: once
  without a cast and once with one. But the cast has no effect on which method
  is called. For the OR case, I believe the fold is always profitable
  regardless of the number of uses, since we never increase the instruction
  count. For the AND case, I believe it is profitable if the pair of xors has
  one use, such that we get rid of it completely, or if the C value is
  something freely invertible, in which case the 'not' doesn't cost anything.

  Reviewers: spatel, majnemer
  Reviewed By: spatel
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D34308
  llvm-svn: 305705
* [InstCombine] Set correct insertion point for selects generated while folding phis (Anna Thomas, 2017-06-16; 1 file, -1/+11)

  Summary: When we fold vector constants that are operands of phis feeding
  into a select, we need to set the correct insertion point for the *new*
  selects that get generated. The correct insertion point is the incoming
  block for the phi. Such cases can occur with patch r298845, which fixed
  folding of vector constants, but the new selects could be inserted
  incorrectly (as the added test case shows).

  Reviewers: majnemer, spatel, sanjoy
  Reviewed by: spatel
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D34162
  llvm-svn: 305591
* [Atomics] Rename and change prototype for atomic memcpy intrinsic (Daniel Neilson, 2017-06-16; 2 files, -59/+65)

  Summary: Background: http://lists.llvm.org/pipermail/llvm-dev/2017-May/112779.html

  This change alters the prototype of the atomic memcpy intrinsic to more
  closely resemble the semantics and parameters of the llvm.memcpy intrinsic,
  to ease a later combination of the llvm.memcpy and atomic memcpy intrinsics.
  Furthermore, the atomic memcpy intrinsic is renamed to make clear that it is
  not a generic atomic memcpy, but specifically a memcpy that is unordered
  atomic.

  Reviewers: reames, sanjoy, efriedma
  Reviewed By: reames
  Subscribers: mzolotukhin, anna, llvm-commits, skatkov
  Differential Revision: https://reviews.llvm.org/D33240
  llvm-svn: 305558
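  For orientation (not part of the commit message), a sketch of the renamed
  intrinsic's shape: the trailing i32 is the constant element size, and
  alignment comes from parameter attributes. The exact attribute spelling
  below is illustrative:

  declare void @llvm.memcpy.element.unordered.atomic.p0i8.p0i8.i32(i8*, i8*, i32, i32)

  call void @llvm.memcpy.element.unordered.atomic.p0i8.p0i8.i32(
      i8* align 4 %dest, i8* align 4 %src, i32 %len, i32 4)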
* [InstCombine] Fold (!iszero(A & K1) & !iszero(A & K2)) -> (A & (K1 | K2)) == (K1 | K2) if K1 and K2 are 1-bit masks (Craig Topper, 2017-06-16; 2 files, -32/+61)

  Summary: This is the De Morgan'ed version of the case we already handle for
  the OR of iszero.

  Reviewers: spatel
  Reviewed By: spatel
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D34244
  llvm-svn: 305548
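  As an illustration (constructed here, not taken from the commit), a minimal
  instance of the new fold with hypothetical 1-bit masks K1 = 1 and K2 = 8, in
  the Alive-style notation used elsewhere in this log:

  %a1 = and i32 %A, 1
  %c1 = icmp ne i32 %a1, 0
  %a2 = and i32 %A, 8
  %c2 = icmp ne i32 %a2, 0
  %r = and i1 %c1, %c2
    =>
  %m = and i32 %A, 9
  %r = icmp eq i32 %m, 9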
* [InstCombine] Add two FIXMEs for bad single use checks. NFC (Craig Topper, 2017-06-15; 1 file, -0/+4)

  llvm-svn: 305510
* [InstCombine] Make the context instruction parameter of foldOrOfICmps a reference to discourage passing nullptr and to remove the '&' from all of the call sites. NFC (Craig Topper, 2017-06-15; 2 files, -10/+10)

  llvm-svn: 305493
* [InstCombine] Handle (iszero(A & K1) | iszero(A & K2)) -> (A & (K1 | K2)) != (K1 | K2) when one of the ands is commuted relative to the other (Craig Topper, 2017-06-15; 1 file, -20/+14)

  Currently we expect A to be on the same side in both ands, but nothing
  guarantees that. While there, also switch to using matchers for some of the
  code.

  Differential Revision: https://reviews.llvm.org/D34230
  llvm-svn: 305487
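  Illustration (constructed, not from the commit) with hypothetical masks
  K1 = 1 and K2 = 8; the second 'and' is written with its operands commuted
  purely to show the pattern shape the fold now accepts:

  %a1 = and i32 %A, 1
  %c1 = icmp eq i32 %a1, 0
  %a2 = and i32 8, %A
  %c2 = icmp eq i32 %a2, 0
  %r = or i1 %c1, %c2
    =>
  %m = and i32 %A, 9
  %r = icmp ne i32 %m, 9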
* [InstCombine] lshr (sext iM X to iN), N-M --> zext (ashr X, min(N-M, M-1)) to iN (Sanjay Patel, 2017-06-12; 1 file, -4/+10)

  This is a follow-up to https://reviews.llvm.org/D33879 /
  https://reviews.llvm.org/rL304939, and was discussed in
  https://reviews.llvm.org/D33338.

  We prefer this form because a narrower shift may be cheaper, and we can more
  easily fold a zext than a sext.

  http://rise4fun.com/Alive/slVe
  Name: shz
  %s = sext i8 %x to i12
  %r = lshr i12 %s, 4
    =>
  %a = ashr i8 %x, 4
  %r = zext i8 %a to i12

  llvm-svn: 305190
* [InstSimplify] Don't constant fold or DCE calls that are marked nobuiltin (Andrew Kaylor, 2017-06-09; 1 file, -2/+2)

  Differential Revision: https://reviews.llvm.org/D33737
  llvm-svn: 305132
* [InstCombine] Pass a proper context instruction to all of the calls into InstSimplify (Craig Topper, 2017-06-09; 9 files, -45/+66)

  Summary: This matches the behavior we already had for compares and makes us
  consistent everywhere.

  Reviewers: dberlin, hfinkel, spatel
  Reviewed By: dberlin
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D33604
  llvm-svn: 305049
* [InstCombine] fold lshr (sext X), C1 --> zext (lshr X, C2) (Sanjay Patel, 2017-06-07; 1 file, -0/+19)

  This was discussed in D33338. We have larger pattern-matching ending in a
  truncate that we can reduce or remove by handling these smaller patterns
  first. Further motivation is that narrower shift ops are easier for value
  tracking, and zext is better than sext.

  http://rise4fun.com/Alive/rhh
  Name: boolshift
  %sext = sext i1 %x to i8
  %r = lshr i8 %sext, 7
    =>
  %r = zext i1 %x to i8

  Name: noboolshift
  %sext = sext i3 %x to i8
  %r = lshr i8 %sext, 7
    =>
  %sh = lshr i3 %x, 2
  %r = zext i3 %sh to i8

  Differential Revision: https://reviews.llvm.org/D33879
  llvm-svn: 304939
* [InstCombine][InstSimplify] Use APInt::isNullValue/isOneValue to reduce compiled code for comparing APInts with 0 and 1. NFC (Craig Topper, 2017-06-07; 8 files, -57/+63)

  These methods are specifically optimized to do only a leading-zero count,
  without an additional uint64_t compare.

  llvm-svn: 304876
* [InstCombine] Fix two asserts that were accidentally checking that an APInt pointer is non-null instead of checking that the APInt itself is non-zero (Craig Topper, 2017-06-07; 1 file, -2/+2)

  I believe this code used to use APInt references, which would have worked.
  But then they were changed to pointers to allow m_APInt to be used.

  llvm-svn: 304875
* Move Object format code to lib/BinaryFormat. (Zachary Turner, 2017-06-07; 1 file, -1/+1)

  This creates a new library called BinaryFormat that has all of the headers
  from llvm/Support containing structure and layout definitions for various
  types of binary formats like DWARF, COFF, ELF, etc., as well as the code for
  identifying a file from its magic.

  Differential Revision: https://reviews.llvm.org/D33843
  llvm-svn: 304864
* Sort the remaining #include lines in include/... and lib/.... (Chandler Carruth, 2017-06-06; 3 files, -3/+3)

  I did this a long time ago with a janky python script, but now clang-format
  has built-in support for this. I fed clang-format every line with a #include
  and let it re-sort things according to the precise LLVM rules for include
  ordering baked into clang-format these days.

  I've reverted a number of files where the results of sorting includes isn't
  healthy. Either places where we have legacy code relying on particular
  include ordering (where possible, I'll fix these separately) or where we
  have particular formatting around #include lines that I didn't want to
  disturb in this patch.

  This patch is *entirely* mechanical. If you get merge conflicts or anything,
  just ignore the changes in this patch and run clang-format over your
  #include lines in the files.

  Sorry for any noise here, but it is important to keep these things stable. I
  was seeing an increasing number of patches with irrelevant re-ordering of
  #include lines because clang-format was used. This patch at least isolates
  that churn, makes it easy to skip when resolving conflicts, and gets us to a
  clean baseline (again).

  llvm-svn: 304787
* [InstCombine] Fix extractelement use before def (Sven van Haastregt, 2017-06-05; 1 file, -1/+1)

  This fixes a bug that can cause extractelements with operands that haven't
  been defined yet to be inserted at a wrong point when optimising
  insertelements.

  Patch by Karl Hylen.

  Differential Revision: https://reviews.llvm.org/D33449
  llvm-svn: 304701
* [InstCombine] Add support for simplifying ctlz/cttz intrinsics based on known bits (Craig Topper, 2017-06-03; 1 file, -5/+1)

  llvm-svn: 304669
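  A hedged illustration of the idea (constructed, not from the patch): when
  known bits pin down the lowest set bit, the intrinsic call folds to a
  constant.

  %m = and i32 %x, -8                              ; low 3 bits known zero
  %v = or i32 %m, 4                                ; bit 2 known one
  %r = call i32 @llvm.cttz.i32(i32 %v, i1 false)   ; folds to i32 2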
* [InstCombine] fix icmp with not op and constant to work with splat vector constant (Sanjay Patel, 2017-06-02; 1 file, -3/+3)

  llvm-svn: 304562
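  An illustrative splat-vector version of the scalar fold traced in the next
  entry's debug log (example constructed here, not from the commit):

  %notx = xor <2 x i8> %x, <i8 -1, i8 -1>
  %cmp = icmp sgt <2 x i8> %notx, <i8 42, i8 42>
    =>
  %cmp = icmp slt <2 x i8> %x, <i8 -43, i8 -43>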
* [InstCombine] improve perf by not creating a known non-canonical instruction (Sanjay Patel, 2017-06-02; 1 file, -3/+6)

  Op1 (RHS) is a constant, so putting it on the LHS makes us churn through
  visitICmp an extra time to canonicalize it:

  INSTCOMBINE ITERATION #1 on cmpnot
  IC: ADDING: 3 instrs to worklist
  IC: Visiting: %notx = xor i8 %x, -1
  IC: Visiting: %cmp = icmp sgt i8 %notx, 42
  IC: Old = %cmp = icmp sgt i8 %notx, 42
      New = <badref> = icmp sgt i8 -43, %x
  IC: ADD: %cmp = icmp sgt i8 -43, %x
  IC: ERASE %1 = icmp sgt i8 %notx, 42
  IC: ADD: %notx = xor i8 %x, -1
  IC: DCE: %notx = xor i8 %x, -1
  IC: ERASE %notx = xor i8 %x, -1
  IC: Visiting: %cmp = icmp sgt i8 -43, %x
  IC: Mod = %cmp = icmp sgt i8 -43, %x
      New = %cmp = icmp slt i8 %x, -43
  IC: ADD: %cmp = icmp slt i8 %x, -43
  IC: Visiting: %cmp = icmp slt i8 %x, -43
  IC: Visiting: ret i1 %cmp

  If we create the swapped ICmp directly, we go faster:

  INSTCOMBINE ITERATION #1 on cmpnot
  IC: ADDING: 3 instrs to worklist
  IC: Visiting: %notx = xor i8 %x, -1
  IC: Visiting: %cmp = icmp sgt i8 %notx, 42
  IC: Old = %cmp = icmp sgt i8 %notx, 42
      New = <badref> = icmp slt i8 %x, -43
  IC: ADD: %cmp = icmp slt i8 %x, -43
  IC: ERASE %1 = icmp sgt i8 %notx, 42
  IC: ADD: %notx = xor i8 %x, -1
  IC: DCE: %notx = xor i8 %x, -1
  IC: ERASE %notx = xor i8 %x, -1
  IC: Visiting: %cmp = icmp slt i8 %x, -43
  IC: Visiting: ret i1 %cmp

  llvm-svn: 304558
* [IR] Add additional addParamAttr/removeParamAttr to AttributeList API (Reid Kleckner, 2017-05-31; 1 file, -5/+5)

  Summary: Fairly straightforward patch to fill in some of the holes in the
  attributes API with respect to accessing parameter/argument attributes. The
  patch aims to step further towards encapsulating the idx+FirstArgIndex
  pattern to access these attributes to within the AttributeList.

  Patch by Daniel Neilson!

  Reviewers: rnk, chandlerc, pete, javed.absar, reames
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D33355
  llvm-svn: 304329
* [InstCombine] Pass the DominatorTree, AssumptionCache, and context instruction to a few calls to isKnownPositive, isKnownNegative, and isKnownNonZero (Craig Topper, 2017-05-26; 3 files, -4/+7)

  Every other place in InstCombine that uses these ValueTracking methods
  already passes this information. This makes the remaining sites consistent.

  Differential Revision: https://reviews.llvm.org/D33567
  llvm-svn: 304018
* [InstCombine] Add an InstCombine-specific wrapper around isKnownToBeAPowerOfTwo to shorten code. NFC (Craig Topper, 2017-05-25; 4 files, -14/+14)

  We have wrappers for several other ValueTracking methods that take care of
  passing all of the analysis and assumption cache parameters. This extends
  that to isKnownToBeAPowerOfTwo.

  llvm-svn: 303924
* [InstCombine] Teach isAllocSiteRemovable to look through addrspacecasts (Artur Pilipenko, 2017-05-25; 1 file, -1/+3)

  Reviewed By: reames
  Differential Revision: https://reviews.llvm.org/D28565
  llvm-svn: 303870
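  A sketch of the situation this enables (a constructed example; the malloc
  call is assumed):

  %p = call i8* @malloc(i64 16)
  %q = addrspacecast i8* %p to i8 addrspace(1)*
  ; if the only uses of the allocation flow through %q and are themselves
  ; removable, the addrspacecast no longer blocks deleting the whole site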
* [InstCombine] make icmp-mul fold more efficient (Sanjay Patel, 2017-05-25; 1 file, -5/+7)

  There's probably a lot more like this (see also comments in D33338 about
  responsibility), but I suspect we don't usually get a visible manifestation.

  Given the recent interest in improving InstCombine efficiency, another
  potential micro-opt that could be repeated several times in this function:
  morph the existing icmp pred/operands instead of creating a new instruction.

  llvm-svn: 303860
* [InstCombine] use m_APInt to allow icmp-mul-mul vector fold (Sanjay Patel, 2017-05-24; 1 file, -11/+12)

  The swapped operands in the first test are a manifestation of an
  inefficiency for vectors that doesn't exist for scalars, because the
  IRBuilder checks for an all-ones mask for scalars but not for vectors.

  llvm-svn: 303818
* [InstCombine] Merge together the SimplifyDemandedUseBits implementations for ZExt and Trunc. NFC (Craig Topper, 2017-05-24; 1 file, -21/+10)

  While there, avoid resizing the DemandedMask twice. Make a copy into a
  separate variable instead. This potentially removes an allocation on large
  bit widths.

  With the use of the zextOrTrunc methods on APInt and KnownBits, these can be
  made almost source-identical. The only difference is the zeroing of the
  upper bits for ZExt. This is similar to how it's done in computeKnownBits in
  ValueTracking.

  llvm-svn: 303791
* [InstCombine] Use fewer bitwise operations to handle Instruction::SExt in SimplifyDemandedUseBits, and other improvements (Craig Topper, 2017-05-24; 1 file, -19/+14)

  The current code created a NewBits mask and used it as a mask several times.
  One of those uses was just before a call to trunc, making it unnecessary; a
  call to getActiveBits can get us the same information for that case. We also
  ORed with this mask later, when we should have just sign-extended the known
  bits.

  We also called trunc on the guaranteed-to-be-zero KnownZeros/Ones masks
  entering this code. Creating appropriately sized temporary APInts is
  probably better.

  Differential Revision: https://reviews.llvm.org/D32098
  llvm-svn: 303779
* [ValueTracking] Convert most of the calls to computeKnownBits to use the version that returns the KnownBits object (Craig Topper, 2017-05-24; 5 files, -36/+16)

  This continues the changes started when computeSignBit was replaced with
  this new version of computeKnownBits.

  Differential Revision: https://reviews.llvm.org/D33431
  llvm-svn: 303773
* [InstCombine] allow icmp-xor folds for vectors (PR33138) (Sanjay Patel, 2017-05-23; 1 file, -5/+9)

  This fixes the first part of: https://bugs.llvm.org/show_bug.cgi?id=33138

  More work is needed for the bitcasted variant.

  llvm-svn: 303660
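  One plausible splat-vector instance of an icmp-xor fold of this family (an
  assumed example, not from the commit: xor by the sign-bit mask converts an
  unsigned predicate into a signed one):

  %xor = xor <2 x i8> %x, <i8 -128, i8 -128>
  %cmp = icmp ult <2 x i8> %xor, <i8 10, i8 10>
    =>
  %cmp = icmp slt <2 x i8> %x, <i8 -118, i8 -118>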
* [KnownBits] Use !hasConflict() in asserts in place of Zero & One == 0 or similar. NFC (Craig Topper, 2017-05-23; 1 file, -16/+16)

  llvm-svn: 303614
* [InstCombine] Clean up the interface for overflow checks (Craig Topper, 2017-05-22; 4 files, -39/+50)

  Summary: Fix naming conventions and const correctness. This completes the
  changes made in rL303029.

  Patch by Yoav Ben-Shalom.

  Reviewers: craig.topper
  Reviewed By: craig.topper
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D33377
  llvm-svn: 303529
* [KnownBits] Use isNegative/isNonNegative to shorten some code. NFC (Craig Topper, 2017-05-22; 1 file, -2/+2)

  llvm-svn: 303522
* [InstCombine] Take into account the size in sext->lshr->trunc patterns (Davide Italiano, 2017-05-21; 1 file, -6/+13)

  Otherwise we end up miscompiling, transforming:

  define i8 @tinky() {
    %sext = sext i1 1 to i16
    %hibit = lshr i16 %sext, 15
    %tr = trunc i16 %hibit to i8
    ret i8 %tr
  }

  into:

  %sext = sext i1 1 to i8
  ret i8 %sext

  The first gets folded to ret i8 1, while the second gets folded to
  ret i8 -1. Eventually we should get rid of this transform entirely, but for
  now, this at least fixes a known correctness bug.

  Differential Revision: https://reviews.llvm.org/D33338
  llvm-svn: 303513
* [InstCombine] add helper to foldXorOfICmps(); NFCI (Sanjay Patel, 2017-05-18; 2 files, -41/+48)

  Also, fix the old-style capitalization of the related functions and move
  them to the 'private' section of the class, since they are just helpers of
  the visit* functions.

  As shown in the post-commit comments for D32143, we are missing folds for
  xor-of-icmps.

  llvm-svn: 303381
* [InstCombine] handle icmp i1 X, C early to avoid creating an unknown pattern (Sanjay Patel, 2017-05-17; 1 file, -0/+23)

  The missing optimization for xor-of-icmps still needs to be added, but by
  being more efficient (not generating unnecessary logic ops with constants)
  we avoid the bug.

  See discussion in post-commit comments: https://reviews.llvm.org/D32143

  llvm-svn: 303312
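  For flavor, a constructed sketch (not from the patch) of i1 compares that
  can be resolved immediately instead of being funneled through the generic
  logic-op paths:

  %r1 = icmp eq i1 %x, true     ; --> %x
  %r2 = icmp ne i1 %x, true     ; --> xor i1 %x, true
  %r3 = icmp ugt i1 %x, true    ; --> false, nothing is ugt the i1 maximum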
* [InstCombine] move icmp bool canonicalizations to helper; NFC (Sanjay Patel, 2017-05-17; 1 file, -43/+54)

  As noted in the post-commit comments in D32143, we should be catching the
  constant operand cases sooner to be more efficient and less likely to expose
  a missing fold.

  llvm-svn: 303309
* [InstCombine] add isCanonicalPredicate() helper function and use it; NFCI (Sanjay Patel, 2017-05-17; 2 files, -31/+32)

  There should be a slight efficiency improvement from handling icmp/fcmp with
  one matcher and reducing duplicated code.

  The larger motivation is that there are questions about how predicate
  canonicalization is handled, and the refactoring should make it easier if we
  want to change any of that behavior:

  1. As noted in the code comment, we've chosen 3 of the 16 FCMP preds as not
     canonical. Why those 3? It goes back to rL32751 from what I can tell, but
     I'm not sure if there's a justification for that rule.
  2. We currently do not canonicalize integer select conditions. Should we use
     the same rule that applies to branches for selects?
  3. We currently do canonicalize some FP select conditions, and those rules
     would conflict with the rule shown here. Should one or both be changed?

  No-functional-change-intended, but adding tests anyway because there's no
  coverage for most of the predicates.

  Differential Revision: https://reviews.llvm.org/D33247
  llvm-svn: 303261
* In debug builds, a non-trivial amount of time is spent in InstCombine processing @llvm.dbg.* calls in visitCallInst(). They can be safely ignored. (Dmitry Mikulin, 2017-05-16; 1 file, -1/+4)

  llvm-svn: 303202
* [InstCombine] restrict icmp fold with 2 sdiv exact operands (PR32949) (Sanjay Patel, 2017-05-15; 1 file, -2/+9)

  This is the InstCombine counterpart to D32954. I added some comments about
  the code duplication in rL302436.

  Alive-based verification: http://rise4fun.com/Alive/dPw

  This is a 2nd fix for the problem reported in:
  https://bugs.llvm.org/show_bug.cgi?id=32949

  Differential Revision: https://reviews.llvm.org/D32970
  llvm-svn: 303105
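  The underlying fold being restricted looks roughly like this (a constructed
  sketch, with an assumed positive divisor of 4):

  %d1 = sdiv exact i32 %x, 4
  %d2 = sdiv exact i32 %y, 4
  %c = icmp slt i32 %d1, %d2
    =>
  %c = icmp slt i32 %x, %y

  With a negative divisor, exact division reverses the relative order of the
  operands, so the same replacement would be wrong; hence the restriction.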
* [InstCombine] use m_OneUse to reduce code; NFCI (Sanjay Patel, 2017-05-15; 1 file, -6/+7)

  llvm-svn: 303090
* [ValueTracking] Replace all uses of ComputeSignBit with computeKnownBits. (Craig Topper, 2017-05-15; 6 files, -26/+17)

  This patch finishes off the conversion of ComputeSignBit to
  computeKnownBits.

  Differential Revision: https://reviews.llvm.org/D33166
  llvm-svn: 303035
* [InstCombine] Merge duplicate functionality between InstCombine and ValueTracking (Craig Topper, 2017-05-15; 3 files, -104/+26)

  Summary: Merge the overflow computation for signed add, which appeared in
  both InstCombine and ValueTracking. As part of the merge, clean up the
  interface for overflow checks in InstCombine.

  Patch by Yoav Ben-Shalom.

  Reviewers: craig.topper, majnemer
  Reviewed By: craig.topper
  Subscribers: takuto.ikuta, llvm-commits
  Differential Revision: https://reviews.llvm.org/D32946
  llvm-svn: 303029
* [InstCombine] Remove 'return' of a called function that also returned void. NFC (Craig Topper, 2017-05-15; 1 file, -3/+2)

  llvm-svn: 303028
* [InstCombine] Prevent InstCombine from triggering an extra iteration if something changed in the initial Worklist creation (Craig Topper, 2017-05-13; 1 file, -5/+4)

  Summary: If the worklist build causes an IR change, that change flag
  currently factors into the flag for running another iteration of the
  iteration loop. But only changes during processing should trigger another
  loop. This patch captures the worklist-creation change flag into the
  outside-the-loop flag currently used for DbgDeclares and only sends that
  flag up to the caller. Rerunning the loop now depends only on IC.run().

  This uses the debug output of InstCombine to determine whether one or two
  iterations run. I couldn't think of a better way to detect it, since the
  second spurious iteration shouldn't make any visible changes; it is just
  wasted computation.

  I can do a pre-commit of the test case with the CHECK-NOT as a CHECK if this
  is an ok way to check this.

  This is a subset of D31678, as I'm still not sure how to verify the analysis
  behavior for that.

  Reviewers: davide, majnemer, spatel, chandlerc
  Reviewed By: davide
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D32453
  llvm-svn: 302982
* [KnownBits] Add bit counting methods to KnownBits struct and use them where possible (Craig Topper, 2017-05-12; 3 files, -7/+7)

  This patch adds min/max population count and leading/trailing zero/one bit
  counting methods. The min methods return answers based on bits that are
  known, without considering unknown bits. The max methods give answers taking
  into account the largest count that unknown bits could give.

  Differential Revision: https://reviews.llvm.org/D32931
  llvm-svn: 302925
* [InstCombine] remove fold that swaps xor/or with constants; NFCI (Sanjay Patel, 2017-05-10; 1 file, -12/+0)

  // (X ^ C1) | C2 --> (X | C2) ^ (C1&~C2)

  This canonicalization was added at https://reviews.llvm.org/rL7264. By
  moving xors out/down, we can more easily combine constants. I'm adding tests
  that do not change with this patch, so we can verify that those kinds of
  transforms are still happening.

  This is no-functional-change-intended because there's a later fold:

  // (X^C)|Y -> (X|Y)^C iff Y&C == 0

  ...and demanded-bits appears to guarantee that any fold that would have hit
  the fold we're removing here would be caught by that 2nd fold.

  Similar reasoning was used in https://reviews.llvm.org/rL299384.

  The larger motivation for removing this code is that it could interfere with
  the fix for PR32706: https://bugs.llvm.org/show_bug.cgi?id=32706

  I.e., we're not checking if the 'xor' is actually a 'not', so we could
  reverse a 'not' optimization and cause an infinite loop by altering an
  'xor X, -1'.

  Differential Revision: https://reviews.llvm.org/D33050
  llvm-svn: 302733
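  A constructed instance of the later fold that subsumes the removed one
  (example constants assumed; note 3 & 12 == 0, so the masks are disjoint):

  %x1 = xor i8 %x, 12
  %r = or i8 %x1, 3
    =>
  %o = or i8 %x, 3
  %r = xor i8 %o, 12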
* [InstCombine] add (ashr (shl i32 X, 31), 31), 1 --> and (not X), 1 (Sanjay Patel, 2017-05-10; 1 file, -0/+10)

  This is another step towards favoring 'not' ops over random 'xor' in IR:
  https://bugs.llvm.org/show_bug.cgi?id=32706

  This transformation may have occurred in longer IR sequences using
  computeKnownBits, but that could be much more expensive to calculate.

  As the scalar result shows, we do not currently favor 'not' in all cases.
  The 'not' created by the transform is transformed again (unnecessarily).
  Vectors don't have this problem because vectors are (wrongly) excluded from
  several other combines.

  llvm-svn: 302659
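  Spelled out in IR (a minimal sketch of the transform named in the subject):

  %s = shl i32 %x, 31
  %a = ashr i32 %s, 31
  %r = add i32 %a, 1
    =>
  %n = xor i32 %x, -1
  %r = and i32 %n, 1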
* [InstCombine] add helper function for add X, C folds; NFCI (Sanjay Patel, 2017-05-10; 1 file, -34/+45)

  llvm-svn: 302605
* [InstCombine] clean up matchDeMorgansLaws(); NFCI (Sanjay Patel, 2017-05-09; 1 file, -32/+13)

  The motivation for getting rid of dyn_castNotVal is to allow fixing
  https://bugs.llvm.org/show_bug.cgi?id=32706

  So this was supposed to be functional-change-intended for the case of
  inverting constants and applying DeMorgan. However, I can't find any cases
  where that pattern will actually get to matchDeMorgansLaws() because we have
  other folds in visitAnd/visitOr that do the same thing. So this ends up just
  being a clean-up patch with a slight efficiency improvement, but
  no-functional-change-intended.

  llvm-svn: 302581
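  For reference, the canonical DeMorgan pattern involved (a generic sketch,
  not code from the patch):

  %na = xor i8 %a, -1
  %nb = xor i8 %b, -1
  %r = and i8 %na, %nb
    =>
  %or = or i8 %a, %b
  %r = xor i8 %or, -1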
* [InstCombineCasts] Fix checks in sext->lshr->trunc pattern. (Sanjay Patel, 2017-05-09; 1 file, -6/+14)

  The comment says to avoid the case where zero bits are shifted into the
  truncated value, but the code checks that the shift is smaller than the
  truncated value instead of the number of bits added by the sign extension.
  Fixing this allows a shift by more than the value size to be introduced,
  which is undefined behavior, so the shift is capped at the value size minus
  one, which has the expected behavior of filling the value with the sign bit.

  Patch by Jacob Young!

  Differential Revision: https://reviews.llvm.org/D32285
  llvm-svn: 302548
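  A constructed example of the capped-shift behavior (values assumed): every
  bit that survives the truncate is a copy of the sign bit, so the shift
  amount is clamped to the value size minus one.

  %s = sext i8 %x to i32
  %h = lshr i32 %s, 20
  %t = trunc i32 %h to i8
    =>
  %t = ashr i8 %x, 7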