path: root/llvm/lib/Transforms/InstCombine
Commit message (Author, Date; Files, Lines)
* Reduce the complexity of the signbit/branch test functions. (Eric Christopher, 2017-06-30; 1 file, -3/+3)
    llvm-svn: 306779
* [InstCombine] In visitXor, use m_Not on the instruction itself instead of looking for all ones in Op1. This is consistent with 3 other not checks before this one. NFCI (Craig Topper, 2017-06-29; 1 file, -3/+2)
    llvm-svn: 306617
* [InstCombine] Retain TBAA when narrowing memory accesses (Keno Fischer, 2017-06-28; 2 files, -3/+29)
    Summary: As discussed on the mailing list, it is legal to propagate TBAA to loads/stores from/to smaller regions of a larger load tagged with TBAA. Do so for (load->extractvalue) => (gep->load) and similar foldings.
    Reviewed By: sanjoy
    Differential Revision: https://reviews.llvm.org/D31954
    llvm-svn: 306615
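    A minimal IR sketch of the narrowing described above (the struct type and TBAA tag are illustrative, not taken from the patch):
        ; before: load the whole aggregate, then extract one field
        %agg = load { i32, i32 }, { i32, i32 }* %p, !tbaa !0
        %fld = extractvalue { i32, i32 } %agg, 1
        ; after: load only the needed field, retaining the TBAA tag
        %gep = getelementptr inbounds { i32, i32 }, { i32, i32 }* %p, i32 0, i32 1
        %fld = load i32, i32* %gep, !tbaa !0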
* [InstCombine] use local variable to reduce code; NFCI (Sanjay Patel, 2017-06-28; 1 file, -18/+14)
    llvm-svn: 306560
* [InstCombine] Canonicalize clamp of float types to minmax in fast mode. (Nikolai Bozhenov, 2017-06-28; 1 file, -3/+10)
    Summary: This commit allows matchSelectPattern to recognize clamp of float arguments in the presence of FMF, the same way as already done for integers.
    This case is a little different, though. With integers, once the min/max pattern is recognized, DAGBuilder starts selecting MIN/MAX "automatically". That is not the case for float, because for them only full FMINNAN/FMINNUM/FMAXNAN/FMAXNUM ISD nodes exist and they do care about NaNs. On the other hand, some backends (e.g. X86) have only FMIN/FMAX nodes that do not care about NaNs, and the former NAN/NUM nodes are illegal, so selection does not happen. So I decided to do this kind of transformation in IR (InstCombiner) instead of complicating the logic in the backend.
    Reviewers: spatel, jmolloy, majnemer, efriedma, craig.topper
    Reviewed By: efriedma
    Subscribers: hiraditya, javed.absar, n.bozhenov, llvm-commits
    Patch by Andrei Elovikov <andrei.elovikov@intel.com>
    Differential Revision: https://reviews.llvm.org/D33186
    llvm-svn: 306525
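    One shape the recognized pattern can take (a sketch with illustrative bounds; the exact patterns matched live in matchSelectPattern):
        ; clamp(%x) = max(min(%x, 42.0), 1.0), written as fcmp+select with FMF
        %cmp1 = fcmp fast olt float %x, 42.0
        %min = select i1 %cmp1, float %x, float 42.0
        %cmp2 = fcmp fast ogt float %min, 1.0
        %r = select i1 %cmp2, float %min, float 1.0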
* [InstCombine] Propagate nsw flag when turning mul by pow2 into shift when the constant is a vector splat or the scalar bit width is larger than 64 bits (Craig Topper, 2017-06-27; 1 file, -2/+2)
    The check to see if we can propagate the nsw flag used m_ConstantInt(uint64_t*&), which doesn't work with splat vectors and has a restriction that the bit width of the ConstantInt must be 64 bits or less. This patch changes it to use m_APInt to remove both of these issues.
    Differential Revision: https://reviews.llvm.org/D34699
    llvm-svn: 306457
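    A sketch of a splat-vector case this now covers (illustrative width and constant):
        ; before: multiply by a power-of-2 splat, with nsw
        %r = mul nsw <2 x i32> %x, <i32 8, i32 8>
        ; after: the shift keeps the nsw flag for the splat case too
        %r = shl nsw <2 x i32> %x, <i32 3, i32 3>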
* [InstCombine] canonicalize icmp predicate feeding select (Sanjay Patel, 2017-06-27; 1 file, -0/+17)
    This canonicalization was suggested in D33172 as a way to make InstCombine behavior more uniform. We have this transform for icmp+br, so unless there's some reason that icmp+select should be treated differently, we should do the same thing here.
    The benefit comes from increasing the chances of creating identical instructions. This is shown in the tests in logical-select.ll (PR32791). InstCombine doesn't fold those directly, but EarlyCSE can simplify the identical cmps, and then InstCombine can fold the selects together.
    The possible regression for the tests in select.ll raises questions about poison/undef: http://lists.llvm.org/pipermail/llvm-dev/2017-May/113261.html ...but that transform is just as likely to be triggered by this canonicalization as it is to be missed, so we're just pointing out a commutation deficiency in the pattern matching: https://reviews.llvm.org/rL228409
    Differential Revision: https://reviews.llvm.org/D34242
    llvm-svn: 306435
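    A sketch of the canonicalization, assuming it mirrors the existing icmp+br transform:
        ; before: 'sge' is a non-canonical predicate
        %cmp = icmp sge i32 %x, %y
        %sel = select i1 %cmp, i32 %a, i32 %b
        ; after: predicate inverted to the canonical 'slt', select arms swapped
        %cmp = icmp slt i32 %x, %y
        %sel = select i1 %cmp, i32 %b, i32 %a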
* [InstCombine] Factor the logic for propagating !nonnull and !range metadata out of InstCombine and into helpers. (Chandler Carruth, 2017-06-26; 1 file, -26/+2)
    NFC, this just exposes the logic used by InstCombine when propagating metadata from one load instruction to another. The plan is to use this in SROA to address PR32902.
    If anyone has better ideas about how to factor this or name variables, I'm all ears, but this seemed like a pretty good start and lets us make progress on the PR.
    This is based on a patch by Ariel Ben-Yehuda (D34285).
    llvm-svn: 306267
* [InstCombine] add (sext i1 X), 1 --> zext (not X) (Sanjay Patel, 2017-06-25; 1 file, -9/+18)
    http://rise4fun.com/Alive/i8Q
    A narrow bitwise logic op is obviously better than math for value tracking, and zext is better than sext. Typically, the 'not' will be folded into an icmp predicate.
    The IR difference would even survive through codegen for x86, so we would see worse code:
    https://godbolt.org/g/C14HMF
    one_or_zero(int, int): # @one_or_zero(int, int)
        xorl %eax, %eax
        cmpl %esi, %edi
        setle %al
        retq
    one_or_zero_alt(int, int): # @one_or_zero_alt(int, int)
        xorl %ecx, %ecx
        cmpl %esi, %edi
        setg %cl
        movl $1, %eax
        subl %ecx, %eax
        retq
    llvm-svn: 306243
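    The fold in IR form (a sketch with an illustrative i32 result type):
        ; before
        %s = sext i1 %x to i32
        %r = add i32 %s, 1
        ; after: not + zext instead of sext + add
        %n = xor i1 %x, true
        %r = zext i1 %n to i32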
* [ValueTracking][InstCombine] Use m_Shr instead of m_CombineOr(m_LShr, m_AShr). NFC (Craig Topper, 2017-06-24; 1 file, -2/+1)
    llvm-svn: 306205
* [Analysis][Transforms] Use commutable matchers instead of m_CombineOr in a few places. NFC (Craig Topper, 2017-06-24; 1 file, -2/+1)
    llvm-svn: 306204
* [InstCombine] Don't replace allocas with smaller globals (Vitaly Buka, 2017-06-24; 1 file, -1/+14)
    Summary: InstCombine replaces large allocas with small global constants, causing buffer overflows on valid code; see PR33372. This fix permits the optimization only if the global is dereferenceable for the alloca's size.
    Fixes PR33372
    Reviewers: eugenis, majnemer, chandlerc
    Subscribers: llvm-commits
    Differential Revision: https://reviews.llvm.org/D34311
    llvm-svn: 306194
* [InstCombine] Recognize and simplify three way comparison idioms (Anna Thomas, 2017-06-23; 2 files, -4/+103)
    Summary: Many languages have a three way comparison idiom where comparing two values produces not a boolean, but a tri-state value. Typical values (e.g. as used in the lcmp/fcmp bytecodes from Java) are -1 for less than, 0 for equality, and +1 for greater than.
    We actually do a great job already of converting three way comparisons into binary comparisons when the result produced has a single use. Unfortunately, such values can have more than one use, and in that case our existing optimizations break down.
    The patch adds a peephole which converts a three-way compare + test idiom into a binary comparison on the original inputs. It focuses on replacing the test on the result of the three way compare and does nothing about removing the three way compare itself. That's left to other optimizations (which do actually kick in commonly).
    We currently recognize one idiom on signed integer compare. In the future, we plan to recognize and simplify other comparison idioms on other signed/unsigned datatypes such as floats, vectors etc.
    This is a resurrection of Philip Reames' original patch: https://reviews.llvm.org/D19452
    Reviewers: majnemer, apilipenko, reames, sanjoy, mkazantsev
    Reviewed by: mkazantsev
    Subscribers: llvm-commits
    Differential Revision: https://reviews.llvm.org/D34278
    llvm-svn: 306100
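    One plausible shape of the idiom (a sketch; the exact pattern matched by the patch may differ):
        ; three-way compare producing -1/0/+1, whose result has multiple uses
        %lt = icmp slt i32 %a, %b
        %gt = icmp sgt i32 %a, %b
        %sel = select i1 %gt, i32 1, i32 0
        %cmp3 = select i1 %lt, i32 -1, i32 %sel
        %tst = icmp sgt i32 %cmp3, 0
        ; the peephole can replace the test with a compare of the inputs:
        %tst = icmp sgt i32 %a, %b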
* [InstCombine] Teach foldSelectICmpAndOr to recognize (select (icmp slt (trunc (X)), 0), Y, (or Y, C2)) (Craig Topper, 2017-06-22; 1 file, -11/+38)
    Summary: InstCombine likes to turn (icmp eq (and X, C1), 0) into (icmp slt (trunc (X)), 0) sometimes. This breaks foldSelectICmpAndOr's ability to recognize (select (icmp eq (and X, C1), 0), Y, (or Y, C2)) -> (or (shl (and X, C1), C3), Y). This patch tries to recover this.
    I had to flip around some of the early-out checks so that I could create a new And instruction during the compare processing without it possibly never getting used.
    Reviewers: spatel, majnemer, davide
    Reviewed By: spatel
    Subscribers: llvm-commits
    Differential Revision: https://reviews.llvm.org/D34184
    llvm-svn: 306029
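    A sketch of the re-disguised input pattern that is now recognized (illustrative constants):
        ; %cmp tests bit 7 of %x, i.e. (and %x, 128) != 0, in trunc form
        %t = trunc i32 %x to i8
        %cmp = icmp slt i8 %t, 0
        %or = or i32 %y, 2
        %sel = select i1 %cmp, i32 %y, i32 %or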
* [InstCombine] Add one use checks to or/and->xnor folding (Craig Topper, 2017-06-22; 1 file, -6/+8)
    If the components of the and/or had multiple uses, this transform created an additional instruction. This patch makes sure we remove one of the components.
    Differential Revision: https://reviews.llvm.org/D34498
    llvm-svn: 306027
* [InstCombine] reverse bitcast + bitwise-logic canonicalization (PR33138) (Sanjay Patel, 2017-06-22; 2 files, -16/+22)
    There are 2 parts to this patch made simultaneously to avoid a regression.
    We're reversing the canonicalization that moves bitwise vector ops before bitcasts. We're moving bitwise vector ops *after* bitcasts instead. That's the 1st and 3rd hunks of the patch. The motivation is that there's only one fold that currently depends on the existing canonicalization (see next), but there are many folds that would automatically benefit from the new canonicalization. PR33138 ( https://bugs.llvm.org/show_bug.cgi?id=33138 ) shows why/how we have these patterns in IR.
    There's an or(and,andn) pattern that requires an adjustment in order to continue matching to 'select' because the bitcast changes position. This match is unfortunately complicated because it requires 4 logic ops with optional bitcast and sext ops.
    Test diffs (an IR sketch of the basic direction change follows this list):
    1. The bitcast.ll and bitcast-bigendian.ll changes show the most basic difference - bitcast comes before logic.
    2. There are also tests with no diffs in bitcast.ll that verify that we're still doing folds that were enabled by the previous canonicalization.
    3. icmp-xor-signbit.ll shows the payoff. We don't need to adjust existing icmp patterns to look through bitcasts.
    4. logical-select.ll contains several tests for the or(and,andn) --> select fold to verify that we are still handling those cases. The lone diff shows the movement of the bitcast from the new canonicalization rule.
    Differential Revision: https://reviews.llvm.org/D33517
    llvm-svn: 306011
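    A sketch of the direction change (illustrative types and constant; see the test diffs for the real cases):
        ; old canonical form: logic first, bitcast last
        %op = xor <4 x i8> %x, <i8 -128, i8 -128, i8 -128, i8 -128>
        %bc = bitcast <4 x i8> %op to i32
        ; new canonical form: bitcast first, logic last
        %bc = bitcast <4 x i8> %x to i32
        %op = xor i32 %bc, -2139062144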
* [InstCombine] add peekThroughBitcast() helper; NFC (Sanjay Patel, 2017-06-22; 2 files, -6/+14)
    This is an NFC portion of D33517. We have similar helpers in the backend.
    llvm-svn: 306008
* [InstCombine] Cleanup using commutable matchers. Make a couple helper methods standalone static functions. Put 'if' around variable declaration instead of after. NFC (Craig Topper, 2017-06-21; 2 files, -25/+19)
    llvm-svn: 305941
* [InstCombine] Add range metadata to cttz/ctlz/ctpop intrinsic calls based on known bits (Craig Topper, 2017-06-21; 1 file, -0/+46)
    Summary: I noticed that passing known bits across these intrinsics isn't great at capturing the information we really know. Turning known bits of the input into known bits of a count output isn't able to convey a lot of what we really know. This patch adds range metadata to these intrinsics based on the known bits.
    Currently the patch punts if we already have range metadata present.
    Reviewers: spatel, RKSimon, davide, majnemer
    Reviewed By: RKSimon
    Subscribers: sanjoy, hfinkel, llvm-commits
    Differential Revision: https://reviews.llvm.org/D32582
    llvm-svn: 305927
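    A sketch of the idea (the facts about %x and the resulting range are illustrative, not a test from the patch):
        ; if bit 0 of %x is known clear and some bit below 16 is known set,
        ; cttz's result must lie in [1, 16), which range metadata can express
        ; but per-bit known bits mostly cannot:
        %c = call i32 @llvm.cttz.i32(i32 %x, i1 true), !range !0
        !0 = !{i32 1, i32 16}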
* [InstCombine] Don't let folding (select (icmp eq (and X, C1), 0), Y, (or Y, C2)) create more instructions than it removes (Craig Topper, 2017-06-21; 1 file, -4/+16)
    Summary: Previously this folding had no checks to see if it was going to result in fewer instructions. This was pointed out during the review of D34184.
    This patch adds code to count how many instructions it's going to create vs. how many it's going to remove, so we can make a proper decision.
    Reviewers: spatel, majnemer
    Reviewed By: spatel
    Subscribers: llvm-commits
    Differential Revision: https://reviews.llvm.org/D34437
    llvm-svn: 305926
* [InstCombine] fix code/test comments for r305792; NFC (Sanjay Patel, 2017-06-20; 1 file, -1/+1)
    These diffs were in the last version of the patch in D33342, but I accidentally committed the previous rev.
    llvm-svn: 305793
* [InstCombine] try to canonicalize xor-of-icmps to and-of-icmps (Sanjay Patel, 2017-06-20; 1 file, -0/+24)
    We have a large portfolio of folds for and-of-icmps and or-of-icmps in InstSimplify and InstCombine, but hardly anything for xor-of-icmps. Rather than trying to rethink and translate all of those folds, we can use the truth table definition of xor:
    X ^ Y --> (X | Y) & !(X & Y)
    ...to see if we can convert the xor to and/or and then use the existing folds.
    http://rise4fun.com/Alive/J9v
    Differential Revision: https://reviews.llvm.org/D33342
    llvm-svn: 305792
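    The truth-table rewrite in IR form (a sketch; whether it sticks depends on the and/or-of-icmps folds firing afterward):
        ; %r = xor i1 %a, %b becomes:
        %or = or i1 %a, %b
        %and = and i1 %a, %b
        %notand = xor i1 %and, true
        %r = and i1 %or, %notand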
* [InstCombine] Make sure AddReachableCodeToWorklist sets MadeIRChange (Bjorn Pettersson, 2017-06-19; 1 file, -0/+2)
    Summary: Some optimizations in AddReachableCodeToWorklist did not update the MadeIRChange state. This could happen both when removing trivially dead instructions (DCE) and at constant folds.
    It is essential that changes to the IR are reported correctly, since for example InstCombinePass::run() will indicate that all analyses are preserved otherwise. And the CGPassManager determines if the CallGraph is up-to-date based on status from InstructionCombiningPass::runOnFunction().
    The new test case early_dce_clobbers_callgraph.ll is a reproducer for some asserts that started to trigger after changes in the inliner in r305245. With this patch the test case passes again.
    Reviewers: sanjoy, craig.topper, dblaikie
    Reviewed By: craig.topper
    Subscribers: llvm-commits
    Differential Revision: https://reviews.llvm.org/D34346
    llvm-svn: 305725
* [InstCombine] Cleanup some duplicated one use checks (Craig Topper, 2017-06-19; 1 file, -10/+4)
    Summary: These 4 patterns have the same one-use check repeated twice for each: once without a cast and once with. But the cast has no effect on what method is called.
    For the OR case I believe it is always profitable regardless of the number of uses, since we'll never increase the instruction count. For the AND case I believe it is profitable if the pair of xors has one use, such that we'll get rid of it completely, or if the C value is something freely invertible, in which case the not doesn't cost anything.
    Reviewers: spatel, majnemer
    Reviewed By: spatel
    Subscribers: llvm-commits
    Differential Revision: https://reviews.llvm.org/D34308
    llvm-svn: 305705
* [InstCombine] Set correct insertion point for selects generated while folding phis (Anna Thomas, 2017-06-16; 1 file, -1/+11)
    Summary: When we fold vector constants that are operands of phis that feed into a select, we need to set the correct insertion point for the *new* selects that get generated. The correct insertion point is the incoming block for the phi.
    Such cases can occur with patch r298845, which fixed folding of vector constants, but the new selects could be inserted incorrectly (as the added test case shows).
    Reviewers: majnemer, spatel, sanjoy
    Reviewed by: spatel
    Subscribers: llvm-commits
    Differential Revision: https://reviews.llvm.org/D34162
    llvm-svn: 305591
* [Atomics] Rename and change prototype for atomic memcpy intrinsic (Daniel Neilson, 2017-06-16; 2 files, -59/+65)
    Summary: Background: http://lists.llvm.org/pipermail/llvm-dev/2017-May/112779.html
    This change is to alter the prototype for the atomic memcpy intrinsic. The prototype itself is being changed to more closely resemble the semantics and parameters of the llvm.memcpy intrinsic -- to ease later combination of the llvm.memcpy and atomic memcpy intrinsics. Furthermore, the name of the atomic memcpy intrinsic is being changed to make it clear that it is not a generic atomic memcpy, but specifically a memcpy that is unordered-atomic.
    Reviewers: reames, sanjoy, efriedma
    Reviewed By: reames
    Subscribers: mzolotukhin, anna, llvm-commits, skatkov
    Differential Revision: https://reviews.llvm.org/D33240
    llvm-svn: 305558
* [InstCombine] Fold (!iszero(A & K1) & !iszero(A & K2)) -> (A & (K1 | K2)) == (K1 | K2) if K1 and K2 are a 1-bit mask (Craig Topper, 2017-06-16; 2 files, -32/+61)
    Summary: This is the demorganed version of the case we already handle for the OR of iszero.
    Reviewers: spatel
    Reviewed By: spatel
    Subscribers: llvm-commits
    Differential Revision: https://reviews.llvm.org/D34244
    llvm-svn: 305548
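    A sketch with concrete single-bit masks (K1 = 1, K2 = 4; illustrative values):
        ; before: each masked bit tested separately
        %a1 = and i32 %a, 1
        %c1 = icmp ne i32 %a1, 0
        %a2 = and i32 %a, 4
        %c2 = icmp ne i32 %a2, 0
        %r = and i1 %c1, %c2
        ; after: one mask, one compare
        %m = and i32 %a, 5
        %r = icmp eq i32 %m, 5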
* [InstCombine] Add two FIXMEs for bad single use checks. NFC (Craig Topper, 2017-06-15; 1 file, -0/+4)
    llvm-svn: 305510
* [InstCombine] Make the context instruction parameter of foldOrOfICmps a reference to discourage passing nullptr and to remove the '&' from all of the call sites. NFC (Craig Topper, 2017-06-15; 2 files, -10/+10)
    llvm-svn: 305493
* [InstCombine] Handle (iszero(A & K1) | iszero(A & K2)) -> (A & (K1 | K2)) != (K1 | K2) when one of the Ands is commuted relative to the other (Craig Topper, 2017-06-15; 1 file, -20/+14)
    Currently we expect A to be on the same side in both Ands, but nothing guarantees that. While there, also switch to using matchers for some of the code.
    Differential Revision: https://reviews.llvm.org/D34230
    llvm-svn: 305487
* [InstCombine] lshr (sext iM X to iN), N-M --> zext (ashr X, min(N-M, M-1)) to iN (Sanjay Patel, 2017-06-12; 1 file, -4/+10)
    This is a follow-up to https://reviews.llvm.org/D33879 / https://reviews.llvm.org/rL304939 , and was discussed in https://reviews.llvm.org/D33338.
    We prefer this form because a narrower shift may be cheaper, and we can more easily fold a zext than a sext.
    http://rise4fun.com/Alive/slVe
    Name: shz
    %s = sext i8 %x to i12
    %r = lshr i12 %s, 4
    =>
    %a = ashr i8 %x, 4
    %r = zext i8 %a to i12
    llvm-svn: 305190
* [InstSimplify] Don't constant fold or DCE calls that are marked nobuiltin (Andrew Kaylor, 2017-06-09; 1 file, -2/+2)
    Differential Revision: https://reviews.llvm.org/D33737
    llvm-svn: 305132
* [InstCombine] Pass a proper context instruction to all of the calls into InstSimplify (Craig Topper, 2017-06-09; 9 files, -45/+66)
    Summary: This matches the behavior we already had for compares and makes us consistent everywhere.
    Reviewers: dberlin, hfinkel, spatel
    Reviewed By: dberlin
    Subscribers: llvm-commits
    Differential Revision: https://reviews.llvm.org/D33604
    llvm-svn: 305049
* [InstCombine] fold lshr (sext X), C1 --> zext (lshr X, C2) (Sanjay Patel, 2017-06-07; 1 file, -0/+19)
    This was discussed in D33338. We have a larger pattern-match ending in a truncate that we can reduce or remove by handling these smaller patterns first. Further motivation is that narrower shift ops are easier for value tracking and zext is better than sext.
    http://rise4fun.com/Alive/rhh
    Name: boolshift
    %sext = sext i1 %x to i8
    %r = lshr i8 %sext, 7
    =>
    %r = zext i1 %x to i8
    Name: noboolshift
    %sext = sext i3 %x to i8
    %r = lshr i8 %sext, 7
    =>
    %sh = lshr i3 %x, 2
    %r = zext i3 %sh to i8
    Differential Revision: https://reviews.llvm.org/D33879
    llvm-svn: 304939
* [InstCombine][InstSimplify] Use APInt::isNullValue/isOneValue to reduce compiled code for comparing APInts with 0 and 1. NFC (Craig Topper, 2017-06-07; 8 files, -57/+63)
    These methods are specifically optimized to only count leading zeros without an additional uint64_t compare.
    llvm-svn: 304876
* [InstCombine] Fix two asserts that were accidentally checking that an APInt pointer is non-zero instead of checking that the APInt itself is non-zero. (Craig Topper, 2017-06-07; 1 file, -2/+2)
    I believe this code used to use APInt references, which would have worked. But then they were changed to pointers to allow m_APInt to be used.
    llvm-svn: 304875
* Move Object format code to lib/BinaryFormat. (Zachary Turner, 2017-06-07; 1 file, -1/+1)
    This creates a new library called BinaryFormat that has all of the headers from llvm/Support containing structure and layout definitions for various types of binary formats like DWARF, COFF, and ELF, as well as the code for identifying a file from its magic.
    Differential Revision: https://reviews.llvm.org/D33843
    llvm-svn: 304864
* Sort the remaining #include lines in include/... and lib/.... (Chandler Carruth, 2017-06-06; 3 files, -3/+3)
    I did this a long time ago with a janky python script, but now clang-format has built-in support for this. I fed clang-format every line with a #include and let it re-sort things according to the precise LLVM rules for include ordering baked into clang-format these days.
    I've reverted a number of files where the results of sorting includes isn't healthy. Either places where we have legacy code relying on particular include ordering (where possible, I'll fix these separately) or where we have particular formatting around #include lines that I didn't want to disturb in this patch.
    This patch is *entirely* mechanical. If you get merge conflicts or anything, just ignore the changes in this patch and run clang-format over your #include lines in the files.
    Sorry for any noise here, but it is important to keep these things stable. I was seeing an increasing number of patches with irrelevant re-ordering of #include lines because clang-format was used. This patch at least isolates that churn, makes it easy to skip when resolving conflicts, and gets us to a clean baseline (again).
    llvm-svn: 304787
* [InstCombine] Fix extractelement use before def (Sven van Haastregt, 2017-06-05; 1 file, -1/+1)
    This fixes a bug that can cause extractelements with operands that haven't been defined yet to be inserted at a wrong point when optimising insertelements.
    Patch by Karl Hylen.
    Differential Revision: https://reviews.llvm.org/D33449
    llvm-svn: 304701
* [InstCombine] Add support for simplifying ctlz/cttz intrinsics based on known bits. (Craig Topper, 2017-06-03; 1 file, -5/+1)
    llvm-svn: 304669
* [InstCombine] fix icmp with not op and constant to work with splat vector constant (Sanjay Patel, 2017-06-02; 1 file, -3/+3)
    llvm-svn: 304562
* [InstCombine] improve perf by not creating a known non-canonical instruction (Sanjay Patel, 2017-06-02; 1 file, -3/+6)
    Op1 (RHS) is a constant, so putting it on the LHS makes us churn through visitICmp an extra time to canonicalize it:
    INSTCOMBINE ITERATION #1 on cmpnot
    IC: ADDING: 3 instrs to worklist
    IC: Visiting: %notx = xor i8 %x, -1
    IC: Visiting: %cmp = icmp sgt i8 %notx, 42
    IC: Old = %cmp = icmp sgt i8 %notx, 42
        New = <badref> = icmp sgt i8 -43, %x
    IC: ADD: %cmp = icmp sgt i8 -43, %x
    IC: ERASE %1 = icmp sgt i8 %notx, 42
    IC: ADD: %notx = xor i8 %x, -1
    IC: DCE: %notx = xor i8 %x, -1
    IC: ERASE %notx = xor i8 %x, -1
    IC: Visiting: %cmp = icmp sgt i8 -43, %x
    IC: Mod = %cmp = icmp sgt i8 -43, %x
        New = %cmp = icmp slt i8 %x, -43
    IC: ADD: %cmp = icmp slt i8 %x, -43
    IC: Visiting: %cmp = icmp slt i8 %x, -43
    IC: Visiting: ret i1 %cmp
    If we create the swapped ICmp directly, we go faster:
    INSTCOMBINE ITERATION #1 on cmpnot
    IC: ADDING: 3 instrs to worklist
    IC: Visiting: %notx = xor i8 %x, -1
    IC: Visiting: %cmp = icmp sgt i8 %notx, 42
    IC: Old = %cmp = icmp sgt i8 %notx, 42
        New = <badref> = icmp slt i8 %x, -43
    IC: ADD: %cmp = icmp slt i8 %x, -43
    IC: ERASE %1 = icmp sgt i8 %notx, 42
    IC: ADD: %notx = xor i8 %x, -1
    IC: DCE: %notx = xor i8 %x, -1
    IC: ERASE %notx = xor i8 %x, -1
    IC: Visiting: %cmp = icmp slt i8 %x, -43
    IC: Visiting: ret i1 %cmp
    llvm-svn: 304558
* [IR] Add additional addParamAttr/removeParamAttr to AttributeList API (Reid Kleckner, 2017-05-31; 1 file, -5/+5)
    Summary: Fairly straightforward patch to fill in some of the holes in the attributes API with respect to accessing parameter/argument attributes. The patch aims to step further towards encapsulating the idx+FirstArgIndex pattern to access these attributes to within the AttributeList.
    Patch by Daniel Neilson!
    Reviewers: rnk, chandlerc, pete, javed.absar, reames
    Subscribers: llvm-commits
    Differential Revision: https://reviews.llvm.org/D33355
    llvm-svn: 304329
* [InstCombine] Pass the DominatorTree, AssumptionCache, and context instruction to a few calls to isKnownPositive, isKnownNegative, and isKnownNonZero (Craig Topper, 2017-05-26; 3 files, -4/+7)
    Every other place in InstCombine that uses these methods in ValueTracking already passes this information. This makes the remaining sites consistent.
    Differential Revision: https://reviews.llvm.org/D33567
    llvm-svn: 304018
* [InstCombine] Add an InstCombine specific wrapper around isKnownToBeAPowerOfTwo to shorten code. NFC (Craig Topper, 2017-05-25; 4 files, -14/+14)
    We have wrappers for several other ValueTracking methods that take care of passing all of the analysis and assumption cache parameters. This extends it to isKnownToBeAPowerOfTwo.
    llvm-svn: 303924
* [InstCombine] Teach isAllocSiteRemovable to look through addrspacecasts (Artur Pilipenko, 2017-05-25; 1 file, -1/+3)
    Reviewed By: reames
    Differential Revision: https://reviews.llvm.org/D28565
    llvm-svn: 303870
* [InstCombine] make icmp-mul fold more efficient (Sanjay Patel, 2017-05-25; 1 file, -5/+7)
    There's probably a lot more like this (see also comments in D33338 about responsibility), but I suspect we don't usually get a visible manifestation.
    Given the recent interest in improving InstCombine efficiency, another potential micro-opt that could be repeated several times in this function: morph the existing icmp pred/operands instead of creating a new instruction.
    llvm-svn: 303860
* [InstCombine] use m_APInt to allow icmp-mul-mul vector fold (Sanjay Patel, 2017-05-24; 1 file, -11/+12)
    The swapped operands in the first test are a manifestation of an inefficiency for vectors that doesn't exist for scalars, because the IRBuilder checks for an all-ones mask for scalars but not vectors.
    llvm-svn: 303818
* [InstCombine] Merge together the SimplifyDemandedUseBits implementations for ZExt and Trunc. NFC (Craig Topper, 2017-05-24; 1 file, -21/+10)
    While there, avoid resizing the DemandedMask twice. Make a copy into a separate variable instead. This potentially removes an allocation on large bit widths.
    With the use of the zextOrTrunc methods on APInt and KnownBits, these can be made almost source identical. The only difference is the zeroing of the upper bits for ZExt. This is similar to how it's done in computeKnownBits in ValueTracking.
    llvm-svn: 303791
* [InstCombine] Use fewer bitwise operations to handle Instruction::SExt in SimplifyDemandedUseBits. Other improvements. (Craig Topper, 2017-05-24; 1 file, -19/+14)
    The current code created a NewBits mask and used it as a mask several times. One of those uses was just before a call to trunc, making it unnecessary. A call to getActiveBits can get us the same information for that case.
    We also ORed with this mask later, when we should have just sign extended the known bits.
    We also called trunc on the guaranteed-to-be-zero KnownZeros/Ones masks entering this code. Creating appropriately sized temporary APInts is probably better.
    Differential Revision: https://reviews.llvm.org/D32098
    llvm-svn: 303779