Commit log for llvm/lib/Transforms/InstCombine/InstCombineCompares.cpp
...
* [InstCombine] reduce more checks for power-of-2-or-zero using ctpop (Sanjay Patel, 2019-07-01, 1 file, -1/+7)

  Extends the transform from rL364341
  ...to include another (more common?) pattern that tests whether a
  value is a power-of-2 (including or excluding zero).

  llvm-svn: 364856
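  A sketch of the kind of rewrite this family of changes performs (the
  new source pattern isn't reproduced in the message, so the input form
  and types here are illustrative): a value is power-of-2-or-zero
  exactly when it has at most one bit set.

  ```
  ; (x & (x - 1)) == 0  ... power-of-2-or-zero test
  %d = add i32 %x, -1
  %a = and i32 %d, %x
  %r = icmp eq i32 %a, 0
    =>
  %t = call i32 @llvm.ctpop.i32(i32 %x)
  %r = icmp ult i32 %t, 2
  ```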
* [InstCombine] Shift amount reassociation in bittest (PR42399) (Roman Lebedev, 2019-07-01, 1 file, -0/+60)

  Summary:
  Given the pattern
    icmp eq/ne (and ((x shift Q), (y oppositeshift K))), 0
  we should move the shifts to the same hand of the 'and', i.e. rewrite it as
    icmp eq/ne (and (x shift (Q+K)), y), 0
  iff (Q+K) u< bitwidth(x).

  It might be tempting not to restrict this to situations where we know
  we'd fold the two shifts together, but I'm not sure what rules there
  should be to avoid endless combine loops. We pick the same shift that
  was originally used to shift the variable we picked to shift:
  https://rise4fun.com/Alive/6x1v

  Should fix PR42399: https://bugs.llvm.org/show_bug.cgi?id=42399

  Reviewers: spatel, nikic, RKSimon
  Reviewed By: spatel
  Subscribers: llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D63829

  llvm-svn: 364791
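  A worked instance (constants chosen for illustration): take Q=2, K=3
  on i32; Q+K = 5 u< 32, so the restriction holds.

  ```
  %t0 = shl i32 %x, 2
  %t1 = lshr i32 %y, 3
  %t2 = and i32 %t0, %t1
  %r  = icmp eq i32 %t2, 0
    =>
  %n0 = shl i32 %x, 5
  %n1 = and i32 %n0, %y
  %r  = icmp eq i32 %n1, 0
  ```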
* [InstCombine] Omit 'urem' where possible (Roman Lebedev, 2019-07-01, 1 file, -4/+20)

  This was added to the backend in D63390 / rL364286, but it makes sense
  to also handle it in the middle-end.
  https://rise4fun.com/Alive/Zsln

  llvm-svn: 364738
* [InstCombine] Simplify icmp ult/uge (shl %x, C2), C1 iff C1 is power of two -> icmp eq/ne (and %x, (lshr -C1, C2)), 0 (Huihui Zhang, 2019-06-25, 1 file, -0/+21)

  Simplify a 'shl' inequality test into an 'and' equality test. This
  pattern arises in the middle-end while simplifying bitfield accesses;
  exposed in https://reviews.llvm.org/D63505.
  https://rise4fun.com/Alive/6uz

  Reviewers: lebedev.ri, efriedma
  Reviewed By: lebedev.ri
  Subscribers: spatel, hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D63675

  llvm-svn: 364348
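  A worked instance (constants chosen for illustration): take C1=16 and
  C2=3 on i32, so lshr(-16, 3) = 0x1FFFFFFE.

  ```
  %s = shl i32 %x, 3
  %r = icmp ult i32 %s, 16
    =>
  %a = and i32 %x, 536870910   ; 0x1FFFFFFE
  %r = icmp eq i32 %a, 0
  ```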
* [InstCombine] reduce checks for power-of-2-or-zero using ctpop (Sanjay Patel, 2019-06-25, 1 file, -12/+10)

  This follows up the transform from rL363956 to use the ctpop intrinsic
  when checking for power-of-2-or-zero.

  This matches the isPowerOf2() patterns used in PR42314:
  https://bugs.llvm.org/show_bug.cgi?id=42314

  But there's at least one instcombine follow-up needed to match the
  alternate form:
    (v & (v - 1)) == 0;

  We should have all of the backend expansions handled with rL364319
  (x86-specific changes are still needed for optimal code based on
  subtarget).

  The larger patterns that exclude zero as a power-of-2 join with this
  change after rL364153 (D63660) and rL364246.

  Differential Revision: https://reviews.llvm.org/D63777

  llvm-svn: 364341
* [InstCombine] Fold icmp eq/ne (and %x, C), 0 iff (-C) is power of two -> %x u</u>= (-C) earlier (Huihui Zhang, 2019-06-25, 1 file, -11/+9)

  Summary:
  To generate simplified IR, make sure the fold
    (X & ~C) ==/!= 0 --> X u</u>= C+1
  is scheduled before the fold
    ((X << Y) & C) == 0 -> (X & (C >> Y)) == 0.
  https://rise4fun.com/Alive/7ZN

  Reviewers: lebedev.ri, efriedma, spatel, craig.topper
  Reviewed By: lebedev.ri
  Subscribers: hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D63505

  llvm-svn: 364255
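  A worked instance of the first fold (C=15, chosen for illustration;
  ~15 is -16 on i32, and C+1 = 16):

  ```
  %a = and i32 %x, -16
  %r = icmp eq i32 %a, 0
    =>
  %r = icmp ult i32 %x, 16
  ```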
* [InstCombine] fix typo in comment; NFC (Sanjay Patel, 2019-06-20, 1 file, -1/+1)

  llvm-svn: 363974
* [InstCombine] canonicalize check for power-of-2 (Sanjay Patel, 2019-06-20, 1 file, -0/+20)

  The form that compares against 0 is better because:
  1. It removes a use of the input value.
  2. It's the more standard form for this pattern:
     https://graphics.stanford.edu/~seander/bithacks.html#DetermineIfPowerOf2
  3. It results in equal or better codegen (tested with x86, AArch64,
     ARM, PowerPC, MIPS).

  This is a root cause for PR42314, but probably doesn't completely
  answer the codegen request:
  https://bugs.llvm.org/show_bug.cgi?id=42314

  Alive proof: https://rise4fun.com/Alive/9kG

    Name: is power-of-2
    %neg = sub i32 0, %x
    %a = and i32 %neg, %x
    %r = icmp eq i32 %a, %x
    =>
    %dec = add i32 %x, -1
    %a2 = and i32 %dec, %x
    %r = icmp eq i32 %a2, 0

    Name: is not power-of-2
    %neg = sub i32 0, %x
    %a = and i32 %neg, %x
    %r = icmp ne i32 %a, %x
    =>
    %dec = add i32 %x, -1
    %a2 = and i32 %dec, %x
    %r = icmp ne i32 %a2, 0

  llvm-svn: 363956
* [InstCombine] Fold icmp eq/ne (and %x, signbit), 0 -> %x s>=/s< 0 earlier (Huihui Zhang, 2019-06-19, 1 file, -10/+17)

  Summary:
  To generate simplified IR, make sure the fold
    (X & signbit) ==/!= 0 --> X s>=/s< 0
  is scheduled before the fold
    ((X << Y) & C) == 0 -> (X & (C >> Y)) == 0.
  https://rise4fun.com/Alive/fbdh

  Reviewers: lebedev.ri, efriedma, spatel, craig.topper
  Reviewed By: lebedev.ri
  Subscribers: hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D63026

  llvm-svn: 363845
* [InstCombine] foldICmpWithLowBitMaskedVal(): 'icmp sgt/sle': avoid miscompiles (Roman Lebedev, 2019-06-09, 1 file, -0/+8)

  A precondition 'x != 0' was forgotten by me:
  https://rise4fun.com/Alive/JFNP
  https://rise4fun.com/Alive/jHvL

  These 4 folds with non-constants could be re-enabled, but for now
  let's go for the simplest solution.

  https://bugs.llvm.org/show_bug.cgi?id=42198

  llvm-svn: 362911
* [InstCombine] Avoid use after free in DenseMap, when built with GCC (Martin Storsjo, 2019-05-30, 1 file, -1/+4)

  Previously, this used a statement like this:
    Map[A] = Map[B];

  This is equivalent to the following:
    const auto &Src = Map[B];
    auto &Dest = Map[A];
    Dest = Src;

  The second statement, "auto &Dest = Map[A];", can insert a new element
  into the DenseMap, which can potentially grow and reallocate the
  DenseMap's internal storage, invalidating the existing reference to
  the source. When doing the actual assignment, the Src reference is
  dereferenced, accessing memory that was freed when the DenseMap grew.

  This issue hadn't shown up when LLVM was built with Clang, because the
  right-hand side ended up being dereferenced before evaluating the
  left-hand side. (If the value type is a larger data type, Clang
  doesn't do this, but behaves like GCC.)

  With GCC, a cast to Value* isn't enough to make it dereference the
  right-hand side reference before invoking operator[] (while that is
  enough to make Clang/LLVM do the right thing for larger types), but
  storing it in an intermediate variable in a separate statement works.

  This fixes PR42065.

  Differential Revision: https://reviews.llvm.org/D62624

  llvm-svn: 362150
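  A minimal sketch of the fixed pattern the message describes (the map
  and value types here are illustrative, not the exact ones in the
  file):

  ```
  // Buggy: operator[] on the LHS may grow the DenseMap and invalidate
  // the reference just returned by operator[] on the RHS.
  // Map[A] = Map[B];

  // Fixed: complete the read in its own statement before the insert.
  Value *Tmp = Map[B];
  Map[A] = Tmp;
  ```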
* [ValueTracking][ConstantRange] Distinguish low/high always overflow (Nikita Popov, 2019-05-28, 1 file, -1/+2)

  In order to fold an always-overflowing signed saturating add/sub, we
  need to know in which direction the overflow always occurs. This patch
  splits up AlwaysOverflows into AlwaysOverflowsLow and
  AlwaysOverflowsHigh to pass through this information (but it is not
  used yet).

  Differential Revision: https://reviews.llvm.org/D62463

  llvm-svn: 361858
* [InstCombine] Refactor OptimizeOverflowCheck; NFCI (Nikita Popov, 2019-05-26, 1 file, -78/+60)

  Extract a method to compute overflow based on the binop and
  signedness, and then make the result-handling code generic. This
  extends the always-overflow handling to signed muls, but currently has
  no effect, as we don't compute always-overflow for them (thus NFC).

  llvm-svn: 361721
* [InstCombine] Remove OverflowCheckFlavor; NFC (Nikita Popov, 2019-05-26, 1 file, -17/+14)

  Instead pass the binary op and signedness. The extra enum only makes
  things more complicated in this case.

  llvm-svn: 361720
* [InstCombine] allow sinking fneg operands through an FP min/max (Sanjay Patel, 2019-05-07, 1 file, -5/+5)

  Fundamentally/generally, we should not have to rely on
  bailouts/crippling of folds. In this particular case, I think we
  always recognize the inverted predicate min/max pattern, so there
  should not be any loss of optimization. Codegen looks better because
  we are eliminating an fneg.

  llvm-svn: 360180
* Move Value *RHSCIOp def into the scope where it's actually used. NFCI. (Simon Pilgrim, 2019-05-05, 1 file, -2/+1)

  llvm-svn: 359973
* [InstCombine] reduce code duplication; NFC (Sanjay Patel, 2019-04-29, 1 file, -5/+7)

  Follow-up to rL359482.

  Avoid this potential problem throughout by giving the type a name and
  verifying the assumption that both operands are the same type.

  llvm-svn: 359485
* [InstCombine] visitFCmpInst - appease copy+paste pattern warning. NFCI. (Simon Pilgrim, 2019-04-29, 1 file, -1/+1)

  PVS-Studio's copy+paste recognizer was seeing this as a typo.
  Technically, Op0/Op1 in an fcmp should always be the same type, but we
  might as well avoid the issue.

  Reported in https://www.viva64.com/en/b/0629/

  llvm-svn: 359482
* [InstCombine] Remove redundant/bogus mul_with_overflow combines (Nikita Popov, 2019-04-13, 1 file, -8/+0)

  As pointed out in D60518, folding mulo(%x, undef) to {undef, undef}
  isn't correct. A correct version of this fold already exists in
  InstructionSimplify
  (https://github.com/llvm-mirror/llvm/blob/bd8056ef326e075cc500f3f0cfcd1193bc200594/lib/Analysis/InstructionSimplify.cpp#L4750-L4757),
  so this was just dead code anyway. Drop it together with the
  mul(%x, 0) -> {0, false} fold that is also already handled by
  InstSimplify.

  Differential Revision: https://reviews.llvm.org/D60649

  llvm-svn: 358339
* [InstCombine] Handle ssubo always overflow (Nikita Popov, 2019-04-10, 1 file, -3/+3)

  Following D60483 and D60497, this adds support for AlwaysOverflows
  handling for ssubo. This is the last case we can handle right now.

  Differential Revision: https://reviews.llvm.org/D60518

  llvm-svn: 358100
* [InstCombine] Handle saddo always overflow (Nikita Popov, 2019-04-10, 1 file, -3/+3)

  Follow-up to D60483: handle AlwaysOverflow conditions for saddo as
  well.

  Differential Revision: https://reviews.llvm.org/D60497

  llvm-svn: 358095
* [InstCombine] Handle usubo always overflow (Nikita Popov, 2019-04-10, 1 file, -0/+3)

  Check the AlwaysOverflow condition for usubo. The implementation is
  the same as the existing handling for uaddo and umulo. Handling for
  saddo and ssubo will follow (smulo doesn't have the necessary
  ValueTracking support).

  Differential Revision: https://reviews.llvm.org/D60483

  llvm-svn: 358052
* [InstCombine] Directly call computeOverflow methods in OptimizeOverflowCheck; NFC (Nikita Popov, 2019-04-10, 1 file, -6/+13)

  Instead of using the willOverflow helpers. This makes it easier to
  extend the handling of AlwaysOverflows.

  llvm-svn: 358051
* [InstCombine] Restructure OptimizeOverflowCheck; NFC (Nikita Popov, 2019-04-09, 1 file, -31/+28)

  Change the code to always handle the unsigned+signed cases together,
  with the same basic structure for add/sub/mul. The simple folds are
  always handled first, and then the ValueTracking overflow checks are
  used.

  llvm-svn: 358025
* [InstCombine] Combine no-wrap sub and icmp w/ constant. (Luqman Aden, 2019-04-04, 1 file, -1/+10)

  Teach InstCombine the transformation
    (icmp P (sub nuw|nsw C2, Y), C) -> (icmp swap(P) Y, C2-C)

  Reviewers: majnemer, apilipenko, sanjoy, spatel, lebedev.ri
  Reviewed By: lebedev.ri
  Subscribers: dmgreen, lebedev.ri, nikic, hiraditya, JDevlieghere, jfb, jdoerfert, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D59916

  llvm-svn: 357674
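  A worked instance (constants chosen for illustration): with C2=10,
  C=3, and a nuw sub, swap(ult) is ugt and C2-C is 7:

  ```
  %s = sub nuw i32 10, %y
  %r = icmp ult i32 %s, 3
    =>
  %r = icmp ugt i32 %y, 7
  ```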
* [InstCombine] Fix crashing from (icmp (bitcast ([su]itofp X)), Y) (Sanjay Patel, 2019-02-07, 1 file, -29/+33)

  This fixes a class of bugs introduced by D44367, which transforms
  various cases of
    icmp (bitcast ([su]itofp X)), Y
  to
    icmp X, Y.
  If the bitcast is between vector types with a different number of
  elements, the current code will produce bad IR along the lines of:
    icmp <N x i32> ..., <M x i32> <...>

  This patch suppresses the transform if the bitcast changes the number
  of vector elements.

  Patch by: @AndrewScheidecker (Andrew Scheidecker)

  Differential Revision: https://reviews.llvm.org/D57871

  llvm-svn: 353467
* [InstCombine] refactor folds for (icmp (bitcast X), Y); NFCI (Sanjay Patel, 2019-02-07, 1 file, -75/+69)

  llvm-svn: 353462
* [InstCombine] X | C == C --> (X & ~C) == 0 (Sanjay Patel, 2019-02-06, 1 file, -9/+18)

  We should canonicalize to one of these forms, and compare-with-zero
  could be more conducive to follow-on transforms. This also leads to
  generally better codegen, as shown in PR40611:
  https://bugs.llvm.org/show_bug.cgi?id=40611

  llvm-svn: 353313
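  A worked instance with C=12 (chosen for illustration; ~12 is -13 on
  i32):

  ```
  %o = or i32 %x, 12
  %r = icmp eq i32 %o, 12
    =>
  %a = and i32 %x, -13
  %r = icmp eq i32 %a, 0
  ```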
* [opaque pointer types] Pass function types to CallInst creation. (James Y Knight, 2019-02-01, 1 file, -4/+4)

  This cleans up all CallInst creation in LLVM to explicitly pass a
  function type rather than deriving it from the pointer's element-type.

  Differential Revision: https://reviews.llvm.org/D57170

  llvm-svn: 352909
* [InstCombine] Simplify cttz/ctlz + icmp ugt/ult (Nikita Popov, 2019-01-19, 1 file, -10/+66)

  Follow-up to D55745, this time handling comparisons with the ugt and
  ult predicates (which are the canonical forms for non-equality
  predicates). For ctlz we can convert into a simple icmp; for cttz we
  can convert into a mask check.

  Differential Revision: https://reviews.llvm.org/D56355

  llvm-svn: 351645
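  Hedged sketches of both directions (constants chosen for illustration;
  the exact emitted form may differ):

  ```
  ; ctlz: more than 20 leading zeros on i32 iff %x u< 2^11
  %z = call i32 @llvm.ctlz.i32(i32 %x, i1 false)
  %r = icmp ugt i32 %z, 20
    =>
  %r = icmp ult i32 %x, 2048

  ; cttz: fewer than 3 trailing zeros iff one of the low 3 bits is set
  %z2 = call i32 @llvm.cttz.i32(i32 %x, i1 false)
  %r2 = icmp ult i32 %z2, 3
    =>
  %a  = and i32 %x, 7
  %r2 = icmp ne i32 %a, 0
  ```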
* Update the file headers across all of the LLVM projects in the monorepo to reflect the new license (Chandler Carruth, 2019-01-19, 1 file, -4/+3)

  We understand that people may be surprised that we're moving the
  header entirely to discuss the new license. We checked this carefully
  with the Foundation's lawyer and we believe this is the correct
  approach.

  Essentially, all code in the project is now made available by the LLVM
  project under our new license, so you will see that the license
  headers include that license only. Some of our contributors have
  contributed code under our old license, and accordingly, we have
  retained a copy of our old license notice in the top-level files in
  each project and repository.

  llvm-svn: 351636
* [InstCombine] Simplify cttz/ctlz + icmp eq/ne into mask check (Nikita Popov, 2018-12-18, 1 file, -3/+22)

  Checking whether a number has a certain number of trailing / leading
  zeros means checking whether it is of the form XXXX1000 / 0001XXXX,
  which can be done with an and+icmp.

  Related to https://bugs.llvm.org/show_bug.cgi?id=28668. As a next
  step, this can be extended to non-equality predicates.

  Differential Revision: https://reviews.llvm.org/D55745

  llvm-svn: 349530
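  For example (illustrative constants): exactly 3 trailing zeros means
  the low 4 bits are 0b1000:

  ```
  %z = call i32 @llvm.cttz.i32(i32 %x, i1 false)
  %r = icmp eq i32 %z, 3
    =>
  %a = and i32 %x, 15
  %r = icmp eq i32 %a, 8
  ```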
* [InstCombine] Fix negative GEP offset evaluation for 32-bit pointers (Nikita Popov, 2018-12-12, 1 file, -5/+3)

  This fixes https://bugs.llvm.org/show_bug.cgi?id=39908.

  The evaluateGEPOffsetExpression() function simplifies GEP offsets for
  use in comparisons against zero, basically by converting
  X*Scale+Offset==0 to X+Offset/Scale==0 if Scale divides Offset.
  However, before this is done, Offset is masked down to the pointer
  size. This produces incorrect results for negative Offsets, because we
  basically end up dividing the 32-bit offset *zero* extended to 64 bits
  (rather than sign extended).

  Fix this by explicitly sign-extending the truncated value.

  Differential Revision: https://reviews.llvm.org/D55449

  llvm-svn: 348987
* [InstCombine] foldICmpWithLowBitMaskedVal(): don't miscompile -1 vector elts (Roman Lebedev, 2018-12-06, 1 file, -0/+4)

  I was finally able to quantify what I thought was missing in the fix:
  it was vector constants. If we have a scalar (and %x, -1), it will be
  instsimplified before we reach this code, but if it is a vector, we
  may still have a -1 element.

  Thus, we want to avoid the fold if *at least one* element is -1. Or,
  in other words, ignoring the undef elements, no sign bits should be
  set. Thus, m_NonNegative().

  A follow-up for rL348181.
  https://bugs.llvm.org/show_bug.cgi?id=39861

  llvm-svn: 348462
* [InstCombine] simplify icmps with same operands based on dominating cmp (Sanjay Patel, 2018-12-05, 1 file, -0/+5)

  The tests here are based on the motivating cases from D54827.

  More background:

  1. We don't get these cases in general with SimplifyCFG because the
     root of the pattern match is an icmp, not a branch. I'm not sure
     how often we encounter this pattern vs. the seemingly more likely
     case with branches, but I don't see evidence to leave the minimal
     pattern unoptimized.

  2. This has a chance of increasing compile-time because we're using a
     ValueTracking call to handle the match. The motivating cases could
     be handled with a simpler pair of calls to
     isImpliedTrueByMatchingCmp/isImpliedFalseByMatchingCmp, but I saw
     that we have a more comprehensive wrapper around those, so we might
     as well use it here unless there's evidence that it's significantly
     slower.

  3. Ideally, we'd handle the fold to constants in InstSimplify, but as
     with the existing code here, we could extend this to handle cases
     where the result is not a constant, but a new combined predicate.
     That would mean splitting the logic across the 2 passes and
     possibly duplicating the pattern-matching cost.

  4. As mentioned in D54827, this seems like the kind of thing that
     should be handled in Correlated Value Propagation, but that pass is
     currently limited to dealing with instructions with constant
     operands, so extending this bit of InstCombine is the
     smallest/easiest way to get these patterns optimized.

  llvm-svn: 348367
* [InstCombine] rearrange foldICmpWithDominatingICmp; NFC (Sanjay Patel, 2018-12-04, 1 file, -33/+45)

  Move it out from under the constant check, reorder predicates, add
  comments. This makes it easier to extend to handle the non-constant
  case.

  llvm-svn: 348284
* [InstCombine] add helper for icmp with dominator; NFC (Sanjay Patel, 2018-12-04, 1 file, -20/+23)

  There's a potential small enhancement to this code that could solve
  the cases currently under proposal in D54827 via SimplifyCFG. Whether
  instcombine should be doing this kind of semi-non-local analysis in
  the first place is an open question, but separating the logic out can
  only help if/when we decide to move it to a different pass.

  AFAICT, any proposal to do this in SimplifyCFG could also be seen as
  an overreach, and it would be incomplete to start the fold from a
  branch rather than an icmp.

  There's another question here about the code for
  processUGT_ADDCST_ADD(). That part may be completely dead after
  rL234638?

  llvm-svn: 348273
* [InstCombine] foldICmpWithLowBitMaskedVal(): disable 2 faulty folds. (Roman Lebedev, 2018-12-03, 1 file, -0/+4)

  These two folds are invalid for this non-constant pattern when the
  mask ends up being all-ones:
  https://rise4fun.com/Alive/9au
  https://rise4fun.com/Alive/UcQM

  Fixes https://bugs.llvm.org/show_bug.cgi?id=39861

  llvm-svn: 348181
* [InstCombine] propagate FMF for fcmp+fabs folds (Sanjay Patel, 2018-11-07, 1 file, -7/+13)

  By morphing the instruction rather than deleting and creating a new
  one, we retain fast-math-flags and potentially other metadata (profile
  info?).

  llvm-svn: 346331
* [InstCombine] peek through fabs() when checking isnan() (Sanjay Patel, 2018-11-07, 1 file, -1/+6)

  That should be the end of the missing cases for this fold. See earlier
  patches in this series: rL346321, rL346324.

  llvm-svn: 346327
* [InstCombine] add folds for fcmp Pred fabs(X), 0.0 (Sanjay Patel, 2018-11-07, 1 file, -0/+8)

  Similar to rL346321, we had folds for the ordered versions of these
  compares already, so add the unordered siblings for completeness.

  llvm-svn: 346324
* [InstCombine] add fold for fabs(X) u< 0.0 (Sanjay Patel, 2018-11-07, 1 file, -0/+5)

  The sibling fold for 'oge' --> 'ord' was already here, but this half
  was missing. The result of fabs() must be positive or nan, so asking
  whether the result is negative or nan is the same as asking whether
  the result is nan.

  This is another step towards fixing
  https://bugs.llvm.org/show_bug.cgi?id=39475.

  llvm-svn: 346321
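  A sketch of the fold (the exact emitted form may differ): since
  fabs(%x) u< 0.0 can only hold through the unordered case, it is an
  is-nan test:

  ```
  %f = call float @llvm.fabs.f32(float %x)
  %r = fcmp ult float %f, 0.0
    =>
  %r = fcmp uno float %x, 0.0
  ```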
* [IR] add optional parameter for copying IR flags to compare instructions (Sanjay Patel, 2018-11-07, 1 file, -29/+14)

  As shown, this is used to eliminate redundant code in InstCombine, and
  there are more cases where we should be using this pattern, but we're
  currently unintentionally dropping flags.

  llvm-svn: 346282
* [InstCombine] allow vector types for fcmp+fpext fold (Sanjay Patel, 2018-11-06, 1 file, -9/+9)

  llvm-svn: 346245

* [InstCombine] propagate fast-math-flags when folding fcmp+fpext, part 2 (Sanjay Patel, 2018-11-06, 1 file, -3/+6)

  llvm-svn: 346242

* [InstCombine] rearrange code for fcmp+fpext; NFCI (Sanjay Patel, 2018-11-06, 1 file, -29/+27)

  llvm-svn: 346241

* [InstCombine] propagate fast-math-flags when folding fcmp+fpext (Sanjay Patel, 2018-11-06, 1 file, -5/+7)

  llvm-svn: 346240

* [InstCombine] propagate fast-math-flags when folding fcmp+fneg, part 2 (Sanjay Patel, 2018-11-06, 1 file, -2/+3)

  llvm-svn: 346238

* [InstCombine] reduce code; NFC (Sanjay Patel, 2018-11-06, 1 file, -1/+1)

  llvm-svn: 346235

* [InstCombine] propagate fast-math-flags when folding fcmp+fneg (Sanjay Patel, 2018-11-06, 1 file, -11/+16)

  This is another part of solving PR39475:
  https://bugs.llvm.org/show_bug.cgi?id=39475

  This might be enough to fix that particular issue, but as noted with
  the FIXME, we're still dropping FMF on other folds around here.

  llvm-svn: 346234