path: root/llvm/test/Transforms/InstCombine
...
* [NFC] Adjust tests for new fold
  David Bolvansky, 2019-09-04 (1 file changed, -4/+4)
  llvm-svn: 370886

* [NFC] Added tests for new fold
  David Bolvansky, 2019-09-04 (1 file changed, -0/+101)
  llvm-svn: 370885

* [InstCombine] Fold sub (or A, B) (and A, B) to (xor A, B)
  David Bolvansky, 2019-09-04 (1 file changed, -20/+8)

  Summary:
```
Name: sub or and to xor
%or = or i32 %y, %x
%and = and i32 %x, %y
%sub = sub i32 %or, %and
=>
%sub = xor i32 %x, %y

Optimization: sub or and to xor
Done: 1
Optimization is correct!
```
  https://rise4fun.com/Alive/eJu

  Reviewers: spatel, lebedev.ri
  Reviewed By: lebedev.ri
  Subscribers: hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D67153

  llvm-svn: 370883

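  As a quick illustration, a minimal test in the shape this fold matches
  (the function name and RUN/CHECK lines are illustrative, not the actual
  test file added by the commit):
```
; RUN: opt < %s -instcombine -S | FileCheck %s

define i32 @sub_or_and(i32 %x, i32 %y) {
; CHECK-LABEL: @sub_or_and(
; CHECK: xor i32
  %or = or i32 %y, %x
  %and = and i32 %x, %y
  %sub = sub i32 %or, %and
  ret i32 %sub
}
```
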
* [NFC] Added a new test for D67153
  David Bolvansky, 2019-09-04 (1 file changed, -0/+13)
  llvm-svn: 370881

* [NFC] Added tests for 'SUB of OR and AND to XOR' fold
  David Bolvansky, 2019-09-04 (1 file changed, -0/+103)
  llvm-svn: 370878

* [InstCombine] recognize bswap disguised as shufflevector
  Sanjay Patel, 2019-09-02 (1 file changed, -8/+11)

  bitcast <N x i8> (shuf X, undef, <N, N-1,...0>) to i{N*8} --> bswap (bitcast X to i{N*8})

  In PR43146 (https://bugs.llvm.org/show_bug.cgi?id=43146) we have a more
  complicated case where SLP is making a mess of bswap. This patch won't do
  anything for that currently, but we need to improve bswap recognition in
  instcombine, SLP, and/or a standalone pass to avoid that problem.

  This is limited using the data-layout so we don't try to do this transform
  with actual vector types. The backend does not appear to have folds to
  convert in either direction, so we don't want to mess up something that is
  actually better lowered as a shuffle.

  On x86, we're trading something like this:

    vmovd   %edi, %xmm0
    vpshufb LCPI0_0(%rip), %xmm0, %xmm0 ## xmm0 = xmm0[3,2,1,0,u,u,u,u,u,u,u,u,u,u,u,u]
    vmovd   %xmm0, %eax

  For:

    movl    %edi, %eax
    bswapl  %eax

  Differential Revision: https://reviews.llvm.org/D66965

  llvm-svn: 370659

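  A hedged sketch of the matched pattern for the 4 x i8 case (illustrative
  function name, not from the commit; the real fold also consults the data
  layout before firing):
```
define i32 @bswap_via_shuffle(<4 x i8> %x) {
  ; reverse the bytes with a shuffle, then bitcast to a scalar
  %rev = shufflevector <4 x i8> %x, <4 x i8> undef, <4 x i32> <i32 3, i32 2, i32 1, i32 0>
  %r = bitcast <4 x i8> %rev to i32
  ; expected to become roughly:
  ;   %cast = bitcast <4 x i8> %x to i32
  ;   %r = call i32 @llvm.bswap.i32(i32 %cast)
  ret i32 %r
}
```
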
* [InstCombine] mempcpy(d,s,n) to memcpy(d,s,n) + n
  David Bolvansky, 2019-08-31 (1 file changed, -6/+31)

  Summary:
  The back-end currently expands mempcpy, but the middle-end should work
  with memcpy instead of mempcpy to enable more memcpy optimization.

  The GCC backend emits mempcpy, so the LLVM backend could form it too, if
  we know the mempcpy libcall is better than memcpy + n.
  https://godbolt.org/z/dOCG96

  Reviewers: efriedma, spatel, craig.topper, RKSimon, jdoerfert
  Reviewed By: efriedma
  Subscribers: hjl.tools, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D65737

  llvm-svn: 370593

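  Roughly, the transform in IR terms (a sketch only; it uses the 2019-era
  typed-pointer syntax seen elsewhere in this log, and the function name is
  illustrative):
```
declare i8* @mempcpy(i8*, i8*, i64)

define i8* @use_mempcpy(i8* %d, i8* %s, i64 %n) {
  %r = call i8* @mempcpy(i8* %d, i8* %s, i64 %n)
  ; expected to become roughly:
  ;   call void @llvm.memcpy.p0i8.p0i8.i64(i8* %d, i8* %s, i64 %n, i1 false)
  ;   %r = getelementptr inbounds i8, i8* %d, i64 %n
  ret i8* %r
}
```
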
* [InstCombine][AMDGPU] Simplify tbuffer loads
  Piotr Sobczak, 2019-08-30 (1 file changed, -0/+658)

  Summary:
  Add missing tbuffer load intrinsics in SimplifyDemandedVectorElts.

  Reviewers: arsenm, nhaehnle
  Reviewed By: arsenm
  Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D66926

  llvm-svn: 370475

* [InstCombine] add possible bswap as widening shuffle test; NFC
  Sanjay Patel, 2019-08-29 (1 file changed, -2/+13)

  Goes with the proposal in D66965.

  llvm-svn: 370407

* [InstCombine] add tests for bswap disguised as shuffle; NFC
  Sanjay Patel, 2019-08-29 (1 file changed, -17/+80)

  Somewhat motivating case in PR43146:
  https://bugs.llvm.org/show_bug.cgi?id=43146
  But that's a lot more complicated.

  llvm-svn: 370381

* [InstCombine] Fold '((%x * %y) u/ %x) != %y' to '@llvm.umul.with.overflow' + overflow bit extraction
  Roman Lebedev, 2019-08-29 (2 files changed, -50/+46)

  Summary:
  `((%x * %y) u/ %x) != %y` is one of (3?) common ways to check that some
  unsigned multiplication (will not) overflow. Currently, we don't catch it.
  We could:
```
$ /repositories/alive2/build-Clang-unknown/alive -root-only ~/llvm-patch1.ll
Processing /home/lebedevri/llvm-patch1.ll..

----------------------------------------
Name: no overflow
  %o0 = mul i4 %y, %x
  %o1 = udiv i4 %o0, %x
  %r = icmp ne i4 %o1, %y
  ret i1 %r
=>
  %n0 = umul_overflow i4 %x, %y
  %o0 = extractvalue {i4, i1} %n0, 0
  %o1 = udiv %o0, %x
  %r = extractvalue {i4, i1} %n0, 1
  ret %r

Done: 1
Optimization is correct!

----------------------------------------
Name: no overflow
  %o0 = mul i4 %y, %x
  %o1 = udiv i4 %o0, %x
  %r = icmp eq i4 %o1, %y
  ret i1 %r
=>
  %n0 = umul_overflow i4 %x, %y
  %o0 = extractvalue {i4, i1} %n0, 0
  %o1 = udiv %o0, %x
  %n1 = extractvalue {i4, i1} %n0, 1
  %r = xor %n1, -1
  ret i1 %r

Done: 1
Optimization is correct!
```
  Reviewers: nikic, spatel, efriedma, xbolva00, RKSimon
  Reviewed By: nikic
  Subscribers: hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D65144

  llvm-svn: 370348

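  In LLVM IR proper (rather than Alive's shorthand), a sketch of the input
  shape and the expected result (illustrative function name):
```
declare { i32, i1 } @llvm.umul.with.overflow.i32(i32, i32)

define i1 @mul_overflows(i32 %x, i32 %y) {
  %mul = mul i32 %y, %x
  %div = udiv i32 %mul, %x
  %r = icmp ne i32 %div, %y
  ; expected to become roughly:
  ;   %m = call { i32, i1 } @llvm.umul.with.overflow.i32(i32 %x, i32 %y)
  ;   %r = extractvalue { i32, i1 } %m, 1
  ret i1 %r
}
```
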
* [InstCombine] Fold '(-1 u/ %x) u< %y' to '@llvm.umul.with.overflow' + overflow bit extraction
  Roman Lebedev, 2019-08-29 (2 files changed, -24/+28)

  Summary:
  `(-1 u/ %x) u< %y` is one of (3?) common ways to check that some unsigned
  multiplication (will not) overflow. Currently, we don't catch it. We could:
```
----------------------------------------
Name: no overflow
  %o0 = udiv i4 -1, %x
  %r = icmp ult i4 %o0, %y
=>
  %o0 = udiv i4 -1, %x
  %n0 = umul_overflow i4 %x, %y
  %r = extractvalue {i4, i1} %n0, 1

Done: 1
Optimization is correct!

----------------------------------------
Name: no overflow, swapped
  %o0 = udiv i4 -1, %x
  %r = icmp ugt i4 %y, %o0
=>
  %o0 = udiv i4 -1, %x
  %n0 = umul_overflow i4 %x, %y
  %r = extractvalue {i4, i1} %n0, 1

Done: 1
Optimization is correct!

----------------------------------------
Name: overflow
  %o0 = udiv i4 -1, %x
  %r = icmp uge i4 %o0, %y
=>
  %o0 = udiv i4 -1, %x
  %n0 = umul_overflow i4 %x, %y
  %n1 = extractvalue {i4, i1} %n0, 1
  %r = xor %n1, -1

Done: 1
Optimization is correct!

----------------------------------------
Name: overflow
  %o0 = udiv i4 -1, %x
  %r = icmp ule i4 %y, %o0
=>
  %o0 = udiv i4 -1, %x
  %n0 = umul_overflow i4 %x, %y
  %n1 = extractvalue {i4, i1} %n0, 1
  %r = xor %n1, -1

Done: 1
Optimization is correct!
```
  As can be observed from the tests, while simply forming the
  `@llvm.umul.with.overflow` is easy, if we were looking for the inverted
  answer, then more work needs to be done to clean up the now-pointless
  control flow that was guarding against division-by-zero. This is being
  addressed in follow-up patches.

  Reviewers: nikic, spatel, efriedma, xbolva00, RKSimon
  Reviewed By: nikic, xbolva00
  Subscribers: hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D65143

  llvm-svn: 370347

* [InstCombine] Shift amount reassociation in bittest: trunc-of-lshr (PR42399)
  Roman Lebedev, 2019-08-29 (2 files changed, -119/+41)

  Summary:
  Finally, the fold I was looking forward to :)

  The legality check is muddy; I doubt I've grokked the full generalization,
  but it handles all the cases I care about, and can come up with:
  https://rise4fun.com/Alive/26j

  I.e. we can perform the fold if **any** of the following is true:
  * The shift amount is either zero or one less than widest bitwidth
  * Either of the values being shifted has at most lowest bit set
  * The value that is being shifted by `shl` (which is not truncated) should
    have no less leading zeros than the total shift amount;
  * The value that is being shifted by `lshr` (which **is** truncated) should
    have no less leading zeros than the widest bit width minus total shift
    amount minus one

  I strongly suspect there is some better generalization, but I'm not aware
  of it as of right now. For now I also avoided using actual
  `computeKnownBits()`, but restricted it to constants.

  Reviewers: spatel, nikic, xbolva00
  Reviewed By: spatel
  Subscribers: hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D66383

  llvm-svn: 370324

* [NFC] Added more tests for D66651
  David Bolvansky, 2019-08-28 (1 file changed, -0/+18)
  llvm-svn: 370222

* [InstCombine] Disable recursion in foldGEPICmp for vector pointer GEPs
  Craig Topper, 2019-08-28 (1 file changed, -0/+12)

  Due to missing vector support in this function, recursion can generate
  worse code in some cases.

  llvm-svn: 370221

* [NFC] Unbreak tests
  David Bolvansky, 2019-08-28 (1 file changed, -2/+0)
  llvm-svn: 370170

* [NFC] Updated test
  David Bolvansky, 2019-08-28 (1 file changed, -5/+22)
  llvm-svn: 370169

* Annotate return values of allocation functions with dereferenceable_or_null
  David Bolvansky, 2019-08-28 (5 files changed, -105/+191)

  Summary:
  Example:

    define dso_local noalias i8* @_Z6maixxnv() local_unnamed_addr #0 {
    entry:
      %call = tail call noalias dereferenceable_or_null(64) i8* @malloc(i64 64) #6
      ret i8* %call
    }

  Reviewers: jdoerfert
  Reviewed By: jdoerfert
  Subscribers: aaron.ballman, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D66651

  llvm-svn: 370168

* [InstCombine] Disable some portions of foldGEPICmp for GEPs that return a vector of pointers. Fix other portions.
  Craig Topper, 2019-08-27 (1 file changed, -0/+15)
  llvm-svn: 370114

* [Analysis] Improve EmitGEPOffset handling of vector GEPs with scalar indices.
  Craig Topper, 2019-08-27 (1 file changed, -0/+15)

  This patch splats the scalar index if necessary before using it in any
  integer casts or other arithmetic.

  llvm-svn: 370112

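  For reference, a vector GEP with a scalar index is valid IR; a sketch of
  the shape involved (illustrative function, typed-pointer syntax):
```
define <2 x i1> @vec_gep_scalar_idx(<2 x i32*> %base, i64 %i) {
  ; the scalar index %i applies to every lane; when EmitGEPOffset turns
  ; this into integer offset arithmetic, %i must first be splatted
  %gep = getelementptr i32, <2 x i32*> %base, i64 %i
  %r = icmp eq <2 x i32*> %gep, zeroinitializer
  ret <2 x i1> %r
}
```
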
* [NFC] Added tests for D66651
  David Bolvansky, 2019-08-27 (1 file changed, -0/+181)
  llvm-svn: 370046

* [InstCombine] Fold select with ctlz to cttz
  David Bolvansky, 2019-08-27 (1 file changed, -38/+8)

  Summary:
  Handle pattern [0]:

    int ctz(unsigned int a)
    {
      int c = __clz(a & -a);
      return a ? 31 - c : c;
    }

  In reality, the compiler can generate much better code for cttz, so fold
  away this pattern.

  https://godbolt.org/z/c5kPtV

  [0] https://community.arm.com/community-help/f/discussions/2114/count-trailing-zeros

  Reviewers: spatel, nikic, lebedev.ri, dmgreen, hfinkel
  Reviewed By: hfinkel
  Subscribers: hfinkel, javed.absar, kristof.beyls, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D66308

  llvm-svn: 370037

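  The same pattern in IR form, as a hedged sketch (illustrative function
  name; the exact pre-fold IR a front-end emits may differ):
```
declare i32 @llvm.ctlz.i32(i32, i1)

define i32 @ctz_from_clz(i32 %a) {
  %neg = sub i32 0, %a
  %lowbit = and i32 %a, %neg           ; isolate the lowest set bit
  %clz = call i32 @llvm.ctlz.i32(i32 %lowbit, i1 false)
  %sub = sub i32 31, %clz
  %nonzero = icmp ne i32 %a, 0
  %r = select i1 %nonzero, i32 %sub, i32 %clz
  ; expected to become roughly: %r = call i32 @llvm.cttz.i32(i32 %a, i1 false)
  ret i32 %r
}
```
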
* msan, codegen, instcombine: Keep more lifetime markers used for msan
  Vitaly Buka, 2019-08-26 (1 file changed, -0/+15)

  Reviewers: eugenis
  Subscribers: hiraditya, cfe-commits, #sanitizers, llvm-commits
  Tags: #clang, #sanitizers, #llvm
  Differential Revision: https://reviews.llvm.org/D66695

  llvm-svn: 369979

* [InstCombine] icmp eq/ne (gep inbounds P, Idx..), null -> icmp eq/ne P, null for vectors
  Philip Reames, 2019-08-26 (1 file changed, -7/+5)

  Extend the transform introduced in https://reviews.llvm.org/D66608 to
  work for vector geps as well.

  Differential Revision: https://reviews.llvm.org/D66671

  llvm-svn: 369949

* [InstCombine] matchThreeWayIntCompare(): commutativity awareness
  Roman Lebedev, 2019-08-24 (1 file changed, -52/+24)

  Summary:
  `matchThreeWayIntCompare()` looks for
```
select i1 (a == b), i32 Equal, i32 (select i1 (a < b), i32 Less, i32 Greater)
```
  but both of these selects/compares can be in commuted form, so out of 8
  variants, only the two most basic ones are handled. This fixes a
  regression introduced in D66232.

  Reviewers: spatel, nikic, efriedma, xbolva00
  Reviewed By: spatel
  Subscribers: hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D66607

  llvm-svn: 369841

* [InstCombine] Try to reuse constant from select in leading comparison
  Roman Lebedev, 2019-08-24 (4 files changed, -77/+85)

  Summary:
  If we have e.g.:
```
%t = icmp ult i32 %x, 65536
%r = select i1 %t, i32 %y, i32 65535
```
  the constants `65535` and `65536` are suspiciously close. We could perform
  a transformation to deduplicate them:
```
Name: ult
%t = icmp ult i32 %x, 65536
%r = select i1 %t, i32 %y, i32 65535
=>
%t.inv = icmp ugt i32 %x, 65535
%r = select i1 %t.inv, i32 65535, i32 %y
```
  https://rise4fun.com/Alive/avb

  While this may seem esoteric, this should certainly be good for vectors
  (less constant pool usage) and for opt-for-size - we need to have only one
  constant.

  But the real fun part here is that it allows further transformation; in
  particular it finishes cleaning up the `clamp` folding, see e.g.
  `canonicalize-clamp-with-select-of-constant-threshold-pattern.ll`.
  We start with e.g.
```
%dont_need_to_clamp_positive = icmp sle i32 %X, 32767
%dont_need_to_clamp_negative = icmp sge i32 %X, -32768
%clamp_limit = select i1 %dont_need_to_clamp_positive, i32 -32768, i32 32767
%dont_need_to_clamp = and i1 %dont_need_to_clamp_positive, %dont_need_to_clamp_negative
%R = select i1 %dont_need_to_clamp, i32 %X, i32 %clamp_limit
```
  Without this patch we currently produce
```
%1 = icmp slt i32 %X, 32768
%2 = icmp sgt i32 %X, -32768
%3 = select i1 %2, i32 %X, i32 -32768
%R = select i1 %1, i32 %3, i32 32767
```
  which isn't really a `clamp` - both comparisons are performed on the
  original value. This patch changes it into
```
%1.inv = icmp sgt i32 %X, 32767
%2 = icmp sgt i32 %X, -32768
%3 = select i1 %2, i32 %X, i32 -32768
%R = select i1 %1.inv, i32 32767, i32 %3
```
  and then the magic happens! Some further transform finishes polishing it
  and we finally get:
```
%t1 = icmp sgt i32 %X, -32768
%t2 = select i1 %t1, i32 %X, i32 -32768
%t3 = icmp slt i32 %t2, 32767
%R = select i1 %t3, i32 %t2, i32 32767
```
  which is beautiful and just what we want.

  Proofs for `getFlippedStrictnessPredicateAndConstant()` for
  de-canonicalization: https://rise4fun.com/Alive/THl
  Proofs for the fold itself: https://rise4fun.com/Alive/THl

  Reviewers: spatel, dmgreen, nikic, xbolva00
  Reviewed By: spatel
  Subscribers: hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D66232

  llvm-svn: 369840

* [InstCombine][NFC] reuse-constant-from-select-in-icmp.ll - revisit tests
  Roman Lebedev, 2019-08-24 (1 file changed, -3/+13)
  llvm-svn: 369839

* NFC: Rename lifetime-asan.ll -> lifetime-sanitizer.ll
  Vitaly Buka, 2019-08-24 (1 file renamed, -0/+0)
  llvm-svn: 369831

* Fix a bug in just-submitted rL369789
  Philip Reames, 2019-08-23 (1 file changed, -0/+26)

  Started implementing the vector case and realized the scalar case hadn't
  correctly handled the GEP producing a different type than the base. It's
  entertaining seeing what slips through review when we're focused on the
  'hard' parts. :(

  Also adding an extra vector test, as it happened to be in my workspace and
  wasn't worth separating.

  llvm-svn: 369795

* [InstCombine] icmp eq/ne (gep inbounds P, Idx..), null -> icmp eq/ne P, null
  Philip Reames, 2019-08-23 (1 file changed, -0/+184)

  This generalizes the isGEPKnownNonNull rule from ValueTracking to apply
  when we do not know if the base is non-null, and thus need to replace one
  condition with another.

  The core notion is that an inbounds GEP can only form null if the base
  pointer is null and the offset is zero. If the offset is non-zero, the
  "inbounds" marker makes the result poison, so we're free to ignore the
  case where the offset is non-zero. Similarly, there's no case under which
  a non-null base can result in a null result without generating poison.

  Differential Revision: https://reviews.llvm.org/D66608

  llvm-svn: 369789

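  A sketch of the transform (illustrative function name, typed-pointer
  syntax):
```
define i1 @gep_inbounds_eq_null(i8* %p, i64 %idx) {
  ; with non-zero %idx, an inbounds GEP on a null base is poison, so the
  ; only non-poison way %gep can be null is %p == null with %idx == 0
  %gep = getelementptr inbounds i8, i8* %p, i64 %idx
  %r = icmp eq i8* %gep, null
  ; expected to become: %r = icmp eq i8* %p, null
  ret i1 %r
}
```
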
* [NFC][InstCombine] Fixup few new tests in unrecognized_three-way-comparison.ll
  Roman Lebedev, 2019-08-22 (1 file changed, -6/+6)
  llvm-svn: 369701

* IR. Change strip* family of functions to not look through aliases.
  Peter Collingbourne, 2019-08-22 (4 files changed, -240/+222)

  I noticed another instance of the issue where references to aliases were
  being replaced with aliasees, this time in InstCombine. In the instance
  that I saw, it turned out to be only a QoI issue (a symbol ended up being
  missing from the symbol table due to the last reference to the alias being
  removed, preventing HWASAN from symbolizing a global reference), but it
  could easily have manifested as incorrect behaviour.

  Since this is the third such issue encountered (previously: D65118,
  D65314), it seems to be time to address this common error/QoI issue once
  and for all and make the strip* family of functions not look through
  aliases.

  Includes a test for the specific issue that I saw, but no doubt there are
  other similar bugs fixed here.

  As with D65118, this has been tested to make sure that the optimization
  isn't load-bearing. I built Clang, Chromium for Linux, Android and Windows
  as well as the test-suite, and there were no size regressions.

  Differential Revision: https://reviews.llvm.org/D66606

  llvm-svn: 369697

* [NFC][InstCombine] New tests: unrecognized_three-way-comparison.ll is ignorant about commutative variants, part 2
  Roman Lebedev, 2019-08-22 (1 file changed, -0/+121)
  llvm-svn: 369696

* [NFC][InstCombine] New tests: unrecognized_three-way-comparison.ll is ignorant about commutative variants
  Roman Lebedev, 2019-08-22 (1 file changed, -0/+121)

  D66232 "exposes" the problem.

  llvm-svn: 369667

* Add a couple of extra tests noticed in post-commit discussion of rL369541
  Philip Reames, 2019-08-21 (1 file changed, -0/+19)
  llvm-svn: 369546

* [instcombine] icmp eq/ne (sub C, Y), C -> icmp eq/ne Y, 0
  Philip Reames, 2019-08-21 (1 file changed, -0/+40)

  Noticed while looking at PR43028.

  llvm-svn: 369541

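  A sketch with a concrete constant (42 is arbitrary, chosen for
  illustration; in wrapping arithmetic, C - y == C holds exactly when
  y == 0):
```
define i1 @sub_from_const_eq(i32 %y) {
  %sub = sub i32 42, %y
  %r = icmp eq i32 %sub, 42
  ; expected to become: %r = icmp eq i32 %y, 0
  ret i1 %r
}
```
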
* [InstCombine] narrow icmp with extended operands of different widths
  Sanjay Patel, 2019-08-21 (1 file changed, -37/+30)

  An intermediate extend is used to widen the narrow operand to the width of
  the other (wider) operand. At that point, we have the same logic as the
  existing transform that was restricted to folds of equal-width zext/sext.

  This mostly solves PR42700:
  https://bugs.llvm.org/show_bug.cgi?id=42700

  llvm-svn: 369519

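  A sketch of the mixed-width case this enables (illustrative widths and
  function name):
```
define i1 @icmp_zext_mixed_widths(i8 %a, i16 %b) {
  %za = zext i8 %a to i32
  %zb = zext i16 %b to i32
  %r = icmp ult i32 %za, %zb
  ; expected to become roughly:
  ;   %na = zext i8 %a to i16
  ;   %r = icmp ult i16 %na, %b
  ret i1 %r
}
```
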
* [InstCombine] add more extra use tests for icmp with extends; NFC
  Sanjay Patel, 2019-08-20 (1 file changed, -2/+34)
  llvm-svn: 369447

* [InstCombine] add tests for mismatched cast ops for icmp; NFC
  Sanjay Patel, 2019-08-20 (1 file changed, -18/+190)

  Motivating case is shown in PR42700:
  https://bugs.llvm.org/show_bug.cgi?id=42700

  llvm-svn: 369439

* [InstCombine] simplify min/max of min/max with same operands (PR35607)
  Sanjay Patel, 2019-08-20 (1 file changed, -96/+40)

  This is the original integer variant requested in:
  https://bugs.llvm.org/show_bug.cgi?id=35607

  As noted in the TODO and several similar TODOs around this block, we could
  do this in instsimplify, but then it would cost more because we would be
  trying to match min/max via ValueTracking in 2 different places.

  There are 4 commuted variants for each of smin/smax/umin/umax that are not
  matched here. There are also icmp predicate variants that are not included
  in the affected test file because they are already handled by instsimplify
  by folding the final icmp to true/false.

  https://rise4fun.com/Alive/3KVc

    Name: smax(smax, smin)
    %c1 = icmp slt i32 %x, %y
    %c2 = icmp slt i32 %y, %x
    %min = select i1 %c1, i32 %x, i32 %y
    %max = select i1 %c2, i32 %x, i32 %y
    %c3 = icmp sgt i32 %max, %min
    %r = select i1 %c3, i32 %max, i32 %min
    =>
    %r = %max

    Name: smin(smax, smin)
    %c1 = icmp slt i32 %x, %y
    %c2 = icmp slt i32 %y, %x
    %min = select i1 %c1, i32 %x, i32 %y
    %max = select i1 %c2, i32 %x, i32 %y
    %c3 = icmp sgt i32 %max, %min
    %r = select i1 %c3, i32 %min, i32 %max
    =>
    %r = %min

    Name: umax(umax, umin)
    %c1 = icmp ult i32 %x, %y
    %c2 = icmp ult i32 %y, %x
    %min = select i1 %c1, i32 %x, i32 %y
    %max = select i1 %c2, i32 %x, i32 %y
    %c3 = icmp ult i32 %min, %max
    %r = select i1 %c3, i32 %max, i32 %min
    =>
    %r = %max

    Name: umin(umax, umin)
    %c1 = icmp ult i32 %x, %y
    %c2 = icmp ult i32 %y, %x
    %min = select i1 %c1, i32 %x, i32 %y
    %max = select i1 %c2, i32 %x, i32 %y
    %c3 = icmp ult i32 %min, %max
    %r = select i1 %c3, i32 %min, i32 %max
    =>
    %r = %min

  llvm-svn: 369386

* [InstCombine] add tests for min/max with min/max of same operands; NFC
  Sanjay Patel, 2019-08-20 (1 file changed, -0/+382)
  llvm-svn: 369376

* [NFC][InstCombine] Some tests for 'shift amount reassoc in bit test - trunc-of-lshr' (PR42399)
  Roman Lebedev, 2019-08-17 (1 file changed, -0/+546)

  Finally, the fold I was looking forward to :)

  The legality check is muddy; I doubt I've grokked the full generalization,
  but it handles all the cases I care about, and can come up with:
  https://rise4fun.com/Alive/26j

  https://bugs.llvm.org/show_bug.cgi?id=42399

  llvm-svn: 369197

* Revert r367891 - "[InstCombine] combine mul+shl separated by zext"
  Sanjay Patel, 2019-08-16 (1 file changed, -6/+8)

  This reverts commit 5dbb90bfe14ace30224239cac7c61a1422fa5144.

  As noted in the post-commit thread for r367891, this can create a multiply
  that is lowered to a libcall that may not exist. We need to improve the
  backend decomposition for integer multiply before trying to re-land this
  (if it's still worthwhile after doing the backend work).

  llvm-svn: 369174

* [InstCombine][NFC] reuse-constant-from-select-in-icmp.ll - check branch_weights too
  Roman Lebedev, 2019-08-16 (1 file changed, -2/+8)
  llvm-svn: 369166

* [InstCombine][NFC] Revisit tests in reuse-constant-from-select-in-icmp.ll
  Roman Lebedev, 2019-08-16 (1 file changed, -8/+30)
  llvm-svn: 369163

* [InstCombine] canonicalize a scalar-select-of-vectors to vector select
  Sanjay Patel, 2019-08-16 (1 file changed, -4/+10)

  This pattern may arise more frequently with an enhancement to SLP
  vectorization suggested in PR42755:
  https://bugs.llvm.org/show_bug.cgi?id=42755
  ...but we should handle this pattern to make things easier for the backend
  either way.

  For all in-tree targets that I looked at, codegen for typical vector sizes
  looks better when we change to a vector select, so this is safe to do
  without a cost model (in other words, as a target-independent
  canonicalization).

  For example, if the condition of the select is a scalar, we end up with
  something like this on x86:

    vpcmpgtd  %xmm0, %xmm1, %xmm0
    vpextrb   $12, %xmm0, %eax
    testb     $1, %al
    jne       LBB0_2
    ## %bb.1:
    vmovaps   %xmm3, %xmm2
    LBB0_2:
    vmovaps   %xmm2, %xmm0

  Rather than the splat-condition variant:

    vpcmpgtd  %xmm0, %xmm1, %xmm0
    vpshufd   $255, %xmm0, %xmm0 ## xmm0 = xmm0[3,3,3,3]
    vblendvps %xmm0, %xmm2, %xmm3, %xmm0

  Differential Revision: https://reviews.llvm.org/D66095

  llvm-svn: 369140

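  A hedged sketch of one plausible input shape matching the x86 output
  shown above (assuming the scalar condition is a lane extracted from a
  vector compare; function name is illustrative):
```
define <4 x float> @scalar_cond_select(<4 x i32> %x, <4 x i32> %y, <4 x float> %a, <4 x float> %b) {
  %cmp = icmp sgt <4 x i32> %y, %x
  %cond = extractelement <4 x i1> %cmp, i32 3
  %r = select i1 %cond, <4 x float> %a, <4 x float> %b
  ; expected to become a vector select on the splatted condition lane,
  ; roughly: select <4 x i1> (shuffle of %cmp), <4 x float> %a, <4 x float> %b
  ret <4 x float> %r
}
```
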
* [InstCombine] Simplify pow(2.0, itofp(y)) to ldexp(1.0, y)
  Evandro Menezes, 2019-08-16 (1 file changed, -0/+63)

  Simplify `pow(2.0, itofp(y))` to `ldexp(1.0, y)`.

  Differential revision: https://reviews.llvm.org/D65979

  llvm-svn: 369120

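  A sketch of the input shape (illustrative function name; the real fold is
  subject to the usual libcall-availability and FP-semantics checks):
```
declare double @llvm.pow.f64(double, double)

define double @pow2_of_int(i32 %y) {
  %fp = sitofp i32 %y to double
  %r = call double @llvm.pow.f64(double 2.0, double %fp)
  ; expected to become roughly: %r = call double @ldexp(double 1.0, i32 %y)
  ret double %r
}
```
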
* [InstCombine] Shift amount reassociation in bittest: trunc-of-shl (PR42399)
  Roman Lebedev, 2019-08-16 (1 file changed, -46/+34)

  Summary:
  This is a continuation of D63829 / https://bugs.llvm.org/show_bug.cgi?id=42399

  I thought the naive pattern would solve my issue, but no: it involved
  truncation, thus more folds are needed. This isn't really the fold I'm
  interested in - I need trunc-of-lshr - but I've decided to start with
  `shl` because it's simpler.

  In this case, no extra legality checks are needed:
  https://rise4fun.com/Alive/CAb

  We should be careful about not increasing instruction count, since we
  need to produce `zext` because `and` is done in a wider type.

  Reviewers: spatel, nikic, xbolva00
  Reviewed By: spatel
  Subscribers: hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D66057

  llvm-svn: 369117

* [ValueTracking] Fix recurrence detection to check both PHI operands.
  Florian Hahn, 2019-08-16 (1 file changed, -2/+2)

  Summary:
  Currently we fail to compute known bits for recurrences where the first
  incoming value is the start value of the recurrence.

  Instead of exiting the loop when the first incoming value is not the step
  of the recurrence, continue to check the second incoming value. The
  original code uses a loop to handle both cases, but incorrectly exits
  instead of continuing.

  Reviewers: lebedev.ri, spatel, nikic
  Reviewed By: lebedev.ri
  Subscribers: hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D66216

  llvm-svn: 369088

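  A sketch of the kind of recurrence affected, where the start value is the
  *first* PHI operand (illustrative function; known bits should conclude
  the low bit of %iv is zero):
```
define i32 @even_recurrence(i32 %n) {
entry:
  br label %loop
loop:
  ; first incoming value is the start (0), second is the step expression
  %iv = phi i32 [ 0, %entry ], [ %iv.next, %loop ]
  %iv.next = add i32 %iv, 2
  %cmp = icmp slt i32 %iv.next, %n
  br i1 %cmp, label %loop, label %exit
exit:
  ; %iv is always even, so this is expected to fold to 0
  %r = and i32 %iv, 1
  ret i32 %r
}
```
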
* [NFC] Added tests for 'select with ctlz to cttz' fold
  David Bolvansky, 2019-08-15 (1 file changed, -0/+249)
  llvm-svn: 369032
