This allows us to fold fmas that multiply by 0.0; the multiply-by-1.0
case is handled there as well. The fneg/fabs cases are not handled by
SimplifyFMulInst, so we need to keep them.
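To illustrate, a minimal sketch of the kind of IR this lets us fold (hypothetical example, not taken from the patch's tests; it assumes the fast-math flags SimplifyFMulInst needs, e.g. nnan/nsz, are present on the call):
```
; fma that multiplies by 0.0: the fmul simplifies to 0.0 and the add then folds away (sketch)
%r0 = call nnan nsz float @llvm.fma.f32(float %x, float 0.000000e+00, float %z)
; --> %z
; fma that multiplies by 1.0: the fmul simplifies to %x, leaving a plain fadd (sketch)
%r1 = call nnan nsz float @llvm.fma.f32(float %x, float 1.000000e+00, float %z)
; --> fadd nnan nsz float %x, %z
```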
Reviewers: spatel, anemet, lebedev.ri
Reviewed By: spatel
Differential Revision: https://reviews.llvm.org/D67351
llvm-svn: 371518
This is similar to the existing fold for splats added with:
rL365379
If we can adjust the shuffle mask to include another element
in an identity mask (if it changes vector length, that's an
extract/insert subvector operation in the backend), then that
can eliminate extractelement/insertelement pairs in IR.
All targets are expected to lower shuffles with identity masks
efficiently.
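As a hypothetical illustration (not from the patch's tests) of an extract/insert pair being absorbed into a length-changing identity shuffle:
```
%e = extractelement <8 x float> %x, i32 1
%s = shufflevector <8 x float> %x, <8 x float> undef, <4 x i32> <i32 0, i32 undef, i32 2, i32 3>
%r = insertelement <4 x float> %s, float %e, i32 1
=>
%r = shufflevector <8 x float> %x, <8 x float> undef, <4 x i32> <i32 0, i32 1, i32 2, i32 3>
```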
llvm-svn: 371340
Summary:
This is the first change to enable the TLI to be built per-function so
that -fno-builtin* handling can be migrated to use function attributes.
See discussion on D61634 for background. This is an enabler for fixing
handling of these options for LTO, for example.
This change should not affect behavior, as the provided function is not
yet used to build a specifically per-function TLI, but rather enables
that migration.
Most of the changes were very mechanical, e.g. passing a Function to the
legacy analysis pass's getTLI interface, or, in Module-level cases,
adding a callback. This is similar to the way the per-function TTI
analysis works.
There was one place where we were looking for builtins but not in the
context of a specific function. See FindCXAAtExit in
lib/Transforms/IPO/GlobalOpt.cpp. I'm somewhat concerned my workaround
could provide the wrong behavior in some corner cases. Suggestions
welcome.
Reviewers: chandlerc, hfinkel
Subscribers: arsenm, dschuff, jvesely, nhaehnle, mehdi_amini, javed.absar, sbc100, jgravelle-google, eraman, aheejin, steven_wu, george.burgess.iv, dexonsmith, jfb, asbirlea, gchatelet, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66428
llvm-svn: 371284
llvm-svn: 371146
overflow' check
A follow-up for r329011.
This may be changed to produce @llvm.sub.with.overflow in a later patch,
but for now just make things more consistent overall.
A few observations stem from this:
* There does not seem to be a similar one-instruction fold for uadd-overflow
* I'm not sure we'll want to canonicalize `B u> A` as `usub.with.overflow`,
so since the `icmp` here no longer refers to `sub`,
reconstructing `usub.with.overflow` will be problematic,
and will likely require a standalone pass (similar to DivRemPairs).
https://rise4fun.com/Alive/Zqs
Name: (A - B) u> A --> B u> A
%t0 = sub i8 %A, %B
%r = icmp ugt i8 %t0, %A
=>
%r = icmp ugt i8 %B, %A
Name: (A - B) u<= A --> B u<= A
%t0 = sub i8 %A, %B
%r = icmp ule i8 %t0, %A
=>
%r = icmp ule i8 %B, %A
Name: C u< (C - D) --> C u< D
%t0 = sub i8 %C, %D
%r = icmp ult i8 %C, %t0
=>
%r = icmp ult i8 %C, %D
Name: C u>= (C - D) --> C u>= D
%t0 = sub i8 %C, %D
%r = icmp uge i8 %C, %t0
=>
%r = icmp uge i8 %C, %D
llvm-svn: 371101
overflow' check
A follow-up for r342004.
This will be changed to produce @llvm.add.with.overflow in a later patch,
but for now just make things more consistent overall.
https://rise4fun.com/Alive/qxE
Name: (Op1 + X) u< Op1 --> ~Op1 u< X
%t0 = add i8 %Op1, %X
%r = icmp ult i8 %t0, %Op1
=>
%n = xor i8 %Op1, -1
%r = icmp ult i8 %n, %X
Name: (Op1 + X) u>= Op1 --> ~Op1 u>= X
%t0 = add i8 %Op1, %X
%r = icmp uge i8 %t0, %Op1
=>
%n = xor i8 %Op1, -1
%r = icmp uge i8 %n, %X
;-------------------------------------------------------------------------------
Name: Op0 u> (Op0 + X) --> X u> ~Op0
%t0 = add i8 %Op0, %X
%r = icmp ugt i8 %Op0, %t0
=>
%n = xor i8 %Op0, -1
%r = icmp ugt i8 %X, %n
Name: Op0 u<= (Op0 + X) --> X u<= ~Op0
%t0 = add i8 %Op0, %X
%r = icmp ule i8 %Op0, %t0
=>
%n = xor i8 %Op0, -1
%r = icmp ule i8 %X, %n
llvm-svn: 371100
Summary:
```
Name: sub(xor(x, y), or(x, y)) -> neg(and(x, y))
%or = or i32 %y, %x
%xor = xor i32 %x, %y
%sub = sub i32 %xor, %or
=>
%sub1 = and i32 %x, %y
%sub = sub i32 0, %sub1
Optimization: sub(xor(x, y), or(x, y)) -> neg(and(x, y))
Done: 1
Optimization is correct!
```
https://rise4fun.com/Alive/8OI
Reviewers: lebedev.ri
Reviewed By: lebedev.ri
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D67188
llvm-svn: 370945
Summary:
```
Name: sub(and(x, y), or(x, y)) -> neg(xor(x, y))
%or = or i32 %y, %x
%and = and i32 %x, %y
%sub = sub i32 %and, %or
=>
%sub1 = xor i32 %x, %y
%sub = sub i32 0, %sub1
Optimization: sub(and(x, y), or(x, y)) -> neg(xor(x, y))
Done: 1
Optimization is correct!
```
https://rise4fun.com/Alive/VI6
Found by @lebedev.ri, who also authored the proof.
Reviewers: lebedev.ri, spatel
Reviewed By: lebedev.ri
Subscribers: llvm-commits, lebedev.ri
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D67155
llvm-svn: 370934
Summary:
```
Name: sub or and to xor
%or = or i32 %y, %x
%and = and i32 %x, %y
%sub = sub i32 %or, %and
=>
%sub = xor i32 %x, %y
Optimization: sub or and to xor
Done: 1
Optimization is correct!
```
https://rise4fun.com/Alive/eJu
Reviewers: spatel, lebedev.ri
Reviewed By: lebedev.ri
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D67153
llvm-svn: 370883
bitcast <N x i8> (shuf X, undef, <N, N-1,...0>) to i{N*8} --> bswap (bitcast X to i{N*8})
In PR43146:
https://bugs.llvm.org/show_bug.cgi?id=43146
...we have a more complicated case where SLP is making a mess of bswap. This patch won't
do anything for that currently, but we need to improve bswap recognition in instcombine,
SLP, and/or a standalone pass to avoid that problem.
This is limited using the data-layout so we don't try to do this transform with actual
vector types. The backend does not appear to have folds to convert in either direction,
so we don't want to mess up something that is actually better lowered as a shuffle.
On x86, we're trading something like this:
vmovd %edi, %xmm0
vpshufb LCPI0_0(%rip), %xmm0, %xmm0 ## xmm0 = xmm0[3,2,1,0,u,u,u,u,u,u,u,u,u,u,u,u]
vmovd %xmm0, %eax
For:
movl %edi, %eax
bswapl %eax
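A concrete sketch of the rule above, using <4 x i8> purely for illustration:
```
%s = shufflevector <4 x i8> %x, <4 x i8> undef, <4 x i32> <i32 3, i32 2, i32 1, i32 0>
%r = bitcast <4 x i8> %s to i32
=>
%b = bitcast <4 x i8> %x to i32
%r = call i32 @llvm.bswap.i32(i32 %b)
```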
Differential Revision: https://reviews.llvm.org/D66965
llvm-svn: 370659
Summary: Add missing tbuffer load intrinsics in SimplifyDemandedVectorElts.
Reviewers: arsenm, nhaehnle
Reviewed By: arsenm
Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66926
llvm-svn: 370475
llvm-svn: 370399
overflow bit extraction
Summary:
`((%x * %y) u/ %x) != %y` is one of (3?) common ways to check that
some unsigned multiplication (will not) overflow.
Currently, we don't catch it. We could:
```
$ /repositories/alive2/build-Clang-unknown/alive -root-only ~/llvm-patch1.ll
Processing /home/lebedevri/llvm-patch1.ll..
----------------------------------------
Name: no overflow
%o0 = mul i4 %y, %x
%o1 = udiv i4 %o0, %x
%r = icmp ne i4 %o1, %y
ret i1 %r
=>
%n0 = umul_overflow i4 %x, %y
%o0 = extractvalue {i4, i1} %n0, 0
%o1 = udiv %o0, %x
%r = extractvalue {i4, i1} %n0, 1
ret %r
Done: 1
Optimization is correct!
----------------------------------------
Name: no overflow
%o0 = mul i4 %y, %x
%o1 = udiv i4 %o0, %x
%r = icmp eq i4 %o1, %y
ret i1 %r
=>
%n0 = umul_overflow i4 %x, %y
%o0 = extractvalue {i4, i1} %n0, 0
%o1 = udiv %o0, %x
%n1 = extractvalue {i4, i1} %n0, 1
%r = xor %n1, -1
ret i1 %r
Done: 1
Optimization is correct!
```
Reviewers: nikic, spatel, efriedma, xbolva00, RKSimon
Reviewed By: nikic
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65144
llvm-svn: 370348
overflow bit extraction
Summary:
`(-1 u/ %x) u< %y` is one of (3?) common ways to check that
some unsigned multiplication (will not) overflow.
Currently, we don't catch it. We could:
```
----------------------------------------
Name: no overflow
%o0 = udiv i4 -1, %x
%r = icmp ult i4 %o0, %y
=>
%o0 = udiv i4 -1, %x
%n0 = umul_overflow i4 %x, %y
%r = extractvalue {i4, i1} %n0, 1
Done: 1
Optimization is correct!
----------------------------------------
Name: no overflow, swapped
%o0 = udiv i4 -1, %x
%r = icmp ugt i4 %y, %o0
=>
%o0 = udiv i4 -1, %x
%n0 = umul_overflow i4 %x, %y
%r = extractvalue {i4, i1} %n0, 1
Done: 1
Optimization is correct!
----------------------------------------
Name: overflow
%o0 = udiv i4 -1, %x
%r = icmp uge i4 %o0, %y
=>
%o0 = udiv i4 -1, %x
%n0 = umul_overflow i4 %x, %y
%n1 = extractvalue {i4, i1} %n0, 1
%r = xor %n1, -1
Done: 1
Optimization is correct!
----------------------------------------
Name: overflow
%o0 = udiv i4 -1, %x
%r = icmp ule i4 %y, %o0
=>
%o0 = udiv i4 -1, %x
%n0 = umul_overflow i4 %x, %y
%n1 = extractvalue {i4, i1} %n0, 1
%r = xor %n1, -1
Done: 1
Optimization is correct!
```
As can be observed from the tests, while simply forming the `@llvm.umul.with.overflow`
is easy, if we were looking for the inverted answer, then more work needs to be done
to clean up the now-pointless control flow that was guarding against division by zero.
This is being addressed in follow-up patches.
Reviewers: nikic, spatel, efriedma, xbolva00, RKSimon
Reviewed By: nikic, xbolva00
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65143
llvm-svn: 370347
Summary:
Finally, the fold I was looking forward to :)
The legality check is muddy; I doubt I've grokked the full generalization,
but it handles all the cases I care about, and can come up with:
https://rise4fun.com/Alive/26j
I.e. we can perform the fold if **any** of the following is true:
* The shift amount is either zero or one less than the widest bitwidth
* Either of the values being shifted has at most the lowest bit set
* The value that is being shifted by `shl` (which is not truncated) should have no fewer leading zeros than the total shift amount;
* The value that is being shifted by `lshr` (which **is** truncated) should have no fewer leading zeros than the widest bit width minus the total shift amount minus one
I strongly suspect there is some better generalization, but I'm not aware of it as of right now.
For now I also avoided using actual `computeKnownBits()`, and restricted it to constants.
Reviewers: spatel, nikic, xbolva00
Reviewed By: spatel
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66383
llvm-svn: 370324
llvm-svn: 370317
Always true/false checks were flagged by static analysis;
https://bugs.llvm.org/show_bug.cgi?id=43143
I have not confirmed the logic difference in propagating nsw vs. nuw,
but presumably we would have noticed a bug by now if that was wrong.
llvm-svn: 370248
llvm-svn: 370224
Due to missing vector support in this function, recursion can
generate worse code in some cases.
llvm-svn: 370221
InstCombiner::MaxArraySizeForCombine is set outside the constructor, so we need to ensure it has a default initialization value.
llvm-svn: 370220
llvm-svn: 370217
llvm-svn: 370212
variable warning. NFCI.
We have a lot of Predicate variables, all similarly named....
llvm-svn: 370207
Summary:
Example
define dso_local noalias i8* @_Z6maixxnv() local_unnamed_addr #0 {
entry:
%call = tail call noalias dereferenceable_or_null(64) i8* @malloc(i64 64) #6
ret i8* %call
}
Reviewers: jdoerfert
Reviewed By: jdoerfert
Subscribers: aaron.ballman, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66651
llvm-svn: 370168
vector of pointers. Fix other portions.
llvm-svn: 370114
Summary:
Handle pattern [0]:
int ctz(unsigned int a)
{
int c = __clz(a & -a);
return a ? 31 - c : c;
}
In reality, the compiler can generate much better code for cttz, so fold away this pattern.
https://godbolt.org/z/c5kPtV
[0] https://community.arm.com/community-help/f/discussions/2114/count-trailing-zeros
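In IR terms, the expectation (a sketch of the shape only, not the patch's exact tests) is that the clz-based sequence collapses to a single cttz call:
```
%neg = sub i32 0, %a
%lowbit = and i32 %a, %neg
%clz = call i32 @llvm.ctlz.i32(i32 %lowbit, i1 false)
%sub = sub i32 31, %clz
%iszero = icmp eq i32 %a, 0
%r = select i1 %iszero, i32 %clz, i32 %sub
=>
%r = call i32 @llvm.cttz.i32(i32 %a, i1 false)
```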
Reviewers: spatel, nikic, lebedev.ri, dmgreen, hfinkel
Reviewed By: hfinkel
Subscribers: hfinkel, javed.absar, kristof.beyls, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66308
llvm-svn: 370037
Reviewers: eugenis
Subscribers: hiraditya, cfe-commits, #sanitizers, llvm-commits
Tags: #clang, #sanitizers, #llvm
Differential Revision: https://reviews.llvm.org/D66695
llvm-svn: 369979
for vectors
Extend the transform introduced in https://reviews.llvm.org/D66608 to work for vector geps as well.
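A sketch of the vector form (hypothetical types and operands), mirroring the scalar fold from D66608:
```
%gep = getelementptr inbounds i32, <2 x i32*> %base, <2 x i64> %idx
%cmp = icmp eq <2 x i32*> %gep, zeroinitializer
=>
%cmp = icmp eq <2 x i32*> %base, zeroinitializer
```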
Differential Revision: https://reviews.llvm.org/D66671
llvm-svn: 369949
Promoting it from InstCombine's tryToReuseConstantFromSelectInComparison().
Return true if this constant and a constant 'Y' are element-wise equal.
This is identical to just comparing the pointers, with the exception that
for vectors, if only one of the constants has an `undef` element in some
lane, the constants still match.
llvm-svn: 369842
Summary:
`matchThreeWayIntCompare()` looks for
```
select i1 (a == b),
i32 Equal,
i32 (select i1 (a < b), i32 Less, i32 Greater)
```
but both of these selects/compares can be in their commuted forms,
so out of 8 variants, only the two most basic ones are handled.
This fixes a regression introduced in D66232.
Reviewers: spatel, nikic, efriedma, xbolva00
Reviewed By: spatel
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66607
llvm-svn: 369841
Summary:
If we have e.g.:
```
%t = icmp ult i32 %x, 65536
%r = select i1 %t, i32 %y, i32 65535
```
the constants `65535` and `65536` are suspiciously close.
We could perform a transformation to deduplicate them:
```
Name: ult
%t = icmp ult i32 %x, 65536
%r = select i1 %t, i32 %y, i32 65535
=>
%t.inv = icmp ugt i32 %x, 65535
%r = select i1 %t.inv, i32 65535, i32 %y
```
https://rise4fun.com/Alive/avb
While this may seem esoteric, this should certainly be good for vectors
(less constant pool usage) and for opt-for-size, since only one constant is needed.
But the real fun part here is that it allows further transformation,
in particular it finishes cleaning up the `clamp` folding,
see e.g. `canonicalize-clamp-with-select-of-constant-threshold-pattern.ll`.
We start with e.g.
```
%dont_need_to_clamp_positive = icmp sle i32 %X, 32767
%dont_need_to_clamp_negative = icmp sge i32 %X, -32768
%clamp_limit = select i1 %dont_need_to_clamp_positive, i32 -32768, i32 32767
%dont_need_to_clamp = and i1 %dont_need_to_clamp_positive, %dont_need_to_clamp_negative
%R = select i1 %dont_need_to_clamp, i32 %X, i32 %clamp_limit
```
without this patch we currently produce
```
%1 = icmp slt i32 %X, 32768
%2 = icmp sgt i32 %X, -32768
%3 = select i1 %2, i32 %X, i32 -32768
%R = select i1 %1, i32 %3, i32 32767
```
which isn't really a `clamp` - both comparisons are performed on the original value.
This patch changes it into
```
%1.inv = icmp sgt i32 %X, 32767
%2 = icmp sgt i32 %X, -32768
%3 = select i1 %2, i32 %X, i32 -32768
%R = select i1 %1.inv, i32 32767, i32 %3
```
and then the magic happens! Some further transform finishes polishing it and we finally get:
```
%t1 = icmp sgt i32 %X, -32768
%t2 = select i1 %t1, i32 %X, i32 -32768
%t3 = icmp slt i32 %t2, 32767
%R = select i1 %t3, i32 %t2, i32 32767
```
which is beautiful and just what we want.
Proofs for `getFlippedStrictnessPredicateAndConstant()` for de-canonicalization:
https://rise4fun.com/Alive/THl
Proofs for the fold itself: https://rise4fun.com/Alive/THl
Reviewers: spatel, dmgreen, nikic, xbolva00
Reviewed By: spatel
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66232
llvm-svn: 369840
Started implementing the vector case and realized the scalar case hadn't handled the GEP producing a different type than the base correctly. It's entertaining seeing what slips through review when we're focused on the 'hard' parts. :(
Also adding an extra vector test as it happened to be in workspace and wasn't worth separating.
llvm-svn: 369795
This generalizes the isGEPKnownNonNull rule from ValueTracking to apply when we do not know if the base is non-null, and thus need to replace one condition with another.
The core notion is that an inbounds GEP can only form null if the base pointer is null and the offset is zero. If the offset is non-zero, the "inbounds" marker makes the result poison, so we're free to ignore the case where the offset is non-zero. Similarly, there's no case under which a non-null base can produce a null result without generating poison.
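A minimal sketch of the resulting replacement (hypothetical operands):
```
%gep = getelementptr inbounds i32, i32* %base, i64 %idx
%cmp = icmp eq i32* %gep, null
=>
%cmp = icmp eq i32* %base, null
```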
Differential Revision: https://reviews.llvm.org/D66608
llvm-svn: 369789
Noticed while looking at pr43028.
llvm-svn: 369541
An intermediate extend is used to widen the narrow operand to the width of
the other (wider) operand. At that point, we have the same logic as the
existing transform that was restricted to folds of equal width zext/sext.
This mostly solves PR42700:
https://bugs.llvm.org/show_bug.cgi?id=42700
llvm-svn: 369519
llvm-svn: 369421
We were creating 2 instructions and relying on a subsequent fold
to invert a not(icmp). Create the final icmp directly instead.
llvm-svn: 369411
1. Update function name and stale code comments.
2. Use variable names that are less ambiguous.
3. Move operand checks into the function as early exits.
llvm-svn: 369390
This is the original integer variant requested in:
https://bugs.llvm.org/show_bug.cgi?id=35607
As noted in the TODO and several similar TODOs around this block,
we could do this in instsimplify, but then it would cost more
because we would be trying to match min/max via ValueTracking
in 2 different places.
There are 4 commuted variants for each of smin/smax/umin/umax
that are not matched here. There are also icmp predicate variants
that are not included in the affected test file because they are
already handled by instsimplify by folding the final icmp to
true/false.
https://rise4fun.com/Alive/3KVc
Name: smax(smax, smin)
%c1 = icmp slt i32 %x, %y
%c2 = icmp slt i32 %y, %x
%min = select i1 %c1, i32 %x, i32 %y
%max = select i1 %c2, i32 %x, i32 %y
%c3 = icmp sgt i32 %max, %min
%r = select i1 %c3, i32 %max, i32 %min
=>
%r = %max
Name: smin(smax, smin)
%c1 = icmp slt i32 %x, %y
%c2 = icmp slt i32 %y, %x
%min = select i1 %c1, i32 %x, i32 %y
%max = select i1 %c2, i32 %x, i32 %y
%c3 = icmp sgt i32 %max, %min
%r = select i1 %c3, i32 %min, i32 %max
=>
%r = %min
Name: umax(umax, umin)
%c1 = icmp ult i32 %x, %y
%c2 = icmp ult i32 %y, %x
%min = select i1 %c1, i32 %x, i32 %y
%max = select i1 %c2, i32 %x, i32 %y
%c3 = icmp ult i32 %min, %max
%r = select i1 %c3, i32 %max, i32 %min
=>
%r = %max
Name: umin(umax, umin)
%c1 = icmp ult i32 %x, %y
%c2 = icmp ult i32 %y, %x
%min = select i1 %c1, i32 %x, i32 %y
%max = select i1 %c2, i32 %x, i32 %y
%c3 = icmp ult i32 %min, %max
%r = select i1 %c3, i32 %min, i32 %max
=>
%r = %min
llvm-svn: 369386
foldShiftIntoShiftInAnotherHandOfAndInICmp() from D66383
llvm-svn: 369207
This reverts commit 5dbb90bfe14ace30224239cac7c61a1422fa5144.
As noted in the post-commit thread for r367891, this can create
a multiply that is lowered to a libcall that may not exist.
We need to improve the backend decomposition for integer multiply
before trying to re-land this (if it's still worthwhile after
doing the backend work).
llvm-svn: 369174
This pattern may arise more frequently with an enhancement to SLP vectorization suggested in PR42755:
https://bugs.llvm.org/show_bug.cgi?id=42755
...but we should handle this pattern to make things easier for the backend either way.
For all in-tree targets that I looked at, codegen for typical vector sizes looks better when we change
to a vector select, so this is safe to do without a cost model (in other words, as a target-independent
canonicalization).
For example, if the condition of the select is a scalar, we end up with something like this on x86:
vpcmpgtd %xmm0, %xmm1, %xmm0
vpextrb $12, %xmm0, %eax
testb $1, %al
jne LBB0_2
## %bb.1:
vmovaps %xmm3, %xmm2
LBB0_2:
vmovaps %xmm2, %xmm0
Rather than the splat-condition variant:
vpcmpgtd %xmm0, %xmm1, %xmm0
vpshufd $255, %xmm0, %xmm0 ## xmm0 = xmm0[3,3,3,3]
vblendvps %xmm0, %xmm2, %xmm3, %xmm0
Differential Revision: https://reviews.llvm.org/D66095
llvm-svn: 369140
Summary:
This is a continuation of D63829 / https://bugs.llvm.org/show_bug.cgi?id=42399
I thought the naive pattern would solve my issue, but nope, it involved truncation,
thus more folds were needed. This isn't really the fold I'm interested in,
I need trunc-of-lshr, but I've decided to start with `shl` because it's simpler.
In this case, no extra legality checks are needed:
https://rise4fun.com/Alive/CAb
We should be careful about not increasing instruction count,
since we need to produce a `zext` because the `and` is done in a wider type.
Reviewers: spatel, nikic, xbolva00
Reviewed By: spatel
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66057
llvm-svn: 369117
canonicalizeCmpWithConstant(), NFCI
I'd like to use it elsewhere, hopefully without reinventing the wheel.
No functional change intended so far.
llvm-svn: 368820
Summary:
Given a pattern like:
```
%old_cmp1 = icmp slt i32 %x, C2
%old_replacement = select i1 %old_cmp1, i32 %target_low, i32 %target_high
%old_x_offseted = add i32 %x, C1
%old_cmp0 = icmp ult i32 %old_x_offseted, C0
%r = select i1 %old_cmp0, i32 %x, i32 %old_replacement
```
it can be rewritten as a more canonical pattern:
```
%new_cmp1 = icmp slt i32 %x, -C1
%new_cmp2 = icmp sge i32 %x, C0-C1
%new_clamped_low = select i1 %new_cmp1, i32 %target_low, i32 %x
%r = select i1 %new_cmp2, i32 %target_high, i32 %new_clamped_low
```
Iff `-C1 s<= C2 s<= C0-C1`
Also, `ULT` predicate can also be `UGE`; or `UGT` iff `C0 != -1` (+invert result)
Also, `SLT` predicate can also be `SGE`; or `SGT` iff `C2 != INT_MAX` (+invert result)
If `C1 == 0`, then all 3 instructions must be one-use; else at most either `%old_cmp1` or `%old_x_offseted` can have extra uses.
NOTE: if we could reuse `%old_cmp1` as one of the comparisons we'll have to build, this could be less limiting.
So there are two icmp's, each one with 3 predicate variants, so there are 9 fold variants:
| | ULT | UGE | UGT |
| SLT | https://rise4fun.com/Alive/yIJ | https://rise4fun.com/Alive/5BfN | https://rise4fun.com/Alive/INH |
| SGE | https://rise4fun.com/Alive/hd8 | https://rise4fun.com/Alive/Abk | https://rise4fun.com/Alive/PlzS |
| SGT | https://rise4fun.com/Alive/VYG | https://rise4fun.com/Alive/oMY | https://rise4fun.com/Alive/KrzC |
{F9730206}
This fold was brought up in https://reviews.llvm.org/D65148#1603922 by @dmgreen, and is needed to unblock that patch.
This patch requires D65530.
Reviewers: spatel, nikic, xbolva00, dmgreen
Reviewed By: spatel
Subscribers: hiraditya, llvm-commits, dmgreen
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65765
llvm-svn: 368687
As per https://reviews.llvm.org/D65530#inline-592325
llvm-svn: 368686
all users are freely invertible
Summary:
This is rather unconventional..
As the comment there says, we don't have many folds for xor-of-icmps;
we try to turn them into an and-of-icmps, for which we have plenty of folds.
But if the ICmp we need to invert is not single-use, we give up.
As discussed in https://reviews.llvm.org/D65148#1603922,
we may have a non-canonical CLAMP pattern, with bit math and
a select-of-threshold that we'll potentially clamp.
As can be seen in `canonicalize-clamp-with-select-of-constant-threshold-pattern.ll`,
out of all 8 variations of the pattern, only two are **not** canonicalized into
the variant with and+icmp instead of bit math.
The reason is that the ICmp we need to invert is not single-use, so we give up.
We indeed can't perform this fold at will; the general rule is that
we should not increase the instruction count in InstCombine.
But we wouldn't end up increasing the instruction count if we can adapt every other
user to the inverted value. This way the `not` we create **will** get folded,
and in the end the instruction count does not increase.
For that, of course, we need to look at the users of a Value,
which is again rather unconventional for InstCombine :S
Thus I'm proposing to be a little bit more insistent in `foldXorOfICmps()`.
The alternatives would be to not create that `not`, but add duplicate code to
manually invert all users; or to add some even less general combine to handle
some more specific pattern[s].
Reviewers: spatel, nikic, RKSimon, craig.topper
Reviewed By: spatel
Subscribers: hiraditya, jdoerfert, dmgreen, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65530
llvm-svn: 368685
Summary:
x / fabs(x) -> copysign(1.0, x)
fabs(x) / x -> copysign(1.0, x)
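A sketch in IR (hypothetical example; it assumes the fast-math flags the fold requires are present on the fdiv):
```
%a = call float @llvm.fabs.f32(float %x)
%r = fdiv fast float %x, %a
=>
%r = call fast float @llvm.copysign.f32(float 1.000000e+00, float %x)
```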
Reviewers: spatel, foad, RKSimon, efriedma
Reviewed By: spatel
Subscribers: lebedev.ri, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D65898
llvm-svn: 368570
constantexpr pitfall (PR42962)
Instead of matching a value and then blindly casting to BinaryOperator
just to get the opcode, match the instruction and skip the cast.
Fixes https://bugs.llvm.org/show_bug.cgi?id=42962
llvm-svn: 368554
SimplifyBinOp(Instruction::BinaryOps::Add, )
llvm-svn: 368521