| Commit message |
| |
'\0', y)"
llvm-svn: 372142
| |
'\0', y)"
Summary:
This reverts commit r372101.
Causes ASAN build bot failures:
http://lab.llvm.org:8011/builders/sanitizer-ppc64be-linux/builds/14176
From http://lab.llvm.org:8011/builders/sanitizer-ppc64be-linux/builds/14176/steps/64-bit%20check-asan/logs/stdio:
```
[ RUN ] AddressSanitizer.StrNCatOOBTest
/home/buildbots/ppc64be-sanitizer/sanitizer-ppc64be/build/llvm-project/compiler-rt/lib/asan/tests/asan_str_test.cpp:462: Failure
Death test: strncat(to - 1, from, 0)
Result: failed to die.
```
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D67658
llvm-svn: 372125
| |
llvm-svn: 372101
| |
llvm-svn: 372098
| |
Reviewers: efriedma, jdoerfert
Reviewed By: jdoerfert
Subscribers: ychen, rsmith, joerg, aaron.ballman, lebedev.ri, uenoku, jdoerfert, hfinkel, javed.absar, spatel, dmgreen, llvm-commits
Differential Revision: https://reviews.llvm.org/D53342
llvm-svn: 372091
| |
Related folds were added in:
rL125734
...the code comment about register pressure is discussed in
more detail in:
https://bugs.llvm.org/show_bug.cgi?id=2698
But 10 years later, perf testing bzip2 with this change shows a
slight (0.2% average) improvement on Haswell, although that's
probably within test noise.
Given that this is IR canonicalization, we shouldn't be worried
about register pressure though; the backend should be able to
adjust for that as needed.
This is part of solving PR43310 the theoretically right way:
https://bugs.llvm.org/show_bug.cgi?id=43310
...i.e., if we don't cripple basic transforms, then we won't
need to add special-case code to detect larger patterns.
rL371940 and rL371981 are related patches in this series.
llvm-svn: 372007
| |
llvm-svn: 372004
| |
llvm-svn: 371988
| |
This fold and several others were added in:
rL125734 <https://reviews.llvm.org/rL125734>
...with no explanation for the one-use checks other than the code
comments about register pressure.
Given that this is IR canonicalization, we shouldn't be worried
about register pressure though; the backend should be able to
adjust for that as needed.
This is part of solving PR43310 the theoretically right way:
https://bugs.llvm.org/show_bug.cgi?id=43310
...i.e., if we don't cripple basic transforms, then we won't
need to add special-case code to detect larger patterns.
rL371940 is a related patch in this series.
llvm-svn: 371981
| |
llvm-svn: 371979
| |
This fold and several others were added in:
rL125734
...with no explanation for the one-use checks other than the code
comments about register pressure.
Given that this is IR canonicalization, we shouldn't be worried
about register pressure though; the backend should be able to
adjust for that as needed.
There are similar checks, as noted by the TODO comments. I'm
hoping to remove those restrictions too, but if any of these
changes does cause a regression, it should be easier to correct
by making small, individual commits.
This is part of solving PR43310 the theoretically right way:
https://bugs.llvm.org/show_bug.cgi?id=43310
...i.e., if we don't cripple basic transforms, then we won't
need to add special-case code to detect larger patterns.
llvm-svn: 371940
| |
llvm-svn: 371939
| |
Expand the folding of `nearbyint()`, `rint()` and `trunc()` to cover the
library functions, in addition to the current support for the intrinsics.
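As an illustration (a sketch, not a test from this patch), a call like the
following can now be folded at compile time just like the intrinsic form,
assuming the target's TargetLibraryInfo recognizes nearbyint; the function
name here is made up for the example:
```
declare double @nearbyint(double) readnone

define double @fold_nearbyint_libcall() {
  ; with the libcall handled like the intrinsic, this folds to
  ; 'ret double -2.0' under the default rounding mode
  %r = call double @nearbyint(double -1.5)
  ret double %r
}
```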
Differential revision: https://reviews.llvm.org/D67468
llvm-svn: 371774
| |
llvm-svn: 371750
| |
llvm-svn: 371746
| |
result-of-usub-is-non-zero-and-no-overflow.ll
llvm-svn: 371737
| |
and no overflow" (PR43259)
https://rise4fun.com/Alive/ska
https://rise4fun.com/Alive/9iX
https://bugs.llvm.org/show_bug.cgi?id=43259
llvm-svn: 371736
| |
This introduces additional rounding error in some cases. See D67434.
This reverts r371518 (git commit 18a1f0818b659cee13865b4fad2648d85984a4ed)
llvm-svn: 371634
| |
(srem X, pow2C) sgt/slt 0 can be reduced using bit hacks by masking
off the sign bit and the modulo (low) bits:
https://rise4fun.com/Alive/jSO
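A minimal sketch of the masking idea for the sgt case (illustrative only;
the exact mask and compare that InstCombine emits may differ):
```
; (srem i32 %x, 4) sgt 0  is equivalent to  (%x & 0x80000003) sgt 0:
; a negative %x keeps its sign bit and stays non-positive, while a
; non-negative %x reduces to its low (modulo) bits.
define i1 @srem_pow2_sgt_zero(i32 %x) {
  %m = and i32 %x, -2147483645   ; 0x80000003 = sign bit | (4 - 1)
  %r = icmp sgt i32 %m, 0
  ret i1 %r
}
```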
A '2' divisor allows slightly more folding:
https://rise4fun.com/Alive/tDBM
Any chance to remove an 'srem' use is probably worthwhile, but this is limited
to the one-use improvement case because doing more may expose other missing
folds. That means it does nothing for PR21929 yet:
https://bugs.llvm.org/show_bug.cgi?id=21929
Differential Revision: https://reviews.llvm.org/D67334
llvm-svn: 371610
| |
llvm-svn: 371604
| |
llvm-svn: 371603
| |
llvm-svn: 371602
| |
Configure TLI to say that r600/amdgpu does not have any library
functions, so that InstCombine does not do things like turning
sin(x)/cos(x) into a call to the library function @tan when sufficient
fast-math flags are present.
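For illustration (a sketch, not a test from this patch), this is roughly
the kind of rewrite being disabled: with fast-math flags and a TLI that
claims tan() is available, the division below could become a single @tan
call, a libcall that does not actually exist on these targets:
```
declare double @sin(double)
declare double @cos(double)

define double @sin_over_cos(double %x) {
  ; sin(x) / cos(x) could be rewritten as tan(x) when TLI reports
  ; that the tan libcall is available
  %s = call fast double @sin(double %x)
  %c = call fast double @cos(double %x)
  %r = fdiv fast double %s, %c
  ret double %r
}
```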
Differential Revision: https://reviews.llvm.org/D67406
Change-Id: I02f907d3e64832117ea9800e9f9285282856e5df
llvm-svn: 371592
| |
TryToSinkInstruction() has a bug: while updating debug info for a
sunk instruction, it could clone a dbg.declare intrinsic.
That is wrong; there should be only one dbg.declare.
The fix is to not clone the dbg.declare intrinsic and to update
its arguments so that they no longer point to the sunk instruction.
Differential Revision: https://reviews.llvm.org/D67217
llvm-svn: 371587
| |
I only want to ensure that %offset is non-zero there;
it doesn't matter how that info is conveyed.
As filed in PR43267, conveying it via an assumption does not work.
llvm-svn: 371550
| |
https://rise4fun.com/Alive/21b
llvm-svn: 371537
| |
llvm-svn: 371519
| |
This allows us to fold fmas that multiply by 0.0. The multiply-by-1.0
case is handled there as well. The fneg/fabs cases are not handled by
SimplifyFMulInst, so we need to keep them.
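A sketch of the multiply-by-1.0 case this enables (illustrative, not taken
from the patch): since x * 1.0 simplifies to x exactly, the fma can be
rewritten as a plain fadd with no change in rounding:
```
declare float @llvm.fma.f32(float, float, float)

define float @fma_mul_by_one(float %x, float %y) {
  ; x * 1.0 is exact, so this is equivalent to 'fadd float %x, %y'
  %r = call float @llvm.fma.f32(float %x, float 1.0, float %y)
  ret float %r
}
```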
Reviewers: spatel, anemet, lebedev.ri
Reviewed By: spatel
Differential Revision: https://reviews.llvm.org/D67351
llvm-svn: 371518
| |
llvm-svn: 371517
| |
llvm-svn: 371401
| |
(PR43251)
https://rise4fun.com/Alive/kHq
https://bugs.llvm.org/show_bug.cgi?id=43251
llvm-svn: 371352
| |
llvm-svn: 371348
| |
This is similar to the existing fold for splats added with:
rL365379
If we can adjust the shuffle mask to include another element
in an identity mask (if it changes vector length, that's an
extract/insert subvector operation in the backend), then that
can eliminate extractelement/insertelement pairs in IR.
All targets are expected to lower shuffles with identity masks
efficiently.
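An illustrative sketch of the pattern (not a test from this commit; the
function name is made up): the extract/insert pair below only re-creates a
lane that the shuffle's source already provides, so widening the identity
mask to cover that lane makes the pair dead:
```
define <4 x float> @widen_identity_mask(<4 x float> %x) {
  %s = shufflevector <4 x float> %x, <4 x float> undef, <4 x i32> <i32 0, i32 1, i32 undef, i32 3>
  %e = extractelement <4 x float> %x, i32 2
  %r = insertelement <4 x float> %s, float %e, i32 2
  ; after widening the mask to <0, 1, 2, 3>, %e and the insert are unused
  ret <4 x float> %r
}
```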
llvm-svn: 371340
| |
Summary:
This isn't an important optimization at all... We're already doing:
pow(x, 0.0) -> 1.0
My patch merely teaches instcombine that -0.0 does the same.
However, doing this fixes an AMAZING bug! Compile this program:
  extern "C" double pow(double, double);
  double boom(double base) {
    return pow(base, -0.0);
  }
With:
  clang++ ~/Desktop/fast-math.cpp -ffast-math -O2 -S
And clang will crash with a signal. Wow, fast math is so fast it ICEs the
compiler! Arguably, the generated math is infinitely fast.
What's actually happening is that we recurse infinitely in getPow. In debug we
hit its assertion:
assert(Exp != 0 && "Incorrect exponent 0 not handled");
We avoid this entire mess if we instead recognize that an exponent of
positive or negative zero yields 1.0.
A separate commit, r371221, fixed the same problem. This only contains the added
tests.
<rdar://problem/54598300>
Reviewers: scanon
Subscribers: hiraditya, jkorous, dexonsmith, ributzka, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D67248
llvm-svn: 371224
| |
https://bugs.llvm.org/show_bug.cgi?id=43233
llvm-svn: 371221
| |
llvm-svn: 371146
| |
patterns have full test coverage
llvm-svn: 371108
| |
overflow' check
A follow-up for r329011.
This may be changed to produce @llvm.sub.with.overflow in a later patch,
but for now just make things more consistent overall.
A few observations stem from this:
* There does not seem to be a similar one-instruction fold for uadd-overflow
* I'm not sure we'll want to canonicalize `B u> A` as `usub.with.overflow`:
since the `icmp` here no longer refers to `sub`,
reconstructing `usub.with.overflow` will be problematic,
and will likely require a standalone pass (similar to DivRemPairs).
https://rise4fun.com/Alive/Zqs
Name: (A - B) u> A --> B u> A
%t0 = sub i8 %A, %B
%r = icmp ugt i8 %t0, %A
=>
%r = icmp ugt i8 %B, %A
Name: (A - B) u<= A --> B u<= A
%t0 = sub i8 %A, %B
%r = icmp ule i8 %t0, %A
=>
%r = icmp ule i8 %B, %A
Name: C u< (C - D) --> C u< D
%t0 = sub i8 %C, %D
%r = icmp ult i8 %C, %t0
=>
%r = icmp ult i8 %C, %D
Name: C u>= (C - D) --> C u>= D
%t0 = sub i8 %C, %D
%r = icmp uge i8 %C, %t0
=>
%r = icmp uge i8 %C, %D
llvm-svn: 371101
| |
overflow' check
A follow-up for r342004.
This will be changed to produce @llvm.add.with.overflow in a later patch,
but for now just make things more consistent overall.
https://rise4fun.com/Alive/qxE
Name: (Op1 + X) u< Op1 --> ~Op1 u< X
%t0 = add i8 %Op1, %X
%r = icmp ult i8 %t0, %Op1
=>
%n = xor i8 %Op1, -1
%r = icmp ult i8 %n, %X
Name: (Op1 + X) u>= Op1 --> ~Op1 u>= X
%t0 = add i8 %Op1, %X
%r = icmp uge i8 %t0, %Op1
=>
%n = xor i8 %Op1, -1
%r = icmp uge i8 %n, %X
;-------------------------------------------------------------------------------
Name: Op0 u> (Op0 + X) --> X u> ~Op0
%t0 = add i8 %Op0, %X
%r = icmp ugt i8 %Op0, %t0
=>
%n = xor i8 %Op0, -1
%r = icmp ugt i8 %X, %n
Name: Op0 u<= (Op0 + X) --> X u<= ~Op0
%t0 = add i8 %Op0, %X
%r = icmp ule i8 %Op0, %t0
=>
%n = xor i8 %Op0, -1
%r = icmp ule i8 %X, %n
llvm-svn: 371100
| |
----------------------------------------
Name: unsigned sub, overflow, v0
%sub = sub i8 %x, %y
%ov = icmp ugt i8 %sub, %x
=>
%agg = usub_overflow i8 %x, %y
%sub = extractvalue {i8, i1} %agg, 0
%ov = extractvalue {i8, i1} %agg, 1
Done: 1
Optimization is correct!
----------------------------------------
Name: unsigned sub, no overflow, v0
%sub = sub i8 %x, %y
%ov = icmp ule i8 %sub, %x
=>
%agg = usub_overflow i8 %x, %y
%sub = extractvalue {i8, i1} %agg, 0
%not.ov = extractvalue {i8, i1} %agg, 1
%ov = xor %not.ov, -1
Done: 1
Optimization is correct!
llvm-svn: 371099
| |
----------------------------------------
Name: unsigned add, overflow, v0
%add = add i8 %x, %y
%ov = icmp ult i8 %add, %x
=>
%agg = uadd_overflow i8 %x, %y
%add = extractvalue {i8, i1} %agg, 0
%ov = extractvalue {i8, i1} %agg, 1
Done: 1
Optimization is correct!
----------------------------------------
Name: unsigned add, overflow, v1
%add = add i8 %x, %y
%ov = icmp ult i8 %add, %y
=>
%agg = uadd_overflow i8 %x, %y
%add = extractvalue {i8, i1} %agg, 0
%ov = extractvalue {i8, i1} %agg, 1
Done: 1
Optimization is correct!
----------------------------------------
Name: unsigned add, no overflow, v0
%add = add i8 %x, %y
%ov = icmp uge i8 %add, %x
=>
%agg = uadd_overflow i8 %x, %y
%add = extractvalue {i8, i1} %agg, 0
%not.ov = extractvalue {i8, i1} %agg, 1
%ov = xor %not.ov, -1
Done: 1
Optimization is correct!
----------------------------------------
Name: unsigned add, no overflow, v1
%add = add i8 %x, %y
%ov = icmp uge i8 %add, %y
=>
%agg = uadd_overflow i8 %x, %y
%add = extractvalue {i8, i1} %agg, 0
%not.ov = extractvalue {i8, i1} %agg, 1
%ov = xor %not.ov, -1
Done: 1
Optimization is correct!
llvm-svn: 371098
| |
Add more test cases for simplifying `log()`.
llvm-svn: 370966
| |
Summary:
```
Name: sub(xor(x, y), or(x, y)) -> neg(and(x, y))
%or = or i32 %y, %x
%xor = xor i32 %x, %y
%sub = sub i32 %xor, %or
=>
%sub1 = and i32 %x, %y
%sub = sub i32 0, %sub1
Optimization: sub(xor(x, y), or(x, y)) -> neg(and(x, y))
Done: 1
Optimization is correct!
```
https://rise4fun.com/Alive/8OI
Reviewers: lebedev.ri
Reviewed By: lebedev.ri
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D67188
llvm-svn: 370945
| |
llvm-svn: 370941
| |
llvm-svn: 370939
| |
Summary:
```
Name: sub(and(x, y), or(x, y)) -> neg(xor(x, y))
%or = or i32 %y, %x
%and = and i32 %x, %y
%sub = sub i32 %and, %or
=>
%sub1 = xor i32 %x, %y
%sub = sub i32 0, %sub1
Optimization: sub(and(x, y), or(x, y)) -> neg(xor(x, y))
Done: 1
Optimization is correct!
```
https://rise4fun.com/Alive/VI6
Found by @lebedev.ri, who also authored the proof.
Reviewers: lebedev.ri, spatel
Reviewed By: lebedev.ri
Subscribers: llvm-commits, lebedev.ri
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D67155
llvm-svn: 370934
| |
The SROA pass processes debug info incorrectly if applied twice.
Specifically, after SROA runs the first time, instcombine converts
dbg.declare intrinsics into dbg.value. Inlining creates new opportunities
for SROA, so it is called again. This time it does not correctly handle
the previously inserted dbg.value intrinsics.
Differential Revision: https://reviews.llvm.org/D64595
llvm-svn: 370906
| |
llvm-svn: 370901
| |
llvm-svn: 370890
| |
llvm-svn: 370888