Commit messages

llvm-svn: 303178
llvm-svn: 303176
llvm-svn: 303174
Summary:
The RewritePHIs algorithm used in building the CoroFrame inserts a placeholder
```
%placeholder = phi [%val]
```
on every edge leading to a block that starts with a PHI node with multiple incoming edges,
so that if one of the incoming values was spilled and needs to be reloaded, we have a
place to insert the reload. We use the SplitEdge helper function to split the incoming edge.
SplitEdge does not deal with unwind edges coming into a block with an EHPad.
This patch adds an ehAwareSplitEdge function that can correctly split the unwind edge.
For landing pads, we clone the landing pad into every edge block and replace the original
landing pad with a PHI collecting the values from all incoming landing pads.
For WinEH pads, we keep the original EHPad in place and insert cleanuppad/cleanupret in the
edge blocks.
Reviewers: majnemer, rnk
Reviewed By: majnemer
Subscribers: EricWF, llvm-commits
Differential Revision: https://reviews.llvm.org/D31845
llvm-svn: 303172
It's breaking the bots.
llvm-svn: 303142
Fixes PR32945.
Differential Revision: https://reviews.llvm.org/D33226
llvm-svn: 303141
llvm-svn: 303133
llvm-svn: 303127
There's no need (and it's a bit incorrect) to mask off the high bits of the
register reference when describing a simple bool value.
Reviewers: aprantl
Differential Revision: https://reviews.llvm.org/D31062
llvm-svn: 303117
ARM Neon has native support for half-sized vector registers (64 bits). This
is beneficial, for example, for 2D and 3D graphics. This patch adds the option
to lower MinVecRegSize from 128 via TTI in the SLP Vectorizer.
*** Performance Analysis
This change was motivated by some internal benchmarks but it is also
beneficial on SPEC and the LLVM testsuite.
The results are with -O3 and PGO. A negative percentage is an improvement.
The testsuite was run with a sample size of 4.
** SPEC
* CFP2006/482.sphinx3 -3.34%
A pretty hot loop is SLP vectorized resulting in nice instruction reduction.
This used to be a +22% regression before rL299482.
* CFP2000/177.mesa -3.34%
* CINT2000/256.bzip2 +6.97%
My current plan is to extend the fix in rL299482 to i16 which brings the
regression down to +2.5%. There are also other problems with the codegen in
this loop so there is further room for improvement.
** LLVM testsuite
* SingleSource/Benchmarks/Misc/ReedSolomon -10.75%
There are multiple small SLP vectorizations outside the hot code. It's a bit
surprising that it adds up to 10%. Some of this may be code-layout noise.
* MultiSource/Benchmarks/VersaBench/beamformer/beamformer -8.40%
The opt-viewer screenshot can be seen at F3218284. We start at a colder store
but the tree leads us into the hottest loop.
* MultiSource/Applications/lambda-0.1.3/lambda -2.68%
* MultiSource/Benchmarks/Bullet/bullet -2.18%
This is using 3D vectors.
* SingleSource/Benchmarks/Shootout-C++/Shootout-C++-lists +6.67%
Noise, binary is unchanged.
* MultiSource/Benchmarks/Ptrdist/anagram/anagram +4.90%
There is an additional SLP in the cold code. The test runs for ~1sec and
prints out over 2000 lines. This is most likely noise.
* MultiSource/Applications/aha/aha +1.63%
* MultiSource/Applications/JM/lencod/lencod +1.41%
* SingleSource/Benchmarks/Misc/richards_benchmark +1.15%
Differential Revision: https://reviews.llvm.org/D31965
llvm-svn: 303116
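A hedged C-level illustration (my own example, not taken from the benchmarks above) of the kind of half-width SLP opportunity this enables:
```
/* Two parallel float operations that fit a single 64-bit NEON D register.
 * With the default MinVecRegSize of 128 the SLP vectorizer skips them;
 * lowering MinVecRegSize to 64 via TTI lets the two adds become one
 * <2 x float> add. */
typedef struct { float x, y; } vec2;

vec2 vec2_add(vec2 a, vec2 b) {
  vec2 r;
  r.x = a.x + b.x;
  r.y = a.y + b.y;
  return r;
}
```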
This caused PR33053.
Original commit message:
> The new experimental reduction intrinsics can now be used, so I'm enabling this
> for AArch64. We will need this for SVE anyway, so it makes sense to do this for
> NEON reductions as well.
>
> The existing code to match shufflevector patterns are replaced with a direct
> lowering of the reductions to AArch64-specific nodes. Tests updated with the
> new, simpler, representation.
>
> Differential Revision: https://reviews.llvm.org/D32247
llvm-svn: 303115
This is the InstCombine counterpart to D32954.
I added some comments about the code duplication in:
rL302436
Alive-based verification:
http://rise4fun.com/Alive/dPw
This is a 2nd fix for the problem reported in:
https://bugs.llvm.org/show_bug.cgi?id=32949
Differential Revision: https://reviews.llvm.org/D32970
llvm-svn: 303105
These folds were introduced with https://reviews.llvm.org/rL127064 as part of solving:
https://bugs.llvm.org/show_bug.cgi?id=9343
As shown here:
http://rise4fun.com/Alive/C8
...however, the sdiv exact case needs a stronger predicate.
I opted for duplicated code instead of adding another fallthrough because I think that's
easier to read (and edit in case we need/want to restrict/loosen the predicates any more).
This should fix:
https://bugs.llvm.org/show_bug.cgi?id=32949
https://bugs.llvm.org/show_bug.cgi?id=32948
Differential Revision: https://reviews.llvm.org/D32954
llvm-svn: 303104
Summary:
The following loops should be recognized:
i = 0;
while (n) {
n = n >> 1;
i++;
body();
}
use(i);
And replaced with builtin_ctlz(n) if body() is empty, or, for CPUs that have
a CTLZ instruction, converted to a countable loop:
for (j = 0; j < builtin_ctlz(n); j++) {
n = n >> 1;
i++;
body();
}
use(builtin_ctlz(n));
Reviewers: rengolin, joerg
Differential Revision: http://reviews.llvm.org/D32605
From: Evgeny Stupachenko <evstupac@gmail.com>
llvm-svn: 303102
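A hedged C sketch (illustrative only, not the pass's output) of the relationship the idiom relies on: the shift count computed by the loop above is the bit length of n, i.e. the bit width minus the leading-zero count.
```
/* Assumes a 32-bit unsigned type and a compiler providing __builtin_clz
 * (GCC/Clang).  __builtin_clz(0) is undefined, hence the guard. */
#include <assert.h>

static unsigned bit_length_loop(unsigned n) {
  unsigned i = 0;
  while (n) {
    n >>= 1;
    i++;
  }
  return i;
}

static unsigned bit_length_clz(unsigned n) {
  return n ? 32u - (unsigned)__builtin_clz(n) : 0u;
}

int main(void) {
  for (unsigned n = 0; n < 100000u; n++)
    assert(bit_length_loop(n) == bit_length_clz(n));
  return 0;
}
```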
verifyMemoryCongruency() filters out trivially dead MemoryDef(s),
as we find them immediately dead, before moving from TOP to a new
congruence class.
This fixes the same problem for PHIs, by skipping MemoryPhis if all of
their operands are dead.
Differential Revision: https://reviews.llvm.org/D33044
llvm-svn: 303100
add/sub/mul
llvm-svn: 303074
llvm-svn: 303069
One didn't correctly find the regex variable; the other still had a RUN
line for FNOBUILTIN-checks, which weren't copied to the file.
llvm-svn: 303025
its commuted variants.
We already had (A & ~B) | (A ^ B), but we missed the cases where the not was part of the xor.
llvm-svn: 303004
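A hedged brute-force check of the base identity named above (my own sanity check, not the patch's code; the commuted and not-inside-the-xor variants aren't reproduced here):
```
/* Verifies (A & ~B) | (A ^ B) == A ^ B over all 8-bit values. */
#include <assert.h>
#include <stdint.h>

int main(void) {
  for (unsigned a = 0; a < 256; a++)
    for (unsigned b = 0; b < 256; b++) {
      uint8_t A = (uint8_t)a, B = (uint8_t)b;
      assert((uint8_t)((A & (uint8_t)~B) | (A ^ B)) == (uint8_t)(A ^ B));
    }
  return 0;
}
```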
llvm-svn: 303003
llvm-svn: 303000
The Loop vectorizer pass introduced an undef value while fixing up the output of LCSSA form.
Here it is:
before: %e.0.ph = phi i32 [ 0, %for.inc.2.i ]
after: %e.0.ph = phi i32 [ 0, %for.inc.2.i ], [ undef, %middle.block ]
and after this change we have:
%e.0.ph = phi i32 [ 0, %for.inc.2.i ]
%e.0.ph = phi i32 [ 0, %for.inc.2.i ], [ 0, %middle.block ]
Committed on behalf of @dtemirbulatov
Differential Revision: https://reviews.llvm.org/D33055
llvm-svn: 302988
something changed in the initial Worklist creation
Summary:
If building the worklist causes an IR change, that change flag currently factors into the flag for running another iteration of the loop, but only changes made during processing should trigger another iteration.
This patch captures the worklist-creation change flag in the outside-the-loop flag currently used for DbgDeclares and only sends that flag up to the caller; rerunning the loop now depends only on IC.run().
This uses the debug output of InstCombine to determine whether one or two iterations run. I couldn't think of a better way to detect it, since the spurious second iteration shouldn't make any visible changes, just wasted computation.
I can do a pre-commit of the test case with the CHECK-NOT as a CHECK if this is an OK way to check this.
This is a subset of D31678 as I'm still not sure how to verify the analysis behavior for that.
Reviewers: davide, majnemer, spatel, chandlerc
Reviewed By: davide
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D32453
llvm-svn: 302982
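A hedged, simplified sketch of the driver-loop behaviour being fixed (hypothetical helper names, not InstCombine's actual API): the flag produced while building the worklist feeds the "did anything ever change?" result but not the "run another iteration?" decision.
```
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for worklist construction and for IC.run();
 * the names and the canned return values are illustrative only. */
static int buildChanges = 1, processChanges = 2;
static bool build_worklist(void)   { return buildChanges-- > 0; }
static bool process_worklist(void) { return processChanges-- > 0; }

static bool run_driver(void) {
  bool everChanged = false;
  bool iterChanged;
  do {
    /* Changes made while building the worklist are reported to the caller... */
    everChanged |= build_worklist();
    /* ...but only changes made during processing decide whether to iterate again. */
    iterChanged = process_worklist();
    everChanged |= iterChanged;
  } while (iterChanged);
  return everChanged;
}

int main(void) {
  printf("changed: %d\n", run_driver());
  return 0;
}
```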
This allows us to mark this as `REQUIRES: x86`, since it uses x86
target specific intrinsics.
llvm-svn: 302980
Tests with target intrinsics are inherently target specific, so it
doesn't actually make sense to run them if we've excluded their
target.
llvm-svn: 302979
llvm-svn: 302977
I bet the change is correct, but this test seems to expose some underlying
problem that manifests only on some buildbots, and I'm not able to reproduce
it locally. Unfortunately I can't debug right now, but I don't want to annoy
people with spurious failures, so I'll XFAIL until I can take a look (over
the weekend).
llvm-svn: 302976
Implemented frequency-based cost/saving analysis
and related options.
The pass is now in a state ready to be turned on
in the pipeline (in a follow-up).
Differential Revision: http://reviews.llvm.org/D32783
llvm-svn: 302967
to SVML routines
Patch by Chris Chrulski
Differential Revision: https://reviews.llvm.org/D31789
llvm-svn: 302957
generated from -ffast-math
Patch by Chris Chrulski
Differential Revision: https://reviews.llvm.org/D31788
llvm-svn: 302956
math-finite.h that create '__<func>_finite' as functions
Patch by Chris Chrulski
Differential Revision: https://reviews.llvm.org/D31787
llvm-svn: 302955
Found by inspection.
llvm-svn: 302909
This code was missing a check for stores, so we thought the congruence
class didn't have any memory members and reset the memory leader.
Differential Revision: https://reviews.llvm.org/D33056
llvm-svn: 302905
invariant PHI inputs and to rewrite PHI nodes during the actual
unswitching.
The checking is quite easy, but rewriting the PHI nodes is somewhat
surprisingly challenging. This should handle both branches and switches.
I think this is now a full-featured trivial unswitcher, and more
full-featured than the trivial cases in the old pass, while still being (IMO)
somewhat simpler in how it works.
Next up is to verify its correctness in more widespread testing, and
then to add non-trivial unswitching.
Thanks to Davide and Sanjoy for the excellent review. There is one
remaining question that I may address in a follow-up patch (see the
review thread for details) but it isn't related to the functionality
specifically.
Differential Revision: https://reviews.llvm.org/D32699
llvm-svn: 302867
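For context, a hedged C-level sketch of a trivially unswitchable loop (my own example, not from the patch): the branch condition is loop-invariant and one side of the branch leaves the loop, so the test can be hoisted in front of the loop without cloning its body.
```
/* 'flag' never changes inside the loop, and the false side exits the loop
 * entirely, so the check can be hoisted before the loop ("trivial"
 * unswitching).  Loop-exit values such as 'sum' flow through exit-block PHI
 * nodes, which is my reading of the PHI rewriting described above. */
int sum_if_enabled(const int *a, int n, int flag) {
  int sum = 0;
  for (int i = 0; i < n; i++) {
    if (!flag)
      return sum;
    sum += a[i];
  }
  return sum;
}
```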
Summary:
Don't use the metadata on call instructions for determining hotness
unless we are in sample PGO mode, where it is needed because profile
counts are not accurate. In instrumentation mode this is not necessary
and does more harm than good when calls have VP metadata that hasn't
been properly scaled after transformations or dropped after constant
prop based devirtualization (both should be fixed, but we don't need
to do this in the first place for instrumentation PGO).
This required adjusting a number of tests to distinguish between sample
and instrumentation PGO handling, and to add in profile summary metadata
so that getProfileCount can get the summary.
Reviewers: davidxl, danielcdh
Subscribers: aemerson, rengolin, mehdi_amini, Prazek, llvm-commits
Differential Revision: https://reviews.llvm.org/D32877
llvm-svn: 302844
I ran the test-suite (including SPEC 2006) in PGO mode comparing cold
thresholds of 225 and 45. Here are some stats on the text size:
Out of 904 tests that ran, 197 see a change in text size. The average
text size reduction (of all the 904 binaries) is 1.07%. Of the 197
binaries, 19 see a text size increase, as high as 18%, but most of them
are small single source benchmarks. There are 3 multisource benchmarks
with a >0.5% size increase (0.7, 1.3 and 2.1 are their % increases). On
the other side of the spectrum, 31 benchmarks see >10% size reduction
and 6 of them are MultiSource.
I haven't run the test-suite with other values of inlinecold-threshold.
Since we have a cold callsite threshold of 45, I picked this value.
Differential revision: https://reviews.llvm.org/D33106
llvm-svn: 302829
The approach I followed was to emit the remark after getTreeCost concludes
that SLP is profitable. I initially tried emitting them after the
vectorizeRootInstruction calls in vectorizeChainsInBlock, but I vaguely
remember missing a few cases, for example in HorizontalReduction::tryToReduce.
ORE is placed in BoUpSLP so that it's available from everywhere (notably
HorizontalReduction::tryToReduce).
We use the first instruction in the root bundle as the locator for the remark.
In order to get a sense of how far the tree spans, I've included the size of
the tree in the remark. This is not perfect, of course, but it gives you at
least a rough idea about the tree. Then you can follow up with -view-slp-tree
to really see the actual tree.
llvm-svn: 302811
// (X ^ C1) | C2 --> (X | C2) ^ (C1&~C2)
This canonicalization was added at:
https://reviews.llvm.org/rL7264
By moving xors out/down, we can more easily combine constants. I'm adding
tests that do not change with this patch, so we can verify that those kinds
of transforms are still happening.
This is no-functional-change-intended because there's a later fold:
// (X^C)|Y -> (X|Y)^C iff Y&C == 0
...and demanded-bits appears to guarantee that any fold that would have
hit the fold we're removing here would be caught by that 2nd fold.
Similar reasoning was used in:
https://reviews.llvm.org/rL299384
The larger motivation for removing this code is that it could interfere with
the fix for PR32706:
https://bugs.llvm.org/show_bug.cgi?id=32706
Ie, we're not checking if the 'xor' is actually a 'not', so we could reverse
a 'not' optimization and cause an infinite loop by altering an 'xor X, -1'.
Differential Revision: https://reviews.llvm.org/D33050
llvm-svn: 302733
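A hedged exhaustive check of the two folds quoted above (demonstrating only the bitwise algebra, not InstCombine's implementation):
```
/* Verified over all 8-bit values:
 *   (X ^ C1) | C2 == (X | C2) ^ (C1 & ~C2)          -- the removed canonicalization
 *   (X ^ C)  | Y  == (X | Y) ^ C  when Y & C == 0   -- the later fold that catches it */
#include <assert.h>
#include <stdint.h>

int main(void) {
  for (unsigned x = 0; x < 256; x++)
    for (unsigned c1 = 0; c1 < 256; c1++)
      for (unsigned c2 = 0; c2 < 256; c2++) {
        uint8_t X = (uint8_t)x, C1 = (uint8_t)c1, C2 = (uint8_t)c2;
        assert((uint8_t)((X ^ C1) | C2) ==
               (uint8_t)((X | C2) ^ (C1 & (uint8_t)~C2)));
        if ((C2 & C1) == 0)   /* C2 plays the role of Y here */
          assert((uint8_t)((X ^ C1) | C2) == (uint8_t)((X | C2) ^ C1));
      }
  return 0;
}
```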
Surprisingly, I don't think these are redundant for InstSimplify.
They were just misplaced as InstCombine tests.
llvm-svn: 302684
The new experimental reduction intrinsics can now be used, so I'm enabling this
for AArch64. We will need this for SVE anyway, so it makes sense to do this for
NEON reductions as well.
The existing code to match shufflevector patterns is replaced with a direct
lowering of the reductions to AArch64-specific nodes. Tests updated with the
new, simpler, representation.
Differential Revision: https://reviews.llvm.org/D32247
llvm-svn: 302678
The first test in this file is duplicated exactly in and.ll -> test33.
We have commuted and vector variants there too.
The second test is a composite of 2 folds. The first fold is tested
independently in add.ll -> flip_and_mask (including vector variant).
After that transform fires, the IR is identical to the first transform.
llvm-svn: 302676
The script at utils/update_test_checks.py has (had?) a bug when variables
start with the same sequence of letters (clearly, not all of the time).
llvm-svn: 302674
llvm-svn: 302669
This is another step towards favoring 'not' ops over random 'xor' in IR:
https://bugs.llvm.org/show_bug.cgi?id=32706
This transformation may have occurred in longer IR sequences using computeKnownBits,
but that could be much more expensive to calculate.
As the scalar result shows, we do not currently favor 'not' in all cases. The 'not'
created by the transform is transformed again (unnecessarily). Vectors don't have
this problem because vectors are (wrongly) excluded from several other combines.
llvm-svn: 302659
This pass doesn't correctly handle testing for when it is legal to hoist
arbitrary instructions. The whitelist happens to make it safe, so before
it is removed the pass's legality checks will need to be enhanced.
Details have been added to the code review thread for the patch.
llvm-svn: 302640
llvm-svn: 302599
Summary:
This fixes the immediate crash caused by introducing an incorrect inttoptr
before attempting the conversion. There may still be a legality
check missing somewhere earlier for non-integral pointers, but this change
seems necessary in any case.
Reviewers: sanjoy, dberlin
Reviewed By: dberlin
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D32623
llvm-svn: 302587
llvm-svn: 302585
The way we currently define congruency for two PHIExpression(s) is:
1) The operands to the phi functions are congruent
2) The PHIs are defined in the same BasicBlock.
NewGVN works under the assumption that phi operands are in predecessor
order, or at least in some consistent order. OTOH, this is valid IR:
patatino:
%meh = phi i16 [ %0, %winky ], [ %conv1, %tinky ]
%banana = phi i16 [ %0, %tinky ], [ %conv1, %winky ]
br label %end
and the in-memory representations of the two SSA registers have an
inconsistent order. This violation of NewGVN's assumptions results in
two PHIs being found congruent when they're not. While we think it's useful
to always enforce a consistent order, let's fix this in NewGVN by
sorting uses in predecessor order before creating a PHI expression.
Differential Revision: https://reviews.llvm.org/D32990
llvm-svn: 302552
The comment says to avoid the case where zero bits are shifted into the truncated value,
but the code checks that the shift is smaller than the truncated value instead of the
number of bits added by the sign extension. Fixing this allows a shift by more than the
value size to be introduced, which is undefined behavior, so the shift is capped at the
value size minus one, which has the expected behavior of filling the value with the sign
bit.
Patch by Jacob Young!
Differential Revision: https://reviews.llvm.org/D32285
llvm-svn: 302548
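A hedged aside on the capping behaviour in the last sentence above (my own illustration; signed right shift being arithmetic is implementation-defined in C, though GCC and Clang implement it that way):
```
/* An arithmetic shift right by (bit width - 1) leaves every bit equal to the
 * sign bit: 0 for non-negative values, -1 for negative ones. */
#include <assert.h>
#include <stdint.h>

int main(void) {
  int32_t pos = 12345, neg = -12345;
  assert((pos >> 31) == 0);
  assert((neg >> 31) == -1);
  return 0;
}
```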