Commit log (newest first)
llvm-svn: 304784

llvm::isKnownNonEqual handles vectors.
isKnownNonEqual is called a little earlier in this function and can handle the case that we were checking here as well as more complex cases.
llvm-svn: 304775

computeKnownBits and isKnownNonZero calls this code relies on should work fine for vectors.
This will be used by another commit to remove some code from InstSimplify that is redundant for scalars, but was needed for vectors due to this issue.
llvm-svn: 304774

computed result type instead of hardcoding to i1. NFC
Currently, isKnownNonEqual punts on vectors so the hardcoding to i1 doesn't matter. But I plan to fix that in a future patch.
llvm-svn: 304773

object instead of taking one by reference. NFC
llvm-svn: 304772

llvm-svn: 304771

llvm-svn: 304770

Summary:
The patch moves the LSR cost comparison to the target part.
Reviewers: qcolombet
Differential Revision: http://reviews.llvm.org/D30561
From: Evgeny Stupachenko <evstupac@gmail.com>
llvm-svn: 304750

llvm-svn: 304692

the same condition. NFC
llvm-svn: 304681

really shouldn't fall through.
This is actually NFC because the next case starts with the same if statement as this case did, so the result will be the same and it will fall through to the end of the switch. But there's no reason to rely on that, so we should just break.
llvm-svn: 304680

intrinsic. The second argument is not a vector, so it needs special treatment.
llvm-svn: 304679

to understand that the second argument is still a scalar.
llvm-svn: 304668

IntegerType to call getBitWidth. NFC
llvm-svn: 304656

Instruction*. Makes getOpcode return the appropriate enum without a cast. NFC
llvm-svn: 304655

Summary:
This is to enable the new switch inline cost heuristic (r301649) by removing the
old heuristic as well as the flag itself.
In my experiments on the LLVM test suite and spec2000/2006, a +17.82% performance
gain and an 8% code size reduction were observed in spec2000/vertex with O3 LTO on
AArch64. No significant code size or performance regressions were found in
O3/O2/Os, and no significant complaints were reported on the llvm-dev thread.
Reviewers: hans, chandlerc, eraman, haicheng, mcrosier, bmakam, eastig, ddibyend, echristo
Reviewed By: echristo
Subscribers: javed.absar, kristof.beyls, echristo, aemerson, rengolin, mehdi_amini
Differential Revision: https://reviews.llvm.org/D32653
llvm-svn: 304594

llvm-svn: 304567

of Instruction*. This removes a cast of getOpcode to BinaryOps.
llvm-svn: 304563

llvm-svn: 304560

null, (inttoptr x) as well as it handles icmp (inttoptr x), null
Summary:
The constant folding code currently assumes that the constant expression will always be on the left and the simple null will be on the right. But that's not true at least on the path from InstSimplify.
This patch adds support to ConstantFolding to detect the reversed case.
Reviewers: spatel, dberlin, majnemer, davide, joey
Reviewed By: joey
Subscribers: joey, llvm-commits
Differential Revision: https://reviews.llvm.org/D33801
llvm-svn: 304559
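As a minimal C++ sketch of the idea (the helper name foldICmpIntToPtrNull is hypothetical; the real logic lives in ConstantFolding.cpp and differs in detail), the reversed case can be handled by canonicalizing the null to the right-hand side and swapping the predicate before running the existing fold:

    #include "llvm/IR/Constants.h"
    #include "llvm/IR/InstrTypes.h"
    #include "llvm/IR/Instruction.h"
    #include <utility>

    using namespace llvm;

    // Hypothetical helper: canonicalize "icmp null, (inttoptr x)" to the
    // already-handled "icmp (inttoptr x), null" form by swapping operands
    // and the predicate, then let the existing fold run.
    static Constant *foldICmpIntToPtrNull(CmpInst::Predicate Pred,
                                          Constant *LHS, Constant *RHS) {
      if (isa<ConstantPointerNull>(LHS)) {
        std::swap(LHS, RHS);
        Pred = CmpInst::getSwappedPredicate(Pred);
      }
      auto *CE = dyn_cast<ConstantExpr>(LHS);
      if (!CE || CE->getOpcode() != Instruction::IntToPtr ||
          !isa<ConstantPointerNull>(RHS))
        return nullptr; // Not the pattern; let other folds handle it.
      // ... the existing icmp (inttoptr x), null folding would go here ...
      return nullptr;
    }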
So far it would return true for the first, uncached query, and then cached queries would return false.
llvm-svn: 304545

This is necessary to get opt-bisect working with polly.
Differential Revision: https://reviews.llvm.org/D33751
llvm-svn: 304476

Summary:
Reduce min percent required for indirect call promotion from 33% to 30%,
which matches gcc's threshold and catches the same hot opportunities.
Reviewers: davidxl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D33798
llvm-svn: 304469

Replace GVFlags::LiveRoot with GVFlags::Live and use that instead of
all the DeadSymbols sets. This is a refactoring to make liveness
information available in the RegularLTO pipeline.
llvm-svn: 304466

These are no-ops when there are no invokes. We don't need to emit LSDAs
for them.
Fixes PR33220.
llvm-svn: 304367

llvm-svn: 304361

llvm-svn: 304358

llvm-svn: 304356

statement may fall through. NFC.
llvm-svn: 304340

This patch does an inline expansion of memcmp.
It changes the memcmp library call into an inline expansion when the size is
known at compile time and is under a target-specified threshold.
This expansion is implemented in CodeGenPrepare and expands into straight line
code. The target specifies a maximum load size and the expansion works by using
this size to load the two sources, compare, and exit early if a difference is
found. It also has a special case when the memcmp result is used in a compare
to zero equality.
Differential Revision: https://reviews.llvm.org/D28637
llvm-svn: 304313
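As a rough C++ illustration of the shape of code the expansion produces (a sketch only; the real lowering is emitted as straight-line IR in CodeGenPrepare, and the function name and chosen sizes here are made up), here is the idea for a known size of 16 bytes with an assumed maximum load size of 8 bytes:

    #include <cstdint>
    #include <cstring>

    // Compare two 16-byte buffers with two 8-byte loads and an early exit on
    // the first differing block. The real expansion byte-swaps on
    // little-endian targets so an unsigned compare reproduces memcmp's
    // ordering result.
    int memcmp16_expanded(const void *A, const void *B) {
      const unsigned char *PA = static_cast<const unsigned char *>(A);
      const unsigned char *PB = static_cast<const unsigned char *>(B);
      for (unsigned Off = 0; Off != 16; Off += 8) { // fully unrolled in practice
        std::uint64_t VA, VB;
        std::memcpy(&VA, PA + Off, 8); // one wide load per block
        std::memcpy(&VB, PB + Off, 8);
        if (VA != VB)
          return std::memcmp(PA + Off, PB + Off, 8); // difference found
      }
      return 0; // equal; the ==0 special case needs only the VA != VB tests
    }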
Thanks to Galina Kistanova for finding the missing break!
When trying to make a test for this, I realized our logic for handling
extractvalue/insertvalue/... is somewhat broken. This makes constructing
a test-case for this missing break nontrivial.
llvm-svn: 304275

already does them
llvm-svn: 304270

Params DT and LI are redundant, because these values are contained in fields anyway.
Differential Revision: https://reviews.llvm.org/D33668
llvm-svn: 304204

The optimistic delinearization implemented in LLVM detects array sizes by
looking for non-linear products between parameters and induction variables.
In OpenCL code, such products often look like:
A[get_global_id(0) * N + get_global_id(1)]
Hence, the IV is hidden in the get_global_id() call and consequently
delinearization would fail, as no induction variable is available that helps
us to identify N as an array size parameter.
We now use a very simple heuristic to change this. We assume that each parameter
that comes directly from a function call is a hidden induction variable. As
a result, we can delinearize the access above to:
A[get_global_id(0)][get_global_id(1)]
llvm-svn: 304073
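To make the example concrete, here is an illustrative C++ snippet (get_global_id is declared here only as a stand-in for the OpenCL builtin, and the function write_elem is made up):

    #include <cstddef>

    // Stand-in declaration for the OpenCL work-item builtin.
    extern std::size_t get_global_id(unsigned Dim);

    // The "induction variables" are hidden behind opaque calls, so without the
    // heuristic no IV is available whose product with N identifies N as an
    // array size. Treating each call result as a hidden IV lets the access be
    // delinearized as A[get_global_id(0)][get_global_id(1)].
    void write_elem(float *A, std::size_t N, float V) {
      A[get_global_id(0) * N + get_global_id(1)] = V;
    }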
Summary:
This fixes the introduction of an incorrect inttoptr/ptrtoint pair in
the included test case which makes use of non-integral pointers. I
suspect there are more cases like this left, but this takes care of
the one I was seeing at the moment.
Reviewers: sanjoy
Subscribers: mzolotukhin, llvm-commits
Differential Revision: https://reviews.llvm.org/D33129
llvm-svn: 304058

avoid duplicate work
Previously, we called simplifyPossiblyCastedAndOrOfICmps twice with the operands commuted, but the call to simplifyAndOrOfICmpsWithConstants further down already handles commuting and doesn't need to be called both ways.
This patch pushes double calls further down to just the individual routines that need to be called twice.
Differential Revision: https://reviews.llvm.org/D33603
llvm-svn: 304044

more like simplifyOrOfICmps. NFC
llvm-svn: 304023

This code was replicated two additional times to handle commuted cases, but I think a commutable matcher can take care of it.
Differential Revision: https://reviews.llvm.org/D33585
llvm-svn: 304022
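For reference, a small sketch of what a commutable matcher buys (illustrative only, not the committed change): m_c_And tries both operand orders itself, so one match call replaces two hand-written checks with swapped operands.

    #include "llvm/IR/PatternMatch.h"
    #include "llvm/IR/Value.h"

    using namespace llvm;
    using namespace llvm::PatternMatch;

    // Matches both (X & Other) and (Other & X) with a single pattern.
    static bool matchAndWithSpecificOperand(Value *V, Value *X, Value *&Other) {
      return match(V, m_c_And(m_Specific(X), m_Value(Other)));
    }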
C2) handling in order to support splat vectors.
The tests here have operands commuted to provide more coverage. I also commuted one of the instructions in the scalar tests so the 4 tests cover the 4 commuted variations.
Differential Revision: https://reviews.llvm.org/D33599
llvm-svn: 304021

The patch rL303730 was reverted because test lsr-expand-quadratic.ll failed on
many non-X86 configs with this patch. The reason for this is that the patch
makes a correctness fix that changes the optimizer's behavior for this test.
Without the change, LSR was making an overconfident simplification based on a
wrong SCEV. Apparently it did not need the IV analysis to do this. With the
change, it chose a different way to simplify (that wasn't so confident), and
this way required the IV analysis. Now, following the right execution path,
LSR tries to make a transformation relying on IV Users analysis. This analysis
is target-dependent due to this code:
    // LSR is not APInt clean, do not touch integers bigger than 64-bits.
    // Also avoid creating IVs of non-native types. For example, we don't want a
    // 64-bit IV in 32-bit code just because the loop has one 64-bit cast.
    uint64_t Width = SE->getTypeSizeInBits(I->getType());
    if (Width > 64 || !DL.isLegalInteger(Width))
      return false;
To make a proper transformation in this test case, the type i32 needs to be
legal for the specified data layout. When the test runs on some non-X86
configuration (e.g. pure ARM 64), opt gets confused by the specified target
and does not use it, rejecting the specified data layout as well. Instead,
it uses some default layout that does not treat i32 as a legal type
(currently the layout that is used when it is not specified does not have
legal types at all). As a result, the transformation we expect to happen does
not happen for this test.
This re-enabling patch does not have any source code changes compared to the
original patch rL303730. The only difference is that the failing test has been
moved to the X86 directory and now requires running on x86 only, to comply
with the specified target triple and data layout.
Differential Revision: https://reviews.llvm.org/D33543
llvm-svn: 303971

llvm-svn: 303968

llvm-svn: 303967

having it internally allocate the loop.
This is a much more flexible API and necessary in the new loop unswitch
to reasonably support both new and old PMs in common code. It also just
seems like a cleaner separation of concerns.
NFC, this should just be a pure refactoring.
Differential Revision: https://reviews.llvm.org/D33528
llvm-svn: 303834

the Zero or Undef is on the LHS.
Summary: This code was migrated from InstCombine a few years ago. InstCombine had nearby code that would move Constants to the RHS for these, but InstSimplify doesn't have such code on this path.
Reviewers: spatel, majnemer, davide
Reviewed By: spatel
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D33473
llvm-svn: 303774

version that returns the KnownBits object.
This continues the changes started when computeSignBit was replaced with this new version of computeKnownBits.
Differential Revision: https://reviews.llvm.org/D33431
llvm-svn: 303773
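A minimal usage sketch of the KnownBits-returning form (assuming the usual ValueTracking signature; the helper below is made up):

    #include "llvm/Analysis/ValueTracking.h"
    #include "llvm/IR/DataLayout.h"
    #include "llvm/IR/Value.h"
    #include "llvm/Support/KnownBits.h"

    using namespace llvm;

    // The older style filled two APInts (KnownZero/KnownOne) passed by
    // reference; the newer overload simply returns a KnownBits object.
    static bool isKnownNonNegativeViaKnownBits(const Value *V,
                                               const DataLayout &DL) {
      KnownBits Known = computeKnownBits(V, DL);
      return Known.isNonNegative(); // sign bit known to be zero
    }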
commuteKnownBits.
This is needed for an upcoming patch.
llvm-svn: 303772

This reverts commit r303730 because it broke all the buildbots.
llvm-svn: 303747

The loop vectorizer usually vectorizes any instruction it can and then
extracts the elements for a scalarized use. On SystemZ, all elements
containing addresses must be extracted into address registers (GRs). Since
this extraction is not free, it is better to have the address in a suitable
register to begin with. By forcing address arithmetic instructions and loads
of addresses to be scalar after vectorization, two benefits result:
* No need to extract the register
* LSR optimizations trigger (LSR isn't handling vector addresses currently)
Benchmarking shows improvements on SystemZ with this new behaviour.
Any other target could try this by returning false in the new hook
prefersVectorizedAddressing().
Review: Renato Golin, Elena Demikhovsky, Ulrich Weigand
https://reviews.llvm.org/D32422
llvm-svn: 303744
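A hedged sketch of how a pass might consult the new hook (the wrapper function is illustrative; only the hook name comes from the message above):

    #include "llvm/Analysis/TargetTransformInfo.h"

    using namespace llvm;

    // If the target reports that it does not prefer vectorized addressing,
    // a vectorizer can keep address computations and loads of addresses
    // scalar so they end up in ordinary (address) registers.
    static bool shouldKeepAddressComputationScalar(const TargetTransformInfo &TTI) {
      return !TTI.prefersVectorizedAddressing();
    }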
When folding arguments of an AddExpr or MulExpr with recurrences, we rely on the fact that
the loop of our base recurrence is the bottom-most in terms of domination. This assumption
may be broken by an expression which is treated as invariant, and which depends on a complex
Phi for which a SCEVUnknown was created. If such a Phi is a loop Phi, and this loop is lower than
the chosen AddRecExpr's loop, it is invalid to fold our expression with the recurrence.
Another reason why it might be invalid to fold a SCEVUnknown into a Phi start value is that, unlike
other SCEVs, SCEVUnknowns are sometimes position-bound. For example, here:
    for (...) { // loop
      phi = {A,+,B}
    }
    X = load ...
Folding phi + X into {A+X,+,B}<loop> makes no sense, because X does not exist and cannot
exist while we are iterating in the loop (this memory may not even be allocated or filled at that point).
It is only valid to make such a folding if X is defined before the loop; in that case the recurrence
{A+X,+,B}<loop> may exist.
This patch prohibits folding of SCEVUnknowns (and expressions that use them) into the start value of an
AddRecExpr if the instruction is dominated by the loop. Merging the dominating unknown values is still valid.
Some tests that relied on the fact that certain SCEVUnknowns should be folded into AddRecs are changed so
that they no longer expect such behavior.
llvm-svn: 303730

When presented with an icmp/select pair, we can end up asking what would happen
if we replaced one constant with another in an instruction. This is a mistake:
while non-constant Values could become a constant, constants cannot change, and
trying to do so can lead to completely invalid IR (a GEP referencing a
non-existent field in the original case).
llvm-svn: 303580