------------------------------------------------------------------------
Summary: We do not care about intrinsic calls when assigning discriminators.
Reviewers: davidxl, dnovillo
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D23212
llvm-svn: 277843
------------------------------------------------------------------------
This generated IR based on the order of evaluation, which is different
between GCC and Clang. As a result, you get bootstrap miscompares if you
compare a Clang built with GCC-built Clang against a Clang built with
Clang-built Clang. Diagnosing that made my head hurt.
This also reverts commit r277337, which "fixed" the test case.
llvm-svn: 277820
------------------------------------------------------------------------
Summary:
Turn (select C, (sext A), B) into (sext (select C, A, B')) when A is i1 and
B is a compatible constant; the same applies with zext in place of sext. The
result will then be further folded into logical operations.
The transformation would be valid for non-i1 types as well, but other parts of
InstCombine prefer to have sext from non-i1 as an operand of select.
Motivated by the shader compiler frontend in Mesa for AMDGPU, which emits i32
for boolean operations. With this change, the boolean logic is fully
recovered.
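As a minimal IR sketch (hypothetical values, not taken from the patch; here B = 0, so the matching i1 constant B' = false exists):

    %s = sext i1 %a to i32
    %r = select i1 %c, i32 %s, i32 0

becomes

    %t = select i1 %c, i1 %a, i1 false
    %r = sext i1 %t to i32

and the inner select then folds to %t = and i1 %c, %a.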
Reviewers: majnemer, spatel, tstellarAMD
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D22747
llvm-svn: 277801
------------------------------------------------------------------------
llvm-svn: 277793
------------------------------------------------------------------------
llvm-svn: 277792
------------------------------------------------------------------------
llvm-svn: 277786
------------------------------------------------------------------------
The patch splits a complex && if condition into easier-to-read and
easier-to-understand logic. The wrong early-exit condition was letting some
instructions pass through even though not all of their operands were
available when HoistingGeps was true.
Differential Revision: https://reviews.llvm.org/D23174
llvm-svn: 277785
------------------------------------------------------------------------
Add a generalized IRBuilderCallbackInserter, which is just given a
callback to execute after insertion. This can be used to get rid of
the custom inserter in InstCombine, which will in turn allow me to add
target specific InstCombineCalls API for intrinsics without horrible
layering violations.
llvm-svn: 277784
------------------------------------------------------------------------
Shifts with a uniform but non-constant count were considered very expensive to
vectorize, because the splat of the uniform count and the shift would tend to
appear in different blocks. That made the splat invisible to ISel, and we'd
scalarize the shift at codegen time.
Since r201655, CodeGenPrepare sinks those splats to be next to their use, and we
are able to select the appropriate vector shifts. This updates the cost model
to take this into account by making shifts by a uniform value cheap again.
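For reference, a shift by a uniform but non-constant amount looks roughly like this in IR (a hypothetical sketch; %amt is the scalar count):

    %ins = insertelement <4 x i32> undef, i32 %amt, i32 0
    %splat = shufflevector <4 x i32> %ins, <4 x i32> undef, <4 x i32> zeroinitializer
    %shifted = shl <4 x i32> %v, %splat

Once CodeGenPrepare sinks the splat next to the shl, ISel can select a single vector shift.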
Differential Revision: https://reviews.llvm.org/D23049
llvm-svn: 277782
------------------------------------------------------------------------
constant vectors
This concludes the splat vector enhancements for foldICmpEqualityWithConstant().
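As a hypothetical illustration (not taken from the tests), an equality fold that now also fires for splat vectors:

    %x2 = xor <2 x i32> %x, <i32 5, i32 5>
    %c  = icmp eq <2 x i32> %x2, <i32 3, i32 3>

folds to

    %c  = icmp eq <2 x i32> %x, <i32 6, i32 6>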
Other commits in this series:
https://reviews.llvm.org/rL277762
https://reviews.llvm.org/rL277752
https://reviews.llvm.org/rL277738
https://reviews.llvm.org/rL277731
https://reviews.llvm.org/rL277659
https://reviews.llvm.org/rL277638
https://reviews.llvm.org/rL277629
llvm-svn: 277779
------------------------------------------------------------------------
llvm-svn: 277767
------------------------------------------------------------------------
coro.destroy.
This is the fourth patch in the coroutine series. The CoroEarly pass now lowers coro.resume
and coro.destroy intrinsics by replacing them with an indirect call to an address
returned by the coro.subfn.addr intrinsic. This is done so that CGPassManager recognizes
devirtualization when CoroElide replaces a call to coro.subfn.addr with an appropriate
function address.
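Roughly, the lowering looks like this (a sketch in 2016-era typed-pointer IR; calling-convention and attribute details are elided, and the i8 index distinguishing resume from destroy is an assumption):

    call void @llvm.coro.resume(i8* %hdl)

becomes

    %addr = call i8* @llvm.coro.subfn.addr(i8* %hdl, i8 0)
    %fn = bitcast i8* %addr to void (i8*)*
    call void %fn(i8* %hdl)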
Patch by Gor Nishanov!
Differential Revision: https://reviews.llvm.org/D22998
llvm-svn: 277765
------------------------------------------------------------------------
constant vectors
llvm-svn: 277762
------------------------------------------------------------------------
constant vectors
llvm-svn: 277752
------------------------------------------------------------------------
constant vectors
I'm removing a misplaced pair of more specific folds from InstCombine in this patch as well,
so we know where those folds are happening in InstSimplify.
llvm-svn: 277738
------------------------------------------------------------------------
adjustments.
Summary:
TargetBaseAlign is no longer required since LSV checks whether the target allows misaligned accesses.
A constant defining a base alignment is still needed for stack accesses, where the alignment can be adjusted.
The previous patch (D22936) was reverted because tests were failing. This patch also fixes the cause of those failures:
- The x86 failing tests either did not have the right target or the right alignment.
- The NVPTX failing tests did not have the right alignment.
- The AMDGPU failing test (merge-stores) should allow vectorization with the given alignment, but the target info
considers <3 x i32> a non-standard type and gives up early. This patch removes that condition and only checks
for a maximum allowed size, relying on the next condition checking for %4 for correctness.
This should be revisited to include 3 x i32 as an MVT type (on arsenm's non-immediate todo list).
Note that checking the sizeInBits for an MVT is undefined (it leads to an assertion failure),
so we need to create an EVT; hence the interface change in allowsMisaligned to include the Context.
Reviewers: arsenm, jlebar, tstellarAMD
Subscribers: jholewinski, arsenm, mzolotukhin, llvm-commits
Differential Revision: https://reviews.llvm.org/D23068
llvm-svn: 277735
------------------------------------------------------------------------
constant vectors
llvm-svn: 277731
------------------------------------------------------------------------
Summary: As per title.
Reviewers: majnemer, spatel
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D23139
llvm-svn: 277694
------------------------------------------------------------------------
llvm-svn: 277693
------------------------------------------------------------------------
This reinstates r277611 + r277614 and reverts r277642. A cast_or_null
should have been a dyn_cast_or_null.
llvm-svn: 277691
------------------------------------------------------------------------
This reverts commits r277685 & r277688. r277685 broke compiler-rt
compilation (http://lab.llvm.org:8080/green/job/clang-stage1-configure-RA_build/23335)
and r277688 is a follow-up to it.
llvm-svn: 277690
------------------------------------------------------------------------
As we have addressed all compilation-time problems with GVN-hoist
(https://llvm.org/bugs/show_bug.cgi?id=28670),
this patch turns GVN-hoist back on by default.
Differential Revision: https://reviews.llvm.org/D23136
llvm-svn: 277685
------------------------------------------------------------------------
constant vectors
llvm-svn: 277659
------------------------------------------------------------------------
Not a correctness issue, but it would be nice if we didn't have to
recompute our block numbering (worst-case) every time we move MSSA.
llvm-svn: 277652
------------------------------------------------------------------------
Limit the number of times the while(1) loop is executed. With this restriction
the number of hoisted instructions does not change in a significant way on the
test-suite.
Differential Revision: https://reviews.llvm.org/D23028
llvm-svn: 277651
------------------------------------------------------------------------
With this patch we compute the DFS numbers of instructions only once and update
them during code generation when an instruction gets hoisted.
Differential Revision: https://reviews.llvm.org/D23021
llvm-svn: 277650
------------------------------------------------------------------------
With this patch we compute MemorySSA once and update it in the code generator.
Differential Revision: https://reviews.llvm.org/D22966
llvm-svn: 277649
------------------------------------------------------------------------
This reverts commit r277611 and the followup r277614.
Bootstrap builds and chromium builds are crashing during inlining after
this change.
llvm-svn: 277642
------------------------------------------------------------------------
Didn't want to fold this in with r277640, since it touches bits that
aren't entirely related to r277640.
llvm-svn: 277641
------------------------------------------------------------------------
This is a follow-up to r277637. It teaches MemorySSA that invariant
loads (and loads of provably constant memory) are always liveOnEntry.
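A small sketch of the kind of load this covers (hypothetical IR; !0 would be defined as empty metadata):

    %v = load i32, i32* %p, !invariant.load !0

Such a load can be given a MemoryUse(liveOnEntry), since no store in the function can legally clobber it.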
llvm-svn: 277640
------------------------------------------------------------------------
constant vectors
llvm-svn: 277638
------------------------------------------------------------------------
This patch makes MemorySSA recognize atomic/volatile loads, and makes
MSSA treat said loads specially. This allows us to be a bit more
aggressive in some cases.
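A rough illustration of the distinction, as a hypothetical sketch rather than anything taken from the patch:

    %a = load atomic i32, i32* %p unordered, align 4   ; analyzable much like a plain load
    %b = load atomic i32, i32* %p acquire, align 4     ; ordering forces conservative treatment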
Administrative note: Revision was LGTM'ed by reames in person.
Additionally, this doesn't include the `invariant.load` recognition in
the differential revision, because I feel it's better to commit that
separately. Will commit soon.
Differential Revision: https://reviews.llvm.org/D16875
llvm-svn: 277637
------------------------------------------------------------------------
aggressive cast-folding
Summary:
InstCombine unfolds expressions of the form `zext(or(icmp, icmp))` to `or(zext(icmp), zext(icmp))` so that, in a later iteration of InstCombine, the exposed `zext(icmp)` instructions can be optimized. We now perform this unfolding and the subsequent `zext(icmp)` optimization together. Since the unfolding no longer happens separately, we also re-enable the folding of `logic(cast(icmp), cast(icmp))` expressions to `cast(logic(icmp, icmp))`, which had been disabled due to its interference with the unfolding transformation.
Tested via `make check` and `lnt`.
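The unfolding in question, as a hypothetical IR sketch:

    %o = or i1 %cmp1, %cmp2
    %z = zext i1 %o to i32

is rewritten to

    %z1 = zext i1 %cmp1 to i32
    %z2 = zext i1 %cmp2 to i32
    %z  = or i32 %z1, %z2

so that each zext(icmp) can be simplified; with this patch, that simplification now happens in the same step as the unfolding.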
Background
==========
For a better understanding on how it came to this change we subsequently summarize its history. In commit r275989 we've already tried to enable the folding of `logic(cast(icmp), cast(icmp))` to `cast(logic(icmp, icmp))` which had to be reverted in r276106 because it could lead to an endless loop in InstCombine (also see http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20160718/374347.html). The root of this problem is that in `visitZExt()` in InstCombineCasts.cpp there also exists a reverse of the above folding transformation, that unfolds `zext(or(icmp, icmp))` to `or(zext(icmp), zext(icmp))` in order to expose `zext(icmp)` operations which would then possibly be eliminated by subsequent iterations of InstCombine. However, before these `zext(icmp)` would be eliminated the folding from r275989 could kick in and cause InstCombine to endlessly switch back and forth between the folding and the unfolding transformation. This is the reason why we now combine the `zext`-unfolding and the elimination of the exposed `zext(icmp)` to happen at one go because this enables us to still allow the cast-folding in `logic(cast(icmp), cast(icmp))` without entering an endless loop again.
Details on the submitted changes
================================
- In `visitZExt()` we combine the unfolding and optimization of `zext` instructions.
- In `transformZExtICmp()` we have to use `Builder->CreateIntCast()` instead of `CastInst::CreateIntegerCast()` to make sure that the new `CastInst` is inserted in a `BasicBlock`. The new calls to `transformZExtICmp()` that we introduce in `visitZExt()` would otherwise trigger the corresponding assertions (in our case this happened, for example, with lnt for the MultiSource/Applications/sqlite3 and SingleSource/Regression/C++/EH/recursive-throw tests). The subsequent use of `replaceInstUsesWith()` is necessary to ensure that the new `CastInst` replaces the `ZExtInst` accordingly.
- In InstCombineAndOrXor.cpp we again allow the folding of casts on `icmp` instructions.
- The instruction order in the optimized IR for the zext-or-icmp.ll test case is different with the introduced changes.
- The test cases in zext.ll have been adapted from the reverted commits r275989 and r276105.
Reviewers: grosser, majnemer, spatel
Subscribers: eli.friedman, majnemer, llvm-commits
Differential Revision: https://reviews.llvm.org/D22864
Contributed-by: Matthias Reisinger <d412vv1n@gmail.com>
llvm-svn: 277635
------------------------------------------------------------------------
splat vectors
This removes the restriction for the icmp constant, but as noted by the FIXME comments,
we still need to change individual checks for binop operand constants.
llvm-svn: 277629
------------------------------------------------------------------------
It is possible for the value map to not have an entry for some value
that has already been removed.
I don't have a test case; this is fall-out from a buildbot.
llvm-svn: 277614
------------------------------------------------------------------------
llvm-svn: 277612
------------------------------------------------------------------------
We were able to figure out that the result of a call is some constant.
While propagating that fact, we added the constant to the value map.
This is problematic because it results in us losing the call site when
processing the value map.
This fixes PR28802.
llvm-svn: 277611
------------------------------------------------------------------------
This reverts commit r277592, trying to fix the AArch64 42VMA buildbot.
llvm-svn: 277607
------------------------------------------------------------------------
obsolete comment (NFC)
Differential Revision: https://reviews.llvm.org/D23013
llvm-svn: 277595
------------------------------------------------------------------------
Use LVI to prove that adds do not wrap. The change is motivated by the bug https://llvm.org/bugs/show_bug.cgi?id=28620 and is the first step toward fixing it.
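A sketch of the idea with a hypothetical range: if LVI proves that %x lies in [0, 100), then

    %add = add i32 %x, 1

can be tagged as

    %add = add nuw nsw i32 %x, 1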
Reviewed By: sanjoy
Differential Revision: http://reviews.llvm.org/D23059
llvm-svn: 277592
------------------------------------------------------------------------
Summary:
This is the first refactoring before adding new functionality.
Add a class wrapper for the functions and a container for
the state associated with the transformation.
No functional change
Reviewers: majnemer, nadav, mehdi_amini
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D23065
llvm-svn: 277565
------------------------------------------------------------------------
This fixes a bug where we'd sometimes cache overly-conservative results
with our walker. This bug was made more obvious by r277480, which makes
our cache far more spotty than it was. Test case is llvm-unit, because
we're likely going to use CachingWalker only for def optimization in the
future.
The bug stems from a place where the walker assumed that
`DefNode.Last` was a valid target to cache to when failing to optimize
phis. This is sometimes incorrect if we have a cache hit. The fix is to
use the thing we *can* assume is a valid target to cache to. :)
llvm-svn: 277559
------------------------------------------------------------------------
here. NFC.
llvm-svn: 277557
------------------------------------------------------------------------
Summary:
Sometimes bitsets can get really large (>300k entries) and
we might want to drop a check, as it would cost too much.
Add a flag to control how much penalty we are willing to pay
for bitsets.
Reviewers: kcc
Differential Revision: https://reviews.llvm.org/D23088
llvm-svn: 277556
------------------------------------------------------------------------
Summary: Depends on D23072
Reviewers: george.burgess.iv
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D23076
llvm-svn: 277553
------------------------------------------------------------------------
Clean-up before changing this to allow folds for vectors.
llvm-svn: 277538
------------------------------------------------------------------------
Reviewers: tejohnson, eraman
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D22980
llvm-svn: 277534
------------------------------------------------------------------------
Summary: We really want to move towards MemoryLocOrCall (or fix AA) everywhere, but for now, this lets us have a single instructionClobbersQuery.
Reviewers: george.burgess.iv
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D23072
llvm-svn: 277530
------------------------------------------------------------------------
original value.
As agreed in post-commit review of r265388, I'm switching the flag to
its original value until the 90% runtime performance regression on
SingleSource/Benchmarks/Stanford/Bubblesort is addressed.
llvm-svn: 277524
------------------------------------------------------------------------
Update the comment for isOutOfScope and add a test case for a uniform value
being used out of scope.
Differential Revision: https://reviews.llvm.org/D23073
llvm-svn: 277515
------------------------------------------------------------------------