This fixes PR30454.
llvm-svn: 287379
insertUniqueBackedgeBlock in lib/Transforms/Utils/LoopSimplify.cpp now
propagates existing llvm.loop metadata to the newly added backedge.
llvm::TryToSimplifyUncondBranchFromEmptyBlock in lib/Transforms/Utils/Local.cpp
now propagates existing llvm.loop metadata to the branch instructions in the
predecessor blocks of the empty block that is removed.
Differential Revision: https://reviews.llvm.org/D26495
llvm-svn: 287341
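
As background (a hedged illustration, not part of the patch): llvm.loop metadata usually originates from front-end loop hints and is attached to the loop's backedge branch, so CFG cleanup that fails to propagate it silently discards those hints. A minimal C++ example of such a hint:

// Illustrative only: the pragma below is emitted as llvm.loop metadata on the
// backedge branch of the loop, which is what the utilities above must preserve
// when they rewrite the CFG.
void scale(float *a, int n, float k) {
#pragma clang loop vectorize(enable) interleave(enable)
  for (int i = 0; i < n; ++i)
    a[i] *= k;
}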
for variable shift with 16-bit elements.
This is a straightforward extension of the existing support for 32/64-bit element types. Just needed to add the additional intrinsics to the switches.
llvm-svn: 287316
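
For context, and as an assumption about the user-facing side (the patch itself only touches backend switches): the 16-bit-element variable shifts are exposed in C under AVX-512BW, e.g.:

#include <immintrin.h>

// Requires AVX-512BW. Each 16-bit lane of V is shifted left by the count in the
// corresponding lane of Counts; _mm512_srlv_epi16 / _mm512_srav_epi16 are the
// logical/arithmetic right-shift counterparts.
__m512i shift_lanes_left(__m512i V, __m512i Counts) {
  return _mm512_sllv_epi16(V, Counts);
}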
Summary:
This extends FCOPYSIGN support to 512-bit vectors.
I've also added tests to show what the 128-bit and 256-bit cases look like with broadcast loads.
Reviewers: delena, zvi, RKSimon, spatel
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D26791
llvm-svn: 287298
Summary:
For a flat loop, even if it is hot, runtime unrolling is not a good idea, so we set a lower partial unroll threshold.
For a hot loop, we set a higher unroll threshold and allow expensive trip-count computation, which enables more aggressive unrolling.
Reviewers: davidxl, mzolotukhin
Subscribers: sanjoy, mehdi_amini, llvm-commits
Differential Revision: https://reviews.llvm.org/D26527
llvm-svn: 287186
This pass splits globals into elements using inrange annotations on
getelementptr indices.
Differential Revision: https://reviews.llvm.org/D22295
llvm-svn: 287178
Summary: These intrinsics have been unused by clang for a while. This patch removes them. We auto-upgrade them to extractelements, a scalar operation, and then an insertelement. This matches the sequence used by clang's intrinsic file.
Reviewers: zvi, delena, RKSimon
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D26660
llvm-svn: 287083
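
A hedged sketch of that upgrade shape, written with IRBuilder; the function, operand names, and the choice of fadd as the scalar operation are illustrative, not the actual AutoUpgrade code:

#include "llvm/IR/IRBuilder.h"
using namespace llvm;

// Sketch only: element 0 of each operand is extracted, the scalar operation is
// applied, and the result is inserted back into element 0.
static Value *upgradeToScalarSequence(IRBuilder<> &Builder, Value *Op0,
                                      Value *Op1) {
  Value *A = Builder.CreateExtractElement(Op0, (uint64_t)0);
  Value *B = Builder.CreateExtractElement(Op1, (uint64_t)0);
  Value *Scalar = Builder.CreateFAdd(A, B); // illustrative scalar operation
  return Builder.CreateInsertElement(Op0, Scalar, (uint64_t)0);
}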
Reviewer: Michael Zolotukhin.
Differential Revision: https://reviews.llvm.org/D26575
llvm-svn: 287064
Summary:
We don't do BypassSlowDivision when the denominator is a constant, but
we do it when the numerator is a constant.
This patch makes two related changes to BypassSlowDivision when the
numerator is a constant:
* If the numerator is too large to fit into the bypass width, don't
bypass slow division (because we'll never run the smaller-width
code).
* If we bypass slow division where the numerator is a constant, don't
OR together the numerator and denominator when determining whether
both operands fit within the bypass width. We need to check only the
denominator.
Reviewers: tra
Subscribers: llvm-commits, jholewinski
Differential Revision: https://reviews.llvm.org/D26699
llvm-svn: 287062
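
A minimal sketch of the resulting logic; the names, the width parameter, and the helper are assumptions for illustration, not the pass's actual code:

#include <cstdint>

// BypassWidth is the narrow division width (e.g. 32 for a 64-bit division).
static bool fitsIn(uint64_t V, unsigned BypassWidth) {
  return BypassWidth >= 64 || (V >> BypassWidth) == 0;
}

// What the emitted runtime guard conceptually tests before taking the
// narrow-division fast path.
static bool takeFastPath(uint64_t Num, uint64_t Den, unsigned BypassWidth,
                         bool NumIsConstant) {
  if (NumIsConstant) {
    // A constant numerator that does not fit was already rejected at compile
    // time (no bypass is emitted at all), so only the denominator needs the
    // runtime check -- no need to OR the two operands together.
    return fitsIn(Den, BypassWidth);
  }
  return fitsIn(Num | Den, BypassWidth);
}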
be made target specific.
llvm-svn: 287014
In RateRegister of the existing LSR, if a formula contains a Reg which is a SCEVAddRecExpr,
and this SCEVAddRecExpr's loop is an outer loop, the formula will be marked as Loser
and dropped.
Suppose we have IR in which %for.body is the outer loop and %for.body2 is the inner loop.
LSR currently only handles the innermost loop, so only %for.body2 will be handled.
Under that logic, a formula like
reg(%array) + reg({1,+, %size}<%for.body>) + 1*reg({0,+,1}<%for.body2>) will always be
dropped, because reg({1,+, %size}<%for.body>) is a SCEVAddRecExpr-type reg associated
with the outer loop. Only a formula like
reg(%array) + 1*reg({{1,+, %size}<%for.body>,+,1}<nuw><nsw><%for.body2>) will be kept,
because the SCEVAddRecExpr associated with the outer loop is folded into the initial value
of the SCEVAddRecExpr associated with the current loop.
But in some cases we do need to share the basic induction variable
reg({0,+,1}<%for.body2>) among LSR Uses to reduce the final total number of induction
variables used by LSR, so we don't want to drop a formula like
reg(%array) + reg({1,+, %size}<%for.body>) + 1*reg({0,+,1}<%for.body2>) unconditionally.
According to the existing comment, the check tries to avoid considering multiple loop levels
at the same time. However, the existing LSR only handles the innermost loop, so any
SCEVAddRecExpr whose loop is not the current loop is invariant in the current loop, is
simple to handle, and the formula doesn't have to be dropped.
Differential Revision: https://reviews.llvm.org/D26429
llvm-svn: 286999
When both WidenIV::getWideRecurrence and WidenIV::getExtendedOperandRecurrence
return non-null but different WideAddRec, if getWideRecurrence is called
before getExtendedOperandRecurrence, we won't bother to call
getExtendedOperandRecurrence again. But, as we know, it is possible that after
SCEV folding we cannot prove the legality using the SCEVAddRecExpr returned
by getWideRecurrence, whereas if getExtendedOperandRecurrence returns a non-null
WideAddRec, we know for sure that it is legal to widen the current instruction.
So it is better to try getExtendedOperandRecurrence before getWideRecurrence, which
increases the chance of successful widening.
Differential Revision: https://reviews.llvm.org/D26059
llvm-svn: 286987
llvm-svn: 286978
This reverts commit ac54d0066c478a09c7cd28d15d0f9ff8af984afc.
llvm-svn: 286976
The register usage algorithm incorrectly handles instructions whose value is
not used within the loop (e.g. those that do not produce a value).
The algorithm first calculates the usages within the loop. It iterates over
the instructions in order, and records at which instruction index each use
ends (in fact, they're actually recorded against the next index, as this is
when we want to delete them from the open intervals).
The algorithm then iterates over the instructions again, adding each
instruction in turn to a list of open intervals. Instructions are then
removed from the list of open intervals when they occur in the list of uses
ended at the current index.
The problem is, instructions which are not used in the loop are skipped.
However, although they aren't used, the last use of a value may have been
recorded against that instruction index. In this case, the use is not deleted
from the open intervals, which may then bump up the estimated register usage.
This patch fixes the issue by simply moving the "is used" check after the loop
which erases the uses at the current index.
Differential Revision: https://reviews.llvm.org/D26554
llvm-svn: 286969
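
A self-contained sketch of the corrected bookkeeping; the container choices and names are illustrative, not the LoopVectorize implementation:

#include <algorithm>
#include <map>
#include <set>
#include <vector>

// EndPoint[i] lists the instructions whose last recorded use ends at index i;
// UsedInLoop contains the instructions that have a use inside the loop.
static unsigned maxOpenIntervals(
    unsigned NumInsts,
    const std::map<unsigned, std::vector<unsigned>> &EndPoint,
    const std::set<unsigned> &UsedInLoop) {
  std::set<unsigned> Open;
  unsigned MaxOpen = 0;
  for (unsigned I = 0; I < NumInsts; ++I) {
    // Close every interval that ends here first. This must happen even when
    // instruction I itself has no use in the loop.
    auto It = EndPoint.find(I);
    if (It != EndPoint.end())
      for (unsigned Ended : It->second)
        Open.erase(Ended);
    // Only now skip unused instructions (the bug was doing this check before
    // the erase above, leaving stale intervals open).
    if (!UsedInLoop.count(I))
      continue;
    Open.insert(I);
    MaxOpen = std::max(MaxOpen, static_cast<unsigned>(Open.size()));
  }
  return MaxOpen;
}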
llvm-svn: 286900
Add explicit v16i16/v32i8 ADD/SUB costs, matching the costs of v4i64/v8i32 - they were missing for some reason.
This has side effects on the LV max-bandwidth tests (AVX1 now prefers 128-bit vectors, whereas AVX2 still prefers 256-bit).
llvm-svn: 286832
When calculating the cost of a call instruction we were applying a heuristic penalty as well as the cost of the instruction itself.
However, when calculating the benefit from inlining we weren't discounting the equivalent penalty for the call instruction that would be removed! This caused skew in the calculation and meant we wouldn't inline in the following, trivial case:
void h();
int g() {
  h();
  return 0;
}
int f() {
  g();
  return 0;
}
llvm-svn: 286814
AVX-512 variable shift intrinsics.
llvm-svn: 286755
shift by immediate and shift by single value.
This does not include support for the AVX-512 variable shifts. That will be coming in a future patch.
llvm-svn: 286739
llvm-svn: 286668
This is PR28376.
Unfortunately, given the current structure of optimization diagnostics, we
lack the capability to tell whether the user has
passed -Rpass-analysis=loop-vectorize, since this is local to the
front end (BackendConsumer::OptimizationRemarkHandler).
So rather than printing this even if the user has already
passed -Rpass-analysis, this patch just punts and stops recommending
this option. I don't think that getting this right is worth the
complexity.
Differential Revision: https://reviews.llvm.org/D26563
llvm-svn: 286662
When a function pointer is replaced with a jumptable pointer, a special
case is needed to preserve the semantics of extern_weak functions.
Since a jumptable entry cannot be extern_weak, we emulate that
behaviour by replacing all references to F (the extern_weak function)
with the following expression: F != nullptr ? JumpTablePtr : nullptr.
Extra special care is needed for global initializers, since most (or
probably all) backends cannot lower an initializer that includes
this kind of constant expression. Initializers like that are replaced
with a global constructor (i.e. a runtime initializer).
llvm-svn: 286636
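
A source-level analogue of the emulation (illustrative only; the actual rewrite operates on IR constants, and the jump-table symbol name is hypothetical):

// An extern_weak-like function: the symbol may legitimately be null.
extern void maybe_absent() __attribute__((weak));

using FnPtr = void (*)();
extern FnPtr jumptable_entry_for_maybe_absent; // hypothetical jump-table slot

FnPtr resolve() {
  // Mirrors "F != nullptr ? JumpTablePtr : nullptr": a null weak symbol must
  // stay null rather than silently pointing at the jump table.
  return &maybe_absent != nullptr ? jumptable_entry_for_maybe_absent : nullptr;
}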
Reviewer: Michael Zolotukhin.
Differential Revision: https://reviews.llvm.org/D26543
llvm-svn: 286626
llvm-svn: 286619
The current implementation is emitting a global constant that happens
to evaluate to the same bytes + relocation as a jump instruction on
X86. This does not work for PIE executables and shared libraries
though, because we end up with a wrong relocation type. And it has no
chance of working on ARM/AArch64, which use different relocation types
for jump instructions (e.g. R_ARM_JUMP24) that are never generated for
data.
This change replaces the constant with module-level inline assembly
followed by a hidden declaration of the jump table. Works fine for
ARM/AArch64, but has some drawbacks.
* Extra symbols are added to the static symbol table, which inflates
the size of the unstripped binary a little. Stripped binaries are not
affected. This happens because jump table declarations must be
external (because their body is in the inline asm).
* Original functions that were anonymous are now named
<original name>.cfi, which sometimes affects symbolization. This is
necessary because the only user of these functions is the (inline
asm) jump table, so they had to be added to @llvm.used, which does
not allow unnamed functions.
llvm-svn: 286611
r283656 did this in the remark arguments. We also need to do it
in the main function attribute, as that is written to YAML as well.
llvm-svn: 286482
Note that the existing metadata checking was re-added by hand because the
script doesn't currently know how to generate checks for lines outside of
functions.
llvm-svn: 286460
Removing the limitation in visitInsertElementInst() causes several regressions
because we're not prepared to fold sequences of shuffles or inserts and extracts
separated by shuffles. Fixing that appears to be a difficult mission because we
are purposely trying to avoid creating shuffles with arbitrary shuffle masks
because some targets may choke on those.
https://llvm.org/bugs/show_bug.cgi?id=30923
llvm-svn: 286423
Summary:
The change will test the change in r286159.
The idea behind the change: make the dbg location differ between the loop header and the preheader/exit. Originally, dbg location !21 exists in 3 BBs: the preheader, the header, and the critical edge (exit). Update the debug location inside the loop header from !21 to !22 so that it reflects the correct location.
Reviewers: probinson
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D26428
llvm-svn: 286403
llvm-svn: 286402
llvm-svn: 286399
Scalar Evolution asserts when not all the operands of an Add Recurrence
Expression are loop invariant. Loop Strength Reduction should only
create affine Add Recurrences, so that both the start and the step of
the expression are loop invariant.
Differential Revision: https://reviews.llvm.org/D26185
llvm-svn: 286347
Summary: For functions with profile data, we are confident that loop sink will be optimal in sinking code.
Reviewers: davidxl, hfinkel
Subscribers: mehdi_amini, mzolotukhin, llvm-commits
Differential Revision: https://reviews.llvm.org/D26155
llvm-svn: 286325
As the test change shows, we can increase the critical path by adding
a 'not' instruction, so make sure that we're actually removing an
instruction if we do this transform.
This transform could also cause us to miss folds of min/max pairs.
llvm-svn: 286315
For example, it invalidates the domtree, causing assertions
in later passes which need dominator info. Make it preserve
GlobalsAA, as suggested by Eli.
Differential Revision: https://reviews.llvm.org/D26381
llvm-svn: 286271
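
In the legacy pass manager this kind of preservation is declared in getAnalysisUsage; a minimal sketch with a placeholder pass (the real pass and its other preserved analyses are not shown here):

#include "llvm/Analysis/GlobalsModRef.h"
#include "llvm/Pass.h"

using namespace llvm;

namespace {
struct MyLegacyPass : public FunctionPass { // "MyLegacyPass" is a placeholder
  static char ID;
  MyLegacyPass() : FunctionPass(ID) {}
  bool runOnFunction(Function &F) override { return false; }
  void getAnalysisUsage(AnalysisUsage &AU) const override {
    // Keep the module-level alias analysis valid across this pass.
    AU.addPreserved<GlobalsAAWrapperPass>();
  }
};
} // namespace
char MyLegacyPass::ID = 0;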
llvm-svn: 286256
llvm-svn: 286255
Summary:
These are good candidates for jump threading. This enables later opts
(such as InstCombine) to combine instructions from the selects with
instructions out of the selects. SimplifyCFG will fold the select
again if unfolding wasn't worth it.
Patch by James Molloy and Pablo Barrio.
Reviewers: rengolin, haicheng, sebpop
Subscribers: jojo, jmolloy, llvm-commits
Differential Revision: https://reviews.llvm.org/D26391
llvm-svn: 286236
This patch avoids scalarization of CTLZ by instead expanding to use CTPOP (ref: "Hacker's Delight") when the necessary operations are available.
This also adds the necessary cost models for X86 SSE2 targets (the main beneficiary) to ensure vectorization only happens when it's useful.
Differential Revision: https://reviews.llvm.org/D25910
llvm-svn: 286233
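
The scalar form of that expansion, for reference (a standalone illustration; the DAG expansion does the same thing lane-wise with vector shifts and a vector CTPOP):

#include <cstdint>

unsigned ctlz32(uint32_t X) {
  // Smear the highest set bit into every lower position...
  X |= X >> 1;
  X |= X >> 2;
  X |= X >> 4;
  X |= X >> 8;
  X |= X >> 16;
  // ...then the zeros that remain are exactly the original leading zeros.
  return __builtin_popcount(~X);
}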
With this we get a new field in the YAML record if the value being
streamed out has a debug location. For examples, please see the changes
to the tests.
This is then used in opt-viewer to display a link for the callee
function in the inlining remarks.
Differential Revision: https://reviews.llvm.org/D26366
llvm-svn: 286169
Summary:
In some specific scenarios with well understood operand bundle types
(like `"deopt"`) it may be possible to go ahead and convert recursion to
iteration, but TailRecursionElimination does not have that logic today
so avoid doing the right thing for now.
I need some input on whether `"funclet"` operand bundles should also
block tail recursion elimination. If not, I'll allow TRE across calls
with `"funclet"` operand bundles and add a test case.
Reviewers: rnk, majnemer, nlewycky, ahatanak
Subscribers: mcrosier, llvm-commits
Differential Revision: https://reviews.llvm.org/D26270
llvm-svn: 286147
Argument evaluation order is one of the edge cases where Clang differs
from GCC, yielding different IR depending on which compiler LLVM was
built with. Make the order deterministic and tune the test to actually
verify the order instead of trying to hide it.
llvm-svn: 286126
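
The hazard in a minimal form (illustrative code, not the test itself):

#include <cstdio>

static int g() { std::puts("g"); return 1; }
static int h() { std::puts("h"); return 2; }
static int use(int A, int B) { return A + B; }

// Whether g() or h() runs (and emits its output) first is unspecified and
// differs between GCC and Clang.
int unspecified_order() { return use(g(), h()); }

// Deterministic version: sequence the side effects explicitly.
int fixed_order() {
  int A = g();
  int B = h();
  return use(A, B);
}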
This was reverted at r285866 because there was a crash handling a scalar
select of vectors. I added a check for that pattern and a test case based
on the example provided in the post-commit thread for r285732.
llvm-svn: 286113
It depends on the host's pow(3), and mingw's pow doesn't raise any errors; it just returns +INF.
llvm-svn: 286005
This reverts commit r285732.
This change introduced a new assertion failure in the following
testcase at -O2:
typedef short __v8hi __attribute__((__vector_size__(16)));
__v8hi foo(__v8hi &V1, __v8hi &V2, unsigned mask) {
  __v8hi Result = V1;
  if (mask & 0x80)
    Result[0] = V2[0];
  return Result;
}
llvm-svn: 285866
On platforms which use -fmath-errno, math libcalls without any uses
require some extra checks to figure out if they are actually dead.
Fixes https://llvm.org/bugs/show_bug.cgi?id=30464 .
Differential Revision: https://reviews.llvm.org/D25970
llvm-svn: 285857
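
A small illustration of why such calls are not trivially dead under -fmath-errno (not the optimizer's code):

#include <cerrno>
#include <cmath>

int pow_sets_errno() {
  errno = 0;
  (void)std::pow(-1.0, 0.5); // result unused, but this is a domain error...
  return errno;              // ...so errno (typically EDOM) is still observed
}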
Summary:
It was detected that the reassociate pass could enter an infinite
loop when analysing dead code. Simply skipping the analysis of basic
blocks that are dead avoids such problems (and as a side effect
we avoid spending time on optimising dead code).
The solution is to use the same Reverse Post Order ordering of the
basic blocks when doing the optimisations as when building the
precalculated rank map. A nice side-effect of this solution is
that we now know that we only try to do optimisations for blocks
with ranked instructions.
Fixes https://llvm.org/bugs/show_bug.cgi?id=30818
Reviewers: llvm-commits, davide, eli.friedman, mehdi_amini
Subscribers: dberlin
Differential Revision: https://reviews.llvm.org/D26154
llvm-svn: 285793
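
A minimal sketch of the traversal choice; optimizeBlock stands in for the real per-block work and is not the pass's actual interface:

#include "llvm/ADT/PostOrderIterator.h"
#include "llvm/IR/CFG.h"
#include "llvm/IR/Function.h"

using namespace llvm;

// Visiting blocks via the same reverse post-order traversal used to build the
// rank map means unreachable (dead) blocks are simply never visited.
static void runOverReachableBlocks(Function &F,
                                   void (*optimizeBlock)(BasicBlock &)) {
  ReversePostOrderTraversal<Function *> RPOT(&F);
  for (BasicBlock *BB : RPOT)
    optimizeBlock(*BB);
}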
llvm-svn: 285732
This transforms
%a = shl nuw %x, c1
%b = icmp {ugt|ule} %a, c0
into
%b = icmp {ugt|ule} %x, (c0 >> c1)
z3:
(declare-const x (_ BitVec 64))
(declare-const c0 (_ BitVec 64))
(declare-const c1 (_ BitVec 64))
(push)
(assert (= x (bvlshr (bvshl x c1) c1))) ; nuw
(assert (not (= (bvugt (bvshl x c1) c0)
(bvugt x
(bvlshr c0 c1)))))
(check-sat)
(get-model)
(pop)
(push)
(assert (= x (bvlshr (bvshl x c1) c1))) ; nuw
(assert (not (= (bvule (bvshl x c1) c0)
(bvule x
(bvlshr c0 c1)))))
(check-sat)
(get-model)
(pop)
Patch by bryant!
Differential Revision: https://reviews.llvm.org/D25913
llvm-svn: 285729