SIGN_EXTEND_VECTOR_INREG/ZERO_EXTEND_VECTOR_INREG
Preparatory step for PR31712
llvm-svn: 294874
Generalize VSEXT/VZEXT constant folding to work with any target constant bits source, not just BUILD_VECTOR.
llvm-svn: 294873
llvm-svn: 294864
I don't know if anything other than x86 vectors is affected by this change, but this may allow
us to remove target-specific intrinsics for blendv* (vector selects). The simplification arises
from the fact that blendv* instructions only use the sign bit when deciding which vector element
to choose for the destination vector. The mechanism to fold VSELECT into SHRUNKBLEND nodes already
exists in x86 lowering; this demanded-bits change just enables the transform to fire more often.
The original motivation is a seemingly unrelated bug about DSE of masked stores,
but I've explained the likely steps in this series here:
https://llvm.org/bugs/show_bug.cgi?id=11210
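Not from the patch: a minimal standalone SSE4.1 sketch in C++ (hypothetical names) of the sign-bit-only behaviour described above - two masks that differ in every bit except the sign bits produce identical blends, which is exactly what lets a demanded-bits simplification rewrite the select condition.

    // build with: -msse4.1
    #include <immintrin.h>
    #include <climits>
    #include <cstdio>

    int main() {
      __m128 a = _mm_set_ps(1.0f, 2.0f, 3.0f, 4.0f);
      __m128 b = _mm_set_ps(5.0f, 6.0f, 7.0f, 8.0f);
      // All mask bits set vs. only the sign bit set in the selected lanes.
      __m128 allbits  = _mm_castsi128_ps(_mm_set_epi32(-1, 0, -1, 0));
      __m128 signonly = _mm_castsi128_ps(_mm_set_epi32(INT_MIN, 0, INT_MIN, 0));
      float r0[4], r1[4];
      _mm_storeu_ps(r0, _mm_blendv_ps(a, b, allbits));
      _mm_storeu_ps(r1, _mm_blendv_ps(a, b, signonly));
      for (int i = 0; i < 4; ++i)
        printf("%f %f\n", r0[i], r1[i]);  // each pair prints the same value
      return 0;
    }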
Differential Revision: https://reviews.llvm.org/D29687
llvm-svn: 294863
llvm-svn: 294859
llvm-svn: 294858
llvm-svn: 294857
getTargetConstantBitsFromNode.
Removes duplicate constant extraction code in getTargetShuffleMaskIndices.
getTargetConstantBitsFromNode - adds support for VZEXT_MOVL(SCALAR_TO_VECTOR) and fails if the caller doesn't support undef bits.
llvm-svn: 294856
llvm-svn: 294852
llvm-svn: 294851
llvm-svn: 294850
llvm-svn: 294849
llvm-svn: 294847
All commutations confirmed to give identical results - note that PFMAX/PFMIN do not.
PFSUB<->PFSUBR should be commutable as well.
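Not part of the original message: a tiny C++ sketch (hypothetical names) of why PFSUB and PFSUBR form a commutable pair - swapping the operands of one yields the other. The real instructions operate on packed 2 x float MMX registers; scalars are used here only to show the identity.

    // PFSUB computes dst - src, PFSUBR computes src - dst.
    float pfsub (float dst, float src) { return dst - src; }
    float pfsubr(float dst, float src) { return src - dst; }
    // Commuting the operands switches the opcode:
    //   pfsub(a, b) == pfsubr(b, a)   for all a, b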
llvm-svn: 294846
it is dead or unreachable, as it should be.
This also makes the leader of INITIAL undef, enabling us to handle
irreducibility properly.
Summary:
This lets us verify, more than we do now, that we didn't screw up
value numbering.
Reviewers: davide
Subscribers: Prazek, llvm-commits
Differential Revision: https://reviews.llvm.org/D29842
llvm-svn: 294844
llvm-svn: 294843
llvm-svn: 294837
llvm-svn: 294830
llvm-svn: 294829
llvm-svn: 294827
is available.
The execution dependency pass seems to keep using FP instructions, even when most of the consuming code is integer, if the register was produced by a vextractf128 instruction. Without AVX2 we don't have the corresponding integer instruction available.
This patch reports the domain of these instructions as GenericDomain when AVX2 is not supported, so that they are ignored by domain fixing. If AVX2 is supported we'll report the correct domain and allow them to switch between integer and FP.
Overall I think this produces better results in the modified test cases.
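Not from the patch: a small C++ intrinsics sketch of the asymmetry described above - with plain AVX, extracting the upper 128-bit lane of an integer vector has to go through the FP-domain vextractf128, while the integer-domain vextracti128 only exists with AVX2.

    // build with: -mavx (and -mavx2 for the second function)
    #include <immintrin.h>

    // AVX only: compiles to vextractf128 even though the data is integer.
    __m128i upper_lane_avx(__m256i v)  { return _mm256_extractf128_si256(v, 1); }

    #ifdef __AVX2__
    // AVX2: the integer-domain form becomes available.
    __m128i upper_lane_avx2(__m256i v) { return _mm256_extracti128_si256(v, 1); }
    #endif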
llvm-svn: 294824
llvm-svn: 294822
Summary:
The patch adds the number of instructions generated by a solution
to the LSR cost, under the "-lsr-insns-cost" option.
Reviewers: qcolombet, hfinkel
Differential Revision: http://reviews.llvm.org/D28307
From: Evgeny Stupachenko <evstupac@gmail.com>
llvm-svn: 294821
There are no vldN/vstN f16 variants, even with +fullfp16.
We could use the i16 variants, but, in practice, even with +fullfp16,
the f16 sequence leading to the i16 shuffle usually gets scalarized.
We'd need to improve our support for f16 codegen before getting there.
Teach the cost model to consider f16 interleaved operations as
expensive. Otherwise, we are all but guaranteed to end up with
a large block of scalarized vector code.
llvm-svn: 294819
There are no vldN/vstN f16 variants, even with +fullfp16.
We could use the i16 variants, but, in practice, even with +fullfp16,
the f16 sequence leading to the i16 shuffle usually gets scalarized.
We'd need to improve our support for f16 codegen before getting there.
Reject f16 interleaved accesses. If we try to emit the f16 intrinsics,
we'll just end up with a selection failure.
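Not from the commit: a sketch (assuming an ARM target where the __fp16 storage type is available) of the kind of f16 interleaved access being rejected here - the stride-2 loads are what would normally become a vld2, but no f16 variant of that intrinsic exists.

    // Splits an interleaved (re, im, re, im, ...) half-precision buffer.
    void deinterleave(const __fp16 *in, __fp16 *re, __fp16 *im, int n) {
      for (int i = 0; i < n; ++i) {
        re[i] = in[2 * i];      // even elements: stride-2 access
        im[i] = in[2 * i + 1];  // odd elements:  stride-2 access
      }
    }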
llvm-svn: 294818
outerloop.
The recommit includes some changes to testcases. No functional change to the patch.
In RateRegister of the existing LSR, if a formula contains a Reg which is a SCEVAddRecExpr,
and this SCEVAddRecExpr's loop is an outer loop, the formula will be marked as Loser
and dropped.
Suppose we have IR in which %for.body is the outer loop and %for.body2 is the inner loop.
LSR only handles the inner loop now, so only %for.body2 will be handled.
Using the logic above, a formula like
reg(%array) + reg({1,+,%size}<%for.body>) + 1*reg({0,+,1}<%for.body2>) will be dropped
no matter what, because reg({1,+,%size}<%for.body>) is a SCEVAddRecExpr reg related to the
outer loop. Only a formula like
reg(%array) + 1*reg({{1,+,%size}<%for.body>,+,1}<nuw><nsw><%for.body2>) will be kept,
because the SCEVAddRecExpr related to the outer loop is folded into the initial value of the
SCEVAddRecExpr related to the current loop.
But in some cases we do need to share the basic induction variable
reg({0,+,1}<%for.body2>) among LSR Uses to reduce the final total number of induction
variables used by LSR, so we don't want to drop a formula like
reg(%array) + reg({1,+,%size}<%for.body>) + 1*reg({0,+,1}<%for.body2>) unconditionally.
The existing comment says this check tries to avoid considering multiple loop levels at the
same time. However, the existing LSR only handles the innermost loop, so any SCEVAddRecExpr
with a loop other than the current loop is invariant there and simple to handle, and the
formula doesn't have to be dropped.
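Not from the patch: a hypothetical C++ loop nest of the shape the formulas above describe, where %for.body is the loop over i and %for.body2 is the loop over j; the inner-loop address computation combines an outer-loop AddRec (the i * size part) with the inner induction variable j.

    void touch(int *array, long size, long rows, long cols) {
      for (long i = 0; i < rows; ++i)       // outer loop: %for.body
        for (long j = 0; j < cols; ++j)     // inner loop: %for.body2
          array[i * size + j] += 1;         // reg(%array) + outer-loop AddRec + {0,+,1}
    }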
Differential Revision: https://reviews.llvm.org/D26429
llvm-svn: 294814
minor fixes (NFC).
llvm-svn: 294813
This was marking the loop for deletion after the loop was deleted. This
almost works, except that when we do any kind of debug logging it starts
reading the name of the loop from deleted memory or otherwise blowing
up. This can fail in a bunch of ways. I recently added a test that
*always* does this, and it started failing on the sanitizer bots.
The fix is to mark the loop as deleted in the loop PM infrastructure
before we remove the loop. We can do this by passing the updater into
the routine. That also lets us simplify a bunch of other interface
components here for a net win.
llvm-svn: 294810
Remove support for disassembling an old experimental wasm binary format, which
is no longer in use anywhere.
llvm-svn: 294809
llvm-svn: 294807
llvm-svn: 294805
The summary information includes all uses of llvm.type.test and
llvm.type.checked.load intrinsics that can be used to devirtualize calls,
including any constant arguments for virtual constant propagation.
Differential Revision: https://reviews.llvm.org/D29734
llvm-svn: 294795
This is necessary to avoid warnings from GCC.
InstCombineLoadStoreAlloca.cpp:238:7: error: 'PointerReplacer' declared
with greater visibility than the type of its field 'PointerReplacer::IC'
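Not from the patch: a minimal C++ sketch (hypothetical names) of the pattern behind this GCC diagnostic - a default-visibility class with a member whose type has hidden visibility. Giving both the same visibility silences it.

    struct __attribute__((visibility("hidden"))) Impl {  // hidden, as many LLVM-internal classes are
      int x;
    };

    struct Holder {   // default visibility
      Impl &I;        // GCC: 'Holder' declared with greater visibility
    };                //      than the type of its field 'Holder::I'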
llvm-svn: 294794
This makes this code much more similar to what ThinLTO is
using (also API-wise), so now we can probably use a single
code path instead of copying stuff around.
llvm-svn: 294792
llvm-svn: 294791
llvm-svn: 294788
llvm-svn: 294787
For function-scope variables with a large initialization list, the frontend usually
generates a global variable to hold the initializer, then generates a
memcpy intrinsic to initialize the alloca. InstCombiner::visitAllocaInst
identifies such allocas that are only read and replaces
them with the global variable. This is done by casting the global variable
to the type of the alloca and replacing all references.
However, when the global variable is in an address space that
is disjoint from address space 0 (e.g. for IR generated from OpenCL, a
global variable cannot be in the private address space, i.e. address space 0), casting
the global variable to address space 0 results in invalid IR for certain
targets (e.g. amdgpu).
To fix this issue, when the global variable is not in address space 0,
instead of casting it to address space 0, this patch chases down the uses
of the alloca until reaching the load instructions, then replaces each load from the
alloca with a load from the global variable. If bitcasts and GEPs are
encountered while chasing, new bitcasts and GEPs based on the global
variable are generated and used in the load instructions.
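Not from the patch: a C++ sketch of the source pattern this describes. The large local initializer typically becomes a read-only global plus a memcpy into the alloca, and because the array is only read, the loads can be redirected to that global - which may live in a non-zero address space on targets such as amdgpu.

    int pick(int i) {
      // Frontends usually emit this as: private alloca + memcpy from a
      // constant global holding {1, 1, 2, 3, ...}.
      const int lut[16] = {1, 1, 2, 3, 5, 8, 13, 21,
                           34, 55, 89, 144, 233, 377, 610, 987};
      return lut[i & 15];  // only read, never written
    }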
Differential Revision: https://reviews.llvm.org/D27283
llvm-svn: 294786
llvm-svn: 294783
discriminator.
Summary:
This patch starts the implementation as discussed in the following RFC: http://lists.llvm.org/pipermail/llvm-dev/2016-October/106532.html
When an optimization duplicates code in a way that scales down the execution count of a basic block, we record the duplication factor as part of the discriminator so that the offline processing tool can find the duplication factor and recover the accurate execution frequency of the corresponding source code. Two important optimizations that fall into this category are loop vectorization and loop unrolling. This patch records the duplication factor for these two optimizations.
The recording is guarded by the flag encode-duplication-in-discriminators, which is off by default.
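A hypothetical worked example of the factor being recorded: if a loop body is vectorized with VF=4 and the vector loop is then unrolled by 2, each remaining copy of the body stands for 8 original iterations, so a duplication factor of 8 goes into the discriminator; 100 samples on that block are then attributed as roughly 800 executions of the source line rather than 100.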
Reviewers: probinson, aprantl, davidxl, hfinkel, echristo
Reviewed By: hfinkel
Subscribers: mehdi_amini, anemet, mzolotukhin, llvm-commits
Differential Revision: https://reviews.llvm.org/D26420
llvm-svn: 294782
Summary:
powerpc64 big-endian is not supported, but I believe that most logic can
be shared, except for xray_powerpc64.cc.
Also add a function InvalidateInstructionCache to xray_util.h, which is
copied from llvm/Support/Memory.cpp. I'm not sure if I need to add a unittest,
and I don't know how.
Reviewers: dberris, echristo, iteratee, kbarton, hfinkel
Subscribers: mehdi_amini, nemanjai, mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D29742
llvm-svn: 294781
llvm-svn: 294775
Since r274013, we've been looking through bitcasts on broadcast inputs.
In the scalar-folding case (from a load, build_vector, or sc2vec),
the input type didn't matter, as we'd simply bitcast the resulting
scalar back.
However, when broadcasting a 128-bit-lane-aligned element, we create an
EXTRACT_SUBVECTOR. Use proper types by creating an extract_subvector
of the original input type.
llvm-svn: 294774
in this case for CPU_SUBTYPE_ARM64_ALL.
For this cpusubtype it should default to a cyclone CPU
to give proper disassembly without a -mcpu= flag.
rdar://27767188
llvm-svn: 294771
We don't use them yet and they just cause problems.
llvm-svn: 294770
Differential revision: https://reviews.llvm.org/D29831
llvm-svn: 294769
In the encoding of system registers in the M-class MSR instruction, the mask bits
should be 2 for registers that don't take a _<bits> qualifier (the instruction
is unpredictable otherwise), and should also be 2 if the register takes a
_<bits> qualifier but none is present, since omitting _<bits> is an alias for _nzcvq.
Differential Revision: https://reviews.llvm.org/D29828
llvm-svn: 294762
We previously only created a vector phi node for an induction variable if its
type matched the type of the canonical induction variable.
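Not from the patch: a hypothetical C++ loop with two induction variables of different types, the kind of mismatch the message refers to - the wider secondary induction variable's type does not match the loop counter's, so previously no vector phi would be created for it.

    void fill(long long *a, int n) {
      long long x = 0;                // secondary induction variable, 64-bit
      for (int i = 0; i < n; ++i) {   // loop counter, 32-bit
        a[i] = x;
        x += 3;
      }
    }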
Differential Revision: https://reviews.llvm.org/D29776
llvm-svn: 294755
llvm-svn: 294753
This makes sure we get the same redefinition rules regardless of who
is printing (asm parser, codegen) and to what (asm, obj).
This fixes an unintentional regression in r293936.
llvm-svn: 294752