| Commit message | Author | Age | Files | Lines |
|
Summary:
Use the hasVectorInstrinsicScalarOpd helper function
in ConstantFoldVectorCall.
Reviewers: rengolin, RKSimon, dblaikie
Reviewed By: rengolin, RKSimon
Subscribers: tschuett, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D63705
llvm-svn: 364178
|
Summary:
Handle smul.fix.sat in the scalarizer. This is done by
adding smul.fix.sat to the set of "isTriviallyVectorizable"
intrinsics.
Adding smul.fix.sat to isTriviallyVectorizable and
hasVectorInstrinsicScalarOpd can also be seen as preparation for
using hasVectorInstrinsicScalarOpd in ConstantFolding.
Reviewers: rengolin, RKSimon, dblaikie
Reviewed By: rengolin
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D63704
llvm-svn: 364177
|
This adds the family of loads and stores with names like VLD20.8 and
VST42.32, which load and store parts of multiple q-registers in such a
way that executing both VLD20 and VLD21, or all four of VLD40..VLD43,
will distribute 2 or 4 vectors' worth of memory data across the lanes
of the same number of registers but in a transposed order.
In addition to the Tablegen descriptions of the instructions
themselves, this patch adds encode and decode support for the
QQPR and QQQQPR register classes (representing the range of loaded or
stored vector registers), and tweaks the parsing of vector register
lists so that it returns the right format in this case (since, unlike
NEON, MVE regards q-registers as primitive, rather than just an alias
for two d-registers).
llvm-svn: 364172
|
Summary:
These functions are documented as not modifying the offset argument if
the extraction fails (just like other DataExtractor functions). However,
while reviewing D63591 we discovered that this is not the case -- if the
function reaches the end of the data buffer, it will just return the
value parsed until that point and set offset to point to the end of the
buffer.
This fixes the functions to act as advertised, and adds a regression
test.
Reviewers: dblaikie, probinson, bkramer
Subscribers: kristina, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D63645
llvm-svn: 364169
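A minimal standalone sketch of the guarantee described above (not part of the original commit), assuming the current DataExtractor API with uint64_t offsets; older releases used uint32_t offsets, and the data here is purely illustrative:

```cpp
#include "llvm/ADT/StringRef.h"
#include "llvm/Support/DataExtractor.h"
#include <cassert>

using namespace llvm;

int main() {
  // Only two bytes of data, so a 4-byte read must fail.
  DataExtractor DE(StringRef("\x01\x02", 2), /*IsLittleEndian=*/true,
                   /*AddressSize=*/4);
  uint64_t Offset = 0;
  uint32_t V = DE.getU32(&Offset);
  // On failure the extractor returns 0 and, as advertised after this fix,
  // leaves the offset untouched instead of bumping it to the end of the buffer.
  assert(V == 0 && Offset == 0);
  return 0;
}
```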
|
avx512f, but not avx512bw.
Ideally we'd be able to represent this truncate as an any_extend to
v16i32 and a truncate, but SelectionDAG doesn't know how to not
fold those together.
We have isel patterns to use a vpmovzxwd+vpmovdb for the truncate,
but we aren't able to simultaneously fold the load and the store
from the isel pattern. By pulling the truncate into the store we
can successfully hide it from the DAG combiner. Then we can isel
pattern match the truncstore and load+any_extend separately.
llvm-svn: 364163
|
llvm-svn: 364159
|
appears to be a copy/paste mistake.
DAG combine should ensure bitcasts of loads don't exist.
Also remove 3 patterns that are identical to the block above them.
llvm-svn: 364158
|
In rL364135, I taught IndVars to fold exiting branches in loops with a zero backedge taken count (i.e. loops that only run one iteration). This extends that to eliminate the dead comparison left around.
llvm-svn: 364155
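A hypothetical input (not taken from the commit's tests) illustrating the case: the backedge-taken count is zero, so the body runs exactly once, the exiting branch can be folded, and the latch comparison it fed becomes dead:

```cpp
// The loop body executes exactly once; the backedge is never taken.
// Folding the exiting branch (rL364135) leaves the latch comparison
// against the trip count with no users, and this change deletes it too.
void store_once(int *p) {
  for (int i = 0; i < 1; ++i)
    p[i] = 0;
}
```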
|
llvm-svn: 364154
|
This is another intermediate IR step towards solving PR42314:
https://bugs.llvm.org/show_bug.cgi?id=42314
We can test if a value is power-of-2-or-0 using ctpop(X) < 2,
so combining that with a non-zero check of the input is the
same as testing if exactly 1 bit is set:
(X != 0) && (ctpop(X) u< 2) --> ctpop(X) == 1
Differential Revision: https://reviews.llvm.org/D63660
llvm-svn: 364153
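A small standalone check of the equivalence (illustrative only; the actual transform lives in InstCombine):

```cpp
#include <cassert>
#include <cstdint>

int main() {
  // (X != 0) && (ctpop(X) u< 2)  is the same as  ctpop(X) == 1,
  // i.e. "exactly one bit set" / "X is a non-zero power of two".
  for (uint64_t X = 0; X < (1u << 20); ++X) {
    bool Lhs = (X != 0) && (__builtin_popcountll(X) < 2);
    bool Rhs = __builtin_popcountll(X) == 1;
    assert(Lhs == Rhs);
  }
  return 0;
}
```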
|
llvm-svn: 364152
|
the second half of a split masked load/store.
The code divides the alignment by 2 if the original alignment is
equal to the original VT size. But this wouldn't be correct
if the alignment was larger than the VT size.
The memory operand object already takes care of calling MinAlign
on the base alignment and the memory pointer offset. So we don't
need any special code at all.
llvm-svn: 364151
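A standalone illustration of why taking MinAlign of the base alignment and the split offset is already sufficient; the helper below reimplements the usual largest-common-power-of-two computation locally rather than pulling in the LLVM header, so it is a sketch of llvm::MinAlign's behaviour rather than the header itself:

```cpp
#include <cassert>
#include <cstdint>

// Largest power of two dividing both A and B (what llvm::MinAlign computes).
constexpr uint64_t MinAlign(uint64_t A, uint64_t B) {
  return (A | B) & (1 + ~(A | B)); // isolate the lowest set bit of A | B
}

int main() {
  // Splitting a masked access of a 64-byte vector into two 32-byte halves:
  assert(MinAlign(64, 32) == 32);  // base alignment equal to the VT size
  assert(MinAlign(128, 32) == 32); // larger than the VT size: still correct
  assert(MinAlign(16, 32) == 16);  // smaller base alignment is preserved
  return 0;
}
```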
|
tablegen. Use more precise PatFrags for scalar masked load/store.
Rename masked_load/masked_store to masked_ld/masked_st to discourage
their direct use. We need to check truncating/extending and
compressing/expanding before using them. This revealed that
our scalar masked load/store patterns were misusing these.
With those out of the way, rename masked_load_unaligned and
masked_store_unaligned to remove the "_unaligned". We didn't
check the alignment anyway, so the name was somewhat misleading.
Make the aligned versions inherit from masked_load/store instead of
from a separate identical version. Merge the 3 different alignment
PatFrags into a single version that uses the VT from the SDNode to
determine the size that the alignment needs to match.
llvm-svn: 364150
|
Summary:
Emscripten's libc doesn't define MNT_LOCAL, thus causing a build
failure in the fallback path. However, to the best of my knowledge,
it also doesn't support remote file system mounts, so we may simply
return `true` here (as we do for e.g. Fuchsia). With this fix, the
core LLVM libraries build correctly under emscripten (though some
of the tools and utils do not).
Reviewers: kripken
Differential Revision: https://reviews.llvm.org/D63688
llvm-svn: 364143
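A hedged sketch of the shape of that fallback (the function name is illustrative, not the exact code in lib/Support/Unix/Path.inc): consult MNT_LOCAL where the platform defines it, and otherwise conservatively report every mount as local:

```cpp
#include <sys/param.h>
#include <sys/mount.h> // statfs()/MNT_LOCAL on BSD-like systems

static bool isLocalMount(const char *Path) {
#if defined(MNT_LOCAL)
  struct statfs Fs;
  if (statfs(Path, &Fs) != 0)
    return false;
  return (Fs.f_flags & MNT_LOCAL) != 0;
#else
  // Emscripten (like Fuchsia) has no remote file-system mounts, so
  // every path can safely be reported as local.
  (void)Path;
  return true;
#endif
}
```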
|
Option class.
This reverts r364134 (git commit a5b83bc9e3b8e8945b55068c762bd6c73621a4b0)
It caused errors in the ASan bot, so the GeneralCategory global needs to
be changed to a ManagedStatic.
Differential Revision: https://reviews.llvm.org/D62105
llvm-svn: 364141
|
vselect(extract_subvector(x,0),extract_subvector(y,0),extract_subvector(z,0))
llvm-svn: 364136
|
This turned out to be surprisingly effective. I was originally doing this just for completeness' sake, but it seems like there are a lot of cases where SCEV's exit count reasoning is stronger than its isKnownPredicate reasoning.
Once this is in, I'm thinking about trying to build on the same infrastructure to eliminate provably untaken checks. There may be something generally interesting here.
Differential Revision: https://reviews.llvm.org/D63618
llvm-svn: 364135
|
Summary:
This change processes `OptionCategory`s and `SubCommand`s as they
are seen instead of caching them in the Option class and processing
them later. Doing so simplifies the work that needs to be done by the
Global parser and significantly reduces the size of the Option class to
a mere 64 bytes.
Removing the `OptionCategory` cache saved 24 bytes, and removing
the `SubCommand` cache saved an additional 48 bytes, for a total
reduction of 72 bytes.
Reviewers: beanz, zturner, MaskRay, serge-sans-paille
Reviewed By: serge-sans-paille
Subscribers: serge-sans-paille, tstellar, zturner, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D62105
llvm-svn: 364134
|
After r248261, the indentation inside a namespace definition switches
between indenting and not indenting one level for that namespace; the
abomination occurs in the middle of a class definition. Fix that.
llvm-svn: 364133
|
A comment that applies to a virtual destructor was placed on a class
constructor. Move the comment to where it belongs.
llvm-svn: 364132
|
llvm-svn: 364130
|
llvm-svn: 364129
|
This is useful for allowing code to efficiently take an address
that can be later mapped onto debug info. Currently the hwasan
pass achieves this by taking the address of the current function:
http://llvm-cs.pcc.me.uk/lib/Transforms/Instrumentation/HWAddressSanitizer.cpp#921
but this costs two instructions (plus a GOT entry in PIC code) per function
with stack variables. This will allow the cost to be reduced to a single
instruction.
Differential Revision: https://reviews.llvm.org/D63471
llvm-svn: 364126
|
The link dependencies are already specified in LLVMBuild.txt
llvm-svn: 364125
|
GlobalISel/IRTranslator.cpp now references SelectionDAG/FunctionLoweringInfo.cpp.
This fixes a link error in -DBUILD_SHARED_LIBS=on builds:
ld.lld: error: undefined symbol: llvm::FunctionLoweringInfo::clear()
>>> referenced by IRTranslator.cpp:2198 (../lib/CodeGen/GlobalISel/IRTranslator.cpp:2198)
>>> lib/CodeGen/GlobalISel/CMakeFiles/LLVMGlobalISel.dir/IRTranslator.cpp.o:(llvm::IRTranslator::finalizeFunction())
llvm-svn: 364124
|
To help produce better diagnostics for stack use-after-return, we'd like
to be able to determine the addresses of each HWASANified function's local
variables given a small amount of information recorded on entry to the
function. Currently we require all HWASANified functions to use frame pointers
and record (PC, FP) on function entry. This works better than recording SP
because FP cannot change during the function, unlike SP which can change
e.g. due to dynamic alloca.
However, most variables currently end up using SP-relative locations in their
debug info. This prevents us from recomputing the address of most variables
because the distance between SP and FP isn't recorded in the debug info. To
address this, make the AArch64 backend prefer FP-relative debug locations
when producing debug info for HWASANified functions.
Differential Revision: https://reviews.llvm.org/D63300
llvm-svn: 364117
|
On Windows ARM64, the intrinsic __debugbreak is compiled into brk #0xF000,
which is mapped to llvm.debugtrap in Clang. brk #0xF000 is the defined
breakpoint instruction on ARM64 and is recognized by the Windows debugger and
exception handling code, so llvm.debugtrap should map to it instead of
redirecting to llvm.trap (brk #1) as the default implementation does.
Differential Revision: https://reviews.llvm.org/D63635
llvm-svn: 364115
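A tiny usage example of the intrinsic in question (requires a Windows ARM64 toolchain; the expected codegen is the behaviour the commit describes, not something verified here):

```cpp
#include <intrin.h>

int main() {
  // Clang lowers __debugbreak to llvm.debugtrap; on ARM64 Windows this
  // should now emit `brk #0xF000` rather than falling back to `brk #1`.
  __debugbreak();
  return 0;
}
```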
|
This reverts r364084 (git commit 5698921be2d567f6abf925479ac9f5a376d6d74f)
It caused crashes while compiling a file in Chrome. Reduction
forthcoming.
llvm-svn: 364111
|
The VM layout on iOS is not stable between releases. On 64-bit iOS and
its derivatives we use a dynamic shadow offset that enables ASan to
search for a valid location for the shadow heap on process launch rather
than hardcoding it.
This commit extends that approach to 32-bit iOS, its derivatives, and
their simulators.
rdar://50645192
rdar://51200372
rdar://51767702
Reviewed By: delcypher
Differential Revision: https://reviews.llvm.org/D63586
llvm-svn: 364105
|
Fixes missing piece from r363990.
llvm-svn: 364099
|
(insert_subvector allzeros, (vzmovl X), 0)
128/256 bit scalar_to_vectors are canonicalized to (insert_subvector undef, (scalar_to_vector), 0). We have isel patterns that try to match this pattern being used by a vzmovl to use a 128-bit instruction and a subreg_to_reg.
This patch detects the insert_subvector undef portion of this and pulls it through the vzmovl, creating a narrower vzmovl and an insert_subvector allzeroes. We can then match the insertsubvector into a subreg_to_reg operation by itself. Then we can fall back on existing (vzmovl (scalar_to_vector)) patterns.
Note: while the scalar_to_vector case is the motivating one, I didn't restrict the combine to just that case. I'm also wondering about shrinking any 256/512 vzmovl to an extract_subvector+vzmovl+insert_subvector(allzeros), but I fear that would have bad implications for shuffle combining.
I also think there is more canonicalization we can do with vzmovl with loads or scalar_to_vector with loads to create vzload.
Differential Revision: https://reviews.llvm.org/D63512
llvm-svn: 364095
|
We don't have any Custom handling during type legalization. Only
operation legalization.
Fixes PR42355
llvm-svn: 364093
|
opcodes in ReplaceNodeResults.
This should be unreachable, but bugs can make it reachable. This
adds a debug print so we can see the bad node in the output when
the llvm_unreachable triggers.
llvm-svn: 364091
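A minimal sketch of the pattern being added (illustrative only, not the actual ReplaceNodeResults code; the helper name and message are made up):

```cpp
#include "llvm/Support/Debug.h"
#include "llvm/Support/ErrorHandling.h"
#include "llvm/Support/raw_ostream.h"

// Instead of asserting blindly, dump the offending node/opcode first so the
// failure is diagnosable from the crash output of a debug build.
static void handleUnexpectedOpcode(unsigned Opcode) {
#ifndef NDEBUG
  llvm::dbgs() << "ReplaceNodeResults: unexpected opcode " << Opcode << "\n";
#endif
  llvm_unreachable("Do not know how to custom type legalize this operation!");
}
```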
|
Subvector shuffling often ends up as insert/extract subvector.
llvm-svn: 364090
|
and G_BRJT ops.
With this we can now fully code generate jump tables, which is important for code size.
Differential Revision: https://reviews.llvm.org/D63223
llvm-svn: 364086
|
tables and range checks.
This change makes use of the newly refactored SwitchLoweringUtils code from
SelectionDAG in order to generate jump tables and range checks where appropriate.
Much of this code is ported from SDAG with some modifications. We generate
G_JUMP_TABLE and G_BRJT instructions when JT opportunities are found. This means
that targets which previously relied on the naive one MBB per case stmt
translation will now start falling back until they add support for the new opcodes.
For range checks, we don't generate any previously unused operations. This
just recognizes contiguous ranges of case values and generates a single block per
range. Single case value blocks are just a special case of ranges so we get that
support almost for free.
There are still some optimizations missing that I haven't ported over, and
bit-tests are also unimplemented. This patch series is already complex enough.
Actual arm64 support for selection of jump tables is coming in a later patch.
Differential Revision: https://reviews.llvm.org/D63169
llvm-svn: 364085
|
This patch introduces a new heuristic for guiding operand reordering. The new "look-ahead" heuristic can look beyond the immediate predecessors. This helps break ties when the immediate predecessors have identical opcodes (see lit test for an example).
Committed on behalf of @vporpo (Vasileios Porpodas)
Differential Revision: https://reviews.llvm.org/D60897
llvm-svn: 364084
|
We already use vmovq for v2i64/v2f64 vzmovl, but we were using
blendpd+xorpd for v4i64/v4f64/v8i64/v8f64 when optimizing for speed,
or movsd+xorpd under optsize.
I think the blend with 0 or movss/d is only needed for
vXi32 where we don't have an instruction that can move 32
bits from one xmm to another while zeroing upper bits.
movq is no worse than blendpd on any known CPUs.
llvm-svn: 364079
|
llvm-svn: 364076
|
We sometimes get poor code size because constants of types < 32b are legalized
as 32 bit G_CONSTANTs with a truncate to fit. This works but means that the
localizer can no longer sink them (although it's possible to extend it to do so).
On AArch64 however s8 and s16 constants can be selected in the same way as s32
constants, with a mov pseudo into a W register. If we make s8 and s16 constants
legal then we can avoid unnecessary truncates, they can be CSE'd, and the
localizer can sink them as normal.
There is a caveat: if the user of a smaller constant has to widen the sources,
we end up with an anyext of the smaller typed G_CONSTANT. This can cause
regressions because of the additional extend and missed pattern matching. To
remedy this, there's a new artifact combiner to generate the wider G_CONSTANT
if it's legal for the target.
Differential Revision: https://reviews.llvm.org/D63587
llvm-svn: 364075
|
This requires 3 wait states unless there is a wait or VALU in
between.
Differential Revision: https://reviews.llvm.org/D63619
llvm-svn: 364074
|
Summary:
```
%a = sub i32 31, %x
%r = shl i32 1, %a
=>
%d = shl i32 1, 31
%r = lshr i32 %d, %x
Done: 1
Optimization is correct!
```
https://rise4fun.com/Alive/btZm
Reviewers: spatel, lebedev.ri, nikic
Reviewed By: lebedev.ri
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D63652
llvm-svn: 364073
|
TargetLoweringBase::isBinOp checks isCommutativeBinOp as a fallback, so don't duplicate.
llvm-svn: 364072
|
llvm-svn: 364068
|
Summary: Signedness does not change the number of trailing zeros.
Reviewers: spatel, lebedev.ri, nikic
Reviewed By: lebedev.ri
Differential Revision: https://reviews.llvm.org/D63546
llvm-svn: 364064
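A quick standalone check of that claim: sign- and zero-extension agree on the low bits, and for zero both extensions give zero, so the trailing-zero count never changes (the test below skips zero only because __builtin_ctz(0) is undefined in C/C++):

```cpp
#include <cassert>
#include <cstdint>

int main() {
  for (unsigned V = 1; V < 256; ++V) {
    uint32_t SExt = (uint32_t)(int32_t)(int8_t)V; // sign-extended i8 -> i32
    uint32_t ZExt = (uint32_t)(uint8_t)V;         // zero-extended i8 -> i32
    // The lowest set bit lives in the low 8 bits either way.
    assert(__builtin_ctz(SExt) == __builtin_ctz(ZExt));
  }
  return 0;
}
```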
|
Patch based on suggestion by James Molloy (@jmolloy) in:
https://bugs.llvm.org/show_bug.cgi?id=42346
llvm-svn: 364062
|
detection code.
Move the "extract from insert detection code" into a lambda helper function.
llvm-svn: 364059
|
Summary:
The motivation for this was to propagate fast-math flags like nnan and
ninf on vector floating point operations to the corresponding scalar
operations to take advantage of follow-on optimizations. But I think
the same argument applies to all of our IR flags: if they apply to the
vector operation then they also apply to all the individual scalar
operations, and they might enable follow-on optimizations.
Subscribers: hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D63593
llvm-svn: 364051
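A small sketch of the mechanism, assuming Instruction::copyIRFlags is the hook used for the propagation (the helper name below is made up, not the Scalarizer's actual code):

```cpp
#include "llvm/IR/Instruction.h"
#include "llvm/Support/Casting.h"

using namespace llvm;

// When the scalarizer splits a vector operation into per-lane scalar
// operations, the vector instruction's IR flags (nnan, ninf, nsw/nuw,
// exact, ...) are valid for every lane, so copy them onto each clone.
static void propagateIRFlags(Instruction *VecOp, Value *ScalarOp) {
  if (auto *ScalarInst = dyn_cast<Instruction>(ScalarOp))
    ScalarInst->copyIRFlags(VecOp);
}
```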
|
Summary:
LLVM allows targets to provide information that guides optimisations
made to LLVM IR. This is done with callbacks on a TargetTransformInfo object.
This patch adds a TargetTransformInfo class for RISC-V. This will allow us to
implement RISC-V specific callbacks as they become necessary.
This commit also adds the getIntImmCost callbacks, and tests them with a simple
constant hoisting test. Our immediate costs are on the conservative side, for
the moment, but we prevent hoisting in most circumstances anyway.
Previous review was on D63007
Reviewers: asb, luismarques
Reviewed By: asb
Subscribers: ributzka, MaskRay, llvm-commits, Jim, benna, psnobl, jocewei, PkmX, rkruppe, the_o, brucehoult, MartinMosbeck, rogfer01, edward-jones, zzheng, jrtc27, shiva0217, kito-cheng, niosHD, sabuasal, apazos, simoncook, johnrusso, rbar, hiraditya, mgorny
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D63433
llvm-svn: 364046
|
These instructions let you load half a vector register at once from
two general-purpose registers, or vice versa.
The assembly syntax for these instructions mentions the vector
register name twice. For the move _into_ a vector register, the MC
operand list also has to mention the register name twice (once as the
output, and once as an input to represent where the unchanged half of
the output register comes from). So we can conveniently assign one of
the two asm operands to be the output $Qd, and the other $QdSrc, which
avoids confusing the auto-generated AsmMatcher too much. For the move
_from_ a vector register, there's no way to get round the fact that
both instances of that register name have to be inputs, so we need a
custom AsmMatchConverter to avoid generating two separate output MC
operands. (And even that wouldn't have worked if it hadn't been for
D60695.)
Reviewers: dmgreen, samparker, SjoerdMeijer, t.p.northover
Subscribers: javed.absar, kristof.beyls, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D62679
llvm-svn: 364041
|