llvm-svn: 371416
There are no scalar extloads.
llvm-svn: 371414
Fixes 8-byte, 8-byte aligned LDS loads. The 16-byte case is still
broken because it is not reported as legal.
llvm-svn: 371413
llvm-svn: 371412
The pointer is always a VGPR. Also stop hardcoding the pointer size
to 64.
llvm-svn: 371411
llvm-svn: 371409
llvm-svn: 371407
This will allow optimization patterns which fold adds away to work.
llvm-svn: 371406
The family of 'dual-accumulating' vector multiply-add instructions
(VMLADAV, VMLALDAV and VRMLALDAVH) can all operate on both signed and
unsigned integer types, and they all have an 'exchange' variant (with
an X in the name) that modifies which pairs of vector lanes in the two
inputs are multiplied together. But there's a clause in the spec that
says that the X variants don't operate on unsigned integer types,
only signed. You can have X, or unsigned, or neither, but not both.
We didn't notice that clause when we implemented the MC support for
these instructions, so LLVM believes that things like VMLADAVX.U8 do
exist, contradicting the spec. Here I fix that by conditioning them
out in Tablegen.
In order to do that, I've reversed the nesting order of the Tablegen
multiclasses for those instructions. Previously, the innermost
multiclass generated the X and not-X variants, and the one outside
that generated the A and not-A variants. Now X is done by the outer
multiclass, which allows me to bypass that one when I only want the
two not-X variants.
Changing the multiclass nesting order also changes the names of the
instruction ids unless I make a special effort not to. I decided that
while I was changing them anyway I'd make them look nicer; so now the
instructions have names like MVE_VMLADAVs32 or MVE_VMLADAVaxs32,
instead of cumbersome _noacc_noexch suffixes.
The corresponding multiply-subtract instructions are unaffected. Those
don't accept unsigned types at all, either in the spec or in LLVM.
Reviewers: ostannard, dmgreen
Subscribers: javed.absar, kristof.beyls, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D67214
llvm-svn: 371405
llvm-svn: 371404
Reapply with a fix to reduce the resources required by the compiler: use
unsigned[2] instead of std::pair. This makes clang and gcc compile the
generated file several times faster, and will hopefully reduce the
resource requirements on Visual Studio as well. The fix is a little
ugly, but it's clearly the same issue the previous author of
DFAPacketizer faced (the previous tables also use unsigned[2], equally
inelegantly).
This patch allows the DFAPacketizer to be queried after a packet is formed to work out which
resources were allocated to the packetized instructions.
This is particularly important for targets that do their own bundle packing - it's not
sufficient to know simply that instructions can share a packet; which slots are used is
also required for encoding.
This extends the emitter to emit a side-table containing resource usage diffs for each
state transition. The packetizer maintains a set of all possible resource states in its
current state. After packetization is complete, all remaining resource states are
possible packetization strategies.
The sidetable is only ~500K for Hexagon, but the extra tracking is disabled by default
(most uses of the packetizer like MachinePipeliner don't care and don't need the extra
maintained state).
Differential Revision: https://reviews.llvm.org/D66936
llvm-svn: 371399
Add codegen support for vctp{8,16,32}.
Differential Revision: https://reviews.llvm.org/D67344
llvm-svn: 371395
packetized instructions
This patch allows the DFAPacketizer to be queried after a packet is formed to work out which
resources were allocated to the packetized instructions.
This is particularly important for targets that do their own bundle packing - it's not
sufficient to know simply that instructions can share a packet; which slots are used is
also required for encoding.
This extends the emitter to emit a side-table containing resource usage diffs for each
state transition. The packetizer maintains a set of all possible resource states in its
current state. After packetization is complete, all remaining resource states are
possible packetization strategies.
The sidetable is only ~500K for Hexagon, but the extra tracking is disabled by default
(most uses of the packetizer like MachinePipeliner don't care and don't need the extra
maintained state).
Differential Revision: https://reviews.llvm.org/D66936
........
Reverted as this is causing "compiler out of heap space" errors on MSVC 2017/19 NDEBUG builds
llvm-svn: 371393
Summary:
This patch implements two arithmetic intrinsics:
* int_aarch64_sve_abs
* int_aarch64_sve_neg
testing the support for scalable vector types in intrinsics added in D65930.
Reviewed By: greened
Differential Revision: https://reviews.llvm.org/D65931
llvm-svn: 371388
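In IR the new intrinsics look roughly like this (a sketch; the operand
order - merge value, governing predicate, source vector - is an
assumption here):
```
define <vscale x 4 x i32> @abs_i32(<vscale x 4 x i32> %a, <vscale x 4 x i1> %pg,
                                   <vscale x 4 x i32> %b) {
  ; predicated abs: inactive lanes take their value from %a
  %out = call <vscale x 4 x i32> @llvm.aarch64.sve.abs.nxv4i32(
              <vscale x 4 x i32> %a, <vscale x 4 x i1> %pg, <vscale x 4 x i32> %b)
  ret <vscale x 4 x i32> %out
}
declare <vscale x 4 x i32> @llvm.aarch64.sve.abs.nxv4i32(<vscale x 4 x i32>, <vscale x 4 x i1>, <vscale x 4 x i32>)
```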
We should not be generating Neon stack loads/stores even for these large
registers.
No test here because my understanding is we will only generate these QQPR regs
for intrinsics and VLDn's. The tests will follow once those are available.
Differential revision: https://reviews.llvm.org/D67169
llvm-svn: 371386
llvm-svn: 371385
Loosely based on the DAGCombiner version, but this part is slightly simpler
in GlobalISel because all address calculation is performed by G_GEP. That
makes the inc/dec distinction moot, so there's just pre/post to think about.
No targets can handle it yet so testing is via a special flag that overrides
target hooks.
llvm-svn: 371384
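For illustration, a sketch of the kind of IR this combine targets - the
G_GEP increment can be folded into a post-indexed store once a target
claims support (or the override flag is passed):
```
define i8* @store_post_inc(i8* %p, i8 %v) {
  store i8 %v, i8* %p                     ; store through %p ...
  %q = getelementptr i8, i8* %p, i64 4    ; ... then advance the pointer
  ret i8* %q                              ; candidate for one post-indexed store
}
```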
The aim of this patch is to refactor how we handle and report errors.
I suggest using the same approach we use in LLD: delayed error reporting.
For that I introduced a 'HasError' flag which is set whenever we report an error.
Now we do not exit instantly on any error. The benefits are:
1) There are no more 'exit(1)' calls in the library code.
2) The code was simplified significantly in a few places.
3) It is now possible to print multiple errors instead of only one.
Also, I changed the messages to be lower case and removed the trailing full stops.
Differential revision: https://reviews.llvm.org/D67182
llvm-svn: 371380
Specify the Unpredictable bits, and return softfails when appropriate.
Patch by Mark Murray!
Differential revision: https://reviews.llvm.org/D66939
llvm-svn: 371374
The incoming accumulator value can be discovered through a sext, in
which case there will be a mismatch between the input and the result.
So sign extend the accumulator input if we're performing a 64-bit mac.
Differential Revision: https://reviews.llvm.org/D67220
llvm-svn: 371370
This is simpler than using findRegisterDefOperandIdx() + setIsDead().
Review: Ulrich Weigand.
llvm-svn: 371369
llvm-svn: 371368
llvm-svn: 371366
llvm-svn: 371364
llvm-svn: 371363
Summary:
After tailduplication, we have redundant copies. We can remove these
copies in machine-cp if it's safe to, i.e.
```
$reg0 = OP ...
... <<< No read or clobber of $reg0 and $reg1
$reg1 = COPY $reg0 <<< $reg0 is killed
...
<RET>
```
will be transformed to
```
$reg1 = OP ...
...
<RET>
```
Differential Revision: https://reviews.llvm.org/D65267
llvm-svn: 371359
This matches what we do for f32/f64. gcc also does this for fp128.
llvm-svn: 371357
This patch decodes target and faux shuffles with getTargetShuffleInputs - a reduced version of resolveTargetShuffleInputs that doesn't resolve SM_SentinelZero cases, so we can correctly remove zero vectors if they aren't demanded.
llvm-svn: 371353
two zero/undef vector inputs into an all-zeroes vector.
If the two zero vectors have undefs in different places, they
won't get combined by simplifySelect.
This fixes a regression from an earlier commit.
llvm-svn: 371351
patterns for zero vectors with all types.
The change to avx512-vec-cmp.ll is a regression, but should be
easy to fix. It occurs because the getZeroVector call was
canonicalizing both sides to the same node, which SimplifySelect
was then able to simplify. But since getZeroVector was only called
for some VTs, this wasn't a robust way to perform the combine.
The change to vector-shuffle-combining-ssse3.ll is more
instructions, but removes a constant pool load, so it's unclear
whether it's a regression or not.
llvm-svn: 371350
more cases (PR43246)
Summary:
This is motivated by D67122 sanitizer check enhancement.
That patch seemingly worsens `-fsanitize=pointer-overflow`
overhead from 25% to 50%, which strongly implies missing folds.
In this particular case, given
```
char* test(char& base, unsigned long offset) {
return &base + offset;
}
```
it will end up producing something like
https://godbolt.org/z/LK5-iH
which after optimizations reduces down to roughly
```
define i1 @t0(i8* nonnull %base, i64 %offset) {
%base_int = ptrtoint i8* %base to i64
%adjusted = add i64 %base_int, %offset
%non_null_after_adjustment = icmp ne i64 %adjusted, 0
%no_overflow_during_adjustment = icmp uge i64 %adjusted, %base_int
%res = and i1 %non_null_after_adjustment, %no_overflow_during_adjustment
ret i1 %res
}
```
Without D67122 there was no `%non_null_after_adjustment`,
and in this particular case we can get rid of the overhead:
Here we add some offset to a non-null pointer,
and check that the result does not overflow and is not a null pointer.
But since the base pointer is already non-null, and we check for overflow,
that overflow check will already catch the null pointer,
so the separate null check is redundant and can be dropped.
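Concretely, the fold is expected to reduce the IR above to just the
overflow check (a sketch, not the exact output):
```
define i1 @t0(i8* nonnull %base, i64 %offset) {
  %base_int = ptrtoint i8* %base to i64
  %adjusted = add i64 %base_int, %offset
  ; %base is non-null, so any non-overflowing result is also non-null
  %no_overflow = icmp uge i64 %adjusted, %base_int
  ret i1 %no_overflow
}
```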
Alive proofs:
https://rise4fun.com/Alive/WRzq
There are more "unsigned-add-with-overflow" patterns that are not handled here,
but this is the main pattern, the one we currently consider canonical,
so it makes sense to handle it.
https://bugs.llvm.org/show_bug.cgi?id=43246
Reviewers: spatel, nikic, vsk
Reviewed By: spatel
Subscribers: hiraditya, llvm-commits, reames
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D67332
llvm-svn: 371349
manually
As reported in post-commit review of r370327,
there is a case where the code crashes.
As discussed with Craig Topper, the problem is that getConstant()
internally calls getSplatBuildVector(), so we don't insert
the constant itself.
If we do that manually, we're good.
llvm-svn: 371346
| |
getZeroVector canonicalizes the type to vXi32, but that's a
legalization action. We should use the most correct type if
possible.
llvm-svn: 371345
no carry.
I modified the ARM test to use two inputs instead of 0 so the
test hopefully still tests what was intended.
llvm-svn: 371344
getZeroVector/getOnesVector on the types we already have isel patterns for.
llvm-svn: 371343
| |
This is similar to the existing fold for splats added with:
rL365379
If we can adjust the shuffle mask to include another element
in an identity mask (if it changes vector length, that's an
extract/insert subvector operation in the backend), then that
can eliminate extractelement/insertelement pairs in IR.
All targets are expected to lower shuffles with identity masks
efficiently.
llvm-svn: 371340
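For illustration, a hypothetical IR pair this fold can now eliminate
(lane values chosen arbitrarily):
```
define <4 x float> @insert_from_other(<4 x float> %a, <4 x float> %b) {
  %e = extractelement <4 x float> %b, i32 2
  %r = insertelement <4 x float> %a, float %e, i32 2
  ret <4 x float> %r
}
; expected: a single shufflevector of %a and %b with mask <0, 1, 6, 3>,
; i.e. an identity mask except for the one lane taken from %b
```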
Summary:
Add zero-materializing XORs to X86's describeLoadedValue() hook in order
to produce call site values.
I have had to change the defs logic in collectCallSiteParameters() a bit
to be able to describe the XORs. The XORs implicitly define $eflags,
which would cause them to never be considered, due to a guard condition
that I->getNumDefs() is one. I have changed that condition so that we
now only consider instructions where a forwarded register overlaps with
the instruction's single explicit define. We still need to collect the implicit
defines of other forwarded registers to remove them from the work list.
I'm not sure how to move towards supporting instructions with multiple
explicit defines, cases where forwarded register are implicitly defined,
and/or cases where an instruction produces values for multiple forwarded
registers. Perhaps the describeLoadedValue() hook should take a register
argument, and we then leave it up to the hook to describe the loaded
value in that register? I have not yet encountered a situation where
that would be necessary though.
Reviewers: aprantl, vsk, djtodoro, NikolaPrica
Reviewed By: vsk
Subscribers: ychen, hiraditya, llvm-commits
Tags: #debug-info, #llvm
Differential Revision: https://reviews.llvm.org/D67225
llvm-svn: 371333
Summary:
This changes the ParamLoadedValue pair which the describeLoadedValue()
hook returns so that MachineOperand objects are returned instead of
pointers.
When describing call site values we may need to describe operands which
are not part of the instruction. One such example is zero-materializing
XORs on x86, which I have implemented support for in a child revision.
Instead of having to return a pointer to an operand stored somewhere
outside the instruction, start returning objects directly instead, as
that simplifies the code.
The MachineOperand class only holds POD members, and on x86-64 it is 32
bytes large. That combined with copy elision means that the overhead of
returning a machine operand object from the hook does not become very
large.
I benchmarked this on a 8-thread i7-8650U machine with 32 GB RAM. The
benchmark consisted of building a clang 8.0 binary configured with:
-DCMAKE_BUILD_TYPE=RelWithDebInfo \
-DLLVM_TARGETS_TO_BUILD=X86 \
-DLLVM_USE_SANITIZER=Address \
-DCMAKE_CXX_FLAGS="-Xclang -femit-debug-entry-values -stdlib=libc++"
The average wall clock time increased by 4 seconds, from 62:05 to
62:09, which is a 0.1% increase.
Reviewers: aprantl, vsk, djtodoro, NikolaPrica
Reviewed By: vsk
Subscribers: hiraditya, ychen, llvm-commits
Tags: #debug-info, #llvm
Differential Revision: https://reviews.llvm.org/D67261
llvm-svn: 371332
llvm-svn: 371331
Use APInt to create the comparison mask instead.
llvm-svn: 371330
This generalizes the existing <32 x i1> pre-AVX2 split code to support reductions from <64 x i1> as well, we can probably generalize to any larger pow2 case in the future if the (unlikely) need ever arises.
We still need to tweak combineBitcastvxi1 to improve AVX512F codegen, as it assumes vXi1 types should be handled on the mask registers even when they aren't legal.
Differential Revision: https://reviews.llvm.org/D67070
llvm-svn: 371328
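A sketch of the pattern in question - a <64 x i1> compare mask reduced
to a scalar, which pre-AVX2 now gets split across several PMOVMSKB
results:
```
define i64 @mask64(<64 x i8> %a, <64 x i8> %b) {
  %c = icmp eq <64 x i8> %a, %b
  %m = bitcast <64 x i1> %c to i64   ; movmsk-style bitmask reduction
  ret i64 %m
}
```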
on SSE2 and later.
isel used to require zero vectors to be canonicalized to a single
type to minimize the number of patterns needed to match. This is
no longer required.
I plan to do this to integers too, but floating point was simpler
to start with. Integer has a complication where v32i16/v64i8 aren't
legal when the other 512-bit integer types are.
llvm-svn: 371325
llvm-svn: 371323
This patch enables generation of fused multiply-add/sub instructions for operations on fp16.
Tested on aarch64-linux.
Differential Revision: https://reviews.llvm.org/D67297
llvm-svn: 371321
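A minimal sketch of IR that should now select to a fused fp16
multiply-add on AArch64 (assuming the fullfp16 feature is enabled):
```
define half @fmadd_f16(half %a, half %b, half %c) {
  %m = fmul contract half %a, %b   ; 'contract' allows fusing mul+add
  %r = fadd contract half %m, %c
  ret half %r
}
```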
Summary:
Similar to the previous prefer-256-bit flag. We might want to
enable this by default for some CPUs. This just starts the initial
work to implement it and prove that it affects TTI's vector width.
Reviewers: RKSimon, echristo, spatel, atdt
Reviewed By: RKSimon
Subscribers: lebedev.ri, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D67311
llvm-svn: 371319
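Assuming the flag mirrors prefer-256-bit (the feature name below is an
assumption), it could be exercised per function via target features:
```
define void @narrow_vec_width() #0 {
  ret void
}
; "+prefer-128-bit" is assumed by analogy with the existing "+prefer-256-bit"
attributes #0 = { "target-features"="+prefer-128-bit" }
```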
llvm-svn: 371317
Use getAPIntValue() directly - this is mainly a best-practice style issue to help prevent fuzz tests from blowing up when an i12345 (or whatever) is generated.
Use getConstantOperandVal/getConstantOperandAPInt wrappers where possible.
llvm-svn: 371315
```
.type foo,@gnu_indirect_function
.set foo,foo_resolver
.set foo2,foo
.set foo3,foo2
```
The types of foo2 and foo3 should be STT_GNU_IFUNC, but we currently
resolve them to the type of foo_resolver. This patch fixes it.
Differential Revision: https://reviews.llvm.org/D67206
Patch by Senran Zhang
llvm-svn: 371312
TargetLowering::expandFixedPointMul
Summary:
Normally TargetLowering::expandFixedPointMul would handle
SMULFIXSAT with scale zero by using an SMULO to compute the
product and determine if saturation is needed (i.e. if overflow
happened). But if SMULO isn't custom/legal, it falls through
and uses the same MULHS/SMUL_LOHI-based technique as for
non-zero scales.
The problem was that when checking for overflow (handling
saturation) without using MULO, we did not expect to find a zero
scale, so we ended up hitting an assertion in
APInt::getLowBitsSet(VTSize, Scale - 1)
This patch fixes the problem by adding a new special case for
how saturation is computed when the scale is zero.
Reviewers: RKSimon, bevinh, leonardchan, spatel
Reviewed By: RKSimon
Subscribers: wuzish, nemanjai, hiraditya, MaskRay, jsji, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D67071
llvm-svn: 371309
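A minimal reproducer of the previously-asserting case looks something
like this (a sketch; it assumes a target where SMULO is neither custom
nor legal):
```
define i32 @smul_sat_scale0(i32 %a, i32 %b) {
  ; scale 0: a plain signed multiply whose result saturates on overflow
  %r = call i32 @llvm.smul.fix.sat.i32(i32 %a, i32 %b, i32 0)
  ret i32 %r
}
declare i32 @llvm.smul.fix.sat.i32(i32, i32, i32)
```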
Summary:
Add an intrinsic that takes 2 unsigned integers, with
their scale provided as the third argument, and
performs fixed-point multiplication on them. The
result is saturated, clamped between the largest and
smallest representable values of the first 2 operands.
This is a part of implementing fixed point arithmetic
in clang where some of the more complex operations
will be implemented as intrinsics.
Patch by: leonardchan, bjope
Reviewers: RKSimon, craig.topper, bevinh, leonardchan, lebedev.ri, spatel
Reviewed By: leonardchan
Subscribers: ychen, wuzish, nemanjai, MaskRay, jsji, jdoerfert, Ka-Ka, hiraditya, rjmccall, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D57836
llvm-svn: 371308
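In IR the new intrinsic is used like this (illustrative sketch; the
scale operand must be a constant):
```
define i32 @umul_q16(i32 %a, i32 %b) {
  ; multiply two 16.16 unsigned fixed-point values, saturating on overflow
  %r = call i32 @llvm.umul.fix.sat.i32(i32 %a, i32 %b, i32 16)
  ret i32 %r
}
declare i32 @llvm.umul.fix.sat.i32(i32, i32, i32)
```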