instructions as well.
Without this we allow "vmovd %rax, %xmm0", but not "vmovd %rax, %xmm16".
This exists to remain compatible with a silly bug where really old versions of the GNU assembler required movd instead of movq on these instructions. The compatibility hack then crept forward to the AVX versions too, but we didn't propagate it to AVX512.
llvm-svn: 321903
[Hexagon] Even simpler patterns for sign- and zero-extending HVX vectors
llvm-svn: 321902
'movq' instead.
This behavior existed to work with an old version of the GNU assembler on macOS that only accepted this form. Newer versions of the GNU assembler and the current LLVM-derived assembler on macOS accept movq as well.
llvm-svn: 321898
llvm-svn: 321897
Only non-bool vectors.
llvm-svn: 321895
llvm-svn: 321894
llvm-svn: 321893
llvm-svn: 321892
llvm-svn: 321891
These arise because enums are 'int' by default.
llvm-svn: 321887
operands
Currently the assembler would accept, e.g. `ldr r0, [s0, #12]` and similar.
This patch adds checks that only general-purpose registers are used in address
operands, shifted registers, and shift amounts.
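As a standalone illustration of the kind of check involved (a hypothetical helper operating on register names; the real patch works on the parsed operand classes inside the ARM AsmParser):

  #include <cassert>
  #include <cctype>
  #include <string>

  // Address operands, shifted registers and shift amounts may only
  // name general-purpose registers (r0-r15 and their aliases), never
  // s/d/q floating-point registers.
  static bool isGPROperand(const std::string &Reg) {
    if (Reg == "sp" || Reg == "lr" || Reg == "pc")
      return true;
    if (Reg.size() < 2 || Reg[0] != 'r' ||
        !std::isdigit((unsigned char)Reg[1]))
      return false;
    return std::stoi(Reg.substr(1)) <= 15;
  }

  int main() {
    assert(isGPROperand("r1"));   // `ldr r0, [r1, #12]` is accepted
    assert(!isGPROperand("s0"));  // `ldr r0, [s0, #12]` is now rejected
  }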
Differential revision: https://reviews.llvm.org/D39910
llvm-svn: 321866
Instead of using, for example, `dup v0.4s, wzr`, which transfers between
register files, use the more efficient `movi v0.4s, #0`.
Differential revision: https://reviews.llvm.org/D41515
llvm-svn: 321824
of integer.
llvm-svn: 321821
Wide Thumb2 instructions should be emitted into the object file as pairs of
16-bit words of the appropriate endianness, not one 32-bit word.
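For illustration, a minimal standalone sketch of the intended layout (a hypothetical helper, not the actual MC emitter code):

  #include <cstdint>
  #include <vector>

  // A 32-bit Thumb2 encoding is emitted high halfword first, with each
  // 16-bit halfword written in the target's byte order.
  static void emitThumb2Wide(uint32_t Insn, bool IsLittleEndian,
                             std::vector<uint8_t> &Out) {
    const uint16_t Halves[2] = {uint16_t(Insn >> 16), uint16_t(Insn)};
    for (uint16_t H : Halves) {
      if (IsLittleEndian) {
        Out.push_back(uint8_t(H));        // low byte of the halfword
        Out.push_back(uint8_t(H >> 8));   // then the high byte
      } else {
        Out.push_back(uint8_t(H >> 8));
        Out.push_back(uint8_t(H));
      }
    }
  }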
Differential revision: https://reviews.llvm.org/D41185
llvm-svn: 321799
llvm-svn: 321798
Select G_PHI to PHI and manually constrain the result register. This is
very similar to how COPY is handled, so extract and reuse some of that
code.
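Roughly, the shared shape looks like this (a sketch against the GlobalISel APIs of the time, with the register-class choice left to the caller; not the literal ARM code):

  #include "llvm/CodeGen/GlobalISel/RegisterBankInfo.h"
  #include "llvm/CodeGen/MachineInstr.h"
  #include "llvm/CodeGen/MachineRegisterInfo.h"
  #include "llvm/CodeGen/TargetInstrInfo.h"
  using namespace llvm;

  // Rewrite G_PHI to the target-independent PHI opcode, then constrain
  // the def to a register class the way selectCopy does for COPY.
  static bool selectPHI(MachineInstr &I, const TargetInstrInfo &TII,
                        MachineRegisterInfo &MRI,
                        const TargetRegisterClass &RC) {
    I.setDesc(TII.get(TargetOpcode::PHI));
    return RegisterBankInfo::constrainGenericRegister(
               I.getOperand(0).getReg(), RC, MRI) != nullptr;
  }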
llvm-svn: 321797
Mark G_PHI as Legal for s32 and p0, and also for s64 if we have hard
float. Widen any smaller types.
llvm-svn: 321795
We used to handle G_CONSTANT with pointer type by forcing the type of
the result register to s32 and then letting TableGen handle it.
Unfortunately, setting the type only works for generic virtual
registers that haven't yet been constrained to a register class (e.g.
those used only by a COPY later on). If the result register has already
been constrained as a use of a previously selected instruction, then
setting the type will assert.
It would be nice to be able to teach TableGen to select pointer
constants the same as integer constants, but since it's such an edge
case (at the moment the only pointer constant that we're generally
interested in is 0, and that is mostly used for comparisons and selects,
which are also not supported by TableGen) it's probably not worth the
effort right now. Instead, handle pointer constants with some trivial
handwritten code.
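The shape of that handwritten handling, as a fragment sketch inside a hypothetical selector (opcode and operand details are illustrative, not the exact committed code):

  // In select(), before falling back to TableGen: a G_CONSTANT of
  // pointer type is only expected to hold null, so materialize it as a
  // plain immediate move.
  if (I.getOpcode() == TargetOpcode::G_CONSTANT &&
      MRI.getType(I.getOperand(0).getReg()).isPointer()) {
    if (!I.getOperand(1).getCImm()->isZero())
      return false;                        // only null is supported
    I.getOperand(1).ChangeToImmediate(0);  // CImm -> plain immediate
    I.setDesc(TII.get(ARM::MOVi));         // illustrative opcode choice
    return constrainSelectedInstRegOperands(I, TII, TRI, RBI);
  }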
llvm-svn: 321793
llvm-svn: 321755
llvm-svn: 321752
This custom inserter was added in r124272, at which time it added a bunch of Defs for Win64. In r150708 those Defs were removed, leaving only the "return BB". So I think the custom inserter is a NOP these days.
This patch removes the remaining code and stops tagging the instructions for custom insertion.
Differential Revision: https://reviews.llvm.org/D41671
llvm-svn: 321747
Currently we use SIGN_EXTEND in lowerMasksToReg as part of calling convention setup, but we don't require a specific value for the upper bits.
This patch changes it to ANY_EXTEND, which will be lowered as SIGN_EXTEND if it ends up sticking around.
llvm-svn: 321746
Besides the unsightly print-out, it was causing some buildbots to fail,
e.g. http://lab.llvm.org:8011/builders/clang-x86-windows-msvc2015/builds/9311
llvm-svn: 321711
After D41349, we can now directly access MCSubtargetInfo from
createARM*AsmBackend. This patch makes use of this, avoiding the need to
create a fresh MCSubtargetInfo (which was previously always done with a blank
CPU and feature string). Given that the total size of the change remains pretty
tiny and we're removing the old explicit destructor, I changed the STI field
to a reference rather than a pointer.
Differential Revision: https://reviews.llvm.org/D41693
llvm-svn: 321707
Summary:
Add a register class for SVE predicate operands that can only be p0-p7 (as opposed to p0-p15).
Patch [1/3] in a series to add predicated ADD/SUB instructions for SVE.
Reviewers: rengolin, mcrosier, evandro, fhahn, echristo, olista01, SjoerdMeijer, javed.absar
Reviewed By: fhahn
Subscribers: aemerson, javed.absar, tschuett, kristof.beyls, llvm-commits
Differential Revision: https://reviews.llvm.org/D41441
llvm-svn: 321699
Since these are experimental backends, I didn't have them configured to build
in my local config.
llvm-svn: 321696
Currently it's not possible to access MCSubtargetInfo from a TgtMCAsmBackend.
D20830 threaded an MCSubtargetInfo reference through
MCAsmBackend::relaxInstruction, but this isn't the only function that would
benefit from access. This patch removes the Triple and CPUString arguments
from createMCAsmBackend and replaces them with MCSubtargetInfo.
This patch just changes the interface without making any intentional
functional changes. Once in, several cleanups are possible:
* Get rid of the awkward MCSubtargetInfo handling in ARMAsmBackend
* Support 16-bit instructions when valid in MipsAsmBackend::writeNopData
* Get rid of the CPU string parsing in X86AsmBackend and just use a SubtargetFeature for HasNopl
* Emit 16-bit nops in RISCVAsmBackend::writeNopData if the compressed instruction set extension is enabled (see D41221)
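The resulting hook shape, roughly (simplified declarations for illustration; see D41349 for the exact signatures):

  namespace llvm {
  class MCAsmBackend;
  class MCRegisterInfo;
  class MCSubtargetInfo;
  class MCTargetOptions;
  class StringRef;
  class Triple;

  // Old: the triple and CPU string travel separately; features are lost.
  MCAsmBackend *createAsmBackendOld(const MCRegisterInfo &MRI,
                                    const Triple &TT, StringRef CPU,
                                    const MCTargetOptions &Options);

  // New: one MCSubtargetInfo carries triple, CPU and feature bits.
  MCAsmBackend *createAsmBackendNew(const MCSubtargetInfo &STI,
                                    const MCRegisterInfo &MRI,
                                    const MCTargetOptions &Options);
  } // namespace llvm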
This change initially exposed PR35686, which has since been resolved in r321026.
Differential Revision: https://reviews.llvm.org/D41349
llvm-svn: 321692
Differential Revision: https://reviews.llvm.org/D40524
Change-Id: Ie3a405b28503ceae999f5f3ba07a68fa733a2400
llvm-svn: 321674
(PR33325)
This is an extension of D31156 with the goal that we'll allow memcmp() == 0 expansion
for x86 to use 2 pairs of loads per block.
The memcmp expansion pass (formerly part of CGP) will generate this kind of pattern
with oversized integer compares, so we want to transform these into x86-specific vector
nodes before legalization splits things into scalar chunks.
See PR33325 for more details:
https://bugs.llvm.org/show_bug.cgi?id=33325
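As a scalar stand-in for the shape being targeted (64-bit chunks here purely for illustration; the real expansion uses pairs of 128-bit vector loads and compares):

  #include <cstdint>
  #include <cstring>

  // memcmp(a, b, 32) == 0 as one block: XOR corresponding chunks and
  // OR everything into a single zero test, instead of an early-exit
  // byte-by-byte comparison.
  static bool eq32(const unsigned char *A, const unsigned char *B) {
    uint64_t AW[4], BW[4];
    std::memcpy(AW, A, 32);
    std::memcpy(BW, B, 32);
    uint64_t Acc = 0;
    for (int I = 0; I < 4; ++I)
      Acc |= AW[I] ^ BW[I];
    return Acc == 0;
  }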
Differential Revision: https://reviews.llvm.org/D41618
llvm-svn: 321656
Tests updated to explicitly use fast-isel at -O0 instead of implicitly.
This change also allows an explicit -fast-isel option to override an
implicitly enabled global-isel. Otherwise -fast-isel would have no effect at -O0.
Differential Revision: https://reviews.llvm.org/D41362
llvm-svn: 321655
llvm-svn: 321650
Summary:
isReg() in AArch64AsmParser.cpp is a bit of a misnomer and would be better named 'isScalarReg()'.
Patch [1/3] in a series to add operand constraint checks for SVE's predicated ADD/SUB.
Reviewers: rengolin, mcrosier, evandro, fhahn, echristo
Reviewed By: fhahn
Subscribers: aemerson, javed.absar, llvm-commits, kristof.beyls
Differential Revision: https://reviews.llvm.org/D41445
llvm-svn: 321646
llvm-svn: 321644
Differential Revision: https://reviews.llvm.org/D41339
Patch by Shiva Chen.
llvm-svn: 321643
XLenVT in LowerFormalArguments is used only in an assert.
llvm-svn: 321642
llvm-svn: 321632
The custom lowering was just doing the same thing promotion would do.
llvm-svn: 321630
when using setOperationAction Promote for INT_TO_FP and FP_TO_INT
Currently the promotion for these ignores the normal getTypeToPromoteTo and instead just tries to double the element width. This is because the default behavior of getTypeToPromoteTo just adds 1 to the SimpleVT, which has the effect of increasing the element count while keeping the scalar size the same.
If multiple steps are required to get to a legal operation type, int_to_fp will be promoted multiple times, and fp_to_int will keep trying wider types in a loop until it finds one that works.
getTypeToPromoteTo does have the ability to query a promotion map to get the type and not do the increasing behavior. It seems better to just let the target specify the promotion type in the map explicitly instead of letting the legalizer iterate via widening.
FWIW, I think for any other vector operations that need to be promoted, we'll have to specify the type explicitly, because the default behavior of getTypeToPromoteTo isn't useful for vectors. The other kinds of promotion already require that either the element count or the total vector width stays constant, but neither is preserved by incrementing the SimpleVT enum.
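A toy model of the difference (the names mirror, but do not reproduce, the TargetLowering API):

  #include <map>
  #include <utility>

  enum VT { v4i8, v4i16, v4i32, v8i8, v8i16 };
  enum Op { INT_TO_FP, FP_TO_INT };

  // Entries the target registers explicitly: one step to the final type.
  static std::map<std::pair<Op, VT>, VT> PromoteMap;

  static VT getTypeToPromoteToModel(Op O, VT V) {
    auto It = PromoteMap.find({O, V});
    if (It != PromoteMap.end())
      return It->second;  // the target's explicit choice
    return VT(V + 1);     // default: bump the enum and try again
  }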
Differential Revision: https://reviews.llvm.org/D40664
llvm-svn: 321629
sign bits.
If the input is all sign bits then the LSB through MSB are all the same, so we don't need to move the LSB to the MSB.
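A quick standalone check of that observation:

  #include <cassert>
  #include <cstdint>

  int main() {
    // An all-sign-bits value is either 0 or -1, so its LSB already
    // matches its MSB and no LSB-to-MSB shift is required.
    for (int64_t V : {INT64_C(0), INT64_C(-1)})
      assert(((uint64_t)V & 1) == ((uint64_t)V >> 63));
  }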
llvm-svn: 321617
registers to implement 128/256-bit operations without VLX.
llvm-svn: 321613
false input being zero.
We can use a zmm move with zero masking for this. We already had patterns for using a masked move, but we didn't check for the zero masking case separately.
llvm-svn: 321612
vector to v8i1 pre-legalize.
The CONCAT_VECTORS will be lowered to INSERT_SUBVECTOR later. In the modified cases this seems to be enough to trick a later DAG combine into running in a different order that allows the ANDs to be removed.
I'll admit this is a bit of a hack that happens to work, but using CONCAT_VECTORS is more consistent with other legalization code anyway.
llvm-svn: 321611
As it has a scalar source we don't treat it as a target shuffle, so it needs special handling.
llvm-svn: 321610
Don't combine buildvector(binop(),binop(),binop(),binop()) -> binop(buildvector(), buildvector()) if it's a splat - keep the binop scalar and just splat the result to avoid large vector constants.
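A scalar model of the equivalence being preserved (illustration only, not the combine itself):

  #include <array>
  #include <cstdint>

  // Lane-wise binop on splatted inputs...
  static std::array<int32_t, 4> binopOfSplats(int32_t X) {
    std::array<int32_t, 4> V{X, X, X, X};  // buildvector of splats
    for (int32_t &Lane : V)
      Lane = Lane * 3 + 7;                 // binop applied per lane
    return V;
  }

  // ...equals one scalar binop whose result is splatted, which needs
  // no large vector constants for the operands.
  static std::array<int32_t, 4> splatOfBinop(int32_t X) {
    const int32_t R = X * 3 + 7;           // keep the binop scalar
    return {R, R, R, R};                   // splat the result
  }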
llvm-svn: 321607
legalization sees the i4 and changes to load/store.
Same for v2i1 and i2.
llvm-svn: 321602
legalization sees the i4 and changes to load/store.
Same for i2 and v2i1.
llvm-svn: 321601
don't have DQI.
We end up using an i8 load via an isel pattern from v8i1 anyway. This just makes it more explicit. This seems to improve codegen in some cases and I'd like to kill off some of the load patterns.
llvm-svn: 321598
This is better handled by a DAG combine if it's not already being done. No lit tests fail from the removal of these patterns.
llvm-svn: 321597
I don't think anything would actually expect the other bits to be zero.
llvm-svn: 321596
llvm-svn: 321595