Put the first ldp at the end, so that the load-store optimizer can run
and merge the ldp and the add into a post-index ldp.
This didn't work when no frame was needed, which resulted in code size
regressions.
llvm-svn: 331044
|
If the MachineInstr uses a custom inserter and is then erased after
instruction selection, there is no use for mapping it to a sched class.
Review: Ulrich Weigand
llvm-svn: 331040
|
This adds IR intrinsics for the AArch64 dot-product instructions introduced in
v8.2-A.
Differential Revision: https://reviews.llvm.org/D46107
llvm-svn: 331036
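As a point of reference for the commit above, here is a scalar C++ model of what the v8.2-A dot-product instructions compute: each 32-bit accumulator lane gains the dot product of the corresponding four 8-bit lanes of the two sources. The function and variable names are illustrative only; they are not the new intrinsics.
```
#include <cstdint>
#include <cstdio>

// Unsigned 8-bit dot product with 32-bit accumulation (the signed variant
// is the same shape, using int8_t sources and int32_t accumulators).
static void udot_model(uint32_t acc[4], const uint8_t a[16], const uint8_t b[16]) {
  for (int lane = 0; lane < 4; ++lane)
    for (int j = 0; j < 4; ++j)
      acc[lane] += static_cast<uint32_t>(a[4 * lane + j]) *
                   static_cast<uint32_t>(b[4 * lane + j]);
}

int main() {
  uint8_t a[16], b[16];
  for (int i = 0; i < 16; ++i) { a[i] = 1; b[i] = static_cast<uint8_t>(i); }
  uint32_t acc[4] = {0, 0, 0, 0};
  udot_model(acc, a, b);  // expected: 6 22 38 54
  std::printf("%u %u %u %u\n", acc[0], acc[1], acc[2], acc[3]);
}
```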
|
Since PTX has grown a <2 x half> datatype, vectorization has become more
important. The late LoadStoreVectorizer intentionally only handles loads
and stores, but now arithmetic has to be vectorized for optimal
throughput too.
This is still very limited: SLP vectorization happily creates <2 x half>
if it's a legal type, but there's still a lot of register shuffling
to get that fed into a vectorized store. Overall it's a small
performance win from reducing the number of arithmetic instructions.
I haven't really checked what the loop vectorizer does to PTX code; the
cost model there might need some more tweaks, but I didn't see it causing
harm.
Differential Revision: https://reviews.llvm.org/D46130
llvm-svn: 331035
|
entry. NFCI.
llvm-svn: 331034
|
This patch makes the compiler stop fusing fmul and fadd/fsub into
fmadd/fmsub by default. Instead, the -fp-contract=fast option can
be used when such behavior is desired.
Differential Revision: https://reviews.llvm.org/D46057
llvm-svn: 331033
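To make the behavioral difference concrete, a small C++ sketch follows. Whether a given build fuses the plain expression depends on the compiler's contraction setting (the option name above is taken from the commit; driver spellings vary), so treat this as illustrative rather than prescriptive.
```
#include <cmath>
#include <cstdio>

// a * b + c evaluated as a separate multiply and add rounds twice; a fused
// multiply-add rounds once, so the results can differ in the low bits.
// Depending on the compiler's default contraction setting, plain_mul_add
// may itself already be compiled to a fused operation.
static double plain_mul_add(double a, double b, double c) { return a * b + c; }
static double fused_mul_add(double a, double b, double c) { return std::fma(a, b, c); }

int main() {
  const double eps = 1.0 / (1ULL << 30);
  double a = 1.0 + eps, b = 1.0 - eps, c = -1.0;
  // With separate rounding a*b rounds to exactly 1.0, giving 0.0;
  // the fused form keeps the -eps*eps term (about -8.7e-19).
  std::printf("%.20g\n%.20g\n", plain_mul_add(a, b, c), fused_mul_add(a, b, c));
}
```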
|
This adds IR intrinsics for the ARM dot-product instructions introduced in
v8.2-A.
Differential revision: https://reviews.llvm.org/D46106
llvm-svn: 331032
|
Back when the R52 schedule was added in rL286949, there was no way
to enable machine schedules in ARM for specific cores. Since then a
target feature has been added. This enables the feature for R52,
removing the need to manually specify compiler flags.
llvm-svn: 331027
|
This includes:
  Instructions: tlbginv, tlbginvf, tlbgp, tlbgr, tlbgwi, tlbgwr, hypcall,
                mfgc0, mtgc0, mfhgc0, mthgc0, dmfgc0, dmtgc0
  Assembler directives: .set virt, .set novirt, .module virt, .module novirt
  Attribute: virt
  .MIPS.abiflags: VZ (0x100)
Patch by Vladimir Stefanovic.
Differential Revision: https://reviews.llvm.org/D44905
llvm-svn: 331024
|
The program might have unusual expectations for functions; for example,
the Linux kernel's build system warns if it finds references from .text
to .init.data.
I'm not sure this is something we actually want to make any guarantees
about (there isn't any explicit rule that would disallow outlining
in this case), but we might want to be conservative anyway.
Differential Revision: https://reviews.llvm.org/D46091
llvm-svn: 331007
|
The LLVM commit introduces a crash in LLVM's instruction selection.
I filed http://llvm.org/PR37260 with the test case.
llvm-svn: 330997
|
The `lb` and `lbu` commands accept 16-bit signed offsets, but GAS accepts
larger offsets for these commands. If an offset does not fit into the
16-bit range, the `lb` command is translated into a lui/lb or lui/addu/lb
series. Interestingly, the LLVM assembler initially supported this
feature, but it was later broken.
This patch restores support for 32-bit offsets. It replaces the
`mem_simm16` operand in the `LB` and `LBu` definitions with the new
`mem_simmptr` operand. This operand is intended to check that the offset
fits into the same size as used for pointers. Later we will be able to
extend this rule and accept 64-bit offsets where possible.
Some issues remain:
- The regression also affects the LD, SD, LH, and LHU commands. I'm going
to fix them in a separate patch.
- GAS accepts any 32-bit value as an offset. LLVM currently accepts signed
16-bit values, and this patch extends the range to signed 32-bit offsets.
In other words, the following code is accepted by GAS but still triggers
an error in LLVM:
```
lb $4, 0x80000004
# gas
lui a0, 0x8000
lb a0, 4(a0)
```
- In the case of 64-bit pointers, GAS accepts a 64-bit offset and
translates it into an li/dsll/lb series of commands. LLVM still rejects
it; this feature has probably never been implemented in LLVM and is left
for a separate patch.
```
lb $4, 0x800000001
# gas
li a0, 0x8000
dsll a0, a0, 0x14
lb a0, 1(a0)
```
Differential Revision: https://reviews.llvm.org/D45020
llvm-svn: 330983
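For readers unfamiliar with the lui-based expansion shown above, the hi/lo split works as follows. This is a standalone C++ sketch of the arithmetic only; the helper name is made up and this is not the assembler's implementation.
```
#include <cassert>
#include <cstdint>

// Split a 32-bit address into the pair used by a lui + load expansion:
// lui materialises (hi << 16) and the load adds a signed 16-bit offset,
// so hi absorbs a carry whenever the low half is negative.
static void split_hi_lo(uint32_t addr, uint16_t &hi, int16_t &lo) {
  lo = static_cast<int16_t>(addr & 0xFFFF);  // signed low half
  hi = static_cast<uint16_t>((addr - static_cast<uint32_t>(lo)) >> 16);
}

int main() {
  uint16_t hi; int16_t lo;
  split_hi_lo(0x80000004u, hi, lo);   // matches: lui a0, 0x8000 ; lb a0, 4(a0)
  assert(hi == 0x8000 && lo == 4);
  split_hi_lo(0x1234ABCDu, hi, lo);   // low half >= 0x8000, so hi carries to 0x1235
  assert((static_cast<uint32_t>(hi) << 16) +
         static_cast<uint32_t>(static_cast<int32_t>(lo)) == 0x1234ABCDu);
}
```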
|
- Writes ".debug_XXX" into corresponding custom sections.
- Writes relocation records into "reloc.debug_XXX" sections.
Patch by Yury Delendik!
Differential Revision: https://reviews.llvm.org/D44184
llvm-svn: 330982
|
If an fcanonicalize was done on a vector type that was legal,
llvm-svn: 330981
|
Fixes a regression in a future commit.
llvm-svn: 330980
|
llvm-svn: 330979
|
Summary:
Use the FP for scavenged spill slot accesses to prevent corruption of
the callee-save region when the SP is re-aligned.
Based on a problem and patch reported by @paulwalker-arm.
This is an alternative to the solution proposed in D45770.
Reviewers: t.p.northover, paulwalker-arm, thegameg, javed.absar
Subscribers: qcolombet, mcrosier, paulwalker-arm, kristof.beyls, rengolin, javed.absar, llvm-commits
Differential Revision: https://reviews.llvm.org/D46063
llvm-svn: 330976
|
counter and the input registers are available in the next instruction; update the waitcnt pass to take this into account.
Differential Revision: https://reviews.llvm.org/D46067
llvm-svn: 330954
|
Correct the definitions of ei, di, eret, deret, wait, syscall and break.
Also provide microMIPS specific aliases to match the MIPS aliases.
Additionally correct the definition of the wait instruction so that
it is present in the instruction mapping tables.
Reviewers: smaksimovic, abeserminji, atanasyan
Differential Revision: https://reviews.llvm.org/D45939
llvm-svn: 330952
|
This causes some slight shuffling but no meaningful codegen differences on the
corpus I used for testing; it has a larger impact when combined with e.g.
rematerialisation. Regardless, it makes sense to report target-specific
information as accurately as possible.
llvm-svn: 330949
|
There's no direct instruction for this, but it's trivially implemented
with two movs. Without this the code generator just dies when
encountering a shufflevector.
Differential Revision: https://reviews.llvm.org/D46116
llvm-svn: 330948
|
This returns true for 8-bit and 16-bit loads, allowing LBU/LHU to be selected
and avoiding unnecessary masks.
llvm-svn: 330943
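A minimal C++ illustration of the pattern this targets; what the backend actually selects naturally depends on the rest of the function, so this is only a sketch.
```
#include <cstdint>

// Widening an unsigned narrow load. When the target reports zero-extending
// loads as free, the zext folds into the load itself (lbu/lhu on RISC-V)
// instead of being lowered to a plain load plus an explicit mask.
uint32_t widen_u8(const uint8_t *p)   { return *p; }
uint32_t widen_u16(const uint16_t *p) { return *p; }

int main() {
  uint8_t b = 0xEF;
  uint16_t h = 0xBEEF;
  return static_cast<int>((widen_u8(&b) + widen_u16(&h)) & 0x7F);
}
```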
|
This patch adds a new shuffle kind useful for transposing a 2xn matrix. These
transpose shuffle masks read corresponding even- or odd-numbered vector
elements from two n-dimensional source vectors and write each result into
consecutive elements of an n-dimensional destination vector. The transpose
shuffle kind is meant to model the TRN1 and TRN2 AArch64 instructions. As such,
this patch also considers transpose shuffles in the AArch64 implementation of
getShuffleCost.
Differential Revision: https://reviews.llvm.org/D45982
llvm-svn: 330941
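A scalar C++ model of the two transpose shuffles described above, matching the TRN1/TRN2 behaviour as stated in the commit; the helper names are illustrative only.
```
#include <cstdio>
#include <vector>

// Even-element transpose (models TRN1): interleave the even-numbered
// elements of a and b. Odd-element transpose (models TRN2) does the same
// with the odd-numbered elements. Both assume equal, even-length inputs.
static std::vector<int> trn1(const std::vector<int> &a, const std::vector<int> &b) {
  std::vector<int> r(a.size());
  for (size_t i = 0; i + 1 < r.size(); i += 2) { r[i] = a[i]; r[i + 1] = b[i]; }
  return r;
}

static std::vector<int> trn2(const std::vector<int> &a, const std::vector<int> &b) {
  std::vector<int> r(a.size());
  for (size_t i = 0; i + 1 < r.size(); i += 2) { r[i] = a[i + 1]; r[i + 1] = b[i + 1]; }
  return r;
}

int main() {
  std::vector<int> a = {0, 1, 2, 3}, b = {4, 5, 6, 7};
  std::vector<int> lo = trn1(a, b);   // {0, 4, 2, 6}
  std::vector<int> hi = trn2(a, b);   // {1, 5, 3, 7}
  for (int x : lo) std::printf("%d ", x);
  std::printf("\n");
  for (int x : hi) std::printf("%d ", x);
  std::printf("\n");
}
```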
|
Adapted from ARM's implementation introduced in r313533 and r314280.
llvm-svn: 330940
|
Differential Revision: https://reviews.llvm.org/D45823
Change-Id: Icf6f34f6babc3cb2ff5292fde003472473037a71
llvm-svn: 330939
|
I'm unable to construct a representative test case that demonstrates the
advantage, but it seems sensible to report accurate target-specific
information regardless.
llvm-svn: 330938
|
This causes a trivial improvement in the recently added lsr-legaladdimm.ll
test case.
llvm-svn: 330937
|
This patch extends the PredicateMethod of AsmOperands used in SVE's
LD1 instructions with a DiagnosticPredicate. This makes them 'context
sensitive' to the operand that has been parsed and tells the user to
use the right register (with the expected shift/extend), rather than
reporting that the immediate is out of range when a register was actually
parsed.
Patch [2/2] in a series to improve assembler diagnostics for SVE:
- Patch [1/2]: https://reviews.llvm.org/D45879
- Patch [2/2]: https://reviews.llvm.org/D45880
Reviewers: olista01, stoklund, craig.topper, mcrosier, rengolin, echristo, fhahn, SjoerdMeijer, evandro, javed.absar
Reviewed By: fhahn
Differential Revision: https://reviews.llvm.org/D45880
llvm-svn: 330934
|
llvm-svn: 330933
|
This has no impact on codegen for the current RISC-V unit tests or my small
benchmark set, and causes only very minor changes in a few programs in the GCC
torture suite. Based on this, I haven't been able to produce a representative test
program that demonstrates a benefit from isLegalAddressingMode. I'm committing
the patch anyway, on the basis that presenting accurate information to the
target-independent code is preferable to relying on incorrect generic
assumptions.
llvm-svn: 330932
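For context, RV32I/RV64I loads and stores only support a base register plus a signed 12-bit immediate. The sketch below models that constraint in standalone C++; it is an illustration of what the hook communicates, not LLVM's actual interface or the patch itself.
```
#include <cstdint>

// A simplified description of a candidate addressing mode, and a predicate
// mirroring the RISC-V rule: no folded global, no scaled index register,
// and the constant offset must fit in a signed 12-bit immediate.
struct AddrModeSketch {
  bool HasGlobal;    // a global's address folded into the mode
  int64_t BaseOffs;  // constant offset added to the base register
  int64_t Scale;     // multiplier on an index register (0 = none)
};

static bool isLegalRVAddressingMode(const AddrModeSketch &AM) {
  if (AM.HasGlobal) return false;
  if (AM.Scale != 0) return false;
  return AM.BaseOffs >= -2048 && AM.BaseOffs <= 2047;
}

int main() {
  AddrModeSketch ok  = {false, 16, 0};    // 16(reg): representable directly
  AddrModeSketch bad = {false, 4096, 0};  // offset needs extra instructions
  return (isLegalRVAddressingMode(ok) && !isLegalRVAddressingMode(bad)) ? 0 : 1;
}
```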
|
instructions.
Patch [2/3] in series to add support for SVE's gather load instructions
that use scalar+vector addressing modes:
- Patch [1/3]: https://reviews.llvm.org/D45951
- Patch [2/3]: https://reviews.llvm.org/D46023
- Patch [3/3]: https://reviews.llvm.org/D45958
Reviewers: fhahn, rengolin, samparker, SjoerdMeijer, t.p.northover, echristo, evandro, javed.absar
Reviewed By: fhahn
Differential Revision: https://reviews.llvm.org/D46023
llvm-svn: 330928
|
This matches objdump.
llvm-svn: 330922
|
isel too.
This is a follow-up to the changes in r330896, which enabled folding after isel during peephole and register allocation.
llvm-svn: 330897
|
instructions.
According to the x86 manual, these have special permission to read
unaligned memory, and this folding is also done by ICC and GCC.
This corrects one of the issues identified in PR37246.
llvm-svn: 330896
|
Algorithmically compute the 'x20' SDIV/UDIV vector costs. This is necessary for PR36550, where DIV costs will be driven from the scheduler models.
llvm-svn: 330870
|
Summary:
The TableGen'd GlobalISel instruction selector assumes all intrinsics are in
the public Intrinsic:: namespace.
Reviewers: jvesely, nhaehnle
Reviewed By: jvesely, nhaehnle
Subscribers: arsenm, kzhuravl, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, llvm-commits
Differential Revision: https://reviews.llvm.org/D45989
llvm-svn: 330866
|
- Add "amdgpu-waitcnt-forcezero" to force all waitcnt instrs to be emitted as s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
- Add debug counters to control forced emission of s_waitcnt instrs; the debug counters are:
  si-insert-waitcnts-forceexp: force emit s_waitcnt expcnt(0) instrs
  si-insert-waitcnts-forcelgkm: force emit s_waitcnt lgkmcnt(0) instrs
  si-insert-waitcnts-forcevm: force emit s_waitcnt vmcnt(0) instrs
- Add some debug statements
Note that a variant of this patch was previously committed/reverted.
Differential Revision: https://reviews.llvm.org/D45888
llvm-svn: 330862
|
load folding.
Previously we only formed MUL_IMM when we split a constant. This blocked load folding in those cases. We should also form MUL_IMM for 3/5/9 to favor LEA over load folding.
Differential Revision: https://reviews.llvm.org/D46040
llvm-svn: 330850
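For background on why 3, 5 and 9 are singled out: LEA can compute base + index*2/4/8 in one instruction, so those multiplies need neither an imul nor a loaded constant. A small C++ illustration follows; actual instruction selection naturally depends on the target and surrounding code.
```
#include <cstdint>

// x * 5 can be rewritten as x + x*4, which fits LEA's base + index*scale
// form; x * 3 and x * 9 work the same way with scales 2 and 8.
uint64_t times5(uint64_t x)         { return x * 5; }
uint64_t times5_by_hand(uint64_t x) { return x + (x << 2); }

int main() {
  for (uint64_t x = 0; x < 1000; ++x)
    if (times5(x) != times5_by_hand(x))
      return 1;
  return 0;
}
```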
|
coincides with a register name
Previously `call zero`, `call f0`, etc. would fail. This leads to compilation
failures when building programs that define functions with those names and
using -save-temps.
llvm-svn: 330846
|
shift left.
Don't just assume cost = 1.
llvm-svn: 330834
|
rdar://38674040
llvm-svn: 330831
|
To do this:
1. Change the GlobalAddress SDNode to TargetGlobalAddress to prevent the
legalizer from splitting the symbol.
2. Change the ExternalSymbol SDNode to TargetExternalSymbol to prevent the
legalizer from splitting the symbol.
3. Let PseudoCALL match a direct call with a TargetGlobalAddress or
TargetExternalSymbol target operand.
Differential Revision: https://reviews.llvm.org/D44885
llvm-svn: 330827
|
To do this:
1. Add PseudoCALLIndirect to match an indirect function call.
2. Add PseudoCALL to support parsing and printing the pseudo `call` in assembly.
3. Expand PseudoCALL to the following form, with the R_RISCV_CALL relocation
type, during encoding:
auipc ra, func
jalr ra, ra, 0
If we expanded PseudoCALL before emitting assembly, we would see the
auipc/jalr pair when compiling with -S. It is hard for the assembly parser
to parse this pair, recognize that its semantics are a function call, and
then insert the R_RISCV_CALL relocation type. We could insert the
R_RISCV_PCREL_HI20 and R_RISCV_PCREL_LO12_I relocation types instead of
R_RISCV_CALL, but due to the RISC-V relocation design, only an auipc/jalr
pair with the R_RISCV_CALL + R_RISCV_RELAX relocation types can be relaxed
to jal.
We expand PseudoCALL as late as encoding (RISCVMCCodeEmitter) instead of
before emitting assembly (RISCVAsmPrinter) because we want to preserve the
call pseudoinstruction in the assembly output. It is more readable, and the
assembly parser can identify the call and insert the R_RISCV_CALL
relocation type.
Differential Revision: https://reviews.llvm.org/D45859
llvm-svn: 330826
|
ISel is currently picking 'JAL' over 'JAL_MM' for calling a function when
targeting microMIPS. A later patch will correct this behaviour.
This patch extends the mechanism for transforming instructions into their short
delay-slot forms to recognise 'JAL_MM', transforming it into 'JALS_MM'.
llvm-svn: 330825
|
This removes all the FMA InstRW overrides.
If we ever get PR36924, then we can remove many of these declarations from models.
llvm-svn: 330820
|
llvm-svn: 330818
|
llvm-svn: 330813
|
llvm-svn: 330812
|
Also, fix the disassembly of synci for microMIPS.
Reviewers: abeserminji, smaksimovic, atanasyan
Differential Revision: https://reviews.llvm.org/D45870
llvm-svn: 330810
|
modes.
This patch adds parsing support for 'vector + shift/extend' and
corresponding asm operand classes, needed for implementing SVE's
gather/scatter addressing modes.
The added combinations of vector (ZPR) and Shift/Extend are:
Unscaled:
  ZPR64ExtLSL8: signed 64-bit offsets (z0.d)
  ZPR32ExtUXTW8: unsigned 32-bit offsets (z0.s, uxtw)
  ZPR32ExtSXTW8: signed 32-bit offsets (z0.s, sxtw)
Unpacked and unscaled:
  ZPR64ExtUXTW8: unsigned 32-bit offsets (z0.d, uxtw)
  ZPR64ExtSXTW8: signed 32-bit offsets (z0.d, sxtw)
Unpacked and scaled:
  ZPR64ExtUXTW<scale>: unsigned 32-bit offsets (z0.d, uxtw #<shift>)
  ZPR64ExtSXTW<scale>: signed 32-bit offsets (z0.d, sxtw #<shift>)
Scaled:
  ZPR32ExtUXTW<scale>: unsigned 32-bit offsets (z0.s, uxtw #<shift>)
  ZPR32ExtSXTW<scale>: signed 32-bit offsets (z0.s, sxtw #<shift>)
  ZPR64ExtLSL<scale>: unsigned 64-bit offsets (z0.d, lsl #<shift>)
  ZPR64ExtLSL<scale>: signed 64-bit offsets (z0.d, lsl #<shift>)
Patch [1/3] in series to add support for SVE's gather load instructions
that use scalar+vector addressing modes:
- Patch [1/3]: https://reviews.llvm.org/D45951
- Patch [2/3]: https://reviews.llvm.org/D46023
- Patch [3/3]: https://reviews.llvm.org/D45958
Reviewers: fhahn, rengolin, samparker, SjoerdMeijer, t.p.northover, echristo, evandro, javed.absar
Reviewed By: fhahn
Differential Revision: https://reviews.llvm.org/D45951
llvm-svn: 330805