Patch from Ahmed Bougacha again.
llvm-svn: 284072
|
llvm-svn: 284071
|
It's going to be a TBNZ (at -O0) anyway, so the high bits don't matter.
llvm-svn: 284070
|
Summary: We need a new LLVM intrinsic to implement the MS _AddressOfReturnAddress builtin on 64-bit Windows.
Reviewers: majnemer, rnk
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D25293
llvm-svn: 284061
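A minimal sketch of what the builtin does, assuming an MSVC-style toolchain; the function name show_return_slot is hypothetical, but _AddressOfReturnAddress is the documented intrinsic from <intrin.h>:

#include <intrin.h>
#include <cstdio>

void show_return_slot() {
  // Returns the address of the stack slot holding this function's
  // return address, not the return address itself.
  void **Slot = static_cast<void **>(_AddressOfReturnAddress());
  std::printf("return address %p is stored at %p\n", *Slot,
              static_cast<void *>(Slot));
}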
|
This is the most basic handling of the indirect access
pseudos using GPR indexing mode. This currently only enables
the mode for a single v_mov_b32 and then disables it.
This is much more complicated to use than the movrel instructions,
so a new optimization pass is probably needed to fold the access
into the uses and keep the mode enabled for them.
llvm-svn: 284031
|
VI added a second method of indexing into VGPRs
besides using v_movrel*.
llvm-svn: 284027
|
This makes more fields overridable and removes redundant bits.
Patch by: Changpeng Fang
llvm-svn: 284024
|
The current Cost Model implementation is very inaccurate and has to be
updated, improved, or re-implemented to be able to take into account the
concrete CPU models and the concrete targets where this Cost Model is
being used. For example, the Latency Cost Model should differ from the
Code Size Cost Model, etc.
This patch is the first step toward developing and implementing a new
Cost Model.
Differential Revision: https://reviews.llvm.org/D25186
llvm-svn: 284012
|
llvm-svn: 283973
|
Although Copies are not specific to preISel, we still have to assign them
a proper register class. However, given they are not constrained to
anything we do not have to handle the source register at the copy. It
will be properly mapped when reaching the related definition.
In the process, the handling of G_ANYEXT is slightly modified, as those
end up being selected as copies. The difference is that when the register
sizes do not match on both sides, we need to insert a SUBREG_TO_REG
operation, otherwise the post-RA copy expansion will not be happy!
llvm-svn: 283972
|
Those are copies; we do not have to take any legalization action for them.
llvm-svn: 283970
|
Summary:
In PPCMIPeephole, when we see two splat instructions, we can't simply do the following transformation:
B = Splat A
C = Splat B
=>
C = Splat A
because B may still be used between these two instructions. Instead, we should make the second Splat a PPC::COPY and let later passes decide whether to remove it or not:
B = Splat A
C = Splat B
=>
B = Splat A
C = COPY B
Fixes PR30663.
Reviewers: echristo, iteratee, kbarton, nemanjai
Subscribers: mehdi_amini, llvm-commits
Differential Revision: https://reviews.llvm.org/D25493
llvm-svn: 283961
|
Mostly Ahmed's work again; I'm just sprucing things up slightly before
committing.
llvm-svn: 283952
|
Reverts r283938 to reinstate r283867 with a fix.
The original change had an ArrayRef referring to a destroyed temporary
initializer list. Use plain C arrays instead.
llvm-svn: 283942
|
This reverts r283867.
This appears to be an infinite loop:
while (HiRegToSave != AllHighRegs.end() && CopyReg != AllCopyRegs.end()) {
if (HiRegsToSave.count(*HiRegToSave)) {
...
CopyReg = findNextOrderedReg(++CopyReg, CopyRegs, AllCopyRegs.end());
HiRegToSave =
findNextOrderedReg(++HiRegToSave, HiRegsToSave, AllHighRegs.end());
}
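// Note: when the count() check fails, neither iterator advances,
// so this loop can never terminate.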
}
llvm-svn: 283938
|
Patch mostly by Ahmed Bougacha.
llvm-svn: 283937
|
- Refactor bit packing/unpacking
- Calculate bit mask given bit shift and bit width
- Introduce function for decoding bits of waitcnt
- Introduce function for encoding bits of waitcnt
- Introduce function for getting waitcnt mask (instead of using bare numbers)
- Introduce function for getting max waitcnt(s) (instead of using bare numbers)
Differential Revision: https://reviews.llvm.org/D25298
llvm-svn: 283919
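A minimal sketch of the shift/width packing such helpers encapsulate; the helper names, field layout, and values here are illustrative, not the real AMDGPU waitcnt encoding:

#include <cassert>
#include <cstdint>

// Build a mask covering Width bits starting at Shift (Width < 32 assumed).
constexpr uint32_t getBitMask(unsigned Shift, unsigned Width) {
  return ((1u << Width) - 1) << Shift;
}

// Insert Field into the given bit range of Value.
constexpr uint32_t packBits(uint32_t Value, uint32_t Field,
                            unsigned Shift, unsigned Width) {
  return (Value & ~getBitMask(Shift, Width)) |
         ((Field << Shift) & getBitMask(Shift, Width));
}

// Extract the bit range back out.
constexpr uint32_t unpackBits(uint32_t Value, unsigned Shift, unsigned Width) {
  return (Value & getBitMask(Shift, Width)) >> Shift;
}

int main() {
  uint32_t Waitcnt = packBits(0, /*Field=*/7, /*Shift=*/4, /*Width=*/3);
  assert(unpackBits(Waitcnt, 4, 3) == 7);
}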
|
I fixed all the other Targets in r283702, and interestingly the
sanitizers are only now "sometimes" catching this bug on the only
one I missed.
llvm-svn: 283914
|
ARMFunctionInfo::ReturnRegsCount in the explicit ctor.
It caused a crash since r283867.
llvm-svn: 283909
|
llvm-svn: 283908
|
llvm-svn: 283899
|
Differential Revision: http://reviews.llvm.org/D25454
Reviewers: tstellarAMD
llvm-svn: 283893
|
The high registers are not allocatable in Thumb1 functions, but they
could still be used by inline assembly, so we need to save and restore
the callee-saved high registers (r8-r11) in the prologue and epilogue.
This is complicated by the fact that the Thumb1 push and pop
instructions cannot access these registers. Therefore, we have to move
them down into low registers before pushing, and, after popping into low
registers, move them back up.
In most functions, we will have low registers that are also being
pushed/popped, which we can use as the temporary registers for
saving/restoring the high registers. However, this is not guaranteed, so
we may need to push some extra low registers to ensure that the high
registers can be saved/restored. For correctness, it would be sufficient
to use just one low register, but if we have enough low registers
available then we only need one push/pop instruction, rather than one
per high register.
We can also use the argument/return registers when they are not live,
and the link register when saving (but not restoring), reducing the
number of extra registers we need to push.
There are still a few extreme edge cases where we need two push/pop
instructions, because not enough low registers can be made live in the
prologue or epilogue.
In addition to the regression tests included here, I've also tested this
using a script to generate functions which clobber different
combinations of registers, have different numbers of argument and return
registers (including variadic arguments), allocate different fixed sized
objects on the stack, and do or don't use variable sized allocas and the
__builtin_return_address intrinsic (all of which affect the available
registers in the prologue and epilogue). I ran these functions in a test
harness which verifies that all of the callee-saved registers are
correctly preserved.
Differential Revision: https://reviews.llvm.org/D24228
llvm-svn: 283867
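For reference, a sketch of the kind of code that forces this path, assuming a GCC/Clang-style toolchain targeting Thumb1; inline assembly is the main way such a function ends up clobbering the high registers, since the allocator will not otherwise use them:

// Compiled for Thumb1 (e.g. -mthumb -march=armv6-m), this forces r8-r11
// to be saved and restored in the prologue/epilogue even though the
// Thumb1 push/pop instructions cannot access them directly.
void clobber_high_regs() {
  asm volatile("" ::: "r8", "r9", "r10", "r11");
}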
|
Currently, the Int_eh_sjlj_dispatchsetup intrinsic is marked as
clobbering all registers, including floating-point registers that may
not be present on the target. This is technically true, as we could get
linked against code that does use the FP registers, but that will not
actually work, as the soft-float code cannot save and restore the FP
registers. SjLj exception handling can only work correctly if either all
or none of the code is built for a target with FP registers. Therefore,
we can assume that, when Int_eh_sjlj_dispatchsetup is compiled for a
soft-float target, it is only going to be linked against other
soft-float code, and so only clobbers the general-purpose registers.
This allows us to check that no non-savable registers are clobbered when
generating the prologue/epilogue.
Differential Revision: https://reviews.llvm.org/D25180
llvm-svn: 283866
|
Allow instructions such as 'cmp w0, #(end - start)' by folding the
expression into a constant. For ELF, we fold only if the symbols are in
the same section. For MachO, we fold if the expression contains only
symbols that are not linker visible.
Fixes https://llvm.org/bugs/show_bug.cgi?id=18920
Differential Revision: https://reviews.llvm.org/D23834
llvm-svn: 283862
|
This patch allows selecting 32- and 64-bit FP loads and stores.
llvm-svn: 283832
|
This only adds the support for 64-bit vector OR. Adding more sizes is
not difficult, but it requires a bigger refactoring, because ORs work on
any size, not necessarily only the sizes that match the register width.
Right now, this is not expressed in the legalization, so don't bother
pushing the refactoring yet.
llvm-svn: 283831
|
Actually, all 64-bit loads are legal, but right now the API does not
offer a simple way to express that.
llvm-svn: 283829
|
llvm-svn: 283814
|
llvm-svn: 283809
|
llvm-svn: 283808
|
They're basically just an alias for G_ADD on AArch64.
llvm-svn: 283807
|
llvm-svn: 283806
|
The instructions VLDM/VSTM can only access word-aligned memory
locations and produce an alignment fault if that condition is not met.
The compiler currently generates VLDM/VSTM for v2f64 loads/stores
regardless of the alignment of the memory access. Instead, if a v2f64
load/store is not word-aligned, the compiler should generate VLD1/VST1.
For each non-double-word-aligned VLD1/VST1, a VREV instruction should
be generated when targeting Big Endian.
Differential Revision: https://reviews.llvm.org/D25281
llvm-svn: 283763
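A sketch of the problematic case, assuming GCC/Clang vector extensions; the memcpy tells the compiler the access may have alignment 1, so it must not become VLDM/VSTM:

#include <cstring>

typedef double v2f64 __attribute__((vector_size(16)));

// p may point anywhere, so this v2f64 load has alignment 1 and
// needs VLD1 (plus VREV on big-endian targets), never VLDM.
v2f64 load_underaligned(const unsigned char *p) {
  v2f64 v;
  std::memcpy(&v, p, sizeof v);
  return v;
}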
|
Summary:
Rotate by 1 is translated to 1 micro-op, while rotate with imm8 is translated to 2 micro-ops.
Fixes PR30644.
Reviewers: delena, igorb, craig.topper, spatel, RKSimon
Differential Revision: https://reviews.llvm.org/D25399
llvm-svn: 283758
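The two forms in question, as a C++ sketch of the rotate patterns the X86 backend matches (rotate by 1 vs. rotate by an 8-bit immediate):

#include <cstdint>

uint32_t rotl1(uint32_t x) { return (x << 1) | (x >> 31); } // rotate by 1
uint32_t rotl5(uint32_t x) { return (x << 5) | (x >> 27); } // rotate by imm8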
|
instruction is not issued, but replaced by SDIVcc instead, which does not exhibit the error. Unit test included.
Differential Revision: https://reviews.llvm.org/D24660
llvm-svn: 283727
|
llvm-svn: 283723
|
128-bit load as input.
llvm-svn: 283720
|
Commit in the name of: Coby Tayree
1. The 'v' constraint for (x86) non-AVX512 arches imitates the already implemented 'x' constraint, i.e. allows XMM{0-15} & YMM{0-15} depending on the apparent arch & mode (32/64).
2. For the AVX512 arch it allows [X,Y,Z]MM{0-31} (mode dependent).
This patch applies the needed changes to clang
clang patch: https://reviews.llvm.org/D25004
Differential Revision: D25005
llvm-svn: 283717
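A hedged illustration of the constraint in use, assuming GCC/Clang extended asm on an AVX-capable target; with AVX512 the same 'v' may also pick registers 16-31:

typedef float v8sf __attribute__((vector_size(32)));

// 'v' lets the compiler choose any suitable vector register for each
// operand, reaching more registers than the older 'x' constraint on
// AVX512 targets.
v8sf add_v(v8sf a, v8sf b) {
  v8sf r;
  asm("vaddps %2, %1, %0" : "=v"(r) : "v"(a), "v"(b));
  return r;
}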
|
This also changes the order of the statements in CMakeLists.txt to be
alphabetical.
llvm-svn: 283711
|
from SSE file. Also add a minimal set for 512-bit.
llvm-svn: 283704
|
llvm-svn: 283703
|
This avoids the "static initialization order fiasco".
Differential Revision: https://reviews.llvm.org/D25412
llvm-svn: 283702
|
The masked-expand-load node represents a load operation that loads a variable number of elements from memory, according to the number of "true" bits in the mask, and expands the loaded elements according to their positions in the mask vector.
Right now, the node is used in intrinsics for VEXPAND* instructions.
This work is a step towards implementing the masked.expandload and masked.compressstore intrinsics.
Differential Revision: https://reviews.llvm.org/D25322
llvm-svn: 283694
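The semantics, sketched via the corresponding AVX-512 C intrinsic (requires -mavx512f; the wrapper name expand_from is made up here): element i of the result takes the next unconsumed element from memory if mask bit i is set, and the passthru element otherwise:

#include <immintrin.h>

__m512 expand_from(const float *p, __mmask16 m, __m512 passthru) {
  // Lowers to VEXPANDPS, one of the VEXPAND* instructions mentioned above.
  return _mm512_mask_expandloadu_ps(passthru, m, p);
}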
|
llvm-svn: 283692
|
llvm-svn: 283691
|
llvm-svn: 283690
|
llvm-svn: 283689
|
llvm-svn: 283687
|
template
The core of the change is supposed to be NFC; however, it also fixes
what I believe was undefined behavior when calling:
va_start(ValueArgs, Desc);
with Desc being a StringRef.
Differential Revision: https://reviews.llvm.org/D25342
llvm-svn: 283671
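A minimal stand-in showing the hazard; Ref here is a hypothetical miniature of StringRef, not the real class:

#include <cstdarg>

struct Ref {
  const char *Data;
  unsigned Length;
  Ref(const char *D) : Data(D), Length(0) {} // user-provided ctor: non-trivial
};

void old_style(Ref Desc, ...) {
  va_list Args;
  // Naming a class-type parameter in va_start is at best
  // conditionally-supported, which is the suspect behavior fixed above.
  va_start(Args, Desc);
  va_end(Args);
}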