path: root/llvm/test/CodeGen/AArch64/fast-isel-addressing-modes.ll
* [AArch64] Prefer "mov" over "orr" to materialize constants. (Eli Friedman, 2019-03-25; 1 file changed, +4/-4)

    This is generally more readable due to the way the assembler aliases
    work. (This causes a lot of test changes, but it's not really as scary
    as it looks at first glance; it's just mechanically changing a bunch of
    checks for orr to check for mov instead.)

    Differential Revision: https://reviews.llvm.org/D59720

    llvm-svn: 356954
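For reference, the alias in question: AArch64 can materialize certain bitmask constants with ORR from the zero register, and the assembler prints that form as a plain MOV. A rough sketch (whether a given immediate uses this form depends on its encoding):

      orr  x0, xzr, #0x101010101010101   ; logical-immediate materialization
      ; printed, and now checked in tests, as:
      mov  x0, #0x101010101010101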
* [opaque pointer type] Add textual IR support for explicit type parameter to load instruction (David Blaikie, 2015-02-27; 1 file changed, +45/-45)

    Essentially the same as the GEP change in r230786.

    A similar migration script can be used to update test cases, though a
    few more test case improvements/changes were required this time around:
    (r229269-r229278)

    import fileinput
    import sys
    import re
    pat = re.compile(r"((?:=|:|^)\s*load (?:atomic )?(?:volatile )?(.*?))(| addrspace\(\d+\) *)\*($| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$)")
    for line in sys.stdin:
      sys.stdout.write(re.sub(pat, r"\1, \2\3*\4", line))

    Reviewers: rafael, dexonsmith, grosser

    Differential Revision: http://reviews.llvm.org/D7649

    llvm-svn: 230794
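The change itself is mechanical: the load instruction gains an explicit result-type operand, separate from the pointer type:

      ; before this commit:
      %val = load i32* %ptr
      ; after:
      %val = load i32, i32* %ptr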
* Change the fast-isel-abort option from bool to int to enable "levels" (Mehdi Amini, 2015-02-27; 1 file changed, +1/-1)

    Summary:
    Currently fast-isel-abort will only abort for regular instructions and
    just warn for function calls, terminators, and function arguments.
    There is already fast-isel-abort-args, but nothing for calls and
    terminators. This change turns the fast-isel-abort option into an
    integer option, so that multiple levels of strictness can be defined.
    This avoids the surprise of the "abort" option not actually aborting,
    and makes it possible to write tests that verify no intrinsics are
    forgotten by fast-isel.

    Reviewers: resistor, echristo

    Subscribers: jfb, llvm-commits

    Differential Revision: http://reviews.llvm.org/D7941

    From: Mehdi Amini <mehdi.amini@apple.com>

    llvm-svn: 230775
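In a test like this one the option appears on the RUN line; a sketch of the usage after this change (higher levels are stricter, per the summary above):

      ; RUN: llc -fast-isel -fast-isel-abort=1 -mtriple=arm64-apple-darwin < %s | FileCheck %s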
* [FastISel][AArch64] Fix load/store with frame indices. (Juergen Ributzka, 2014-10-27; 1 file changed, +26/-0)

    At higher optimization levels the LLVM IR may contain more complex
    patterns for loads/stores from/to frame indices. The 'computeAddress'
    function wasn't able to handle this and triggered an assertion. This
    fix extends the possible addressing modes for frame indices.

    This fixes rdar://problem/18783298.

    llvm-svn: 220700
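A minimal sketch of the kind of IR involved (illustrative names, modern explicit-type GEP syntax):

      define void @store_fi() {
        %buf = alloca [64 x i64]
        ; frame index plus a scaled, non-zero offset
        %p = getelementptr [64 x i64], [64 x i64]* %buf, i64 0, i64 9
        store i64 0, i64* %p
        ret void
      }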
* [AArch64] Select wide immediate offset into [Base+XReg] addressing mode (Hao Liu, 2014-10-14; 1 file changed, +14/-8)

    e.g., currently we'll generate the following instructions if the
    immediate is too wide:

      MOV  X0, WideImmediate
      ADD  X1, BaseReg, X0
      LDR  X2, [X1, 0]

    Using the [Base+XReg] addressing mode can save one ADD:

      MOV  X0, WideImmediate
      LDR  X2, [BaseReg, X0]

    Differential Revision: http://reviews.llvm.org/D5477

    llvm-svn: 219665
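In IR terms, the trigger is a constant offset too large for both the load/store immediate field and the add immediate, so it has to be materialized. A hedged sketch:

      define i64 @load_wide_offset(i64* %base) {
        ; byte offset 0x91A28: not encodable as a LDR or ADD immediate
        %p = getelementptr i64, i64* %base, i64 74565
        %v = load i64, i64* %p
        ret i64 %v
      }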
* [FastIsel][AArch64] Fix a think-o in address computation. (Juergen Ributzka, 2014-09-19; 1 file changed, +39/-0)

    When looking through sign/zero-extensions the code would always assume
    there is such an extension instruction and use the wrong operand for
    the address.

    There was also a minor issue in the handling of 'AND' instructions. I
    accidentally used a 'cast' instead of a 'dyn_cast'.

    llvm-svn: 218161
* [FastISel][AArch64] Follow-up commit for r218031 to handle negative offsets too. (Juergen Ributzka, 2014-09-18; 1 file changed, +6/-12)

    llvm-svn: 218032
* [FastISel][AArch64] Try to fold the offset into the add instruction when simplifying a memory address. (Juergen Ributzka, 2014-09-18; 1 file changed, +13/-25)

    Small optimization in 'simplifyAddress'. When the offset cannot be
    encoded in the load/store instruction, then we need to materialize the
    address manually. The add instruction can encode a wider range of
    immediates than the load/store instructions. This change tries to fold
    the offset into the add instruction first, before materializing the
    offset in a register.

    llvm-svn: 218031
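A sketch of the intended difference, with an assumed offset of 8192 (too large for LDRB's unsigned-immediate field, but encodable by ADD as 2 << 12):

      ; before: offset materialized in a register
        mov  x9, #8192
        add  x9, x0, x9
        ldrb w1, [x9]
      ; after: offset folded into the add
        add  x9, x0, #8192
        ldrb w1, [x9]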
* [FastISel][AArch64] Fold 'AND' instruction during the address computation. (Juergen Ributzka, 2014-09-18; 1 file changed, +45/-2)

    The 'AND' instruction could be used to mask out the lower 32 bits of a
    register. If this is done inside an address computation we might be
    able to fold the instruction into the memory instruction itself.

      and  x1, x1, #0xffffffff   ---> ldrb x0, [x0, w1, uxtw]
      ldrb x0, [x0, x1]

    llvm-svn: 218030
* [FastISel][AArch64] Fold mul into the address computation of memory operations. (Juergen Ributzka, 2014-09-17; 1 file changed, +41/-0)

    Teach 'computeAddress' to also fold multiplies into the address
    computation (when possible).

    This fixes rdar://problem/18369443.

    llvm-svn: 217977
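A sketch of the pattern (illustrative names; a power-of-two multiply in the address arithmetic becomes a shifted register operand):

      define i64 @load_mul(i64 %a, i64 %idx) {
        %off  = mul i64 %idx, 8
        %addr = add i64 %a, %off
        %p    = inttoptr i64 %addr to i64*
        %v    = load i64, i64* %p
        ret i64 %v
      }
      ; expected, roughly:  ldr x0, [x0, x1, lsl #3]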
* [FastISel][AArch64] Use the target-dependent selection code for shifts first. (Juergen Ributzka, 2014-09-02; 1 file changed, +3/-3)

    This uses the target-dependent selection code for shifts first, which
    allows us to create better code for shifts with immediates and
    sign-/zero-extend folding.

    Vector types are not handled yet and the code falls back to
    target-independent instruction selection for these cases.

    This fixes rdar://problem/17907920.

    llvm-svn: 216985
* [FastISel] (Juergen Ributzka, 2014-08-28; 1 file changed, +10/-0)

    Currently instructions are folded very aggressively for AArch64 into
    the memory operation, which can lead to the use of killed operands:

      %vreg1<def> = ADDXri %vreg0<kill>, 2
      %vreg2<def> = LDRBBui %vreg0, 2
      ... = ... %vreg1 ...

    This usually happens when the result is also used by another non-memory
    instruction in the same basic block, or by any instruction in another
    basic block.

    This fix teaches hasTrivialKill to not only check in the LLVM IR that
    the value has a single use, but also to check whether the register that
    represents that value has already been used. This can happen when the
    instruction with the use was folded into another instruction (in this
    particular case a load instruction).

    This fixes rdar://problem/18142857.

    llvm-svn: 216634
* Revert "[FastISel][AArch64] Don't fold instructions too aggressively into ↵Juergen Ributzka2014-08-271-130/+0
| | | | | | | | the memory operation." Quentin pointed out that this is not the correct approach and there is a better and easier solution. llvm-svn: 216632
* [FastISel][AArch64] Don't fold instructions too aggressively into the memory operation. (Juergen Ributzka, 2014-08-27; 1 file changed, +130/-0)

    Currently instructions are folded very aggressively into the memory
    operation, which can lead to the use of killed operands:

      %vreg1<def> = ADDXri %vreg0<kill>, 2
      %vreg2<def> = LDRBBui %vreg0, 2
      ... = ... %vreg1 ...

    This usually happens when the result is also used by another non-memory
    instruction in the same basic block, or by any instruction in another
    basic block.

    If the computed address is used only by memory operations in the same
    basic block, then it is safe to fold them. This is because all memory
    operations will fold the address computation and the original
    computation will never be emitted.

    This fixes rdar://problem/18142857.

    llvm-svn: 216629
* [FastISel][AArch64] Fix simplify address when the address comes from a shift. (Juergen Ributzka, 2014-08-27; 1 file changed, +21/-0)

    When the address comes directly from a shift instruction, the address
    computation cannot be folded into the memory instruction, because the
    zero register is not available as a base register. SimplifyAddress
    needs to emit the shift instruction and use the result as the base
    register.

    llvm-svn: 216621
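A sketch of the failing shape: the whole address is just a shift, and [xzr, xN, lsl #2] is not a valid addressing mode, so the shift itself has to be emitted:

      define i32 @load_shifted(i64 %a) {
        %addr = shl i64 %a, 2
        %p = inttoptr i64 %addr to i32*
        %v = load i32, i32* %p
        ret i32 %v
      }
      ; emitted, roughly:
      ;   lsl x8, x0, #2
      ;   ldr w0, [x8]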
* [FastISel][AArch64] Use the zero register for stores. (Juergen Ributzka, 2014-08-27; 1 file changed, +13/-6)

    Use the zero register directly when possible to avoid an unnecessary
    register copy and a wasted register at -O0. This also uses integer
    stores to store a positive floating-point zero. This saves us from
    materializing the positive zero in a register and then storing it.

    llvm-svn: 216617
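Illustrative before/after for the two cases mentioned (an assumed -O0 output shape, not verified against the actual test):

      ; store i32 0, i32* %p
      ;   before:  mov w8, #0          after:  str wzr, [x0]
      ;            str w8, [x0]
      ; store float 0.0, float* %q
      ;   the positive FP zero is stored as an integer:  str wzr, [x1]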
* [FastISel][AArch64] Fix address simplification. (Juergen Ributzka, 2014-08-27; 1 file changed, +27/-0)

    When a shift with extension or an add with shift and extension cannot
    be folded into the memory operation, the address calculation has to be
    materialized separately. While doing so, the code forgot to consider a
    possible sign-/zero-extension. This fix now also folds the
    sign-/zero-extension into the add or shift instruction used to
    materialize the address.

    This fixes rdar://problem/18141718.

    llvm-svn: 216511
* [FastISel][AArch64] Use the correct register class to make the MI verifier happy. (Juergen Ributzka, 2014-08-21; 1 file changed, +2/-2)

    This is mostly achieved by providing the correct register class
    manually, because getRegClassFor always returns the GPR*AllRegClass
    for MVT::i32 and MVT::i64.

    Also clean up the code to use the FastEmitInst_* methods whenever
    possible. This makes sure that the operands' register class is properly
    constrained. For all the remaining cases this adds the missing
    constrainOperandRegClass calls for each operand.

    llvm-svn: 216225
* Reapply [FastISel][AArch64] Add support for more addressing modes (r215597). (Juergen Ributzka, 2014-08-19; 1 file changed, +425/-0)

    Note: This was originally reverted to track down a buildbot error.
    Reapplied without any modifications.

    Original commit message:
    FastISel didn't take much advantage of the different addressing modes
    available to it on AArch64. This commit allows the ComputeAddress
    method to recognize more addressing modes, which allows shifts and
    sign-/zero-extensions to be folded into the memory operation itself.

    For example:
      lsl x1, x1, #3      -->  ldr x0, [x0, x1, lsl #3]
      ldr x0, [x0, x1]

      sxtw x1, w1
      lsl x1, x1, #3      -->  ldr x0, [x0, x1, sxtw #3]
      ldr x0, [x0, x1]

    llvm-svn: 216013
* Revert "[FastISel][AArch64] Add support for more addressing modes."Juergen Ributzka2014-08-141-425/+0
| | | | | | This reverts commits r215597, because it might have broken the build bots. llvm-svn: 215659
* [FastISel][AArch64] Add support for more addressing modes. (Juergen Ributzka, 2014-08-13; 1 file changed, +425/-0)

    FastISel didn't take much advantage of the different addressing modes
    available to it on AArch64. This commit allows the ComputeAddress
    method to recognize more addressing modes, which allows shifts and
    sign-/zero-extensions to be folded into the memory operation itself.

    For example:
      lsl x1, x1, #3      -->  ldr x0, [x0, x1, lsl #3]
      ldr x0, [x0, x1]

      sxtw x1, w1
      lsl x1, x1, #3      -->  ldr x0, [x0, x1, sxtw #3]
      ldr x0, [x0, x1]

    llvm-svn: 215597