path: root/llvm/lib/Target/AArch64/AArch64RegisterInfo.cpp
Commit history (most recent first); each entry lists the commit message, author, date, and the files/lines changed.
* [COFF, ARM64] Implement support for SEH extensions __try/__except/__finally
  Mandeep Singh Grang, 2019-01-16 (1 file, -0/+7)

  Summary: This patch supports MS SEH extensions __try/__except/__finally. The
  intrinsics localescape and localrecover are responsible for communicating
  escaped static allocas from the try block to the handler. We need to preserve
  frame pointers for SEH. So we create a new function/property HasLocalEscape.

  Reviewers: rnk, compnerd, mstorsjo, TomTan, efriedma, ssijaric
  Reviewed By: rnk, efriedma
  Subscribers: smeenai, jrmuizel, alex, majnemer, ssijaric, ehsan, dmajor, kristina, javed.absar, kristof.beyls, chrib, llvm-commits
  Differential Revision: https://reviews.llvm.org/D53540
  llvm-svn: 351370
* Introduce control flow speculation tracking pass for AArch64
  Kristof Beyls, 2018-12-18 (1 file, -0/+4)

  The pass implements tracking of control flow mis-speculation into a "taint"
  register. That taint register can then be used to mask off registers with
  sensitive data when executing under mis-speculation, a.k.a. "transient
  execution". This pass is aimed at mitigating against SpectreV1-style
  vulnerabilities.

  At the moment, it implements the tracking of mis-speculation of control flow
  into a taint register, but doesn't implement a mechanism yet to then use that
  taint register to mask off vulnerable data in registers (something for a
  follow-on improvement). Possible strategies to mask out vulnerable data that
  can be implemented on top of this are:
  - speculative load hardening to automatically mask off data loaded in
    registers.
  - using intrinsics to mask off data in registers as indicated by the
    programmer (see https://lwn.net/Articles/759423/).

  For AArch64, the following implementation choices are made. Some of these are
  different from the implementation choices made in the similar pass implemented
  in X86SpeculativeLoadHardening.cpp, as the instruction set characteristics
  result in different trade-offs.
  - The speculation hardening is done after register allocation. With a relative
    abundance of registers, one register is reserved (X16) to be the taint
    register. X16 is expected to not clash with other register reservation
    mechanisms with very high probability because:
    . The AArch64 ABI doesn't guarantee X16 to be retained across any call.
    . The only way for a programmer to request that X16 be used is through
      inline assembly. In the rare case a function explicitly demands to use
      X16/W16, this pass falls back to hardening against speculation by
      inserting a DSB SYS/ISB barrier pair which will prevent control flow
      speculation.
  - It is easy to insert mask operations at this late stage as we have mask
    operations available that don't set flags.
  - The taint variable contains all-ones when no mis-speculation is detected,
    and contains all-zeros when mis-speculation is detected. Therefore, when
    masking, an AND instruction (which only changes the register to be masked,
    no other side effects) can easily be inserted anywhere that's needed.
  - The tracking of mis-speculation is done by using a data-flow conditional
    select instruction (CSEL) to evaluate the flags that were also used to make
    conditional branch direction decisions. Speculation of the CSEL instruction
    can be limited with a CSDB instruction - so the combination of CSEL + a
    later CSDB gives the guarantee that the flags as used in the CSEL aren't
    speculated. When conditional branch direction gets mis-speculated, the
    semantics of the inserted CSEL instruction are such that the taint register
    will contain all zero bits. One key requirement for this to work is that
    the conditional branch is followed by an execution of the CSEL instruction,
    where the CSEL instruction needs to use the same flags status as the
    conditional branch. This means that the conditional branches must not be
    implemented as one of the AArch64 conditional branches that do not use the
    flags as input (CB(N)Z and TB(N)Z). This is implemented by ensuring that
    the instruction selectors do not produce these instructions when speculation
    hardening is enabled. This pass will assert if it does encounter such an
    instruction.
  - On function call boundaries, the mis-speculation state is transferred from
    the taint register X16 to be encoded in the SP register as value 0.

  Future extensions/improvements could be:
  - Implement this functionality using full speculation barriers, akin to the
    x86-slh-lfence option. This may be more useful for the intrinsics-based
    approach than for the SLH approach to masking. Note that this pass already
    inserts the full speculation barriers if the function for some niche reason
    makes use of X16/W16.
  - No indirect branch misprediction gets protected/instrumented; but this
    could be done for some indirect branches, such as switch jump tables.

  Differential Revision: https://reviews.llvm.org/D54896
  llvm-svn: 349456
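The masking idea that the commit above leaves as follow-on work can be illustrated with a
minimal, source-level C++ sketch (this is not the pass itself, which operates on MachineIR
after register allocation; the function and parameter names here are hypothetical):

    #include <cstdint>

    // The taint value is all-ones on the architecturally correct path and
    // all-zeros under mis-speculation, so AND-ing it into an index (or an
    // address) forces any speculative access to a benign value of zero.
    uint64_t mask_index(uint64_t idx, uint64_t taint) {
      return idx & taint;  // corresponds to the flag-preserving AND the pass can insert
    }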
* [ARM64] [Windows] Handle funclets
  Eli Friedman, 2018-11-09 (1 file, -4/+9)

  This patch adds support for funclets in frame lowering and ISel lowering.
  Together with D50288 and D50166, it enables C++ exception handling.

  Patch by Sanjin Sijaric, with some fixes by me.

  Differential Revision: https://reviews.llvm.org/D51524
  llvm-svn: 346568
* [ARM64] [Windows] Exception handling support in frame lowering
  Sanjin Sijaric, 2018-10-31 (1 file, -0/+2)

  Emit pseudo instructions indicating unwind codes corresponding to each
  instruction inside the prologue/epilogue. These are used by the MCLayer to
  populate the .xdata section.

  Differential Revision: https://reviews.llvm.org/D50288
  llvm-svn: 345701
* [Aarch64] Fix memcpy that was copying 4x too many bytes
  Benjamin Kramer, 2018-09-23 (1 file, -1/+1)

  Found by asan.

  llvm-svn: 342845
* [AArch64] Support adding X[8-15,18] registers as CSRs.
  Tri Vo, 2018-09-22 (1 file, -0/+37)

  Summary: Specifying X[8-15,18] registers as callee-saved is used to support
  CONFIG_ARM64_LSE_ATOMICS in Linux kernel. As part of this patch we:
  - use custom CSR list/mask when user specifies custom CSRs
  - update Machine Register Info's list of CSRs with additional custom CSRs in
    LowerCall and LowerFormalArguments.

  Reviewers: srhines, nickdesaulniers, efriedma, javed.absar
  Reviewed By: nickdesaulniers
  Subscribers: kristof.beyls, jfb, llvm-commits
  Differential Revision: https://reviews.llvm.org/D52216
  llvm-svn: 342824
* [AArch64] Implement aarch64_vector_pcs codegen support.
  Sander de Smalen, 2018-09-12 (1 file, -4/+2)

  This patch adds codegen support for saving/restoring V8-V23 for functions
  specified with the aarch64_vector_pcs calling convention attribute, as added
  in patch D51477.

  Reviewers: t.p.northover, gberry, thegameg, rengolin, javed.absar, MatzeB
  Reviewed By: thegameg
  Differential Revision: https://reviews.llvm.org/D51479
  llvm-svn: 342049
* [AArch64] Add parsing of aarch64_vector_pcs attribute.
  Sander de Smalen, 2018-09-12 (1 file, -0/+6)

  This patch adds parsing support for the 'aarch64_vector_pcs' calling
  convention attribute to calls and function declarations.

  More information describing the vector ABI and procedure call standard can be
  found here:
  https://developer.arm.com/products/software-development-tools/hpc/arm-compiler-for-hpc/vector-function-abi

  Reviewers: t.p.northover, rnk, rengolin, javed.absar, thegameg, SjoerdMeijer
  Reviewed By: SjoerdMeijer
  Differential Revision: https://reviews.llvm.org/D51477
  llvm-svn: 342030
* [AArch64] Support reserving x1-7 registers.
  Nick Desaulniers, 2018-09-07 (1 file, -32/+21)

  Summary: Reserving registers x1-7 is used to support CONFIG_ARM64_LSE_ATOMICS
  in Linux kernel. This change adds support for reserving registers x1 through
  x7.

  Reviewers: javed.absar, phosek, srhines, nickdesaulniers, efriedma
  Reviewed By: nickdesaulniers, efriedma
  Subscribers: niravd, jfb, manojgupta, nickdesaulniers, jyknight, efriedma, kristof.beyls, llvm-commits
  Differential Revision: https://reviews.llvm.org/D48580
  llvm-svn: 341706
* [CodeGen] emit inline asm clobber list warnings for reserved (cont)
  Ties Stuij, 2018-08-30 (1 file, -0/+5)

  Summary: This is a continuation of https://reviews.llvm.org/D49727
  Below the original text, current changes in the comments:

  Currently, in line with GCC, when specifying reserved registers like sp or pc
  on an inline asm() clobber list, we don't always preserve the original value
  across the statement. And in general, overwriting reserved registers can have
  surprising results.

  For example:

    extern int bar(int[]);

    int foo(int i) {
      int a[i]; // VLA
      asm volatile(
          "mov r7, #1"
          :
          :
          : "r7"
      );
      return 1 + bar(a);
    }

  Compiled for thumb, this gives:

    $ clang --target=arm-arm-none-eabi -march=armv7a -c test.c -o - -S -O1 -mthumb
    ...
    foo:
            .fnstart
    @ %bb.0:                                @ %entry
            .save   {r4, r5, r6, r7, lr}
            push    {r4, r5, r6, r7, lr}
            .setfp  r7, sp, #12
            add     r7, sp, #12
            .pad    #4
            sub     sp, #4
            movs    r1, #7
            add.w   r0, r1, r0, lsl #2
            bic     r0, r0, #7
            sub.w   r0, sp, r0
            mov     sp, r0
            @APP
            mov.w   r7, #1
            @NO_APP
            bl      bar
            adds    r0, #1
            sub.w   r4, r7, #12
            mov     sp, r4
            pop     {r4, r5, r6, r7, pc}
    ...

  r7 is used as the frame pointer for thumb targets, and this function needs to
  restore the SP from the FP because of the variable-length stack allocation a.
  r7 is clobbered by the inline assembly (and r7 is included in the clobber
  list), but LLVM does not preserve the value of the frame pointer across the
  assembly block.

  This type of behavior is similar to GCC's and has been discussed on the
  bugtracker: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=11807 . No consensus
  seemed to have been reached on the way forward. Clang behavior has briefly
  been discussed on the CFE mailing list (starting here:
  http://lists.llvm.org/pipermail/cfe-dev/2018-July/058392.html). I've opted for
  following Eli Friedman's advice to print warnings when there are reserved
  registers on the clobber list so as not to diverge from GCC behavior for now.

  The patch uses MachineRegisterInfo's target-specific knowledge of reserved
  registers, just before we convert the inline asm string in the AsmPrinter. If
  we find a reserved register, we print a warning:

    repro.c:6:7: warning: inline asm clobber list contains reserved registers: R7 [-Winline-asm]
          "mov r7, #1"
          ^

  Reviewers: efriedma, olista01, javed.absar
  Reviewed By: efriedma
  Subscribers: eraman, kristof.beyls, llvm-commits
  Differential Revision: https://reviews.llvm.org/D51165
  llvm-svn: 341062
* [AArch64] Remove Duplicate FP16 Patterns with same encoding, match on existing patterns
  Luke Geeson, 2018-06-27 (1 file, -0/+13)

  llvm-svn: 335715
* [AArch64] Support reserving x20 register
  Petr Hosek, 2018-06-12 (1 file, -3/+11)

  Register x20 is a callee-saved register which may be used for other purposes
  in certain contexts, for example to hold special variables within the kernel.
  This change adds support for reserving this register both to frontend and
  backend to make this register usable for these purposes.

  Differential Revision: https://reviews.llvm.org/D46552
  llvm-svn: 334531
* AArch64: Implement support for the shadowcallstack attribute.
  Peter Collingbourne, 2018-04-04 (1 file, -6/+10)

  The implementation of shadow call stack on aarch64 is quite different to the
  implementation on x86_64. Instead of reserving a segment register for the
  shadow call stack, we reserve the platform register, x18. Any function that
  spills lr to sp also spills it to the shadow call stack, a pointer to which is
  stored in x18.

  Differential Revision: https://reviews.llvm.org/D45239
  llvm-svn: 329236
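From the user's side this looks roughly like the sketch below (assuming a clang that accepts
-fsanitize=shadow-call-stack for AArch64; the instruction sequences named in the comment are
illustrative and may differ between compiler versions):

    // foo spills lr because it makes a call, so with shadow call stack enabled
    // the backend also saves/restores lr through the x18-addressed shadow
    // stack, roughly "str x30, [x18], #8" in the prologue and
    // "ldr x30, [x18, #-8]!" in the epilogue.
    extern int bar(int);

    int foo(int x) {
      return bar(x) + 1;
    }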
* [AArch64] Implement dynamic stack probing for windows
  Martin Storsjo, 2018-02-17 (1 file, -0/+4)

  This makes sure that alloca() function calls properly probe the stack as
  needed.

  Differential Revision: https://reviews.llvm.org/D42356
  llvm-svn: 325433
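For context, the kind of code that needs the probing is a dynamically sized stack allocation
such as the hypothetical example below; on Windows the stack must be grown page by page so
that each guard page is touched in order:

    void consume(void *buf, unsigned n);

    void make_buffer(unsigned n) {
      // Variable-sized stack allocation (what alloca() lowers to); the backend
      // must emit stack probes for this on Windows.
      void *buf = __builtin_alloca(n);
      consume(buf, n);
    }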
* AArch64: Fix emergency spillslot being out of reach for large callframes
  Matthias Braun, 2018-01-19 (1 file, -5/+7)

  Re-commit of r322200: The testcase shouldn't hit machine verifier errors
  anymore with r322917 in place.

  Large callframes (calls with several hundreds or thousands of parameters)
  could lead to situations in which the emergency spillslot is out of range to
  be addressed relative to the stack pointer. This commit forces the use of a
  frame pointer in the presence of large callframes.

  This commit does several things:
  - Compute max callframe size at the end of instruction selection.
  - Add mirFileLoaded target callback. Use it to compute the max callframe size
    after loading a .mir file when the size wasn't specified in the file.
  - Let TargetFrameLowering::hasFP() return true if there exists a callframe >
    255 bytes.
  - Always place the emergency spillslot close to FP if we have a frame pointer.
  - Note that `useFPForScavengingIndex()` would previously return false when a
    base pointer was available leading to the emergency spillslot getting
    allocated late (that's the whole effect of this callback). Which made no
    sense to me so I took this case out: Even though the emergency spillslot is
    technically not referenced by FP in this case we still want it allocated
    early.

  Differential Revision: https://reviews.llvm.org/D40876
  llvm-svn: 322919
* Revert "AArch64: Fix emergency spillslot being out of reach for large ↵Matthias Braun2018-01-101-7/+5
| | | | | | | | | | | | callframes" Revert for now as the testcase is hitting a pre-existing verifier error that manifest as a failure when expensive checks are enabled (or -verify-machineinstrs) is used. This reverts commit r322200. llvm-svn: 322231
* AArch64: Fix emergency spillslot being out of reach for large callframes
  Matthias Braun, 2018-01-10 (1 file, -5/+7)

  Large callframes (calls with several hundreds or thousands of parameters)
  could lead to situations in which the emergency spillslot is out of range to
  be addressed relative to the stack pointer. This commit forces the use of a
  frame pointer in the presence of large callframes.

  This commit does several things:
  - Compute max callframe size at the end of instruction selection.
  - Add mirFileLoaded target callback. Use it to compute the max callframe size
    after loading a .mir file when the size wasn't specified in the file.
  - Let TargetFrameLowering::hasFP() return true if there exists a callframe >
    255 bytes.
  - Always place the emergency spillslot close to FP if we have a frame pointer.
  - Note that `useFPForScavengingIndex()` would previously return false when a
    base pointer was available leading to the emergency spillslot getting
    allocated late (that's the whole effect of this callback). Which made no
    sense to me so I took this case out: Even though the emergency spillslot is
    technically not referenced by FP in this case we still want it allocated
    early.

  Differential Revision: https://reviews.llvm.org/D40876
  llvm-svn: 322200
* MachineFunction: Return reference from getFunction(); NFC
  Matthias Braun, 2017-12-15 (1 file, -7/+7)

  The Function can never be nullptr so we can return a reference.

  llvm-svn: 320884
* Move TargetFrameLowering.h to CodeGen where it's implemented
  David Blaikie, 2017-11-03 (1 file, -1/+1)

  This header already includes a CodeGen header and is implemented in
  lib/CodeGen, so move the header there to match.

  This fixes a link error with modular codegeneration builds - where a header
  and its implementation are circularly dependent and so need to be in the same
  library, not split between two like this.

  llvm-svn: 317379
* [COFF, ARM64, CodeView] Add support to emit CodeView debug info for ARM64 COFF
  Mandeep Singh Grang, 2017-07-20 (1 file, -1/+3)

  Reviewers: compnerd, ruiu, rnk, zturner
  Reviewed By: rnk
  Subscribers: majnemer, aemerson, aprantl, javed.absar, kristof.beyls, llvm-commits
  Differential Revision: https://reviews.llvm.org/D35518
  llvm-svn: 308665
* fix typos in comments; NFC
  Hiroshi Inoue, 2017-07-16 (1 file, -1/+1)

  llvm-svn: 308126
* [AArch64] Avoid selecting XZR inline ASM memory operand
  Yi Kong, 2017-07-14 (1 file, -1/+1)

  Restricting register class to PointerRegClass for memory operands. Also fix
  the PointerRegClass for AArch64 from GPR64 to GPR64sp, since XZR cannot hold a
  memory pointer while SP can.

  Fixes PR33134.

  Differential Revision: https://reviews.llvm.org/D34999
  llvm-svn: 308060
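The code shape affected is inline assembly with a memory operand, where the backend has to
pick a register that can legally hold the address. A hedged illustration (the constraint and
instruction choice are only an example, and it builds only for an AArch64 target):

    #include <cstdint>

    void prefetch_line(const std::uint64_t *p) {
      // "Q" is a memory constraint (single base register, no offset); the base
      // register must come from a pointer-capable class such as GPR64sp, never XZR.
      asm volatile("prfm pldl1keep, %0" : : "Q"(*p));
    }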
* [AArch64] Make assert messages uniform and general [NFC]
  Mandeep Singh Grang, 2017-06-28 (1 file, -1/+1)

  Summary: Make assert messages related to Darwin, ELF and COFF uniform.

  Reviewers: rnk, ruiu, compnerd, t.p.northover
  Reviewed By: t.p.northover
  Subscribers: t.p.northover, aemerson, rengolin, javed.absar, llvm-commits, kristof.beyls
  Differential Revision: https://reviews.llvm.org/D34730
  llvm-svn: 306589
* AArch64RegisterInfo: Simplify getReservedReg(); NFC
  Matthias Braun, 2017-02-02 (1 file, -12/+4)

  After marking a 32bit register and all its super registers the 64bit register
  does not need to be marked again.

  llvm-svn: 293855
* Clarify rules for reserved regs, fix aarch64 ones.
  Matthias Braun, 2016-11-30 (1 file, -10/+11)

  No test case necessary as the problematic condition is checked with the newly
  introduced assertAllSuperRegsMarked() function.

  Differential Revision: https://reviews.llvm.org/D26648
  llvm-svn: 288277
* [TargetRegisterInfo, AArch64] Add target hook for isConstantPhysReg().
  Geoff Berry, 2016-09-27 (1 file, -0/+4)

  Summary: The current implementation of isConstantPhysReg() checks for defs of
  physical registers to determine if they are constant. Some architectures
  (e.g. AArch64 XZR/WZR) have registers that are constant and may be used as
  destinations to indicate the generated value is discarded, preventing
  isConstantPhysReg() from returning true. This change adds a
  TargetRegisterInfo hook that overrides the no defs check for cases such as
  this.

  Reviewers: MatzeB, qcolombet, t.p.northover, jmolloy
  Subscribers: junbuml, aemerson, mcrosier, rengolin
  Differential Revision: https://reviews.llvm.org/D24570
  llvm-svn: 282543
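Based on the description above, the AArch64 override plausibly amounts to the following
sketch (the hook's exact signature has changed across LLVM versions, so treat this as an
assumption rather than the committed code):

    // In AArch64RegisterInfo.cpp: WZR/XZR always read as zero, so they are
    // constant even when instructions name them as a "discard" destination.
    bool AArch64RegisterInfo::isConstantPhysReg(unsigned PhysReg) const {
      return PhysReg == AArch64::WZR || PhysReg == AArch64::XZR;
    }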
* MachineFunction: Return reference for getFrameInfo(); NFC
  Matthias Braun, 2016-07-28 (1 file, -10/+10)

  getFrameInfo() never returns nullptr so we should use a reference instead of
  a pointer.

  llvm-svn: 277017
* AArch64: Remove unnecessary namespace llvm; NFC
  Matthias Braun, 2016-06-28 (1 file, -4/+0)

  llvm-svn: 273975
* [NFC] Header cleanup
  Mehdi Amini, 2016-04-18 (1 file, -1/+0)

  Removed some unused headers, replaced some headers with forward class
  declarations.

  Found using simple scripts like this one:

    clear && ack --cpp -l '#include "llvm/ADT/IndexedMap.h"' | xargs grep -L 'IndexedMap[<]' | xargs grep -n --color=auto 'IndexedMap'

  Patch by Eugene Kosov <claprix@yandex.ru>

  Differential Revision: http://reviews.llvm.org/D19219
  From: Mehdi Amini <mehdi.amini@apple.com>
  llvm-svn: 266595
* Swift Calling Convention: swifterror target support.
  Manman Ren, 2016-04-11 (1 file, -0/+9)

  Differential Revision: http://reviews.llvm.org/D18716
  llvm-svn: 265997
* Add support for a preserve_most calling convention to the AArch64 backend.
  Roman Levenstein, 2016-03-10 (1 file, -0/+4)

  This change adds support for a preserve_most calling convention to the AArch64
  backend, similar to how it was done for X86-64. There is also a subsequent
  patch on top of this one to add tail-call support for this calling convention.

  Differential Revision: http://reviews.llvm.org/D18016
  llvm-svn: 263092
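For illustration, this is how the convention is typically requested from C or C++ through
clang's attribute (a hedged usage sketch; the cold error path shown is just one common
motivation for the convention):

    // The callee preserves (almost) all registers, so the hot caller does not
    // need to spill and reload registers around the rarely taken call.
    __attribute__((preserve_most)) void report_error(int err);

    int hot_path(int err) {
      if (err)
        report_error(err);  // cheap call site: minimal register clobbering
      return err == 0;
    }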
* [AArch64] Enable non-leaf frame pointer elimination.
  Geoff Berry, 2016-03-02 (1 file, -3/+1)

  Summary: This change enables frame pointer elimination in non-leaf functions.
  The -fomit-frame-pointer option still needs to be used when compiling via
  clang (or an equivalent method of not setting the 'no-frame-pointer-elim*'
  function attributes if generating llvm IR via some other method) to take
  advantage of this optimization. This change should be NFC when compiling via
  clang without -fomit-frame-pointer.

  Reviewers: t.p.northover
  Subscribers: aemerson, rengolin, tberghammer, qcolombet, llvm-commits, danalbert, mcrosier, srhines
  Differential Revision: http://reviews.llvm.org/D17730
  llvm-svn: 262495
* Simplify some boolean conditional return statements in AArch64.
  Eric Christopher, 2016-02-29 (1 file, -3/+1)

  http://reviews.llvm.org/D9979

  Patch by Richard Thomson (and some conflict resolution by me).

  llvm-svn: 262266
* CXX_FAST_TLS calling convention: performance improvement for AArch64.
  Manman Ren, 2015-12-16 (1 file, -1/+13)

  The access function has a short entry and a short exit; the initialization
  block is only run the first time. To improve the performance, we want to have
  a short frame at the entry and exit.

  We explicitly handle most of the CSRs via copies. Only the CSRs that are not
  handled via copies will be in CSR_SaveList. Frame lowering and
  prologue/epilogue insertion will generate a short frame in the entry and exit
  according to CSR_SaveList. The majority of the CSRs will be handled by the
  register allocator, which will try to spill and reload them in the
  initialization block.

  We add CSRsViaCopy, which will be explicitly handled during lowering.
  1> we first set FunctionLoweringInfo->SplitCSR if conditions are met (the
     target supports it for the given machine function and the function has
     only return exits). We also call TLI->initializeSplitCSR to perform
     initialization.
  2> we call TLI->insertCopiesSplitCSR to insert copies from CSRsViaCopy to
     virtual registers at beginning of the entry block and copies from virtual
     registers to CSRsViaCopy at beginning of the exit blocks.
  3> we also need to make sure the explicit copies will not be eliminated.

  The target independent portion was committed as r255353.

  rdar://problem/23557469
  Differential Revision: http://reviews.llvm.org/D15341
  llvm-svn: 255821
* [CXX TLS calling convention] Add support for AArch64.
  Manman Ren, 2015-12-08 (1 file, -0/+4)

  rdar://9001553
  llvm-svn: 254978
* [AArch64] Remove check for Darwin that was needed to decide if x18 should be reserved.
  Akira Hatanaka, 2015-07-27 (1 file, -9/+7)

  The decision to reserve x18 is going to be made solely by the front-end, so it
  isn't necessary to check if the OS is Darwin in the backend.

  llvm-svn: 243308
* [AArch64] Define subtarget feature "reserve-x18", which is used to decide whether register x18 should be reserved.
  Akira Hatanaka, 2015-07-25 (1 file, -9/+8)

  This change is needed because we cannot use a backend option to set cl::opt
  "aarch64-reserve-x18" when doing LTO.

  Out-of-tree projects currently using cl::opt option "-aarch64-reserve-x18" to
  reserve x18 should make changes to add subtarget feature "reserve-x18" to the
  IR.

  rdar://problem/21529937
  Differential Revision: http://reviews.llvm.org/D11463
  llvm-svn: 243186
* Targets: commonize some stack realignment code
  JF Bastien, 2015-07-20 (1 file, -23/+0)

  This patch does the following:
  * Fix FIXME on `needsStackRealignment`: it is now shared between multiple
    targets, implemented in `TargetRegisterInfo`, and isn't `virtual` anymore.
    This will break out-of-tree targets, silently if they used `virtual` and
    with a build error if they used `override`.
  * Factor out `canRealignStack` as a `virtual` function on
    `TargetRegisterInfo`, which by default only looks for the
    `no-realign-stack` function attribute.

  Multiple targets duplicated the same `needsStackRealignment` code:
  - Aarch64.
  - ARM.
  - Mips almost: had extra `DEBUG` diagnostic, which the default implementation
    now has.
  - PowerPC.
  - WebAssembly.
  - x86 almost: has an extra `-force-align-stack` option, which the default
    implementation now has.

  The default implementation of `needsStackRealignment` used to just return
  `false`. My current patch changes the behavior by simply using the above
  shared behavior. This affects:
  - AMDGPU
  - BPF
  - CppBackend
  - MSP430
  - NVPTX
  - Sparc
  - SystemZ
  - XCore
  - Out-of-tree targets

  This is a breaking change! `make check` passes.

  The only implementation of the `virtual` function (besides the slight
  difference in x86) was Hexagon (which did
  `MF.getFrameInfo()->getMaxAlignment() > 8`), and potentially some out-of-tree
  targets. Hexagon now uses the default implementation.

  `needsStackRealignment` was being overwritten in `<Target>GenRegisterInfo.inc`
  to return `false` as the default also did. That was odd and is now gone.

  Reviewers: sunfish
  Subscribers: aemerson, llvm-commits, jfb
  Differential Revision: http://reviews.llvm.org/D11160
  llvm-svn: 242727
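The shared default for canRealignStack described above can be sketched roughly as follows
(an approximation for illustration only, assuming the hook signature of that era; the
committed code may differ in how the attribute is read):

    // Default on TargetRegisterInfo: realignment is allowed unless the
    // function carries the "no-realign-stack" attribute.
    bool TargetRegisterInfo::canRealignStack(const MachineFunction &MF) const {
      return !MF.getFunction()->hasFnAttribute("no-realign-stack");
    }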
* Target RegisterInfo: devirtualize TargetFrameLowering
  JF Bastien, 2015-07-10 (1 file, -12/+8)

  Summary: The target frame lowering's concrete type is always known in
  RegisterInfo, yet it's only sometimes devirtualized through a static_cast.
  This change adds an auto-generated static function
  <Target>GenRegisterInfo::getFrameLowering(const MachineFunction &MF) which
  does this devirtualization, and uses this function in all targets which can.

  This change was suggested by sunfish in D11070 for WebAssembly, I figure that
  I may as well improve the other targets while I'm here.

  Subscribers: sunfish, ted, llvm-commits, jfb
  Differential Revision: http://reviews.llvm.org/D11093
  llvm-svn: 241921
* [AArch64] Add support for dynamic stack alignment
  Kristof Beyls, 2015-04-09 (1 file, -0/+30)

  Differential Revision: http://reviews.llvm.org/D8876
  llvm-svn: 234471
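A small example of the kind of code this enables (nothing here is specific to the patch; it
simply shows a local object whose alignment exceeds the 16-byte AArch64 stack alignment,
which forces the frame to be realigned dynamically):

    int fill(void *buf, unsigned size);

    int make_aligned_buffer() {
      alignas(64) char buf[128];   // over-aligned stack object
      return fill(buf, sizeof buf);
    }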
* [ARM] Fix handling of thumb1 out-of-range frame offsets
  John Brawn, 2015-03-20 (1 file, -2/+3)

  LocalStackSlotPass assumes that isFrameOffsetLegal doesn't change its answer
  when the base register changes. Unfortunately this isn't true in thumb1, where
  SP-based loads allow a larger offset than non-SP-based loads, and this causes
  the base register reuse code to generate instructions that are unencodable,
  causing an assertion failure.

  Solve this by adding a BaseReg parameter to isFrameOffsetLegal, which
  ARMBaseRegisterInfo can then make use of to give the correct answer.

  Differential Revision: http://reviews.llvm.org/D8419
  llvm-svn: 232825
* Revert "Migrate the AArch64 TargetRegisterInfo to its TargetMachine"Eric Christopher2015-03-181-2/+2
| | | | | | | | | as we don't necessarily need to do this yet - though we could move the base class to the TargetMachine as it isn't subtarget dependent. This reverts commit r232103. llvm-svn: 232665
* Migrate the AArch64 TargetRegisterInfo to its TargetMachine implementation.
  Eric Christopher, 2015-03-12 (1 file, -2/+2)

  This requires a bit of scaffolding and a few fixups that'll go away once all
  of the ports have been migrated.

  llvm-svn: 232103
* Remove the need to cache the subtarget in the AArch64 TargetRegisterInfo classes.
  Eric Christopher, 2015-03-12 (1 file, -15/+21)

  Replace it with a cache to the Triple and use that where applicable at the
  moment.

  llvm-svn: 232005
* Have getCallPreservedMask and getThisCallPreservedMask take a MachineFunction argument so that we can grab subtarget specific features off of it.
  Eric Christopher, 2015-03-11 (1 file, -2/+4)

  llvm-svn: 231979
* AArch64: add backend option to reserve x18 (platform register)
  Tim Northover, 2015-01-21 (1 file, -3/+7)

  AAPCS64 says that it's up to the platform to specify whether x18 is reserved,
  and a first step on that way is to add a flag controlling it.

  From: Andrew Turner <andrew@fubar.geek.nz>
  llvm-svn: 226664
* [AArch64] Implement GHC calling convention
  Greg Fitzgerald, 2015-01-19 (1 file, -1/+9)

  Original patch by Luke Iannini. Minor improvements and test added by Erik de
  Castro Lopo.

  Differential Revision: http://reviews.llvm.org/D6877
  From: Erik de Castro Lopo <erikd@mega-nerd.com>
  llvm-svn: 226473
* Have MachineFunction cache a pointer to the subtarget to make lookups shorter/easier and have the DAG use that to do the same lookup.
  Eric Christopher, 2014-08-05 (1 file, -11/+6)

  This can be used in the future for TargetMachine based caching lookups from
  the MachineFunction easily.

  Update the MIPS subtarget switching machinery to update this pointer at the
  same time it runs.

  llvm-svn: 214838
* Remove the TargetMachine forwards for TargetSubtargetInfo based information and update all callers.
  Eric Christopher, 2014-08-04 (1 file, -6/+11)

  No functional change.

  llvm-svn: 214781
* AArch64: implement copies to/from NZCV as a last ditch effort.
  Tim Northover, 2014-05-27 (1 file, -1/+1)

  A test in test/Generic creates a DAG where the NZCV output of an ADCS is used
  by multiple nodes. This makes LLVM want to save a copy of NZCV for later,
  which it couldn't do before.

  This should be the last fix required for the aarch64 buildbot.

  llvm-svn: 209651