path: root/llvm/lib/Target/ARM64/ARM64ISelLowering.h
Commit message | Author | Age | Files | Lines
* AArch64/ARM64: move ARM64 into AArch64's place | Tim Northover | 2014-05-24 | 1 | -464/+0
    This commit starts with a "git mv ARM64 AArch64" and continues out from there, renaming the C++ classes, intrinsics, and other target-local objects for consistency. "ARM64" test directories are also moved, and tests that began their life in ARM64 use an arm64 triple, while those from AArch64 use an aarch64 triple. Both should be equivalent though. This finishes the AArch64 merge, and everyone should feel free to continue committing as normal now.
    llvm-svn: 209577
* Revert "Implement global merge optimization for global variables."Rafael Espindola2014-05-161-4/+0
    This reverts commit r208934. The patch depends on aliases to GEPs with non-zero offsets. That is not supported and fairly broken. The good news is that GlobalAlias is being redesigned and will have support for offsets, so this patch should be a nice match for it.
    llvm-svn: 208978
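    The problematic shape was roughly the sketch below, in the typed-pointer IR syntax of the era: an alias whose aliasee is a GEP at a non-zero offset into a merged global. The names are illustrative, not taken from the patch.

        @_MergedGlobals = internal global { i32, i32 } zeroinitializer
        ; an alias into the middle of the merged blob; GlobalAlias could not
        ; represent such non-zero offsets correctly at the time:
        @y = alias i32* getelementptr inbounds ({ i32, i32 }* @_MergedGlobals, i32 0, i32 1)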
* [ARM64] Implement NEON post-increment LD1(lane) and post-increment LD1R. | Hao Liu | 2014-05-16 | 1 | -0/+2
    llvm-svn: 208955
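    A rough sketch of the kind of input this enables, in the typed-pointer IR of the era; the instruction in the trailing comment is my assumption about the selection, not output quoted from the patch:

        ; load a scalar, splat it to all lanes, and bump the pointer:
        %val  = load float* %p
        %ins  = insertelement <4 x float> undef, float %val, i32 0
        %dup  = shufflevector <4 x float> %ins, <4 x float> undef, <4 x i32> zeroinitializer
        %next = getelementptr float* %p, i64 1
        ; expected to select to a single post-incremented load-replicate:
        ;   ld1r { v0.4s }, [x0], #4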
* Implement global merge optimization for global variables. | Jiangning Liu | 2014-05-15 | 1 | -0/+4
    This commit implements two command-line switches, -global-merge-on-external and -global-merge-aligned. Both are false by default, so this optimization is disabled by default for all targets. For ARM64, some back-end behaviors need to be tuned to get this optimization further enabled.
    llvm-svn: 208934
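    A hypothetical before/after sketch, assuming the switches are plain boolean flags and that the pass uses its conventional _MergedGlobals anchor name:

        ; llc -global-merge-on-external -global-merge-aligned foo.ll
        @x = internal global i32 0, align 4
        @y = internal global i32 0, align 4
        ; with merging enabled, candidates can be pooled behind one symbol so a
        ; single base address serves several accesses:
        @_MergedGlobals = internal global { i32, i32 } zeroinitializer, align 4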
* [ARM64] Support aggressive fastcc/tailcallopt breaking ABI by popping out argument stack from callee. | Jiangning Liu | 2014-05-15 | 1 | -0/+10
    llvm-svn: 208837
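    A minimal sketch of the shape involved, with made-up function names; the arguments here fit in registers, but the interesting case is when they spill to the callee's argument stack area, which the callee now pops on return under -tailcallopt, allowing a true tail call:

        define fastcc void @callee(i64 %a, i64 %b) {
          ret void
        }
        define fastcc void @caller(i64 %x) {
          tail call fastcc void @callee(i64 %x, i64 1)
          ret void
        }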
* Rename ComputeMaskedBits to computeKnownBits | Jay Foad | 2014-05-14 | 1 | -4/+4
    "Masked" has been inappropriate since it lost its Mask parameter in r154011.
    llvm-svn: 208811
* Pass the value type to TLI::getRegisterByName | Hal Finkel | 2014-05-11 | 1 | -1/+1
    We must validate the value type in TLI::getRegisterByName, because if we don't, and the wrong type is used with the IR intrinsic, we'll assert (we won't be able to find a valid register class with which to construct the requested copy operation). For PPC64, additionally, the type information is necessary to decide between the 64-bit register and the 32-bit subregister. No functionality change.
    llvm-svn: 208508
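    The IR intrinsic in question is llvm.read_register; a minimal sketch of the point being checked. The result type must match the named register's width (i64 for a 64-bit stack pointer), since it picks the register class, and on PPC64 it also distinguishes the 64-bit register from its 32-bit subregister:

        !0 = !{!"sp"}
        %sp = call i64 @llvm.read_register.i64(metadata !0)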
* Add 'override' to getRegisterByName in *ISelLowering.h | Hal Finkel | 2014-05-11 | 1 | -1/+1
    No functionality change intended.
    llvm-svn: 208507
* AArch64/ARM64: Port NEON post-increment load/store with 2/3/4 vectors to ARM64 backend. | Hao Liu | 2014-05-08 | 1 | -1/+24
    llvm-svn: 208284
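    Roughly, the pattern is a structured load intrinsic whose address is then advanced by the transfer size; the intrinsic name below follows the arm64 spelling of the era, and its exact suffix is an assumption on my part:

        %pair = call { <4 x float>, <4 x float> } @llvm.arm64.neon.ld2.v4f32.p0i8(i8* %p)
        %next = getelementptr i8* %p, i64 32
        ; expected to fold into one post-incremented structure load:
        ;   ld2 { v0.4s, v1.4s }, [x0], #32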
* Implementing named register intrinsics | Renato Golin | 2014-05-06 | 1 | -0/+1
    This patch implements the infrastructure to use named register constructs in programs that need access to specific registers (bare metal, kernels, etc). So far, only the stack pointer is supported as a technology preview, but as it is, the intrinsic can already support all non-allocatable registers from any architecture.
    llvm-svn: 208104
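    At the C level this surfaces as a named register variable (register unsigned long sp asm("sp");), which clang lowers to the new intrinsics. A minimal IR sketch, assuming the read/write pair as it later became documented:

        declare i64 @llvm.read_register.i64(metadata)
        declare void @llvm.write_register.i64(metadata, i64)

        !0 = !{!"sp"}
        define i64 @current_sp() {
          %sp = call i64 @llvm.read_register.i64(metadata !0)
          ret i64 %sp
        }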
* [ARM64] Prevent bit extraction from being adjusted by a following shift | Weiming Zhao | 2014-04-30 | 1 | -0/+3
    For a pattern like ((x >> C1) & Mask) << C2, the DAG combiner may convert it into (x >> (C1-C2)) & (Mask << C2), which makes pattern matching of ubfx more difficult. For example, given

        %shr = lshr i64 %x, 4
        %and = and i64 %shr, 15
        %arrayidx = getelementptr inbounds [8 x [64 x i64]]* @arr, i64 0, i64 2, i64 %and
        %0 = load i64* %arrayidx

    with the current shift folding, it takes 3 instructions to compute the base address:

        lsr x8, x0, #1
        and x8, x8, #0x78
        add x8, x9, x8

    Using ubfx, it needs only 2:

        ubfx x8, x0, #4, #4
        add x8, x9, x8, lsl #3

    This fixes bug 19589.
    llvm-svn: 207702
* [C++11] Add 'override' keywords and remove 'virtual'. | Craig Topper | 2014-04-29 | 1 | -11/+14
    Additionally add 'final' and leave 'virtual' on some methods that are marked virtual without overriding anything and have no obvious overrides themselves. ARM64 edition.
    llvm-svn: 207509
* [C++] Use 'nullptr'. | Craig Topper | 2014-04-28 | 1 | -1/+1
    llvm-svn: 207394
* AArch64/ARM64: port BSL logic from AArch64 & enable test. | Tim Northover | 2014-04-18 | 1 | -0/+4
    I enhanced it a little in the process. The decision shouldn't really be based on whether a BUILD_VECTOR is a splat: any set of constants will do the job provided they're related in the correct way. Also, the BUILD_VECTOR could be any operand of the incoming AND nodes, so it's best to check for all 4 possibilities rather than assuming it'll be the RHS.
    llvm-svn: 206569
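    The shape being matched, sketched with illustrative constants: the two AND masks are bitwise complements of each other (note, not splats), which is exactly the relationship a single bitwise select needs:

        %lhs = and <4 x i32> %x, <i32 -1, i32 0, i32 -1, i32 0>
        %rhs = and <4 x i32> %y, <i32 0, i32 -1, i32 0, i32 -1>
        %sel = or  <4 x i32> %lhs, %rhs
        ; each result bit comes from %x or %y per the mask, i.e. one BSL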
* ARM64: switch to IR-based atomic operations. | Tim Northover | 2014-04-17 | 1 | -13/+9
    Goodbye code! (Game: spot the bug fixed by the change).
    llvm-svn: 206490
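    Concretely, plain atomic IR such as the line below is now expanded into an exclusive load/store loop before instruction selection rather than lowered through target pseudo-instructions; the intrinsic spellings in the comment are my recollection of the arm64 names, not quoted from the patch:

        %old = atomicrmw add i32* %p, i32 1 seq_cst
        ; expanded at the IR level into a retry loop around exclusive monitors,
        ; roughly: llvm.arm64.ldaxr / llvm.arm64.stlxr plus a conditional branch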
* Make consistent use of MCPhysReg instead of uint16_t throughout the tree. | Craig Topper | 2014-04-04 | 1 | -1/+1
    llvm-svn: 205610
* ARM64: override all the things. | Tim Northover | 2014-03-30 | 1 | -52/+51
    Actually, mostly only those in the top-level directory that already had a "virtual" attached. But it's the thought that counts and it's been a long day.
    llvm-svn: 205131
* ARM64: initial backend import | Tim Northover | 2014-03-29 | 1 | -0/+423
    This adds a second implementation of the AArch64 architecture to LLVM, accessible in parallel via the "arm64" triple. The plan over the coming weeks & months is to merge the two into a single backend, during which time thorough code review should naturally occur. Everything will be easier with the target in-tree though, hence this commit.
    llvm-svn: 205090