path: root/llvm/test/CodeGen/ARM64/bitfield-extract.ll
Commit message | Author | Age | Files | Lines
* AArch64/ARM64: move ARM64 into AArch64's place | Tim Northover | 2014-05-24 | 1 | -532/+0
This commit starts with a "git mv ARM64 AArch64" and continues out from there, renaming the C++ classes, intrinsics, and other target-local objects for consistency. "ARM64" test directories are also moved, and tests that began their life in ARM64 use an arm64 triple, while those from AArch64 use an aarch64 triple. Both should be equivalent, though. This finishes the AArch64 merge, and everyone should feel free to continue committing as normal now. llvm-svn: 209577
* AArch64/ARM64: print BFM instructions as BFI or BFXIL | Tim Northover | 2014-05-01 | 1 | -18/+18
The canonical form of the BFM instruction is always one of the more explicit extract or insert operations, which makes reading output much easier. llvm-svn: 207752
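An illustrative pairing (the operand values below are chosen for demonstration, not taken from the commit): when imms < immr the BFM is a bitfield insert, otherwise an extract-and-insert-low, and the printer now emits the alias rather than the raw form.

  bfm w0, w1, #28, #3     // imms < immr:  printed as  bfi   w0, w1, #4, #4
  bfm w0, w1, #4, #11     // imms >= immr: printed as  bfxil w0, w1, #4, #8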
* [ARM64] Prevent bit extraction to be adjusted by following shift | Weiming Zhao | 2014-04-30 | 1 | -0/+13
For a pattern like ((x >> C1) & Mask) << C2, the DAG combiner may convert it into (x >> (C1-C2)) & (Mask << C2), which makes pattern matching of ubfx more difficult. For example, given:
  %shr = lshr i64 %x, 4
  %and = and i64 %shr, 15
  %arrayidx = getelementptr inbounds [8 x [64 x i64]]* @arr, i64 0, i64 2, i64 %and
  %0 = load i64* %arrayidx
With the current shift folding, it takes 3 instructions to compute the base address:
  lsr x8, x0, #1
  and x8, x8, #0x78
  add x8, x9, x8
Using ubfx, it only needs 2:
  ubfx x8, x0, #4, #4
  add x8, x9, x8, lsl #3
This fixes PR19589. llvm-svn: 207702
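As a sketch of how such a case might be checked in bitfield-extract.ll (the function name and CHECK lines are illustrative and written in the typed-pointer IR syntax of the time; this is not the exact test the commit added):

  @arr = external global [8 x [64 x i64]]

  define i64 @load_via_ubfx(i64 %x) {
  ; CHECK-LABEL: load_via_ubfx:
  ; CHECK: ubfx [[REG:x[0-9]+]], x0, #4, #4
  ; CHECK: add {{x[0-9]+}}, {{x[0-9]+}}, [[REG]], lsl #3
    %shr = lshr i64 %x, 4
    %and = and i64 %shr, 15
    %arrayidx = getelementptr inbounds [8 x [64 x i64]]* @arr, i64 0, i64 2, i64 %and
    %0 = load i64* %arrayidx
    ret i64 %0
  }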
* ARM64: use hex immediates for movz/movk instructions | Tim Northover | 2014-04-30 | 1 | -4/+4
Since these are mostly used in "lsl #16", "lsl #32", "lsl #48" combinations to piece together an immediate in 16-bit chunks, hex is probably the most appropriate format. llvm-svn: 207635
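For illustration (the constant below is made up, not taken from the commit), a 64-bit value assembled in 16-bit chunks is much easier to read back in hex than in decimal:

  movz x0, #4660                 // decimal: hard to see which bits are set
  movk x0, #22136, lsl #16

  movz x0, #0x1234               // hex: chunk 0x1234 fills bits [15:0]
  movk x0, #0x5678, lsl #16      // chunk 0x5678 fills bits [31:16], so x0 = 0x56781234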
* ARM64: fix assertion in ISelDAGToDAG | Tim Northover | 2014-04-25 | 1 | -0/+17
Also an unused variable, so double bonus! This should deal with PR19548. llvm-svn: 207221
* [ARM64] Print preferred aliases for SBFM/UBFM in InstPrinter | Bradley Smith | 2014-04-25 | 1 | -14/+14
llvm-svn: 207219
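A few illustrative mappings (registers and immediates chosen here, not taken from the commit) from the raw SBFM/UBFM form to the preferred alias the printer now emits:

  sbfm w0, w1, #16, #31   // imms == 31:            printed as  asr  w0, w1, #16
  sbfm x0, x1, #0, #7     // sign-extend low byte:  printed as  sxtb x0, w1
  ubfm w0, w1, #8, #15    // generic extract:       printed as  ubfx w0, w1, #8, #8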
* [CodeGenPrepare] Use APInt to check the value of the immediate in a and | Quentin Colombet | 2014-04-22 | 1 | -0/+23
while checking a candidate for bit field extract. Otherwise the value may not fit in uint64_t and this will trigger an assertion. This fixes PR19503. llvm-svn: 206834
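A hypothetical reduced input of the kind that could hit the old assertion (my reconstruction from the message, not the PR19503 reproducer or the 23 lines of test actually added): the 'and' mask is wider than 64 bits, so reading it through a plain uint64_t asserts, while an APInt comparison handles the full width.

  define i128 @wide_mask_extract(i128 %x, i1 %c) {
  entry:
    %shr = lshr i128 %x, 16
    br i1 %c, label %use, label %other
  use:
    ; This mask is 2^65 - 1 and does not fit in a uint64_t.
    %and = and i128 %shr, 36893488147419103231
    ret i128 %and
  other:
    ret i128 0
  }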
* ARM64: Combine shifts and uses from different basic block to bit-extract instruction | Yi Jiang | 2014-04-21 | 1 | -0/+73
llvm-svn: 206774
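An illustrative shape of what this enables (hypothetical function and CHECK line, not the tests actually added): the shift sits in one basic block and its masking use in another; sinking the shift next to its use lets instruction selection fold the pair into a single ubfx.

  define i64 @cross_block_extract(i64 %x, i1 %c) {
  ; CHECK-LABEL: cross_block_extract:
  ; CHECK: ubfx {{x[0-9]+}}, x0, #16, #8
  entry:
    %shr = lshr i64 %x, 16
    br i1 %c, label %use, label %done
  use:
    %and = and i64 %shr, 255
    ret i64 %and
  done:
    ret i64 0
  }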
* ARM64: use 32-bit moves for constants where possible. | Tim Northover | 2014-04-16 | 1 | -3/+3
If we know that a particular 64-bit constant has all high bits zero, then we can rely on the fact that 32-bit ARM64 instructions automatically zero out the high bits of an x-register. This gives the expansion logic fewer constraints to satisfy and so sometimes allows it to pick better sequences. Came up while porting test/CodeGen/AArch64/movw-consts.ll: this will allow a 32-bit MOVN to be used in @test8 soon. llvm-svn: 206379
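For illustration, with a constant of the kind described (the exact value in @test8 may differ): any write to a w-register clears bits [63:32] of the corresponding x-register, so a 64-bit constant whose upper half is zero can be built with a single 32-bit move.

  // Materializing x0 = 0x00000000fffffff5
  movz x0, #0xfff5                // 64-bit expansion: two instructions
  movk x0, #0xffff, lsl #16

  movn w0, #0xa                   // one 32-bit MOVN: w0 = ~0xa = 0xfffffff5,
                                  // and bits [63:32] of x0 are implicitly zero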
* ARM64: initial backend import | Tim Northover | 2014-03-29 | 1 | -0/+406
This adds a second implementation of the AArch64 architecture to LLVM, accessible in parallel via the "arm64" triple. The plan over the coming weeks & months is to merge the two into a single backend, during which time thorough code review should naturally occur. Everything will be easier with the target in-tree though, hence this commit. llvm-svn: 205090