path: root/llvm/lib/Target/ARM
* GlobalISel: remove unused variable to silence warning. (Tim Northover, 2016-11-15; 2 files, -2/+1)
  llvm-svn: 287027
* [ARM] GlobalISel: Remove unused members. NFCI (Diana Picus, 2016-11-15; 3 files, -8/+4)
  This silences some warnings that I didn't see with my host compiler.
  llvm-svn: 286981
* [ARM] Make sure GlobalISel is only initialized once. NFCI (Diana Picus, 2016-11-15; 1 file, -12/+12)
  Move some code inside the proper 'if' block to make sure it is only run once, when the subtarget is first created. Things can still break if we use different ARM target machines or if we have functions with different 'target-cpu' or 'target-features' attributes; we should fix that too in the future.
  llvm-svn: 286974
* [ARM] Add machine scheduler for Cortex-R52 (Javed Absar, 2016-11-15; 3 files, -1/+985)
  This patch adds the Sched Machine Model for Cortex-R52. Details of the pipeline and descriptions are in comments in the file ARMScheduleR52.td included in this patch.
  Reviewers: rengolin, jmolloy
  Differential Revision: https://reviews.llvm.org/D26500
  llvm-svn: 286949
* ARM: try to fix GCC 4.8 compilation again after r286881. (Tim Northover, 2016-11-14; 1 file, -1/+2)
  llvm-svn: 286882
* Recommit: ARM: sort register lists by encoding in push/pop instructions. (Tim Northover, 2016-11-14; 3 files, -2/+28)
  For example we were producing:

    push {r8, r10, r11, r4, r5, r7, lr}

  This is misleading (r4, r5 and r7 are actually pushed before the rest), and other components (stack folding recently) often forget to deal with the extra complexity coming from the different order, leading to miscompiles. Finally, we warn about our own code in -no-integrated-as mode without this, which is really not a good idea.

  Fixed usage of std::sort so that we (hopefully) use instantiations that actually exist in GCC 4.8.
  llvm-svn: 286881
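  A minimal sketch of the kind of comparator this change implies (the helper below is hypothetical; in LLVM the encoding would come from MCRegisterInfo::getEncodingValue). A plain lambda with a strict weak ordering avoids the std::sort template-instantiation trouble GCC 4.8 had:

    // Hypothetical example: sort a push/pop register list by hardware encoding.
    #include <algorithm>
    #include <vector>

    struct RegOp {
      unsigned Reg;      // register id (illustrative)
      unsigned Encoding; // hardware encoding: r0 = 0, ..., lr = 14
    };

    void sortByEncoding(std::vector<RegOp> &Ops) {
      std::sort(Ops.begin(), Ops.end(), [](const RegOp &A, const RegOp &B) {
        return A.Encoding < B.Encoding; // strict weak ordering
      });
    }

  With this, a list given as {r8, r10, r11, r4, r5, r7, lr} comes out as {r4, r5, r7, r8, r10, r11, lr}, matching the order in which the hardware actually pushes the registers.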
* Revert "ARM: sort register lists by encoding in push/pop instructions."Tim Northover2016-11-143-28/+2
| | | | | | | This reverts commit 286866. It broke a bot, something to do with exactly which templates std::sort accepts. llvm-svn: 286867
* ARM: sort register lists by encoding in push/pop instructions. (Tim Northover, 2016-11-14; 3 files, -2/+28)
  For example we were producing:

    push {r8, r10, r11, r4, r5, r7, lr}

  This is misleading (r4, r5 and r7 are actually pushed before the rest), and other components (stack folding recently) often forget to deal with the extra complexity coming from the different order, leading to miscompiles. Finally, we warn about our own code in -no-integrated-as mode without this, which is really not a good idea.
  llvm-svn: 286866
* [ARM] Add plumbing for GlobalISel (Diana Picus, 2016-11-11; 13 files, -4/+407)
  Add the GlobalISel skeleton, up to the point where we can select a ret void.
  llvm-svn: 286573
* [Target] Rename X86/ARM Assembly printer to reflect reality. (Davide Italiano, 2016-11-10; 1 file, -1/+1)
  This shows up a lot when profiling LTO testcases with -time-passes, so it's better to have a non-confusing name.
  llvm-svn: 286488
* [ARM] Thumb2 LDR (literal) should accept PC as the destination (Oliver Stannard, 2016-11-10; 1 file, -1/+1)
  The version of this instruction with the .w suffix already correctly accepts this, but the alias without the .w did not.
  Differential Revision: https://reviews.llvm.org/D26499
  llvm-svn: 286446
* [Thumb1] Move padding earlier when synthesizing TBBs off of the PC (James Molloy, 2016-11-07; 1 file, -0/+8)
  When the base register (the register pointing to the jump table) is the PC, we expect the jump table to directly follow the jump sequence with no intervening padding. If there is intervening padding, the calculated offsets will not be correct.

  One solution would be to account for any padding in the emitted LDRB instruction, but at the moment we don't support emitting MCExprs for the load offset. In the meantime, it is correct and only slightly worse to move the padding up, from just before the jump table to just before the jump instruction sequence. We can do that by emitting code alignment before the jump sequence, since we know the number of instructions in the sequence is always 4.
  llvm-svn: 286107
* ARM: lower fpowi appropriately for Windows ARM (Saleem Abdulrasool, 2016-11-06; 1 file, -0/+57)
  This handles the last case of the builtin function calls for which we would generate code that differed from Microsoft's ABI. Rather than generating a call to `__pow{d,s}i2`, we now promote the parameter to a float or double and invoke `powf` or `pow` instead.
  Addresses PR30825!
  llvm-svn: 286082
* [Cortex-M0] Atomic lowering (Weiming Zhao, 2016-11-03; 2 files, -7/+14)
  Summary: ARMv6-M supports dmb and other fence instructions, but not ldrex/strex. So for some atomic loads/stores, LLVM should inline instructions instead of lowering to __sync_ calls.
  Reviewers: rengolin, efriedma, t.p.northover, jmolloy
  Subscribers: efriedma, aemerson, llvm-commits
  Differential Revision: https://reviews.llvm.org/D26120
  llvm-svn: 285969
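  A hedged sketch of the selection rule this implies (my own helper, not LLVM's actual hook; the real change adjusts the ARM backend's atomic expansion decisions):

    // On a target with barriers (DMB) but no exclusives (LDREX/STREX),
    // word-sized atomic loads/stores can be inlined as
    // "barrier; plain access; barrier", while read-modify-write atomics
    // must still be lowered to __sync_* library calls.
    bool canInlineAtomic(bool HasDMB, bool HasLDREX, bool IsRMW,
                         unsigned Bits) {
      if (IsRMW)
        return HasLDREX;           // cmpxchg/fetch-add need exclusives
      return HasDMB && Bits <= 32; // plain load/store: barriers suffice
    }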
* Remove a redundant condition found by PVS-Studio. (Chandler Carruth, 2016-11-03; 1 file, -2/+2)
  Filed http://llvm.org/PR30897 to teach Clang to warn on this kind of stuff.
  llvm-svn: 285945
* Revert "[Thumb] Teach ISel how to lower compares of AND bitmasks efficiently"James Molloy2016-11-032-138/+5
| | | | | | This reverts commit r285893. It caused (probably) http://lab.llvm.org:8011/builders/clang-cmake-thumbv7-a15-full-sh/builds/83 . llvm-svn: 285912
* [Thumb] Teach ISel how to lower compares of AND bitmasks efficiently (James Molloy, 2016-11-03; 2 files, -5/+138)
  This recommits r281323, which was backed out for two reasons: a selfhost failure, and apparent Chromium failures. The latter was actually a red herring. The log has expired for the former, but I suspect that was a red herring too (actually caused by another problematic patch of mine). Therefore reapplying, and will watch the bots like a hawk.

  For the common pattern (CMPZ (AND x, #bitmask), #0), we can do more efficient instruction selection if the bitmask is one consecutive sequence of set bits (32 - clz(bm) - ctz(bm) == popcount(bm)):

  1) If the bitmask touches the LSB, then we can remove all the upper bits and set the flags by doing one LSLS.
  2) If the bitmask touches the MSB, then we can remove all the lower bits and set the flags with one LSRS.
  3) If the bitmask has popcount == 1 (only one set bit), we can shift that bit into the sign bit with one LSLS and change the condition query from NE/EQ to MI/PL (we could also implement this by shifting into the carry bit and branching on BCC/BCS).
  4) Otherwise, we can emit a sequence of LSLS+LSRS to remove the upper and lower zero bits of the mask.

  Cases 1-3 require only one 16-bit instruction and can elide the CMP. Case 4 requires two 16-bit instructions but can elide the CMP and doesn't require materializing a complex immediate, so it is also a win.
  llvm-svn: 285893
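  The contiguity test quoted above can be spelled out as follows (a self-contained sketch, not the backend's actual code):

    #include <cstdint>

    // A mask is one consecutive run of set bits iff the span between its
    // highest and lowest set bit contains only set bits, i.e.
    // 32 - clz(bm) - ctz(bm) == popcount(bm).
    bool isContiguousMask(uint32_t BM) {
      if (BM == 0)
        return false; // clz/ctz are undefined for 0
      int Width = 32 - __builtin_clz(BM) - __builtin_ctz(BM);
      return Width == __builtin_popcount(BM);
    }
    // isContiguousMask(0x0FF0) == true; isContiguousMask(0x0F0F) == false.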
* [ARM][MC] Cleanup ARM Target Assembly Parser (Nirav Dave, 2016-11-02; 1 file, -540/+343)
  Summary: Correctly parse end-of-statement tokens and handle preprocessor end-of-line comments in the ARM assembly parser.
  Reviewers: rnk, majnemer
  Subscribers: aemerson, rengolin, llvm-commits
  Differential Revision: https://reviews.llvm.org/D26152
  llvm-svn: 285830
* [TableGen] Move OperandMatchResultTy enum to MCTargetAsmParser.h (Alex Bradbury, 2016-11-01; 1 file, -20/+20)
  As it stands, the OperandMatchResultTy is only included in the generated header if there is custom operand parsing. However, almost all backends make use of MatchOperand_Success and friends from OperandMatchResultTy for e.g. parseRegister. This is a pain when starting an AsmParser for a new backend that doesn't yet have custom operand parsing. Move the enum to MCTargetAsmParser.h.

  This patch is a prerequisite for D23563.
  Differential Revision: https://reviews.llvm.org/D23496
  llvm-svn: 285705
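  For reference, the enum in question looks roughly like this (reproduced from memory; check MCTargetAsmParser.h for the authoritative definition):

    enum OperandMatchResultTy {
      MatchOperand_Success,   // operand matched successfully
      MatchOperand_NoMatch,   // operand did not match
      MatchOperand_ParseFail  // operand matched but had errors
    };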
* [Thumb-1] Synthesize TBB/TBH instructions to make use of compressed jump tables (James Molloy, 2016-11-01; 3 files, -15/+185)
  [Reapplying r284580 and r285917 with fix and testing to ensure emitted jump tables for Thumb-1 have 4-byte alignment]

  The TBB and TBH instructions in Thumb-2 allow jump tables to be compressed into sequences of bytes or shorts respectively. These instructions do not exist in Thumb-1; however, it is possible to synthesize them out of a sequence of other instructions. It turns out this sequence is so short that it's almost never a loss for performance and is ALWAYS a significant win for code size.

  TBB example:
    Before:                    After:
      lsls r0, r0, #2            add  r0, pc
      adr  r1, .LJTI0_0          ldrb r0, [r0, #6]
      ldr  r0, [r0, r1]          lsls r0, r0, #1
      mov  pc, r0                add  pc, r0
  => No change in prologue code size or dynamic instruction count. Jump table shrunk by a factor of 4.

  The only case that can increase dynamic instruction count is the TBH case:
    Before:                    After:
      lsls r0, r4, #2            lsls r4, r4, #1
      adr  r1, .LJTI0_0          add  r4, pc
      ldr  r0, [r0, r1]          ldrh r4, [r4, #6]
      mov  pc, r0                lsls r4, r4, #1
                                 add  pc, r4
  => 1 more instruction in prologue. Jump table shrunk by a factor of 2.

  So there is an argument that this should be disabled when optimizing for performance (and a TBH needs to be generated). I'm not so sure about that in practice, because on small cores with Thumb-1, performance is often tied to code size. But I'm willing to turn it off when optimizing for performance if people want (also note that TBHs are fairly rare in practice!)
  llvm-svn: 285690
* CodeGen: further loosen -O0 CG for WoA division (Saleem Abdulrasool, 2016-10-31; 1 file, -5/+13)
  Generate the slowest possible codepath for noopt CodeGen. Even trying to be clever with the negated jump can cause out-of-range jumps. Use a wide branch instead. Although the code is modelled simplistically, the later optimizations would recombine the branching into `cbz` if possible. This re-enables the previous optimization, and hopefully gives us working code in all cases.
  Addresses PR30356!
  llvm-svn: 285649
* ARM: ensure that the Windows DBZ check is in range (Saleem Abdulrasool, 2016-10-27; 2 files, -8/+10)
  The Windows ARM target expects the compiler to emit a division-by-zero check. The check would use the form:

    cmp r?, #0
    cbz .Ltrap
    b .Lbody
    .Lbody:
    ...
    .Ltrap:
    udf #249 @ __brkdiv0

  This works great most of the time. However, if the body of the function is greater than 127 bytes, the branch target limitation of cbz becomes an issue. This occurs sometimes in the unoptimized code generation cases (as in compiler-rt). Since this is a matter of correctness, we possibly pay a small penalty instead. We now form this slightly differently:

    cbnz .Lbody
    udf #249 @ __brkdiv0
    .Lbody:
    ...

  The positive case goes through the branch instead of being the next instruction. However, because of the basic block layout, the negated branch is always a short distance away (2 bytes, after the inserted __brkdiv0). The new t__brkdiv0 instruction is required to explicitly mark the instruction as a terminator, as the generic UDF instruction is not a terminator.
  Addresses PR30532!
  llvm-svn: 285312
* [ARM] Predicate UMAAL selection on hasDSP. (Sam Parker, 2016-10-27; 2 files, -2/+3)
  UMAAL is a DSP instruction and it is not available on thumbv7m (Cortex-M3) and thumbv6m (Cortex-M0+1) targets. Also fix a wrong CHECK prefix in the longMAC.ll test.
  Patch by Vadzim Dambrouski.
  Differential Revision: https://reviews.llvm.org/D25890
  llvm-svn: 285278
* ARM: don't rely on push/pop reglists being in order when folding SP adjust. (Tim Northover, 2016-10-26; 1 file, -8/+19)
  It would be a very nice invariant to rely on, but unfortunately it doesn't necessarily hold (and the causes of mis-sorted reglists appear to be quite varied), so to be robust the frame lowering code can't assume that the first register in the list is also the first one that actually gets pushed.

  Should fix an issue where we were turning something like:

    push {r8, r4, r7, lr}
    sub sp, #24

  into nonsense like:

    push {r2, r3, r4, r5, r6, r7, r8, r4, r7, lr}

  llvm-svn: 285232
* CodeGen/Passes: Pass MachineFunction as functor arg; NFC (Matthias Braun, 2016-10-24; 1 file, -4/+4)
  Passing a MachineFunction as argument is more natural and avoids an unnecessary round-trip through the logic determining the correct Subtarget, because MachineFunction already has a reference anyway.
  llvm-svn: 285039
* Revert r284580+r284917. ("Synthesize TBB/TBH instructions") (Eli Friedman, 2016-10-24; 3 files, -180/+14)
  The optimization has correctness issues, so reverting for now to fix tests on thumb1 targets.
  llvm-svn: 284993
* [ARM] Fix crash in ConstantIslands (James Molloy, 2016-10-22; 1 file, -1/+3)
  tPCRelJT may not be the first instruction in a block. Check that instead of dereferencing a broken iterator.
  llvm-svn: 284917
* Do a sweep over move ctors and remove those that are identical to the default. (Benjamin Kramer, 2016-10-20; 1 file, -7/+0)
  All of these existed because MSVC 2013 was unable to synthesize default move ctors. We recently dropped support for it, so all that error-prone boilerplate can go. No functionality change intended.
  llvm-svn: 284721
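  An illustration of the pattern being removed (a hypothetical class, not one from the commit): a hand-written move constructor that is member-for-member identical to what the compiler would synthesize, so once MSVC 2013 support is gone the whole declaration can simply be deleted:

    #include <string>
    #include <vector>

    struct PassState {
      std::vector<unsigned> Regs;
      std::string Name;

      // Redundant: does exactly what the implicitly-defaulted move ctor
      // would do, so this declaration can be removed entirely.
      PassState(PassState &&Other)
          : Regs(std::move(Other.Regs)), Name(std::move(Other.Name)) {}
    };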
* Reapply r284571 (with the new tests fixed). (Sjoerd Meijer, 2016-10-19; 1 file, -14/+15)
  llvm-svn: 284588
* [Thumb-1] Synthesize TBB/TBH instructions to make use of compressed jump tables (James Molloy, 2016-10-19; 3 files, -14/+178)
  The TBB and TBH instructions in Thumb-2 allow jump tables to be compressed into sequences of bytes or shorts respectively. These instructions do not exist in Thumb-1; however, it is possible to synthesize them out of a sequence of other instructions. It turns out this sequence is so short that it's almost never a loss for performance and is ALWAYS a significant win for code size.

  TBB example:
    Before:                    After:
      lsls r0, r0, #2            add  r0, pc
      adr  r1, .LJTI0_0          ldrb r0, [r0, #6]
      ldr  r0, [r0, r1]          lsls r0, r0, #1
      mov  pc, r0                add  pc, r0
  => No change in prologue code size or dynamic instruction count. Jump table shrunk by a factor of 4.

  The only case that can increase dynamic instruction count is the TBH case:
    Before:                    After:
      lsls r0, r4, #2            lsls r4, r4, #1
      adr  r1, .LJTI0_0          add  r4, pc
      ldr  r0, [r0, r1]          ldrh r4, [r4, #6]
      mov  pc, r0                lsls r4, r4, #1
                                 add  pc, r4
  => 1 more instruction in prologue. Jump table shrunk by a factor of 2.

  So there is an argument that this should be disabled when optimizing for performance (and a TBH needs to be generated). I'm not so sure about that in practice, because on small cores with Thumb-1, performance is often tied to code size. But I'm willing to turn it off when optimizing for performance if people want (also note that TBHs are fairly rare in practice!)
  llvm-svn: 284580
* Revert of r284571 because of failing tests. (Sjoerd Meijer, 2016-10-19; 1 file, -17/+14)
  llvm-svn: 284572
* Checking FP function attribute values and adding more build attribute tests. (Sjoerd Meijer, 2016-10-19; 1 file, -14/+17)
  This renames the function for checking FP function attribute values and also adds more build attribute tests (which are in separate files because build attributes are set per file).
  Differential Revision: https://reviews.llvm.org/D25625
  llvm-svn: 284571
* Improve ARM lowering for "icmp <2 x i64> eq". (Eli Friedman, 2016-10-18; 1 file, -6/+21)
  The custom lowering is pretty straightforward: basically, just AND together the two halves of a <4 x i32> compare.
  Differential Revision: https://reviews.llvm.org/D25713
  llvm-svn: 284536
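  A scalar model of that lowering (illustrative only; the real code operates on NEON vectors during ISel): a 64-bit lane compares equal exactly when both of its 32-bit halves do.

    #include <cstdint>

    bool eq64ViaHalves(uint64_t A, uint64_t B) {
      bool LoEq = (uint32_t)A == (uint32_t)B;                 // low halves
      bool HiEq = (uint32_t)(A >> 32) == (uint32_t)(B >> 32); // high halves
      return LoEq && HiEq; // AND the two halves of the <4 x i32> compare
    }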
* [ARM] Assign cost of scaling for Cortex-R52 (Javed Absar, 2016-10-18; 1 file, -1/+2)
  This patch assigns the cost of the scaling used in addressing for Cortex-R52. On Cortex-R52, a negated register offset takes longer than a non-negated register offset, in a register-offset addressing mode.
  Differential Revision: http://reviews.llvm.org/D25670
  Reviewer: jmolloy
  llvm-svn: 284460
* [XRay] Support for tail calls for ARM no-Thumb (Dean Michael Berris, 2016-10-18; 3 files, -0/+9)
  This patch adds simplified support for tail calls on ARM with XRay instrumentation.

  Known issue: compiled with generic flags `-O3 -g -fxray-instrument -Wall -std=c++14 -ffunction-sections -fdata-sections` (this list doesn't include my specific flags like --target=armv7-linux-gnueabihf etc.), the following program:

    #include <cstdio>
    #include <cassert>
    #include <xray/xray_interface.h>

    [[clang::xray_always_instrument]] void __attribute__((noinline)) fC() {
      std::printf("In fC()\n");
    }

    [[clang::xray_always_instrument]] void __attribute__((noinline)) fB() {
      std::printf("In fB()\n");
      fC();
    }

    [[clang::xray_always_instrument]] void __attribute__((noinline)) fA() {
      std::printf("In fA()\n");
      fB();
    }

    // Avoid infinite recursion in case the logging function is instrumented
    // (so calls the logging function again).
    [[clang::xray_never_instrument]] void simplyPrint(int32_t functionId,
                                                      XRayEntryType xret) {
      printf("XRay: functionId=%d type=%d.\n", int(functionId), int(xret));
    }

    int main(int argc, char* argv[]) {
      __xray_set_handler(simplyPrint);
      printf("Patching...\n");
      __xray_patch();
      fA();
      printf("Unpatching...\n");
      __xray_unpatch();
      fA();
      return 0;
    }

  gives the following output:

    Patching...
    XRay: functionId=3 type=0.
    In fA()
    XRay: functionId=3 type=1.
    XRay: functionId=2 type=0.
    In fB()
    XRay: functionId=2 type=1.
    XRay: functionId=1 type=0.
    XRay: functionId=1 type=1.
    In fC()
    Unpatching...
    In fA()
    In fB()
    In fC()

  So for function fC() the exit sled seems to be called too early, before the function actually exits: before printing "In fC()". Debugging shows that this happens because the printf from fC is also made as a tail call. So first the exit sled of fC is executed, and only then is printf jumped into. So it seems we can't do anything about this with the current approach (i.e. within the simplification described in https://reviews.llvm.org/D23988 ).
  Differential Revision: https://reviews.llvm.org/D25030
  llvm-svn: 284456
* [ArmFastISel] Kill dead code. NFCI. (Davide Italiano, 2016-10-16; 1 file, -35/+0)
  llvm-svn: 284320
* Tidy the calls to getCurrentSection().first -> getCurrentSectionOnly to help readability a bit. (Eric Christopher, 2016-10-14; 1 file, -3/+3)
  llvm-svn: 284202
* [ARM]: Assign cost of scaling used in addressing mode for ARM cores (Javed Absar, 2016-10-13; 4 files, -2/+29)
  This patch assigns the cost of the scaling used in addressing. On many ARM cores, a negated register offset takes longer than a non-negated register offset, in a register-offset addressing mode. For instance:

    (1) LDR R0, [R1, R2 LSL #2]
    (2) LDR R0, [R1, -R2 LSL #2]

  Above, (1) takes fewer cycles than (2). By assigning an appropriate scaling factor cost, we enable LLVM to make the right trade-offs in the optimization and code-selection phase.
  Differential Revision: http://reviews.llvm.org/D24857
  Reviewers: jmolloy, rengolin
  llvm-svn: 284127
* Re-land "[Thumb] Save/restore high registers in Thumb1 pro/epilogues"Reid Kleckner2016-10-113-24/+375
| | | | | | | | | Reverts r283938 to reinstate r283867 with a fix. The original change had an ArrayRef referring to a destroyed temporary initializer list. Use plain C arrays instead. llvm-svn: 283942
* Revert "[Thumb] Save/restore high registers in Thumb1 pro/epilogues"Reid Kleckner2016-10-113-369/+24
| | | | | | | | | | | | | | | | | | This reverts r283867. This appears to be an infinite loop: while (HiRegToSave != AllHighRegs.end() && CopyReg != AllCopyRegs.end()) { if (HiRegsToSave.count(*HiRegToSave)) { ... CopyReg = findNextOrderedReg(++CopyReg, CopyRegs, AllCopyRegs.end()); HiRegToSave = findNextOrderedReg(++HiRegToSave, HiRegsToSave, AllHighRegs.end()); } } llvm-svn: 283938
* ARMMachineFunctionInfo.cpp: Add an initializer of ARMFunctionInfo::ReturnRegsCount in the explicit ctor. (NAKAMURA Takumi, 2016-10-11; 1 file, -2/+2)
  It caused a crash since r283867.
  llvm-svn: 283909
* Reformat. (NAKAMURA Takumi, 2016-10-11; 1 file, -4/+4)
  llvm-svn: 283908
* Silence unused warning in non-assert builds. (Daniel Jasper, 2016-10-11; 1 file, -0/+1)
  llvm-svn: 283899
* [Thumb] Save/restore high registers in Thumb1 pro/epilogues (Oliver Stannard, 2016-10-11; 3 files, -24/+368)
  The high registers are not allocatable in Thumb1 functions, but they could still be used by inline assembly, so we need to save and restore the callee-saved high registers (r8-r11) in the prologue and epilogue.

  This is complicated by the fact that the Thumb1 push and pop instructions cannot access these registers. Therefore, we have to move them down into low registers before pushing, and move them back after popping into low registers.

  In most functions, we will have low registers that are also being pushed/popped, which we can use as the temporary registers for saving/restoring the high registers. However, this is not guaranteed, so we may need to push some extra low registers to ensure that the high registers can be saved/restored. For correctness, it would be sufficient to use just one low register, but if we have enough low registers available then we only need one push/pop instruction, rather than one per high register. We can also use the argument/return registers when they are not live, and the link register when saving (but not restoring), reducing the number of extra registers we need to push.

  There are still a few extreme edge cases where we need two push/pop instructions, because not enough low registers can be made live in the prologue or epilogue.

  In addition to the regression tests included here, I've also tested this using a script to generate functions which clobber different combinations of registers, have different numbers of argument and return registers (including variadic arguments), allocate different fixed sized objects on the stack, and do or don't use variable sized allocas and the __builtin_return_address intrinsic (all of which affect the available registers in the prologue and epilogue). I ran these functions in a test harness which verifies that all of the callee-saved registers are correctly preserved.
  Differential Revision: https://reviews.llvm.org/D24228
  llvm-svn: 283867
* [ARM] Fix registers clobbered by SjLj EH on soft-float targets (Oliver Stannard, 2016-10-11; 4 files, -2/+15)
  Currently, the Int_eh_sjlj_dispatchsetup intrinsic is marked as clobbering all registers, including floating-point registers that may not be present on the target. This is technically true, as we could get linked against code that does use the FP registers, but that will not actually work, as the soft-float code cannot save and restore the FP registers.

  SjLj exception handling can only work correctly if either all or none of the code is built for a target with FP registers. Therefore, we can assume that, when Int_eh_sjlj_dispatchsetup is compiled for a soft-float target, it is only going to be linked against other soft-float code, and so only clobbers the general-purpose registers. This allows us to check that no non-savable registers are clobbered when generating the prologue/epilogue.
  Differential Revision: https://reviews.llvm.org/D25180
  llvm-svn: 283866
* Revert r283690, "MC: Remove unused entities." (Peter Collingbourne, 2016-10-10; 1 file, -1/+1)
  llvm-svn: 283814
* [ARM] Fix invalid VLDM/VSTM access when targeting Big Endian with NEON (Alexandros Lamprineas, 2016-10-10; 1 file, -2/+12)
  The VLDM/VSTM instructions can only access word-aligned memory locations and produce an alignment fault if that condition is not met. The compiler currently generates VLDM/VSTM for v2f64 loads/stores regardless of the alignment of the memory access. Instead, if a v2f64 load/store is not word-aligned, the compiler should generate VLD1/VST1. For each non-double-word-aligned VLD1/VST1, a VREV instruction should be generated when targeting Big Endian.
  Differential Revision: https://reviews.llvm.org/D25281
  llvm-svn: 283763
* Move the global variables representing each Target behind accessor functions (Mehdi Amini, 2016-10-09; 7 files, -36/+54)
  This avoids the "static initialization order fiasco".
  Differential Revision: https://reviews.llvm.org/D25412
  llvm-svn: 283702
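  A sketch of the accessor pattern this adopts (the names below are illustrative, not the commit's actual code). A function-local static is initialized on first use, so its initialization order relative to other globals is well defined:

    struct Target { /* registration state */ };

    Target &getTheExampleTarget() {
      static Target TheTarget; // constructed on first call; thread-safe in C++11
      return TheTarget;
    }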
* MC: Remove unused entities. (Peter Collingbourne, 2016-10-09; 1 file, -1/+1)
  llvm-svn: 283691
* Target: Remove unused entities. (Peter Collingbourne, 2016-10-09; 1 file, -1/+1)
  llvm-svn: 283690