path: root/llvm/test/CodeGen/Thumb2/aligned-spill.ll
Commit message · Author · Date · Files changed · Lines (-/+)
* Migrate function attribute "no-frame-pointer-elim" to "frame-pointer"="all" as cleanups after D56351
  Fangrui Song, 2019-12-24, 1 file changed, -3/+3
* [ARM] Generate consistent frame records for Thumb2
  Oliver Stannard, 2016-08-23, 1 file changed, -3/+3

  There is not an official documented ABI for frame pointers in Thumb2, but we
  should try to emit something which is useful. We use r7 as the frame pointer
  for Thumb code, which currently means that if a function needs to save a high
  register (r8-r11), it will get pushed to the stack between the frame pointer
  (r7) and link register (r14). This means that while a stack unwinder can
  follow the chain of frame pointers up the stack, it cannot know the offset to
  lr, so does not know which functions correspond to the stack frames.

  To fix this, we need to push the callee-saved registers in two batches, with
  the first push saving the low registers, fp and lr, and the second push
  saving the high registers. This is already implemented, but previously only
  used for iOS. This patch turns it on for all Thumb2 targets when frame
  pointers are required by the ABI and the frame pointer is r7 (Windows uses
  r11, so this isn't a problem there). If frame pointer elimination is enabled
  we still emit a single push/pop even if we need a frame pointer for other
  reasons, to avoid increasing code size.

  We must also ensure that lr is pushed to the stack when using a frame
  pointer, so that we end up with a complete frame record. Situations that
  could cause this were rare, because we already push lr in most situations so
  that we can return using the pop instruction.

  Differential Revision: https://reviews.llvm.org/D23516

  llvm-svn: 279506
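  A minimal prologue sketch of the two-batch push described above (the exact
  callee-saved register set and offsets are illustrative assumptions, not
  taken from this commit or test):

      push  {r4, r5, r6, r7, lr}   @ first batch: low callee-saved regs, fp (r7) and lr
      add   r7, sp, #12            @ r7 now addresses the saved {r7, lr} frame record
      push  {r8, r9, r10, r11}     @ second batch: high callee-saved regs

  With this layout an unwinder can follow r7 and always find the saved lr one
  word above the saved r7, regardless of how many high registers were spilled.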
* [opaque pointer type] Add textual IR support for explicit type parameter to load instruction
  David Blaikie, 2015-02-27, 1 file changed, -1/+1

  Essentially the same as the GEP change in r230786.

  A similar migration script can be used to update test cases, though a few
  more test case improvements/changes were required this time around
  (r229269-r229278):

      import fileinput
      import sys
      import re

      pat = re.compile(r"((?:=|:|^)\s*load (?:atomic )?(?:volatile )?(.*?))(| addrspace\(\d+\) *)\*($| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$)")

      for line in sys.stdin:
          sys.stdout.write(re.sub(pat, r"\1, \2\3*\4", line))

  Reviewers: rafael, dexonsmith, grosser

  Differential Revision: http://reviews.llvm.org/D7649

  llvm-svn: 230794
* Fix large stack alignment codegen for ARM and Thumb2 targets
  Kristof Beyls, 2015-01-08, 1 file changed, -4/+4

  This partially fixes PR13007 (ARM CodeGen fails with large stack alignment):
  it fixes ARM and Thumb2 targets, but not Thumb1, as it seems stack alignment
  for Thumb1 targets hasn't been supported at all.

  Producing an aligned stack pointer is done by zeroing out the lower bits of
  the stack pointer. The BIC instruction was used for this. However, the
  immediate field of the BIC instruction only allows encoding an immediate
  that can zero out at most the 8 lower bits. When a larger alignment is
  requested, a BIC instruction cannot be used; llvm was silently producing
  incorrect code in this case.

  This commit fixes code generation for large stack alignments by using the
  BFC instruction instead, when the BFC instruction is available. When it is
  not, it uses two instructions: a right shift followed by a left shift to
  zero out the lower bits.

  The lowering of ARM::Int_eh_sjlj_dispatchsetup still has code that
  unconditionally uses BIC to realign the stack pointer, so it very likely has
  the same problem. However, I wasn't able to produce a test case for that.
  This commit adds an assert so that the compiler will fail the assert instead
  of silently generating wrong code if this is ever reached.

  llvm-svn: 225446
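  A hedged sketch of the difference (the scratch register is an assumption
  for illustration):

      @ 256-byte alignment: clearing the 8 low bits still fits a BIC immediate
      bic  r4, r4, #255
      @ 4096-byte alignment: #4095 is not an encodable immediate, so use BFC
      @ when it is available...
      bfc  r4, #0, #12          @ clear bits 0-11
      @ ...or a shift pair when it is not
      lsr  r4, r4, #12
      lsl  r4, r4, #12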
* Make ARMAsmPrinter generate the correct alignment specifier syntax in instructions.
  Kristof Beyls, 2013-02-22, 1 file changed, -7/+7

  The Printer will now print instructions with the correct alignment specifier
  syntax, like vld1.8 {d16}, [r0:64]

  llvm-svn: 175884
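  Roughly, the change in printed output looks like the following (the pre-fix
  spelling is an assumption based on the commit description, not quoted from
  the test):

      vld1.8  {d16}, [r0, :64]    @ older, non-standard alignment spelling
      vld1.8  {d16}, [r0:64]      @ syntax produced after this change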
* Enable aligned NEON spilling by default.
  Jakob Stoklund Olesen, 2012-01-06, 1 file changed, -2/+2

  Experiments show this to be a small speedup for modern ARM cores.

  llvm-svn: 147689
* Reapply r146997, "Heed spill slot alignment on ARM."
  Jakob Stoklund Olesen, 2012-01-05, 1 file changed, -1/+1

  Now that canRealignStack() understands frozen reserved registers, it is safe
  to use it for aligned spill instructions. It will only return true if the
  registers reserved at the beginning of register allocation allow for dynamic
  stack realignment.

  <rdar://problem/10625436>

  llvm-svn: 147579
* Revert r146997, "Heed spill slot alignment on ARM."
  Jakob Stoklund Olesen, 2012-01-03, 1 file changed, -1/+1

  This patch caused a miscompilation of oggenc because a frame pointer was
  suddenly needed halfway through register allocation.

  <rdar://problem/10625436>

  llvm-svn: 147487
* Experimental support for aligned NEON spills.
  Jakob Stoklund Olesen, 2011-12-23, 1 file changed, -0/+73

  ARM targets with NEON units have access to aligned vector loads and stores
  that are potentially faster than unaligned operations. Add support for
  spilling the callee-saved NEON registers to an aligned stack area using
  16-byte aligned NEON loads and stores.

  This feature is off by default, controlled by the -align-neon-spills command
  line option.

  llvm-svn: 147211
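  A minimal sketch of what such a spill can look like (register choice and
  grouping are assumptions, not taken from the test):

      @ r4 points at a 16-byte aligned spill area carved out of the stack frame
      vst1.64  {d8, d9, d10, d11}, [r4:128]!    @ aligned store, post-increment
      vst1.64  {d12, d13, d14, d15}, [r4:128]
      @ matching aligned vld1.64 reloads restore d8-d15 in the epilogue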
* Heed spill slot alignment on ARM.
  Jakob Stoklund Olesen, 2011-12-20, 1 file changed, -0/+22

  Use the spill slot alignment as well as the local variable alignment to
  determine when the stack needs to be realigned. This works now that the ARM
  target can always realign the stack by using a base pointer.

  Still respect the ARMBaseRegisterInfo::canRealignStack() function when it
  vetoes a realigned stack, and don't use aligned spill code in that case.

  llvm-svn: 146997
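  A rough sketch of realignment with a base pointer (register choices and the
  alignment amount are assumptions for illustration):

      push  {r4, r6, r7, lr}
      add   r7, sp, #8          @ r7: frame pointer, reaches incoming arguments
      sub   sp, sp, #64         @ space for locals and spill slots
      mov   r4, sp
      bic   r4, r4, #15         @ realign to the 16-byte spill slot alignment
      mov   sp, r4
      mov   r6, sp              @ r6: base pointer, stable reference to the aligned area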