path: root/llvm/lib/Target/ARM/ARMISelDAGToDAG.cpp
Commit log (newest first): each entry gives the commit message, author, date, and files/lines changed.
* [ARM] Handle the inline asm constraint type 'o' (James Molloy, 2015-10-26; 1 file, -0/+1)
  This means "memory with offset" and requires very little plumbing to get working.
  This fixes PR25317.
  llvm-svn: 251280

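  A hedged illustration of the constraint in question (my example, not from the commit): in GNU
  extended asm, 'o' requests an "offsettable" memory operand, i.e. one addressable as a base
  register plus an immediate offset. The asm template and function name below are assumptions
  made for the sketch.

    // Illustrative only: the 'o' constraint asks for an offsettable memory
    // operand, so %1 may be printed as something like [rN, #4].
    int load_second(int *p) {
      int v;
      asm("ldr %0, %1" : "=r"(v) : "o"(p[1]));
      return v;
    }
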
* ARM: Remove implicit ilist iterator conversions, NFC (Duncan P. N. Exon Smith, 2015-10-19; 1 file, -2/+2)
  llvm-svn: 250759

* [ARM] Handle +t2dsp feature as an ArchExtKind in ARMTargetParser.def (Artyom Skrobov, 2015-09-24; 1 file, -4/+4)
  Currently, the availability of DSP instructions (ACLE 6.4.7) is handled in a hand-rolled
  tricky condition block in tools/clang/lib/Basic/Targets.cpp, with a FIXME: attached. This
  patch changes the handling of +t2dsp to be in line with other architecture extensions.

  Following a revert of r248152 and new review comments, this patch also includes renaming
  FeatureDSPThumb2 -> FeatureDSP, hasThumb2DSP() -> hasDSP(), etc. The spelling of "t2dsp"
  is preserved, pending a further investigation of its possible external usage.

  Differential Revision: http://reviews.llvm.org/D12937
  llvm-svn: 248519

* [ARM] Extract shifts out of multiply-by-constant (John Brawn, 2015-09-14; 1 file, -19/+111)
  Turning (op x (mul y k)) into (op x (lsl (mul y k>>n) n)) is beneficial when we can do the
  lsl as a shifted operand and the resulting multiply constant is simpler to generate.

  Do this by doing the transformation when trying to select a shifted operand, as that ensures
  that it actually turns out better (the alternative would be to do it in PreprocessISelDAG,
  but we don't know for sure there if extracting the shift would allow a shifted operand to be
  used).

  Differential Revision: http://reviews.llvm.org/D12196
  llvm-svn: 247569

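  A small sanity check of the identity behind this transform (my example, not from the patch):
  when the multiply constant k has n low zero bits, the shift pulled out of the multiply can be
  folded into the outer operation as a shifted operand (e.g. add r0, r0, r1, lsl #3).

    #include <cassert>
    #include <cstdint>

    int main() {
      // 40 = 5 << 3, so x + y*40 == x + ((y*5) << 3).
      uint32_t x = 1000, y = 37, k = 40, n = 3;
      assert(x + y * k == x + ((y * (k >> n)) << n));
      return 0;
    }
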
* [ADT] Switch a bunch of places in LLVM that were doing single-character splits to actually
  use the single character split routine (Chandler Carruth, 2015-09-10; 1 file, -2/+2)
  The single-character routine does less work, and in a debug build is *substantially* faster.
  llvm-svn: 247245

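  A hedged sketch of what such a call-site change looks like (the function and variable names
  are mine, not from the patch); StringRef::split has both a StringRef-separator and a
  char-separator overload, and the latter is the cheaper one.

    #include "llvm/ADT/StringRef.h"
    #include <utility>
    using llvm::StringRef;

    // Split on a single character instead of a one-character string: same
    // result, but the char overload does less work.
    std::pair<StringRef, StringRef> splitAtDot(StringRef S) {
      return S.split('.');   // previously spelled S.split(".")
    }
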
* [ARM] Get rid of SelectT2ShifterOperandReg, NFC (John Brawn, 2015-09-07; 1 file, -25/+1)
  SelectT2ShifterOperandReg has identical behaviour to SelectImmShifterOperand, so get rid of
  it and use SelectImmShifterOperand instead.
  Differential Revision: http://reviews.llvm.org/D12195
  llvm-svn: 246962

* [ARM] Reorganise and simplify thumb-1 load/store selection (John Brawn, 2015-08-13; 1 file, -93/+6)
  Other than PC-relative loads/stores, the patterns that match the various load/store
  addressing modes have the same complexity, so the order that they are matched is the order
  that they appear in the .td file. Rearrange the instruction definitions in ARMInstrThumb.td,
  and make use of AddedComplexity for PC-relative loads, so that the instruction matching
  order is the order that results in the simplest selection logic. This also makes
  register-offset load/store be selected when it should, as previously it was only selected
  for too-large immediate offsets.
  Differential Revision: http://reviews.llvm.org/D11800
  llvm-svn: 244882

* ARMISelDAGToDAG.cpp had this self-contradictory code: (Artyom Skrobov, 2015-08-05; 1 file, -5/+5)

      return StringSwitch<int>(Flags)
          .Case("g", 0x1)
          .Case("nzcvq", 0x2)
          .Case("nzcvqg", 0x3)
          .Default(-1);
    ...
      // The _g and _nzcvqg versions are only valid if the DSP extension is
      // available.
      if (!Subtarget->hasThumb2DSP() && (Mask & 0x2))
        return -1;

  ARMARM confirms that the comment is right, and the code was wrong.
  llvm-svn: 244029

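  The commit message does not show the fix itself; presumably the condition is changed to test
  the bit that the "g" spellings actually set (0x1, per the StringSwitch above), so the code
  matches the comment. A hedged sketch:

    // Assumed shape of the fix (not verbatim from r244029): "g" and "nzcvqg"
    // set bit 0x1, so that is the bit the DSP availability check must test.
    if (!Subtarget->hasThumb2DSP() && (Mask & 0x1))
      return -1;
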
* Make TargetLowering::getPointerTy() take DataLayout as an argument (Mehdi Amini, 2015-07-09; 1 file, -16/+31)
  Summary: This change is part of a series of commits dedicated to having a single DataLayout
  during compilation, by always using the one owned by the module.

  Reviewers: echristo
  Subscribers: jholewinski, ted, yaron.keren, rafael, llvm-commits
  Differential Revision: http://reviews.llvm.org/D11028
  From: Mehdi Amini <mehdi.amini@apple.com>
  llvm-svn: 241775

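  A hedged before/after sketch of the call-site change this implies; the surrounding names
  (TLI, CurDAG, PtrVT) are assumptions for the sketch, not taken from the patch.

    // Before (hypothetical call site): EVT PtrVT = TLI->getPointerTy();
    // After: the DataLayout owned by the module is passed in explicitly.
    EVT PtrVT = TLI->getPointerTy(CurDAG->getDataLayout());
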
* Revert r240137 (Fixed/added namespace ending comments using clang-tidy. NFC) (Alexander Kornienko, 2015-06-23; 1 file, -1/+1)
  Apparently, the style needs to be agreed upon first.
  llvm-svn: 240390

* Fixed/added namespace ending comments using clang-tidy. NFC (Alexander Kornienko, 2015-06-19; 1 file, -1/+1)
  The patch is generated using this command:

    tools/clang/tools/extra/clang-tidy/tool/run-clang-tidy.py -fix \
      -checks=-*,llvm-namespace-comment -header-filter='llvm/.*|clang/.*' \
      llvm/lib/

  Thanks to Eugene Kosov for the original patch!
  llvm-svn: 240137

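  For context (my example, not part of the patch), the llvm-namespace-comment check enforces a
  closing comment that names the namespace being closed:

    namespace llvm {
    int SomeGlobal = 0;
    } // namespace llvm
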
* [arm] Fix r238921. We must handle Constraint_i too. (Daniel Sanders, 2015-06-03; 1 file, -0/+4)
  llvm-svn: 238925

* [arm] Distinguish the /U[qytnms]/, 'Uv', 'Q', and 'm' inline assembly memory constraints. (Daniel Sanders, 2015-06-03; 1 file, -7/+19)
  Summary: But still handle them the same way since I don't know how they differ on this
  target. Of these, /U[qytnms]/ do not have backend tests but are accepted by clang.
  No functional change intended.

  Reviewers: t.p.northover
  Reviewed By: t.p.northover
  Subscribers: t.p.northover, aemerson, llvm-commits
  Differential Revision: http://reviews.llvm.org/D8203
  llvm-svn: 238921

* Re-commit of r238201 with fix for building with shared libraries. (Luke Cheeseman, 2015-06-01; 1 file, -0/+428)
  llvm-svn: 238739

* Revert "Re-commit changes in r237579 with fix for bug breaking windows builds." (Diego Novillo, 2015-05-26; 1 file, -428/+0)
  This reverts commit r238201 to fix linking problems in x86 Linux:
  http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20150525/278413.html
  llvm-svn: 238223

* Re-commit changes in r237579 with fix for bug breaking windows builds. (Luke Cheeseman, 2015-05-26; 1 file, -0/+428)
  llvm-svn: 238201

* Revert r237579, as it broke windows buildbots (Oliver Stannard, 2015-05-18; 1 file, -423/+0)
  llvm-svn: 237583

* [LLVM - ARM/AArch64] Add ACLE special register intrinsics (Oliver Stannard, 2015-05-18; 1 file, -0/+423)
  This patch implements LLVM support for the ACLE special register intrinsics in section 10.1,
  __arm_{w,r}sr{,p,64}.

  This patch is intended to lower the read/write_register intrinsics, used to implement the
  special register intrinsics in the clang patch for special register intrinsics (see
  http://reviews.llvm.org/D9697), to ARM-specific instructions MRC, MCR, MSR etc., to allow
  reading and writing of coprocessor registers in AArch32 and AArch64. This is done by
  inspecting the register string passed to the intrinsic and then lowering to the appropriate
  instruction.

  Patch by Luke Cheeseman.
  Differential Revision: http://reviews.llvm.org/D9699
  llvm-svn: 237579

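  A hedged user-level example of the intrinsics being implemented (requires <arm_acle.h> and an
  AArch32 target); the specific register strings are illustrative, not taken from the patch.

    #include <arm_acle.h>
    #include <stdint.h>

    // Read/write a coprocessor register via its encoding string; the compiler
    // inspects the string and emits the matching MRC/MCR (or MRS/MSR).
    uint32_t read_ro_thread_pointer(void)  { return __arm_rsr("cp15:0:c13:c0:3"); }
    void write_rw_thread_pointer(uint32_t v) { __arm_wsr("cp15:0:c13:c0:2", v); }
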
* Reapply r235977 "[DebugInfo] Add debug locations to constant SD nodes" (Sergey Dmitrouk, 2015-04-28; 1 file, -143/+161)
  [DebugInfo] Add debug locations to constant SD nodes

  This adds debug location to constant nodes of Selection DAG and updates all places that
  create constants to pass debug locations (see PR13269). Can't guarantee that all locations
  are correct, but in a lot of cases the choice is obvious, so most of them should be. At
  least all tests pass. Tests for these changes do not cover everything; instead they just
  check it for SDNodes, ARM and AArch64, where it's easy to get incorrect locations on
  constants.

  This is not a complete fix, as FastISel contains a workaround for wrong debug locations,
  which drops locations from instructions on processing constants, but there isn't currently a
  way to use debug locations from constants there as llvm::Constant doesn't cache it (yet).
  Although this is a bit different issue, not directly related to these changes.

  Differential Revision: http://reviews.llvm.org/D9084
  llvm-svn: 235989

* Revert "[DebugInfo] Add debug locations to constant SD nodes" (Daniel Jasper, 2015-04-28; 1 file, -161/+143)
  This breaks a test: http://bb.pgr.jp/builders/cmake-llvm-x86_64-linux/builds/23870
  llvm-svn: 235987

* [DebugInfo] Add debug locations to constant SD nodes (Sergey Dmitrouk, 2015-04-28; 1 file, -143/+161)
  This adds debug location to constant nodes of Selection DAG and updates all places that
  create constants to pass debug locations (see PR13269). Can't guarantee that all locations
  are correct, but in a lot of cases the choice is obvious, so most of them should be. At
  least all tests pass. Tests for these changes do not cover everything; instead they just
  check it for SDNodes, ARM and AArch64, where it's easy to get incorrect locations on
  constants. This is not a complete fix, as FastISel contains a workaround for wrong debug
  locations, which drops locations from instructions on processing constants, but there isn't
  currently a way to use debug locations from constants there as llvm::Constant doesn't cache
  it (yet). Although this is a bit different issue, not directly related to these changes.

  Differential Revision: http://reviews.llvm.org/D9084
  llvm-svn: 235977

* Recommit r232027 with PR22883 fixed: Add infrastructure for support of multiple memory constraints. (Daniel Sanders, 2015-03-13; 1 file, -3/+4)
  The operand flag word for ISD::INLINEASM nodes now contains a 15-bit memory constraint ID
  when the operand kind is Kind_Mem. This constraint ID is a numeric equivalent to the
  constraint code string and is converted with a target specific hook in TargetLowering.

  This patch maps all memory constraints to InlineAsm::Constraint_m so there is no functional
  change at this point. It just proves that using these previously unused bits in the encoding
  of the flag word doesn't break anything.

  The next patch will make each target preserve the current mapping of everything to
  Constraint_m for itself while changing the target independent implementation of the hook to
  return Constraint_Unknown appropriately. Each target will then be adapted in separate
  patches to use appropriate Constraint_* values.

  PR22883 was caused by the matching operands copying the whole of the operand flags for the
  matched operand. This included the constraint ID, which needed to be replaced with the
  operand number. This has been fixed with a conversion function. Following on from this,
  matching operands also used the operand number as the constraint ID. This has been fixed by
  looking up the matched operand and taking it from there.
  llvm-svn: 232165

* Revert "r232027 - Add infrastructure for support of multiple memory constraints" (Hal Finkel, 2015-03-12; 1 file, -4/+3)
  This (r232027) has caused PR22883; so it seems those bits might be used by something else
  after all. Reverting until we can figure out what else to do.

  Original commit message:
  The operand flag word for ISD::INLINEASM nodes now contains a 15-bit memory constraint ID
  when the operand kind is Kind_Mem. This constraint ID is a numeric equivalent to the
  constraint code string and is converted with a target specific hook in TargetLowering.
  This patch maps all memory constraints to InlineAsm::Constraint_m so there is no functional
  change at this point. It just proves that using these previously unused bits in the encoding
  of the flag word doesn't break anything. The next patch will make each target preserve the
  current mapping of everything to Constraint_m for itself while changing the target
  independent implementation of the hook to return Constraint_Unknown appropriately. Each
  target will then be adapted in separate patches to use appropriate Constraint_* values.
  llvm-svn: 232093

* Add infrastructure for support of multiple memory constraints. (Daniel Sanders, 2015-03-12; 1 file, -3/+4)
  Summary: The operand flag word for ISD::INLINEASM nodes now contains a 15-bit memory
  constraint ID when the operand kind is Kind_Mem. This constraint ID is a numeric equivalent
  to the constraint code string and is converted with a target specific hook in TargetLowering.

  This patch maps all memory constraints to InlineAsm::Constraint_m so there is no functional
  change at this point. It just proves that using these previously unused bits in the encoding
  of the flag word doesn't break anything.

  The next patch will make each target preserve the current mapping of everything to
  Constraint_m for itself while changing the target independent implementation of the hook to
  return Constraint_Unknown appropriately. Each target will then be adapted in separate
  patches to use appropriate Constraint_* values.

  Reviewers: hfinkel
  Reviewed By: hfinkel
  Subscribers: hfinkel, jholewinski, llvm-commits
  Differential Revision: http://reviews.llvm.org/D8171
  llvm-svn: 232027

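  A standalone sketch of the encoding idea described above (illustrative only; the bit position
  and helper names are assumptions for the sketch, not the actual InlineAsm flag-word layout):
  a 15-bit constraint ID is packed into otherwise-unused bits of the 32-bit operand flag word
  and recovered later.

    #include <cassert>
    #include <cstdint>

    // Pack a 15-bit memory-constraint ID into spare bits of a flag word.
    constexpr uint32_t encodeMemConstraintID(uint32_t Flag, uint32_t ID) {
      return Flag | ((ID & 0x7fffu) << 16);
    }
    constexpr uint32_t decodeMemConstraintID(uint32_t Flag) {
      return (Flag >> 16) & 0x7fffu;
    }

    int main() {
      uint32_t F = encodeMemConstraintID(/*Flag=*/0x3, /*ID=*/42);
      assert(decodeMemConstraintID(F) == 42);
      return 0;
    }
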
* Make constant arrays that are passed to functions as const. (Benjamin Kramer, 2015-03-07; 1 file, -1/+1)
  In theory this allows the compiler to skip materializing the array on the stack. In practice
  clang often fails to do that, but that's a different story. NFC.
  llvm-svn: 231571

* Improve handling of stack accesses in Thumb-1 (Renato Golin, 2015-02-25; 1 file, -0/+15)
  Thumb-1 only allows SP-based LDR and STR to be word-sized, and SP-based LDR, STR, and ADD
  only allow offsets that are a multiple of 4. Make some changes to better make use of these
  instructions:
  * Use word loads for anyext byte and halfword loads from the stack.
  * Enforce 4-byte alignment on objects accessed in this way, to ensure that the offset is
    valid.
  * Do the same for objects whose frame index is used, in order to avoid having to use more
    than one ADD to generate the frame index.
  * Correct how many bits of offset we think AddrModeT1_s has.
  Patch by John Brawn.
  llvm-svn: 230496

* Get the cached subtarget off the MachineFunction rather than inquiring for a new one from
  the TargetMachine. (Eric Christopher, 2015-02-20; 1 file, -1/+1)
  llvm-svn: 229999

* [ARM] Re-re-apply VLD1/VST1 base-update combine. (Ahmed Bougacha, 2015-02-19; 1 file, -5/+11)
  This re-applies r223862, r224198, r224203, and r224754, which were reverted in r228129
  because they exposed Clang misalignment problems when self-hosting.

  The combine caused the crashes because we turned ISD::LOAD/STORE nodes to
  ARMISD::VLD1/VST1_UPD nodes. When selecting addressing modes, we were very lax for the
  former, and only emitted the alignment operand (as in "[r1:128]") when it was larger than
  the standard alignment of the memory type. However, for ARMISD nodes, we just used the MMO
  alignment, no matter what. In our case, we turned ISD nodes to ARMISD nodes, and this caused
  the alignment operands to start being emitted. And that's how we exposed alignment problems
  that were ignored before (but I believe would have been caught with SCTLR.A==1?).

  To fix this, we can just mirror the hack done for ISD nodes: only take into account the MMO
  alignment when the access is overaligned.

  Original commit message:
  We used to only combine intrinsics, and turn them into VLD1_UPD/VST1_UPD when the base
  pointer is incremented after the load/store. We can do the same thing for generic
  load/stores. Note that we can only combine the first load/store+adds pair in a sequence (as
  might be generated for a v16f32 load for instance), because other combines turn the base
  pointer addition chain (each computing the address of the next load, from the address of the
  last load) into independent additions (common base pointer + this load's offset).

  rdar://19717869, rdar://14062261.
  llvm-svn: 229932

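  A hedged illustration (mine, not from the commit) of the kind of source pattern the combine
  targets: a NEON load or store whose base pointer is bumped immediately afterwards, which can
  be selected as a single VLD1/VST1 with base-register writeback instead of a separate ADD.

    #include <arm_neon.h>

    // Each load/store plus the following pointer increment can become a
    // post-incremented VLD1/VST1 (base-update form).
    void copy_8_floats(float *dst, const float *src) {
      float32x4_t v0 = vld1q_f32(src); src += 4;
      vst1q_f32(dst, v0);              dst += 4;
      float32x4_t v1 = vld1q_f32(src);
      vst1q_f32(dst, v1);
    }
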
* MathExtras: Bring Count(Trailing|Leading)Ones and CountPopulation in line with
  countTrailingZeros (Benjamin Kramer, 2015-02-12; 1 file, -1/+1)
  Update all callers.
  llvm-svn: 228930

* [ARM] Also support v2f64 vld1/vst1. (Ahmed Bougacha, 2014-12-09; 1 file, -0/+2)
  It was missing from the VLD1/VST1 handling logic, even though the corresponding instructions
  exist (same form as v2i64). In preparation for a future patch.
  llvm-svn: 223832

* [ARM] Remove more dead code. (Tilmann Scheller, 2014-11-05; 1 file, -4/+0)
  Dead code identified by the Clang static analyzer.
  llvm-svn: 221372

* ARM: rework Thumb1 frame index rewriting (Tim Northover, 2014-10-20; 1 file, -3/+2)
  The previous code had a few problems, motivating the choices here.

  1. It could create instructions clobbering CPSR, but the incoming MachineInstr didn't
     reflect this. A potential source of corruption. This is why the patch has a new
     PseudoInst for before lowering.
  2. Similarly, there was some code to handle the incoming instruction not being ARMCC::AL,
     but this would have caused massive problems if it was actually invoked when a complex
     offset needing more than one instruction was requested.
  3. It wasn't designed to handle unaligned pointers (or offsets). These should probably be
     minimised anyway, but the code needs to deal with them properly regardless.
  4. It had some rather dubious ad-hoc code to avoid calling emitThumbRegPlusImmediate, a
     function which should be designed to do precisely this job.

  We seem to cover the common cases correctly now, and hopefully can enhance
  emitThumbRegPlusImmediate to handle any extra optimisations we need to add in future.
  llvm-svn: 220236

* Cache TargetLowering on SelectionDAGISel and update previous calls to getTargetLowering()
  with the cached variable. (Eric Christopher, 2014-10-08; 1 file, -33/+18)
  llvm-svn: 219284

* ARM: Negative offset support problem (Renato Golin, 2014-09-09; 1 file, -2/+2)
  This patch permits using a negative offset for a non-frame access.
  Patch by Igor Oblakov.
  llvm-svn: 217431

* Have MachineFunction cache a pointer to the subtarget to make lookups shorter/easier, and
  have the DAG use that to do the same lookup. (Eric Christopher, 2014-08-05; 1 file, -1/+1)
  This can be used in the future for TargetMachine based caching lookups from the
  MachineFunction easily. Update the MIPS subtarget switching machinery to update this pointer
  at the same time it runs.
  llvm-svn: 214838

* Remove the TargetMachine forwards for TargetSubtargetInfo based information and update all
  callers. No functional change. (Eric Christopher, 2014-08-04; 1 file, -1/+1)
  llvm-svn: 214781

* ARM: spot SBFX-compatible code expressed with sign_extend_inreg (Tim Northover, 2014-07-23; 1 file, -0/+20)
  We were assuming all SBFX-like operations would have the shl/asr form, but often when the
  field being extracted is an i8 or i16, we end up with a SIGN_EXTEND_INREG acting on a shift
  instead. Simple enough to check for though.
  llvm-svn: 213754

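  A small worked example (mine, not from the commit) of source that produces the second form: a
  shift followed by a narrowing sign extension, which is exactly a signed bitfield extract.

    #include <cstdint>

    // Extract bits [23:16] of x as a signed 8-bit field. The narrowing cast
    // after the shift is the SIGN_EXTEND_INREG-on-a-shift form described above;
    // on ARM it can be selected as a single SBFX.
    int32_t extract_signed_byte2(uint32_t x) {
      return static_cast<int8_t>(x >> 16);
    }
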
* Move function dependent resetting of a subtarget variable out of the subtarget. (Eric Christopher, 2014-07-04; 1 file, -1/+1)
  This involved having the movt predicate take the current function: whether we use movw/movt
  in instruction selection depends on the size attributes, so the predicate takes the function
  so we can check them. This required adding the current MachineFunction to FastISel and
  propagating it through.
  llvm-svn: 212309

* Remove caching of the target machine and initialization of the subtarget from
  ARMISelDAGToDAG. (Eric Christopher, 2014-07-03; 1 file, -10/+5)
  The former is unnecessary and the latter is initialized on each runOnMachineFunction.
  llvm-svn: 212297

* Override runOnMachineFunction for ARMISelDAGToDAG so that we can reset the subtarget on
  each function. (Eric Christopher, 2014-05-22; 1 file, -0/+7)
  llvm-svn: 209386

* Convert more SelectionDAG functions to use ArrayRef. (Craig Topper, 2014-04-28; 1 file, -1/+1)
  llvm-svn: 207397

* Convert SelectionDAG::SelectNodeTo to use ArrayRef. (Craig Topper, 2014-04-27; 1 file, -11/+11)
  llvm-svn: 207377

* Convert SelectionDAG::getNode methods to use ArrayRef<SDValue>. (Craig Topper, 2014-04-26; 1 file, -2/+1)
  llvm-svn: 207327

* [C++] Use 'nullptr'. Target edition. (Craig Topper, 2014-04-25; 1 file, -39/+40)
  llvm-svn: 207197

* [Modules] Fix potential ODR violations by sinking the DEBUG_TYPE definition below all of
  the header #include lines, lib/Target/... edition. (Chandler Carruth, 2014-04-22; 1 file, -1/+2)
  llvm-svn: 206842

* Tidy up. Trailing whitespace. (Jim Grosbach, 2014-04-03; 1 file, -5/+5)
  llvm-svn: 205583

* ARM: expand atomic ldrex/strex loops in IR (Tim Northover, 2014-04-03; 1 file, -113/+0)
  The previous situation, where ATOMIC_LOAD_WHATEVER nodes were expanded at MachineInstr
  emission time, had grown to be extremely large and involved, to account for the subtly
  different code needed for the various flavours (8/16/32/64 bit, cmpxchg/add/minmax).

  Moving this transformation into the IR clears up the code substantially, and makes future
  optimisations much easier:
  1. An atomicrmw followed by using the *new* value can be more efficient. As an IR pass,
     simple CSE could handle this efficiently.
  2. Making use of cmpxchg success/failure orderings only has to be done in one (simpler)
     place.
  3. The common "cmpxchg; did we store?" idiom can be exposed to optimisation.

  I intend to gradually improve this situation within the ARM backend and make sure there are
  no hidden issues before moving the code out into CodeGen to be shared with (at least
  ARM64/AArch64, though I think PPC & Mips could benefit too).
  llvm-svn: 205525

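  A hedged source-level illustration of point 1 above (my example): the *new* value of an
  atomic RMW is used right away; once the ldrex/strex loop is visible in IR, the extra "+ 1"
  can be combined with the value already computed inside the loop rather than redone outside.

    #include <atomic>

    // The caller wants the post-increment value of the counter.
    int bump_and_read(std::atomic<int> &counter) {
      return counter.fetch_add(1, std::memory_order_seq_cst) + 1;
    }
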
* ARM: teach LLVM that Cortex-A7 is very similar to A8. (Tim Northover, 2014-04-01; 1 file, -2/+2)
  llvm-svn: 205314

* ARM: add intrinsics for the v8 ldaex/stlex (Tim Northover, 2014-03-26; 1 file, -5/+10)
  We've already got versions without the barriers, so this just adds IR-level support for
  generating the new v8 ones.
  rdar://problem/16227836
  llvm-svn: 204813

* Prune includes in ARM target. (Craig Topper, 2014-03-22; 1 file, -1/+0)
  llvm-svn: 204548
