path: root/llvm/lib/CodeGen/SelectionDAG/FastISel.cpp
...
* [Fast-ISel] Don't mark the first use of a remat constant as killed. (Pete Cooper, 2015-05-09; 1 file changed, -4/+7)
  When emitting something like 'add x, 1000', if we remat the 1000 then we should be able to mark the vreg containing 1000 as killed. Given that we go bottom-up in fast-isel, a later use of 1000 will be higher up in the BB and won't kill it, or be impacted by the lower kill.
  However, rematerialised constant expressions aren't generated bottom-up. The local value save area grows downwards. This means that if you remat 2 constant expressions which both use 1000, then the first will kill it, and the second, which is *lower* in the BB, will read a killed register. This is the case in the attached test, where the 2 GEPs both need to generate 'add x, 6680' for the constant offset.
  Note that this commit only makes kill flag generation more conservative. There's nothing else obviously wrong with the local value save area growing downwards, and in fact it needs to in order to handle arbitrarily complex constant expressions. However, it would be nice if there were a solution which would let us generate more accurate kill flags, or drop kill flags completely.
  llvm-svn: 236922
* Fix incorrect kill flags in fastisel. (Pete Cooper, 2015-05-06; 1 file changed, -2/+6)
  If called twice in the same BB on the same constant, FastISel::fastEmit_ri_ was marking the materialized vreg as killed on each use, instead of only the last use. Change this to only mark the last use as killed by making earlier uses check if the vreg is already used elsewhere.
  llvm-svn: 236650
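  A minimal sketch of the kind of check described above (the helper name and exact condition are assumptions, not the literal diff):

      #include "llvm/CodeGen/MachineRegisterInfo.h"

      // Sketch only: mark the materialized constant register as killed only
      // when nothing has used it yet; any previously emitted use means this
      // use must not carry the kill flag.
      static bool shouldKillImmReg(const llvm::MachineRegisterInfo &MRI,
                                   unsigned ImmReg) {
        return MRI.use_empty(ImmReg); // no prior uses -> safe to mark killed
      }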
* [patchpoint] Add support for symbolic patchpoint targets to SelectionDAG and the X86 backend. (Lang Hames, 2015-04-22; 1 file changed, -13/+15)
  The code generated for symbolic targets is identical to the code generated for constant targets, except that a relocation is emitted to fix up the actual target address at link-time. This allows IR and object files containing patchpoints to be cached across JIT-invocations where the target address may change.
  llvm-svn: 235483
* DebugInfo: Assert dbg.declare/value insts are valid (Duncan P. N. Exon Smith, 2015-04-21; 1 file changed, -2/+2)
  Remove early returns for when `getVariable()` is null, and just assert that it never happens. The Verifier already confirms that there's a valid variable on these intrinsics, so we should assume the debug info isn't broken. I also updated a check for a `!dbg` attachment, which the Verifier similarly guarantees.
  llvm-svn: 235400
* CodeGen: Stop using DIDescriptor::is*() and auto-casting (Duncan P. N. Exon Smith, 2015-04-06; 1 file changed, -3/+1)
  Same as r234255, but for lib/CodeGen and lib/Target.
  llvm-svn: 234258
* Use sext in fast isel. (Rafael Espindola, 2015-04-06; 1 file changed, -1/+1)
  Fast isel used to zero-extend immediates to 64 bits. This normally goes unnoticed because the value is truncated to 32 bits for output. Two cases where it is noticed:
    * We fail to use smaller encodings.
    * If the original constant was smaller than i32.
  In the tests using i1 constants, codegen would change to use -1, which is fine (and matches what regular isel does) since only the lowest bit is then used. Instead, this patch changes the IR to use i8 constants, which looks more like what clang produces.
  llvm-svn: 234249
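  A standalone illustration (not part of the patch; values are only for demonstration) of why the extension kind matters for a small negative immediate:

      #include <cstdint>
      #include <cstdio>

      int main() {
        int8_t Imm = -1;                           // an i8 constant
        uint64_t ZExt = static_cast<uint8_t>(Imm); // zero-extended: 255
        int64_t SExt = Imm;                        // sign-extended: -1
        std::printf("zext = %llu, sext = %lld\n",
                    (unsigned long long)ZExt, (long long)SExt);
        return 0;
      }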
* CodeGen: Assert that inlined-at locations agree (Duncan P. N. Exon Smith, 2015-04-03; 1 file changed, -0/+4)
  As a follow-up to r234021, assert that a debug info intrinsic variable's `MDLocalVariable::getInlinedAt()` always matches the `MDLocation::getInlinedAt()` of its `!dbg` attachment. The goal here is to get rid of `MDLocalVariable::getInlinedAt()` entirely (PR22778), but I'll let these assertions bake for a while first.
  If you have an out-of-tree backend that just broke, you're probably attaching the wrong `DebugLoc` to a `DBG_VALUE` instruction. The one you want is the location that was attached to the corresponding `@llvm.dbg.declare` or `@llvm.dbg.value` call that you started with.
  llvm-svn: 234038
* [WinEH] Run cleanup handlers when an exception is thrown (David Majnemer, 2015-03-30; 1 file changed, -0/+7)
  Generate tables in the .xdata section representing what actions to take when an exception is thrown. This currently fills in state for cleanups; catch handlers are still unfinished.
  llvm-svn: 233636
* Re-sort includes with sort-includes.py and insert raw_ostream.h where it's used. (Benjamin Kramer, 2015-03-23; 1 file changed, -0/+1)
  llvm-svn: 232998
* Handle big index in getelementptr instruction (Reid Kleckner, 2015-03-11; 1 file changed, -3/+3)
  CodeGen incorrectly ignores (hitting an assert in APInt) a constant index bigger than 2^64 in a getelementptr instruction. This is a test and fix for that.
  Patch by Paweł Bylica!
  Reviewed By: rnk
  Subscribers: majnemer, rnk, mcrosier, resistor, llvm-commits
  Differential Revision: http://reviews.llvm.org/D8219
  llvm-svn: 231984
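  A sketch of the general idea (assumed shape, not the literal patch): an index wider than 64 bits has to be truncated or sign-extended before calling APInt::getSExtValue(), which asserts when the value does not fit in 64 bits.

      #include "llvm/ADT/APInt.h"
      #include <cstdint>

      // Clamp an arbitrarily wide GEP index to 64 bits before doing the
      // usual 64-bit offset arithmetic.
      int64_t indexToByteOffset(const llvm::APInt &Idx, uint64_t ElemSize) {
        llvm::APInt Clamped = Idx.sextOrTrunc(64); // safe for any bit width
        return Clamped.getSExtValue() * static_cast<int64_t>(ElemSize);
      }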
* Have getCallPreservedMask and getThisCallPreservedMask take a MachineFunction argument (Eric Christopher, 2015-03-11; 1 file changed, -1/+2)
  ... so that we can grab subtarget specific features off of it.
  llvm-svn: 231979
* Move DataLayout back to the TargetMachine from TargetSubtargetInfo derived classes. (Eric Christopher, 2015-01-26; 1 file changed, -1/+1)
  Since global data alignment, layout, and mangling is often based on the DataLayout, move it to the TargetMachine. This ensures that global data is going to be laid out and mangled consistently if the subtarget changes on a per-function basis. Prior to this all targets(*) have had subtarget-dependent code moved out and onto the TargetMachine.
  (*) One target hasn't been migrated as part of this change: R600. The R600 port has, as a subtarget feature, the size of pointers, and this affects global data layout. I've currently hacked in a FIXME to enable progress, but the port needs to be updated to either pass the 64-bitness to the TargetMachine, or fix the DataLayout to avoid subtarget-dependent features.
  llvm-svn: 227113
* [PM] Move TargetLibraryInfo into the Analysis library. (Chandler Carruth, 2015-01-15; 1 file changed, -1/+1)
  While the term "Target" is in the name, it doesn't really have to do with the LLVM Target library -- this isn't an abstraction which LLVM targets generally need to implement or extend. It has much more to do with modeling the various runtime libraries on different OSes and with different runtime environments. The "target" in this sense is the more general sense of a target of cross compilation.
  This is in preparation for porting this analysis to the new pass manager.
  No functionality changed, and updates inbound for Clang and Polly.
  llvm-svn: 226078
* [cleanup] Re-sort all the #include lines in LLVM using utils/sort_includes.py. (Chandler Carruth, 2015-01-14; 1 file changed, -1/+1)
  I clearly haven't done this in a while, so more changed than usual. This even uncovered a missing include from the InstrProf library that I've added. No functionality changed here, just mechanical cleanup of the include order.
  llvm-svn: 225974
* [StackMaps] Mark in CallLoweringInfo when lowering a patchpoint (Hal Finkel, 2015-01-13; 1 file changed, -0/+1)
  While, generally speaking, the process of lowering arguments for a patchpoint is the same as lowering a regular indirect call, on some targets it may not be exactly the same. Targets may not, for example, want to add additional register dependencies that apply only to making cross-DSO calls through linker stubs, may not want to load additional registers out of function descriptors, and may not want to add additional side-effect-causing instructions that cannot be removed later with the call itself being generated. The PowerPC target will use this in a future commit (for all of the reasons stated above).
  llvm-svn: 225806
* Update SetVector to rely on the underlying set's insert to return a pair<iterator, bool> (David Blaikie, 2014-11-19; 1 file changed, -1/+1)
  This is to be consistent with StringSet and ultimately with the standard library's associative container insert function. This led to updating SmallSet::insert to return pair<iterator, bool>, and then to updating SmallPtrSet::insert to return pair<iterator, bool>, and then to updating all the existing users of those functions...
  llvm-svn: 222334
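  A small sketch of how callers look after this API change (illustrative, not taken from the patch):

      #include "llvm/ADT/SmallPtrSet.h"

      // insert() now returns std::pair<iterator, bool>; the bool reports
      // whether the element was newly inserted.
      bool markSeen(llvm::SmallPtrSet<const int *, 8> &Seen, const int *P) {
        auto Res = Seen.insert(P);
        return Res.second; // true only the first time P is seen
      }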
* Revert "IR: MDNode => Value"Duncan P. N. Exon Smith2014-11-111-3/+3
| | | | | | | | | | | | | | | | | Instead, we're going to separate metadata from the Value hierarchy. See PR21532. This reverts commit r221375. This reverts commit r221373. This reverts commit r221359. This reverts commit r221167. This reverts commit r221027. This reverts commit r221024. This reverts commit r221023. This reverts commit r220995. This reverts commit r220994. llvm-svn: 221711
* IR: MDNode => Value: Instruction::getMetadata() (Duncan P. N. Exon Smith, 2014-11-01; 1 file changed, -3/+3)
  Change `Instruction::getMetadata()` to return `Value` as part of PR21433. Update most callers to use `Instruction::getMDNode()`, which wraps the result in a `cast_or_null<MDNode>`.
  llvm-svn: 221024
* Introduce enum values for previously defined metadata types. (NFC) (Philip Reames, 2014-10-21; 1 file changed, -2/+2)
  Our metadata scheme lazily assigns IDs to string metadata, but we have a mechanism to preassign them as well. Using a preassigned ID is helpful since we get compile-time type checking, and avoid some (minimal) string construction and comparison. This change adds enum values for three existing metadata types:
    + MD_nontemporal = 9,               // "nontemporal"
    + MD_mem_parallel_loop_access = 10, // "llvm.mem.parallel_loop_access"
    + MD_nonnull = 11                   // "nonnull"
  I went through and updated various uses as well. I made no attempt to get all uses; I focused on the ones which were easily grepable and easy to translate. For example, there were several items in LoopInfo.cpp I chose not to update.
  llvm-svn: 220248
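  For context, this is how a pre-assigned kind ID is typically used instead of a string lookup (a hedged sketch of caller-side usage, not code from the patch):

      #include "llvm/IR/Instruction.h"
      #include "llvm/IR/LLVMContext.h"
      #include "llvm/IR/Metadata.h"

      // Kind-ID lookup: checked at compile time and avoids building a string.
      bool returnsNonNull(const llvm::Instruction &I) {
        return I.getMetadata(llvm::LLVMContext::MD_nonnull) != nullptr;
        // Equivalent, slower form: I.getMetadata("nonnull")
      }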
* Remove getSubtargetImpl calls from FastISel; we can get it from the MachineFunction, where it's already cached. (Eric Christopher, 2014-10-08; 1 file changed, -6/+5)
  llvm-svn: 219366
* Remove dead call to getTypeToTransformTo. The result is unused. (Eric Christopher, 2014-10-08; 1 file changed, -3/+1)
  llvm-svn: 219347
* Move the complex address expression out of DIVariable and into an extra argument of the llvm.dbg.declare/llvm.dbg.value intrinsics. (Adrian Prantl, 2014-10-01; 1 file changed, -8/+13)
  Previously, DIVariable was a variable-length field that has an optional reference to a Metadata array consisting of a variable number of complex address expressions. In the case of OpPiece expressions this is wasting a lot of storage in IR, because when an aggregate type is, e.g., SROA'd into all of its n individual members, the IR will contain n copies of the DIVariable, all alike, only differing in the complex address reference at the end.
  By making the complex address into an extra argument of the dbg.value/dbg.declare intrinsics, all of the pieces can reference the same variable and the complex address expressions can be uniqued across the CU, too. Down the road, this will allow us to move other flags, such as "indirection" out of the DIVariable, too.
  The new intrinsics look like this:
    declare void @llvm.dbg.declare(metadata %storage, metadata %var, metadata %expr)
    declare void @llvm.dbg.value(metadata %storage, i64 %offset, metadata %var, metadata %expr)
  This patch adds a new LLVM-local tag to DIExpressions, so we can detect and pretty-print DIExpression metadata nodes.
  What this patch doesn't do: This patch does not touch the "Indirect" field in DIVariable; but moving that into the expression would be a natural next step.
  http://reviews.llvm.org/D4919
  rdar://problem/17994491
  Thanks to dblaikie and dexonsmith for reviewing this patch!
  Note: I accidentally committed a bogus older version of this patch previously.
  llvm-svn: 218787
* Revert r218778 while investigating buildbot breakage. (Adrian Prantl, 2014-10-01; 1 file changed, -13/+8)
  "Move the complex address expression out of DIVariable and into an extra"
  llvm-svn: 218782
* Move the complex address expression out of DIVariable and into an extra argument of the llvm.dbg.declare/llvm.dbg.value intrinsics. (Adrian Prantl, 2014-10-01; 1 file changed, -8/+13)
  Previously, DIVariable was a variable-length field that has an optional reference to a Metadata array consisting of a variable number of complex address expressions. In the case of OpPiece expressions this is wasting a lot of storage in IR, because when an aggregate type is, e.g., SROA'd into all of its n individual members, the IR will contain n copies of the DIVariable, all alike, only differing in the complex address reference at the end.
  By making the complex address into an extra argument of the dbg.value/dbg.declare intrinsics, all of the pieces can reference the same variable and the complex address expressions can be uniqued across the CU, too. Down the road, this will allow us to move other flags, such as "indirection" out of the DIVariable, too.
  The new intrinsics look like this:
    declare void @llvm.dbg.declare(metadata %storage, metadata %var, metadata %expr)
    declare void @llvm.dbg.value(metadata %storage, i64 %offset, metadata %var, metadata %expr)
  This patch adds a new LLVM-local tag to DIExpressions, so we can detect and pretty-print DIExpression metadata nodes.
  What this patch doesn't do: This patch does not touch the "Indirect" field in DIVariable; but moving that into the expression would be a natural next step.
  http://reviews.llvm.org/D4919
  rdar://problem/17994491
  Thanks to dblaikie and dexonsmith for reviewing this patch!
  llvm-svn: 218778
* [FastISel] Move optimizeCmpPredicate to FastISel base class. NFC. (Juergen Ributzka, 2014-09-15; 1 file changed, -0/+40)
  Make the optimizeCmpPredicate function available to all targets.
  llvm-svn: 217822
* Fast-ISel: Remove dead code after falling back from selecting call instructions (PR20863) (Hans Wennborg, 2014-09-08; 1 file changed, -15/+10)
  Previously, fast-isel would not clean up after failing to select a call instruction, because it would have called flushLocalValueMap(), which moves the insertion point, making SavedInsertPt in selectInstruction() invalid. Fix this by making SavedInsertPt a member variable, and having flushLocalValueMap() update it.
  This removes some redundant code at -O0, and more importantly fixes PR20863.
  Differential Revision: http://reviews.llvm.org/D5249
  llvm-svn: 217401
* [FastISel][tblgen] Rename tblgen generated FastISel functions. NFC. (Juergen Ributzka, 2014-09-03; 1 file changed, -50/+50)
  This is the final round of renaming. This changes tblgen to emit lower-case function names for FastEmitInst_* and FastEmit_*, and updates all its uses in the source code.
  Reviewed by Eric
  llvm-svn: 217075
* [FastISel] Rename public visible FastISel functions. NFC. (Juergen Ributzka, 2014-09-03; 1 file changed, -44/+42)
  This commit renames the following public FastISel functions:
    LowerArguments -> lowerArguments
    SelectInstruction -> selectInstruction
    TargetSelectInstruction -> fastSelectInstruction
    FastLowerArguments -> fastLowerArguments
    FastLowerCall -> fastLowerCall
    FastLowerIntrinsicCall -> fastLowerIntrinsicCall
    FastEmitZExtFromI1 -> fastEmitZExtFromI1
    FastEmitBranch -> fastEmitBranch
    UpdateValueMap -> updateValueMap
    TargetMaterializeConstant -> fastMaterializeConstant
    TargetMaterializeAlloca -> fastMaterializeAlloca
    TargetMaterializeFloatZero -> fastMaterializeFloatZero
    LowerCallTo -> lowerCallTo
  Reviewed by Eric
  llvm-svn: 217074
* [FastISel] Some long overdue spring cleaning of FastISel. (Juergen Ributzka, 2014-09-03; 1 file changed, -378/+333)
  Things got a little bit messy over the years and it is time for a little bit of spring cleaning. This first commit is focused on the FastISel base class itself. It doxyfies all comments, C++11fies the code where it makes sense, renames internal methods to adhere to the coding standard, and clang-formats the files.
  Reviewed by Eric
  llvm-svn: 217060
* [FastISel] Provide the option to skip target-independent instruction selection. NFC. (Juergen Ributzka, 2014-09-02; 1 file changed, -18/+24)
  This allows the target to disable target-independent instruction selection and jump directly into the target-dependent instruction selection code. This can be beneficial for targets, such as AArch64, which could emit much better code, but never got a chance to do so, because the target-independent instruction selector was able to find an instruction sequence.
  llvm-svn: 216947
* [FastISel] Undo phi node updates when falling back to SelectionDAG. (Juergen Ributzka, 2014-08-28; 1 file changed, -4/+7)
  The included test case would fail, because the MI PHI node would have two operands from the same predecessor. This problem occurs when a switch instruction couldn't be selected. This always happens, because there is no default switch support for FastISel to begin with. The problem was that FastISel would first add the operand to the PHI nodes and then fall back to SelectionDAG, which would then in turn add the same operands to the PHI nodes again.
  This fix removes these duplicate PHI node operands by resetting PHINodesToUpdate to its original state before FastISel tried to select the instruction.
  This fixes <rdar://problem/18155224>.
  llvm-svn: 216640
* [FastISel] (Juergen Ributzka, 2014-08-28; 1 file changed, -1/+8)
  Currently instructions are folded very aggressively for AArch64 into the memory operation, which can lead to the use of killed operands:
    %vreg1<def> = ADDXri %vreg0<kill>, 2
    %vreg2<def> = LDRBBui %vreg0, 2
    ... = ... %vreg1 ...
  This usually happens when the result is also used by another non-memory instruction in the same basic block, or by any instruction in another basic block.
  This fix teaches hasTrivialKill to not only check in the LLVM IR that the value has a single use, but also to check whether the register that represents that value has already been used. This can happen when the instruction with the use was folded into another instruction (in this particular case a load instruction).
  This fixes rdar://problem/18142857.
  llvm-svn: 216634
* [FastISel] Fix a potential bug in FastEmitInst_ri (Juergen Ributzka, 2014-08-27; 1 file changed, -2/+1)
  FastEmitInst_ri was constraining the first operand without checking if it is a virtual register. Use constrainOperandRegClass as all the other FastEmitInst_* functions do.
  llvm-svn: 216613
* Reapply [FastISel] Let the target decide first if it wants to materialize a constant (215588). (Juergen Ributzka, 2014-08-19; 1 file changed, -15/+21)
  Note: This was originally reverted to track down a buildbot error. This commit exposed a latent bug that was fixed in r215753. Therefore it is reapplied without any modifications.
  I ran it through SPEC2k and SPEC2k6 for AArch64 and it didn't introduce any new regressions.
  Original commit message:
  This changes the order in which FastISel tries to materialize a constant. Originally it would try to use a simple target-independent approach, which can lead to the generation of inefficient code. On X86 this would result in the use of movabsq to materialize any 64-bit integer constant - even for simple and small values such as 0 and 1. Also some very funny floating-point materialization could be observed too. On AArch64 it would materialize the constant 0 in a register even though the architecture has an actual "zero" register. On ARM it would generate unnecessary mov instructions or not use mvn.
  This change simply changes the order and always asks the target first if it wants to materialize the constant. This doesn't fix all the issues mentioned above, but it enables the targets to implement such optimizations.
  Related to <rdar://problem/17420988>.
  llvm-svn: 216006
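  A sketch of the reordered materialization logic described here (both helpers are hypothetical stand-ins for FastISel's target hook and its generic path; the real functions differ):

      #include "llvm/IR/Constant.h"

      // Hypothetical stand-ins for the target hook and the old default path.
      static unsigned targetMaterializeConstant(const llvm::Constant *) { return 0; }
      static unsigned genericMaterializeConstant(const llvm::Constant *) { return 0; }

      // New order: the target gets the first chance, and the simple
      // target-independent approach is only the fallback.
      static unsigned materializeConstant(const llvm::Constant *C) {
        if (unsigned Reg = targetMaterializeConstant(C))
          return Reg;
        return genericMaterializeConstant(C);
      }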
* [FastISel] Remove a performance debugging assert. (Juergen Ributzka, 2014-08-15; 1 file changed, -1/+0)
  As Jim pointed out, this assert isn't really needed to test for correctness, because the code right afterwards does the same check and falls back to SelectionDAG - as intended.
  llvm-svn: 215735
* Revert several FastISel commits to track down a buildbot error. (Juergen Ributzka, 2014-08-14; 1 file changed, -21/+15)
  This reverts:
    r215595 "[FastISel][X86] Add large code model support for materializing floating-point constants."
    r215594 "[FastISel][X86] Use XOR to materialize the "0" value."
    r215593 "[FastISel][X86] Emit more efficient instructions for integer constant materialization."
    r215591 "[FastISel][AArch64] Make use of the zero register when possible."
    r215588 "[FastISel] Let the target decide first if it wants to materialize a constant."
    r215582 "[FastISel][AArch64] Cleanup constant materialization code. NFCI."
  llvm-svn: 215673
* [FastISel] Let the target decide first if it wants to materialize a constant. (Juergen Ributzka, 2014-08-13; 1 file changed, -15/+21)
  This changes the order in which FastISel tries to materialize a constant. Originally it would try to use a simple target-independent approach, which can lead to the generation of inefficient code. On X86 this would result in the use of movabsq to materialize any 64-bit integer constant - even for simple and small values such as 0 and 1. Also some very funny floating-point materialization could be observed too. On AArch64 it would materialize the constant 0 in a register even though the architecture has an actual "zero" register. On ARM it would generate unnecessary mov instructions or not use mvn.
  This change simply changes the order and always asks the target first if it wants to materialize the constant. This doesn't fix all the issues mentioned above, but it enables the targets to implement such optimizations.
  Related to <rdar://problem/17420988>.
  llvm-svn: 215588
* Remove the TargetMachine forwards for TargetSubtargetInfo based information and update all callers. (Eric Christopher, 2014-08-04; 1 file changed, -13/+9)
  No functional change.
  llvm-svn: 214781
* [FastISel] Fix the patchpoint intrinsic lowering in FastISel for large target addresses. (Juergen Ributzka, 2014-07-31; 1 file changed, -1/+1)
  This fixes a mistake where I accidentally dropped the upper 32 bits of a 64-bit pointer during FastISel lowering of the patchpoint intrinsic.
  llvm-svn: 214367
* AA metadata refactoring (introduce AAMDNodes) (Hal Finkel, 2014-07-24; 1 file changed, -2/+4)
  In order to enable the preservation of noalias function parameter information after inlining, and the representation of block-level __restrict__ pointer information (etc.), additional kinds of aliasing metadata will be introduced. This metadata needs to be carried around in AliasAnalysis::Location objects (and MMOs at the SDAG level), and so we need to generalize the current scheme (which is hard-coded to just one TBAA MDNode*).
  This commit introduces only the necessary refactoring to allow for the introduction of other aliasing metadata types, but does not actually introduce any (that will come in a follow-up commit). What it does introduce is a new AAMDNodes structure to hold all of the aliasing metadata nodes associated with a particular memory-accessing instruction, and uses that structure instead of the raw MDNode* in AliasAnalysis::Location, etc.
  No functionality change intended.
  llvm-svn: 213859
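  A hedged sketch of how such a bundle is typically populated from an instruction (usage inferred from the described design and the API of that era, not taken from the patch):

      #include "llvm/IR/Instruction.h"
      #include "llvm/IR/Metadata.h"

      // Collect all aliasing metadata attached to a memory-accessing
      // instruction into one AAMDNodes value instead of passing a lone
      // TBAA MDNode* around.
      llvm::AAMDNodes collectAAInfo(const llvm::Instruction &I) {
        llvm::AAMDNodes AAInfo;
        I.getAAMetadata(AAInfo); // fills the TBAA / Scope / NoAlias members
        return AAInfo;
      }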
* [FastISel] Local values shouldn't be alive across an inline asm call with side effects. (Juergen Ributzka, 2014-07-16; 1 file changed, -0/+5)
  This fixes an issue where a local value is defined before and used after an inline asm call with side effects. This fix simply flushes the local value map, which updates the insertion point for the inline asm call to be above any previously defined local values.
  This fixes <rdar://problem/17694203>
  llvm-svn: 213203
* Remove TLI from isInTailCallPosition's arguments. NFC. (Juergen Ributzka, 2014-07-16; 1 file changed, -1/+1)
  There is no need to pass on TLI separately to the function. As Eric pointed out, the Target Machine already provides everything we need.
  llvm-svn: 213108
* [FastISel] Insert patchpoint instruction before the target-generated call instruction. (Juergen Ributzka, 2014-07-15; 1 file changed, -1/+2)
  The patchpoint instruction should have been inserted before the target-generated call instruction, so that it sits inside the ADJSTACKDOWN/ADJSTACKUP call sequence window.
  llvm-svn: 213034
* [FastISel] Fix patchpoint lowering to set the result register. (Juergen Ributzka, 2014-07-15; 1 file changed, -5/+6)
  Always update the value map with the result register (if there is one) for the patchpoint instruction we created to replace the target-specific call instruction.
  llvm-svn: 213033
* Avoid a warning from MSVC on "*/" in this code by inserting a space (Reid Kleckner, 2014-07-12; 1 file changed, -1/+1)
  llvm-svn: 212862
* [FastISel] Add target-independent patchpoint intrinsic support. WIP. (Juergen Ributzka, 2014-07-11; 1 file changed, -0/+169)
  This implements the target-independent lowering for the patchpoint intrinsic. Targets have to implement the FastLowerCall hook to support this intrinsic.
  Related to <rdar://problem/17427052>
  llvm-svn: 212849
* [FastISel] Add basic infrastructure to support a target-independent call lowering hook in FastISel. WIP. (Juergen Ributzka, 2014-07-11; 1 file changed, -2/+208)
  The infrastructure mimics the call lowering we already have in place for SelectionDAG, but with limitations. For example, structure return demotion and non-simple types are not supported (yet).
  Currently every backend has its own implementation and duplicated code for call lowering. There is also no specified interface that could be called from target-independent code. The target hook is opt-in and doesn't affect current implementations.
  llvm-svn: 212848
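  A rough sketch of what such an opt-in hook can look like on the target side (the class and its bodies are hypothetical; the hook and nested type names follow FastISel's later lower-case naming from the September 2014 renaming):

      #include "llvm/CodeGen/FastISel.h"

      namespace {
      // Hypothetical target FastISel subclass opting into the shared call
      // lowering path by overriding the call lowering hook.
      class MyTargetFastISel final : public llvm::FastISel {
        using llvm::FastISel::FastISel;

        bool fastLowerCall(CallLoweringInfo &CLI) override {
          // A real target would emit its call sequence here, using the
          // argument and return-value information collected in CLI.
          return false; // false means: fall back to SelectionDAG
        }

        bool fastSelectInstruction(const llvm::Instruction *) override {
          return false; // required override; real selection lives here
        }
      };
      } // namespace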
* [FastISel] Break out intrinsic lowering into a separate function and add a target hook. (Juergen Ributzka, 2014-07-11; 1 file changed, -34/+39)
  Create a separate helper function for target-independent intrinsic lowering. Also add a target hook that allows calling directly into a target-specific intrinsic lowering method.
  Currently the implementation is opt-in and doesn't affect existing target implementations.
  llvm-svn: 212843
* Move function dependent resetting of a subtarget variable out of the subtarget. (Eric Christopher, 2014-07-04; 1 file changed, -0/+1)
  This involved having the movt predicate take the current function: since whether or not to use movw/movt during instruction selection depends on code size, we pass in the function so we can check its attributes. This required adding the current MachineFunction to FastISel and propagating it through.
  llvm-svn: 212309
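  A minimal sketch of the kind of attribute check this enables (assumed, not the patch itself; the decision in the real predicate is more involved):

      #include "llvm/IR/Attributes.h"
      #include "llvm/IR/Function.h"

      // Prefer the movw/movt pair only when the current function is not
      // being optimized for minimum size.
      bool preferMovt(const llvm::Function &F) {
        return !F.hasFnAttribute(llvm::Attribute::MinSize);
      }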
* [FastISel] Factor out stackmap intrinsic selection code into a dedicated helper method. NFCI. (Juergen Ributzka, 2014-07-01; 1 file changed, -73/+76)
  llvm-svn: 212140