path: root/llvm/lib/CodeGen/SelectionDAG
...
* LegalizeDAG: Move unaligned load/store expansion to TLI (Matt Arsenault, 2016-04-21, 2 files, -310/+304)
  When custom lowered, this is not called if the store is custom lowered. Move it to be a utility function so targets can easily expand unaligned accesses when custom lowering. llvm-svn: 267029
* DAGCombiner: Reduce 64-bit BFE pattern to pattern on 32-bit component (Matt Arsenault, 2016-04-21, 1 file, -0/+44)
  If the extracted bits are restricted to the upper half or lower half, this can be truncated. llvm-svn: 267024
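  As an illustration only (the exact DAG patterns handled are in the patch, not reproduced here), a 64-bit bitfield extract whose bits all come from the low half can be rewritten on a truncated 32-bit value:
  ```
  ; Hypothetical example: bits [8,16) of %x lie entirely in the low 32 bits,
  ; so the shift-and-mask can be done on a truncated i32 and widened afterwards.
  define i64 @extract_low_field(i64 %x) {
    %shifted = lshr i64 %x, 8
    %field   = and i64 %shifted, 255
    ret i64 %field
  }
  ; Roughly the 32-bit form the combine aims to enable:
  ;   %lo = trunc i64 %x to i32
  ;   %s  = lshr i32 %lo, 8
  ;   %f  = and i32 %s, 255
  ;   %r  = zext i32 %f to i64
  ```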
* [SelectionDAG] Teach LegalizeVectorOps to directly Expand CTTZ_ZERO_UNDEF/CTLZ_ZERO_UNDEF to CTTZ/CTLZ if those ops are Legal/Custom, instead of deferring to LegalizeOps (Craig Topper, 2016-04-21, 1 file, -3/+5)
  This is needed to support CTTZ/CTLZ Custom correctly, since LegalizeOps would be too late to do the custom lowering. llvm-svn: 266951
* [PPC, SSP] Support PowerPC Linux stack protection. (Tim Shen, 2016-04-19, 1 file, -13/+11)
  llvm-svn: 266809
* [SSP, 2/2] Create llvm.stackguard() intrinsic and lower it to LOAD_STACK_GUARD (Tim Shen, 2016-04-19, 2 files, -54/+51)
  With this change, an IR pass can ideally always generate an llvm.stackguard call to get the stack guard; but for now there are still IR-form stack guard customizations around (see getIRStackGuard()). Future SSP customization should go through LOAD_STACK_GUARD.
  There is a behavior change: stack guard values are not CSEed anymore, since we should never reuse the value in case it has been spilled (and corrupted). See ssp-guard-spill.ll. This also causes changes in stack size and codegen in X86 and AArch64 test cases.
  Ideally we'd like to know whether the guard created in llvm.stackprotector() gets spilled or not. If the value is spilled, discard it and reload the stack guard; otherwise reuse the value. This could be done by teaching the register allocator how to rematerialize LOAD_STACK_GUARD and forcing a rematerialization (which seems hard), or by checking for spilling in expandPostRAPseudo. It only makes sense when the stack guard is a global variable, which requires more instructions to load. Anyway, this seems to go beyond the scope of the current patch. llvm-svn: 266806
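  A minimal sketch of how the new intrinsic appears at the IR level; the i8* signatures here are an assumption based on the description above, not quoted from the patch.
  ```
  ; An IR pass asks for the stack guard value directly instead of loading it
  ; from a target-specific location itself.
  declare i8* @llvm.stackguard()
  declare void @llvm.stackprotector(i8*, i8**)

  define void @protected() {
    %slot  = alloca i8*
    %guard = call i8* @llvm.stackguard()
    call void @llvm.stackprotector(i8* %guard, i8** %slot)
    ret void
  }
  ```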
* [NFC] Header cleanup (Mehdi Amini, 2016-04-18, 5 files, -5/+0)
  Removed some unused headers, replaced some headers with forward class declarations.
  Found using simple scripts like this one:
    clear && ack --cpp -l '#include "llvm/ADT/IndexedMap.h"' | xargs grep -L 'IndexedMap[<]' | xargs grep -n --color=auto 'IndexedMap'
  Patch by Eugene Kosov <claprix@yandex.ru>
  Differential Revision: http://reviews.llvm.org/D19219
  From: Mehdi Amini <mehdi.amini@apple.com>
  llvm-svn: 266595
* Switch lowering: don't add incoming PHI values from skipped bit test MBBs (PR27135) (Hans Wennborg, 2016-04-15, 1 file, -12/+26)
  After r245976, LLVM will skip the last bit test case if it knows it will always be true. However, we would still erroneously update PHI nodes with incoming values from the MBB that would perform the final bit test, causing -verify-machineinstrs to fail. llvm-svn: 266479
* SelectionDAGISel: rangeify a loop (Hans Wennborg, 2016-04-15, 1 file, -23/+20)
  llvm-svn: 266478
* Sink DI metadata usage out of MachineInstr.h and MachineInstrBuilder.h (Reid Kleckner, 2016-04-14, 1 file, -0/+1)
  MachineInstr.h and MachineInstrBuilder.h are very popular headers, widely included across all LLVM backends. It turns out that there are only a handful of TUs that actually care about DI operands on MachineInstrs. After this change, touching DebugInfoMetadata.h and rebuilding llc only needs 112 actions instead of 542. llvm-svn: 266351
* [CodeGen] Teach LLVM how to lower @llvm.{min,max}num to {MIN,MAX}NAN (David Majnemer, 2016-04-14, 1 file, -6/+16)
  The behavior of {MIN,MAX}NAN differs from that of {MIN,MAX}NUM when only one of the inputs is NaN: -NUM will return the non-NaN argument, while -NAN would return NaN. It is desirable to lower @llvm.{min,max}num to -NAN if the target doesn't have a native instruction for -NUM. Notably, ARMv7 NEON's vmin has the -NAN semantics.
  N.B. Of course, it is only safe to do this if the intrinsic call is marked nnan. llvm-svn: 266279
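  A hedged sketch of the source-level situation, written in current IR syntax; the exact way the nnan marking was expressed at the time may differ.
  ```
  ; With no NaN operands promised, the minnum call may be lowered to a
  ; NaN-propagating instruction such as ARMv7 NEON vmin.
  declare float @llvm.minnum.f32(float, float)

  define float @fast_min(float %a, float %b) {
    %r = call nnan float @llvm.minnum.f32(float %a, float %b)
    ret float %r
  }
  ```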
* AMDGPU: Implement canonicalize (Matt Arsenault, 2016-04-14, 2 files, -1/+4)
  Also add generic DAG node for it. llvm-svn: 266272
* TargetLowering: Factor out common code for tail call eligibility checking; NFC (Matthias Braun, 2016-04-14, 1 file, -0/+27)
  llvm-svn: 266270
* Cleanup Store Merging in UseAA case (Nirav Dave, 2016-04-13, 1 file, -30/+44)
  This patch fixes a bug (PR26827) when using anti-aliasing in store merging. It sets the chain users of the component stores to point to the new store instead of the component stores' chain parent.
  Reviewers: jyknight
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D18909
  llvm-svn: 266217
* [CodeGen] Remove constant-folding dead code. NFC. (Ahmed Bougacha, 2016-04-12, 1 file, -12/+4)
  This code was specific to vector operations with scalar operands: all the opcodes in FoldValue (via FoldConstantArithmetic) can't match those criteria. Replace it with an assert if that ever changes: at that point, we might need to add back a splat BUILD_VECTOR. llvm-svn: 266100
* Introduce a GCRelocateInst class [NFC] (Philip Reames, 2016-04-12, 3 files, -7/+6)
  Previously, we were using isGCRelocate predicates. Using a subclass of IntrinsicInst is far more idiomatic. The refactoring also enables a couple of minor simplifications and code sharing. llvm-svn: 266098
* [DAGCombiner] Fold xor/and/or (bitcast(A), bitcast(B)) -> bitcast(op (A,B)) anytime before LegalizeVectorOps (Simon Pilgrim, 2016-04-11, 1 file, -2/+2)
  xor/and/or (bitcast(A), bitcast(B)) -> bitcast(op (A,B)) was only being combined at the AfterLegalizeTypes stage; this patch permits the combine to occur anytime before then as well. The main aim with this is to improve the ability to recognise bitmasks that can be converted to shuffles.
  I had to modify a number of AVX512 mask tests, as the basic bitcast to/from scalar pattern was being stripped out, preventing testing of the mmask bitops. By replacing the bitcasts with loads we can get almost the same result.
  Differential Revision: http://reviews.llvm.org/D18944
  llvm-svn: 265998
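  An illustrative IR-level analogue; the combine itself runs on the SelectionDAG, so this only sketches the shape it targets.
  ```
  ; Both AND operands are bitcast from the same wider vector type, so the
  ; logic op can be performed on <2 x i64> and bitcast once afterwards,
  ; which makes mask patterns easier to recognise as shuffles.
  define <4 x i32> @combine_masks(<2 x i64> %a, <2 x i64> %b) {
    %ai  = bitcast <2 x i64> %a to <4 x i32>
    %bi  = bitcast <2 x i64> %b to <4 x i32>
    %and = and <4 x i32> %ai, %bi
    ret <4 x i32> %and
  }
  ```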
* TargetRegisterInfo: Add getRegAsmName() (Tom Stellard, 2016-04-11, 1 file, -1/+1)
  Summary: The motivation for this new function is to move an invalid assumption about the relationship between the names of register definitions in tablegen files and their assembly names into TargetRegisterInfo, so that we can begin working on fixing this assumption.
  The current problem is that if you have a register definition in TableGen like:
    def MYReg0 : Register<"r0", 0>;
  the function TargetLowering::getRegForInlineAsmConstraint() derives the assembly name from the tablegen name, "MYReg0", rather than the given assembly name "r0". This has been working because on most targets the tablegen name and the assembly name are case-insensitive matches for each other (e.g. def EAX : X86Reg<"eax", ...>).
  getRegAsmName() will allow targets to override this default assumption and return the correct assembly name.
  Reviewers: echristo, hfinkel
  Subscribers: SamWot, echristo, hfinkel, llvm-commits
  Differential Revision: http://reviews.llvm.org/D15614
  llvm-svn: 265955
* [x86] use BMI 'andn' for logic + compare ops (Sanjay Patel, 2016-04-09, 1 file, -0/+4)
  With BMI, we can use 'andn' to save an instruction when the result is only used in a compare. This is related to one of the potential sequences to check 'isfinite' in: https://llvm.org/bugs/show_bug.cgi?id=27164
  Differential Revision: http://reviews.llvm.org/D18910
  llvm-svn: 265875
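  A sketch of the kind of IR that can benefit (illustration only, not a test case from the patch): a not-and whose result only feeds a compare against zero can become a single flag-setting ANDN.
  ```
  define i1 @andn_cmp(i32 %x, i32 %y) {
    %notx = xor i32 %x, -1          ; ~x
    %and  = and i32 %notx, %y       ; ~x & y
    %cmp  = icmp ne i32 %and, 0     ; only the flags of the AND are needed
    ret i1 %cmp
  }
  ```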
* [SSP] Remove llvm.stackprotectorcheck. (Tim Shen, 2016-04-08, 3 files, -30/+16)
  This is a cleanup patch for SSP support in LLVM. There is no functional change. llvm.stackprotectorcheck is not needed, because SelectionDAG isn't actually lowering it in SelectBasicBlock; rather, it adds check code in FinishBasicBlock, ignoring the position where the intrinsic is inserted (see FindSplitPointForStackProtector()). llvm-svn: 265851
* Fix Load Control Dependence in MemCpy Generation (Nirav Dave, 2016-04-08, 2 files, -57/+1)
  In memcpy lowering we had missed a dependence from the load of the operation to successor operations. This caused us to potentially construct an initial DAG with a memory dependence not fully represented in the chain sub-DAG, instead requiring inspection of the entire DAG, which breaks alias analysis by allowing incorrect repositioning of memory operations.
  To work around this, r200033 changed DAGCombiner::GatherAllAliases to be conservative whenever such issues could arise. Unfortunately this check forbade many non-problematic situations as well. For example, it's common for incoming argument lowering to add a non-aliasing load hanging off of EntryNode. Then, if GatherAllAliases visited EntryNode, it would find that other (unvisited) use of the EntryNode chain, and just give up entirely. Furthermore, the check was incomplete: it would not actually detect all such potentially problematic DAG constructions, because GatherAllAliases did not guarantee to visit all chain nodes going up to the root EntryNode. This is in general fine -- giving up early will just miss a potential optimization, not generate incorrect results. But for this non-chain dependency detection code, it's possible that you could have a load attached to a higher-up chain node than any which were visited. If that load aliases your store, but the only dependency is through the value operand of a non-aliasing store, it would've been missed by this code, and potentially reordered.
  With the dependence added, this check can be removed and alias analysis can be much more aggressive. This fixes a code quality regression in the Consecutive Store Merge cleanup (D14834).
  Test change: ppc64-align-long-double.ll may now see multiple serializations of its stores.
  Differential Revision: http://reviews.llvm.org/D18062
  llvm-svn: 265836
* NFC: make AtomicOrdering an enum class (JF Bastien, 2016-04-06, 1 file, -1/+1)
  Summary: In the context of http://wg21.link/lwg2445 C++ uses the concept of 'stronger' ordering but doesn't define it properly. This should be fixed in C++17 barring a small question that's still open.
  The code currently plays fast and loose with the AtomicOrdering enum. Using an enum class is one step towards tightening things. I later also want to tighten related enums, such as clang's AtomicOrderingKind (which should be shared with LLVM as a 'C++ ABI' enum).
  This change touches a few lines of code which can be improved later; I'd like to keep it as NFC for now as it's already quite complex. I have related changes for clang.
  As a follow-up I'll add:
    bool operator<(AtomicOrdering, AtomicOrdering) = delete;
    bool operator>(AtomicOrdering, AtomicOrdering) = delete;
    bool operator<=(AtomicOrdering, AtomicOrdering) = delete;
    bool operator>=(AtomicOrdering, AtomicOrdering) = delete;
  This is separate so that clang and LLVM changes don't need to be in sync.
  Reviewers: jyknight, reames
  Subscribers: jyknight, llvm-commits
  Differential Revision: http://reviews.llvm.org/D18775
  llvm-svn: 265602
* Lower @llvm.experimental.deoptimize as a noreturn call (Sanjoy Das, 2016-04-06, 3 files, -7/+35)
  While preserving the return value for @llvm.experimental.deoptimize at the IR level is useful during mid-level optimization, doing so at the machine instruction level requires generating some extra code and a return that is non-ideal. This change has LLVM lower
  ```
  %val = call @llvm.experimental.deoptimize
  ret %val
  ```
  to effectively
  ```
  call @__llvm_deoptimize()
  unreachable
  ```
  instead.
  llvm-svn: 265502
* Swift Calling Convention: swifterror target-independent change. (Manman Ren, 2016-04-05, 5 files, -4/+353)
  At the IR level, the swifterror argument is an input argument with type ErrorObject**. For targets that support swifterror, we want to optimize it to behave as an inout value with type ErrorObject*; it will be passed in a fixed physical register.
  The main idea is to track the virtual registers for each swifterror value. We define swifterror values as AllocaInsts with the swifterror attribute or a function argument with the swifterror attribute.
  In SelectionDAGISel.cpp, we set up swifterror values (SwiftErrorVals) before handling the basic blocks. When iterating over all basic blocks in RPO, before actually visiting the basic block, we call mergeIncomingSwiftErrors to merge incoming swifterror values when there are multiple predecessors, or to simply propagate them. There, we create a virtual register for each swifterror value in the entry block. For predecessors that are not yet visited, we create virtual registers to hold the swifterror values at the end of the predecessor. The assignments are saved in SwiftErrorWorklist and will be materialized at the end of visiting the basic block.
  When visiting a load from a swifterror value, we copy from the current virtual register assignment. When visiting a store to a swifterror value, we create a virtual register to hold the swifterror value and update SwiftErrorMap to track the current virtual register assignment.
  Differential Revision: http://reviews.llvm.org/D18108
  llvm-svn: 265433
* fix typos; NFC (Sanjay Patel, 2016-04-04, 1 file, -2/+2)
  llvm-svn: 265356
* Swift Calling Convention: add swifterror attribute. (Manman Ren, 2016-04-01, 3 files, -0/+9)
  A ``swifterror`` attribute can be applied to a function parameter or an AllocaInst. This commit does not include any target-specific change. The target-specific optimization will come as a follow-up patch.
  Differential Revision: http://reviews.llvm.org/D18092
  llvm-svn: 265189
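  A minimal sketch of how the attribute can appear in IR; the type names and functions are placeholders, not taken from the patch.
  ```
  %swift.error = type opaque

  ; swifterror on a parameter and on the alloca that is passed to it.
  declare void @may_fail(%swift.error** swifterror %err)

  define void @caller() {
    %err = alloca swifterror %swift.error*
    store %swift.error* null, %swift.error** %err
    call void @may_fail(%swift.error** swifterror %err)
    ret void
  }
  ```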
* Don't use an i64 return type with webkit_jscc (Sanjoy Das, 2016-04-01, 1 file, -6/+7)
  Re-enable an assertion enabled by Justin Lebar in rL265092. rL265092 was breaking test/CodeGen/X86/deopt-intrinsic.ll because webkit_jscc does not like non-i64 return types. Change the test case to not do that. llvm-svn: 265099
* Revert "Protect some assertions with NDEBUG rather than DEBUG()."Justin Lebar2016-04-011-7/+6
| | | | | | This reverts r265092, because it breaks CodeGen/X86/deopt-intrinsic.ll. llvm-svn: 265093
* Protect some assertions with NDEBUG rather than DEBUG(). (Justin Lebar, 2016-04-01, 1 file, -6/+7)
  DEBUG() only runs if you pass -debug, but these assertions are generally useful. llvm-svn: 265092
* fix typo; NFC (Sanjay Patel, 2016-03-31, 1 file, -2/+1)
  llvm-svn: 265054
* Prevent X86ISelLowering from merging volatile loads (Nirav Dave, 2016-03-31, 2 files, -14/+9)
  Change isConsecutiveLoads to check that loads are non-volatile, as this is a requirement for any load merges. Propagate the change to two callers.
  Reviewers: RKSimon
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D18546
  llvm-svn: 265013
* LegalizeDAG: Don't replace vector store with integer if not legal (Matt Arsenault, 2016-03-30, 3 files, -41/+87)
  For the same reason as the corresponding load change. Note that ExpandStore is completely broken for non-byte-sized element vector stores, but preserve the current broken behavior which has tests for it. The behavior should be the same, but now introduces a new typed store that is incorrectly split later rather than doing it directly. llvm-svn: 264928
* LegalizeDAG: Don't replace vector load with integer unless legal (Matt Arsenault, 2016-03-30, 3 files, -28/+71)
  On AMDGPU we want to be able to promote i64/f64 loads to v2i32. If the access is unaligned, the old logic would conclude that since i64 is legal it should convert the load back to i64, producing an endless legalization loop. Extract the logic for scalarizing the load into a new TargetLowering function, where this can also replace the custom function AMDGPU has for this. llvm-svn: 264927
* Add support for no-jump-tables (Nirav Dave, 2016-03-29, 1 file, -2/+7)
  Add a soft function attribute controlling the generation of jump tables in CodeGen, as an initial step towards clang support for GCC's no-jump-tables option.
  Reviewers: hans, echristo
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D18321
  llvm-svn: 264756
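  A hedged sketch of how such a string attribute could look on a function; the exact spelling ("no-jump-tables"="true") is an assumption based on the feature name, not quoted from the patch.
  ```
  ; With the attribute present, the switch below would be lowered with
  ; compares and branches rather than a jump table.
  define i32 @dispatch(i32 %x) "no-jump-tables"="true" {
  entry:
    switch i32 %x, label %default [
      i32 0, label %a
      i32 1, label %b
      i32 2, label %c
      i32 3, label %d
    ]
  a:
    ret i32 10
  b:
    ret i32 20
  c:
    ret i32 30
  d:
    ret i32 40
  default:
    ret i32 0
  }
  ```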
* Swift Calling Convention: add swiftself attribute. (Manman Ren, 2016-03-29, 3 files, -0/+9)
  Differential Revision: http://reviews.llvm.org/D17866
  llvm-svn: 264754
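  For comparison with the swifterror sketch above, a minimal illustration of the swiftself parameter attribute in IR (names and types are placeholders):
  ```
  ; swiftself marks the parameter carrying the Swift 'self' value so that
  ; supporting targets can keep it in a fixed register across calls.
  declare void @method(i8* swiftself %self)

  define void @call_method(i8* %object) {
    call void @method(i8* swiftself %object)
    ret void
  }
  ```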
* [Codegen] Decrease minimum jump table density. (Kyle Butt, 2016-03-29, 2 files, -8/+23)
  Minimum density for both optsize and non-optsize functions is now controlled by options:
    -sparse-jump-table-density (default 10) for non-optsize functions
    -dense-jump-table-density (default 40) for optsize functions, which matches the current default
  This improves several benchmarks at Google at the cost of a small codesize increase. For code compiled with -Os, the old behavior continues. llvm-svn: 264689
* Minor code cleanup. NFC. (Junmo Park, 2016-03-26, 1 file, -1/+1)
  llvm-svn: 264505
* Prevent construction of cycle in DAG store merge (Nirav Dave, 2016-03-25, 3 files, -40/+44)
  When merging stores in DAGCombiner, add a check to ensure that no dependencies exist that would cause the construction of a cycle in our DAG. This may happen if one store has a data dependence on another instruction (e.g. a load) which itself has a (chain) dependence on another store being merged. These stores cannot be merged safely, and doing so results in a cycle that is discovered in LegalizeDAG. This check is only done in cases where antialias analysis is used (UseAA), as non-AA store merge candidates will be merged logically after all loads which have been checked to not alias.
  Reviewers: ahatanak, spatel, niravd, arsenm, hfinkel, tstellarAMD, jyknight
  Subscribers: llvm-commits, tberghammer, danalbert, srhines
  Differential Revision: http://reviews.llvm.org/D18336
  llvm-svn: 264461
* CXX TLS: collect return blocks after SelectAllBasicBlocks. (Manman Ren, 2016-03-24, 1 file, -7/+15)
  It is incorrect to get the corresponding MBB for a ReturnInst before SelectAllBasicBlocks, since SelectAllBasicBlocks can change the correspondence between a ReturnInst and the MBB it is in. PR27062 llvm-svn: 264358
* Reduce code duplication by extracting out a helper function; NFC (Sanjoy Das, 2016-03-24, 2 files, -30/+21)
  llvm-svn: 264355
* Lower varargs correctly in deopt bundle lowering (Sanjoy Das, 2016-03-24, 1 file, -0/+1)
  Earlier we were ignoring varargs in LowerCallSiteWithDeoptBundle because populateCallLoweringInfo does not set CallLoweringInfo::IsVarArg. llvm-svn: 264354
* Add lowering support for llvm.experimental.deoptimize (Sanjoy Das, 2016-03-24, 3 files, -0/+40)
  Summary: Only adds support for "naked" calls to llvm.experimental.deoptimize. Support for round-tripping through RewriteStatepointsForGC will come as a separate patch (should be simpler than this one).
  Reviewers: reames
  Subscribers: sanjoy, mcrosier, llvm-commits
  Differential Revision: http://reviews.llvm.org/D18429
  llvm-svn: 264329
* [Statepoints] Fix yet another issue around gc pointer uniquing (Sanjoy Das, 2016-03-24, 2 files, -19/+22)
  Given that StatepointLowering now uniques derived pointers before putting them in the per-statepoint spill map, we may end up with missing entries for derived pointers when we visit a gc.relocate on a pointer that was de-duplicated away. Fix this by keeping two maps: one mapping gc pointers to their de-duplicated values, and one mapping a de-duplicated value to the slot it is spilled in. llvm-svn: 264320
* Minor cosmetic changes (NFC) (Sanjoy Das, 2016-03-24, 1 file, -7/+7)
  - Reflow comments
  - Rename function
  llvm-svn: 264319
* CodeGen: extend RHS when splitting ATOMIC_CMP_SWAP_WITH_SUCCESS. (Tim Northover, 2016-03-24, 1 file, -3/+18)
  If the operation's type has been promoted during type legalization, we need to account for the fact that the high bits of the comparison operand are likely unspecified. The LHS is usually zero-extended, but MIPS sign-extends it, so we have to be slightly careful.
  Patch by Simon Dardis.
  llvm-svn: 264296
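  A hedged sketch of the source pattern involved: a narrow cmpxchg whose success flag is used maps to an ATOMIC_CMP_SWAP_WITH_SUCCESS node, and its type may be promoted on targets without a native sub-word compare-and-swap. The i16 width here is just an example, not a case taken from the patch.
  ```
  define i1 @cas_success(i16* %p, i16 %expected, i16 %new) {
    %pair = cmpxchg i16* %p, i16 %expected, i16 %new seq_cst seq_cst
    %ok   = extractvalue { i16, i1 } %pair, 1   ; the "with success" part
    ret i1 %ok
  }
  ```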
* Remove unsafe AssertZext after promoting result of FP_TO_FP16 (Pirama Arumuga Nainar, 2016-03-24, 1 file, -4/+1)
  Summary: Some target lowerings of FP_TO_FP16, for instance ARM's vcvtb.f16.f32 instruction, do not guarantee that the top 16 bits are zeroed out. Remove the unsafe AssertZext and add tests to exercise this.
  Reviewers: jmolloy, sbaranga, kristof.beyls, aadg
  Subscribers: llvm-commits, srhines, aemerson
  Differential Revision: http://reviews.llvm.org/D18426
  llvm-svn: 264285
* SelectionDAG: Remove a tautological dyn_cast. NFC (Justin Bogner, 2016-03-23, 1 file, -3/+2)
  Index is already a StoreSDNode, so this dyn_cast doesn't do anything. llvm-svn: 264177
* Remove stale comment (Sanjoy Das, 2016-03-23, 1 file, -2/+1)
  llvm-svn: 264131
* [StatepointLowering] Don't do two DenseMap lookups; nfci (Sanjoy Das, 2016-03-23, 1 file, -2/+3)
  llvm-svn: 264130
* [StatepointLowering] Minor NFC cleanups (Sanjoy Das, 2016-03-23, 1 file, -11/+12)
  - Use auto
  - Name variables in LLVM style
  - Use llvm::find instead of std::find
  - Blank lines between declarations
  llvm-svn: 264129
* [StatepointLowering] Minor nfc refactoring (Sanjoy Das, 2016-03-23, 1 file, -29/+6)
  Now that StatepointLoweringInfo represents base pointers, derived pointers and gc relocates as SmallVectors and not ArrayRefs, we no longer need to allocate "backing storage" on the stack in LowerStatepoint. So elide the backing storage, and inline the trivial body of getIncomingStatepointGCValues. llvm-svn: 264128