path: root/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
Commit log (most recent first); each entry lists the commit message, author, date, files changed, and lines changed.
...
* [LLVM][Alignment] Introduce Alignment Type in DataLayout (Guillaume Chatelet, 2019-08-05, 1 file changed, -1/+1)
  Summary: This patch is part of a series to introduce an Alignment type.
  See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
  See this patch for the introduction of the type: https://reviews.llvm.org/D64790
  Reviewers: courbet, jfb, jakehehrlich
  Subscribers: hiraditya, dexonsmith, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D65521
  Make getFunctionPtrAlign() return MaybeAlign
  llvm-svn: 367817
* [SelectionDAG] Add node creation debug message to getMemIntrinsicNode. (Craig Topper, 2019-08-04, 1 file changed, -1/+3)
  llvm-svn: 367771
* [SelectionDAG] Use APInt::isSubsetOf/intersects to simplify some code. (Craig Topper, 2019-08-01, 1 file changed, -2/+2)
  Also use KnownBits::isNegative/isNonNegative to further simplify.
  llvm-svn: 367518
* Migrate some more fadd and fsub cases away from UnsafeFPMath control to utilize NoSignedZerosFPMath options control (Michael Berg, 2019-07-31, 1 file changed, -1/+1)
  Summary: Honoring no signed zeroes is also available as a user control through clang, separately from and regardless of the fastmath or UnsafeFPMath context, so the DAG guards should reflect this.
  Reviewers: spatel, arsenm, hfinkel, wristow, craig.topper
  Reviewed By: spatel
  Subscribers: rampitec, foad, nhaehnle, wuzish, nemanjai, jvesely, wdng, javed.absar, MaskRay, jsji
  Differential Revision: https://reviews.llvm.org/D65170
  llvm-svn: 367486
* SelectionDAG, MI, AArch64: Widen target flags fields/arguments from unsigned char to unsigned. (Peter Collingbourne, 2019-07-31, 1 file changed, -15/+12)
  This makes the field wider than MachineOperand::SubReg_TargetFlags so that we don't end up silently truncating any higher bits. We should still catch any bits truncated from the MachineOperand field as a consequence of the assertion in MachineOperand::setTargetFlags().
  Differential Revision: https://reviews.llvm.org/D65465
  llvm-svn: 367474
* [SelectionDAG] Check for any recursion depth greater than or equal to the limit instead of just equal to the limit. (Simon Pilgrim, 2019-07-27, 1 file changed, -3/+3)
  If anything called the recursive isKnownNeverNaN/computeKnownBits/ComputeNumSignBits/SimplifyDemandedBits/SimplifyMultipleUseDemandedBits with an incorrect depth then we could continue to recurse even if we'd already exceeded the depth limit. This replaces the limit check (Depth == 6) with (Depth >= 6) to make sure that we don't circumvent it.
  This causes a couple of regressions as a mixture of calls (SimplifyMultipleUseDemandedBits + combineX86ShufflesRecursively) were calling with depths that were already over the limit. I've fixed SimplifyMultipleUseDemandedBits to not do this. combineX86ShufflesRecursively is trickier as we get a lot of regressions if we reduce its own limit from 8 to 6 (it also starts at Depth == 1 instead of Depth == 0 like the others) - I'll see what I can do in future patches.
  llvm-svn: 367171
* [SelectionDAG] GetDemandedBits - update SIGN_EXTEND_INREG op to just call SimplifyMultipleUseDemandedBits. (Simon Pilgrim, 2019-07-26, 1 file changed, -9/+1)
  llvm-svn: 367098
* [SelectionDAG] GetDemandedBits - update OR/XOR ops to just call SimplifyMultipleUseDemandedBits. (Simon Pilgrim, 2019-07-26, 1 file changed, -6/+2)
  Eventually all of these will be moved over, but we create nodes in GetDemandedBits recursion at the moment, which causes regressions when we try to remove them all.
  llvm-svn: 367092
* Fix signed/unsigned comparison warning. NFCI. (Simon Pilgrim, 2019-07-24, 1 file changed, -1/+1)
  llvm-svn: 366935
* [DAGCombine] matchBinOpReduction - add partial reduction matching (Simon Pilgrim, 2019-07-24, 1 file changed, -7/+32)
  This patch adds support for recognizing cases where a larger vector type is being used to reduce just the elements in the lower subvector, e.g. an <8 x i32> reduction pattern in a <16 x i32> vector:
    <4,5,6,7,u,u,u,u,u,u,u,u,u,u,u,u>
    <2,3,u,u,u,u,u,u,u,u,u,u,u,u,u,u>
    <1,u,u,u,u,u,u,u,u,u,u,u,u,u,u,u>
  matchBinOpReduction returns the lower extracted subvector in such cases, assuming isExtractSubvectorCheap accepts the extraction.
  I've only enabled it for X86 reduction sums so far. I intend to enable it for the bitop/minmax cases in future patches, and eventually I think it's worth turning it on all the time. This is mainly just a case of ensuring calls to matchBinOpReduction don't make assumptions on the vector width based on the original vector extraction.
  Fixes the x86 partial reduction sum cases in PR33758 and PR42023.
  Differential Revision: https://reviews.llvm.org/D65047
  llvm-svn: 366933
* [SelectionDAG] makeEquivalentMemoryOrdering - early out for equal chains (PR42727) (Simon Pilgrim, 2019-07-24, 1 file changed, -1/+1)
  If we are already using the same chain for the old/new memory ops then just return.
  Fixes PR42727 which had getLoad() reusing an existing node.
  llvm-svn: 366922
* Fixing @llvm.memcpy not honoring volatile. (Guillaume Chatelet, 2019-07-09, 1 file changed, -19/+24)
  This is explicitly not addressing target-specific code, or calls to memcpy.
  Summary: https://bugs.llvm.org/show_bug.cgi?id=42254
  Reviewers: courbet
  Subscribers: hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D63215
  llvm-svn: 365449
* [SelectionDAG] Propagate alias metadata to target intrinsic nodes (James Molloy, 2019-07-03, 1 file changed, -2/+2)
  When a target intrinsic has been determined to touch memory, we construct a MachineMemOperand during SDAG construction. In this case, we should propagate AAMDNodes metadata to the MachineMemOperand where available.
  Differential revision: https://reviews.llvm.org/D64131
  llvm-svn: 365043
* [SelectionDAG] Use the memory VT instead of result VT for FoldingSet profiling in getMaskedLoad/getMaskedStore. (Craig Topper, 2019-06-30, 1 file changed, -3/+2)
  This matches what is done by the Profile function. Otherwise CSE won't work properly.
  llvm-svn: 364717
* [X86] X86DAGToDAGISel::matchBitExtract(): pattern b: truncation awareness (Roman Lebedev, 2019-06-26, 1 file changed, -0/+6)
  Summary: (Not so) boringly identical to pattern a (D62786). Not yet sure how to deal with the last pattern c.
  Reviewers: RKSimon, craig.topper, spatel
  Reviewed By: RKSimon
  Subscribers: llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D62793
  llvm-svn: 364418
* [DAGCombiner] Support (shl (ext (shl x, c1)), c2) -> 0 non-uniform folds. (Simon Pilgrim, 2019-06-19, 1 file changed, -4/+4)
  Use matchBinaryPredicate instead of isConstOrConstSplat to let us handle non-uniform shift cases.
  This requires us to tweak matchBinaryPredicate to allow it to (optionally) handle constants with different type widths.
  llvm-svn: 363792
* [SelectionDAG] Fold insert_subvector(undef, extract_subvector(v, c), c) -> v in getNode (Simon Pilgrim, 2019-06-17, 1 file changed, -0/+6)
  This is already done in DAGCombiner::visitINSERT_SUBVECTOR, but this helps a number of shuffles across different vector widths recognise when they come from the same source.
  llvm-svn: 363542
* [SelectionDAG] ComputeNumSignBits - support constant pool values from target (Simon Pilgrim, 2019-06-04, 1 file changed, -0/+30)
  As I mentioned on D61887, we don't get as many hits on ComputeNumSignBits as we did on computeKnownBits. The case we do get is interesting though - it allows us to use the 'ConditionalNegate' combine in combineLogicBlendIntoPBLENDV to remove a select.
  It comes too late for SSE41 (BLENDV) cases, but SSE2 tests can hit it now. We should probably try to make use of this for SSE41+ targets as well - avoiding variable blends is usually a good idea. I'll investigate as a followup.
  Differential Revision: https://reviews.llvm.org/D62777
  llvm-svn: 362486
* [SelectionDAG] ComputeNumSignBits - clang-format + improve *EXTLOAD comments. NFCI. (Simon Pilgrim, 2019-06-04, 1 file changed, -7/+7)
  Pre-commit requested for D62777.
  llvm-svn: 362485
* [SelectionDAG] Add fpto[us]i(undef) --> undef constant fold (Simon Pilgrim, 2019-06-04, 1 file changed, -0/+5)
  Follow up to D62807.
  Differential Revision: https://reviews.llvm.org/D62811
  llvm-svn: 362483
* [SelectionDAG] Add [us]itofp(undef) --> 0 constant fold (PR39205) (Simon Pilgrim, 2019-06-03, 1 file changed, -0/+6)
  We were missing this fold in the DAG, which I've copied directly from llvm::ConstantFoldCastInstruction.
  Differential Revision: https://reviews.llvm.org/D62807
  llvm-svn: 362397
* [DAG] isBitwiseNot / isConstOrConstSplat - add support for build vector undefs + truncation (PR41020) (Simon Pilgrim, 2019-06-02, 1 file changed, -13/+28)
  Add (opt-in) support for implicit truncation to isConstOrConstSplat, which allows us to match truncated 'all ones' cases in isBitwiseNot.
  PR41020 compares against using ISD::isBuildVectorAllOnes() instead, but that predicate silently accepts any UNDEF elements in the build vector, which might not be what we want in isBitwiseNot - so I've added an opt-in 'AllowUndefs' flag that is set to false by default but will allow us to enable it on individual cases where it's safe.
  Differential Revision: https://reviews.llvm.org/D62783
  llvm-svn: 362323
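  The following is a minimal sketch of how a caller might combine the two opt-in flags this commit describes; it is not the committed code, and the parameter names are taken on trust from the commit text for the llvm::isConstOrConstSplat helper declared in SelectionDAGNodes.h.

      #include "llvm/CodeGen/SelectionDAGNodes.h"
      using namespace llvm;

      // Sketch: treat V as a "bitwise not" when it is an XOR whose RHS is an
      // all-ones constant or constant splat. AllowUndefs accepts undef lanes
      // in the splat; AllowTruncation accepts a splat built at a wider
      // element type that was implicitly truncated (the PR41020 case).
      static bool looksLikeBitwiseNot(SDValue V) {
        if (V.getOpcode() != ISD::XOR)
          return false;
        if (ConstantSDNode *C = isConstOrConstSplat(V.getOperand(1),
                                                    /*AllowUndefs=*/true,
                                                    /*AllowTruncation=*/true))
          return C->isAllOnesValue();
        return false;
      }

  Per the commit text, the flags default to false, so individual combines opt in only where undef lanes or truncated splats are known to be safe.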
* [SelectionDAG] Make the code in mutateStrictFPToFP less aware of how many operands each node has. NFCI (Craig Topper, 2019-05-31, 1 file changed, -55/+34)
  Just copy all of the operands except the chain and call MorphNode on that. This removes the IsUnary and IsTernary flags.
  Also always get the result type from the result type of the original nodes. Previously we got it from the operand, except for two nodes where that didn't work.
  llvm-svn: 362269
* [SelectionDAG] fold concat of extract subvectors (Sanjay Patel, 2019-05-27, 1 file changed, -0/+25)
  This is derived from the related fold for build vectors. We also have a version of this in DAGCombiner. The benefit of having this fold at node creation time is (1) efficiency and (2) preventing infinite looping from creating patterns that should not exist in the first place.
  Currently, the inf-loop could happen with MergeConsecutiveStores() because it naively creates concat of extracts when forming a wider vector store. That could fight with target-specific store narrowing.
  llvm-svn: 361780
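  A rough illustration of the fold described above, under the assumption that the operands extract consecutive, covering pieces of one source vector; the helper name and structure are invented for illustration and the committed getNode code differs in detail.

      #include "llvm/CodeGen/SelectionDAGNodes.h"
      using namespace llvm;

      // Sketch: if every operand is an extract_subvector from the same source
      // at consecutive indices, and the pieces cover the whole source, the
      // concat is just the source vector.
      static SDValue foldConcatOfExtracts(ArrayRef<SDValue> Ops, EVT VT) {
        SDValue Src;
        unsigned NextIdx = 0;
        for (SDValue Op : Ops) {
          if (Op.getOpcode() != ISD::EXTRACT_SUBVECTOR)
            return SDValue();
          auto *Idx = dyn_cast<ConstantSDNode>(Op.getOperand(1));
          if (!Idx || Idx->getZExtValue() != NextIdx)
            return SDValue();
          if (!Src)
            Src = Op.getOperand(0);
          else if (Src != Op.getOperand(0))
            return SDValue();
          NextIdx += Op.getValueType().getVectorNumElements();
        }
        return (Src && Src.getValueType() == VT) ? Src : SDValue();
      }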
* [SelectionDAG] fix formatting and redundant comments; NFC (Sanjay Patel, 2019-05-27, 1 file changed, -7/+6)
  There's a possible missing fold here for extracting from the same source vector. It's similar to a check that we use to squash a build vector with all extracted elements from the same source vector.
  llvm-svn: 361778
* [SelectionDAG] GetDemandedBits - add demanded elements wrapper implementation (Simon Pilgrim, 2019-05-27, 1 file changed, -1/+15)
  The DemandedElts variable is pretty much inert at the moment - the original GetDemandedBits implementation calls it with an 'all ones' DemandedElts value, so the function is active and behaves exactly as it used to.
  llvm-svn: 361773
* [SelectionDAG] GetDemandedBits - cleanup to more closely match SimplifyDemandedBits. NFCI. (Simon Pilgrim, 2019-05-26, 1 file changed, -16/+21)
  Prep work before adding demanded elts support.
  llvm-svn: 361739
* [SelectionDAG] MaskedValueIsZero - add demanded elements implementation (Simon Pilgrim, 2019-05-26, 1 file changed, -2/+15)
  Will be used in an upcoming patch, but I've updated the original implementation to call this to ensure test coverage.
  llvm-svn: 361738
* [SelectionDAG] computeKnownBits - support constant pool values from target (Simon Pilgrim, 2019-05-24, 1 file changed, -2/+53)
  This patch adds the overridable TargetLowering::getTargetConstantFromLoad function, which allows targets to return any constant value loaded by a LoadSDNode node - only X86 makes use of this so far, but everything should be in place for other targets.
  computeKnownBits then uses this function to improve codegen, notably vector code after legalization.
  A future commit will do the same for ComputeNumSignBits, but computeKnownBits sees the bigger benefit.
  This required a couple of fixes:
  * SimplifyDemandedBits must early-out for getTargetConstantFromLoad cases to prevent infinite loops of constant regeneration (similar to what we already do for BUILD_VECTOR).
  * Fix a DAGCombiner::visitTRUNCATE issue as we had trunc(shl(v8i32),v8i16) <-> shl(trunc(v8i16),v8i32) infinite loops after legalization on AVX512 targets.
  Differential Revision: https://reviews.llvm.org/D61887
  llvm-svn: 361620
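  To give a feel for the hook described above, here is a deliberately simplified sketch of the kind of check a target override could perform; the in-tree X86 implementation is more thorough, and the function name and logic below are illustrative assumptions only.

      #include "llvm/CodeGen/SelectionDAGNodes.h"
      using namespace llvm;

      // Sketch: expose the IR-level Constant behind a load whose address is a
      // plain constant-pool entry, so computeKnownBits can look through it.
      static const Constant *constantBehindLoad(const LoadSDNode *LD) {
        if (auto *CP = dyn_cast<ConstantPoolSDNode>(LD->getBasePtr()))
          if (!CP->isMachineConstantPoolEntry())
            return CP->getConstVal();
        return nullptr;
      }

  A production override would also need to verify that the load is simple and reads the full pool entry, which is related to why the commit needed the SimplifyDemandedBits early-out mentioned above.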
* [SelectionDAG] fold insert subvector of undef into undef (Sanjay Patel, 2019-05-21, 1 file changed, -0/+3)
  DAGCombiner simplifies this more liberally as:
    // If inserting an UNDEF, just return the original vector.
    if (N1.isUndef()) return N0;
  So there's no way to make this visible in output AFAIK, but doing this at node creation time should be slightly more efficient.
  llvm-svn: 361287
* [DebugInfoMetadata] Refactor DIExpression::prepend constants (NFC) (Petar Jovanovic, 2019-05-20, 1 file changed, -3/+2)
  Refactor DIExpression::With* into a flag enum in order to be less error-prone to use (as discussed on D60866).
  Patch by Djordje Todorovic.
  Differential Revision: https://reviews.llvm.org/D61943
  llvm-svn: 361137
* [codeview] Fix SDNode representation of annotation labels (Reid Kleckner, 2019-05-15, 1 file changed, -1/+2)
  Before this change, they were erroneously constructed with the EH_LABEL SDNode opcode, which caused other passes to interact with them in incorrect ways. See the FIXME about fastisel that this addresses in the existing test case.
  Fixes PR41890
  llvm-svn: 360818
* Add constrained fptrunc and fpext intrinsics. (Kevin P. Neal, 2019-05-13, 1 file changed, -1/+16)
  The new fptrunc and fpext intrinsics are constrained versions of the regular fptrunc and fpext instructions.
  Reviewed by: Andrew Kaylor, Craig Topper, Cameron McInally, Conner Abbot
  Approved by: Craig Topper
  Differential Revision: https://reviews.llvm.org/D55897
  llvm-svn: 360581
* [SelectionDAG] fold 'fneg undef' to undef (Sanjay Patel, 2019-05-08, 1 file changed, -0/+4)
  This is extracted from the original draft of D61419 with some additional tests.
  We don't currently get this in IR (it's conservatively turned into a NaN), but presumably that'll get updated as we add real IR support for 'fneg' rather than 'fsub -0.0, x'.
  The x86-32 run shows the following, and I haven't looked further to see why, but that seems to be independent:
    Legalizing: t1: f32 = undef
    Trying to expand node
    Creating fp constant: t4: f32 = ConstantFP<0.000000e+00>
  Differential Revision: https://reviews.llvm.org/D61516
  llvm-svn: 360296
* [SelectionDAG] Use any_of/all_of where possible. NFCI. (Simon Pilgrim, 2019-05-05, 1 file changed, -14/+4)
  llvm-svn: 359974
* [SelectionDAG] CreateTopologicalOrder - don't use iterator (Simon Pilgrim, 2019-05-03, 1 file changed, -10/+6)
  We shouldn't use an iterator to loop across a std::vector when the same loop is adding elements to that std::vector.
  Found by cppcheck.
  llvm-svn: 359900
* [SelectionDAG] Use INT_MIN as (1 << 31) is UB for signed integers. NFCI. (Simon Pilgrim, 2019-05-03, 1 file changed, -2/+2)
  llvm-svn: 359873
* [SelectionDAG] computeKnownBits - remove some duplicate/shadow variables. NFCI. (Simon Pilgrim, 2019-05-03, 1 file changed, -6/+4)
  llvm-svn: 359872
* [SelectionDAG] Add asserts to verify the vectorness of input and output types of TRUNCATE/ZERO_EXTEND/ANY_EXTEND/SIGN_EXTEND agree (Craig Topper, 2019-05-02, 1 file changed, -0/+12)
  As a result of the underlying cause of PR41678, we created an ANY_EXTEND node with a scalar result type and a v1i1 input type. Ideally we would have asserted for this instead of letting it go through to instruction selection and generate bad machine IR.
  Differential Revision: https://reviews.llvm.org/D61463
  llvm-svn: 359836
* [SelectionDAG] remove constant folding limitations based on FP exceptions (Sanjay Patel, 2019-05-02, 1 file changed, -26/+16)
  We don't have FP exception limits in the IR constant folder for the binops (apart from strict ops), so it does not make sense to have them here in the DAG either. Nothing else in the backend tries to preserve exceptions (again outside of strict ops), so I don't see how this could have ever worked for real code that cares about FP exceptions.
  There are still cases (examples: unary opcodes in SDAG, FMA in IR) where we are trying (at least partially) to preserve exceptions without even asking if the target supports FP exceptions. Those should be corrected in subsequent patches.
  Real support for FP exceptions requires several changes to handle the constrained/strict FP ops.
  Differential Revision: https://reviews.llvm.org/D61331
  llvm-svn: 359791
* DAG: allow DAG pointer size different from memory representation. (Tim Northover, 2019-05-01, 1 file changed, -0/+12)
  In preparation for supporting ILP32 on AArch64, this modifies the SelectionDAG builder code so that pointers are allowed to have a larger type when "live" in the DAG compared to memory. Pointers get zero-extended whenever they are loaded, and truncated prior to stores.
  In addition, a few not quite so obvious locations need updating:
  * A GEP that has not been marked inbounds needs to enforce the IR-documented 2s-complement wrapping at the memory pointer size. Inbounds GEPs are undefined if they overflow the address space, so no additional operations are needed.
  * Signed comparisons would give incorrect results if performed on the zero-extended values.
  This shouldn't affect CodeGen for now, but will become active when the AArch64 ILP32 support is committed.
  llvm-svn: 359676
* [SelectionDAG] remove div-by-zero constant folding restriction (Sanjay Patel, 2019-04-30, 1 file changed, -7/+3)
  We don't have this restriction in IR, so it should not be here either, simply out of consistency. Code that wants to handle FP exceptions is expected to use the 'strict' variants of these nodes.
  We don't get the frem case because frem by 0.0 produces NaN (invalid), and that's the remaining check here (so the removed check for frem was dead code AFAIK).
  This is the only place in SDAG that uses "HasFPExceptions", so I think we should remove that entirely as a follow-up patch.
  llvm-svn: 359566
* [TargetLowering] findOptimalMemOpLowering. NFCI. (Sjoerd Meijer, 2019-04-30, 1 file changed, -123/+18)
  This was a local static function in SelectionDAG, which I've promoted to TargetLowering so that I can reuse it to estimate the cost of a memory operation in D59787.
  Differential Revision: https://reviews.llvm.org/D59766
  llvm-svn: 359543
* [TargetLowering] Change getOptimalMemOpType to take a function attribute list (Sjoerd Meijer, 2019-04-30, 1 file changed, -1/+2)
  The MachineFunction wasn't used in getOptimalMemOpType, but more importantly, this allows reuse of findOptimalMemOpLowering, which calls getOptimalMemOpType.
  This is the groundwork for the changes in D59766 and D59787, which allow implementation of TTI::getMemcpyCost.
  Differential Revision: https://reviews.llvm.org/D59785
  llvm-svn: 359537
* [SelectionDAG] move splat util functions up from x86 lowering (Sanjay Patel, 2019-04-22, 1 file changed, -0/+52)
  This was supposed to be NFC, but the change in SDLoc definitions causes instruction scheduling changes.
  There's nothing x86-specific in this code, and it can likely be used from DAGCombiner's simplifyVBinOp().
  llvm-svn: 358930
* [SelectionDAG] soften splat mask assert/unreachable (PR41535) (Sanjay Patel, 2019-04-19, 1 file changed, -1/+4)
  These are general queries, so they should not die when given a degenerate input like an all-undef mask. Callers should be able to deal with an op that will eventually be simplified away.
  llvm-svn: 358761
* DAG: propagate whether an arg is a pointer for CallingConv decisions. (Tim Northover, 2019-04-15, 1 file changed, -5/+8)
  The arm64_32 ABI specifies that pointers (despite being 32 bits) should be zero-extended to 64 bits when passed in registers for efficiency reasons. This means that the SelectionDAG needs to be able to tell the backend that an argument was originally a pointer, which is implemented here.
  Additionally, some memory intrinsics need to be declared as taking an i8* instead of an iPTR.
  There should be no CodeGen change yet, but it will be triggered when AArch64 backend support for ILP32 is added.
  llvm-svn: 358398
* [SelectionDAG] Use KnownBits::computeForAddSub/computeForAddCarry (Bjorn Pettersson, 2019-04-15, 1 file changed, -58/+21)
  Summary: Use KnownBits::computeForAddSub/computeForAddCarry in SelectionDAG::computeKnownBits when doing value tracking for addition/subtraction. This should improve the precision of the known bits, as we only used to make a simple estimate of known zeroes. The KnownBits support functions are also able to deduce bits that are known to be one in the result.
  Reviewers: spatel, RKSimon, nikic, lebedev.ri
  Reviewed By: nikic
  Subscribers: nikic, javed.absar, lebedev.ri, hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D60460
  llvm-svn: 358372
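  A small sketch of the kind of call this commit moves computeKnownBits onto; the surrounding value-tracking code is omitted, and the KnownBits signature shown reflects the interface of that era, so treat it as an approximation rather than the committed code.

      #include "llvm/Support/KnownBits.h"
      using namespace llvm;

      // Sketch: derive the known bits of an ADD from the known bits of its
      // two operands; NSW information can further tighten the result.
      static KnownBits knownBitsOfAdd(const KnownBits &LHS, const KnownBits &RHS,
                                      bool NSW) {
        return KnownBits::computeForAddSub(/*Add=*/true, NSW, LHS, RHS);
      }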
* Revert rL357745: [SelectionDAG] Compute known bits of CopyFromReg (David Green, 2019-04-10, 1 file changed, -20/+0)
  Certain optimisations from ConstantHoisting and CGP rely on Selection DAG not seeing through to the constant in other blocks. Revert this patch while we come up with a better way to handle that.
  I will try to follow this up with some better tests.
  llvm-svn: 358113
* [SelectionDAG] Add fcmp UNDEF handling to SelectionDAG::FoldSetCC (Simon Pilgrim, 2019-04-05, 1 file changed, -3/+8)
  Second half of PR40800: this patch adds DAG undef handling to fcmp instructions to match the behavior in llvm::ConstantFoldCompareInstruction, which permits constant folding of vector comparisons where some elements had been reduced to UNDEF (by SimplifyDemandedVectorElts etc.).
  This involves a lot of tweaking to reduced tests, as bugpoint loves to reduce fcmp arguments to undef...
  Differential Revision: https://reviews.llvm.org/D60006
  llvm-svn: 357765