path: root/llvm/lib/CodeGen
Commit log (newest first). Each entry lists the commit message, author, date, and diffstat (files changed, lines -removed/+added).
* [MIPS GlobalISel] Select G_UADDO (Petar Avramovic, 2019-02-26; 1 file, -0/+12)
  Lower G_UADDO. Legalize G_UADDO for MIPS32.
  Differential Revision: https://reviews.llvm.org/D58671
  llvm-svn: 354900
* [DAG] Fix constant store folding to handle non-byte sizes. (Nirav Dave, 2019-02-26; 2 files, -12/+14)
  Avoid crashes from zero-byte values due to sub-byte store sizes.
  Reviewers: uabelho, courbet, rnk
  Reviewed By: courbet
  Subscribers: hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D58626
  llvm-svn: 354884
* [LegalizeDAG] Use APInt::getSplat helper to create bitreverse masks. NFCI. (Simon Pilgrim, 2019-02-26; 1 file, -10/+6)
  llvm-svn: 354867
* [LegalizeDAG] Expand SADDO/SSUBO using SADDSAT/SSUBSAT (PR37763) (Simon Pilgrim, 2019-02-26; 1 file, -5/+17)
  If SADDSAT/SSUBSAT are legal, then we can expand SADDO/SSUBO by performing an ADD/SUB and a SADDSAT/SSUBSAT and then comparing the results.
  I looked at doing this for UADDO/USUBO as well, but as we don't have to do as many range comparisons there I didn't see much benefit.
  Differential Revision: https://reviews.llvm.org/D58637
  llvm-svn: 354866
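As a plain illustration (standalone C++, not the LegalizeDAG code itself) of the expansion described above: the overflow flag of a signed add is set exactly when the wrapping add and the saturating add disagree.

```cpp
#include <cstdint>
#include <iostream>
#include <limits>

// Saturating signed 32-bit add, standing in for ISD::SADDSAT.
static int32_t saddsat(int32_t A, int32_t B) {
  int64_t Wide = int64_t(A) + int64_t(B);
  if (Wide > std::numeric_limits<int32_t>::max())
    return std::numeric_limits<int32_t>::max();
  if (Wide < std::numeric_limits<int32_t>::min())
    return std::numeric_limits<int32_t>::min();
  return int32_t(Wide);
}

// SADDO expansion sketch: do a wrapping add and a saturating add;
// overflow occurred exactly when the two results differ.
static void saddo(int32_t A, int32_t B, int32_t &Result, bool &Overflow) {
  int32_t Wrapped = int32_t(uint32_t(A) + uint32_t(B)); // plain ADD (wraps)
  Result = Wrapped;
  Overflow = (Wrapped != saddsat(A, B));                // compare with SADDSAT
}

int main() {
  int32_t R;
  bool Ov;
  saddo(2147483640, 100, R, Ov);
  std::cout << R << " overflow=" << Ov << "\n"; // overflow=1
  return 0;
}
```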
* [CodeView] Emit HasConstructorOrDestructor class option for non-trivial constructors (Aaron Smith, 2019-02-26; 1 file, -4/+12)
  Reviewers: zturner, rnk, llvm-commits, aleksandr.urakov
  Reviewed By: zturner, rnk
  Subscribers: jdoerfert, majnemer, asmith
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D44406
  llvm-svn: 354841
* RegBankSelect: Handle slightly more complex value mappings (Matt Arsenault, 2019-02-25; 1 file, -12/+16)
  Try to use concat_vectors. Also remove unnecessary assert on pointers.
  Fixes asserting for <4 x s16> operations and 64-bit pointers for AMDGPU.
  llvm-svn: 354828
* RegisterScavenger: Allow fail without spill (Matt Arsenault, 2019-02-25; 1 file, -15/+23)
  AMDGPU wants to use this in some contexts where the spilling is either impossible, or a worse alternative to doing something else.
  llvm-svn: 354816
* Fix a sign compare warning breaking the -Werror build. (Andrea Di Biagio, 2019-02-25; 1 file, -1/+1)
  The warning was introduced at r354793.
  llvm-svn: 354810
* [SelectionDAG] Add demanded elts variants to isConstOrConstSplat helpers. NFCI. (Simon Pilgrim, 2019-02-25; 1 file, -37/+74)
  These helpers extend the existing isConstOrConstSplat checks to support DemandedElts masks as well.
  We already had a local version of this in SelectionDAG that computeKnownBits/ComputeNumSignBits made use of, but this adds the functionality directly to the BuildVectorSDNode node and extends isConstOrConstSplat etc. to use that. This will allow us to reuse the functionality in SimplifyDemandedVectorElts/SimplifyDemandedBits.
  Differential Revision: https://reviews.llvm.org/D58503
  llvm-svn: 354797
* [DAGCombine] Add undef shuffle elt support to partitionShuffleOfConcats (Simon Pilgrim, 2019-02-25; 1 file, -28/+29)
  Support undef shuffle mask indices in the shuffle(concat_vectors, concat_vectors) -> concat_vectors fold.
  Differential Revision: https://reviews.llvm.org/D58585
  llvm-svn: 354793
* [SelectionDAG] Add an OPC_CheckChild2CondCode to SelectionDAGISel to remove a MoveChild and MoveParent pair. (Craig Topper, 2019-02-25; 1 file, -0/+14)
  OPC_CheckCondCode is always used as operand 2 of a setcc, and it's always surrounded by a MoveChild2 and a MoveParent. By having a dedicated opcode for this case we can reduce the number of bytes needed for this pattern from 4 to 2. This saves ~3000 bytes in the X86 table.
  llvm-svn: 354763
* [LegalizeTypes][AArch64][X86] Make type legalization of vector (S/U)ADD/SUB/MULO follow getSetCCResultType for the overflow bits. Make UnrollVectorOverflowOp properly convert from scalar boolean contents to vector boolean contents. (Craig Topper, 2019-02-24; 2 files, -7/+17)
  Summary: When promoting the overflow vector for these ops we should use the target's desired setcc result type. This way a v8i32 result type will use a v8i32 overflow vector instead of a v8i16 overflow vector. A v8i16 overflow vector would force LegalizeDAG/LegalizeVectorOps to use v8i32 and truncate to v8i16 in its expansion. By doing this in type legalization instead, we get the truncate into the DAG earlier and give DAG combine more of a chance to optimize it.
  We also have to fix unrolling to use the scalar setcc result type for the scalarized operation, and convert it to the required vector element type after the scalar operation. We have to observe the vector boolean contents when doing this conversion. The previous code was just taking the scalar result and putting it in the vector. But for X86 and AArch64 that would have only put the boolean value in bit 0 of the element and left all other bits in the element zero. We need to ensure all bits in the element are the same. I'm using a select with constants here because that's what setcc unrolling in LegalizeVectorOps used.
  Reviewers: spatel, RKSimon, nikic
  Reviewed By: nikic
  Subscribers: javed.absar, kristof.beyls, dmgreen, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D58567
  llvm-svn: 354753
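A small standalone C++ sketch (illustrative only, not the LegalizeTypes code) of the boolean-contents point above: when a target's vector booleans are 0 or all-ones per lane, the scalarized overflow bit has to be widened with a select between constants rather than stored into bit 0 of the lane.

```cpp
#include <cassert>
#include <cstdint>

// Widen a scalar boolean to a vector-boolean lane whose bits all agree
// (ZeroOrNegativeOneBooleanContent), expressed as a select between constants.
static int16_t scalarBoolToVectorLane(bool Overflowed) {
  return Overflowed ? int16_t(-1) : int16_t(0); // all ones vs. all zeros
}

int main() {
  // Just storing the i1 into the lane would give 0x0001, whose upper bits
  // are wrong for targets that test the lane's sign bit.
  assert(scalarBoolToVectorLane(true) == int16_t(-1));
  assert(scalarBoolToVectorLane(false) == 0);
  return 0;
}
```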
* [CGP] add special-cases to form unsigned add with overflow (PR40486) (Sanjay Patel, 2019-02-24; 1 file, -8/+27)
  There's likely a missed IR canonicalization for at least 1 of these patterns. Otherwise, we wouldn't have needed the pattern-matching enhancement in D57516.
  Note that -- unlike usubo added with D57789 -- the TLI hook for this transform defaults to 'on'. So if there's any perf fallout from this, targets should look at how they're lowering the uaddo node in SDAG and/or override that hook. The x86 diffs suggest that there's some missing pattern-matching for forming inc/dec.
  This should fix the remaining known problems in:
  https://bugs.llvm.org/show_bug.cgi?id=40486
  https://bugs.llvm.org/show_bug.cgi?id=31754
  llvm-svn: 354746
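For background, a minimal C++ sketch (an assumed canonical shape, not code from the patch) of the kind of unsigned-overflow check that CGP wants to collapse into a single uadd.with.overflow node:

```cpp
#include <cstdint>
#include <cstdio>

// Classic unsigned add-with-overflow check: the wrapped sum is smaller than
// an operand exactly when the addition overflowed. Patterns of this shape
// are what the uaddo matching targets.
static bool addOverflows(uint32_t A, uint32_t B, uint32_t &Sum) {
  Sum = A + B;    // wraps modulo 2^32
  return Sum < A; // true iff the add overflowed
}

int main() {
  uint32_t S;
  bool Ov = addOverflows(0xFFFFFFF0u, 0x20u, S);
  std::printf("sum=%u overflow=%d\n", S, Ov); // sum=16 overflow=1
  return 0;
}
```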
* [TwoAddressInstructionPass] After commuting an instruction and before trying to look for more commutable operands, resample the number of operands. (Craig Topper, 2019-02-23; 1 file, -0/+5)
  The new instruction might have fewer operands than the original instruction. If we don't resample, the next loop iteration might read an operand that doesn't exist. X86 can commute blends to movss/movsd, which reduces from 4 operands to 3. This happened in the test case that caused r354363 & company to be reverted. A reduced version of that has been committed here.
  Really, this whole check for more commutable operands is a little fragile. It assumes that the new instruction's operands are in the same order and positions as the original's, except for the pair that was swapped. I don't know of anything that breaks this assumption today, but I've left a FIXME. Fixing this will likely require an interface change.
  llvm-svn: 354738
* Recommit r354647 and r354648 "[LegalizeTypes] When promoting the result of EXTRACT_SUBVECTOR, also check if the input needs to be promoted. Use that to determine the element type to extract" (Craig Topper, 2019-02-23; 1 file, -3/+7)
  r354648 was a follow-up to fix a regression: "[X86] Add a DAG combine for (aext_vector_inreg (aext_vector_inreg X)) -> (aext_vector_inreg X) to fix a regression from my previous commit."
  These were reverted in r354713 as their context depended on other patches that were reverted for a bug.
  llvm-svn: 354734
* [NFC] Fix typos: preceeding -> preceding (Jordan Rupprecht, 2019-02-23; 2 files, -5/+5)
  llvm-svn: 354715
* Revert r354363 & co "[X86][SSE] Generalize X86ISD::BLENDI support to more value types" (Reid Kleckner, 2019-02-23; 1 file, -7/+3)
  r354363 caused https://crbug.com/934963#c1, which has a plain C reduced test case.
  I also had to revert some dependent changes:
  - r354648
  - r354647
  - r354640
  - r354511
  llvm-svn: 354713
* [LegalizeTypes] Use PromoteTargetBoolean in PromoteIntOp_ADDSUBCARRY instead of reimplementing it. NFCI (Craig Topper, 2019-02-23; 1 file, -13/+1)
  llvm-svn: 354710
* Restore ability for C++ API users to enable IPRA. (Daniel Sanders, 2019-02-22; 1 file, -1/+1)
  Summary: Prior to r310876, one of our out-of-tree targets was enabling IPRA by modifying TargetOptions::EnableIPRA. This no longer works on current trunk since the useIPRA() hook overrides any values that are set in advance. This patch adjusts the behaviour of the hook so that API users and useIPRA() can both enable IPRA, but useIPRA() cannot disable it if the API user already enabled it.
  Reviewers: arsenm
  Reviewed By: arsenm
  Subscribers: wdng, mgorny, llvm-commits
  Differential Revision: https://reviews.llvm.org/D38043
  llvm-svn: 354692
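A minimal sketch of the described behaviour, with invented names (TargetOptionsSketch, TargetSketch, initOptions are illustrative, not the real LLVM API): the useIPRA() hook is OR'd into the flag, so it can enable IPRA but cannot clear an enable already set by the API user.

```cpp
// Sketch of the merge semantics described in the commit message.
struct TargetOptionsSketch {
  bool EnableIPRA = false;
};

struct TargetSketch {
  bool useIPRA() const { return false; } // this target does not request IPRA
};

static void initOptions(TargetOptionsSketch &Opts, const TargetSketch &TM) {
  Opts.EnableIPRA |= TM.useIPRA(); // was: Opts.EnableIPRA = TM.useIPRA();
}

int main() {
  TargetOptionsSketch Opts;
  Opts.EnableIPRA = true; // C++ API user opts in before target initialization
  initOptions(Opts, TargetSketch{});
  return Opts.EnableIPRA ? 0 : 1; // stays enabled: the hook cannot disable it
}
```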
* [CGP] move overflow intrinsic insertion to common location; NFCI (Sanjay Patel, 2019-02-22; 1 file, -17/+28)
  We need to enhance the uaddo matching to handle special-cases as seen in PR40486 and PR31754. That means we won't necessarily have a def-use pattern, so we'll need to check dominance to determine where to place the intrinsic (as we already do for usubo).
  This preliminary patch is just rearranging the code, so the planned follow-up to improve uaddo will be more clear.
  llvm-svn: 354689
* MIR: Preserve incoming frame index numbers (Matt Arsenault, 2019-02-22; 1 file, -4/+4)
  Don't skip incrementing the frame index number if the object is dead. Instructions can still be referencing the old frame index number, and this doesn't attempt to remap those. The resulting MIR then fails to load because the use instructions reference a higher frame index number than the recorded list of stack objects.
  I'm not sure it's possible to craft a testcase with the existing set of passes. It requires selectively marking some stack objects dead in an essentially random order. StackSlotColoring condenses towards the low indexes.
  This avoids a regression in a future AMDGPU commit when some frame indexes are lowered separately from PEI.
  llvm-svn: 354688
* CodeGen: Make RegAllocRegistry a template class (Matt Arsenault, 2019-02-22; 1 file, -4/+0)
  Will allow re-using the machinery for independent sets of register allocators. This will allow AMDGPU to use separate command line options for the allocator to use for SGPRs separate from VGPRs.
  llvm-svn: 354687
* [MBP] Factor out function hasViableTopFallthrough and enhancement (Guozhi Wei, 2019-02-22; 1 file, -9/+36)
  This patch factors out the function hasViableTopFallthrough from rotateLoop and also enhances it. The original code only checked whether there is a block that can be placed before the current loop top. This patch also checks whether the loop top is the most likely successor of its predecessor. The attached test case shows the effect.
  Differential Revision: https://reviews.llvm.org/D58393
  llvm-svn: 354682
* Disable big-endian constant store merges from rL354676. (Nirav Dave, 2019-02-22; 1 file, -10/+11)
  llvm-svn: 354677
* [DAGCombine] Fold overlapping constant stores (Nirav Dave, 2019-02-22; 2 files, -3/+28)
  Fold a smaller constant store into larger constant stores immediately preceding it.
  Reviewers: rnk, courbet
  Subscribers: javed.absar, hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D58468
  llvm-svn: 354676
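A hypothetical C++ example (not from the patch) of the pattern this fold targets: a narrow constant store that overlaps a wider constant store immediately preceding it can be rewritten as a single wide store with a merged constant (note the commit directly above disables this on big-endian targets).

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

// Before the fold: two overlapping constant stores.
static void beforeFold(uint8_t *P) {
  const uint32_t Wide = 0x11223344u;
  std::memcpy(P, &Wide, 4); // 4-byte constant store
  P[0] = 0xAA;              // 1-byte constant store overlapping byte 0
}

// After the fold: one wide store with the merged constant.
// (Byte layout assumes a little-endian host.)
static void afterFold(uint8_t *P) {
  const uint32_t Merged = 0x112233AAu;
  std::memcpy(P, &Merged, 4);
}

int main() {
  uint8_t A[4], B[4];
  beforeFold(A);
  afterFold(B);
  std::printf("%d\n", std::memcmp(A, B, 4) == 0); // prints 1 on little-endian
  return 0;
}
```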
* [LegalizeVectorOps] Improve the placement of ANDs in the ExpandLoad path for non-byte-sized loads. (Craig Topper, 2019-02-22; 1 file, -6/+7)
  When we need to merge two adjacent loads, the AND mask for the low piece was still sized for the full source element size, but we didn't have that many bits: the upper bits are already zero due to the SRL. So we can skip that AND if we're going to combine with the high bits.
  We do need an AND to clear out any bits from the high part. We were ANDing the high part before combining with the low part, but it looks like ANDing after the OR gets better results. So we can just emit the final AND after the optional concatenation is done. That handles the skipping before the OR and gets rid of the extra high bits after the OR.
  llvm-svn: 354655
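A simplified scalar C++ sketch (made-up bit widths, not the LegalizeVectorOps code) of the load-merging shape described above: the low byte is shifted right, the high byte is OR'd in, and the single masking AND is emitted after the OR.

```cpp
#include <cassert>
#include <cstdint>

// Extract a 6-bit element that starts at bit 5 of memory loaded as two
// separate bytes. The low byte is shifted right (its upper bits are then
// already zero), the high byte is shifted left and OR'd in, and one AND
// with the element mask is applied after the OR.
static uint32_t extract6BitElt(uint8_t LoByte, uint8_t HiByte) {
  const unsigned BitOffset = 5;  // element starts at bit 5
  const uint32_t EltMask = 0x3F; // 6-bit element
  uint32_t Lo = uint32_t(LoByte) >> BitOffset;                    // SRL
  uint32_t Combined = Lo | (uint32_t(HiByte) << (8 - BitOffset)); // OR in high bits
  return Combined & EltMask;                                      // single AND, after the OR
}

int main() {
  // 16-bit value 0x06A0: element bits [10:5] are 0b110101 = 0x35.
  uint16_t Mem = 0x06A0;
  uint8_t Lo = Mem & 0xFF, Hi = Mem >> 8;
  assert(extract6BitElt(Lo, Hi) == 0x35);
  return 0;
}
```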
* [LegalizeVectorOps] Simplify the non-byte sized load handling in VectorLegalizer::ExpandLoad. NFCI (Craig Topper, 2019-02-22; 1 file, -11/+8)
  Remove an if that should always be true. Merge the body of another into the only block that could make the if true.
  llvm-svn: 354654
* DAG: Add helper for creating shifts with correct type (Matt Arsenault, 2019-02-22; 2 files, -1/+8)
  llvm-svn: 354649
* [LegalizeTypes] When promoting the result of EXTRACT_SUBVECTOR, also check if the input needs to be promoted. Use that to determine the element type to extract. (Craig Topper, 2019-02-22; 1 file, -3/+7)
  Otherwise we end up creating extract_vector_elts that then each need to have their input promoted. That can lead to truncates needing to be emitted for each of those. But we already emitted any_extends when we legalized the extract_subvector, so now we have pairs of any_extend+trunc that partially cancel, and depending on how DAGCombiner visits them we can get weird results. By promoting the input at the same time we can create only a single any_extend or truncate.
  There's one regression in the vector-narrow-binop.ll case, but that looks easy to fix with a follow-up patch.
  llvm-svn: 354647
* [DAGCombiner] prevent infinite looping by truncating 'and' (PR40793) (Sanjay Patel, 2019-02-21; 1 file, -2/+3)
  This fold can occur during legalization, so it can fight with promotion to the larger type. It apparently takes a special sequence and subtarget to avoid more basic simplifications that would hide the problem.
  But there's a bigger question raised here: why does distributeTruncateThroughAnd() even exist? It duplicates functionality from a more minimal pattern that we already have. But getting rid of this function requires some preliminary steps.
  https://bugs.llvm.org/show_bug.cgi?id=40793
  llvm-svn: 354594
* RegBankSelect: Allow targets to introduce control flow for mapping (Matt Arsenault, 2019-02-21; 1 file, -0/+13)
  For AMDGPU, if an operand requires an SGPR but is only available as a VGPR, a loop needs to be introduced to execute the instruction with each unique combination of values across all lanes. The rest of the instructions in the block will be moved to a new block following the loop. Check if the next instruction's parent changed, and update the iterators and insertion block if this happened.
  Tests will be included in a future patch.
  llvm-svn: 354591
* Re-land part of r354244 "[DAGCombiner] Eliminate dead stores to stack." (Clement Courbet, 2019-02-21; 3 files, -10/+44)
  This part introduces the lifetime node.
  llvm-svn: 354578
* Add skipFunction to PostRA machine sinking pass. (Xin Tong, 2019-02-21; 1 file, -0/+3)
  Summary: Add skipFunction to PostRA machine sinking pass.
  Reviewers: junbuml
  Subscribers: arsenm, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D57847
  llvm-svn: 354541
* [CGP] match a special-case of unsigned subtract overflow (Sanjay Patel, 2019-02-20; 1 file, -0/+5)
  This is the 'sub0' (negate) pattern from PR31754:
  https://bugs.llvm.org/show_bug.cgi?id=31754
  llvm-svn: 354519
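For reference, a standalone C++ sketch (illustrative arithmetic, not the CGP matcher) of what a negate-style usubo computes: 0 - x borrows exactly when x is nonzero, which is the relation usub.with.overflow expresses for this pattern.

```cpp
#include <cassert>
#include <cstdint>

// Negate with unsigned borrow: 0 - X wraps, and the borrow/overflow bit
// is set exactly when X != 0.
static uint32_t negWithBorrow(uint32_t X, bool &Borrow) {
  Borrow = (X != 0);
  return 0u - X;
}

int main() {
  bool B;
  assert(negWithBorrow(0, B) == 0 && !B);
  assert(negWithBorrow(5, B) == 0xFFFFFFFBu && B);
  return 0;
}
```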
* [DAGCombine] Generalize Dead Store to overlapping stores. (Nirav Dave, 2019-02-20; 1 file, -14/+17)
  Summary: Remove stores that are immediately overwritten by larger stores.
  Reviewers: courbet, rnk
  Reviewed By: rnk
  Subscribers: javed.absar, hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D58467
  llvm-svn: 354518
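An illustrative C++ snippet (hypothetical, not taken from the patch) of a store this generalization can now remove: the narrow store is fully overwritten by the larger store that immediately follows, so it is dead.

```cpp
#include <cstdint>
#include <cstring>

// The byte store to P[2] is completely covered by the following 4-byte
// store to the same buffer, so it can be deleted as a dead store.
static void deadNarrowStore(uint8_t *P) {
  P[2] = 0x7F; // dead: overwritten by the wider store below
  const uint32_t Wide = 0xDEADBEEFu;
  std::memcpy(P, &Wide, 4); // larger store covering bytes [0, 4)
}

int main() {
  uint8_t Buf[4] = {};
  deadNarrowStore(Buf);
  return 0;
}
```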
* [SelectionDAG] Teach GetDemandedBits to look at the known zeros of the LHS when handling ISD::AND (Craig Topper, 2019-02-20; 1 file, -3/+7)
  If the LHS has known zeros, then the RHS immediate mask might have been simplified to remove those bits. This patch adds a call to computeKnownBits to get the known zeros to handle that possibility. I left an early-out to skip the call if all of the demanded bits are set in the mask.
  Differential Revision: https://reviews.llvm.org/D58464
  llvm-svn: 354514
* [SDAG] Support vector UMULO/SMULO (Nikita Popov, 2019-02-20; 4 files, -19/+89)
  Second part of https://bugs.llvm.org/show_bug.cgi?id=40442.
  This adds an extra UnrollVectorOverflowOp() method to SDAG, because the general UnrollOverflowOp() method can't deal with multiple results.
  Additionally we need to expand UMULO/SMULO during vector op legalization, as it may result in unrolling, which may need additional type legalization.
  Differential Revision: https://reviews.llvm.org/D57997
  llvm-svn: 354513
* Revert r354498 "[X86] Add test case to show missed opportunity to remove an explicit AND on the bit position from BT when it has known zeros." (Craig Topper, 2019-02-20; 1 file, -7/+3)
  I accidentally committed more than just the test.
  llvm-svn: 354499
* [X86] Add test case to show missed opportunity to remove an explicit AND on the bit position from BT when it has known zeros. (Craig Topper, 2019-02-20; 1 file, -3/+7)
  If the bit position has known zeros in it, then the AND immediate will likely be optimized to remove bits. This can prevent GetDemandedBits from recognizing that the AND is unnecessary.
  llvm-svn: 354498
* GlobalISel: Fix fewerElementsVector for ctlz with different result type (Matt Arsenault, 2019-02-20; 1 file, -1/+5)
  Also complete the set of related operations.
  llvm-svn: 354480
* GlobalISel: Implement moreElementsVector for g_insert results (Matt Arsenault, 2019-02-20; 1 file, -0/+8)
  llvm-svn: 354477
* Re-land the refactoring part of r354244 "[DAGCombiner] Eliminate dead stores to stack." (Clement Courbet, 2019-02-20; 2 files, -35/+80)
  This is an NFC.
  llvm-svn: 354476
* [Codegen] Remove dead flags on Physical Defs in machine cse (David Green, 2019-02-20; 1 file, -19/+24)
  We may leave behind incorrect dead flags on instructions that are CSE'd. Make sure we remove the dead flags on physical registers to prevent other incorrect code motion.
  Differential Revision: https://reviews.llvm.org/D58115
  llvm-svn: 354443
* [RegAllocGreedy] Take last chance recoloring into account in split and assign (Mikael Holmen, 2019-02-20; 1 file, -12/+16)
  Summary: This is a follow-up to r353988, where tryEvict was extended to take last chance recoloring into account. Now we do the same thing for trySplit and tryAssign.
  We also now always pass a "FixedRegisters" argument to canEvictInterference and tryEvict, so it doesn't need to have a default value anymore.
  The need for this was found long ago in an out-of-tree target. Unfortunately I don't have a reproducer for an in-tree target.
  Reviewers: qcolombet, rudkx
  Reviewed By: qcolombet, rudkx
  Subscribers: rudkx, MatzeB, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D58376
  llvm-svn: 354439
* [NFC] add/modify wrapper function for findRegisterDefOperand(). (Chen Zheng, 2019-02-20; 3 files, -3/+4)
  llvm-svn: 354438
* [WebAssembly] Update MC for bulk memory (Thomas Lively, 2019-02-19; 1 file, -3/+14)
  Summary: Rename MemoryIndex to InitFlags and implement logic for determining data segment layout in ObjectYAML and MC. Also adds a "passive" flag for the .section assembler directive, although this cannot be assembled yet because the assembler does not support data sections.
  Reviewers: sbc100, aardappel, aheejin, dschuff
  Subscribers: jgravelle-google, hiraditya, sunfish, rupprecht, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D57938
  llvm-svn: 354397
* [SDAG] Use shift amount type in MULO promotion; NFC (Nikita Popov, 2019-02-19; 1 file, -2/+4)
  Directly use the correct shift amount type if it is possible, and future-proof the code against vectors. The added test makes sure that bitwidths that do not fit into the shift amount type do not assert.
  Split out from D57997.
  llvm-svn: 354359
* GlobalISel: Implement moreElementsVector for select (Matt Arsenault, 2019-02-19; 1 file, -0/+12)
  llvm-svn: 354354
* GlobalISel: Implement moreElementsVector for G_EXTRACT source (Matt Arsenault, 2019-02-19; 1 file, -0/+7)
  llvm-svn: 354348
* GlobalISel: Implement moreElementsVector for bit ops (Matt Arsenault, 2019-02-19; 1 file, -0/+40)
  llvm-svn: 354345