path: root/llvm/lib/CodeGen
...
* Added single use check to ShrinkDemandedConstant  (Stanislav Mekhanoshin, 2019-01-05; 1 file, -0/+3)
  Fixes the cvt_f32_ubyte combine: performCvtF32UByteNCombine() could shrink the
  source node to only its demanded bits even when that node had other uses.
  Differential Revision: https://reviews.llvm.org/D56289
  llvm-svn: 350475
* [X86] Add INSERT_SUBVECTOR to ComputeNumSignBits  (Craig Topper, 2019-01-04; 1 file, -1/+35)
  This adds support for calculating the sign bits of insert_subvector. I based it
  on the computeKnownBits handling. My motivating case is propagating sign bit
  information across basic blocks on AVX targets, where concatenating using
  insert_subvector is common.
  Differential Revision: https://reviews.llvm.org/D56283
  llvm-svn: 350432
* [DAGCombiner][x86] scalarize binop followed by extractelement  (Sanjay Patel, 2019-01-03; 1 file, -5/+44)
  As noted in PR39973 and D55558: https://bugs.llvm.org/show_bug.cgi?id=39973
  ...this is a partial implementation of a fold that we do as an IR canonicalization in instcombine:
    // extelt (binop X, Y), Index --> binop (extelt X, Index), (extelt Y, Index)
  We want to have this in the DAG too because, as we can see in some of the test
  diffs (reductions), the pattern may not be visible in IR. Given that this is
  already an IR canonicalization, any backend that would prefer a vector op over
  a scalar op is expected to already have the reverse transform in DAG lowering
  (not sure if that's a realistic expectation though). The transform is limited
  with a TLI hook because there's an existing transform in CodeGenPrepare that
  tries to do the opposite transform.
  Differential Revision: https://reviews.llvm.org/D55722
  llvm-svn: 350354
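As a source-level illustration of the fold (a hypothetical example using the Clang/GCC vector_size extension, not part of the patch): extracting one lane from the result of a vector binop only needs a single scalar op once the fold fires.

```cpp
// Hypothetical sketch, assuming the Clang/GCC vector_size extension:
// extract(add(a, b), 0) can be rewritten as add(extract(a, 0), extract(b, 0)),
// i.e. one scalar add instead of a full vector add.
typedef float v4f32 __attribute__((vector_size(16)));

float lane0_of_sum(v4f32 a, v4f32 b) {
  v4f32 sum = a + b;  // vector binop
  return sum[0];      // extractelement of the binop result
}
```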
* Revert "Resubmit rL345008 "Split MachinePipeliner code into header and cpp ↵Stefan Granitz2019-01-031-5/+595
| | | | | | | | files"" This reverts commit r350290. llvm-svn: 350345
* Resubmit rL345008 "Split MachinePipeliner code into header and cpp files"  (Lama Saba, 2019-01-03; 1 file, -595/+5)
  The original commit caused unclear failures in
  http://green.lab.llvm.org/green//job/lldb-cmake/; will revert if the error reappears.
  Differential Revision: https://reviews.llvm.org/D56084
  llvm-svn: 350290
* [CodeGen] Skip over dbg-instr in twoaddr pass  (Markus Lavin, 2019-01-03; 1 file, -3/+6)
  A DBG_VALUE between a two-address instruction and a following COPY would
  prevent the rescheduleMIBelowKill optimization inside TwoAddressInstructionPass.
  Differential Revision: https://reviews.llvm.org/D55987
  llvm-svn: 350289
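In general terms, the fix amounts to stepping over debug instructions when scanning between the two-address instruction and the kill. A minimal sketch of the idea, assuming the MachineInstr::isDebugInstr() predicate (not the exact code that landed):

```cpp
#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/MachineInstr.h"

using namespace llvm;

// Sketch only: advance past DBG_VALUE and other debug instructions so they do
// not block rescheduling; debug-only instructions must never change codegen.
static MachineBasicBlock::iterator
skipDebugInstrs(MachineBasicBlock::iterator I, MachineBasicBlock &MBB) {
  while (I != MBB.end() && I->isDebugInstr())
    ++I;
  return I;
}
```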
* [DAGCombiner] After performing the division by constant optimization for a DIV or REM node, replace the users of the corresponding REM or DIV node if it exists  (Craig Topper, 2019-01-02; 1 file, -2/+29)
  Currently we expand the two nodes separately. This gives the DAG combiner an
  opportunity to optimize the expanded sequence taking into account only one set
  of users. When we expand the other node we'll create the expansion again, but
  might not be able to optimize it the same way. So the nodes won't CSE and we'll
  have two similar sequences in the same basic block. By expanding both nodes at
  the same time we'll avoid prematurely optimizing the expansion until both the
  division and remainder have been replaced.
  Improves the test case from PR38217. There may be additional opportunities after this.
  Differential Revision: https://reviews.llvm.org/D56145
  llvm-svn: 350239
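A hypothetical source pattern, not from the patch, in which a DIV and a REM of the same value by the same constant both appear and benefit from being expanded together:

```cpp
#include <cstdint>
#include <utility>

// Hypothetical example: x / 7 and x % 7 are both expanded via the
// multiply-by-magic-constant trick. Expanding them together lets the
// remainder reuse the quotient (x % 7 == x - (x / 7) * 7) instead of
// producing two independently optimized sequences that no longer CSE.
std::pair<uint32_t, uint32_t> divmod7(uint32_t x) {
  return {x / 7, x % 7};
}
```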
* [LegalizeIntegerTypes] When promoting the result of an extract_vector_elt also promote the input type if necessary  (Craig Topper, 2019-01-02; 1 file, -2/+20)
  By also promoting the input type we get a better idea of what scalar type to
  use. This can provide better results if the result of the extract is sign
  extended. What was previously happening is that the extract result would be
  legalized; some time later the input of the sign extend would be legalized
  using the result of the extract. Then later the extract input would be
  legalized, forcing a truncate into the input of the sign extend via a
  replace-all-uses. This requires DAG combine to remove the sext/truncate pair,
  but sometimes we visited the truncate first and messed things up before the
  sext could be combined.
  By creating the extract with the correct scalar type when we legalize the
  result type, the truncate will be added right away. Then when the sign_extend
  input is legalized it will create an any_extend of the truncate, which can be
  optimized by getNode to maybe remove the truncate, and then a sign_extend_inreg.
  Now DAG combine doesn't have to worry about getting rid of the extend.
  This fixes the regression on X86 in D56156.
  Differential Revision: https://reviews.llvm.org/D56176
  llvm-svn: 350236
* [DAGCombiner][X86][PowerPC] Teach visitSIGN_EXTEND_INREG to fold (sext_in_reg (aext/sext x)) -> (sext x) when x has more than 1 sign bit and the sext_inreg is from one of them  (Craig Topper, 2019-01-02; 1 file, -2/+5)
  If x has multiple sign bits then it doesn't matter which one we extend from,
  so we can sext from x's msb instead.
  The X86 setcc-combine.ll changes are a little weird. It appears we ended up
  with a (sext_inreg (aext (trunc (extractelt)))) after type legalization. The
  sext_inreg+aext now gets optimized by this combine to leave
  (sext (trunc (extractelt))). Then we visit the trunc before we visit the sext.
  This ends up changing the truncate to an extractvectorelt from a bitcasted
  vector. I have a follow up patch to fix this.
  Differential Revision: https://reviews.llvm.org/D56156
  llvm-svn: 350235
* Reversing the commit in revision 350186. Revision causes regression in 4 tests.  (Ayonam Ray, 2019-01-01; 2 files, -33/+53)
  llvm-svn: 350187
* Omit range checks from jump tables when lowering switches with unreachable default  (Ayonam Ray, 2019-01-01; 2 files, -53/+33)
  During the lowering of a switch that would result in the generation of a jump
  table, a range check is performed before indexing into the jump table, in case
  the switch value lies outside the jump table range, and a conditional branch is
  inserted to jump to the default block. When the default block is unreachable,
  this conditional jump can be omitted. This patch implements omitting this
  conditional branch for unreachable defaults.
  Review Reference: D52002
  llvm-svn: 350186
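A hypothetical example, not from the patch, of a switch whose default is unreachable, so the bounds check guarding the jump table adds no value:

```cpp
// Hypothetical example: the caller guarantees 0 <= op <= 3 and marks the
// default unreachable, so the range check before the jump table can be
// dropped when this switch is lowered.
int dispatch(int op, int a, int b) {
  switch (op) {
  case 0: return a + b;
  case 1: return a - b;
  case 2: return a * b;
  case 3: return b ? a / b : 0;
  default: __builtin_unreachable();
  }
}
```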
* [SelectionDAG] Add SIGN_EXTEND_VECTOR_INREG support to computeKnownBits.  (Craig Topper, 2018-12-31; 1 file, -1/+9)
  Differential Revision: https://reviews.llvm.org/D56168
  llvm-svn: 350179
* [DAGCombiner] Add missing one use check on the shuffle in the bitcast(shuffle(bitcast(s0),bitcast(s1))) -> shuffle(s0,s1) transform.  (Craig Topper, 2018-12-31; 1 file, -1/+1)
  Found while trying out some other changes, so I don't really have a test case.
  llvm-svn: 350172
* [PowerPC] Fix ADDE, SUBE do not know how to promote operator  (Kang Zhang, 2018-12-30; 1 file, -0/+5)
  Summary: This patch fixes Bugzilla bug 39815:
  https://bugs.llvm.org/show_bug.cgi?id=39815
  It adds support for promoting the integer result of the ADDE and SUBE instructions.
  Reviewed By: hfinkel
  Differential Revision: https://reviews.llvm.org/D56119
  llvm-svn: 350161
* Add vtable anchor to classes.  (Richard Trieu, 2018-12-29; 2 files, -0/+6)
  llvm-svn: 350142
* [codeview] Check if the 'this' type of a method is a pointer  (Reid Kleckner, 2018-12-26; 2 files, -8/+19)
  Fixes a crash reported after r347354 for frontends that don't always emit
  'this' pointers for methods. Now we will silently produce debug info that makes
  functions like this look like static methods, which seems reasonable.
  llvm-svn: 350073
* [NVPTX] Allow libcalls that are defined in the current module.  (Justin Lebar, 2018-12-26; 1 file, -0/+26)
  The patch adds the possibility of making library calls on NVPTX.
  An important thing about library functions: they must be defined within the
  current module. This basically should guarantee that we produce valid PTX
  assembly (without calls to undefined functions). Whoever wants to use the
  libcalls will probably have to link against compiler-rt or some other
  implementation.
  Currently, it's completely impossible to make library calls because of the
  error "LLVM ERROR: Cannot select: i32 = ExternalSymbol '...'". But we can lower
  ExternalSymbol to TargetExternalSymbol and verify that the function definition
  is available.
  Also, there was an issue with the DAG during legalisation. When we expand an
  instruction into a libcall, the inner call-chain isn't being "integrated" into
  the outer chain. Since the last "data-flow" (call retval load) node is located
  earlier in the call-chain than the CALLSEQ_END node, the latter becomes a leaf
  and therefore a dead node (and is removed quite fast). The solution proposed
  here relies on additional data-flow pseudo nodes (ProxyReg) whose only purpose
  is to keep CALLSEQ_END alive through the legalisation and instruction selection
  phases; we remove the pseudo instructions before the register scheduling phase.
  Patch by Denys Zariaiev!
  Differential Revision: https://reviews.llvm.org/D34708
  llvm-svn: 350069
* [MIPS GlobalISel] Select G_SELECT  (Petar Avramovic, 2018-12-25; 1 file, -8/+11)
  Add widen scalar for type index 1 (i1 condition) for G_SELECT.
  Select G_SELECT for pointer, s32 (integer) and smaller low level types on MIPS32.
  Differential Revision: https://reviews.llvm.org/D56001
  llvm-svn: 350063
* [X86] Use GetDemandedBits to simplify the operands of PMULDQ/PMULUDQ.  (Craig Topper, 2018-12-24; 1 file, -0/+9)
  This is an alternative to what I attempted in D56057.
  GetDemandedBits is a special version of SimplifyDemandedBits that allows
  simplifications even when the operand has other uses. GetDemandedBits will only
  do simplifications that allow a node to be bypassed. It won't create new nodes
  or alter any of the other users.
  I had to add support for bypassing SIGN_EXTEND_INREG to GetDemandedBits.
  Based on a patch that Simon Pilgrim sent me in email.
  Fixes PR40142.
  llvm-svn: 350059
* Revert rL350048 and rL350050  (Max Kazantsev, 2018-12-24; 2 files, -16/+9)
  These patches have broken almost all buildbots on the test
  DebugInfo/X86/addr_comments.ll. Reverting to green.
  llvm-svn: 350052
* Fix build - follow-up to r350048 which broke headerless (v4) address pool  (David Blaikie, 2018-12-24; 2 files, -8/+10)
  llvm-svn: 350050
* DebugInfo: Use assembly label arithmetic for address pool size for easier reading/editing  (David Blaikie, 2018-12-24; 2 files, -5/+10)
  llvm-svn: 350048
* DebugInfo: Add assembly comments for debug_addr contribution header fields  (David Blaikie, 2018-12-24; 1 file, -0/+4)
  llvm-svn: 350047
* [SelectionDAGBuilder] Use ::precise LocationSizes; NFC  (George Burgess IV, 2018-12-24; 1 file, -11/+23)
  More migration so we can disable the implicit int -> LocationSize conversion.
  All of these are either scatter/gather'ed vector instructions or direct loads,
  hence they're all precise.
  Perhaps if we see many more getTypeStoreSize calls, we can add a
  getTypeStoreLocationSize (or similar) as a wrapper that applies this ::precise.
  It doesn't appear to be a good idea to make getTypeStoreSize return a
  LocationSize itself, however.
  llvm-svn: 350042
* [DAGCombiner] limit shuffle to extend transform (PR40146)  (Sanjay Patel, 2018-12-23; 1 file, -4/+5)
  It's dangerous to knowingly create an illegal vector type no matter what stage
  of combining we're in. This prevents the missed folding/scalarization seen in:
  https://bugs.llvm.org/show_bug.cgi?id=40146
  llvm-svn: 350034
* [DAGCombiner] allow hoisting vector bitwise logic ahead of extends  (Sanjay Patel, 2018-12-23; 1 file, -6/+5)
  llvm-svn: 350032
* [DAGCombiner] allow narrowing of add followed by truncate  (Sanjay Patel, 2018-12-22; 1 file, -2/+1)
  trunc (add X, C) --> add (trunc X), C'
  If we're throwing away the top bits of an 'add' instruction, do it in the
  narrow destination type. This makes the truncate-able opcode list identical to
  the sibling transform done in IR (in instcombine).
  This change used to show regressions for x86, but those are gone after D55494.
  This gets us closer to deleting the x86 custom function
  (combineTruncatedArithmetic) that does almost the same thing.
  Differential Revision: https://reviews.llvm.org/D55866
  llvm-svn: 350006
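A hypothetical source pattern, not from the patch, that produces this shape: only the low 32 bits of a 64-bit add survive, so the add can be done in the narrow type with a truncated constant:

```cpp
#include <cstdint>

// Hypothetical example: since only the low 32 bits of the sum are kept,
// trunc(add x, C) can become add(trunc x, C') with C' = C truncated,
// i.e. a 32-bit add instead of a 64-bit add followed by a truncate.
uint32_t low_bits_of_sum(uint64_t x) {
  return static_cast<uint32_t>(x + 0x123456789ULL);
}
```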
* [IR] Add Instruction::isLifetimeStartOrEnd, NFC  (Vedant Kumar, 2018-12-21; 3 files, -17/+5)
  Instruction::isLifetimeStartOrEnd() checks whether an Instruction is an
  llvm.lifetime.start or an llvm.lifetime.end intrinsic.
  This was suggested as a cleanup in D55967.
  Differential Revision: https://reviews.llvm.org/D56019
  llvm-svn: 349964
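A minimal sketch of what such a predicate can look like, assuming the standard IntrinsicInst APIs (the code that actually landed lives on Instruction and may differ in detail):

```cpp
#include "llvm/IR/Instruction.h"
#include "llvm/IR/IntrinsicInst.h"

using namespace llvm;

// Sketch: true iff I is a call to llvm.lifetime.start or llvm.lifetime.end.
static bool isLifetimeStartOrEnd(const Instruction &I) {
  const auto *II = dyn_cast<IntrinsicInst>(&I);
  if (!II)
    return false;
  Intrinsic::ID ID = II->getIntrinsicID();
  return ID == Intrinsic::lifetime_start || ID == Intrinsic::lifetime_end;
}
```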
* [DAGCombiner] simplify code leading to scalarizeExtractedVectorLoad; NFC  (Sanjay Patel, 2018-12-21; 1 file, -6/+5)
  llvm-svn: 349958
* [GlobalISel][AArch64] Add support for widening G_FCEIL  (Jessica Paquette, 2018-12-21; 1 file, -0/+9)
  This adds support for widening G_FCEIL in LegalizerHelper and
  AArch64LegalizerInfo. More specifically, it teaches the AArch64 legalizer to
  widen G_FCEIL from a 16-bit float to a 32-bit float when the subtarget doesn't
  support full FP 16.
  This also updates AArch64/f16-instructions.ll to show that we perform the
  correct transformation.
  llvm-svn: 349927
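In source terms, the widening is equivalent to the following sketch, assuming a compiler and target that provide the _Float16 type (e.g. Clang targeting AArch64):

```cpp
#include <cmath>

// Hypothetical illustration of widening: without native FP16 support, ceil on
// a half value is done by extending to float, taking ceil there, and
// truncating the result back to half.
_Float16 ceil_half(_Float16 h) {
  return static_cast<_Float16>(std::ceil(static_cast<float>(h)));
}
```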
* [SelectionDAG] Always use the version of computeKnownBits that returns a value. NFCI.  (Simon Pilgrim, 2018-12-21; 5 files, -27/+16)
  Continues the work started by @bogner in rL340594 to remove uses of the
  KnownBits output parameter version.
  llvm-svn: 349907
* [ARM] Complete the Thumb1 shift+and->shift+shift transforms.  (Eli Friedman, 2018-12-20; 1 file, -1/+2)
  This saves materializing the immediate. The additional forms are less common
  (they don't usually show up for bitfield insert/extract), but they're still
  relevant.
  I had to add a new target hook to prevent DAGCombine from reversing the
  transform. That isn't the only possible way to solve the conflict, but it seems
  straightforward enough.
  Differential Revision: https://reviews.llvm.org/D55630
  llvm-svn: 349857
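A hypothetical illustration of the shift+shift idea (not taken from the patch): masking to the low bits can be written as two shifts, avoiding materialization of the mask constant, which matters on Thumb1 where wide immediates are expensive:

```cpp
#include <cstdint>

// Hypothetical illustration: x & 0x00ffffff written as two shifts, so no
// 32-bit mask constant has to be materialized.
uint32_t mask_low24_shifts(uint32_t x) {
  return (x << 8) >> 8;  // equivalent to x & 0x00ffffffu for unsigned x
}
```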
* DebugInfo: Fix for missing comp_dir handling with r349207  (David Blaikie, 2018-12-20; 1 file, -9/+10)
  When deciding lazily whether a CU would be split or non-split, I accidentally
  dropped some handling for the line table's comp_dir (by doing it lazily, it was
  too late to be handled properly by the MC line table code). Move that bit of
  the code back to the non-lazy place.
  llvm-svn: 349819
* [CodeView] Emit global variables within lexical scopes to limit visibility  (Brock Wyma, 2018-12-20; 2 files, -79/+151)
  Emit static locals within the correct lexical scope so variables with the same
  name will not confuse the debugger into getting the wrong value.
  Differential Revision: https://reviews.llvm.org/D55336
  llvm-svn: 349777
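A hypothetical example, not from the patch, of the situation this addresses: two static locals with the same name live in different lexical scopes, and unscoped debug records could make the debugger show the wrong one:

```cpp
// Hypothetical example: both statics are named `counter`; without properly
// scoped debug records a debugger could display the wrong variable's value.
int count_event(bool error) {
  if (error) {
    static int counter = 0;  // counts errors
    return ++counter;
  } else {
    static int counter = 0;  // counts successes
    return ++counter;
  }
}
```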
* [SelectionDAGBuilder] Enable funnel shift building to custom rotates  (Simon Pilgrim, 2018-12-20; 1 file, -4/+2)
  This patch enables funnel shift -> rotate building for all ROTL/ROTR
  custom/legal operations.
  AFAICT X86 was the last target that was missing modulo support (PR38243), but
  I've tried to CC stakeholders for every target that has ROTL/ROTR custom
  handling for their final OK.
  Differential Revision: https://reviews.llvm.org/D55747
  llvm-svn: 349765
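For context, a funnel shift whose two inputs are the same value is exactly a rotate, which is why the funnel-shift builders can reuse a target's ROTL/ROTR lowering. A plain C++ rotate, written with the usual two-shift pattern (hypothetical illustration):

```cpp
#include <cstdint>

// fshl(x, x, c) degenerates to a rotate left by c; this is the identity the
// builder relies on. The two-shift form below avoids undefined behavior for
// c == 0 by masking the right-shift amount.
uint32_t rotl32(uint32_t x, unsigned c) {
  c &= 31;                                   // modulo shift amount
  return (x << c) | (x >> ((32 - c) & 31));  // rotate via two shifts and an OR
}
```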
* Re-land r349731 "[CodeGen][ExpandMemcmp] Add an option for allowing overlapping loads."  (Clement Courbet, 2018-12-20; 1 file, -96/+137)
  Update the PPC IR following the GEP->bitcast to bitcast->GEP->bitcast change.
  llvm-svn: 349747
* Revert r349731 "[CodeGen][ExpandMemcmp] Add an option for allowing overlapping loads."  (Clement Courbet, 2018-12-20; 1 file, -137/+96)
  Forgot to update PowerPC tests for the GEP->bitcast change.
  llvm-svn: 349733
* [CodeGen][ExpandMemcmp] Add an option for allowing overlapping loads.  (Clement Courbet, 2018-12-20; 1 file, -96/+137)
  Summary: This allows expanding {7,11,13,14,15,21,22,23,25,26,27,28,29,30,31}-byte
  memcmp in just two loads on X86. These were previously calling memcmp.
  Reviewers: spatel, gchatelet
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D55263
  llvm-svn: 349731
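A sketch of the overlapping-loads idea for the 7-byte case (hypothetical code illustrating the expansion, not the pass itself): two 4-byte loads per buffer, the second starting at offset 3, cover all 7 bytes with one overlapping byte:

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical illustration of expanding memcmp(a, b, 7) == 0 with overlapping
// loads: bytes [0,4) and [3,7) together cover all 7 bytes. memcpy is used for
// the unaligned loads and compiles down to plain load instructions.
bool equal7(const void *a, const void *b) {
  uint32_t a0, a1, b0, b1;
  std::memcpy(&a0, a, 4);
  std::memcpy(&a1, static_cast<const char *>(a) + 3, 4);
  std::memcpy(&b0, b, 4);
  std::memcpy(&b1, static_cast<const char *>(b) + 3, 4);
  return ((a0 ^ b0) | (a1 ^ b1)) == 0;
}
```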
* [DAGCombiner] Fix a place that was creating a SIGN_EXTEND with an extra operand.  (Craig Topper, 2018-12-20; 1 file, -1/+1)
  llvm-svn: 349726
* [DwarfExpression] Fix a typo in a doxygen comment. NFC.  (Matt Davis, 2018-12-20; 1 file, -1/+1)
  llvm-svn: 349703
* [CodeGenPrepare] Fix bad IR created by large offset GEP splitting.  (Eli Friedman, 2018-12-19; 1 file, -3/+4)
  Creating the IR builder, then modifying the CFG, leads to an IRBuilder where
  the BB and insertion point are inconsistent, so new instructions have the wrong
  parent.
  Modified an existing test because the test wasn't covering anything useful (the
  "invoke" was not actually an invoke by the time we hit the code in question).
  Differential Revision: https://reviews.llvm.org/D55729
  llvm-svn: 349693
* Fix test commit  (Rhys Perry, 2018-12-19; 1 file, -1/+1)
  Seems that was actually an eight-space tab...
  llvm-svn: 349690
* Test commit  (Rhys Perry, 2018-12-19; 1 file, -1/+1)
  Replace tab with 4 spaces.
  llvm-svn: 349689
* [GlobalISel][AArch64] Add support for @llvm.ceil  (Jessica Paquette, 2018-12-19; 1 file, -0/+5)
  This adds a G_FCEIL generic instruction and uses it in AArch64. This adds
  selection for floating point ceil where it has a supported, dedicated
  instruction. Other cases aren't handled here.
  It updates the relevant gisel tests and adds a select-ceil test. It also adds a
  check to arm64-vcvt.ll which ensures that we don't fall back when we run into
  one of the relevant cases.
  llvm-svn: 349664
* [SelectionDAG] Optional handling of UNDEF elements in matchBinaryPredicate (part 2 of 2)  (Simon Pilgrim, 2018-12-19; 1 file, -4/+4)
  Now that SimplifyDemandedBits/SimplifyDemandedVectorElts is simplifying vector
  elements, we're seeing more constant BUILD_VECTOR containing undefs.
  This patch provides opt-in support for UNDEF elements in matchBinaryPredicate,
  passing NULL instead of the result ConstantSDNode* argument.
  I've updated the (or (and X, c1), c2) -> (and (or X, c2), c1|c2) fold to
  demonstrate its use, which I believe is safe for undef cases.
  Differential Revision: https://reviews.llvm.org/D55822
  llvm-svn: 349629
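A quick sanity check of that fold, written as ordinary integer code (hypothetical, only to illustrate the identity the combine relies on):

```cpp
#include <cassert>
#include <cstdint>
#include <initializer_list>

int main() {
  // Identity used by the fold: (x & c1) | c2 == (x | c2) & (c1 | c2).
  const uint32_t c1 = 0x00ff00ffu, c2 = 0x0000f0f0u;
  for (uint32_t x : {0u, 0x12345678u, 0xffffffffu})
    assert(((x & c1) | c2) == ((x | c2) & (c1 | c2)));
  return 0;
}
```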
* [SelectionDAG] Optional handling of UNDEF elements in matchBinaryPredicate (part 1 of 2)  (Simon Pilgrim, 2018-12-19; 1 file, -6/+13)
  Now that SimplifyDemandedBits/SimplifyDemandedVectorElts is simplifying vector
  elements, we're seeing more constant BUILD_VECTOR containing undefs.
  This patch provides opt-in support for UNDEF elements in matchBinaryPredicate,
  passing NULL instead of the result ConstantSDNode* argument.
  Differential Revision: https://reviews.llvm.org/D55822
  llvm-svn: 349628
* [TargetLowering] Fix propagation of undefs in zero extension ops (PR40091)  (Simon Pilgrim, 2018-12-19; 2 files, -4/+23)
  As described in PR40091, we have several places where zext (and
  zext_vector_inreg) fold an undef input into an undef output. For zero
  extensions this is incorrect, as the output should be guaranteed to at least
  have the new upper bits set to zero.
  SimplifyDemandedVectorElts is the worst offender (and the most likely to cause
  new undefs to appear), but DAGCombiner's tryToFoldExtendOfConstant has a
  similar issue.
  Thanks to @dmgreen for catching this.
  Differential Revision: https://reviews.llvm.org/D55883
  llvm-svn: 349625
* [SelectionDAG] Optional handling of UNDEF elements in matchUnaryPredicate  (Simon Pilgrim, 2018-12-19; 1 file, -4/+13)
  Now that SimplifyDemandedBits/SimplifyDemandedVectorElts are simplifying vector
  elements, we're seeing more constant BUILD_VECTOR containing UNDEFs.
  This patch provides opt-in handling of UNDEF elements in matchUnaryPredicate,
  passing NULL instead of the ConstantSDNode* argument.
  I've updated SelectionDAG::simplifyShift to demonstrate its use.
  Differential Revision: https://reviews.llvm.org/D55819
  llvm-svn: 349616
* [DebugInfo] Move several private headers to include directory  (Yonghong Song, 2018-12-18; 11 files, -309/+10)
  This patch moved the following files in lib/CodeGen/AsmPrinter/:
    AsmPrinterHandler.h
    DbgEntityHistoryCalculator.h
    DebugHandlerBase.h
  to the include/llvm/CodeGen directory. Such a change will enable a target to
  extend DebugHandlerBase and emit target-specific debug info sections.
  Signed-off-by: Yonghong Song <yhs@fb.com>
  Differential Revision: https://reviews.llvm.org/D55755
  llvm-svn: 349564
* Preserve the linkage for objc* intrinsics as clang will set them to weak_external in some cases  (Pete Cooper, 2018-12-18; 1 file, -5/+8)
  Clang uses weak linkage for objc runtime functions when they are not available
  on the platform. The intrinsic has this linkage, so we just need to pass that
  on to the runtime call.
  llvm-svn: 349559