path: root/llvm/lib/Target
Commit history (commit message, author, date, files changed, lines changed):
* [X86] Add broadcast load unfolding support for vpcmpeq/vpcmpgt/vpcmp/vpcmpu. (Craig Topper, 2019-09-09, 1 file, -0/+24)
  llvm-svn: 371368
* [X86] Add broadcast load unfold support for smin/umin/smax/umax. (Craig Topper, 2019-09-09, 1 file, -0/+24)
  llvm-svn: 371366
* AMDGPU: Remove pointless wrapper nodes for init.exec intrinsics (Matt Arsenault, 2019-09-09, 5 files, -28/+6)
  llvm-svn: 371364
* [X86] Add broadcast load unfolding support for VMAXPS/PD and VMINPS/PD. (Craig Topper, 2019-09-09, 1 file, -0/+24)
  llvm-svn: 371363
* [X86] Use xorps to create fp128 +0.0 constants. (Craig Topper, 2019-09-09, 5 files, -4/+24)
  This matches what we do for f32/f64. gcc also does this for fp128.
  llvm-svn: 371357
* [X86][SSE] SimplifyDemandedVectorEltsForTargetNode - add faux shuffle support. (Simon Pilgrim, 2019-09-08, 1 file, -26/+52)
  This patch decodes target and faux shuffles with getTargetShuffleInputs - a reduced version of resolveTargetShuffleInputs that doesn't resolve SM_SentinelZero cases, so we can correctly remove zero vectors if they aren't demanded.
  llvm-svn: 371353
* [X86] Add a hack to combineVSelectWithAllOnesOrZeros to turn selects with two zero/undef vector inputs into an all zeroes vector. (Craig Topper, 2019-09-08, 1 file, -0/+9)
  If the two zero vectors have undefs in different places they won't get combined by simplifySelect. This fixes a regression from an earlier commit.
  llvm-svn: 371351
* [X86] Remove call to getZeroVector from materializeVectorConstant. Add isel patterns for zero vectors with all types. (Craig Topper, 2019-09-08, 3 files, -9/+17)
  The change to avx512-vec-cmp.ll is a regression, but should be easy to fix. It occurs because the getZeroVector call was canonicalizing both sides to the same node, then SimplifySelect was able to simplify it. But since we only called getZeroVector on some VTs, this isn't a robust way to combine this.
  The change to vector-shuffle-combining-ssse3.ll is more instructions, but removes a constant pool load, so it's unclear whether it's a regression or not.
  llvm-svn: 371350
* [X86] X86DAGToDAGISel::combineIncDecVector(): call getSplatBuildVector() manually (Roman Lebedev, 2019-09-08, 1 file, -3/+6)
  As reported in post-commit review of r370327, there is some case where the code crashes. As discussed with Craig Topper, the problem is that getConstant() internally calls getSplatBuildVector(), so we don't insert the constant itself. If we do that manually we're good.
  llvm-svn: 371346
* [X86] Use DAG.getConstant instead of getZeroVector in combinePMULDQ. (Craig Topper, 2019-09-08, 1 file, -1/+1)
  getZeroVector canonicalizes the type to vXi32, but that's a legalization action. We should use the most correct type if possible.
  llvm-svn: 371345
* [X86] Teach materializeVectorConstant to not call getZeroVector/getOnesVector on the types we already have isel patterns for. (Craig Topper, 2019-09-08, 1 file, -3/+3)
  llvm-svn: 371343
* [DebugInfo][X86] Describe call site values for zero-valued imms (David Stenberg, 2019-09-08, 1 file, -0/+5)
  Summary:
  Add zero-materializing XORs to X86's describeLoadedValue() hook in order to produce call site values.
  I have had to change the defs logic in collectCallSiteParameters() a bit to be able to describe the XORs. The XORs implicitly define $eflags, which would cause them to never be considered, due to a guard condition that I->getNumDefs() is one. I have changed that condition so that we now only consider instructions where a forwarded register overlaps with the instruction's single explicit define. We still need to collect the implicit defines of other forwarded registers to remove them from the work list.
  I'm not sure how to move towards supporting instructions with multiple explicit defines, cases where forwarded register are implicitly defined, and/or cases where an instruction produces values for multiple forwarded registers. Perhaps the describeLoadedValue() hook should take a register argument, and we then leave it up to the hook to describe the loaded value in that register? I have not yet encountered a situation where that would be necessary though.
  Reviewers: aprantl, vsk, djtodoro, NikolaPrica
  Reviewed By: vsk
  Subscribers: ychen, hiraditya, llvm-commits
  Tags: #debug-info, #llvm
  Differential Revision: https://reviews.llvm.org/D67225
  llvm-svn: 371333
* [NFC] Make the describeLoadedValue() hook return machine operand objects (David Stenberg, 2019-09-08, 1 file, -1/+1)
  Summary:
  This changes the ParamLoadedValue pair which the describeLoadedValue() hook returns so that MachineOperand objects are returned instead of pointers.
  When describing call site values we may need to describe operands which are not part of the instruction. One such example is zero-materializing XORs on x86, which I have implemented support for in a child revision. Instead of having to return a pointer to an operand stored somewhere outside the instruction, start returning objects directly instead, as that simplifies the code.
  The MachineOperand class only holds POD members, and on x86-64 it is 32 bytes large. That combined with copy elision means that the overhead of returning a machine operand object from the hook does not become very large.
  I benchmarked this on a 8-thread i7-8650U machine with 32 GB RAM. The benchmark consisted of building a clang 8.0 binary configured with:
    -DCMAKE_BUILD_TYPE=RelWithDebInfo \
    -DLLVM_TARGETS_TO_BUILD=X86 \
    -DLLVM_USE_SANITIZER=Address \
    -DCMAKE_CXX_FLAGS="-Xclang -femit-debug-entry-values -stdlib=libc++"
  The average wall clock time increased by 4 seconds, from 62:05 to 62:09, which is an 0.1% increase.
  Reviewers: aprantl, vsk, djtodoro, NikolaPrica
  Reviewed By: vsk
  Subscribers: hiraditya, ychen, llvm-commits
  Tags: #debug-info, #llvm
  Differential Revision: https://reviews.llvm.org/D67261
  llvm-svn: 371332
* [ARM] Remove declaration of unimplemented function. NFC. (David Green, 2019-09-08, 1 file, -2/+0)
  llvm-svn: 371331
* [X86][SSE] Fix out of range shift introduced in D67070/rL371328 (Simon Pilgrim, 2019-09-08, 1 file, -1/+2)
  Use APInt to create the comparison mask instead.
  llvm-svn: 371330
* [X86][SSE] Add support for <64 x i1> bool reduction (Simon Pilgrim, 2019-09-08, 1 file, -11/+14)
  This generalizes the existing <32 x i1> pre-AVX2 split code to support reductions from <64 x i1> as well; we can probably generalize to any larger pow2 case in the future if the (unlikely) need ever arises.
  We still need to tweak combineBitcastvxi1 to improve AVX512F codegen as it assumes vXi1 types should be handled on the mask registers even when they aren't legal.
  Differential Revision: https://reviews.llvm.org/D67070
  llvm-svn: 371328
* [X86] Make getZeroVector return floating point vectors in their native type on SSE2 and later. (Craig Topper, 2019-09-08, 3 files, -2/+23)
  isel used to require zero vectors to be canonicalized to a single type to minimize the number of patterns needed to match. This is no longer required.
  I plan to do this to integers too, but floating point was simpler to start with. Integer has a complication where v32i16/v64i8 aren't legal when the other 512-bit integer types are.
  llvm-svn: 371325
* [X86] Add support for unfold broadcast loads from FMA instructions. (Craig Topper, 2019-09-07, 1 file, -0/+121)
  llvm-svn: 371323
* [aarch64] Add combine patterns for fp16 fmla (Sebastian Pop, 2019-09-07, 1 file, -62/+280)
  This patch enables generation of fused multiply add/sub for instructions operating on fp16. Tested on aarch64-linux.
  Differential Revision: https://reviews.llvm.org/D67297
  llvm-svn: 371321
* [X86] Add prefer-128-bit subtarget feature. (Craig Topper, 2019-09-07, 4 files, -0/+10)
  Summary:
  Similar to the previous prefer-256-bit flag. We might want to enable this by default for some CPUs. This just starts the initial work to implement it and prove that it affects TTI's vector width.
  Reviewers: RKSimon, echristo, spatel, atdt
  Reviewed By: RKSimon
  Subscribers: lebedev.ri, hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D67311
  llvm-svn: 371319
* [X86] Avoid uses of getZextValue(). NFCI. (Simon Pilgrim, 2019-09-07, 1 file, -22/+19)
  Use getAPIntValue() directly - this is mainly a best practice style issue to help prevent fuzz tests blowing up when a i12345 (or whatever) is generated.
  Use getConstantOperandVal/getConstantOperandAPInt wrappers where possible.
  llvm-svn: 371315
* [X86] Fix pshuflw formation from repeated shuffle mask (PR43230) (Nikita Popov, 2019-09-07, 1 file, -2/+2)
  Fix for https://bugs.llvm.org/show_bug.cgi?id=43230.
  When creating PSHUFLW from a repeated shuffle mask, we have to apply the checks to the repeated mask, not the original one. For the test case from PR43230 the inspected part of the original mask is all undef.
  Differential Revision: https://reviews.llvm.org/D67314
  llvm-svn: 371307
* Fix MSVC "32-bit shift implicitly converted to 64 bits" warnings. NFCI. (Simon Pilgrim, 2019-09-07, 2 files, -2/+2)
  llvm-svn: 371302
* Replicate the change "[Alignment][NFC] Use Align with TargetLowering::setMinFunctionAlignment" on AVR to avoid a breakage. (Sylvestre Ledru, 2019-09-07, 1 file, -1/+1)
  See r371200 / https://reviews.llvm.org/D67229
  llvm-svn: 371293
* Change TargetLibraryInfo analysis passes to always require Function (Teresa Johnson, 2019-09-07, 5 files, -12/+19)
  Summary:
  This is the first change to enable the TLI to be built per-function so that -fno-builtin* handling can be migrated to use function attributes. See discussion on D61634 for background. This is an enabler for fixing handling of these options for LTO, for example.
  This change should not affect behavior, as the provided function is not yet used to build a specifically per-function TLI, but rather enables that migration.
  Most of the changes were very mechanical, e.g. passing a Function to the legacy analysis pass's getTLI interface, or in Module level cases, adding a callback. This is similar to the way the per-function TTI analysis works.
  There was one place where we were looking for builtins but not in the context of a specific function. See FindCXAAtExit in lib/Transforms/IPO/GlobalOpt.cpp. I'm somewhat concerned my workaround could provide the wrong behavior in some corner cases. Suggestions welcome.
  Reviewers: chandlerc, hfinkel
  Subscribers: arsenm, dschuff, jvesely, nhaehnle, mehdi_amini, javed.absar, sbc100, jgravelle-google, eraman, aheejin, steven_wu, george.burgess.iv, dexonsmith, jfb, asbirlea, gchatelet, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D66428
  llvm-svn: 371284
* [AArch64][GlobalISel] Enable the localizer for optimized builds. (Amara Emerson, 2019-09-06, 1 file, -3/+1)
  Despite the fact that the localizer's original motivation was to fix horrendous constant spilling at -O0, shortening live ranges still has net benefits even with optimizations enabled. On an -Os build of CTMark, doing this improves code size by 0.5% geomean. There are a few regressions, bullet increasing in size by 0.5%.
  One example from bullet where code size increased slightly was due to GlobalISel actually now generating the same code as SelectionDAG. So we actually have an opportunity in future to implement better heuristics for localization and therefore be *better* than SDAG in some cases. In relation to other optimizations though that one is relatively minor.
  Differential Revision: https://reviews.llvm.org/D67303
  llvm-svn: 371266
* GlobalISel: Support physical register inputs in patterns (Matt Arsenault, 2019-09-06, 1 file, -5/+7)
  llvm-svn: 371253
* Remove dead .seh_stackalloc parsing method in X86AsmParser (Reid Kleckner, 2019-09-06, 1 file, -14/+0)
  The shared COFF asm parser code handles this directive, since it is shared with AArch64. Spotted by Alexandre Ganea in review.
  llvm-svn: 371251
* AMDGPU: Fix typo (Matt Arsenault, 2019-09-06, 1 file, -4/+4)
  llvm-svn: 371249
* [X86] Use MOVSX by default instead of CBW to extend i8 to AX for i8 sdivrem. (Craig Topper, 2019-09-06, 1 file, -5/+8)
  We can use a MOVSX16 here then rely on FixupBWInst to change to MOVSX32 if the upper bits are dead. With a special case to not promote if it could be turned into CBW.
  Then we can rely on X86MCInstLower to turn the MOVSX into CBW very late if register allocation worked out.
  Using MOVSX gives an opportunity to use the MOVSX as both a copy and a sign extend since the input and output register aren't tied together.
  Differential Revision: https://reviews.llvm.org/D67192
  llvm-svn: 371243
* [X86] Use MOVZX16rr8/MOVZXrm8 when extending input for i8 udivrem. (Craig Topper, 2019-09-06, 1 file, -3/+3)
  We can rely on X86FixupBWInsts to turn these into MOVZX32. This simplifies a follow up commit to use MOVSX for i8 sdivrem with a late optimization to use CBW when register allocation works out.
  llvm-svn: 371242
* [X86] Teach FixupBWInsts to turn MOVSX16rr8/MOVZX16rr8/MOVSX16rm8/MOVZX16rm8 into their 32-bit dest equivalents when the upper part of the register is dead. (Craig Topper, 2019-09-06, 1 file, -6/+48)
  llvm-svn: 371240
* [ARM] Add patterns for VSUB with q and r registers (Oliver Cruickshank, 2019-09-06, 1 file, -0/+9)
  Added patterns for VSUB to support q and r registers, which reduces pressure on q registers.
  llvm-svn: 371231
* [ARM] Add patterns for VADD with q and r registers (Oliver Cruickshank, 2019-09-06, 1 file, -0/+9)
  Added support for VADD to use q and r registers, which reduces pressure on q registers.
  llvm-svn: 371230
* [ARM] Add patterns for VMUL with q and r registers (Oliver Cruickshank, 2019-09-06, 1 file, -0/+9)
  Added support for VMUL to use an r register, which reduces pressure on the q registers.
  llvm-svn: 371229
* [AArch64][GlobalISel] Always fall back on tail calls with -tailcallopt (Jessica Paquette, 2019-09-06, 1 file, -0/+6)
  -tailcallopt requires that we perform different stack adjustments than with sibling calls. For example, the `@caller_to0_from8` function in test/CodeGen/AArch64/tail-call.ll requires that we adjust SP. Without -tailcallopt, this adjustment does not happen. With it, however, it is expected.
  So, to ensure that adding sibling call support doesn't break -tailcallopt, make CallLowering always fall back on possible tail calls when -tailcallopt is passed in.
  Update test/CodeGen/AArch64/tail-call.ll with a GlobalISel line to make sure that we don't differ from the SDAG implementation at any point.
  Differential Revision: https://reviews.llvm.org/D67245
  llvm-svn: 371227
* [ARM] Sink add/mul(shufflevector(insertelement())) for MVE instruction selection (Sam Tebbs, 2019-09-06, 1 file, -10/+48)
  This patch sinks add/mul(shufflevector(insertelement())) into the basic block in which they are used so that they can then be selected together. This is useful for various MVE instructions, such as vmla and others that take R registers.
  Loop tests have been added to the vmla test file to make sure vmlas are generated in loops.
  Differential revision: https://reviews.llvm.org/D66295
  llvm-svn: 371218
* [AMDGPU] Enable constant offset promotion to immediate operand for VMEM stores (Valery Pykhtin, 2019-09-06, 1 file, -4/+5)
  Differential revision: https://reviews.llvm.org/D66958
  llvm-svn: 371214
* [Alignment][NFC] Use Align with TargetLowering::setPrefFunctionAlignment (Guillaume Chatelet, 2019-09-06, 10 files, -14/+15)
  Summary:
  This patch is part of a series to introduce an Alignment type.
  See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
  See this patch for the introduction of the type: https://reviews.llvm.org/D64790
  Reviewers: courbet
  Subscribers: nemanjai, javed.absar, hiraditya, kbarton, asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, jrtc27, MaskRay, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, PkmX, jocewei, jsji, s.egerton, pzheng, ychen, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D67267
  llvm-svn: 371212
* [Alignment][NFC] Use Align with TargetLowering::setPrefLoopAlignment (Guillaume Chatelet, 2019-09-06, 5 files, -8/+10)
  Summary:
  This patch is part of a series to introduce an Alignment type.
  See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
  See this patch for the introduction of the type: https://reviews.llvm.org/D64790
  Reviewers: courbet
  Subscribers: nemanjai, hiraditya, kbarton, MaskRay, jsji, ychen, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D67278
  llvm-svn: 371210
* [Alignment] fix dubious min function alignment (Guillaume Chatelet, 2019-09-06, 1 file, -1/+1)
  Summary:
  This was discovered while introducing the llvm::Align type. The original setMinFunctionAlignment used to take alignment as log2; looking at the comment it seems like instructions are to be 2-bytes aligned and not 4-bytes aligned.
  Reviewers: uweigand
  Subscribers: hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D67271
  llvm-svn: 371204
* [Alignment][NFC] Use Align with TargetLowering::setMinFunctionAlignment (Guillaume Chatelet, 2019-09-06, 12 files, -14/+16)
  Summary:
  This patch is part of a series to introduce an Alignment type.
  See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
  See this patch for the introduction of the type: https://reviews.llvm.org/D64790
  Reviewers: courbet
  Subscribers: jyknight, sdardis, nemanjai, javed.absar, hiraditya, kbarton, fedor.sergeev, asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, jrtc27, MaskRay, zzheng, edward-jones, atanasyan, rogfer01, MartinMosbeck, brucehoult, the_o, PkmX, jocewei, jsji, s.egerton, pzheng, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D67229
  llvm-svn: 371200
* [DFAPacketizer] Track resources for packetized instructions (James Molloy, 2019-09-06, 1 file, -0/+11)
  This patch allows the DFAPacketizer to be queried after a packet is formed to work out which resources were allocated to the packetized instructions.
  This is particularly important for targets that do their own bundle packing - it's not sufficient to know simply that instructions can share a packet; which slots are used is also required for encoding.
  This extends the emitter to emit a side-table containing resource usage diffs for each state transition. The packetizer maintains a set of all possible resource states in its current state. After packetization is complete, all remaining resource states are possible packetization strategies. The sidetable is only ~500K for Hexagon, but the extra tracking is disabled by default (most uses of the packetizer like MachinePipeliner don't care and don't need the extra maintained state).
  Differential Revision: https://reviews.llvm.org/D66936
  llvm-svn: 371198
* [AMDGPU] Mark s_barrier as having side effects but not accessing memory. (Jay Foad, 2019-09-06, 1 file, -2/+0)
  Summary:
  This fixes poor scheduling in a function containing a barrier and a few load instructions.
  Without this fix, ScheduleDAGInstrs::buildSchedGraph adds an artificial edge in the dependency graph from the barrier instruction to the exit node representing live-out latency, with a latency of about 500 cycles. Because of this it thinks the critical path through the graph also has a latency of about 500 cycles. And because of that it does not think that any of the load instructions are on the critical path, so it schedules them with no regard for their (80 cycle) latency, which gives poor results.
  Reviewers: arsenm, dstuttard, tpr, nhaehnle
  Subscribers: kzhuravl, jvesely, wdng, yaxunl, t-tye, hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D67218
  llvm-svn: 371192
* [ARM] MVE Tail Predication (Sam Parker, 2019-09-06, 4 files, -1/+476)
  The MVE and LOB extensions of Armv8.1m can be combined to enable 'tail predication', which removes the need for a scalar remainder loop after vectorization. Lane predication is performed implicitly via a system register. The effects of predication are described in Section B5.6.3 of the Armv8.1-m Arch Reference Manual, the key points being:
  - For vector operations that perform reduction across the vector and produce a scalar result, whether the value is accumulated or not.
  - For non-load instructions, the predicate flags determine if the destination register byte is updated with the new value or if the previous value is preserved.
  - For vector store instructions, whether the store occurs or not.
  - For vector load instructions, whether the value that is loaded or whether zeros are written to that element of the destination register.
  This patch implements a pass that takes a hardware loop, containing masked vector instructions, and converts it into something that resembles an MVE tail predicated loop. Currently, if we had code generation, we'd generate a loop in which the VCTP would generate the predicate and VPST would then set up the value of VPR.P0. The loads and stores would be placed in VPT blocks, so this is not tail predication, but normal VPT predication with the predicate based upon an element counting induction variable. Further work needs to be done to finally produce a true tail predicated loop.
  Because only the loads and stores are predicated, in both the LLVM IR and MIR level, we will restrict support to only lane-wise operations (no horizontal reductions). We will perform a final check on MIR during loop finalisation too.
  Another restriction, specific to MVE, is that all the vector instructions need to operate on the same number of elements. This is because predication is performed at the byte level and this is set on entry to the loop, or by the VCTP instead.
  Differential Revision: https://reviews.llvm.org/D65884
  llvm-svn: 371179
* [X86] Fix bad indentation. NFC (Craig Topper, 2019-09-06, 1 file, -1/+1)
  llvm-svn: 371167
* AMDGPU/GlobalISel: Avoid repeating 32-bit type lists (Matt Arsenault, 2019-09-06, 4 files, -6/+14)
  llvm-svn: 371156
* AMDGPU/GlobalISel: Fix load/store of types in other address spaces (Matt Arsenault, 2019-09-06, 2 files, -5/+26)
  There should probably be a size only matcher.
  llvm-svn: 371155
* AMDGPU: Allow getMemOperandWithOffset to analyze stack accesses (Matt Arsenault, 2019-09-05, 1 file, -2/+19)
  Report soffset as a base register if the scratch resource can be ignored.
  llvm-svn: 371149
* AMDGPU: Fix emitting multiple stack loads for stack passed workitems (Matt Arsenault, 2019-09-05, 1 file, -1/+15)
  The same stack is loaded for each workitem ID, and each use. Nothing prevents you from creating multiple fixed stack objects with the same offsets, so this was creating a load for each unique frame index, despite them being the same offset. Re-use the same frame index so the loads are CSEable.
  llvm-svn: 371148