path: root/llvm/lib
Commit message · Author · Age · Files · Lines
...
* AMDGPU/GlobalISel: Legalize G_BUILD_VECTOR_TRUNC · Matt Arsenault · 2019-09-09 · 3 · -0/+70
  Treat this as legal on gfx9 since it can use S_PACK_* instructions for
  this. This isn't used by anything yet. The same will probably apply to
  16-bit G_BUILD_VECTOR without the trunc.
  llvm-svn: 371423
* Revert "[MachineCopyPropagation] Remove redundant copies after TailDup via ↵Dmitri Gribenko2019-09-091-65/+0
| | | | | | | | | machine-cp" This reverts commit 371359. I'm suspecting a miscompile, I posted a reproducer to https://reviews.llvm.org/D65267. llvm-svn: 371421
* [yaml2obj] Simplify p_filesz/p_memsz computation · Fangrui Song · 2019-09-09 · 1 · -27/+15
  This fixes a bug as well. When "FileSize:" (p_filesz) is specified and
  different from the actual value, the following code probably should not
  use PHeader.p_filesz:

    if (SHeader->sh_offset == PHeader.p_offset + PHeader.p_filesz)
      PHeader.p_memsz += SHeader->sh_size;

  Reviewed By: jhenderson
  Differential Revision: https://reviews.llvm.org/D67256
  llvm-svn: 371420
* [ARM] Fix loads and stores for predicate vectors · David Green · 2019-09-09 · 2 · -18/+65
  These predicate vectors can usually be loaded and stored with a single
  instruction, a VSTR_P0. However this instruction will store the entire P0
  predicate, 16 bits, zero-extended to 32 bits, with each lane of the
  v4i1/v8i1/v16i1 representing 4/2/1 bits.

  As far as I understand, when llvm says "store this v4i1", it really does
  need to store 4 bits (or 8, that being the size of a byte, with the
  bottom 4 as the interesting bits). For example a bitcast from a v8i1 to
  an i8 is defined as a store followed by a load, which is how the code is
  expanded.

  So this instead lowers the v4i1/v8i1 load/store through some shuffles to
  get the bits into the correct positions. This, as you might imagine, is
  not as efficient as a single instruction. But I believe it is needed for
  correctness. v16i1 equally should not load/store 32 bits, only the 16
  bits of data. Stack loads/stores still use the VSTR_P0 (as can be seen by
  the test not changing). This is fine as they are self-consistent; it is
  only "externally observable loads/stores" (from our point of view) that
  need to be corrected.
  Differential Revision: https://reviews.llvm.org/D67085
  llvm-svn: 371419
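  As an illustrative sketch (not from the commit; names are hypothetical),
  this is the kind of IR store the lowering must get right - only the
  narrow mask bits may be written, not a zero-extended 32-bit P0:
  ```
  ; A v4i1 store must write only the 4 meaningful bits (one byte in
  ; memory), so VSTR_P0, which writes all 16 bits of P0 zero-extended to
  ; 32 bits, cannot be used directly for it.
  define void @store_pred(<4 x i1> %p, <4 x i1>* %ptr) {
    store <4 x i1> %p, <4 x i1>* %ptr
    ret void
  }
  ```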
* AMDGPU/GlobalISel: Select atomic loads · Matt Arsenault · 2019-09-09 · 2 · -5/+18
  A new check for an explicitly atomic MMO is needed to avoid incorrectly
  matching patterns for non-atomic loads.
  llvm-svn: 371418
* AMDGPU/GlobalISel: Fix RegBankSelect for unaligned, uniform constant loads · Matt Arsenault · 2019-09-09 · 1 · -4/+5
  llvm-svn: 371416

* AMDGPU/GlobalISel: Fix regbankselect for uniform extloads · Matt Arsenault · 2019-09-09 · 1 · -4/+4
  There are no scalar extloads.
  llvm-svn: 371414
* AMDGPU: Remove code address space predicates · Matt Arsenault · 2019-09-09 · 3 · -25/+57
  Fixes 8-byte, 8-byte aligned LDS loads. The 16-byte case is still broken
  due to not being reported as legal.
  llvm-svn: 371413
* AMDGPU/GlobalISel: Select G_PTR_MASK · Matt Arsenault · 2019-09-09 · 3 · -0/+70
  llvm-svn: 371412

* AMDGPU/GlobalISel: Fix reg bank for uniform LDS loads · Matt Arsenault · 2019-09-09 · 1 · -8/+14
  The pointer is always a VGPR. Also fix hardcoding the pointer size to 64.
  llvm-svn: 371411

* AMDGPU/GlobalISel: Use known bits for selection · Matt Arsenault · 2019-09-09 · 1 · -8/+3
  llvm-svn: 371409

* AMDGPU/GlobalISel: Legalize wavefrontsize intrinsic · Matt Arsenault · 2019-09-09 · 1 · -0/+6
  llvm-svn: 371407

* AMDGPU/GlobalISel: Try generated matcher before add/sub code · Matt Arsenault · 2019-09-09 · 1 · -4/+4
  This will allow optimization patterns which fold adds away to work.
  llvm-svn: 371406
* [ARM] Remove some spurious MVE reduction instructions. · Simon Tatham · 2019-09-09 · 1 · -79/+80
  The family of 'dual-accumulating' vector multiply-add instructions
  (VMLADAV, VMLALDAV and VRMLALDAVH) can all operate on both signed and
  unsigned integer types, and they all have an 'exchange' variant (with an
  X in the name) that modifies which pairs of vector lanes in the two
  inputs are multiplied together. But there's a clause in the spec that
  says that the X variants *don't* operate on unsigned integer types, only
  signed. You can have X, or unsigned, or neither, but not both.

  We didn't notice that clause when we implemented the MC support for these
  instructions, so LLVM believes that things like VMLADAVX.U8 do exist,
  contradicting the spec. Here I fix that by conditioning them out in
  Tablegen.

  In order to do that, I've reversed the nesting order of the Tablegen
  multiclasses for those instructions. Previously, the innermost multiclass
  generated the X and not-X variants, and the one outside that generated
  the A and not-A variants. Now X is done by the outer multiclass, which
  allows me to bypass that one when I only want the two not-X variants.

  Changing the multiclass nesting order also changes the names of the
  instruction ids unless I make a special effort not to. I decided that
  while I was changing them anyway I'd make them look nicer; so now the
  instructions have names like MVE_VMLADAVs32 or MVE_VMLADAVaxs32, instead
  of cumbersome _noacc_noexch suffixes.

  The corresponding multiply-subtract instructions are unaffected. Those
  don't accept unsigned types at all, either in the spec or in LLVM.

  Reviewers: ostannard, dmgreen
  Subscribers: javed.absar, kristof.beyls, hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D67214
  llvm-svn: 371405
* AMDGPU/GlobalISel: Remove dead patterns · Matt Arsenault · 2019-09-09 · 1 · -5/+0
  llvm-svn: 371404
* [DFAPacketizer] Reapply: Track resources for packetized instructions · James Molloy · 2019-09-09 · 2 · -11/+65
  Reapply with a fix to reduce the resources required by the compiler - use
  unsigned[2] instead of std::pair. This causes clang and gcc to compile
  the generated file several times faster, and hopefully will reduce the
  resource requirements on Visual Studio also. This fix is a little ugly
  but it's clearly the same issue the previous author of DFAPacketizer
  faced (the previous tables use unsigned[2] rather uglily too).

  This patch allows the DFAPacketizer to be queried after a packet is
  formed to work out which resources were allocated to the packetized
  instructions. This is particularly important for targets that do their
  own bundle packing - it's not sufficient to know simply that instructions
  can share a packet; which slots are used is also required for encoding.

  This extends the emitter to emit a side-table containing resource usage
  diffs for each state transition. The packetizer maintains a set of all
  possible resource states in its current state. After packetization is
  complete, all remaining resource states are possible packetization
  strategies. The side-table is only ~500K for Hexagon, but the extra
  tracking is disabled by default (most uses of the packetizer like
  MachinePipeliner don't care and don't need the extra maintained state).

  Differential Revision: https://reviews.llvm.org/D66936
  llvm-svn: 371399
* [ARM][MVE] VCTP instruction selection · Sam Parker · 2019-09-09 · 1 · -0/+9
  Add codegen support for vctp{8,16,32}.
  Differential Revision: https://reviews.llvm.org/D67344
  llvm-svn: 371395
* Revert rL371198 from llvm/trunk: [DFAPacketizer] Track resources for packetized instructions · Simon Pilgrim · 2019-09-09 · 2 · -65/+11
  This patch allows the DFAPacketizer to be queried after a packet is
  formed to work out which resources were allocated to the packetized
  instructions. This is particularly important for targets that do their
  own bundle packing - it's not sufficient to know simply that instructions
  can share a packet; which slots are used is also required for encoding.
  This extends the emitter to emit a side-table containing resource usage
  diffs for each state transition. The packetizer maintains a set of all
  possible resource states in its current state. After packetization is
  complete, all remaining resource states are possible packetization
  strategies. The side-table is only ~500K for Hexagon, but the extra
  tracking is disabled by default (most uses of the packetizer like
  MachinePipeliner don't care and don't need the extra maintained state).
  Differential Revision: https://reviews.llvm.org/D66936
  ........
  Reverted as this is causing "compiler out of heap space" errors on MSVC
  2017/19 NDEBUG builds.
  llvm-svn: 371393
* [AArch64][SVE] Implement abs and neg intrinsics · Cullen Rhodes · 2019-09-09 · 2 · -3/+17
  Summary: This patch implements two arithmetic intrinsics:
    * int_aarch64_sve_abs
    * int_aarch64_sve_neg
  testing the support for scalable vector types in intrinsics added in
  D65930.
  Reviewed By: greened
  Differential Revision: https://reviews.llvm.org/D65931
  llvm-svn: 371388
* [ARM] Prevent generating NEON stack accesses under MVE. · David Green · 2019-09-09 · 1 · -4/+8
  We should not be generating Neon stack loads/stores even for these large
  registers.

  No test here because my understanding is we will only generate these QQPR
  regs for intrinsics and VLDn's. The tests will follow once those are
  available.
  Differential Revision: https://reviews.llvm.org/D67169
  llvm-svn: 371386
* GlobalISel: fix unused warnings in release builds. · Tim Northover · 2019-09-09 · 1 · -0/+4
  llvm-svn: 371385
* GlobalISel: add combiner to form indexed loads. · Tim Northover · 2019-09-09 · 5 · -16/+246
  Loosely based on the DAGCombiner version, but this part is slightly
  simpler in GlobalISel because all address calculation is performed by
  G_GEP. That makes the inc/dec distinction moot, so there's just pre/post
  to think about.

  No targets can handle it yet, so testing is via a special flag that
  overrides target hooks.
  llvm-svn: 371384
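  As an illustrative sketch (not from the commit; names are hypothetical),
  this is the rough IR shape the combiner looks for once a target opts in -
  a load whose base pointer is separately advanced by the same offset:
  ```
  ; %base is loaded from and then advanced by %off; with post-indexed
  ; addressing, the load and the G_GEP derived from this pair can be
  ; combined into a single indexed load.
  define i8* @post_indexed(i8* %base, i64 %off, i8* %dst) {
    %val = load i8, i8* %base
    store i8 %val, i8* %dst
    %next = getelementptr i8, i8* %base, i64 %off
    ret i8* %next
  }
  ```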
* [lib/ObjectYAML] - Improve and clean up error reporting in the ELFState<ELFT> class. · George Rimar · 2019-09-09 · 1 · -166/+129
  The aim of this patch is to refactor how we handle and report errors. I
  suggest using the same approach we use in LLD: delayed error reporting.
  For that I introduced a 'HasError' flag which is set when we report an
  error. Now we do not exit instantly on any error. The benefits are:

  1) There are no more 'exit(1)' calls in the library code.
  2) Code was simplified significantly in a few places.
  3) It is now possible to print multiple errors instead of only one.

  Also, I changed the messages to be lower case and removed the full stops.
  Differential Revision: https://reviews.llvm.org/D67182
  llvm-svn: 371380
* [ARM][MVE] Decoding of uqrshl and sqrshl accepts unpredictable encodings · Oliver Stannard · 2019-09-09 · 2 · -0/+8
  Specify the Unpredictable bits, and return softfails when appropriate.
  Patch by Mark Murray!
  Differential Revision: https://reviews.llvm.org/D66939
  llvm-svn: 371374
* [ARM][ParallelDSP] Fix for sext input · Sam Parker · 2019-09-09 · 1 · -3/+9
  The incoming accumulator value can be discovered through a sext, in which
  case there will be a mismatch between the input and the result. So
  sign-extend the accumulator input if we're performing a 64-bit mac.
  Differential Revision: https://reviews.llvm.org/D67220
  llvm-svn: 371370
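  As an illustrative sketch (not from the commit; names are hypothetical)
  of the mismatch being fixed - the accumulator of a 64-bit mac arrives as
  a 32-bit value through a sext and must be widened before the final add:
  ```
  ; The 32-bit accumulator input is sign-extended to 64 bits so that it
  ; matches the 64-bit multiply-accumulate result type.
  define i64 @mac64(i16 %a, i16 %b, i32 %acc) {
    %sa = sext i16 %a to i64
    %sb = sext i16 %b to i64
    %mul = mul nsw i64 %sa, %sb
    %acc64 = sext i32 %acc to i64
    %res = add i64 %mul, %acc64
    ret i64 %res
  }
  ```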
* [SystemZ] NFC: use clearRegisterDeads() in SystemZElimCompare.cpp · Jonas Paulsson · 2019-09-09 · 1 · -5/+2
  This is simpler than using findRegisterDefOperandIdx() + setIsDead().
  Review: Ulrich Weigand.
  llvm-svn: 371369

* [X86] Add broadcast load unfolding support for vpcmpeq/vpcmpgt/vpcmp/vpcmpu. · Craig Topper · 2019-09-09 · 1 · -0/+24
  llvm-svn: 371368
* [X86] Add broadcast load unfolding support for smin/umin/smax/umax. · Craig Topper · 2019-09-09 · 1 · -0/+24
  llvm-svn: 371366
* AMDGPU: Remove pointless wrapper nodes for init.exec intrinsics · Matt Arsenault · 2019-09-09 · 5 · -28/+6
  llvm-svn: 371364

* [X86] Add broadcast load unfolding support for VMAXPS/PD and VMINPS/PD. · Craig Topper · 2019-09-09 · 1 · -0/+24
  llvm-svn: 371363
* [MachineCopyPropagation] Remove redundant copies after TailDup via machine-cp · Kai Luo · 2019-09-09 · 1 · -0/+65
  Summary: After tail duplication, we have redundant copies. We can remove
  these copies in machine-cp if it's safe to, i.e.
  ```
  $reg0 = OP ...
  ...                 <<< No read or clobber of $reg0 and $reg1
  $reg1 = COPY $reg0  <<< $reg0 is killed
  ...
  <RET>
  ```
  will be transformed to
  ```
  $reg1 = OP ...
  ...
  <RET>
  ```
  Differential Revision: https://reviews.llvm.org/D65267
  llvm-svn: 371359
* [X86] Use xorps to create fp128 +0.0 constants. · Craig Topper · 2019-09-09 · 5 · -4/+24
  This matches what we do for f32/f64. gcc also does this for fp128.
  llvm-svn: 371357
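  As an illustrative sketch (not from the commit; the function name is
  hypothetical), a +0.0 fp128 constant that can now be materialized with a
  register-zeroing xorps instead of a constant-pool load:
  ```
  ; fp128 +0.0; the backend can emit "xorps %xmm0, %xmm0" for this
  ; rather than loading the constant from memory.
  define fp128 @zero_fp128() {
    ret fp128 0xL00000000000000000000000000000000
  }
  ```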
* [X86][SSE] SimplifyDemandedVectorEltsForTargetNode - add faux shuffle support. · Simon Pilgrim · 2019-09-08 · 1 · -26/+52
  This patch decodes target and faux shuffles with getTargetShuffleInputs -
  a reduced version of resolveTargetShuffleInputs that doesn't resolve
  SM_SentinelZero cases, so we can correctly remove zero vectors if they
  aren't demanded.
  llvm-svn: 371353
* [X86] Add a hack to combineVSelectWithAllOnesOrZeros to turn selects with two zero/undef vector inputs into an all-zeroes vector. · Craig Topper · 2019-09-08 · 1 · -0/+9
  If the two zero vectors have undefs in different places, they won't get
  combined by simplifySelect. This fixes a regression from an earlier
  commit.
  llvm-svn: 371351
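  As an illustrative sketch (not from the commit; names are hypothetical),
  a select where both arms are zero vectors but with undef in different
  lanes, so they are distinct nodes that simplifySelect alone won't merge:
  ```
  ; Both operands are all zero-or-undef, with the undefs in different
  ; places; the combine still folds the whole select to an all-zeroes
  ; vector.
  define <4 x i32> @sel_zeros(<4 x i1> %c) {
    %r = select <4 x i1> %c,
                <4 x i32> <i32 0, i32 undef, i32 0, i32 0>,
                <4 x i32> <i32 undef, i32 0, i32 0, i32 0>
    ret <4 x i32> %r
  }
  ```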
* [X86] Remove call to getZeroVector from materializeVectorConstant. Add isel patterns for zero vectors with all types. · Craig Topper · 2019-09-08 · 3 · -9/+17
  The change to avx512-vec-cmp.ll is a regression, but should be easy to
  fix. It occurs because the getZeroVector call was canonicalizing both
  sides to the same node, and then SimplifySelect was able to simplify it.
  But since it only called getZeroVector on some VTs, this isn't a robust
  way to combine this.

  The change to vector-shuffle-combining-ssse3.ll is more instructions, but
  removes a constant pool load, so it's unclear whether it's a regression.
  llvm-svn: 371350
* [InstSimplify] simplifyUnsignedRangeCheck(): if we know that X != 0, handle more cases (PR43246) · Roman Lebedev · 2019-09-08 · 1 · -9/+24
  Summary: This is motivated by the D67122 sanitizer check enhancement.
  That patch seemingly worsens `-fsanitize=pointer-overflow` overhead from
  25% to 50%, which strongly implies missing folds.

  In this particular case, given
  ```
  char* test(char& base, unsigned long offset) {
    return &base + offset;
  }
  ```
  it will end up producing something like https://godbolt.org/z/LK5-iH
  which after optimizations reduces down to roughly
  ```
  define i1 @t0(i8* nonnull %base, i64 %offset) {
    %base_int = ptrtoint i8* %base to i64
    %adjusted = add i64 %base_int, %offset
    %non_null_after_adjustment = icmp ne i64 %adjusted, 0
    %no_overflow_during_adjustment = icmp uge i64 %adjusted, %base_int
    %res = and i1 %non_null_after_adjustment, %no_overflow_during_adjustment
    ret i1 %res
  }
  ```
  Without D67122 there was no `%non_null_after_adjustment`, and in this
  particular case we can get rid of the overhead: here we add some offset
  to a non-null pointer and check that the result does not overflow and is
  not a null pointer. But since the base pointer is already non-null, and
  we check for overflow, that overflow check will already catch the null
  pointer, so the separate null check is redundant and can be dropped.

  Alive proofs: https://rise4fun.com/Alive/WRzq

  There are more patterns of "unsigned-add-with-overflow"; they are not
  handled here, but this is the main pattern, the one we currently consider
  canonical, so it makes sense to handle it.

  https://bugs.llvm.org/show_bug.cgi?id=43246

  Reviewers: spatel, nikic, vsk
  Reviewed By: spatel
  Subscribers: hiraditya, llvm-commits, reames
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D67332
  llvm-svn: 371349
* [X86] X86DAGToDAGISel::combineIncDecVector(): call getSplatBuildVector() manually · Roman Lebedev · 2019-09-08 · 1 · -3/+6
  As reported in post-commit review of r370327, there is a case where the
  code crashes. As discussed with Craig Topper, the problem is that
  getConstant() internally calls getSplatBuildVector(), so we don't insert
  the constant itself. If we do that manually we're good.
  llvm-svn: 371346
* [X86] Use DAG.getConstant instead of getZeroVector in combinePMULDQ. · Craig Topper · 2019-09-08 · 1 · -1/+1
  getZeroVector canonicalizes the type to vXi32, but that's a legalization
  action. We should use the most correct type if possible.
  llvm-svn: 371345
* [DAGCombiner][X86][ARM] Teach visitMULO to fold multiplies with 0 to 0 and no carry. · Craig Topper · 2019-09-08 · 1 · -3/+19
  I modified the ARM test to use two inputs instead of 0 so the test
  hopefully still tests what was intended.
  llvm-svn: 371344
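  As an illustrative sketch (not from the commit; names are hypothetical),
  the shape of the fold at the IR level - a multiply-with-overflow by a
  known zero yields a zero result and no overflow:
  ```
  declare { i32, i1 } @llvm.smul.with.overflow.i32(i32, i32)

  ; Multiplying by 0 can never overflow, so this folds to {0, false}.
  define { i32, i1 } @mulo_zero(i32 %x) {
    %r = call { i32, i1 } @llvm.smul.with.overflow.i32(i32 %x, i32 0)
    ret { i32, i1 } %r
  }
  ```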
* [X86] Teach materializeVectorConstant to not call getZeroVector/getOnesVector on the types we already have isel patterns for. · Craig Topper · 2019-09-08 · 1 · -3/+3
  llvm-svn: 371343
* [InstCombine] fold extract+insert into identity shuffle · Sanjay Patel · 2019-09-08 · 1 · -0/+52
  This is similar to the existing fold for splats added with rL365379.

  If we can adjust the shuffle mask to include another element in an
  identity mask (if it changes vector length, that's an extract/insert
  subvector operation in the backend), then that can eliminate
  extractelement/insertelement pairs in IR.

  All targets are expected to lower shuffles with identity masks
  efficiently.
  llvm-svn: 371340
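  As an illustrative sketch (not from the commit; names are hypothetical),
  reinserting an extracted lane can extend a partial identity mask so the
  extract/insert pair disappears into a single shuffle:
  ```
  ; %shuf already keeps lanes 0 and 1 of %x; inserting lane 2 of %x back
  ; into lane 2 extends the identity mask, so the sequence can fold to
  ;   shufflevector %x, undef, <i32 0, i32 1, i32 2, i32 undef>
  define <4 x float> @ident_shuffle(<4 x float> %x) {
    %shuf = shufflevector <4 x float> %x, <4 x float> undef,
                          <4 x i32> <i32 0, i32 1, i32 undef, i32 undef>
    %e = extractelement <4 x float> %x, i32 2
    %r = insertelement <4 x float> %shuf, float %e, i32 2
    ret <4 x float> %r
  }
  ```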
* [DebugInfo][X86] Describe call site values for zero-valued imms · David Stenberg · 2019-09-08 · 2 · -12/+27
  Summary: Add zero-materializing XORs to X86's describeLoadedValue() hook
  in order to produce call site values.

  I have had to change the defs logic in collectCallSiteParameters() a bit
  to be able to describe the XORs. The XORs implicitly define $eflags,
  which would cause them to never be considered, due to a guard condition
  that I->getNumDefs() is one. I have changed that condition so that we now
  only consider instructions where a forwarded register overlaps with the
  instruction's single explicit define. We still need to collect the
  implicit defines of other forwarded registers to remove them from the
  worklist.

  I'm not sure how to move towards supporting instructions with multiple
  explicit defines, cases where forwarded registers are implicitly defined,
  and/or cases where an instruction produces values for multiple forwarded
  registers. Perhaps the describeLoadedValue() hook should take a register
  argument, and we then leave it up to the hook to describe the loaded
  value in that register? I have not yet encountered a situation where that
  would be necessary though.

  Reviewers: aprantl, vsk, djtodoro, NikolaPrica
  Reviewed By: vsk
  Subscribers: ychen, hiraditya, llvm-commits
  Tags: #debug-info, #llvm
  Differential Revision: https://reviews.llvm.org/D67225
  llvm-svn: 371333
* [NFC] Make the describeLoadedValue() hook return machine operand objects · David Stenberg · 2019-09-08 · 3 · -8/+8
  Summary: This changes the ParamLoadedValue pair which the
  describeLoadedValue() hook returns so that MachineOperand objects are
  returned instead of pointers.

  When describing call site values we may need to describe operands which
  are not part of the instruction. One such example is zero-materializing
  XORs on x86, which I have implemented support for in a child revision.
  Instead of having to return a pointer to an operand stored somewhere
  outside the instruction, start returning objects directly instead, as
  that simplifies the code.

  The MachineOperand class only holds POD members, and on x86-64 it is 32
  bytes large. That combined with copy elision means that the overhead of
  returning a machine operand object from the hook does not become very
  large.

  I benchmarked this on an 8-thread i7-8650U machine with 32 GB RAM. The
  benchmark consisted of building a clang 8.0 binary configured with:

    -DCMAKE_BUILD_TYPE=RelWithDebInfo \
    -DLLVM_TARGETS_TO_BUILD=X86 \
    -DLLVM_USE_SANITIZER=Address \
    -DCMAKE_CXX_FLAGS="-Xclang -femit-debug-entry-values -stdlib=libc++"

  The average wall clock time increased by 4 seconds, from 62:05 to 62:09,
  which is a 0.1% increase.

  Reviewers: aprantl, vsk, djtodoro, NikolaPrica
  Reviewed By: vsk
  Subscribers: hiraditya, ychen, llvm-commits
  Tags: #debug-info, #llvm
  Differential Revision: https://reviews.llvm.org/D67261
  llvm-svn: 371332
* [ARM] Remove declaration of unimplemented function. NFC. · David Green · 2019-09-08 · 1 · -2/+0
  llvm-svn: 371331

* [X86][SSE] Fix out of range shift introduced in D67070/rL371328 · Simon Pilgrim · 2019-09-08 · 1 · -1/+2
  Use APInt to create the comparison mask instead.
  llvm-svn: 371330
* [X86][SSE] Add support for <64 x i1> bool reduction · Simon Pilgrim · 2019-09-08 · 1 · -11/+14
  This generalizes the existing <32 x i1> pre-AVX2 split code to support
  reductions from <64 x i1> as well; we can probably generalize to any
  larger pow2 case in the future if the (unlikely) need ever arises.

  We still need to tweak combineBitcastvxi1 to improve AVX512F codegen, as
  it assumes vXi1 types should be handled on the mask registers even when
  they aren't legal.
  Differential Revision: https://reviews.llvm.org/D67070
  llvm-svn: 371328
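  As an illustrative sketch (not from the commit; names are hypothetical),
  the kind of 64-lane bool reduction this targets, written in the
  bitcast-and-compare form that the split code can lower without mask
  registers:
  ```
  ; "All lanes equal" over 64 bytes: pre-AVX512, the v64i1 mask can be
  ; bitcast to a scalar and compared against all-ones instead of being
  ; handled in mask registers.
  define i1 @all_of(<64 x i8> %a, <64 x i8> %b) {
    %c = icmp eq <64 x i8> %a, %b
    %m = bitcast <64 x i1> %c to i64
    %r = icmp eq i64 %m, -1
    ret i1 %r
  }
  ```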
* [X86] Make getZeroVector return floating point vectors in their native type on SSE2 and later. · Craig Topper · 2019-09-08 · 3 · -2/+23
  isel used to require zero vectors to be canonicalized to a single type to
  minimize the number of patterns needed to match. This is no longer
  required.

  I plan to do this to integers too, but floating point was simpler to
  start with. Integer has a complication where v32i16/v64i8 aren't legal
  when the other 512-bit integer types are.
  llvm-svn: 371325
* [X86] Add support for unfolding broadcast loads from FMA instructions. · Craig Topper · 2019-09-07 · 1 · -0/+121
  llvm-svn: 371323
* [aarch64] Add combine patterns for fp16 fmla · Sebastian Pop · 2019-09-07 · 1 · -62/+280
  This patch enables generation of fused multiply-add/sub for instructions
  operating on fp16. Tested on aarch64-linux.
  Differential Revision: https://reviews.llvm.org/D67297
  llvm-svn: 371321
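  As an illustrative sketch (not from the commit; names are hypothetical),
  a contraction candidate - with fp16 support, the separate fmul/fadd below
  can now fuse into a single half-precision fused multiply-add:
  ```
  ; fmul followed by fadd on half values; the fast flags permit the
  ; backend to contract the pair into one fused multiply-add.
  define half @fmla_f16(half %a, half %b, half %acc) {
    %m = fmul fast half %a, %b
    %r = fadd fast half %m, %acc
    ret half %r
  }
  ```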
* [X86] Add prefer-128-bit subtarget feature. · Craig Topper · 2019-09-07 · 4 · -0/+10
  Summary: Similar to the previous prefer-256-bit flag. We might want to
  enable this by default for some CPUs. This just starts the initial work
  to implement it and prove that it affects TTI's vector width.

  Reviewers: RKSimon, echristo, spatel, atdt
  Reviewed By: RKSimon
  Subscribers: lebedev.ri, hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D67311
  llvm-svn: 371319