path: root/llvm/lib
Commit message / Author / Age / Files / Lines
...
* [AArch64][GlobalISel] Add some missing vector support for FP arithmetic ops. (Amara Emerson, 2019-01-28, 1 file, -2/+2)
  Moved the fneg lowering legalization test from AArch64 to X86, as we want to specify that it's already legal.
  llvm-svn: 352338

* [AArch64][GlobalISel] Add some vector support for fp <-> int conversions. (Amara Emerson, 2019-01-28, 2 files, -2/+6)
  Some unrelated, but benign, test changes as well due to the test update script.
  llvm-svn: 352337

* GlobalISel: Don't reduce elements for atomic load/store (Matt Arsenault, 2019-01-27, 1 file, -1/+9)
  This is invalid for the same reason as in the narrowScalar handling for load.
  llvm-svn: 352334

* [x86] add restriction for lowering to vpermps (Sanjay Patel, 2019-01-27, 1 file, -2/+19)
  This transform was added with rL351346, and we had an escape for shufps, but we also want one for unpckps vs. vpermps because vpermps doesn't take an immediate shuffle index operand.
  llvm-svn: 352333

* GlobalISel: Factor fewerElementVectors into separate functions (Matt Arsenault, 2019-01-27, 1 file, -156/+170)
  llvm-svn: 352332
* [X86][SSE] Add UNDEF handling to combineSelect ISD::USUBSAT matching (PR40083) (Simon Pilgrim, 2019-01-27, 1 file, -5/+7)
  llvm-svn: 352330

* [X86][SSE] Permit UNDEFs in combineAddToSUBUS matching (PR40083) (Simon Pilgrim, 2019-01-27, 1 file, -3/+4)
  llvm-svn: 352328

* [COFF] Add new relocation types. (Martin Storsjo, 2019-01-27, 2 files, -0/+6)
  Differential Revision: https://reviews.llvm.org/D57291
  llvm-svn: 352324

* [x86] refactor logic in lowerShuffleWithUndefHalf (Sanjay Patel, 2019-01-27, 1 file, -28/+49)
  Although this is longer code, this is no-functional-change-intended. The goal is to untangle the conditions under which we bail out, so that's easier to adjust.
  llvm-svn: 352320

* GlobalISel: Verify load/store has a pointer input (Matt Arsenault, 2019-01-27, 1 file, -1/+6)
  I expected this to be automatically verified, but it seems nothing uses that the type index was declared as a "ptype".
  llvm-svn: 352319

* Re-apply "r351584: "GlobalISel: Verify g_zextload and g_sextload"" (Amara Emerson, 2019-01-27, 1 file, -1/+14)
  I reverted it originally due to a bot failing. The underlying bug has been fixed as of r352311.
  llvm-svn: 352312
* [AArch64][GlobalISel] Fix the G_EXTLOAD combiner creating non-extending illegal instructions. (Amara Emerson, 2019-01-27, 1 file, -0/+8)
  This fixes loads like 's1 = load %p (load 1 from %p)' being combined with an extend into an illegal 's8 = g_extload %p (load 1 from %p)' which doesn't do any extension, by avoiding touching those < s8 size loads.
  This bug was uncovered by a verifier update r351584, which I reverted to keep the bots green.
  llvm-svn: 352311
* Revert "Add support for prefix-only CLI options" (Thomas Preud'homme, 2019-01-27, 1 file, -14/+5)
  This reverts commit r351038.
  llvm-svn: 352310
* [X86] Add some missing blsr patterns (Gabor Buella, 2019-01-27, 1 file, -2/+10)
  The add+and sequence followed by a branch can happen e.g. when looping over the set bits of an integer:

  ```
  while (x != 0) {
    func(x & -x);
    x &= x - 1;
  }
  ```

  Reviewed By: ctopper
  Differential Revision: https://reviews.llvm.org/D57296
  llvm-svn: 352306
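  The loop in the commit message can be exercised directly. A minimal Python sketch (illustrative only, not LLVM code) of the `x & -x` / `x &= x - 1` idiom these BLSR patterns target; the helper name `set_bits` is an assumption for the example:

  ```python
  def set_bits(x: int) -> list:
      """Collect each isolated set bit of x, lowest first."""
      bits = []
      while x != 0:
          bits.append(x & -x)   # isolate the lowest set bit (BLSI)
          x &= x - 1            # clear the lowest set bit (BLSR)
      return bits

  print(set_bits(0b10110))  # -> [2, 4, 16]
  ```

  Each trip through the loop performs exactly the sub+and sequence the new patterns match.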
* [X86] Add a pattern for (i64 (and (anyext def32:), 0x00000000FFFFFFFF)) to produce SUBREG_TO_REG (Craig Topper, 2019-01-27, 1 file, -0/+2)
  def32 here means the producing instruction zeroed bits 63:32. We already do this for zext, but it looks like we can get an and+anyext sometimes. Spotted in the diffs from D33587.
  llvm-svn: 352303
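  The reason the `and` can be dropped: if the producing instruction already zeroed bits 63:32 (the def32 condition), masking with 0x00000000FFFFFFFF is a no-op, so SUBREG_TO_REG (which simply records that fact) suffices. A small Python model of that invariant, with `def32` as a hypothetical stand-in for any 32-bit-defining instruction:

  ```python
  MASK32 = 0xFFFFFFFF

  def def32(x: int) -> int:
      """Model a 32-bit-defining x86-64 instruction: hardware zeroes bits 63:32."""
      return x & MASK32

  for x in (0, 1, 0x1234ABCD, 0xFFFFFFFF, 0xDEADBEEFCAFEF00D):
      v = def32(x)
      # the (and v, 0x00000000FFFFFFFF) changes nothing on such a value,
      # so the pattern can emit SUBREG_TO_REG instead of a real AND
      assert (v & MASK32) == v
  ```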
* GlobalISel: Fix typo in assert messages (Matt Arsenault, 2019-01-27, 1 file, -2/+2)
  llvm-svn: 352301

* GlobalISel: Implement narrowScalar for mul (Matt Arsenault, 2019-01-27, 2 files, -0/+48)
  llvm-svn: 352300

* GlobalISel: fewerElementsVector for intrinsic_trunc/intrinsic_round (Matt Arsenault, 2019-01-27, 2 files, -2/+5)
  llvm-svn: 352298

* AMDGPU/GlobalISel: Use scalarize instead of clampMaxNumElements (Matt Arsenault, 2019-01-26, 1 file, -2/+1)
  llvm-svn: 352297

* [GlobalISel][IRTranslator] Fix crash on translation of fneg. (Amara Emerson, 2019-01-26, 1 file, -1/+1)
  When the fneg IR instruction was added the code to do translation wasn't tested, and tried to get an invalid operand.
  llvm-svn: 352296

* AMDGPU/GlobalISel: Legalize more bit ops (Matt Arsenault, 2019-01-26, 2 files, -4/+10)
  llvm-svn: 352295

* AMDGPU/GlobalISel: Widen small uaddo/usubo (Matt Arsenault, 2019-01-26, 1 file, -1/+2)
  llvm-svn: 352294
* [ValueTracking] Look through casts when determining non-nullness (Johannes Doerfert, 2019-01-26, 1 file, -0/+22)
  Bitcast and certain Ptr2Int/Int2Ptr instructions will not alter the value of their operand and can therefore be looked through when we determine non-nullness.
  Differential Revision: https://reviews.llvm.org/D54956
  llvm-svn: 352293
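  The idea can be sketched abstractly: casts that cannot change the underlying value cannot change its nullness, so the analysis may strip them before consulting known facts. A hedged Python model (not the actual ValueTracking code; the tuple encoding and function names are invented for illustration, and the real implementation only looks through ptrtoint/inttoptr in certain size-preserving cases, per the commit message):

  ```python
  # Model an IR value as either an opaque name or ("castkind", operand).
  VALUE_PRESERVING = {"bitcast", "inttoptr", "ptrtoint"}

  def strip_value_preserving_casts(v):
      """Walk through casts that do not alter the underlying value."""
      while isinstance(v, tuple) and v[0] in VALUE_PRESERVING:
          v = v[1]
      return v

  def is_known_nonnull(v, nonnull_facts):
      # Non-nullness of the stripped value transfers across the cast chain.
      return strip_value_preserving_casts(v) in nonnull_facts

  p = "ptr_from_alloca"  # underlying object known non-null
  casted = ("bitcast", ("inttoptr", ("ptrtoint", p)))
  assert is_known_nonnull(casted, {p})
  ```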
* [X86] combineAddOrSubToADCOrSBB/combineCarryThroughADD - use oneuse for entire SDNode (Simon Pilgrim, 2019-01-26, 1 file, -2/+3)
  Fix issue noted in D57281 that only tested the one use for the SDValue (the result flag), not the entire SUB. I've added the getNode() to make it clearer what is intended than just the -> redirection.
  llvm-svn: 352291
* [X86] combineCarryThroughADD - add support for X86::COND_A commutations (PR24545) (Simon Pilgrim, 2019-01-26, 1 file, -6/+25)
  As discussed on PR24545, we should try to commute X86::COND_A 'icmp ugt' cases to X86::COND_B 'icmp ult' to more optimally bind the carry flag output to a SBB instruction.
  Differential Revision: https://reviews.llvm.org/D57281
  llvm-svn: 352289
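  The commutation rests on a simple identity: an unsigned greater-than is an unsigned less-than with the operands swapped. A quick Python check of the property the combine exploits (Python ints here stand in for already-unsigned machine values):

  ```python
  def ugt(a, b):  # icmp ugt (X86::COND_A)
      return a > b

  def ult(a, b):  # icmp ult (X86::COND_B), i.e. the carry flag of a - b
      return a < b

  vals = [0, 1, 2, 0xFFFFFFFE, 0xFFFFFFFF]
  for a in vals:
      for b in vals:
          # COND_A on (a, b) can always be rewritten as COND_B on (b, a)
          assert ugt(a, b) == ult(b, a)
  ```

  Rewriting to COND_B matters because ult is exactly the carry-flag condition, which SBB consumes directly.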
* [X86] Fold X86ISD::SBB(ISD::SUB(X,Y),0) -> X86ISD::SBB(X,Y) (PR25858) (Simon Pilgrim, 2019-01-26, 1 file, -0/+9)
  We often generate X86ISD::SBB(X, 0) for carry flag arithmetic. I had tried to create test cases for the ADC equivalent (which often uses the same pattern) but haven't managed to find anything yet.
  Differential Revision: https://reviews.llvm.org/D57169
  llvm-svn: 352288

* [X86][SSE] Generalized unsigned compares to support nonsplat constant vectors (PR39859) (Simon Pilgrim, 2019-01-26, 1 file, -7/+10)
  llvm-svn: 352283

* [x86] add helper for creating a half-width shuffle; NFC (Sanjay Patel, 2019-01-26, 1 file, -28/+39)
  This reduces a bit of duplication between the combining and lowering places that use it, but the primary motivation is to make it easier to rearrange the lowering logic and solve PR40434:
  https://bugs.llvm.org/show_bug.cgi?id=40434
  llvm-svn: 352280

* [X86] Remove and autoupgrade vpconflict intrinsics that take a mask and passthru argument. (Craig Topper, 2019-01-26, 2 files, -12/+16)
  We have unmasked versions as of r352172.
  llvm-svn: 352270
* Revert r352255 "[SelectionDAG][X86] Don't use SEXTLOAD for promoting masked loads in the type legalizer" (Craig Topper, 2019-01-26, 2 files, -16/+4)
  This might be breaking an lldb windows buildbot.
  llvm-svn: 352268

* [X86] Remove GCCBuiltins from 512-bit cvt(u)qqtops, cvt(u)qqtopd, and cvt(u)dqtops intrinsics. Add new variadic uitofp/sitofp with rounding mode intrinsics. (Craig Topper, 2019-01-26, 2 files, -39/+34)
  Summary: See clang patch D56998 for a full description.
  Reviewers: RKSimon, spatel
  Reviewed By: RKSimon
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D56999
  llvm-svn: 352266
* [WebAssembly][NFC] Group SIMD-related ISel configuration (Thomas Lively, 2019-01-26, 1 file, -59/+45)
  Reviewers: aheejin
  Subscribers: dschuff, sbc100, jgravelle-google, hiraditya, sunfish
  Differential Revision: https://reviews.llvm.org/D57263
  llvm-svn: 352262

* [PowerPC] Update Vector Costs for P9 (Nemanja Ivanovic, 2019-01-26, 5 files, -12/+59)
  For the power9 CPU, vector operations consume a pair of execution units rather than one execution unit like a scalar operation. Update the target transform cost functions to reflect the higher cost of vector operations when targeting Power9.
  Patch by RolandF.
  Differential revision: https://reviews.llvm.org/D55461
  llvm-svn: 352261

* [X86] Add DAG combine to merge vzext_movl with the various fp<->int conversion operations that only write the lower 64-bits of an xmm register and zero the rest. (Craig Topper, 2019-01-26, 3 files, -84/+26)
  Summary: We have isel patterns for this, but we're missing some load patterns and all broadcast patterns. A DAG combine seems like a better fit for this.
  Reviewers: RKSimon, spatel
  Reviewed By: RKSimon
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D56971
  llvm-svn: 352260
* [SelectionDAG][X86] Don't use SEXTLOAD for promoting masked loads in the type legalizer (Craig Topper, 2019-01-26, 2 files, -4/+16)
  Summary: I'm not sure why we were using SEXTLOAD. EXTLOAD seems more appropriate since we don't care about the upper bits.
  This patch changes this and then modifies the X86 post legalization combine to emit an extending shuffle instead of a sign_extend_vector_inreg. Could maybe use an any_extend_vector_inreg, but I just did what we already do in LowerLoad. I think we can actually get rid of this code entirely if we switch to -x86-experimental-vector-widening-legalization.
  On AVX512 targets I think we might be able to use a masked vpmovzx and not have to expand this at all.
  Reviewers: RKSimon, spatel
  Reviewed By: RKSimon
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D57186
  llvm-svn: 352255
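  The "we don't care about the upper bits" rationale can be illustrated with an 8-to-16-bit lane extension: sign- and zero-extension differ only above bit 7, so any consumer that only looks at the low 8 bits sees the same value either way, which is what makes the weaker EXTLOAD (any-extend) safe here. A small Python model (illustrative, not the legalizer code):

  ```python
  def sext8_to_16(b: int) -> int:
      """Sign-extend an 8-bit value to 16 bits."""
      return b | 0xFF00 if b & 0x80 else b

  def zext8_to_16(b: int) -> int:
      """Zero-extend an 8-bit value to 16 bits."""
      return b & 0xFF

  for b in range(256):
      s, z = sext8_to_16(b), zext8_to_16(b)
      # the upper 8 bits may differ between the two extensions...
      # ...but the low 8 bits -- the only bits the original load defined --
      # are identical, so an any-extending load may produce either result
      assert (s & 0xFF) == (z & 0xFF) == b
  ```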
* [NFC] Test commit: fix typo. (Alexey Lapshin, 2019-01-25, 1 file, -1/+1)
  llvm-svn: 352248

* [RISCV] Add target DAG combine for bitcast fabs/fneg on RV32FD (Alex Bradbury, 2019-01-25, 1 file, -3/+28)
  DAGCombiner::visitBITCAST will perform:
    fold (bitconvert (fneg x)) -> (xor (bitconvert x), signbit)
    fold (bitconvert (fabs x)) -> (and (bitconvert x), (not signbit))
  As shown in double-bitmanip-dagcombines.ll, this can be advantageous. But RV32FD doesn't use bitcast directly (as i64 isn't a legal type), and instead uses RISCVISD::SplitF64. This patch adds an equivalent DAG combine for SplitF64.
  llvm-svn: 352247
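  Both folds are pure sign-bit manipulation on the IEEE-754 bit pattern, which is easy to verify with Python's `struct` module (the same identities the DAGCombiner folds and the new SplitF64 combine rely on):

  ```python
  import struct

  SIGNBIT = 1 << 63  # sign bit of a 64-bit double

  def to_bits(x: float) -> int:
      return struct.unpack("<Q", struct.pack("<d", x))[0]

  def from_bits(b: int) -> float:
      return struct.unpack("<d", struct.pack("<Q", b))[0]

  for x in (0.0, 1.5, -2.25, 3.141592653589793):
      # fold (bitconvert (fneg x)) -> (xor (bitconvert x), signbit)
      assert from_bits(to_bits(x) ^ SIGNBIT) == -x
      # fold (bitconvert (fabs x)) -> (and (bitconvert x), (not signbit))
      assert from_bits(to_bits(x) & ~SIGNBIT) == abs(x)
  ```

  On RV32FD the 64-bit pattern lives in a register pair (hence SplitF64), but the combine only ever touches the high half containing the sign bit.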
* [llvm] Opt-in flag for X86DiscriminateMemOps (Mircea Trofin, 2019-01-25, 2 files, -1/+13)
  Summary: Currently, if an instruction with a memory operand has no debug information, X86DiscriminateMemOps will generate one based on the first line of the enclosing function, or the last seen debug info. This may cause confusion in certain debugging scenarios. The long term approach would be to use the line number '0' in such cases, however, that brings in challenges: the base discriminator value range is limited (4096 values).
  For the short term, adding an opt-in flag for this feature.
  See bug 40319 (https://bugs.llvm.org/show_bug.cgi?id=40319)
  Reviewers: dblaikie, jmorse, gbedwell
  Reviewed By: dblaikie
  Subscribers: aprantl, eraman, hiraditya
  Differential Revision: https://reviews.llvm.org/D57257
  llvm-svn: 352246

* [GlobalISel][AArch64][NFC] Fix incorrect comment in selectUnmergeValues (Jessica Paquette, 2019-01-25, 1 file, -1/+1)
  s/scalar/vector/
  llvm-svn: 352243
* Revert rL352238. (Alina Sbirlea, 2019-01-25, 1 file, -2/+2)
  llvm-svn: 352241

* [WarnMissedTransforms] Set default to 1. (Alina Sbirlea, 2019-01-25, 1 file, -2/+2)
  Summary: Set default value for retrieved attributes to 1, since the check is against 1. Eliminates the warning noise generated when the attributes are not present.
  Reviewers: sanjoy
  Subscribers: jlebar, llvm-commits
  Differential Revision: https://reviews.llvm.org/D57253
  llvm-svn: 352238

* Reapply: [RISCV] Set isAsCheapAsAMove for ADDI, ORI, XORI, LUI (Ana Pazos, 2019-01-25, 3 files, -3/+18)
  This reapplies commit r352010 with RISC-V test fixes.
  llvm-svn: 352237
* [MBP] Don't move bottom block before header if it can't reduce taken branches (Guozhi Wei, 2019-01-25, 1 file, -0/+38)
  If bottom of block BB has only one successor OldTop, in most cases it is profitable to move it before OldTop, except the following case:

      -->OldTop<-
      |    .    |
      |    .    |
      |    .    |
      ---Pred   |
           |    |
           BB----

  Moving BB before OldTop can't reduce the number of taken branches; this patch detects this case and prevents the move.
  Differential Revision: https://reviews.llvm.org/D57067
  llvm-svn: 352236
* [X86] Combine masked store and truncate into masked truncating stores. (Craig Topper, 2019-01-25, 1 file, -5/+13)
  We also need to combine to masked truncating with saturation stores, but I'm leaving that for a future patch.
  This does regress some tests that used truncate with saturation followed by a masked store. Those now use a truncating store and use min/max to saturate.
  Differential Revision: https://reviews.llvm.org/D57218
  llvm-svn: 352230
* [HotColdSplit] Introduce a cost model to control splitting behavior (Vedant Kumar, 2019-01-25, 1 file, -36/+91)
  The main goal of the model is to avoid *increasing* function size, as that would eradicate any memory locality benefits from splitting. This happens when:
  - There are too many inputs or outputs to the cold region. Argument materialization and reloads of outputs have a cost.
  - The cold region has too many distinct exit blocks, causing a large switch to be formed in the caller.
  - The code size cost of the split code is less than the cost of a set-up call.
  A secondary goal is to prevent excessive overall binary size growth.
  With the cost model in place, I experimented to find a splitting threshold that works well in practice. To make warm & cold code easily separable for analysis purposes, I moved split functions to a "cold" section. I experimented with thresholds between [0, 4] and set the default to the threshold which minimized geomean __text size.
  Experiment data from building LNT+externals for X86 (N = 639 programs, all sizes in bytes):

  | Configuration | __text geom size | __cold geom size   | TEXT geom size |
  | **-Os**       | 1736.3           | 0, n=0             | 10961.6        |
  | -Os, thresh=0 | 1740.53          | 124.482, n=134     | 11014          |
  | -Os, thresh=1 | 1734.79          | 57.8781, n=90      | 10978.6        |
  | -Os, thresh=2 | ** 1733.85 **    | 65.6604, n=61      | 10977.6        |
  | -Os, thresh=3 | 1733.85          | 65.3071, n=61      | 10977.6        |
  | -Os, thresh=4 | 1735.08          | 67.5156, n=54      | 10965.7        |
  | **-Oz**       | 1554.4           | 0, n=0             | 10153          |
  | -Oz, thresh=2 | ** 1552.2 **     | 65.633, n=61       | 10176          |
  | **-O3**       | 2563.37          | 0, n=0             | 13105.4        |
  | -O3, thresh=2 | ** 2559.49 **    | 71.1072, n=61      | 13162.4        |

  Picking thresh=2 reduces the geomean __text section size by 0.14% at -Os, -Oz, and -O3 and causes ~0.2% growth in the TEXT segment. Note that TEXT size is page-aligned, whereas section sizes are byte-aligned.
  Experiment data from building LNT+externals for ARM64 (N = 558 programs, all sizes in bytes):

  | Configuration | __text geom size | __cold geom size   | TEXT geom size |
  | **-Os**       | 1763.96          | 0, n=0             | 42934.9        |
  | -Os, thresh=2 | ** 1760.9 **     | 76.6755, n=61      | 42934.9        |

  Picking thresh=2 reduces the geomean __text section size by 0.17% at -Os and causes no growth in the TEXT segment.
  Measurements were done with D57082 (r352080) applied.
  Differential Revision: https://reviews.llvm.org/D57125
  llvm-svn: 352228
* [MC] Teach the MachO object writer about N_FUNC_COLD (Vedant Kumar, 2019-01-25, 5 files, -0/+15)
  N_FUNC_COLD is a new MachO symbol attribute. It's a hint to the linker to order a symbol towards the end of its section, to improve locality.
  Example:

  ```
  void a1() {}
  __attribute__((cold)) void a2() {}
  void a3() {}
  int main() {
    a1();
    a2();
    a3();
    return 0;
  }
  ```

  A linker that supports N_FUNC_COLD will order _a2 to the end of the text section. From `nm -njU` output, we see:

  ```
  _a1
  _a3
  _main
  _a2
  ```

  Differential Revision: https://reviews.llvm.org/D57190
  llvm-svn: 352227
* [x86] simplify logic in lowerShuffleWithUndefHalf(); NFCI (Sanjay Patel, 2019-01-25, 1 file, -7/+9)
  This seems unnecessarily complicated because we gave names to opposite polarity bools and have code comments that don't really line up with the logic.
  Step 1: remove UndefUpper and assert that it is the opposite of UndefLower after the initial early exit.
  llvm-svn: 352217

* [DiagnosticInfo] Add support for preserving newlines in remark arguments. (Florian Hahn, 2019-01-25, 1 file, -1/+23)
  This patch adds a new type StringBlockVal which can be used to emit a YAML block scalar, which preserves newlines in a multiline string. It also updates MappingTraits<DiagnosticInfoOptimizationBase::Argument> to use it for argument values with more than a single newline. This is helpful for remarks that want to display more in-depth information in a more structured way.
  Reviewers: thegameg, anemet
  Reviewed By: anemet
  Subscribers: hfinkel, hiraditya, llvm-commits
  Differential Revision: https://reviews.llvm.org/D57159
  llvm-svn: 352216

* [TEST][COMMIT] - fix comment typo in AsmPrinter/DwarfDebug.cpp - NFC (Tom Weaver, 2019-01-25, 1 file, -1/+1)
  llvm-svn: 352214

* [X86] Simplify X86ISD::ADD/SUB if we don't use the result flag (Simon Pilgrim, 2019-01-25, 1 file, -0/+18)
  Simplify to the generic ISD::ADD/SUB if we don't make use of the result flag. This mainly helps with ADDCARRY/SUBBORROW intrinsics which get expanded to X86ISD::ADD/SUB but could be simplified further.
  Noticed in some of the test cases in PR31754.
  Differential Revision: https://reviews.llvm.org/D57234
  llvm-svn: 352210