path: root/llvm/lib
...
* [BitcodeReader] Infer the correct runtime preemption for GlobalValue (Steven Wu, 2018-07-09; 1 file changed, -0/+11)

  Summary: To allow bitcode built by an old compiler to pass the current
  verifier, BitcodeReader needs to auto-infer the correct runtime preemption
  from the linkage and visibility of GlobalValues. Since llvm-6.0 bitcode
  already contains the new field but can be incorrect in some cases, the
  attribute needs to be recomputed every time in BitcodeReader. This marks
  dso_local correctly on all GlobalValues read from bitcode, and it should
  still allow the verifier to catch mistakes in optimization passes. This
  should fix PR38009.

  Reviewers: sfertile, vsk
  Reviewed By: vsk
  Subscribers: dexonsmith, llvm-commits
  Differential Revision: https://reviews.llvm.org/D49039
  llvm-svn: 336560
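  The inference described here boils down to: a GlobalValue that cannot be
  preempted at runtime can be marked dso_local. A minimal sketch of that rule
  using the standard GlobalValue API (an illustration only, not necessarily
  the exact BitcodeReader code):

      #include "llvm/IR/GlobalValue.h"

      // Local linkage, or a non-default visibility that is not extern_weak,
      // implies the definition cannot be preempted at runtime.
      static void inferDSOLocal(llvm::GlobalValue *GV) {
        if (GV->hasLocalLinkage() ||
            (!GV->hasDefaultVisibility() && !GV->hasExternalWeakLinkage()))
          GV->setDSOLocal(true);
      }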
* [InstCombine] generalize safe vector constant utility (Sanjay Patel, 2018-07-09; 3 files changed, -30/+23)

  This is almost NFC, but there could be some cases where the original code
  had undefs in the constants (rather than just the shuffle mask), and we'll
  use safe constants rather than undefs now. The FIXME noted in
  foldShuffledBinop() is already visible in existing tests, so correcting
  that is the next step.

  llvm-svn: 336558
* [X86] Remove some patterns that include a bitcast of a floating point load to an integer type. (Craig Topper, 2018-07-09; 2 files changed, -10/+0)

  DAG combine should have converted the type of the load.

  llvm-svn: 336557
* [X86] Remove some patterns that seem to be unreachable. (Craig Topper, 2018-07-09; 2 files changed, -12/+0)

  These patterns mapped (v2f64 (X86vzmovl (v2f64 (scalar_to_vector
  FR64:$src)))) to a MOVSD and a zeroing XOR. But the complexity of a pattern
  for (v2f64 (X86vzmovl (v2f64))) that selects MOVQ is artificially high and
  hides this MOVSD pattern.

  Weirder still, the SSE version of the pattern was explicitly blocked on
  SSE41, yet we had copied it to AVX and AVX512.

  llvm-svn: 336556
* [X86] Remove some seemingly unnecessary AddedComplexity lines. (Craig Topper, 2018-07-09; 1 file changed, -8/+4)

  Looking at the generated tables this didn't seem to make an obvious
  difference in pattern priority.

  llvm-svn: 336555
* [VPlan][LV] Introduce condition bit in VPBlockBase (Diego Caballero, 2018-07-09; 5 files changed, -24/+70)

  This patch introduces a VPValue in VPBlockBase to represent the condition
  bit that is used as successor selector when a block has multiple
  successors. This information wasn't necessary until now, when we are about
  to introduce outer loop vectorization support in VPlan code gen.

  Reviewers: fhahn, rengolin, mkuper, hfinkel, mssimpso
  Reviewed By: fhahn
  Differential Revision: https://reviews.llvm.org/D48814
  llvm-svn: 336554
* [AArch64][SVE] Asm: Support for CNT(B|H|W|D) and CNTP instructions. (Sander de Smalen, 2018-07-09; 2 files changed, -13/+72)

  This patch adds support for the following instructions:

  CNTB, CNTH, CNTW, CNTD - Determine the number of active elements implied
                           by the named predicate constant, multiplied by an
                           immediate, e.g.

    cnth x0, vl8, #16

  CNTP - Count active predicate elements, e.g.

    cntp x0, p0, p1.b

  counts the number of active elements in p1, predicated by p0, and stores
  the result in x0.

  llvm-svn: 336552
* [CVP] Handle calls with void return value. No need to create CVPLattice state for it. (Xin Tong, 2018-07-09; 1 file changed, -0/+10)

  Summary:
  Tests: 10
  Metric: compile_time

  Program                            unpatch-result  patch-result  diff
  Bullet/bullet                      32.39           30.54         -5.7%
  SPASS/SPASS                        18.14           17.25         -4.9%
  mafft/pairlocalalign               12.10           11.64         -3.8%
  ClamAV/clamscan                    19.21           19.63          2.2%
  7zip/7zip-benchmark                49.55           48.85         -1.4%
  kimwitu++/kc                       15.68           15.87          1.2%
  lencod/lencod                      21.13           21.34          1.0%
  consumer-typeset/consumer-typeset  13.65           13.62         -0.2%
  tramp3d-v4/tramp3d-v4              29.88           29.92          0.1%
  sqlite3/sqlite3                    18.48           18.46         -0.1%

         unpatch-result  patch-result   diff
  count  10.000000       10.000000      10.000000
  mean   23.022000       22.712400     -0.011671
  std    11.362831       11.094183      0.027338
  min    12.104000       11.640000     -0.057298
  25%    16.299000       16.214000     -0.032282
  50%    18.844000       19.048000     -0.001350
  75%    27.689000       27.774000      0.007752
  max    49.552000       48.852000      0.021861

  I also tested only this pass by concatenating all the code from the
  llvm/lib/Analysis/ folder, compiling with clang -g, and then running opt.
  I get close to a 20% speedup for the pass. I expect the majority of the
  gain comes from skipping the dbg intrinsics.

  Before patch (opt -time-passes -called-value-propagation):
  ===-------------------------------------------------------------------------===
                        ... Pass execution timing report ...
  ===-------------------------------------------------------------------------===
    Total Execution Time: 3.8303 seconds (3.8279 wall clock)

    ---User Time---   --System Time--   --User+System--   ---Wall Time---  --- Name ---
    2.0768 ( 57.3%)   0.0990 ( 48.0%)   2.1757 ( 56.8%)   2.1757 ( 56.8%)  Bitcode Writer
    0.8444 ( 23.3%)   0.0600 ( 29.1%)   0.9044 ( 23.6%)   0.9044 ( 23.6%)  Called Value Propagation
    0.7031 ( 19.4%)   0.0472 ( 22.9%)   0.7502 ( 19.6%)   0.7478 ( 19.5%)  Module Verifier
    3.6242 (100.0%)   0.2062 (100.0%)   3.8303 (100.0%)   3.8279 (100.0%)  Total

  After patch (opt -time-passes -called-value-propagation):
  ===-------------------------------------------------------------------------===
                        ... Pass execution timing report ...
  ===-------------------------------------------------------------------------===
    Total Execution Time: 3.6605 seconds (3.6579 wall clock)

    ---User Time---   --System Time--   --User+System--   ---Wall Time---  --- Name ---
    2.0716 ( 59.7%)   0.0990 ( 52.5%)   2.1705 ( 59.3%)   2.1706 ( 59.3%)  Bitcode Writer
    0.7144 ( 20.6%)   0.0300 ( 15.9%)   0.7444 ( 20.3%)   0.7444 ( 20.4%)  Called Value Propagation
    0.6859 ( 19.8%)   0.0596 ( 31.6%)   0.7455 ( 20.4%)   0.7429 ( 20.3%)  Module Verifier
    3.4719 (100.0%)   0.1886 (100.0%)   3.6605 (100.0%)   3.6579 (100.0%)  Total

  Reviewers: davide, mssimpso
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D49078
  llvm-svn: 336551
* [Power9] Add __float128 support for compare operations (Stefan Pintilie, 2018-07-09; 3 files changed, -2/+75)

  Also added handling for the f128 select operation.

  Differential Revision: https://reviews.llvm.org/D48294
  llvm-svn: 336548
* [AArch64][SVE] Asm: Support for remaining shift instructions. (Sander de Smalen, 2018-07-09; 2 files changed, -26/+127)

  This patch completes support for shifts, which include:
  - LSL  - Logical Shift Left
  - LSLR - Logical Shift Left, Reversed form
  - LSR  - Logical Shift Right
  - LSRR - Logical Shift Right, Reversed form
  - ASR  - Arithmetic Shift Right
  - ASRR - Arithmetic Shift Right, Reversed form
  - ASRD - Arithmetic Shift Right for Divide

  In the following variants:
  - Predicated shift by immediate - ASR, LSL, LSR, ASRD
      e.g. asr z0.h, p0/m, z0.h, #1
      (active lanes of z0 shifted by #1)
  - Unpredicated shift by immediate - ASR, LSL*, LSR*
      e.g. asr z0.h, z1.h, #1
      (all lanes of z1 shifted by #1, stored in z0)
  - Predicated shift by vector - ASR, LSL*, LSR*
      e.g. asr z0.h, p0/m, z0.h, z1.h
      (active lanes of z0 shifted by z1, stored in z0)
  - Predicated shift by vector, reversed form - ASRR, LSLR, LSRR
      e.g. lslr z0.h, p0/m, z0.h, z1.h
      (active lanes of z1 shifted by z0, stored in z0)
  - Predicated shift left/right by wide vector - ASR, LSL, LSR
      e.g. lsl z0.h, p0/m, z0.h, z1.d
      (active lanes of z0 shifted by wide elements of vector z1)
  - Unpredicated shift left/right by wide vector - ASR, LSL, LSR
      e.g. lsl z0.h, z1.h, z2.d
      (all lanes of z1 shifted by wide elements of z2, stored in z0)

  *Variants added in previous patches.

  llvm-svn: 336547
* [InstCombine] fix shuffle-of-binops transform to avoid poison/undef (Sanjay Patel, 2018-07-09; 1 file changed, -21/+52)

  As noted in D48987, there are many different ways for this transform to go
  wrong. In particular, the poison potential for shifts means we have to be
  more careful with those ops. I added tests to make that behavior visible
  for all of the different cases that I could find.

  This is a partial fix. To make this review easier, I did not make changes
  for the single binop pattern (handled in foldSelectShuffleWith1Binop()). I
  also left out some potential optimizations noted with TODO comments. I'll
  follow up once we're confident that things are correct here.

  The goal is to correct all marked FIXME tests to either avoid the shuffle
  transform or do it safely.

  Note that distinguishing when the shuffle mask contains undefs and using
  getBinOpIdentity() allows for some improvements to div/rem patterns, so
  there are wins along with the missed opportunities and fixes.

  Differential Revision: https://reviews.llvm.org/D49047
  llvm-svn: 336546
* [mips] Addition of the [d]rem and [d]remu instructions (Stefan Maksimovic, 2018-07-09; 3 files changed, -25/+116)

  Related to http://reviews.llvm.org/D15772
  Depends on http://reviews.llvm.org/D16889
  Adds [D]REM[U] instructions.

  Patch By: Srdjan Obucina
  Contributions from: Simon Dardis

  Differential Revision: https://reviews.llvm.org/D17036
  llvm-svn: 336545
* [AArch64][SVE] Asm: Support for TBL instruction. (Sander de Smalen, 2018-07-09; 2 files changed, -0/+35)

  Support for SVE's TBL instruction for programmable table lookup/permute,
  using a vector of element indices, e.g.

    tbl z0.d, { z1.d }, z2.d

  stores elements from z1, indexed by elements from z2, into z0.

  llvm-svn: 336544
* [Support] Make JSON handle doubles and int64s losslessly (Sam McCall, 2018-07-09; 1 file changed, -16/+26)

  Summary: This patch adds a new "integer" ValueType, and renames Number ->
  Double. This allows us to preserve the full precision of int64_t when
  parsing integers from the wire, or constructing from an integer. The API is
  unchanged, other than giving asInteger() a clearer contract.

  In addition, always output doubles with enough precision that parsing will
  reconstruct the same double.

  Reviewers: simon_tatham
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D46209
  llvm-svn: 336541
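  The motivation for the separate integer type is that a double only has a
  53-bit significand, so int64 values above 2^53 do not survive a double
  round-trip. A minimal standalone check of that fact (plain C++, not the
  llvm::json API):

      #include <cstdint>
      #include <cstdio>

      int main() {
        // 2^53 + 1 is the first integer a double cannot represent exactly.
        int64_t Big = (INT64_C(1) << 53) + 1;
        int64_t RoundTripped = static_cast<int64_t>(static_cast<double>(Big));
        std::printf("%lld -> %lld\n", (long long)Big, (long long)RoundTripped);
        // Prints: 9007199254740993 -> 9007199254740992
      }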
* [PM/Unswitch] Fix a nasty bug in the new PM's unswitch introduced in r335553 with the non-trivial unswitching of switches. (Chandler Carruth, 2018-07-09; 1 file changed, -26/+81)

  The code correctly updated most aspects of the CFG and analyses, but missed
  some crucial aspects:

  1) When multiple cases have the same successor, we unswitch that a single
     time and replace the switch with a direct branch. The CFG here is
     correct, but the target of this direct branch may have had a PHI node
     with multiple entries in it.

  2) When we still have to clone a successor of the switch into an unswitched
     copy of the loop, we'll delete potentially multiple edges entering this
     successor, not just one.

  3) We also have to delete multiple edges entering the successors in the
     original loop when they have to be retained.

  4) When the "retained successor" *also* occurs as a case successor, we just
     assert failed everywhere. This doesn't happen very easily because it's
     always valid to simply drop the case -- the retained successor for
     switches is always the default successor. However, it is likely possible
     through some contrivance of different loop passes, unrolling, and
     simplifying for this to occur in practice, and certainly there is
     nothing "invalid" about the IR, so this pass needs to handle it.

  5) In the case of #4, we also will replace these multiple edges with a
     direct branch much like in #1 and need to collapse the entries in any
     PHI nodes to a single entry.

  All of this stems from the delightful fact that the same successor can show
  up in multiple parts of the switch terminator, and each of these is
  considered a distinct edge for the purpose of PHI nodes (and iterating the
  successors and predecessors) but not for unswitching itself, the dominator
  tree, or many other things. For the record, I intensely dislike this
  "feature" of the IR in large part because of the complexity it causes in
  passes like this. We already have a ton of logic building sets and handling
  duplicates, and we just had to add a bunch more.

  I've added a complex test case that covers all five of the above failure
  modes. I've also added a variation on it where #4 and #5 occur in a loop
  exit, adding fun where we have an LCSSA PHI node with "multiple entries"
  despite having dedicated exits. There were no additional issues found by
  this, but it seems a useful corner case to cover with testing.

  One thing that working on all of this code has made painfully clear for me
  as well is how amazingly inefficient our PHI node representation is (in
  terms of the in-memory data structures and the APIs used to update them).
  This code has truly marvelous complexity bounds because every time we
  remove an entry from a PHI node we do a linear scan to find it and then a
  linear update to the data structure to remove it. We could in theory batch
  all of the PHI node updates into a single linear walk of the operands,
  making this much more efficient, but the APIs fight hard against this, and
  the fact that we have to handle duplicates in the peculiar manner we do
  (removing all but one in some cases) makes even implementing that very
  tedious and annoying. Anyways, none of this is new here or specific to loop
  unswitching. All code in LLVM that updates PHI node operands suffers from
  these problems.

  llvm-svn: 336536
* Lift JSON library from clang-tools-extra/clangd to llvm/Support. (Sam McCall, 2018-07-09; 2 files changed, -0/+643)

  Summary: This consists of four main parts:
  - a type json::Expr representing JSON values of dynamic kind, which can be
    composed, inspected, and modified
  - a JSON parser from string -> json::Expr
  - a JSON printer from json::Expr -> string, with optional pretty-printing
  - a convention for mapping json::Expr <=> native types (fromJSON/toJSON)
    Mapping functions are provided for primitives (e.g. int, vector) and the
    ObjectMapper helper helps implement fromJSON for struct/object types.

  Based on clangd's usage, a couple of places I'd appreciate review attention:
  - fromJSON returns only bool. A richer error-signaling mechanism may be
    useful to provide useful messages, or let recursive fromJSONs
    (containers/structs) do careful error recovery.
  - should json::obj be always explicitly written (like json::ary)?
  - there's no streaming parse API. I suspect there are some simple wins like
    a callback API where the document is a long array, and each element is
    small. But this can probably be bolted on easily when we see the need.

  Reviewers: bkramer, labath
  Subscribers: mgorny, ilya-biryukov, ioeric, MaskRay, llvm-commits
  Differential Revision: https://reviews.llvm.org/D45753
  llvm-svn: 336534
* [AArch64][SVE] Asm: Support for ADR instruction. (Sander de Smalen, 2018-07-09; 4 files changed, -16/+88)

  Supporting various addressing modes:
  - adr z0.s, [z0.s, z0.s]
  - adr z0.s, [z0.s, z0.s, lsl #<shift>]
  - adr z0.d, [z0.d, z0.d]
  - adr z0.d, [z0.d, z0.d, lsl #<shift>]
  - adr z0.d, [z0.d, z0.d, uxtw #<shift>]
  - adr z0.d, [z0.d, z0.d, sxtw #<shift>]

  Reviewers: rengolin, fhahn, SjoerdMeijer, samparker, javed.absar
  Reviewed By: SjoerdMeijer
  Differential Revision: https://reviews.llvm.org/D48870
  llvm-svn: 336533
* [AArch64][SVE] Asm: Support for UZP and TRN instructions. (Sander de Smalen, 2018-07-09; 1 file changed, -0/+8)

  This patch adds support for:
    UZP1 - Concatenate even elements from two vectors
    UZP2 - Concatenate odd elements from two vectors
    TRN1 - Interleave even elements from two vectors
    TRN2 - Interleave odd elements from two vectors

  With variants for both data and predicate vectors, e.g.
    uzp1 z0.b, z1.b, z2.b
    trn2 p0.s, p1.s, p2.s

  llvm-svn: 336531
* [AccelTable] Provide abstraction for emitting DWARF5 accelerator tables. (Jonas Devlieghere, 2018-07-09; 1 file changed, -18/+56)

  When emitting the DWARF accelerator tables from dsymutil, we don't have a
  DwarfDebug instance and we use a custom class to represent Dwarf compile
  units. This patch adds an interface AccelTableWriterInfo to abstract these
  from the Dwarf5AccelTableWriter, so we can have a custom implementation for
  this in dsymutil.

  Differential revision: https://reviews.llvm.org/D49031
  llvm-svn: 336529
* [AccelTable] Dwarf5AccelTableEmitter -> Writer (NFC) (Jonas Devlieghere, 2018-07-09; 1 file changed, -38/+38)

  Renames Dwarf5AccelTableEmitter to Dwarf5AccelTableWriter as suggested in
  D49031.

  llvm-svn: 336525
* [PGOMemOPSize] Preserve the DominatorTree (Chijun Sima, 2018-07-09; 1 file changed, -8/+29)

  Summary: PGOMemOPSize only modifies the CFG in a couple of places; thus we
  can preserve the DominatorTree with little effort. When optimizing SQLite
  with -O3, this patch decreases the number of nodes traversed by DFS by 3.8%
  and the number of times DominatorTreeBase recalculation is called by 5.7%.

  Reviewers: kuhar, davide, dmgreen
  Reviewed By: dmgreen
  Subscribers: mzolotukhin, vsk, llvm-commits
  Differential Revision: https://reviews.llvm.org/D48914
  llvm-svn: 336522
* [X86] Improve the message for some asserts. Remove an if that is guaranteed true by said asserts. (Craig Topper, 2018-07-09; 1 file changed, -15/+16)

  This replaces some asserts in lowerV2F64VectorShuffle with the similar
  asserts from lowerVIF64VectorShuffle which are more readable. The original
  asserts mentioned a blend, but there's no guarantee that it is a blend.

  Also remove an if that the asserts prove is always true. Mask[0] is always
  less than 2 and Mask[1] is always at least 2. Therefore
  (Mask[0] >= 2) + (Mask[1] >= 2) == 1 must always be true.

  llvm-svn: 336517
* [X86] Remove an AddedComplexity line that seems unnecessary. (Craig Topper, 2018-07-08; 1 file changed, -4/+2)

  It only existed on the SSE and AVX versions; the AVX512 version didn't have
  it. I checked the generated table and this didn't seem necessary to create
  a match preference.

  llvm-svn: 336516
* [X86][Nearly NFC] Split SHLD/SHRD into their own WriteShiftDouble class (Roman Lebedev, 2018-07-08; 11 files changed, -2/+25)

  Summary: {F6603964}

  While there are still some discrepancies within that new group, it is
  clearly separate from the other shifts. And Agner's tables agree: these
  double shifts are clearly different from the normal shifts/rotates. I'm
  guessing `FeatureSlowSHLD` is related.

  Indeed, a basic sched pair is *not* the /best/ match. But keeping it in
  WriteShift is /clearly/ not ideal either. This can and likely will be
  fine-tuned later.

  This is a purely mechanical change; it does not change any numbers, as the
  [lack of change in the] mca tests shows.

  Reviewers: craig.topper, RKSimon, andreadb
  Reviewed By: craig.topper
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D49015
  llvm-svn: 336515
* [X86] Enhance combineFMA to look for FNEG behind an EXTRACT_VECTOR_ELT. (Craig Topper, 2018-07-08; 1 file changed, -1/+13)

  llvm-svn: 336514
* [X86][SSE] Combine v16i8 SHL by constants to multiplies (Simon Pilgrim, 2018-07-08; 1 file changed, -1/+2)

  Pre-AVX512 (AVX512 can perform a quick extend/shift/truncate), extending to
  two v8i16 for the PMULLW and then truncating is more performant than
  relying on the generic PBLENDVB vXi8 shift path, and it uses a similar
  amount of mask constant pool data.

  Differential Revision: https://reviews.llvm.org/D48963
  llvm-svn: 336513
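  The combine rests on the identity that a left shift by a constant is a
  multiply by the matching power of two, applied independently to each i8
  lane. A scalar sketch of that per-lane equivalence (illustration only, not
  the DAG-combine code):

      #include <cassert>
      #include <cstdint>

      // For an 8-bit lane, X << C and X * (1 << C) produce the same bits
      // (both results are taken modulo 256).
      uint8_t shlLaneAsMul(uint8_t X, unsigned C) {
        assert(C < 8 && "shift amount must fit the lane width");
        return static_cast<uint8_t>(X * (1u << C));
      }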
* [X86] Set scheduler classes to unsupported. NFCI. (Simon Pilgrim, 2018-07-08; 1 file changed, -53/+53)

  While looking at PR36895 I noticed how much of the atom model was still
  setting schedules for unsupported SSE4+ instructions.

  llvm-svn: 336512
* [X86][Basically NFC] Sched: split WriteBitScan into WriteBSF/WriteBSR. (Roman Lebedev, 2018-07-08; 11 files changed, -46/+56)

  Summary:
  Motivation: {F6597954}

  This only does the mechanical splitting; it does not actually change any
  numbers, as the tests added in the previous revision show.

  Reviewers: craig.topper, RKSimon, courbet
  Reviewed By: craig.topper
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D48998
  llvm-svn: 336511
* [LoopIdiomRecognize] Support for converting loops that use LSHR to CTLZ. (Craig Topper, 2018-07-08; 1 file changed, -13/+21)

  In the 'detectCTLZIdiom' function, support for loops that use the LSHR
  instruction instead of ASHR has been added.

  This supports creating ctlz from the following code:

    int lzcnt(int x) {
       int count = 0;
       while (x > 0) {
          count++;
          x = x >> 1;
       }
       return count;
    }

  Patch by Olga Moldovanova

  Differential Revision: https://reviews.llvm.org/D48354
  llvm-svn: 336509
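  For reference, the loop above has a closed form: for x > 0 it iterates
  floor(log2(x)) + 1 times, i.e. 32 - clz(x) for a 32-bit int, and for x <= 0
  it never runs. A minimal sketch of that equivalence, assuming the Clang/GCC
  __builtin_clz builtin and a 32-bit int (illustration only, not the pass's
  actual output):

      // Equivalent closed form of lzcnt() above.
      int lzcnt_closed_form(int x) {
        return x > 0 ? 32 - __builtin_clz(static_cast<unsigned>(x)) : 0;
      }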
* [X86] Add back some intrinsic table entries lost in r336506. (Craig Topper, 2018-07-08; 1 file changed, -0/+6)

  llvm-svn: 336508
* [X86] Add new scalar fma intrinsics with rounding mode that use f32/f64 types. (Craig Topper, 2018-07-08; 3 files changed, -56/+189)

  This allows us to handle masking in a very similar way to the default
  rounding version that uses llvm.fma.

  I had to add new rounding mode CodeGenOnly instructions to support isel
  when we can't find a movss to grab the upper bits from to use the b_Int
  instruction.

  Fast-isel tests have been updated to match new clang codegen.

  We are currently having trouble folding fneg into the new intrinsic. I'm
  going to correct that in a follow up patch to keep the size of this one
  down.

  A future patch will also remove the old intrinsics.

  llvm-svn: 336506
* [SelectionDAG] Split float and integer isKnownNeverZero tests (Simon Pilgrim, 2018-07-07; 2 files changed, -7/+21)

  Splits off isKnownNeverZeroFloat to handle +/- 0 float cases.

  This will make it easier to be more aggressive with the integer
  isKnownNeverZero tests (similar to ValueTracking), use computeKnownBits,
  etc.

  Differential Revision: https://reviews.llvm.org/D48969
  llvm-svn: 336492
* Use const APInt& to avoid extra copy. NFCI. (Simon Pilgrim, 2018-07-07; 1 file changed, -1/+1)

  As discussed on D48825.

  llvm-svn: 336491
* [DAGCombiner] Add EXTRACT_SUBVECTOR to SimplifyDemandedVectorElts (Simon Pilgrim, 2018-07-07; 2 files changed, -0/+22)

  As discussed on PR37989, this patch adds EXTRACT_SUBVECTOR handling to
  TargetLowering::SimplifyDemandedVectorElts and calls it from
  DAGCombiner::visitEXTRACT_SUBVECTOR.

  Differential Revision: https://reviews.llvm.org/D48825
  llvm-svn: 336490
* [CostModel][X86] Add SREM/UREM general and constant costs (PR38056) (Simon Pilgrim, 2018-07-07; 1 file changed, -3/+31)

  We penalize general SDIV/UDIV costs but don't do the same for SREM/UREM.
  This patch makes general vector SREM/UREM 20x as costly as scalar, the same
  approach as we take for SDIV/UDIV. The patch also extends the existing
  SDIV/UDIV constant costs to SREM/UREM - at the moment this means the
  additional cost of a MUL+SUB (see D48975).

  Differential Revision: https://reviews.llvm.org/D48980
  llvm-svn: 336486
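  The MUL+SUB cost comes from how remainder by a constant is normally
  lowered: compute the quotient via constant division, then recover the
  remainder from it. A scalar sketch of that expansion (illustrative, not the
  cost model code; the divisor 7 is an arbitrary example):

      #include <cstdint>

      // x % C is typically expanded as x - (x / C) * C, so a constant
      // remainder costs a constant division plus an extra multiply and
      // subtract.
      int32_t sremByConst(int32_t X) {
        const int32_t C = 7;    // hypothetical example divisor
        int32_t Quot = X / C;   // division by constant (itself expanded to a
                                // multiply/shift sequence by the backend)
        return X - Quot * C;
      }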
* [MachineOutliner] Assert that Liveness tracking is accurate (NFC) (Yvan Roux, 2018-07-07; 1 file changed, -0/+2)

  The checking is done deeper inside MachineBasicBlock, but this will
  hopefully help to find issues when porting the machine outliner to a target
  where Liveness tracking is broken (like ARM).

  Differential Revision: https://reviews.llvm.org/D49023
  llvm-svn: 336481
* [PM/LoopUnswitch] Fix PR37889, producing the correct loop nest structure after trivial unswitching. (Chandler Carruth, 2018-07-07; 1 file changed, -2/+81)

  This PR illustrates that a fundamental analysis update was not performed
  with the new loop unswitch. This update is also somewhat fundamental to the
  core idea of the new loop unswitch -- we actually *update* the CFG based on
  the unswitching. In order to do that, we need to update the loop nest in
  addition to the domtree.

  For some reason, when writing trivial unswitching, I thought that the loop
  nest structure cannot be changed by the transformation. But the PR helps
  illustrate that it clearly can. I've expanded this to a number of different
  test cases that try to cover the different cases of this.

  When we unswitch, we move an exit edge of a loop out of the loop. If this
  exit edge changes which loop reached by an exit is the innermost loop, it
  changes the parent of the loop. Essentially, this transformation may hoist
  the inner loop up the nest. I've added the simple logic to handle this
  reliably in the trivial unswitching case. This just requires updating
  LoopInfo and rebuilding LCSSA on the impacted loops. In the trivial case,
  we don't even need to handle dedicated exits because we're only hoisting
  the one loop and we just split its preheader.

  I've also ported all of these tests to non-trivial unswitching and verified
  that the logic already there correctly handles the loop nest updates
  necessary.

  Differential Revision: https://reviews.llvm.org/D48851
  llvm-svn: 336477
* [X86] Merge INTR_TYPE_3OP_RM with INTR_TYPE_3OP. Remove unused INTR_TYPE_1OP_RM. (Craig Topper, 2018-07-07; 2 files changed, -40/+21)

  llvm-svn: 336476
* Revert "[SCEV] Strengthen StrengthenNoWrapFlags (reapply r334428)."Tim Shen2018-07-061-20/+7
| | | | | | This reverts commit r336140. Our tests shows that LSR assert fails with it. llvm-svn: 336473
* [PDB] memicmp only exists on Windows, use StringRef::compare_lower instead (Benjamin Kramer, 2018-07-06; 1 file changed, -2/+2)

  llvm-svn: 336469
* Fix DIExpression::ExprOperand::appendToVector (Vedant Kumar, 2018-07-06; 1 file changed, -6/+2)

  appendToVector used the wrong overload of SmallVector::append, resulting in
  it appending the same element to a vector `getSize()` times. This did not
  cause a problem when initially committed because appendToVector was only
  used to append 1-element operands.

  This changes appendToVector to use the correct overload of append().

  Testing: ./unittests/IR/IRTests --gtest_filter='*DIExpressionTest*'

  llvm-svn: 336466
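  For context, SmallVector has two append overloads that are easy to mix up:
  append(NumCopies, Elt) appends the same element NumCopies times, while
  append(Begin, End) appends a range. A minimal sketch of the difference
  (illustrative only, not the DIExpression code; the names V, Ops, and N are
  hypothetical):

      #include "llvm/ADT/SmallVector.h"
      #include <cstdint>

      void appendDemo(llvm::SmallVectorImpl<uint64_t> &V,
                      const uint64_t *Ops, unsigned N) {
        // Wrong overload for copying operands: appends Ops[0] N times.
        V.append(N, Ops[0]);
        // Correct overload: appends the range [Ops, Ops + N).
        V.append(Ops, Ops + N);
      }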
* Remove a redundant null-check in DIExpression::prepend, NFC (Vedant Kumar, 2018-07-06; 1 file changed, -13/+14)

  Code outside of an `if (Expr)` block dereferenced `Expr`, so the null check
  was redundant.

  llvm-svn: 336465
* [PDB] One more fix for hashing GSI records. (Zachary Turner, 2018-07-06; 1 file changed, -8/+27)

  The reference implementation uses a case-insensitive string comparison for
  strings of equal length. This will cause the string "tEo" to compare less
  than "VUo". However, we were using a case-sensitive comparison, which would
  generate the opposite outcome. Switch to a case-insensitive comparison.
  Also, when one of the strings contains non-ASCII characters, fall back to a
  straight memcmp.

  The only way to really test this is with a DIA test. Before this patch, the
  test will fail (but succeed if link.exe is used instead of lld-link). After
  the patch, it succeeds even with lld-link.

  llvm-svn: 336464
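  The flipped ordering is just an ASCII artifact: lower-cased, "teo" sorts
  before "vuo", but byte-wise 'V' (0x56) sorts before 't' (0x74). A small
  standalone check of that (uses POSIX strcasecmp; illustrative only, not the
  PDB code):

      #include <cstdio>
      #include <cstring>
      #include <strings.h>  // strcasecmp (POSIX)

      int main() {
        // Case-insensitive: "tEo" < "VUo" (compares as "teo" vs "vuo").
        std::printf("case-insensitive less: %d\n", strcasecmp("tEo", "VUo") < 0);
        // Case-sensitive: "tEo" > "VUo" ('t' = 0x74, 'V' = 0x56).
        std::printf("case-sensitive less:   %d\n", std::memcmp("tEo", "VUo", 3) < 0);
      }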
* Use Type::isIntOrPtrTy where possible, NFC (Vedant Kumar, 2018-07-06; 8 files changed, -37/+22)

  It's a bit neater to write T.isIntOrPtrTy() over
  `T.isIntegerTy() || T.isPointerTy()`.

  I used Python's re.sub with this regex to update users:

    r'([\w.\->()]+)isIntegerTy\(\)\s*\|\|\s*\1isPointerTy\(\)'

  llvm-svn: 336462
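  The cleanup is a straight one-for-one substitution; a minimal before/after
  sketch (illustrative only):

      #include "llvm/IR/Type.h"

      bool isIntOrPtrBefore(llvm::Type *T) {
        return T->isIntegerTy() || T->isPointerTy();  // old spelling
      }

      bool isIntOrPtrAfter(llvm::Type *T) {
        return T->isIntOrPtrTy();                     // new helper
      }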
* [IR] Fix inconsistent declaration parameter name (Fangrui Song, 2018-07-06; 2 files changed, -2/+2)

  llvm-svn: 336459
* [X86] Remove patterns for MOVLPD/MOVLPS nodes with integer types. (Craig Topper, 2018-07-06; 1 file changed, -8/+0)

  Lowering shouldn't generate these. If we need to use them for integer
  types, it should use a bitcast.

  llvm-svn: 336458
* [X86] Add more FMA3 memory folding patterns. Remove patterns that are no longer needed. (Craig Topper, 2018-07-06; 2 files changed, -53/+49)

  We've removed the legacy FMA3 intrinsics and are now using llvm.fma and
  extractelement/insertelement. So we don't need patterns for the nodes that
  could only be created by the old intrinsics.

  Those ISD opcodes still exist because we haven't dropped the AVX512
  intrinsics yet, but those should go to EVEX instructions.

  llvm-svn: 336457
* Revert 336426 (and follow-ups 428, 440), it very likely caused PR38084. (Nico Weber, 2018-07-06; 1 file changed, -105/+0)

  llvm-svn: 336453
* [Local] replaceAllDbgUsesWith: Update debug values before RAUW (Vedant Kumar, 2018-07-06; 4 files changed, -46/+234)

  The replaceAllDbgUsesWith utility helps passes preserve debug info when
  replacing one value with another.

  This improves upon the existing insertReplacementDbgValues API by:
  - Updating debug intrinsics in-place, while preventing use-before-def of
    the replacement value.
  - Falling back to salvageDebugInfo when a replacement can't be made.
  - Moving the responsibility for rewriting llvm.dbg.* DIExpressions into
    common utility code.

  Along with the API change, this teaches replaceAllDbgUsesWith how to create
  DIExpressions for three basic integer and pointer conversions:
  - The no-op conversion. Applies when the values have the same width, or
    have bit-for-bit compatible pointer representations.
  - Truncation. Applies when the new value is wider than the old one.
  - Zero/sign extension. Applies when the new value is narrower than the old
    one.

  Testing:
  - check-llvm, check-clang, a stage2 `-g -O3` build of clang,
    regression/unit testing.
  - This resolves a number of mis-sized dbg.value diagnostics from Debugify.

  Differential Revision: https://reviews.llvm.org/D48676
  llvm-svn: 336451
* AMDGPU: Fix UBSan error caused by r335942 (Tom Stellard, 2018-07-06; 3 files changed, -24/+21)

  Summary: Fixes PR38071.

  Reviewers: arsenm, dstenb
  Reviewed By: arsenm
  Subscribers: kzhuravl, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, llvm-commits
  Differential Revision: https://reviews.llvm.org/D48979
  llvm-svn: 336448