path: root/llvm/lib

* GlobalISel: implement simple function calls on AArch64.
  Tim Northover, 2016-08-10 (7 files, -45/+160)

  We're still limited in the arguments we support, but this at least
  handles the basic cases.

  llvm-svn: 278293

* AMDGPU/SI: Implement amdgcn image intrinsics with sampler
  Changpeng Fang, 2016-08-10 (1 file, -3/+133)

  Summary: This patch defines and implements the amdgcn image intrinsics
  with sampler.

  1. Define the vdata type to be llvm_anyfloat_ty, the address type to be
     llvm_anyfloat_ty, and the rsrc type to be llvm_anyint_ty. As a
     result, the intrinsic names carry three suffixes to overload each of
     these three types.
  2. R128, as well as two other flags, is implied by the three types; for
     example, if you use v8i32 as the resource type, then r128 is 0.
  3. The TFE flag is not exposed; the other flags are exposed in
     instruction order: unrm, glc, slc, lwe and da.

  Differential Revision: http://reviews.llvm.org/D22838

  Reviewed by: arsenm and tstellarAMD

  llvm-svn: 278291

* Changed sign of LastCallToStaticBonus
  Piotr Padlewski, 2016-08-10 (2 files, -2/+2)

  Summary: I think it is much better this way. When I first saw the line

    Cost += InlineConstants::LastCallToStaticBonus;

  I thought it was a bug, because everywhere else the cost is reduced
  using -=.

  Reviewers: eraman, tejohnson, mehdi_amini

  Subscribers: llvm-commits, mehdi_amini

  Differential Revision: https://reviews.llvm.org/D23222

  llvm-svn: 278290

* Codegen: Don't tail-duplicate blocks with un-analyzable fallthrough.
  Kyle Butt, 2016-08-10 (1 file, -0/+10)

  If AnalyzeBranch can't analyze a block and it is possible to fall
  through, then duplicating the block doesn't make sense, as only one
  block can be the layout predecessor for the un-analyzable fallthrough.

  Submitted with a test case, but NOTE: the test case doesn't currently
  fail. However, the test case fails with D20505 and would have saved me
  some time debugging.

  llvm-svn: 278288

* CodeGen: If Convert blocks that would form a diamond when tail-merged.
  Kyle Butt, 2016-08-10 (1 file, -65/+351)

  The following function currently relies on tail-merging for if
  conversion to succeed. The common tail of cond_true and cond_false is
  extracted, and this then forms a diamond pattern that can be
  successfully if converted. If this block does not get extracted, either
  because tail-merging is disabled or the threshold is higher, we should
  still recognize this pattern and if-convert it.

  Fixed a regression in the original commit. Need to un-reverse branches
  after reversing them, or other conversions go awry.

  define i32 @t2(i32 %a, i32 %b) nounwind {
  entry:
    %tmp1434 = icmp eq i32 %a, %b           ; <i1> [#uses=1]
    br i1 %tmp1434, label %bb17, label %bb.outer

  bb.outer:             ; preds = %cond_false, %entry
    %b_addr.021.0.ph = phi i32 [ %b, %entry ], [ %tmp10, %cond_false ]
    %a_addr.026.0.ph = phi i32 [ %a, %entry ], [ %a_addr.026.0, %cond_false ]
    br label %bb

  bb:                   ; preds = %cond_true, %bb.outer
    %indvar = phi i32 [ 0, %bb.outer ], [ %indvar.next, %cond_true ]
    %tmp. = sub i32 0, %b_addr.021.0.ph
    %tmp.40 = mul i32 %indvar, %tmp.
    %a_addr.026.0 = add i32 %tmp.40, %a_addr.026.0.ph
    %tmp3 = icmp sgt i32 %a_addr.026.0, %b_addr.021.0.ph
    br i1 %tmp3, label %cond_true, label %cond_false

  cond_true:            ; preds = %bb
    %tmp7 = sub i32 %a_addr.026.0, %b_addr.021.0.ph
    %tmp1437 = icmp eq i32 %tmp7, %b_addr.021.0.ph
    %indvar.next = add i32 %indvar, 1
    br i1 %tmp1437, label %bb17, label %bb

  cond_false:           ; preds = %bb
    %tmp10 = sub i32 %b_addr.021.0.ph, %a_addr.026.0
    %tmp14 = icmp eq i32 %a_addr.026.0, %tmp10
    br i1 %tmp14, label %bb17, label %bb.outer

  bb17:                 ; preds = %cond_false, %cond_true, %entry
    %a_addr.026.1 = phi i32 [ %a, %entry ], [ %tmp7, %cond_true ], [ %a_addr.026.0, %cond_false ]
    ret i32 %a_addr.026.1
  }

  Without tail-merging or diamond-tail if conversion:

  LBB1_1:               @ %bb
                        @ =>This Inner Loop Header: Depth=1
    cmp r0, r1
    ble LBB1_3
  @ BB#2:               @ %cond_true
                        @ in Loop: Header=BB1_1 Depth=1
    subs r0, r0, r1
    cmp r1, r0
    it ne
    cmpne r0, r1
    bgt LBB1_4
  LBB1_3:               @ %cond_false
                        @ in Loop: Header=BB1_1 Depth=1
    subs r1, r1, r0
    cmp r1, r0
    bne LBB1_1
  LBB1_4:               @ %bb17
    bx lr

  With diamond-tail if conversion, but without tail-merging:

  @ BB#0:               @ %entry
    cmp r0, r1
    it eq
    bxeq lr
  LBB1_1:               @ %bb
                        @ =>This Inner Loop Header: Depth=1
    cmp r0, r1
    ite le
    suble r1, r1, r0
    subgt r0, r0, r1
    cmp r1, r0
    bne LBB1_1
  @ BB#2:               @ %bb17
    bx lr

  llvm-svn: 278287

* Fix UB in APInt::ashr
  Jonathan Roelofs, 2016-08-10 (1 file, -5/+1)

  i1 -1, whose sign bit is the 0th one, can't be left shifted without
  invoking UB.

  https://reviews.llvm.org/D23362

  llvm-svn: 278280

* AMDGPU: s_setpc_b64 should be an indirect branch
  Matt Arsenault, 2016-08-10 (1 file, -1/+2)

  llvm-svn: 278278

* AMDGPU: Set sizes on control flow pseudos
  Matt Arsenault, 2016-08-10 (1 file, -9/+17)

  llvm-svn: 278276

* AMDGPU: Remove empty file comment
  Matt Arsenault, 2016-08-10 (1 file, -1/+0)

  llvm-svn: 278275

* AMDGPU: Remove unnecessary cast
  Matt Arsenault, 2016-08-10 (1 file, -4/+2)

  llvm-svn: 278274

* AMDGPU: Change insertion point of si_mask_branch
  Matt Arsenault, 2016-08-10 (2 files, -12/+19)

  Insert before the skip branch if one is created. This is a somewhat
  more natural placement relative to the skip branches, and makes it
  possible to implement analyzeBranch for skip blocks.

  The test changes are mostly due to a quirk where the block label is not
  emitted if there is a terminator that is not also a branch.

  llvm-svn: 278273

* AMDGPU: Use CreateStackObject instead of CreateSpillStackObject
  Matt Arsenault, 2016-08-10 (1 file, -2/+2)

  I'm not sure what the difference is, but no other target uses this for
  emergency spill slots.

  llvm-svn: 278272

* [x86, AVX] allow FP vector select folding to bitwise logic ops (PR28895)
  Sanjay Patel, 2016-08-10 (1 file, -4/+10)

  This handles the case in:
  https://llvm.org/bugs/show_bug.cgi?id=28895

  ...but we are not getting all of the possibilities yet. E.g., we use
  'X86::FANDN' for scalar FP select combines. That enhancement is filed
  as:
  https://llvm.org/bugs/show_bug.cgi?id=28925

  Differential Revision: https://reviews.llvm.org/D23337

  llvm-svn: 278270

* [IndVarSimplify] Eliminate zext of a signed IV when the IV is known to
  be non-negative
  Andrew Kaylor, 2016-08-10 (1 file, -2/+7)

  Patch by Li Huang

  Differential Revision: https://reviews.llvm.org/D18867

  llvm-svn: 278269
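
  [Editor's illustration] A minimal hand-written sketch of the shape this
  targets (hypothetical IR, not taken from D18867): once the IV is known
  non-negative, zext and sext of it agree, so the zext below can reuse a
  widened, sign-extended copy of the IV instead of remaining a separate
  extension.

  ```
  declare void @use(i64)

  define void @count(i32 %n) {
  entry:
    br label %loop

  loop:
    %i = phi i32 [ 0, %entry ], [ %i.next, %loop ]
    ; %i starts at 0 and only ever increments, so it is known
    ; non-negative; for a non-negative value, zext and sext produce the
    ; same result, so this zext can be folded into a widened IV
    %i.wide = zext i32 %i to i64
    call void @use(i64 %i.wide)
    %i.next = add nsw i32 %i, 1
    %cmp = icmp slt i32 %i.next, %n
    br i1 %cmp, label %loop, label %exit

  exit:
    ret void
  }
  ```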

* LiveIntervalAnalysis: fix a crash in repairOldRegInRange
  Nicolai Haehnle, 2016-08-10 (1 file, -0/+5)

  Summary: See the new test case for one that was (non-deterministically)
  crashing on trunk and deterministically hit the assertion that I added
  in D23302.

  Basically, the machine function contains a sequence

    DS_WRITE_B32 %vreg4, %vreg14:sub0, ...
    DS_WRITE_B32 %vreg4, %vreg14:sub0, ...
    %vreg14:sub1<def> = COPY %vreg14:sub0

  and SILoadStoreOptimizer::mergeWrite2Pair merges the two DS_WRITE_B32
  instructions into one before calling repairIntervalsInRange.

  Now repairIntervalsInRange wants to repair %vreg14, in particular, and
  ends up trying to repair %vreg14:sub1 as well, but that only becomes
  active _after_ the range that is to be repaired, hence the crash due to
  LR.find(...) == LR.begin() at the start of repairOldRegInRange.

  I believe that just skipping those subranges is fine, but again, I'm
  not too familiar with that code.

  Reviewers: MatzeB, kparzysz, tstellarAMD

  Subscribers: llvm-commits, MatzeB

  Differential Revision: https://reviews.llvm.org/D23303

  llvm-svn: 278268

* [ValueTracking] An improvement to IR ValueTracking on Non-negative Integers
  Andrew Kaylor, 2016-08-10 (1 file, -1/+37)

  Patch by Li Huang

  Differential Revision: https://reviews.llvm.org/D18777

  llvm-svn: 278267

* [Hexagon] Remove unused variants of LO/HI instructions
  Krzysztof Parzyszek, 2016-08-10 (1 file, -34/+0)

  llvm-svn: 278266

* Codegen: Tail Merge: Be less aggressive with special cases.
  Kyle Butt, 2016-08-10 (1 file, -4/+13)

  This change makes it possible for tail-duplication and tail-merging to
  be disjoint. By being less aggressive when merging during layout, there
  are no overlapping cases between tail-duplication and tail-merging,
  provided the thresholds are disjoint.

  There is a remaining TODO to benchmark the succ_size() test for
  non-layout tail merging.

  llvm-svn: 278265

* [X86][SSE] Dropped blend(insertps(x,y),zero) combine - this is now
  handled by target shuffle chain combining
  Simon Pilgrim, 2016-08-10 (1 file, -38/+0)

  llvm-svn: 278260

* [Hexagon] Simplify the SplitConst32/64 pass
  Krzysztof Parzyszek, 2016-08-10 (1 file, -65/+29)

  llvm-svn: 278256

* [Hexagon] Add extra patterns for single-precision min/max instructions
  Krzysztof Parzyszek, 2016-08-10 (1 file, -18/+18)

  llvm-svn: 278252

* Fix LCSSA increased compile time
  Rong Xu, 2016-08-10 (1 file, -7/+7)

  We are seeing r276077 drastically increase compile time for our larger
  benchmarks in PGO profile-generation builds (both clang-based and
  IR-based mode) -- it can be 20x slower than without the patch (like
  from 30 secs to 780 secs).

  The increased time is all in the LCSSA pass. The problematic code is
  the PostProcessPHIs handling after use-rewrite. Note that the
  InsertedPhis from ssa_updater is accumulating (it is never cleared).
  Since the inserted PHIs are added to the candidates for each rewrite,
  the earlier ones are repeatedly re-added. Later, when adding the new
  PHIs to the work-list, we don't check for duplicates either. This can
  result in an extremely long work-list containing tons of duplicated
  PHIs.

  This patch fixes the issue by hoisting the code out of the loop.

  Differential Revision: http://reviews.llvm.org/D23344

  llvm-svn: 278250

* [Hexagon] Fix table-gen decode conflict warnings for CONST32/64
  Krzysztof Parzyszek, 2016-08-10 (1 file, -7/+6)

  llvm-svn: 278247

* GlobalISel: avoid inserting redundant COPYs for bitcasts.
  Tim Northover, 2016-08-10 (1 file, -2/+5)

  If the value produced by the bitcast hasn't been referenced yet, we can
  simply reuse the input register, avoiding an unnecessary COPY
  instruction.

  llvm-svn: 278245

* [Hexagon] Use integer instructions for floating point immediates
  Krzysztof Parzyszek, 2016-08-10 (14 files, -239/+117)

  Floating point instructions use general purpose registers, so the few
  instructions that can put floating point immediates into registers are,
  in fact, integer instructions. Use them explicitly instead of having
  pseudo-instructions specifically for dealing with floating point
  values.

  Simplify the constant loading instructions (from sdata) to have only
  two: one for 32-bit values and one for 64-bit values: CONST32 and
  CONST64.

  llvm-svn: 278244

* [Coroutines] Part 6: Elide dynamic allocation of a coroutine frame when possible
  Gor Nishanov, 2016-08-10 (4 files, -28/+213)

  Summary: A particular coroutine usage pattern, where a coroutine is
  created, manipulated and destroyed by the same calling function, is
  common for coroutines implementing the RAII idiom and is suitable for
  the allocation elision optimization, which avoids dynamic allocation by
  storing the coroutine frame as a static `alloca` in its caller.

  The coro.free and coro.alloc intrinsics are used to indicate which code
  needs to be suppressed when dynamic allocation elision happens:

  ```
  entry:
    %elide = call i8* @llvm.coro.alloc()
    %need.dyn.alloc = icmp ne i8* %elide, null
    br i1 %need.dyn.alloc, label %coro.begin, label %dyn.alloc

  dyn.alloc:
    %alloc = call i8* @CustomAlloc(i32 4)
    br label %coro.begin

  coro.begin:
    %phi = phi i8* [ %elide, %entry ], [ %alloc, %dyn.alloc ]
    %hdl = call i8* @llvm.coro.begin(i8* %phi, i32 0, i8* null,
               i8* bitcast ([2 x void (%f.frame*)*]* @f.resumers to i8*))
  ```

  and

  ```
    %mem = call i8* @llvm.coro.free(i8* %hdl)
    %need.dyn.free = icmp ne i8* %mem, null
    br i1 %need.dyn.free, label %dyn.free, label %if.end

  dyn.free:
    call void @CustomFree(i8* %mem)
    br label %if.end

  if.end:
    ...
  ```

  If heap allocation elision is performed, we replace coro.alloc with a
  static alloca on the caller frame and coro.free with a null constant.
  Also, if there are any tail calls referencing the coroutine frame, we
  need to remove the tail call attribute, since the coroutine frame now
  lives on the stack.

  Documentation and overview is here: http://llvm.org/docs/Coroutines.html.

  Upstreaming sequence (rough plan):
  1. Add documentation. (https://reviews.llvm.org/D22603)
  2. Add coroutine intrinsics. (https://reviews.llvm.org/D22659)
  3. Add empty coroutine passes. (https://reviews.llvm.org/D22847)
  4. Add coroutine devirtualization + tests.
     ab) Lower coro.resume and coro.destroy (https://reviews.llvm.org/D22998)
     c) Do devirtualization (https://reviews.llvm.org/D23229)
  5. Add CGSCC restart trigger + tests. (https://reviews.llvm.org/D23234)
  6. Add coroutine heap elision + tests. <= we are here
  7. Add the rest of the logic (split into more patches)

  Reviewers: mehdi_amini, majnemer

  Subscribers: mehdi_amini, llvm-commits

  Differential Revision: https://reviews.llvm.org/D23245

  llvm-svn: 278242

* Fix build break of VS 2013 debug builds
  Roger Ferrer Ibanez, 2016-08-10 (1 file, -0/+3)

  In debug mode, extra macros are enabled for several C++ algorithms.
  Some of them may cause unfortunate build failures. This commit adds a
  redundant operator() to work around one of those troublesome macros,
  which was hit accidentally by change r278012.

  llvm-svn: 278241

* [Hexagon] Delete HexagonSelectCCInfo.td
  Krzysztof Parzyszek, 2016-08-10 (2 files, -122/+0)

  This file is not used. The location assignment of call arguments and
  return values is implemented directly in HexagonISelLowering.

  llvm-svn: 278237

* [Hexagon] Remove unneeded/unused ISD opcodes ARGEXTEND and FCONST32
  Krzysztof Parzyszek, 2016-08-10 (6 files, -29/+0)

  llvm-svn: 278236

* [LVI] Handle conditions in the form of (cond1 && cond2)
  Artur Pilipenko, 2016-08-10 (1 file, -8/+42)

  Teach LVI how to gather information from conditions in the form of
  (cond1 && cond2). Our out-of-tree front-end emits range checks in this
  form.

  Reviewed By: sanjoy

  Differential Revision: http://reviews.llvm.org/D23200

  llvm-svn: 278231
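
  [Editor's illustration] A minimal sketch of the range-check shape this
  enables (hypothetical IR, not from the patch): both conjuncts constrain
  %x on the taken edge, so LVI can intersect the two ranges instead of
  giving up on the `and`.

  ```
  define i1 @range_check(i32 %x) {
  entry:
    %lo = icmp sge i32 %x, 0
    %hi = icmp slt i32 %x, 100
    %in.range = and i1 %lo, %hi
    br i1 %in.range, label %guarded, label %fail

  guarded:
    ; LVI can now conclude %x is in [0, 100) here, so consumers such as
    ; CVP can fold this weaker comparison to true
    %weaker = icmp slt i32 %x, 200
    ret i1 %weaker

  fail:
    ret i1 false
  }
  ```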

* [X86][SSE] Add support for combining target shuffles to MOVSS/MOVSD
  Simon Pilgrim, 2016-08-10 (1 file, -3/+22)

  Only do this on pre-SSE41 targets where we should be lowering to
  BLENDPS/BLENDPD instead.

  llvm-svn: 278228

* [LVI] NFC. Make getValueFromCondition return LVILatticeValue instead of
  changing reference argument
  Artur Pilipenko, 2016-08-10 (1 file, -25/+16)

  Instead of returning bool and setting an LVILatticeValue reference
  argument, return the LVILatticeValue. Use the overdefined value to
  denote the case when we didn't gather any information from the
  condition.

  This change was separated from the review "[LVI] Handle conditions in
  the form of (cond1 && cond2)"
  (https://reviews.llvm.org/D23200#inline-199531). Once
  getValueFromCondition returns LVILatticeValue we can cache the result
  in the Visited map.

  llvm-svn: 278224

* Teach CorrelatedValuePropagation to mark adds as no wrap
  Artur Pilipenko, 2016-08-10 (1 file, -0/+57)

  This is a resubmission of previously reverted r277592. It was hitting
  an overly strong assertion in getConstantRange which was relaxed in
  r278217.

  Use LVI to prove that adds do not wrap. The change is motivated by bug
  https://llvm.org/bugs/show_bug.cgi?id=28620 and it's the first step to
  fix that problem.

  Reviewed By: sanjoy

  Differential Revision: http://reviews.llvm.org/D23059

  llvm-svn: 278220
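
  [Editor's illustration] A rough sketch of the idea (hypothetical IR,
  not from the patch): when LVI bounds an operand's range on the path to
  an add, CVP can attach the no-wrap flag for later passes to exploit.

  ```
  define i32 @step(i32 %x) {
  entry:
    %bounded = icmp slt i32 %x, 100
    br i1 %bounded, label %inc, label %done

  inc:
    ; on this path LVI proves %x < 100, so %x + 1 cannot wrap in the
    ; signed sense, and CVP may rewrite this to: add nsw i32 %x, 1
    %y = add i32 %x, 1
    ret i32 %y

  done:
    ret i32 0
  }
  ```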

* [X86][SSE] Only treat SM_SentinelUndef as UNDEF in shuffle mask predicates
  Simon Pilgrim, 2016-08-10 (1 file, -3/+3)

  isUndefOrEqual and isUndefOrInRange treated all negative shuffle mask
  values as UNDEF; now the value has to be SM_SentinelUndef (-1).

  We already have asserts to check that lowered SHUFFLE_VECTOR indices
  are in the range -1 <= index < 2*masksize (or masksize for unary
  shuffles).

  llvm-svn: 278218

* [LVI] Relax the assertion about LVILatticeVal type in getConstantRange
  Artur Pilipenko, 2016-08-10 (1 file, -1/+4)

  The problem was triggered by my recent change in CVP (D23059). The
  current code expected integer constants to be represented by
  constantrange LVILatticeVals and never as LVILatticeVals with the
  constant tag. That is true for ConstantInt constants, although
  ConstantExpr integer type constants are legally represented as constant
  LVILatticeVals.

  This code fails with the CVP change in:

    @b = global i32 0, align 4

    define void @test6(i32 %a) {
    bb:
      %add = add i32 %a, ptrtoint (i32* @b to i32)
      ret void
    }

  Currently the getConstantRange code is not executed by any of the
  upstream passes. I'm going to add a test case to
  test/Transforms/CorrelatedValuePropagation/add.ll once I resubmit the
  CVP change.

  Reviewed By: sanjoy

  Differential Revision: http://reviews.llvm.org/D23194

  llvm-svn: 278217

* [X86][SSE] Reorder shuffle mask undef helper predicates. NFCI
  Simon Pilgrim, 2016-08-10 (1 file, -10/+10)

  To make it easier for a more complex helper to use a simpler one.

  llvm-svn: 278216

* [DAGCombine] Avoid INSERT_SUBVECTOR reinsertions (PR28678)
  Simon Pilgrim, 2016-08-10 (1 file, -1/+10)

  If the input vector to INSERT_SUBVECTOR is another INSERT_SUBVECTOR,
  and this inserted subvector replaces the last insertion, then insert
  into the common source vector, i.e.

    INSERT_SUBVECTOR( INSERT_SUBVECTOR( Vec, SubOld, Idx ), SubNew, Idx )
      --> INSERT_SUBVECTOR( Vec, SubNew, Idx )

  Differential Revision: https://reviews.llvm.org/D23330

  llvm-svn: 278211

* [ARM] Improve sxta{b|h} and uxta{b|h} tests
  Sam Parker, 2016-08-10 (3 files, -12/+33)

  Created a Thumb2 predicated pattern matcher that uses Thumb2 and
  HasT2ExtractPack, and used it to redefine the patterns for sxta{b|h}
  and uxta{b|h}. Also used similar patterns to fill in isel pattern gaps
  for the corresponding instructions in the ARM backend.

  The patch is mainly changes to tests, since most of this functionality
  appears not to have been tested.

  Differential Revision: https://reviews.llvm.org/D23273

  llvm-svn: 278207

* [x86] Fix a bug in the auto-upgrade from r276416 where we failed to
  give a sufficiently low alignment for the IR load created.
  Chandler Carruth, 2016-08-10 (1 file, -1/+1)

  There is no test case because we don't have any test cases for the *IR*
  produced by the autoupgrade, only the x86 assembly, and it happens that
  the x86 assembly for this intrinsic, as it is tested in the autoupgrade
  path, just happens to not produce a separate load instruction where we
  might have observed the alignment.

  I'm going to follow up on the original commit to suggest getting
  IR-level testing in addition to the asm-level testing here so that we
  can see and test these kinds of issues. We might never get an x86
  instruction out with an alignment constraint, but we could still
  miscompile code by folding against the alignment marked on (or inferred
  for, in this case) the load.

  llvm-svn: 278203

* [SimplifyLibCalls] Restore the old behaviour, emit a libcall.
  Davide Italiano, 2016-08-10 (1 file, -3/+5)

  Hal pointed out that the semantics of our intrinsic and the libc call
  are slightly different. Add a comment while I'm here to explain why we
  can't emit an intrinsic. Thanks Hal!

  llvm-svn: 278200

* Do not directly use inline threshold cl options in cost analysis.
  Easwaran Raman, 2016-08-10 (2 files, -71/+106)

  This adds an InlineParams struct which is populated from the command
  line options by getInlineParams and passed to getInlineCost for the
  call analyzer to use.

  Differential Revision: https://reviews.llvm.org/D22120

  llvm-svn: 278189

* [Inliner,OptDiag] Add hotness attribute to opt diagnostics
  Adam Nemet, 2016-08-10 (2 files, -35/+64)

  Summary: The inliner not being a function pass requires the work-around
  of generating the OptimizationRemarkEmitter, and in turn BFI, on
  demand. This will go away after the new PM is ready.

  BFI is only computed inside ORE if the user has requested hotness
  information for optimization diagnostics (-pass-remark-with-hotness at
  the 'opt' level). Thus there is no additional overhead without the
  flag.

  Reviewers: hfinkel, davidxl, eraman

  Subscribers: llvm-commits

  Differential Revision: https://reviews.llvm.org/D22694

  llvm-svn: 278185

* [IR] Remove some unused #includes (NFC)
  Vedant Kumar, 2016-08-09 (1 file, -4/+0)

  I needed a reader-writer lock for a downstream project and noticed that
  llvm has one. Function.cpp is the only file in-tree that refers to it.

  To anyone reading this: are you using RWMutex in out-of-tree code?
  Maybe it's not worth keeping around any more...

  Since we're not actually using RWMutex *here*, remove the #include (and
  a few other stale headers while we're at it).

  llvm-svn: 278178

* GlobalISel: support 'undef' constant.
  Tim Northover, 2016-08-09 (1 file, -4/+6)

  llvm-svn: 278174

* [LoopSimplify] Rebuild LCSSA for the inner loop after separating nested loops.
  Michael Zolotukhin, 2016-08-09 (1 file, -32/+4)

  Summary: This hopefully fixes PR28825. The problem this time was that a
  value from the original loop was used in a subloop, which became a
  sibling after separation. While a subloop doesn't need an lcssa phi
  node, a sibling does, and that's where we broke LCSSA.

  The most natural way to fix this now is to simply call formLCSSA on the
  original loop: it'll do what we've been doing before, plus it'll cover
  situations described above. I think we don't need to run
  formLCSSARecursively here, and we have an assert to verify this (I've
  tried testing it on the LLVM test suite + SPECs). I'd be happy to be
  corrected here though.

  I also changed a run line in the test from '-lcssa -loop-unroll' to
  '-lcssa -loop-simplify -indvars', because it exercises LCSSA
  preservation to the same extent, but also makes fewer unrelated
  transformations on the CFG, which makes it easier to verify.

  Reviewers: chandlerc, sanjoy, silvas

  Subscribers: llvm-commits

  Differential Revision: https://reviews.llvm.org/D23288

  llvm-svn: 278173
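
  [Editor's illustration] For readers unfamiliar with the invariant being
  repaired: LCSSA requires every value defined in a loop and used outside
  it to pass through a phi in an exit block. A minimal hand-written
  sketch (not taken from the PR28825 test case):

  ```
  define i32 @lcssa_form(i32 %n) {
  entry:
    br label %loop

  loop:
    %i = phi i32 [ 0, %entry ], [ %i.next, %loop ]
    %v = mul i32 %i, 2
    %i.next = add i32 %i, 1
    %cmp = icmp slt i32 %i.next, %n
    br i1 %cmp, label %loop, label %exit

  exit:
    ; the single-operand phi is the LCSSA node: passes that restructure
    ; the loop only have to update this phi, not every downstream use
    ; of %v
    %v.lcssa = phi i32 [ %v, %loop ]
    ret i32 %v.lcssa
  }
  ```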

* [ValueTracking] Improve ValueTracking on left shift with nsw flag
  Andrew Kaylor, 2016-08-09 (1 file, -4/+13)

  Patch by Li Huang

  Differential Revision: https://reviews.llvm.org/D23296

  llvm-svn: 278172

* [WebAssembly] Add -emscripten-cxx-exceptions-whitelist option
  Derek Schuff, 2016-08-09 (1 file, -2/+15)

  This patch adds an -emscripten-cxx-exceptions-whitelist option to the
  WebAssemblyLowerEmscriptenExceptions pass. This option takes the list
  of function names in which Emscripten-style exception handling is
  enabled. This is to support emscripten's EXCEPTION_CATCHING_WHITELIST,
  which exists because of the performance impact of emscripten's
  non-zero-cost EH method.

  Patch by Heejin Ahn

  Differential Revision: https://reviews.llvm.org/D23292

  llvm-svn: 278171

* GlobalISel: first translation support for Constants.
  Tim Northover, 2016-08-09 (1 file, -1/+23)

  For now put them all in the entry block. This should be correct but may
  give poor runtime performance. Hopefully MachineSinking combined with
  isReMaterializable can solve those issues, but if not the interface is
  sound enough to support alternatives.

  llvm-svn: 278168

* Fix the runtime error caused by "Use ValueOffsetPair to enhance value
  reuse during SCEV expansion".
  Wei Mi, 2016-08-09 (2 files, -10/+21)

  The patch fixes the bug in PR28705, which was caused by setting the
  wrong return value for SCEVExpander::findExistingExpansion. The return
  values of findExistingExpansion have different meanings when the
  function is used in different ways, so it is easy to make a mistake.
  The fix creates two new interfaces to replace
  SCEVExpander::findExistingExpansion, and specifies where each interface
  is expected to be used.

  Differential Revision: https://reviews.llvm.org/D22942

  llvm-svn: 278161

* Recommit "Use ValueOffsetPair to enhance value reuse during SCEV expansion".
  Wei Mi, 2016-08-09 (2 files, -33/+79)

  The fix for PR28705 will be committed consecutively.

  In D12090, the ExprValueMap was added to reuse existing values during
  SCEV expansion. However, constant folding and sext/zext distribution
  can still make the reuse difficult. A simplified case is: suppose we
  know S1 expands to V1 in ExprValueMap, and

    S1 = S2 + C_a
    S3 = S2 + C_b

  where C_a and C_b are different SCEVConstants. Then we'd like to expand
  S3 as V1 - C_a + C_b instead of expanding S2 literally. It is helpful
  when S2 is a complex SCEV expr and S2 has no entry in ExprValueMap,
  which is usually caused by the fact that S3 is generated from S1 after
  constant folding.

  In order to do that, we represent ExprValueMap as a mapping from SCEV
  to ValueOffsetPair. We will save both S1->{V1, 0} and S2->{V1, C_a}
  into the ExprValueMap when we create the SCEV for V1. When S3 is
  expanded, it will first expand S2 to V1 - C_a because of S2->{V1, C_a}
  in the map, then expand S3 to V1 - C_a + C_b.

  Differential Revision: https://reviews.llvm.org/D21313

  llvm-svn: 278160