path: root/llvm/lib/Transforms/Utils
Commit message | Author | Date | Files | Lines
* Revert "[LoopUnroll] Properly update loop-info when cloning prologues and ↵Michael Zolotukhin2016-09-081-54/+11
| | | | | | | | | | epilogues." This reverts commit r280901. This caused a bunch of failures, reverting it until I investigate them. llvm-svn: 280905
* [LoopUnroll] Properly update loop-info when cloning prologues and epilogues. | Michael Zolotukhin | 2016-09-08 | 1 file | -11/+54
  Summary: When cloning blocks for a prologue/epilogue we need to replicate the loop structure of the original loop. This wasn't a problem for innermost loops, but it led to incorrect loop info when we unrolled a loop with a child loop: in that case the created prologue loop had a child loop, but the loop info didn't reflect that. This fixes PR28888.
  Reviewers: chandlerc, sanjoy, hfinkel
  Subscribers: llvm-commits, silvas
  Differential Revision: https://reviews.llvm.org/D24203
  llvm-svn: 280901
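  To make the bookkeeping concrete, here is a minimal sketch of replicating loop structure onto cloned blocks; the helper name and the raw `new Loop()` are illustrative assumptions, not the actual r280901 code:

    #include "llvm/Analysis/LoopInfo.h"
    #include "llvm/IR/BasicBlock.h"
    #include "llvm/Transforms/Utils/ValueMapper.h"

    using namespace llvm;

    // Hypothetical helper: mirror the loop tree of OrigLoop onto its cloned
    // prologue/epilogue blocks so LoopInfo reflects the same nesting.
    static Loop *cloneLoopStructure(Loop &OrigLoop, Loop *ClonedParent,
                                    ValueToValueMapTy &VMap, LoopInfo &LI) {
      Loop *NewLoop = new Loop(); // sketch only; real code lets LoopInfo manage allocation
      if (ClonedParent)
        ClonedParent->addChildLoop(NewLoop);
      else
        LI.addTopLevelLoop(NewLoop);

      // Register the cloned copies of the blocks owned directly by this loop level.
      for (BasicBlock *BB : OrigLoop.blocks())
        if (LI.getLoopFor(BB) == &OrigLoop)
          if (Value *Cloned = VMap.lookup(BB))
            NewLoop->addBasicBlockToLoop(cast<BasicBlock>(Cloned), LI);

      // Recurse so child loops of the original also appear under the clone,
      // which is exactly the piece that was missing for non-innermost loops.
      for (Loop *Child : OrigLoop)
        cloneLoopStructure(*Child, NewLoop, VMap, LI);
      return NewLoop;
    }

  The real patch presumably threads this kind of update through the existing block-cloning loop in the runtime unroller rather than using a standalone helper.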
* IR: Remove Value::intersectOptionalDataWith, replace all calls with calls to Instruction::andIRFlags. | Peter Collingbourne | 2016-09-07 | 1 file | -1/+1
  The two functions are functionally equivalent.
  Differential Revision: https://reviews.llvm.org/D22830
  llvm-svn: 280884
* [SimplifyCFG] Don't try to create metadata-valued PHIs | Hal Finkel | 2016-09-07 | 1 file | -0/+4
  We can't create metadata-valued PHIs; don't try to do so when sinking. I created a test case for this using the @llvm.type.test intrinsic, because it takes a metadata parameter and does not have severe side effects (thus SimplifyCFG is willing to otherwise sink it). Previously, running the test case would crash with:

    Invalid use of metadata!
      %.sink = select i1 %flag, metadata <...>, metadata <0x4e45dc0>
    LLVM ERROR: Broken function found, compilation aborted!

  llvm-svn: 280866
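  A sketch of the corresponding guard, assuming the sinking code simply refuses to merge operands of metadata (or token) type; the helper is illustrative, not the actual SimplifyCFG check:

    #include "llvm/IR/Type.h"
    #include "llvm/IR/Value.h"

    // Neither a select nor a PHI of metadata values is valid IR, so such an
    // operand must not be merged when sinking; token values are included here
    // as an additional assumption.
    static bool canMergeOperandWhenSinking(const llvm::Value *Op) {
      llvm::Type *Ty = Op->getType();
      return !Ty->isMetadataTy() && !Ty->isTokenTy();
    }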
* [SimplifyCFG] Followup fix to r280790 | James Molloy | 2016-09-07 | 1 file | -1/+3
  In failure cases it's not guaranteed that the PHI we're inspecting is actually in the successor block! In this case we need to bail out early and never query getIncomingValueForBlock(), as that would cause an assert.
  llvm-svn: 280794
* [SimplifyCFG] Update workaround for PR30188 to also include loads | James Molloy | 2016-09-07 | 1 file | -2/+7
  I should have realised this the first time around: if we're avoiding sinking stores whose operands come from allocas, so that we don't create selects, we also have to do the same for loads, because SROA will be just as defective looking at loads of selected addresses as it is at stores. Fixes PR30188 (again).
  llvm-svn: 280792
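  For illustration only (not from the commit), the source-level shape of the problem: sinking the two loads forces a select between the two addresses, and SROA cannot promote locals whose address escapes into a select.

    // Before sinking: each arm loads directly from one local, so SROA/Mem2Reg
    // can promote both locals to registers.
    int before(bool c, int a, int b) {
      int x = a, y = b;
      return c ? x : y;
    }

    // After naive sinking (written out by hand): a single load through a
    // selected address, which SROA cannot look through.
    int after(bool c, int a, int b) {
      int x = a, y = b;
      int *p = c ? &x : &y;
      return *p;
    }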
* [SimplifyCFG] Check PHI uses more accurately | James Molloy | 2016-09-07 | 1 file | -1/+3
  PR30292 showed a case where our PHI checking wasn't correct. We were checking that all values were used by the same PHI before deciding to sink, but we weren't checking that the incoming values for that PHI were what we expected. As a result, we had to bail out after block splitting, which caused us to never reach a steady state in SimplifyCFG. Fixes PR30292.
  llvm-svn: 280790
* Explicitly require DominatorTreeAnalysis pass for instsimplify pass. | Dehao Chen | 2016-09-06 | 1 file | -5/+6
  Summary: DominatorTreeAnalysis is always required by instsimplify.
  Reviewers: danielcdh, davidxl
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D24173
  llvm-svn: 280760
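  For context, a legacy function pass states such a dependency in getAnalysisUsage; a minimal skeleton (illustrative names, not the actual instsimplify pass source):

    #include "llvm/IR/Dominators.h"
    #include "llvm/IR/Function.h"
    #include "llvm/Pass.h"

    using namespace llvm;

    namespace {
    struct InstSimplifySketch : public FunctionPass {
      static char ID;
      InstSimplifySketch() : FunctionPass(ID) {}

      void getAnalysisUsage(AnalysisUsage &AU) const override {
        AU.setPreservesCFG();
        AU.addRequired<DominatorTreeWrapperPass>(); // the explicit requirement
      }

      bool runOnFunction(Function &F) override {
        DominatorTree &DT = getAnalysis<DominatorTreeWrapperPass>().getDomTree();
        (void)DT; // the real pass hands DT to SimplifyInstruction()
        return false;
      }
    };
    } // end anonymous namespace

    char InstSimplifySketch::ID = 0;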
* Fix inliner funclet unwind memoization | Joseph Tremoulet | 2016-09-04 | 1 file | -7/+79
  Summary: The inliner may need to determine where a given funclet unwinds to, and this determination may depend on other funclets throughout the funclet tree. The code that performs this walk in getUnwindDestToken memoizes results to avoid redundant computations. In the case that a funclet's unwind destination is derived from its ancestor, there is code to walk back down the tree from the ancestor, updating the memo map of its descendants to record the unwind destination. This change fixes that code to account for the case that some descendant has a different unwind destination, which can happen if that unwind dest is a descendant of the EHPad being queried and thus didn't determine its unwind destination. Also update the test inline-funclets.ll, which is supposed to cover such scenarios, to include a case that fails an assertion without this fix but passes with it. Fixes PR29151.
  Reviewers: majnemer
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D24117
  llvm-svn: 280610
* [SimplifyCFG] Add a workaround to fix PR30188 | James Molloy | 2016-09-02 | 1 file | -0/+10
  We're sinking stores, which is a good thing, but in the process we create selects for the store address operand, which SROA/Mem2Reg can't look through; this caused serious regressions. The real fix is in SROA, which I'll be looking into.
  llvm-svn: 280470
* revert r280432: | Dehao Chen | 2016-09-02 | 1 file | -6/+5
  r280432 | dehao | 2016-09-01 16:51:37 -0700 (Thu, 01 Sep 2016) | 9 lines
  Explicitly require DominatorTreeAnalysis pass for instsimplify pass.
  Summary: DominatorTreeAnalysis is always required by instsimplify.
  llvm-svn: 280452
* Explicitly require DominatorTreeAnalysis pass for instsimplify pass. | Dehao Chen | 2016-09-01 | 1 file | -5/+6
  Summary: DominatorTreeAnalysis is always required by instsimplify.
  Reviewers: davidxl, danielcdh
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D24173
  llvm-svn: 280432
* Refactor replaceDominatedUsesWith to have a flag to control whether to replace uses in BB itself. | Dehao Chen | 2016-09-01 | 1 file | -2/+4
  Summary: This is in preparation for the LoopSink pass, which calls replaceDominatedUsesWith to update uses after sinking.
  Reviewers: chandlerc, davidxl, danielcdh
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D24170
  llvm-svn: 280427
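  A sketch of how a caller such as the planned LoopSink pass might use the utility after sinking an instruction; the BasicBlock overload shown is the existing one, and the exact spelling of the new "replace uses in BB itself" flag is not shown here since it is what this refactoring introduces:

    #include "llvm/IR/Dominators.h"
    #include "llvm/IR/Instruction.h"
    #include "llvm/Transforms/Utils/Local.h"

    using namespace llvm;

    // After creating a sunk copy of Orig inside UserBB, rewrite the uses that
    // UserBB dominates so they refer to the copy instead.
    static unsigned rewriteUsesAfterSink(Instruction *Orig, Instruction *SunkCopy,
                                         BasicBlock *UserBB, DominatorTree &DT) {
      return replaceDominatedUsesWith(Orig, SunkCopy, DT, UserBB);
    }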
* [SimplifyCFG] Handle tail-sinking of more than 2 incoming branches | James Molloy | 2016-09-01 | 1 file | -28/+90
  This was a real restriction in the original version of SinkIfThenCodeToEnd. Now that it's been rewritten, the restriction can be lifted. As part of this, we handle a very common and useful case where one of the incoming branches is actually conditional. Consider:

    if (a)
      x(1);
    else if (b)
      x(2);

  This produces the following CFG:

         [if]
        /    \
    [x(1)]   [if]
       |      |  \
       |      |   \
       |  [x(2)]   |
        \     |   /
          [ end ]

  [end] has two unconditional predecessor arcs and one conditional. The conditional refers to the implicit empty 'else' arc. This same pattern can also be caused by an empty default block in a switch.

  We can't sink the call to x() down to end because no call to x() happens on the third incoming arc (assume that x() has sideeffects for the sake of argument; if something is safe to speculate we could indeed sink nevertheless, but that cannot happen in the general case and causes many extra selects).

  We are now able to detect this case and split off the unconditional arcs to a common successor:

         [if]
        /    \
    [x(1)]   [if]
       |      |  \
       |      |   \
       |  [x(2)]   |
        \    /     |
    [sink.split]   |
          \       /
           [ end ]

  Now we can sink the call to x() into %sink.split. This can cause significant code simplification in many testcases.
  llvm-svn: 280364
* [SimplifyCFG] Change the algorithm in SinkThenElseCodeToEnd | James Molloy | 2016-09-01 | 1 file | -90/+149
  r279460 rewrote this function to be able to handle more than two incoming edges and took pains to ensure this didn't regress anything. This time we change the logic for determining if an instruction should be sunk. Previously we used a single-pass greedy algorithm: sink instructions until one requires more than one PHI node or we run out of instructions to sink. This had the problem that sinking instructions with non-identical but trivially-the-same operands needed extra logic so we could sink them aggressively. For example:

    %a = load i32* %b
    %d = load i32* %b
    %c = gep i32* %a, i32 0
    %e = gep i32* %d, i32 1

  Sinking %c and %e would naively require two PHI merges, as %a != %d. But the loads are obviously equivalent (and maybe can't be hoisted because there is no common predecessor). This is why we implemented the fairly complex function areValuesTriviallySame(), to look through trivial differences like this. However, it's just not clever enough.

  Instead, throw areValuesTriviallySame() away, use pointer equality to check equivalence of operands, and switch to a two-stage algorithm. In the "scan" stage, we look at every sinkable instruction in isolation, from the end of the block to the front. If it's sinkable, we keep track of all operands that required PHI merging. In the "sink" stage, we iteratively sink the last non-terminator in the source blocks. But when calculating how many PHIs are actually required to be inserted (to work out if we should stop or not), we remove any values that have already been sunk from the set of PHI merges required, which allows us to be more aggressive.

  This turns an algorithm with potentially recursive lookahead (looking through GEPs, casts, loads and any other instruction potentially not CSE'd) into two linear scans.
  llvm-svn: 280351
* [SimplifyCFG] Fix nondeterministic iteration order | James Molloy | 2016-09-01 | 1 file | -2/+2
  We iterate over the result from SafeToMergeTerminators, so make it a SmallSetVector instead of a SmallPtrSet. Should fix stage3 convergence builds.
  llvm-svn: 280342
* [SimplifyCFG] Improve FoldValueComparisonIntoPredecessors to handle more cases | James Molloy | 2016-09-01 | 1 file | -6/+21
  A very important case is not handled here: multiple arcs to a single block with a PHI. Consider:

    a:
      %1 = icmp %b, 1
      br %1, label %c, label %e
    c:
      %2 = icmp %b, 2
      br %2, label %d, label %e
    d:
      br %e
    e:
      phi [0, %a], [1, %c], [2, %d]

  FoldValueComparisonIntoPredecessors will refuse to fold this, as it doesn't know how to deal with two arcs to a common destination with different PHI values. The answer is obvious - just split all conflicting arcs.
  llvm-svn: 280338
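  The "split all conflicting arcs" idea could be expressed with the generic edge-splitting utility; a sketch under the assumption that each conflicting predecessor simply gets its own intermediate block (not necessarily how the patch implements it):

    #include "llvm/ADT/ArrayRef.h"
    #include "llvm/IR/BasicBlock.h"
    #include "llvm/Transforms/Utils/BasicBlockUtils.h"

    using namespace llvm;

    // Give each conflicting predecessor arc a dedicated block, so the PHI in
    // Dest sees one incoming value per now-unconditional edge and the value
    // comparison can then be folded into the predecessors as usual.
    static void splitConflictingArcs(ArrayRef<BasicBlock *> ConflictingPreds,
                                     BasicBlock *Dest) {
      for (BasicBlock *Pred : ConflictingPreds)
        SplitEdge(Pred, Dest); // DominatorTree/LoopInfo updates omitted in this sketch
    }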
* Revert "[SimplifyCFG] Improve FoldValueComparisonIntoPredecessors to handle ↵James Molloy2016-08-311-20/+6
| | | | | | | | more cases" This reverts commit r280218. This *also* causes buildbot errors. Sigh. Not a successful day all around! llvm-svn: 280239
* Revert "[SimplifyCFG] Change the algorithm in SinkThenElseCodeToEnd"James Molloy2016-08-311-127/+86
| | | | | | This reverts commit r280216 - it caused buildbot failures. llvm-svn: 280234
* Revert "[SimplifyCFG] Handle tail-sinking of more than 2 incoming branches"James Molloy2016-08-311-89/+27
| | | | | | This reverts commit r280217. r280216 caused buildbot failures - backing out the entire chain. llvm-svn: 280233
* Revert "[SimplifyCFG] Add a workaround to fix PR30188"James Molloy2016-08-311-10/+0
| | | | | | This reverts commit r280219. r280216 caused buildbot failures - backing out the entire chain. llvm-svn: 280232
* Revert "[SimplifyCFG] Fix bootstrap failure after r280220"James Molloy2016-08-311-21/+5
| | | | | | This reverts commit r280228. r280216 caused buildbot failures - backing out the entire sequence. llvm-svn: 280231
* [SimplifyCFG] Fix bootstrap failure after r280220 | James Molloy | 2016-08-31 | 1 file | -5/+21
  We check that a sinking candidate is used by only one PHI node during our legality checks. However, for instructions that are used by other sinking candidates our heuristic is less conservative. This can result in a candidate actually being illegal when we come to sink it, because of how we sunk a predecessor. Do the used-by-only-one-PHI checks again during sinking to ensure we don't crash.
  llvm-svn: 280228
* [SimplifyCFG] Add a workaround to fix PR30188 | James Molloy | 2016-08-31 | 1 file | -0/+10
  We're sinking stores, which is a good thing, but in the process we create selects for the store address operand, which SROA/Mem2Reg can't look through; this caused serious regressions. The real fix is in SROA, which I'll be looking into.
  llvm-svn: 280219
* [SimplifyCFG] Improve FoldValueComparisonIntoPredecessors to handle more cases | James Molloy | 2016-08-31 | 1 file | -6/+20
  A very important case is not handled here: multiple arcs to a single block with a PHI. Consider:

    a:
      %1 = icmp %b, 1
      br %1, label %c, label %e
    c:
      %2 = icmp %b, 2
      br %2, label %d, label %e
    d:
      br %e
    e:
      phi [0, %a], [1, %c], [2, %d]

  FoldValueComparisonIntoPredecessors will refuse to fold this, as it doesn't know how to deal with two arcs to a common destination with different PHI values. The answer is obvious - just split all conflicting arcs.
  llvm-svn: 280218
* [SimplifyCFG] Handle tail-sinking of more than 2 incoming branches | James Molloy | 2016-08-31 | 1 file | -27/+89
  This was a real restriction in the original version of SinkIfThenCodeToEnd. Now that it's been rewritten, the restriction can be lifted. As part of this, we handle a very common and useful case where one of the incoming branches is actually conditional. Consider:

    if (a)
      x(1);
    else if (b)
      x(2);

  This produces the following CFG:

         [if]
        /    \
    [x(1)]   [if]
       |      |  \
       |      |   \
       |  [x(2)]   |
        \     |   /
          [ end ]

  [end] has two unconditional predecessor arcs and one conditional. The conditional refers to the implicit empty 'else' arc. This same pattern can also be caused by an empty default block in a switch.

  We can't sink the call to x() down to end because no call to x() happens on the third incoming arc (assume that x() has sideeffects for the sake of argument; if something is safe to speculate we could indeed sink nevertheless, but that cannot happen in the general case and causes many extra selects).

  We are now able to detect this case and split off the unconditional arcs to a common successor:

         [if]
        /    \
    [x(1)]   [if]
       |      |  \
       |      |   \
       |  [x(2)]   |
        \    /     |
    [sink.split]   |
          \       /
           [ end ]

  Now we can sink the call to x() into %sink.split. This can cause significant code simplification in many testcases.
  llvm-svn: 280217
* [SimplifyCFG] Change the algorithm in SinkThenElseCodeToEnd | James Molloy | 2016-08-31 | 1 file | -86/+127
  r279460 rewrote this function to be able to handle more than two incoming edges and took pains to ensure this didn't regress anything. This time we change the logic for determining if an instruction should be sunk. Previously we used a single-pass greedy algorithm: sink instructions until one requires more than one PHI node or we run out of instructions to sink. This had the problem that sinking instructions with non-identical but trivially-the-same operands needed extra logic so we could sink them aggressively. For example:

    %a = load i32* %b
    %d = load i32* %b
    %c = gep i32* %a, i32 0
    %e = gep i32* %d, i32 1

  Sinking %c and %e would naively require two PHI merges, as %a != %d. But the loads are obviously equivalent (and maybe can't be hoisted because there is no common predecessor). This is why we implemented the fairly complex function areValuesTriviallySame(), to look through trivial differences like this. However, it's just not clever enough.

  Instead, throw areValuesTriviallySame() away, use pointer equality to check equivalence of operands, and switch to a two-stage algorithm. In the "scan" stage, we look at every sinkable instruction in isolation, from the end of the block to the front. If it's sinkable, we keep track of all operands that required PHI merging. In the "sink" stage, we iteratively sink the last non-terminator in the source blocks. But when calculating how many PHIs are actually required to be inserted (to work out if we should stop or not), we remove any values that have already been sunk from the set of PHI merges required, which allows us to be more aggressive.

  This turns an algorithm with potentially recursive lookahead (looking through GEPs, casts, loads and any other instruction potentially not CSE'd) into two linear scans.
  llvm-svn: 280216
* [SimplifyCFG] Tail-merge calls with sideeffects | James Molloy | 2016-08-31 | 1 file | -7/+3
  This was deliberately disabled during my rewrite of SinkIfThenToEnd to keep behaviour at least vaguely consistent with the previous version and keep it as close to NFC as I could. There's no real reason not to merge sideeffect calls though, so let's do it! Small fixup along the way to ensure we don't create indirect calls. Should fix PR28964.
  llvm-svn: 280215
* [SimplifyCFG] Properly CSE metadata in SinkThenElseCodeToEnd | James Molloy | 2016-08-30 | 1 file | -0/+5
  This was missing, meaning the metadata in sunk instructions was potentially bogus and could cause miscompiles.
  llvm-svn: 280072
* ASan: remove variable only used in assertions build | Tim Northover | 2016-08-29 | 1 file | -2/+1
  llvm-svn: 279990
* [asan] Separate calculation of ShadowBytes from calculating ASanStackFrameLayout | Vitaly Buka | 2016-08-29 | 1 file | -29/+51
  Summary: No functional changes, just refactoring to make D23947 simpler.
  Reviewers: eugenis
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D23954
  llvm-svn: 279982
* [SimplifyCFG] Hoisting invalidates metadata | David Majnemer | 2016-08-29 | 1 file | -2/+8
  We forgot to remove optimization metadata when performing hoisting during FoldTwoEntryPHINode. This fixes PR29163.
  llvm-svn: 279980
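  A sketch of the kind of cleanup this requires, assuming the fix amounts to conservatively dropping non-debug metadata from instructions that get hoisted above their guarding branch (the exact call used by the patch may differ):

    #include "llvm/IR/Instruction.h"

    // Metadata such as !range or !nonnull may only hold on the guarded path;
    // once the instruction is speculated above the branch, drop it.
    static void stripSpeculationUnsafeMetadata(llvm::Instruction &I) {
      I.dropUnknownNonDebugMetadata(); // leaves debug info alone, drops the rest
    }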
* [LoopUnroll] Use OptimizationRemarkEmitter directly, not via the analysis pass | Adam Nemet | 2016-08-26 | 1 file | -4/+0
  We can't mark ORE (a function pass) preserved as required by the loop passes, because that is how we ensure that the required passes like LazyBFI are all available any time ORE is used. See the new comments in the patch. Instead we use it directly, just like the inliner does in D22694. As expected there is some additional overhead after removing the caching provided by analysis passes. The worst case I measured was LNT/CINT2006_ref/401.bzip2, which regresses by 12%. As before, this only affects -Rpass-with-hotness and not default compilation.
  llvm-svn: 279829
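  Used directly, the emitter is simply constructed on the function at hand rather than fetched from a preserved analysis; a sketch (header name as spelled in current trees, and the remark-emission calls themselves omitted):

    #include "llvm/Analysis/LoopInfo.h"
    #include "llvm/Analysis/OptimizationRemarkEmitter.h"

    using namespace llvm;

    static void emitUnrollRemarksFor(Loop *L) {
      Function *F = L->getHeader()->getParent();
      OptimizationRemarkEmitter ORE(F); // computes hotness info lazily, only when requested
      (void)ORE; // the unroller would pass &ORE to its remark-emitting helpers
    }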
* [UNROLL] Postpone ScalarEvolution::forgetLoop after TripCountSC is expanded when unrolling a runtime iteration loop | Wei Mi | 2016-08-25 | 1 file | -5/+6
  In llvm::UnrollRuntimeLoopRemainder, if the loop to be unrolled is the inner loop inside a loop nest, scalar evolution needs to be dropped for its parent loop, which is done by ScalarEvolution::forgetLoop. However, we can postpone forgetLoop to the end of UnrollRuntimeLoopRemainder so that the TripCountSC expansion can still reuse existing values.
  Differential Revision: https://reviews.llvm.org/D23572
  llvm-svn: 279748
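  The ordering point can be shown with a small sketch (assumed shape, not the actual UnrollRuntimeLoopRemainder body): expand the trip-count SCEV first, while the cached values are still usable, and only then forget the parent loop.

    #include "llvm/Analysis/LoopInfo.h"
    #include "llvm/Analysis/ScalarEvolution.h"
    #include "llvm/Analysis/ScalarEvolutionExpander.h"
    #include "llvm/IR/DataLayout.h"

    using namespace llvm;

    static Value *expandTripCountThenForget(const SCEV *TripCountSC, Loop *L,
                                            ScalarEvolution &SE,
                                            const DataLayout &DL,
                                            Instruction *InsertPt) {
      SCEVExpander Expander(SE, DL, "loop-unroll");
      Value *TripCount =
          Expander.expandCodeFor(TripCountSC, TripCountSC->getType(), InsertPt);

      // Postponed invalidation: done only after expansion so the expander can
      // reuse existing values instead of recomputing everything.
      if (Loop *Parent = L->getParentLoop())
        SE.forgetLoop(Parent);
      return TripCount;
    }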
* [MemorySSA] Remove unused field. NFC. | George Burgess IV | 2016-08-22 | 1 file | -6/+1
  Given that we're not currently using blocker info, and it is unclear whether we will end up using it, don't waste 8 (or 4) bytes of memory per path node.
  llvm-svn: 279493
* MSSA: Factor out phi node placement | Daniel Berlin | 2016-08-22 | 1 file | -17/+22
  llvm-svn: 279462
* MSSA: Only rename accesses whose defining access is nullptr | Daniel Berlin | 2016-08-22 | 1 file | -14/+6
  llvm-svn: 279461
* [SimplifyCFG] Rewrite SinkThenElseCodeToEnd | James Molloy | 2016-08-22 | 1 file | -150/+236
  [Recommitting now that an unrelated assertion in SROA is sorted out]
  The new version has several advantages:
    1) IMSHO it's more readable and neater
    2) It handles loads and stores properly
    3) It can handle any number of incoming blocks rather than just two. I'll be taking advantage of this in a followup patch.
  With this change we can now finally sink load-modify-store idioms such as:

    if (a)
      return *b += 3;
    else
      return *b += 4;

  =>

    %z = load i32, i32* %y
    %.sink = select i1 %a, i32 5, i32 7
    %b = add i32 %z, %.sink
    store i32 %b, i32* %y
    ret i32 %b

  When this works for switches it'll be even more powerful.
  Round 4. This time we should handle all instructions correctly, and not replace any operands that need to be constant with variables. This was really hard to determine safely, so the helper function should be put into the Instruction API. I'll do that as a followup.
  llvm-svn: 279460
* Revert "[SimplifyCFG] Rewrite SinkThenElseCodeToEnd"James Molloy2016-08-221-236/+150
| | | | | | This reverts commit r279443. It caused buildbot failures. llvm-svn: 279447
* [SimplifyCFG] Rewrite SinkThenElseCodeToEnd | James Molloy | 2016-08-22 | 1 file | -150/+236
  The new version has several advantages:
    1) IMSHO it's more readable and neater
    2) It handles loads and stores properly
    3) It can handle any number of incoming blocks rather than just two. I'll be taking advantage of this in a followup patch.
  With this change we can now finally sink load-modify-store idioms such as:

    if (a)
      return *b += 3;
    else
      return *b += 4;

  =>

    %z = load i32, i32* %y
    %.sink = select i1 %a, i32 5, i32 7
    %b = add i32 %z, %.sink
    store i32 %b, i32* %y
    ret i32 %b

  When this works for switches it'll be even more powerful.
  Round 4. This time we should handle all instructions correctly, and not replace any operands that need to be constant with variables. This was really hard to determine safely, so the helper function should be put into the Instruction API. I'll do that as a followup.
  llvm-svn: 279443
* [asan] Add support of lifetime poisoning into ComputeASanStackFrameLayout | Vitaly Buka | 2016-08-20 | 1 file | -3/+10
  Summary: We are going to combine poisoning of red zones and scope poisoning. PR27453
  Reviewers: kcc, eugenis
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D23623
  llvm-svn: 279373
* Revert "[asan] Add support of lifetime poisoning into ComputeASanStackFrameLayout" | Vitaly Buka | 2016-08-19 | 1 file | -10/+3
  This reverts commit r279020. Speculative revert in the hope of fixing the asan test on ARM.
  llvm-svn: 279332
* Revert "[SimplifyCFG] Rewrite SinkThenElseCodeToEnd" | Reid Kleckner | 2016-08-19 | 1 file | -210/+150
  This reverts commit r279229. It breaks intrinsic function calls in diamonds.
  llvm-svn: 279313
* [CloneFunction] Don't remove unrelated nodes from the CGSCC | David Majnemer | 2016-08-19 | 1 file | -0/+6
  CGSCC uses a WeakVH to track call sites. RAUW of a call within a function can result in that WeakVH getting confused about whether or not the call site is still around.
  llvm-svn: 279268
* [SimplifyCFG] Rewrite SinkThenElseCodeToEnd | James Molloy | 2016-08-19 | 1 file | -150/+210
  The new version has several advantages:
    1) IMSHO it's more readable and neater
    2) It handles loads and stores properly
    3) It can handle any number of incoming blocks rather than just two. I'll be taking advantage of this in a followup patch.
  With this change we can now finally sink load-modify-store idioms such as:

    if (a)
      return *b += 3;
    else
      return *b += 4;

  =>

    %z = load i32, i32* %y
    %.sink = select i1 %a, i32 5, i32 7
    %b = add i32 %z, %.sink
    store i32 %b, i32* %y
    ret i32 %b

  When this works for switches it'll be even more powerful.
  llvm-svn: 279229
* [asan] Add support of lifetime poisoning into ComputeASanStackFrameLayout | Vitaly Buka | 2016-08-18 | 1 file | -3/+10
  Summary: We are going to combine poisoning of red zones and scope poisoning. PR27453
  Reviewers: kcc, eugenis
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D23623
  llvm-svn: 279020
* [PM] Port the always inliner to the new pass manager in a much more minimal and boring form than the old pass manager's version. | Chandler Carruth | 2016-08-17 | 1 file | -1/+1
  This pass does the very minimal amount of work necessary to inline functions declared as always-inline. It doesn't support a wide array of things that the legacy pass manager did support, but is also ... about 20 lines of code. So it has that going for it.
  Notably, things this doesn't support:
    - Array alloca merging
      - To support the above, bottom-up inlining with careful history tracking and call graph updates
    - DCE of the functions that become dead after this inlining.
    - Inlining through call instructions with the always_inline attribute. Instead, it focuses on inlining functions with that attribute.
  The first I've omitted because I'm hoping to just turn it off for the primary pass manager. If that doesn't pan out, I can add it here, but it will be reasonably expensive to do so.
  The second should really be handled by running global-dce after the inliner. I don't want to re-implement the non-trivial logic necessary to do comdat-correct DCE of functions. This means the -O0 pipeline will have to be at least 'always-inline,global-dce', but that seems reasonable to me. If others are seriously worried about this I'd like to hear about it and understand why. Again, this is all solvable by factoring that logic into a utility and calling it here, but I'd like to wait to do that until there is a clear reason why the existing pass-based factoring won't work.
  The final point is a serious one. I can fairly easily add support for this, but it seems both costly and a confusing construct for the use case of the always inliner running at -O0. This attribute can of course still impact the normal inliner easily (although I find that a questionable re-use of the same attribute). I've started a discussion to sort out what semantics we want here, and based on that can figure out whether it makes sense to have this complexity at O0 or not.
  One other advantage of this design is that it should be quite a bit faster, due to checking whether the function is a viable candidate for inlining exactly once per function instead of doing it for each call site.
  Anyways, hopefully a reasonable starting point for this pass.
  Differential Revision: https://reviews.llvm.org/D23299
  llvm-svn: 278896
* SimplifyCFG: Avoid dereferencing end() | Duncan P. N. Exon Smith | 2016-08-16 | 1 file | -1/+4
  When comparing a User* to a BasicBlock::iterator in passingValueIsAlwaysUndefined, don't dereference the iterator in case it is end().
  llvm-svn: 278872
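  A minimal illustration of the guard pattern (assumed shape; the real code compares a User against a block iterator inside passingValueIsAlwaysUndefined):

    #include "llvm/IR/BasicBlock.h"
    #include "llvm/IR/User.h"

    using namespace llvm;

    // Check the iterator against end() before dereferencing it.
    static bool userIsAtPosition(const User *U, BasicBlock::const_iterator I,
                                 const BasicBlock &BB) {
      return I != BB.end() && &*I == U;
    }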
* Preserve the assumption cache more often | David Majnemer | 2016-08-16 | 1 file | -12/+22
  We were clearing it out in LoopUnswitch and InlineFunction instead of attempting to preserve it.
  llvm-svn: 278860
* [LoopUnroll] Don't clear out the AssumptionCache on each loop | David Majnemer | 2016-08-16 | 1 file | -6/+8
  Clearing out the AssumptionCache can cause us to rescan the entire function for assumes. If there are many loops, then we are scanning over the entire function many times. Instead of clearing out the AssumptionCache, register all cloned assumes.
  llvm-svn: 278854
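  Registering the cloned assumes, rather than clearing the whole cache, might look like this sketch (placement is assumed; AssumptionCache::registerAssumption is the real interface):

    #include "llvm/Analysis/AssumptionCache.h"
    #include "llvm/IR/BasicBlock.h"
    #include "llvm/IR/Instructions.h"
    #include "llvm/IR/Intrinsics.h"

    using namespace llvm;

    // After cloning a block during unrolling, tell the cache about any new
    // llvm.assume calls instead of dropping the cache and rescanning later.
    static void registerClonedAssumptions(BasicBlock *ClonedBB, AssumptionCache &AC) {
      for (Instruction &I : *ClonedBB)
        if (auto *CI = dyn_cast<CallInst>(&I))
          if (CI->getIntrinsicID() == Intrinsic::assume)
            AC.registerAssumption(CI);
    }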