path: root/llvm/lib/Transforms/Scalar/LoopUnswitch.cpp
Commit message | Author | Age | Files | Lines
...
* Fix use after free (Xin Tong, 2017-01-06; 1 file, -1/+1)
  Summary: Fix use after free in LoopUnswitch
  Reviewers: chenli, atrick, hfinkel, mzolotukhin
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D28412
  llvm-svn: 291288
* Revert @llvm.assume with operator bundles (r289755-r289757) (Daniel Jasper, 2016-12-19; 1 file, -7/+20)
  This creates non-linear behavior in the inliner (see more details in
  r289755's commit thread).
  llvm-svn: 290086
* Remove the AssumptionCache (Hal Finkel, 2016-12-15; 1 file, -20/+7)
  After r289755, the AssumptionCache is no longer needed. Variables affected by
  assumptions are now found by using the new operand-bundle-based scheme. This
  new scheme is more computationally efficient, and also we need much less
  code...
  llvm-svn: 289756
* [Loop Unswitch] Patch to selectively unswitch only the reachable branch instructions (Abhilash Bhandari, 2016-11-25; 1 file, -1/+36)
  Summary: The iterative algorithm for Loop Unswitching may render some of the
  branches unreachable in the unswitched loops. Given the exponential nature of
  the algorithm, this is quite an overhead. This patch fixes the problem by
  selectively unswitching only those branches within a loop that are reachable
  from the loop header.
  Reviewers: Michael Zolothukin, Anna Thomas, Weiming Zhao.
  Subscribers: llvm-commits.
  Differential Revision: http://reviews.llvm.org/D26299
  llvm-svn: 287925
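  The filtering idea can be illustrated with a few lines of LLVM-style C++.
  This is a hedged sketch, not the patch itself: the helper name and data
  structures are assumptions, and the real pass works on its own bookkeeping.
  ```
  // Collect the blocks of a loop that are reachable from the loop header, so
  // that only branches in those blocks are considered as unswitch candidates.
  #include "llvm/ADT/SmallPtrSet.h"
  #include "llvm/ADT/SmallVector.h"
  #include "llvm/Analysis/LoopInfo.h"
  #include "llvm/IR/CFG.h"
  using namespace llvm;

  static SmallPtrSet<BasicBlock *, 16> blocksReachableFromHeader(const Loop &L) {
    SmallPtrSet<BasicBlock *, 16> Reachable;
    SmallVector<BasicBlock *, 16> Worklist;
    Worklist.push_back(L.getHeader());
    while (!Worklist.empty()) {
      BasicBlock *BB = Worklist.pop_back_val();
      // Stay inside the loop and visit each block only once.
      if (!L.contains(BB) || !Reachable.insert(BB).second)
        continue;
      for (BasicBlock *Succ : successors(BB))
        Worklist.push_back(Succ);
    }
    return Reachable;
  }
  ```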
* Cleanup: Use metadata-preserving API for branch creation (Xinliang David Li, 2016-09-03; 1 file, -9/+4)
  Use the wrapper API in IRBuilder that copies metadata when creating the new
  branch in LoopUnswitch.
  llvm-svn: 280602
* Possible fix of test failures on win bots (Xinliang David Li, 2016-08-23; 1 file, -3/+3)
  llvm-svn: 279542
* [Profile] refactor metadata copying/swapping code (Xinliang David Li, 2016-08-23; 1 file, -37/+8)
  Differential Revision: http://reviews.llvm.org/D23619
  llvm-svn: 279523
* Replace "fallthrough" comments with LLVM_FALLTHROUGHJustin Bogner2016-08-171-1/+1
| | | | | | | This is a mechanical change of comments in switches like fallthrough, fall-through, or fall-thru to use the LLVM_FALLTHROUGH macro instead. llvm-svn: 278902
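  For readers unfamiliar with the macro, the rewrite looks roughly like the
  sketch below. The function is invented for illustration; only
  LLVM_FALLTHROUGH itself (from llvm/Support/Compiler.h) is real.
  ```
  #include "llvm/Support/Compiler.h"

  // The macro replaces a free-form "fall through" comment, so compilers and
  // tools can verify that the missing break is intentional.
  static int classify(int Kind, int Value) {
    switch (Kind) {
    case 0:
      Value += 1;
      LLVM_FALLTHROUGH; // was: a comment such as "// fall-thru"
    case 1:
      return Value * 2;
    default:
      return Value;
    }
  }
  ```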
* Preserve the assumption cache more often (David Majnemer, 2016-08-16; 1 file, -6/+7)
  We were clearing it out in LoopUnswitch and InlineFunction instead of
  attempting to preserve it.
  llvm-svn: 278860
* Apply clang-tidy's modernize-loop-convert to most of lib/Transforms. (Benjamin Kramer, 2016-06-26; 1 file, -10/+8)
  Only minor manual fixes. No functionality change intended.
  llvm-svn: 273808
* [LoopUnswitch] Unswitch on conditions feeding into guards (Sanjoy Das, 2016-06-26; 1 file, -7/+33)
  Summary: This is a straightforward extension to guards of what LoopUnswitch
  already does to branches. That is, we unswitch
  ```
  for (;;) {
    ...
    guard(loop_invariant_cond);
    ...
  }
  ```
  into
  ```
  if (loop_invariant_cond) {
    for (;;) {
      ...
      // There is no need to emit guard(true)
      ...
    }
  } else {
    for (;;) {
      ...
      guard(false); // SimplifyCFG will clean this up by adding an
                    // unreachable after the guard(false)
      ...
    }
  }
  ```
  Reviewers: majnemer
  Subscribers: mcrosier, llvm-commits, mzolotukhin
  Differential Revision: http://reviews.llvm.org/D21725
  llvm-svn: 273801
* [LoopUnswitch] Avoid exponential behavior (Sanjoy Das, 2016-06-25; 1 file, -4/+22)
  Summary: (No semantic change intended).
  Reviewers: majnemer, bogner, mzolotukhin
  Subscribers: mcrosier, llvm-commits, mzolotukhin
  Differential Revision: http://reviews.llvm.org/D21707
  llvm-svn: 273763
* Disable MSan-hostile loop unswitching. (Evgeniy Stepanov, 2016-06-10; 1 file, -0/+18)
  Loop unswitching may cause MSan false positives when the unswitch condition
  is not guaranteed to execute. This is very similar to the ASan and TSan
  special case in llvm::isSafeToSpeculativelyExecute (they don't like
  speculative loads and stores), but for branch instructions.
  This is a workaround for PR28054.
  llvm-svn: 272421
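  A hedged illustration of the hazard being worked around; the source code is
  invented, but the scenario is the one the commit message describes.
  ```
  // 'flag' is loop-invariant, so LoopUnswitch would hoist the branch on it in
  // front of the loop. If n == 0 the original program never reads 'flag', but
  // the unswitched program does, so MSan can flag a use of an uninitialized
  // value that the original code never performed.
  void fill(int n, bool flag, int *out) {
    for (int i = 0; i < n; ++i) {
      if (flag)          // loop-invariant condition, an unswitch candidate
        out[i] = 1;
      else
        out[i] = 2;
    }
  }
  ```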
* [LoopUnroll] Unroll loops which have exit blocks to EH pads (David Majnemer, 2016-05-03; 1 file, -0/+5)
  We were overly cautious in our analysis of loops which have invokes that
  unwind to EH pads. The loop unroll transform is safe because it only clones
  blocks in the loop body; it does not try to split critical edges involving EH
  pads. Instead, move the necessary safety check to LoopUnswitch.
  N.B. The safety check for loop unswitch is covered by an existing test which
  fails without it.
  llvm-svn: 268357
* Re-commit optimization bisect support (r267022) without new pass manager support (Andrew Kaylor, 2016-04-22; 1 file, -1/+1)
  The original commit was reverted because of a buildbot problem with
  LazyCallGraph::SCC handling (not related to the OptBisect handling).
  Differential Revision: http://reviews.llvm.org/D19172
  llvm-svn: 267231
* Revert "Initial implementation of optimization bisect support."Vedant Kumar2016-04-221-1/+1
| | | | | | | | This reverts commit r267022, due to an ASan failure: http://lab.llvm.org:8080/green/job/clang-stage2-cmake-RgSan_check/1549 llvm-svn: 267115
* Initial implementation of optimization bisect support. (Andrew Kaylor, 2016-04-21; 1 file, -1/+1)
  This patch implements an optimization bisect feature, which allows
  optimizations to be selectively disabled at compile time in order to track
  down test failures that are caused by incorrect optimizations.
  The bisection is enabled using a new command line option (-opt-bisect-limit).
  Individual passes that may be skipped call the OptBisect object (via an
  LLVMContext) to see if they should be skipped based on the bisect limit. A
  finer level of control (disabling individual transformations) can be managed
  through an additional OptBisect method, but this is not yet used.
  The skip checking in this implementation is based on (and replaces) the
  skipOptnoneFunction check. Where that check was being called, a new call has
  been inserted in its place which checks the bisect limit and the optnone
  attribute. A new function call has been added for module and SCC passes that
  behaves in a similar way.
  Differential Revision: http://reviews.llvm.org/D19172
  llvm-svn: 267022
* IR: RF_IgnoreMissingValues => RF_IgnoreMissingLocals, NFC (Duncan P. N. Exon Smith, 2016-04-07; 1 file, -1/+1)
  Clarify what this RemapFlag actually means.
  - Change the flag name to match its intended behaviour.
  - Clearly document that it's not supposed to affect globals.
  - Add a host of FIXMEs to indicate how to fix the behaviour to match the
    intent of the flag.
  RF_IgnoreMissingLocals should only affect the behaviour of RemapInstruction
  for function-local operands; namely, for operands of type Argument,
  Instruction, and BasicBlock. Currently, it is *only* passed into
  RemapInstruction calls (and the transitive MapValue calls that it makes).
  When I split Metadata from Value I didn't understand the flag, and I used it
  in a bunch of places for "global" metadata.
  This commit doesn't have any functionality change, but prepares to clean up
  MapMetadata and MapValue.
  llvm-svn: 265628
* [LPM] Factor all of the loop analysis usage updates into a common helper routine (Chandler Carruth, 2016-02-19; 1 file, -14/+4)
  We were getting this wrong in small ways and generally being very
  inconsistent about it across loop passes. Instead, let's have a common place
  where we do this. One minor downside is that this will require some analyses
  like SCEV in more places than they are strictly needed. However, this seems
  benign as these analyses are complete no-ops, and without this consistency we
  can in many cases end up with the legacy pass manager scheduling deciding to
  split up a loop pass pipeline in order to run the function analysis half-way
  through. It is very, very annoying to fix these without just being very
  pedantic across the board.
  The only loop passes I've not updated here are ones that use
  AU.setPreservesAll() such as IVUsers (an analysis) and the pass printer. They
  seemed less relevant.
  With this patch, almost all of the problems in PR24804 around loop pass
  pipelines are fixed. The one remaining issue is that we run simplify-cfg and
  instcombine in the middle of the loop pass pipeline. We've recently added
  some loop variants of these passes that would seem substantially cleaner to
  use, but this at least gets us much closer to the previous state. Notably,
  the seven loop pass managers are down to three.
  I've not updated the loop passes using LoopAccessAnalysis because that
  analysis hasn't been fully wired into LoopSimplify/LCSSA, and it isn't clear
  that those transforms want to support those forms anyways. They all run late
  anyways, so this is harmless. Similarly, LSR is left alone because it already
  carefully manages its forms and doesn't need to get fused into a single loop
  pass manager with a bunch of other loop passes.
  LoopReroll didn't use loop simplified form previously, and I've updated the
  test case to match the trivially different output.
  Finally, I've also factored all the pass initialization for the passes that
  use this technique as well, so that should be done regularly and reliably.
  Thanks to James for the help reviewing and thinking about this stuff, and Ben
  for help thinking about it as well!
  Differential Revision: http://reviews.llvm.org/D17435
  llvm-svn: 261316
* LoopPass: Simplify the API for adding a new loop. NFC (Justin Bogner, 2015-10-22; 1 file, -5/+4)
  The insertLoop() API is only used to add new loops, and has confusing
  ownership semantics. Simplify it by replacing it with addLoop().
  llvm-svn: 251064
* [LoopUnswitch] Correct misleading comments. (Chen Li, 2015-10-14; 1 file, -2/+1)
  Reviewers: reames
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D13738
  llvm-svn: 250317
* Scalar: Remove remaining ilist iterator implicit conversions (Duncan P. N. Exon Smith, 2015-10-13; 1 file, -9/+11)
  Remove remaining `ilist_iterator` implicit conversions from LLVMScalarOpts.
  This change exposed some scary behaviour in lib/Transforms/Scalar/SCCP.cpp
  around line 1770. This patch changes a call from `Function::begin()` to
  `&Function::front()`, since the return was immediately being passed into
  another function that takes a `Function*`. `Function::front()` started to
  assert, since the function was empty. Note that `Function::end()` does not
  point at a legal `Function*` -- it points at an `ilist_half_node` -- so the
  other function was getting garbage before. (I added the missing check for
  `Function::isDeclaration()`.)
  Otherwise, no functionality change intended.
  llvm-svn: 250211
* Generalize convergent check to handle invokes as well as calls. (Owen Anderson, 2015-10-09; 1 file, -4/+4)
  llvm-svn: 249892
* Teach LoopUnswitch not to perform non-trivial unswitching on loops containing convergent operations (Owen Anderson, 2015-10-09; 1 file, -0/+14)
  Doing so could cause the post-unswitching convergent ops to be
  control-dependent on the unswitch condition where they were not before. This
  check could be refined to allow unswitching where the convergent operation
  was already control-dependent on the unswitch condition.
  llvm-svn: 249874
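  A minimal sketch of this kind of safety check, written against today's Loop
  and CallBase APIs rather than the interfaces the original patch used; the
  helper name is made up.
  ```
  #include "llvm/Analysis/LoopInfo.h"
  #include "llvm/IR/InstrTypes.h"
  using namespace llvm;

  // Scan a loop for convergent calls or invokes; if any are present, skip
  // non-trivial unswitching because it would add a new control dependence.
  static bool containsConvergentOp(const Loop &L) {
    for (const BasicBlock *BB : L.blocks())
      for (const Instruction &I : *BB)
        if (const auto *CB = dyn_cast<CallBase>(&I))
          if (CB->isConvergent())
            return true;
    return false;
  }
  ```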
* [LoopUnswitch] Add block frequency analysis to recognize hot/cold regions (Chen Li, 2015-09-29; 1 file, -0/+48)
  Summary: This patch adds block frequency analysis to the LoopUnswitch pass to
  recognize hot/cold regions. For cold regions the pass only performs trivial
  unswitches, since they do not increase code size; for hot regions everything
  works as before. This helps to minimize code growth in cold regions and be
  more aggressive in hot regions. Currently the default cold regions are blocks
  with frequencies below 20% of the function entry frequency, and this can be
  adjusted via the -loop-unswitch-cold-block-frequency flag. The entire feature
  is controlled via the -loop-unswitch-with-block-frequency flag and it is off
  by default.
  Reviewers: broune, silvas, dnovillo, reames
  Subscribers: davidxl, llvm-commits
  Differential Revision: http://reviews.llvm.org/D11605
  llvm-svn: 248777
* [LoopUnswitch] Require DominatorTree info. (Michael Zolotukhin, 2015-09-22; 1 file, -11/+7)
  Summary: We should either require the DT info to be available, or check
  whether it's available in every place we use DT (and we already miss such a
  check in one place, which causes failures in some cases). As other loop
  passes preserve DT and it's usually available, it makes sense to just require
  it here.
  There is no regression test, because the bug only shows up if the pass
  manager decides to clean up DT info right before LoopUnswitch. If
  loop-unswitch is run separately, DT is available, so the bug isn't exposed.
  Reviewers: chandlerc, hfinkel
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D13036
  llvm-svn: 248230
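  In legacy-pass-manager terms, "require it" means declaring the dependency
  instead of probing for it with getAnalysisIfAvailable<>() and tolerating a
  null result. A rough sketch of the pattern (the pass skeleton is
  hypothetical, not LoopUnswitch's actual code):
  ```
  #include "llvm/Analysis/LoopPass.h"
  #include "llvm/IR/Dominators.h"
  using namespace llvm;

  namespace {
  struct UnswitchLikePass : public LoopPass {
    static char ID;
    UnswitchLikePass() : LoopPass(ID) {}

    // Declaring the dependency makes the pass manager guarantee that a valid
    // DominatorTree is available whenever runOnLoop() is called.
    void getAnalysisUsage(AnalysisUsage &AU) const override {
      AU.addRequired<DominatorTreeWrapperPass>();
      AU.addPreserved<DominatorTreeWrapperPass>();
    }

    bool runOnLoop(Loop *L, LPPassManager &LPM) override {
      DominatorTree &DT = getAnalysis<DominatorTreeWrapperPass>().getDomTree();
      (void)DT; // ... use DT unconditionally; no null checks needed ...
      return false;
    }
  };
  char UnswitchLikePass::ID = 0;
  } // namespace
  ```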
* Fix UB: can't bind a reference to nullptr (NFC) (Mehdi Amini, 2015-09-21; 1 file, -1/+1)
  From: Mehdi Amini <mehdi.amini@apple.com>
  llvm-svn: 248213
* Add GlobalsAA as preserved to a bunch of transforms (James Molloy, 2015-09-10; 1 file, -0/+2)
  GlobalsAA must by definition be preserved in function passes, but the pass
  manager doesn't know that. Make each pass explicitly preserve GlobalsAA.
  llvm-svn: 247263
* [PM] Port ScalarEvolution to the new pass manager. (Chandler Carruth, 2015-08-17; 1 file, -3/+3)
  This change makes ScalarEvolution a stand-alone object and just produces one
  from a pass as needed. Making this work well requires making the object
  movable, using references instead of overwritten pointers in a number of
  places, and other refactorings.
  I've also wired it up to the new pass manager and added a RUN line to a test
  to exercise it under the new pass manager. This includes basic printing
  support much like with other analyses.
  But there is a big and somewhat scary change here. Prior to this patch
  ScalarEvolution was never *actually* invalidated!!! Re-running the pass just
  re-wired up the various other analyses and didn't remove any of the existing
  entries in the SCEV caches or clear out anything at all. This might seem OK
  because everything in SCEV that can do so uses ValueHandles to track updates
  to the values that serve as SCEV keys. However, this still means that as we
  ran SCEV over each function in the module, we kept accumulating more and more
  SCEVs into the cache. At the end, we would have a SCEV cache with every value
  that we ever needed a SCEV for in the entire module!!! Yowzers. The
  releaseMemory routine would dump all of this, but that isn't really called
  during normal runs of the pipeline as far as I can see.
  To make matters worse, there *is* actually a key that we don't update with
  value handles -- there is a map keyed off of Loop*s. Because LoopInfo *does*
  release its memory from run to run, it is entirely possible to run SCEV over
  one function, then over another function, and then look up a Loop* from the
  second function but find an entry inserted for the first function! Ouch.
  To make matters still worse, there are plenty of updates that *don't* trip a
  value handle. It seems incredibly unlikely that today GVN or another pass
  that invalidates SCEV can update values in *just* such a way that a
  subsequent run of SCEV will incorrectly find lookups in a cache, but it is
  theoretically possible and would be a nightmare to debug.
  With this refactoring, I've fixed all this by actually destroying and
  recreating the ScalarEvolution object from run to run. Technically, this
  could increase the amount of malloc traffic we see, but then again it is also
  technically correct. ;] I don't actually think we're suffering from tons of
  malloc traffic from SCEV because if we were, the fact that we never clear the
  memory would seem more likely to have come up as an actual problem before
  now. So, I've made the simple fix here. If in fact there are serious issues
  with too much allocation and deallocation, I can work on a clever fix that
  preserves the allocations (while clearing the data) between each run, but I'd
  prefer to do that kind of optimization with a test case / benchmark that
  shows why we need such cleverness (and that can test that we actually make it
  faster). It's possible that this will make some things faster by making the
  SCEV caches have higher locality (due to being significantly smaller), so
  until there is a clear benchmark, I think the simple change is best.
  Differential Revision: http://reviews.llvm.org/D12063
  llvm-svn: 245193
* [LoopUnswitch] Check OptimizeForSize before traversing over all basic blocks in the current loop (Chen Li, 2015-08-13; 1 file, -7/+6)
  Summary: This patch moves the check of OptimizeForSize before the traversal
  over all basic blocks in the current loop. If OptimizeForSize is set to true,
  no non-trivial unswitch is ever allowed. Therefore, the early exit helps
  reduce compilation time. This patch should be NFC.
  Reviewers: reames, weimingz, broune
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D11997
  llvm-svn: 244868
* don't repeat function names in comments; NFC (Sanjay Patel, 2015-08-11; 1 file, -39/+34)
  llvm-svn: 244672
* fix 80-cols; NFC (Sanjay Patel, 2015-08-11; 1 file, -19/+22)
  llvm-svn: 244668
* [LoopUnswitch] Preserve make.implicit metadata for unswitched conditions (Chen Li, 2015-08-05; 1 file, -0/+1)
  Summary: This patch adds support to preserve make.implicit metadata for
  unswitched conditions in loop pre-header.
  Reviewers: sanjoy, weimingz
  Subscribers: mcrosier, llvm-commits
  Differential Revision: http://reviews.llvm.org/D11769
  llvm-svn: 244132
* wrap OptSize and MinSize attributes for easier and consistent access (NFCI) (Sanjay Patel, 2015-08-04; 1 file, -0/+1)
  Create wrapper methods in the Function class for the OptimizeForSize and
  MinSize attributes. We want to hide the logic of "or'ing" them together when
  optimizing just for size (-Os). Currently, we are not consistent about this
  and rely on a front-end to always set OptimizeForSize (-Os) if MinSize (-Oz)
  is on. Thus, there are 18 FIXME changes here that should be added as
  follow-on patches with regression tests.
  This patch is NFC-intended: it just replaces existing direct accesses of the
  attributes by the equivalent wrapper call.
  Differential Revision: http://reviews.llvm.org/D11734
  llvm-svn: 243994
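  The "or'ing" logic the wrapper hides looks like the following; the free
  function is a stand-in, since the commit message does not name the new
  Function methods, and only the attribute query API shown here is real.
  ```
  #include "llvm/IR/Function.h"
  using namespace llvm;

  // Before the patch, callers spelled out the disjunction by hand whenever
  // they meant "optimizing for size at all" (-Os or -Oz).
  static bool optimizingForSize(const Function &F) {
    return F.hasFnAttribute(Attribute::OptimizeForSize) ||
           F.hasFnAttribute(Attribute::MinSize);
  }
  ```
  After the patch, call sites invoke a single wrapper on Function instead of
  repeating this check.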
* [LoopUnswitch] Improve loop unswitch pass to find trivial unswitch conditions more effectively (Chen Li, 2015-07-25; 1 file, -20/+60)
  Summary: This patch improves trivial loop unswitch. The current trivial loop
  unswitch only checks if the loop header's terminator contains a trivial
  unswitch condition. But if the loop header has only one reachable successor
  (due to intentionally or unintentionally missed code simplification), we
  should consider the successor as part of the loop header. Therefore, instead
  of stopping at the loop header's terminator, we should keep traversing its
  successors within the loop until we reach a *real* conditional branch or
  switch (whose condition cannot be constant folded). This change enables a
  single -loop-unswitch pass to unswitch multiple trivial conditions
  (unswitching one trivial condition can open the opportunity to unswitch
  another one in the same loop), while the old implementation could unswitch
  only one per pass.
  Reviewers: reames, broune
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D11481
  llvm-svn: 243203
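  The traversal can be sketched as follows. This is a simplified illustration
  rather than the pass's code: the helper name is invented, and it ignores the
  constant-folding of conditions and the "stay within the loop" bookkeeping the
  commit also relies on.
  ```
  #include "llvm/ADT/SmallPtrSet.h"
  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  // Walk forward from the loop header through blocks that end in unconditional
  // branches until a real conditional branch or switch is found; that
  // terminator is the candidate for trivial unswitching.
  static Instruction *findTrivialUnswitchCandidate(BasicBlock *Header) {
    SmallPtrSet<BasicBlock *, 8> Visited;
    for (BasicBlock *BB = Header; Visited.insert(BB).second;) {
      Instruction *Term = BB->getTerminator();
      if (auto *BI = dyn_cast<BranchInst>(Term)) {
        if (BI->isConditional())
          return BI;              // a real conditional branch: stop here
        BB = BI->getSuccessor(0); // effectively part of the header: keep going
        continue;
      }
      if (isa<SwitchInst>(Term))
        return Term;              // switches are candidates too
      break;                      // any other terminator: nothing to unswitch
    }
    return nullptr;
  }
  ```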
* [PM/AA] Remove all of the dead AliasAnalysis pointers being threaded through APIs that are no longer necessary now that the update API has been removed (Chandler Carruth, 2015-07-22; 1 file, -2/+1)
  This will make changes to the AA interfaces significantly less disruptive (I
  hope). Either way, it seems like a really nice cleanup.
  llvm-svn: 242882
* [LoopUnswitch] Code refactoring to separate trivial loop unswitch and non-trivial loop unswitch in processCurrentLoop() (Chen Li, 2015-07-22; 1 file, -96/+112)
  Summary: The current code in LoopUnswitch::processCurrentLoop() mixes trivial
  loop unswitch and non-trivial loop unswitch together. It goes over all basic
  blocks in the loop and checks if a condition is a trivial or non-trivial
  unswitch condition. However, a trivial unswitch condition can only occur in
  the loop header basic block (where it controls whether or not the loop does
  something at all). This refactoring separates trivial loop unswitch from
  non-trivial loop unswitch. Before going over all basic blocks in the loop, it
  checks if the loop header contains a trivial unswitch condition. If so, it
  unswitches it. Otherwise, it goes over all blocks like before, but no longer
  checks for trivial conditions since they cannot occur in the other blocks.
  There is no functionality change.
  Reviewers: meheff, reames, broune
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D11276
  llvm-svn: 242873
* [LoopUnswitch] Add an else clause to IsTrivialUnswitchCondition() when checking the HeaderTerm instruction type (Chen Li, 2015-07-15; 1 file, -1/+2)
  Summary: This is a trivial code change with no functional effect. When
  LoopUnswitch determines a trivial unswitch condition, it checks whether the
  loop header's terminator instruction is a branch instruction or a switch
  instruction, since a trivial unswitch condition can only apply to these two
  instruction types. The current code does not fail the check directly on other
  instruction types, but checks the nullness of the LoopExitBB variable
  instead. The added else clause makes the check fail immediately on other
  instruction types and makes the code more obvious.
  Reviewers: reames
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D11239
  llvm-svn: 242345
* This change fixes three bugs in loop unswitching (Mark Heffernan, 2015-06-23; 1 file, -40/+65)
  This change causes an 81% speed-up on a benchmark that is based on
  EigenConvolutionKernel2D from Eigen3, where the lack of loop unswitching
  blocks hoisting of loads out of a nested loop (see bug 23816 for how loop
  unswitching and load hoisting are related).
  Change 1: Unswitching on trivial conditions should always happen regardless
  of the computed unswitching cost, as really the cost is zero. While there is
  code to make that happen, the logic that checks the unswitching cost against
  a threshold was moved to an earlier point (revision 147935) than the point
  where trivial unswitching is detected, so trivial unswitching is currently
  blocked by the cost threshold. This change fixes that.
  Change 2: Before revision 147935 (from 2012-01-11), the threshold parameter
  was a per-loop threshold. So an unswitching happened only if the cost of the
  unswitching was less than the threshold. In an indirect way (and I believe
  unintentionally), the logic for this since then has been that the threshold
  is an overall budget across all loops for all loop unswitching done by a
  given LoopUnswitch loop pass object. So if an unswitching with cost 100
  happens in one function, that in effect reduces the threshold from 100 to 0
  for the loops even in another function. This persists for the lifetime of
  that loop pass object. This makes no difference for most small examples but
  it is important for large examples. This revision fixes that.
  Change 3: The cost is currently calculated as
  std::min(NumInstructions, 5 * NumBlocks). So a loop with 2 blocks and a
  million instructions will have an unswitching cost of 10. I changed this to
  just NumInstructions, as it was before revision 147935, though I'm open to
  e.g. instead replacing std::min with std::max.
  I've tried to make the change minimally invasive while staying with what I
  think was the original intent of the code.
  Submitted on behalf of broune@.
  llvm-svn: 240438
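  Change 2 is easiest to see as the difference between a shared, decaying
  budget and a per-loop threshold. The structs below are purely illustrative
  and exist only to contrast the two behaviours described above.
  ```
  // Behaviour described in Change 2: one budget shared across every loop (and
  // function) the pass object ever visits, so an expensive unswitch in one
  // function starves unrelated loops elsewhere.
  struct SharedUnswitchBudget {
    unsigned Remaining = 100;
    bool allow(unsigned Cost) {
      if (Cost > Remaining)
        return false;
      Remaining -= Cost; // persists for the lifetime of the pass object
      return true;
    }
  };

  // Intended behaviour: the threshold is compared against each candidate
  // loop's cost independently.
  struct PerLoopThreshold {
    unsigned Threshold = 100;
    bool allow(unsigned Cost) const { return Cost <= Threshold; }
  };
  ```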
* Revert r240137 (Fixed/added namespace ending comments using clang-tidy. NFC) (Alexander Kornienko, 2015-06-23; 1 file, -1/+1)
  Apparently, the style needs to be agreed upon first.
  llvm-svn: 240390
* Fix PR13851: Preserve metadata for the unswitched branch (Weiming Zhao, 2015-06-23; 1 file, -20/+67)
  This patch copies the metadata of the unswitched branch to the newly created
  branch in the loop unswitch pass.
  llvm-svn: 240378
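  A minimal sketch of "copy the metadata of the unswitched branch", using only
  generic Instruction APIs; the actual patch may be more selective about which
  metadata kinds it carries over.
  ```
  #include "llvm/ADT/SmallVector.h"
  #include "llvm/IR/Instructions.h"
  #include "llvm/IR/Metadata.h"
  using namespace llvm;

  // Copy every metadata attachment (e.g. !prof branch weights) from the
  // original branch onto its newly created clone.
  static void copyBranchMetadata(const BranchInst *From, BranchInst *To) {
    SmallVector<std::pair<unsigned, MDNode *>, 4> MDs;
    From->getAllMetadata(MDs);
    for (const auto &MD : MDs)
      To->setMetadata(MD.first, MD.second);
  }
  ```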
* Fixed/added namespace ending comments using clang-tidy. NFC (Alexander Kornienko, 2015-06-19; 1 file, -1/+1)
  The patch is generated using this command:
  tools/clang/tools/extra/clang-tidy/tool/run-clang-tidy.py -fix \
    -checks=-*,llvm-namespace-comment -header-filter='llvm/.*|clang/.*' \
    llvm/lib/
  Thanks to Eugene Kosov for the original patch!
  llvm-svn: 240137
* DataLayout is mandatory, update the API to reflect it with references. (Mehdi Amini, 2015-03-10; 1 file, -1/+3)
  Summary: Now that the DataLayout is a mandatory part of the module, let's
  start cleaning the codebase. This patch is a first attempt at doing that.
  This patch is not exactly NFC: for instance, some places were passing a
  nullptr instead of the DataLayout, possibly just because there was a default
  value on the DataLayout argument to many functions in the API. Even though it
  is not purely NFC, there is no change in the validation.
  I turned as many pointers to DataLayout into references as possible; this
  helped figure out all the places where a nullptr could come up.
  I initially had a local version of this patch broken into over 30 independent
  commits, but some later commits were cleaning the API and touching parts of
  the code modified in the previous commits, so it seemed cleaner without the
  intermediate state.
  Test Plan:
  Reviewers: echristo
  Subscribers: llvm-commits
  From: Mehdi Amini <mehdi.amini@apple.com>
  llvm-svn: 231740
* Transforms: Canonicalize access to function attributes, NFC (Duncan P. N. Exon Smith, 2015-02-14; 1 file, -3/+1)
  Canonicalize access to function attributes to use the simpler API.
  getAttributes().getAttribute(AttributeSet::FunctionIndex, Kind)
    => getFnAttribute(Kind)
  getAttributes().hasAttribute(AttributeSet::FunctionIndex, Kind)
    => hasFnAttribute(Kind)
  llvm-svn: 229202
* [multiversion] Thread a function argument through all the callers of the getTTI method used to get an actual TTI object (Chandler Carruth, 2015-02-01; 1 file, -1/+2)
  No functionality changed. This just threads the argument and ensures code
  like the inliner can correctly look up the callee's TTI rather than using a
  fixed one.
  The next change will use this to implement per-function subtarget usage by
  TTI. The changes after that should eliminate the need for FTTI as that will
  have become the default.
  llvm-svn: 227730
* [PM] Change the core design of the TTI analysis to use a polymorphic type erased interface and a single analysis pass rather than an extremely complex analysis group (Chandler Carruth, 2015-01-31; 1 file, -4/+5)
  The end result is that the TTI analysis can contain a type erased
  implementation that supports the polymorphic TTI interface. We can build one
  from a target-specific implementation or from a dummy one in the IR.
  I've also factored all of the code into "mix-in"-able base classes, including
  CRTP base classes to facilitate calling back up to the most specialized form
  when delegating horizontally across the surface. These aren't as clean as I
  would like and I'm planning to work on cleaning some of this up, but I wanted
  to start by putting it into the right form.
  There are a number of reasons for this change, and this particular design.
  The first and foremost reason is that an analysis group is complete overkill,
  and the chaining delegation strategy was so opaque, confusing, and high
  overhead that TTI was suffering greatly for it. Several of the TTI functions
  had failed to be implemented in all places because the chaining-based
  delegation meant there was no checking of this. A few other functions were
  implemented with incorrect delegation. The message to me was very clear
  working on this -- the delegation and analysis group structure was too
  confusing to be useful here.
  The other reason of course is that this is a *much* more natural fit for the
  new pass manager. This will lay the groundwork for a type-erased per-function
  info object that can look up the correct subtarget and even cache it. Yet
  another benefit is that this will significantly simplify the interaction of
  the pass managers and the TargetMachine. See the future work below.
  The downside of this change is that it is very, very verbose. I'm going to
  work to improve that, but it is somewhat an implementation necessity in C++
  to do type erasure. =/ I discussed this design really extensively with Eric
  and Hal prior to going down this path, and afterward showed them the result.
  No one was really thrilled with it, but there doesn't seem to be a
  substantially better alternative. Using a base class and virtual method
  dispatch would make the code much shorter, but as discussed in the update to
  the programmer's manual and elsewhere, a polymorphic interface feels like the
  more principled approach even if this is perhaps the least compelling example
  of it. ;]
  Ultimately, there is still a lot more to be done here, but this was the huge
  chunk that I couldn't really split things out of because this was the
  interface change to TTI. I've tried to minimize all the other parts of this.
  The follow-up work should include at least:
  1) Improving the TargetMachine interface by having it directly return a TTI
     object. Because we have a non-pass object with value semantics and an
     internal type erasure mechanism, we can narrow the interface of the
     TargetMachine to *just* do what we need: build and return a TTI object
     that we can then insert into the pass pipeline.
  2) Make the TTI object be fully specialized for a particular function. This
     will include splitting off a minimal form of it which is sufficient for
     the inliner and the old pass manager.
  3) Add a new pass manager analysis which produces TTI objects from the target
     machine for each function. This may actually be done as part of #2 in
     order to use the new analysis to implement #2.
  4) Work on narrowing the API between TTI and the targets so that it is easier
     to understand and less verbose to type erase.
  5) Work on narrowing the API between TTI and its clients so that it is easier
     to understand and less verbose to forward.
  6) Try to improve the CRTP-based delegation. I feel like this code is just a
     bit messy and exacerbating the complexity of implementing the TTI in each
     target.
  Many thanks to Eric and Hal for their help here. I ended up blocked on this
  somewhat more abruptly than I expected, and so I appreciate getting it sorted
  out very quickly.
  Differential Revision: http://reviews.llvm.org/D7293
  llvm-svn: 227669
* Teach SplitBlockPredecessors how to handle landingpad blocks. (Philip Reames, 2015-01-28; 1 file, -10/+3)
  Patch by: Igor Laevsky <igor@azulsystems.com>
  "Currently SplitBlockPredecessors generates incorrect code in case the basic
  block we are going to split has a landingpad. Also it seems to be a fairly
  common case among its users to conditionally call either
  SplitBlockPredecessors or SplitLandingPadPredecessors. Because of this I
  think it is reasonable to add this condition directly into
  SplitBlockPredecessors."
  Differential Revision: http://reviews.llvm.org/D7157
  llvm-svn: 227390
* [PM] Replace the Pass argument to SplitEdge with specific analyses used and updated (Chandler Carruth, 2015-01-19; 1 file, -3/+3)
  This may appear to remove handling for things like alias analysis when
  splitting critical edges here, but in fact no callers of SplitEdge relied on
  this. Similarly, all of them wanted to preserve LCSSA if there was any update
  of the loop info. That makes the interface much simpler.
  With this, all of BasicBlockUtils.h is free of Pass arguments and prepared
  for the new pass manager. This is the majority of the utilities that relied
  on pass arguments.
  llvm-svn: 226459
* [PM] Cleanup a dead option to critical edge splitting that I noticed while refactoring this API for the new pass manager (Chandler Carruth, 2015-01-19; 1 file, -3/+1)
  No functionality changed here; the code didn't actually support this option.
  llvm-svn: 226457
* [PM] Remove the Pass argument from all of the critical edge splitting APIs and replace it and numerous booleans with an option struct (Chandler Carruth, 2015-01-19; 1 file, -2/+5)
  The critical edge splitting API has a really large surface of flags and so it
  seems worth burning a small option struct / builder. This struct can be
  constructed with the various preserved analyses and then flags can be flipped
  in a builder style.
  The various users are now responsible for directly passing along their
  analysis information. This should be enough for the critical edge splitting
  to work cleanly with the new pass manager as well.
  This API is still pretty crufty and could be cleaned up a lot, but I've
  focused on this change just threading an option struct rather than a pass
  through the API.
  llvm-svn: 226456