path: root/llvm/lib/Transforms/Scalar
* NewGVN: Sort Dominator Tree in RPO order, and use that for generating order. (Daniel Berlin, 2016-12-29, 1 file, -4/+24)

  Summary: The optimal iteration order for this problem is RPO order. We want to process
  as many preds of a backedge as we can before we process the backedge. At the same time,
  as we add predicate handling, we want to be able to touch instructions that are
  dominated by a given block by ranges (because a change in value numbering a predicate
  possibly affects all users we dominate that are using that predicate). If we don't do
  it this way, we can't do value inference over backedges (the paper covers this in depth).

  The newgvn branch currently overshoots the last part, and guarantees that it will touch
  *at least* the right set of instructions, but it does touch more. This is because the
  bitvector instruction ranges are currently generated in RPO order (so we take the max
  and the min of the ranges of dominated blocks, which means there are some in the middle
  we didn't have to touch that we did). We can do better by sorting the dominator tree,
  and then just using dominator tree order.

  As a preliminary, the dominator tree has some RPO guarantees, but not enough. It
  guarantees that for a given node, your idom must come before you in the RPO ordering.
  It guarantees no relative RPO ordering for siblings. We add siblings in whatever order
  they appear in the module. So that is what we fix. We sort the children array of the
  domtree into RPO order, and then use the dominator tree for ordering, instead of RPO,
  since the dominator tree is now a valid RPO ordering. (A sketch of this sorting step
  follows this log entry.)

  Note: This would help any other pass that iterates a forward problem in dominator tree
  order. Most of them are single pass. It will still maximize whatever result they
  compute. We could also build the dominator tree in this order, but our incremental
  updates would still put it out of sort order, and recomputing the sort order is almost
  as hard as general incremental updates of the domtree. Also note that the sorting does
  not affect any tests, etc.; nothing depends on domtree order, including the verifier
  and the equals functions for domtree nodes.

  How much could this matter, you ask? Here are the current numbers, generated by running
  NewGVN over all files in LLVM. Note that once we propagate equalities, the differences
  go up by an order of magnitude or two (i.e. instead of 29, the max ends up in the
  thousands, since in the worst case we add a factor of N, where N is the number of
  branch predicates). So while it doesn't look that stark for the default ordering, it
  gets *much much* worse. There are also programs in the wild where the difference is
  already pretty stark (2 iterations vs hundreds).

  RPO ordering:
    759040 Number of iterations is 1
    112908 Number of iterations is 2

  Default dominator tree ordering:
    755081 Number of iterations is 1
    116234 Number of iterations is 2
       603 Number of iterations is 3
        27 Number of iterations is 4
         2 Number of iterations is 5
         1 Number of iterations is 7

  Dominator tree sorted:
    759040 Number of iterations is 1
    112908 Number of iterations is 2
  <yay!>

  Really bad ordering (sort domtree siblings in postorder; not quite the worst possible,
  but close):
    754008 Number of iterations is 1
     96642 Number of iterations is 2
     17266 Number of iterations is 3
      2598 Number of iterations is 4
       798 Number of iterations is 5
       273 Number of iterations is 6
       186 Number of iterations is 7
        80 Number of iterations is 8
        42 Number of iterations is 9
        21 Number of iterations is 10
         8 Number of iterations is 11
         6 Number of iterations is 12
         5 Number of iterations is 13
         2 Number of iterations is 14
         2 Number of iterations is 15
         3 Number of iterations is 16
         1 Number of iterations is 17
         2 Number of iterations is 18
         1 Number of iterations is 20
         2 Number of iterations is 21
         1 Number of iterations is 22
         1 Number of iterations is 29

  Reviewers: chandlerc, davide
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D28129
  llvm-svn: 290699
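  A hedged sketch of that sorting step (assumed helper name sortDomTreeChildrenIntoRPO;
  simplified, not the actual NewGVN change): number the blocks by a reverse-post-order
  walk of the CFG, then reorder every dominator-tree node's children by that number so a
  plain dominator-tree traversal also visits siblings in RPO.

    #include "llvm/ADT/DenseMap.h"
    #include "llvm/ADT/PostOrderIterator.h"
    #include "llvm/IR/BasicBlock.h"
    #include "llvm/IR/Dominators.h"
    #include "llvm/IR/Function.h"
    #include <algorithm>

    static void sortDomTreeChildrenIntoRPO(llvm::Function &F, llvm::DominatorTree &DT) {
      // Assign each block its position in a reverse-post-order walk of the CFG.
      llvm::DenseMap<const llvm::BasicBlock *, unsigned> RPONumber;
      unsigned N = 0;
      llvm::ReversePostOrderTraversal<llvm::Function *> RPOT(&F);
      for (llvm::BasicBlock *BB : RPOT)
        RPONumber[BB] = N++;

      // Reorder every node's children so siblings appear in RPO order. A node's idom
      // already precedes it in RPO, so the whole tree walk becomes a valid RPO.
      for (llvm::BasicBlock &BB : F)
        if (llvm::DomTreeNode *Node = DT.getNode(&BB))
          std::sort(Node->begin(), Node->end(),
                    [&](const llvm::DomTreeNode *A, const llvm::DomTreeNode *B) {
                      return RPONumber[A->getBlock()] < RPONumber[B->getBlock()];
                    });
    }

  Sorting the children in place leaves the tree structure untouched; only the sibling
  visitation order changes, which (as the commit notes) nothing else depends on.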
* Update equalsStoreHelper for the fact that only one branch can be true (Daniel Berlin, 2016-12-29, 1 file, -4/+5)
  llvm-svn: 290697
* Revert "[NewGVN] replace emplace_back with push_back" (Piotr Padlewski, 2016-12-28, 1 file, -7/+7)
  llvm-svn: 290692
* [NewGVN] replace emplace_back with push_back (Piotr Padlewski, 2016-12-28, 1 file, -7/+7)
  emplace_back is not faster when it is equivalent to push_back. In these cases the
  emplaced value had the same type as the one stored in the container. It is ugly and it
  might even be slower (see Scott Meyers' presentation about emplacement).
  llvm-svn: 290685
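  A minimal illustration of that point (plain standard C++, not LLVM code): when the
  argument already has the container's element type, emplace_back degenerates to the
  same copy/move that push_back performs, so push_back is the clearer choice;
  emplace_back only pays off when forwarding constructor arguments.

    #include <string>
    #include <vector>

    int main() {
      std::vector<std::string> Names;
      std::string S = "NewGVN";

      Names.push_back(S);      // copies S; simple and explicit
      Names.emplace_back(S);   // exactly the same copy, no in-place construction win

      Names.emplace_back(5, 'x'); // the real use case: builds std::string(5, 'x')
                                  // directly inside the vector
      return 0;
    }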
* [NewGVN] Simplify loop. NFC (Piotr Padlewski, 2016-12-28, 1 file, -4/+1)
  llvm-svn: 290683
* [NewGVN] replace typedefs with usings (Piotr Padlewski, 2016-12-28, 1 file, -2/+2)
  llvm-svn: 290680
* [NewGVN] NFC fixes (Piotr Padlewski, 2016-12-28, 1 file, -40/+36)
  llvm-svn: 290679
* [NewGVN] Global sweep replacing NULL with nullptr. NFCI. (Davide Italiano, 2016-12-28, 1 file, -10/+10)
  llvm-svn: 290670
* [NewGVN] Remove redundant code. NFCI. (Davide Italiano, 2016-12-28, 1 file, -2/+0)
  llvm-svn: 290669
* [NewGVN] equals() for loads/stores is the same. Unify. (Davide Italiano, 2016-12-28, 1 file, -23/+13)
  Differential Revision: https://reviews.llvm.org/D28116
  llvm-svn: 290667
* [NewGVN] Simplify a bit removing else after return. NFCI. (Davide Italiano, 2016-12-27, 1 file, -3/+3)
  llvm-svn: 290615
* [MemCpyOpt] Don't sink LoadInst below possible clobber. (Bryant Wong, 2016-12-27, 1 file, -5/+11)
  Differential Revision: https://reviews.llvm.org/D26811
  llvm-svn: 290611
* Change a std::vector to SmallVector in NewGVN (Daniel Berlin, 2016-12-27, 1 file, -1/+1)
  llvm-svn: 290596
* clang-format NewGVN files (Daniel Berlin, 2016-12-26, 1 file, -67/+60)
  llvm-svn: 290551
* Misc cleanups and simplifications for NewGVN. (Daniel Berlin, 2016-12-26, 1 file, -48/+54)
  Mostly use a bit more idiomatic C++ where we can, so we can combine some things later.
  Reviewers: davide
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D28111
  llvm-svn: 290550
* Don't use our own incorrect version of isTriviallyDeadInstruction in NewGVN. Fixes PR/31472 (Daniel Berlin, 2016-12-26, 1 file, -3/+2)
  llvm-svn: 290549
* [NewGVN] Fold lookupOperandLeader() when there's only one use. NFCI. (Davide Italiano, 2016-12-26, 1 file, -4/+2)
  llvm-svn: 290543
* Value number stores and memory states so we can detect when memory states are equivalent (i.e. store of the same value to memory). (Daniel Berlin, 2016-12-25, 1 file, -20/+139)
  Reviewers: davide
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D28084
  llvm-svn: 290525
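  A rough, self-contained sketch of the idea (toy types, not the actual NewGVN store
  expression classes): a store can be keyed by the congruence-class leaders of its
  pointer and stored value plus the memory state it is based on, so two stores of the
  same value to the same location under the same incoming memory state compare equal
  and share a value number.

    #include <cstdint>
    #include <tuple>

    struct StoreExprKey {
      uintptr_t PointerLeader;  // leader of the pointer operand's congruence class
      uintptr_t ValueLeader;    // leader of the stored value's congruence class
      uintptr_t MemoryState;    // identifier of the memory state the store is based on

      bool operator==(const StoreExprKey &O) const {
        return std::tie(PointerLeader, ValueLeader, MemoryState) ==
               std::tie(O.PointerLeader, O.ValueLeader, O.MemoryState);
      }
    };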
* Rename GVNExpression *ops_ members to *op_* to match conventions in the rest of LLVM (Daniel Berlin, 2016-12-25, 1 file, -11/+11)
  llvm-svn: 290524
* [NewGVN] Prefer `auto` to explicit type when the latter is obvious. (Davide Italiano, 2016-12-24, 1 file, -1/+1)
  llvm-svn: 290499
* Mark isOnlyReachableViaThisEdge as const (Daniel Berlin, 2016-12-24, 1 file, -2/+2)
  llvm-svn: 290468
* [LICM] Plug a leak freeing the ASTs before clearing the map. (Davide Italiano, 2016-12-23, 1 file, -0/+2)
  llvm-svn: 290433
* [LICM] Work around LICM's need to maintain state across loops. (Davide Italiano, 2016-12-23, 1 file, -1/+6)
  The pass creates some state which expects to be cleaned up by a later instance of the
  same pass. opt-bisect happens to expose this less-than-ideal design because calling
  skipLoop() will result in this state not being cleaned up at times and an assertion
  firing in `doFinalization()`. Chandler tells me the new pass manager will give us
  options to avoid these design traps, but until it's ready, we need a workaround for the
  current pass infrastructure. Fix provided by Andy Kaylor, see the review for a complete
  discussion.
  Differential Revision: https://reviews.llvm.org/D25848
  llvm-svn: 290427
* [NewGVN] Remove (for now) unused code. NFCI. (Davide Italiano, 2016-12-23, 1 file, -12/+0)
  llvm-svn: 290420
* Enable '-Wstring-conversion' and fix some bad asserts that it helped find. (Chandler Carruth, 2016-12-23, 1 file, -1/+1)
  Notable is the assert in NewGVN which had no effect because of the bug.
  llvm-svn: 290400
* [GVN] Initial check-in of a new global value numbering algorithm. (Davide Italiano, 2016-12-22, 3 files, -0/+1859)
  The code has been developed by Daniel Berlin over the years, and the new
  implementation's goal is that of addressing shortcomings of the current GVN
  infrastructure, i.e. long compile time for large testcases, lack of phi predication,
  no load/store value numbering, etc.

  The current code just implements the "core" GVN algorithm, although other pieces (load
  coercion, phi handling, predicate system) are already implemented in a branch out of
  tree. Once the core is stable, we'll start adding pieces on top of the base framework.

  The tests currently living in test/Transforms/NewGVN are a copy of the ones in GVN,
  with proper `XFAIL` (missing features in NewGVN). A flag will be added in a future
  commit to enable NewGVN, so that interested parties can exercise this code easily.

  Differential Revision: https://reviews.llvm.org/D26224
  llvm-svn: 290346
* [PM] Introduce a reasonable port of the main per-module pass pipeline from the old pass manager in the new one. (Chandler Carruth, 2016-12-22, 1 file, -2/+4)
  I'm not trying to support (initially) the numerous options that are currently available
  to customize the pass pipeline. If we end up really wanting them, we can add them
  later, but I suspect many are no longer interesting. The simplicity of omitting them
  will help a lot as we sort out what the pipeline should look like in the new PM.

  I've also documented to the best of my ability *why* each pass or group of passes is
  used so that reading the pipeline is more helpful. In many cases I think we have some
  questionable choices of ordering and I've left FIXME comments in place so we know what
  to come back and revisit going forward. But for now, I've left it as similar to the
  current pipeline as I could.

  Lastly, I've had to comment out several places where passes are not ported to the new
  pass manager or where the loop pass infrastructure is not yet ready. I did at least fix
  a few bugs in the loop pass infrastructure uncovered by running the full pipeline, but
  I didn't want to go too far in this patch -- I'll come back and re-enable these as the
  infrastructure comes online. But I'd like to keep the comments in place because I don't
  want to lose track of which passes need to be enabled and where they go.

  One thing that seemed like a significant API improvement was to require that we don't
  build pipelines for O0. It seems to have no real benefit. I've also switched back to
  returning pass managers by value as at this API layer it feels much more natural to me
  for composition. But if others disagree, I'm happy to go back to an output parameter.

  I'm not 100% happy with the testing strategy currently, but it seems at least OK. I may
  come back and try to refactor or otherwise improve this in subsequent patches but I
  wanted to at least get a good starting point in place.

  Differential Revision: https://reviews.llvm.org/D28042
  llvm-svn: 290325
* Refactor the DIExpression fragment query interface (NFC) (Adrian Prantl, 2016-12-22, 1 file, -4/+4)
  ... so it becomes available to DIExpressionCursor.
  llvm-svn: 290322
* [LDist] Match behavior between invoking via optimization pipeline or opt -loop-distribute (Adam Nemet, 2016-12-21, 1 file, -31/+8)
  In r267672, where the loop distribution pragma was introduced, I tried hard to keep the
  old behavior for opt: when opt is invoked with -loop-distribute, it should distribute
  the loop (it's off by default when run via the optimization pipeline).

  As MichaelZ has discovered, this has the unintended consequence of breaking a very
  common developer work-flow to reproduce compilations using opt: first you print the
  pass pipeline of clang with -debug-pass=Arguments and then invoke opt with the returned
  arguments. clang -debug-pass will include -loop-distribute, but the pass is invoked
  with default=off so nothing happens unless the loop carries the pragma, while through
  opt (default=on) we will try to distribute all loops.

  This changes opt's default to off as well to match clang. The tests are modified to
  explicitly enable the transformation.
  llvm-svn: 290235
* [LoopVersioning] Require loop-simplify form for loop versioning. (Florian Hahn, 2016-12-19, 3 files, -11/+14)
  Summary: Requiring loop-simplify form for loop versioning ensures that the runtime
  check block always dominates the exit block. This patch closes #30958
  (https://llvm.org/bugs/show_bug.cgi?id=30958).
  Reviewers: silviu.baranga, hfinkel, anemet, ashutosh.nema
  Subscribers: ashutosh.nema, mzolotukhin, efriedma, hfinkel, llvm-commits
  Differential Revision: https://reviews.llvm.org/D27469
  llvm-svn: 290116
* Revert @llvm.assume with operand bundles (r289755-r289757) (Daniel Jasper, 2016-12-19, 17 files, -75/+170)
  This creates non-linear behavior in the inliner (see more details in r289755's commit
  thread).
  llvm-svn: 290086
* Remove the AssumptionCache (Hal Finkel, 2016-12-15, 17 files, -167/+69)
  After r289755, the AssumptionCache is no longer needed. Variables affected by
  assumptions are now found by using the new operand-bundle-based scheme. This new scheme
  is more computationally efficient, and also we need much less code...
  llvm-svn: 289756
* Make processing @llvm.assume more efficient by using operand bundles (Hal Finkel, 2016-12-15, 1 file, -3/+6)
  There was an efficiency problem with how we processed @llvm.assume in ValueTracking
  (and other places). The AssumptionCache tracked all of the assumptions in a given
  function. In order to find assumptions relevant to computing known bits, etc. we
  searched every assumption in the function. For ValueTracking, that means that we did
  O(#assumes * #values) work in InstCombine and other passes (with a constant factor that
  can be quite large because we'd repeat this search at every level of recursion of the
  analysis).

  Several of us discussed this situation at the last developers' meeting, and this
  implements the discussed solution: Make the values that an assume might affect operands
  of the assume itself. To avoid exposing this detail to frontends and passes that need
  not worry about it, I've used the new operand-bundle feature to add these extra call
  "operands" in a way that does not affect the intrinsic's signature. I think this
  solution is relatively clean.

  InstCombine adds these extra operands based on what ValueTracking, LVI, etc. will need
  and then those passes need only search the users of the values under consideration.
  This should fix the computational-complexity problem.

  At this point, no passes depend on the AssumptionCache, and so I'll remove that as a
  follow-up change.

  Differential Revision: https://reviews.llvm.org/D27259
  llvm-svn: 289755
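  A hedged sketch (assumed helper name assumesAffecting, not the actual ValueTracking
  code) of the lookup pattern this enables: rather than scanning every @llvm.assume in
  the function, walk the users of the value of interest and keep the assumes among them,
  since the operand bundles make the value an operand of those calls.

    #include "llvm/ADT/SmallVector.h"
    #include "llvm/IR/IntrinsicInst.h"
    #include "llvm/IR/Value.h"

    static llvm::SmallVector<llvm::IntrinsicInst *, 4>
    assumesAffecting(llvm::Value *V) {
      llvm::SmallVector<llvm::IntrinsicInst *, 4> Assumes;
      for (llvm::User *U : V->users())
        if (auto *II = llvm::dyn_cast<llvm::IntrinsicInst>(U))
          if (II->getIntrinsicID() == llvm::Intrinsic::assume)
            Assumes.push_back(II); // V is the condition or a bundle operand of II
      return Assumes;
    }

  The cost is now proportional to the number of users of V rather than to the total
  number of assumes in the function.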
* [IRCE] Avoid loop optimizations on pre and post loops (Anna Thomas, 2016-12-13, 1 file, -0/+34)
  Summary: This patch will add loop metadata on the pre and post loops generated by IRCE.
  Currently, we have metadata for disabling optimizations such as vectorization,
  unrolling, loop distribution and LICM versioning (and confirmed that these
  optimizations check for the metadata before proceeding with the transformation). The
  pre and post loops generated by IRCE need not go through loop opts (since these are
  slow paths). Added two test cases as well.
  Reviewers: sanjoy, reames
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D26806
  llvm-svn: 289588
* [ADCE] Add code to remove dead branches (David Callahan, 2016-12-13, 1 file, -54/+227)
  Summary: This is the last in a series of patches to evolve ADCE.cpp to support removing
  unnecessary control flow. This patch adds the code to update the control and data flow
  graphs to remove the dead control flow. It also updates unit tests to test the
  capability to remove dead, may-be-infinite loops, which is enabled by the switch
  -adce-remove-loops.

  Previous patches:
    D23824 [ADCE] Add handling of PHI nodes when removing control flow
    D23559 [ADCE] Add control dependence computation
    D23225 [ADCE] Modify data structures to support removing control flow
    D23065 [ADCE] Refactor anticipating new functionality (NFC)
    D23102 [ADCE] Refactoring for new functionality (NFC)

  Reviewers: dberlin, majnemer, nadav, mehdi_amini
  Subscribers: llvm-commits, david2050, freik, twoh
  Differential Revision: https://reviews.llvm.org/D24918
  llvm-svn: 289548
* [SCCP] Debug diagnostic goes under DEBUG(). NFCI. (Davide Italiano, 2016-12-13, 1 file, -1/+1)
  llvm-svn: 289519
* [SCCP] Use the appropriate helper function. NFCI. (Davide Italiano, 2016-12-11, 1 file, -2/+2)
  llvm-svn: 289406
* [SCCP] Teach the pass about `mul %x 0` even if %x is overdefined. (Davide Italiano, 2016-12-09, 1 file, -2/+5)
  The motivating example is:

    extern int patatino;
    int goo() {
      int x = 0;
      for (int i = 0; i < 1000000; ++i) {
        x *= patatino;
      }
      return x;
    }

  Currently SCCP will not realize that this function always returns zero, and therefore
  will try to unroll and vectorize the loop at -O3, producing an awful lot of (useless)
  code. With this change, it will just produce:

    0000000000000000 <g>:
      xor %eax,%eax
      retq

  llvm-svn: 289175
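  A minimal, self-contained sketch (toy lattice, not SCCP's real lattice code) of the
  rule this patch adds: multiplication by a constant zero yields constant zero even when
  the other operand is overdefined, because zero is absorbing for mul.

    #include <optional>

    enum class Kind { Unknown, Constant, Overdefined };

    struct LatticeVal {
      Kind K;
      long long Val; // meaningful only when K == Kind::Constant
    };

    // Decide `mul a, b` from the operand lattice values, when possible.
    static std::optional<LatticeVal> visitMul(LatticeVal A, LatticeVal B) {
      auto IsZero = [](LatticeVal V) { return V.K == Kind::Constant && V.Val == 0; };
      if (IsZero(A) || IsZero(B))
        return LatticeVal{Kind::Constant, 0};    // x * 0 == 0 even if x is overdefined
      if (A.K == Kind::Overdefined || B.K == Kind::Overdefined)
        return LatticeVal{Kind::Overdefined, 0}; // nothing better can be said
      return std::nullopt;                       // both known: fold as constants elsewhere
    }

  Under this rule, x in the loop above stays at constant 0 through every iteration, so
  the pass can conclude the function returns zero before the unroller and vectorizer
  ever see the loop.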
* [SCCP] Make sure SCCP and ConstantFolding agree on undef >> a. (Davide Italiano, 2016-12-08, 1 file, -2/+2)
  Currently SCCP folds the value to -1, while ConstantProp folds to 0. This changes SCCP
  to do what ConstantFolding does.
  llvm-svn: 289147
* [BDCE] Skip metadata while replacing uses. (Davide Italiano, 2016-12-07, 1 file, -2/+3)
  The fix committed in r288851 doesn't cover all the cases. In particular, if we have an
  instruction with side effects which has no non-dbg uses depending on the bits, we still
  perform RAUW, destroying the dbg.value's first argument. Prevent metadata from being
  replaced here to avoid the issue.
  Differential Revision: https://reviews.llvm.org/D27534
  llvm-svn: 288987
* [GVNHoist] Invalidate MemDep when an instruction is moved. (Eli Friedman, 2016-12-07, 1 file, -0/+1)
  See also r279907. Fixes https://llvm.org/bugs/show_bug.cgi?id=30991.
  Differential Revision: https://reviews.llvm.org/D27493
  llvm-svn: 288968
* When GVN removes a redundant load, it should not modify the debug location of the dominating load. (Andrea Di Biagio, 2016-12-07, 1 file, -1/+4)
  In the case of a fully redundant load LI dominated by an equivalent load V, GVN should
  always preserve the original debug location of V. Otherwise, we risk introducing
  incorrect stepping. If V has debug info, then clearly it should not be modified. If V
  has a null debugloc, then it is still potentially incorrect to propagate LI's debugloc
  because LI may not post-dominate V.
  Differential Revision: https://reviews.llvm.org/D27468
  llvm-svn: 288903
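  A hedged sketch (assumed helper name, not GVN's actual replacement code) of the rule
  above: when the redundant load LI is folded away in favor of the dominating load V,
  V keeps whatever debug location it already has, even a null one.

    #include "llvm/IR/Instructions.h"

    static void replaceRedundantLoad(llvm::LoadInst *LI, llvm::Instruction *V) {
      // Deliberately not done here: V->setDebugLoc(LI->getDebugLoc()).
      // LI may not post-dominate V, so stamping LI's line onto V could make the
      // debugger appear to step through LI on paths that never reach it.
      LI->replaceAllUsesWith(V);
      LI->eraseFromParent();
    }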
* [BDCE/DebugInfo] Preserve llvm.dbg.value's argument. (Davide Italiano, 2016-12-06, 1 file, -0/+5)
  BDCE has two phases:
  1. It asks SimplifyDemandedBits if all the bits of an instruction are dead, and if so,
     replaces all its uses with the constant zero.
  2. Then, it asks SimplifyDemandedBits again if the instruction is really dead (no side
     effects etc..) and if so, eliminates it.

  Now, in 1) if all the bits of an instruction are dead, we may end up replacing a dbg
  use:

    %call = tail call i32 (...) @g() #4, !dbg !15
    tail call void @llvm.dbg.value(metadata i32 %call, i64 0, metadata !8, metadata !16), !dbg !17

  ->

    %call = tail call i32 (...) @g() #4, !dbg !15
    tail call void @llvm.dbg.value(metadata i32 0, i64 0, metadata !8, metadata !16), !dbg !17

  but not eliminating the call because it may have arbitrary side effects. In other
  words, we lose some debug information.

  This patch fixes the problem by making sure that BDCE does nothing with the instruction
  if it has side effects and no non-dbg uses.
  Differential Revision: https://reviews.llvm.org/D27471
  llvm-svn: 288851
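  A rough sketch of the guard this describes (assumed shape and name, not the exact BDCE
  code): skip the zero-replacement when the instruction has side effects and no ordinary
  (non-debug) uses, because then the RAUW could only rewrite the dbg.value operand while
  the instruction itself must stay.

    #include "llvm/IR/Instruction.h"

    static bool okToReplaceAllUsesWithZero(const llvm::Instruction &I) {
      // Debug intrinsics reference I through metadata, not through the ordinary use
      // list. If that list is empty and I has side effects, replacing its "uses"
      // would only clobber debug info for no optimization gain.
      if (I.mayHaveSideEffects() && I.use_empty())
        return false;
      return true;
    }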
* Revert "[SCCP] Remove manual folding of terminator instructions." (Davide Italiano, 2016-12-06, 1 file, -2/+27)
  This reverts commit r288725 as it broke a bot.
  llvm-svn: 288759
* [SCCP] Remove manual folding of terminator instructions. (Davide Italiano, 2016-12-05, 1 file, -27/+2)
  There are two cases handled here:
  1) a branch on undef
  2) a switch with an undef condition.

  Both cases are currently handled by ResolvedUndefsIn. If we have a branch on undef, we
  force its value to false (which is trivially foldable). If we have a switch on undef,
  we force to the first constant (which is also foldable).
  llvm-svn: 288725
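  A hedged sketch (not SCCP's actual ResolvedUndefsIn code) of the two forcings just
  described: an undef branch condition becomes false, and an undef switch condition
  becomes the first case value, after which ordinary constant folding of the terminator
  takes over.

    #include "llvm/IR/Constants.h"
    #include "llvm/IR/Instructions.h"

    static void forceUndefTerminatorFoldable(llvm::Instruction *TI) {
      if (auto *BI = llvm::dyn_cast<llvm::BranchInst>(TI)) {
        if (BI->isConditional() && llvm::isa<llvm::UndefValue>(BI->getCondition()))
          BI->setCondition(llvm::ConstantInt::getFalse(BI->getContext()));
      } else if (auto *SI = llvm::dyn_cast<llvm::SwitchInst>(TI)) {
        if (SI->getNumCases() && llvm::isa<llvm::UndefValue>(SI->getCondition()))
          SI->setCondition(SI->case_begin()->getCaseValue()); // pick the first case
      }
    }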
* [DIExpression] Introduce a dedicated DW_OP_LLVM_fragment operation (Adrian Prantl, 2016-12-05, 1 file, -20/+22)
  ... so we can stop using DW_OP_bit_piece with the wrong semantics. The entire back
  story can be found here:
  http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20161114/405934.html

  The gist is that in LLVM we've been misinterpreting DW_OP_bit_piece's offset field to
  mean the offset into the source variable rather than the offset into the location at
  the top of the DWARF expression stack. In order to be able to fix this in a subsequent
  patch, this patch introduces a dedicated DW_OP_LLVM_fragment operation with the
  semantics that we used to apply to DW_OP_bit_piece, which is what we actually need
  while inside of LLVM.

  This patch is complete with a bitcode upgrade for expressions using the old format. It
  does not yet fix the DWARF backend to use DW_OP_bit_piece correctly.

  Implementation note: We discussed several options for implementing this, including
  reserving a dedicated field in DIExpression for the fragment size and offset, but using
  a custom operator at the end of the expression works just fine and is more efficient
  because we then only pay for it when we need it.

  Differential Revision: https://reviews.llvm.org/D27361
  rdar://problem/29335809
  llvm-svn: 288683
* IR: Move NumElements field from {Array,Vector}Type to SequentialType. (Peter Collingbourne, 2016-12-02, 1 file, -7/+2)
  Now that PointerType is no longer a SequentialType, all SequentialTypes have an
  associated number of elements, so we can move that information to the base class,
  allowing for a number of simplifications.
  Differential Revision: https://reviews.llvm.org/D27122
  llvm-svn: 288464
* Change LoopUnrollPass cost from int to unsigned to make it consistent. (NFC) (Dehao Chen, 2016-12-02, 1 file, -5/+5)
  llvm-svn: 288463
* IR: Change PointerType to derive from Type rather than SequentialType. (Peter Collingbourne, 2016-12-02, 1 file, -4/+0)
  As proposed on llvm-dev:
  http://lists.llvm.org/pipermail/llvm-dev/2016-October/106640.html

  This is for a couple of reasons:
  - Values of type PointerType are unlike the other SequentialTypes (arrays and vectors)
    in that they do not hold values of the element type. By moving PointerType we can
    unify certain aspects of how the other SequentialTypes are handled.
  - PointerType will have no place in the SequentialType hierarchy once pointee types are
    removed, so this is a necessary step towards removing pointee types.

  Differential Revision: https://reviews.llvm.org/D26595
  llvm-svn: 288462
* IR: Change the gep_type_iterator API to avoid always exposing the "current" type. (Peter Collingbourne, 2016-12-02, 5 files, -14/+15)
  Instead, expose whether the current type is an array or a struct, if an array what the
  upper bound is, and if a struct the struct type itself. This is in preparation for a
  later change which will make PointerType derive from Type rather than SequentialType.
  Differential Revision: https://reviews.llvm.org/D26594
  llvm-svn: 288458
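  A self-contained illustration (toy types, not LLVM's gep_type_iterator) of why a GEP
  offset walker only needs to know, at each level, whether it is indexing a struct or a
  bounded sequential type: a struct index selects a field offset from the layout, while
  an array index scales by the element size.

    #include <cstdint>
    #include <vector>

    struct ToyLevel {
      bool IsStruct = false;
      std::vector<uint64_t> FieldOffsets; // used when IsStruct
      uint64_t ElementSize = 0;           // used for arrays/vectors
    };

    static uint64_t gepByteOffset(const std::vector<ToyLevel> &Levels,
                                  const std::vector<uint64_t> &Indices) {
      uint64_t Offset = 0;
      for (size_t I = 0; I < Levels.size() && I < Indices.size(); ++I)
        Offset += Levels[I].IsStruct ? Levels[I].FieldOffsets[Indices[I]]
                                     : Levels[I].ElementSize * Indices[I];
      return Offset;
    }

  The struct-or-array distinction (plus the array bound) is all such a walker needs,
  which matches what the reworked iterator API reports at each level.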