path: root/llvm/lib/Transforms
* Make sure total loop body weight is preserved in loop peeling
  Xin Tong · 2017-01-02 · 1 file changed, -8/+17
  Summary: Regardless of how the loop body weight is distributed, we should
  preserve the total loop body weight, i.e. we should have the same weight
  reaching the body of the loop or its duplicates in the peeled and unpeeled
  cases.
  Reviewers: mkuper, davidxl, anemet
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D28179
  llvm-svn: 290833
* NewGVN: Clean up after removing possibility of null expressions.
  Daniel Berlin · 2017-01-02 · 1 file changed, -17/+14
  llvm-svn: 290828
* fix typo; NFC
  Sanjay Patel · 2017-01-02 · 1 file changed, -1/+1
  llvm-svn: 290827
* [NewGVN] Fold single-use variable inside the assertion.
  Davide Italiano · 2017-01-02 · 1 file changed, -5/+3
  It placates some bots which complain because they compile the assertion out
  and think the variable is unused.
  llvm-svn: 290825
* [NewGVN] Restore old code to placate buildbots.
  Davide Italiano · 2017-01-02 · 1 file changed, -2/+6
  Apparently my suggestion of using a ternary doesn't really work, as clang
  complains about incompatible types on the LHS and RHS. Some GCC versions
  happen to accept the code, but clang's behaviour is correct here.
  llvm-svn: 290822
* NewGVN: Fix some formatting and comment issues
  Daniel Berlin · 2017-01-02 · 1 file changed, -18/+8
  llvm-svn: 290820
* NewGVN: Add UnknownExpression and create them for things we can't symbolize.
  Kill fragile machinery for handling null expressions.
  Daniel Berlin · 2017-01-02 · 1 file changed, -19/+19
  Summary: This avoids the very fragile code for null expressions. We could
  also use a DenseSet that tracks which things have null expressions instead,
  but that seems fragile and like premature optimization. This resolves a
  number of infinite-loop cases; test reductions are coming.
  Reviewers: davide
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D28193
  llvm-svn: 290816
* NewGVN: Fix PR31480, PR31483, PR31499 by rewriting how memory congruence
  handling works.
  Daniel Berlin · 2017-01-02 · 1 file changed, -20/+144
  Summary: Previously, we tried to fix up the equivalences during symbolic
  evaluation. This does not work. Now, we change the equivalences during
  congruence finding, where they belong. We also initialize the equivalence
  table to give a maximal answer.
  Reviewers: davide
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D28192
  llvm-svn: 290815
* [PMBuilder] Remove RunFloat2Int cl::opt.
  Davide Italiano · 2017-01-02 · 1 file changed, -6/+1
  The pass has been on by default for a long time without problems.
  llvm-svn: 290814
* [Inliner] remove unnecessary null checks from AddAlignmentAssumptions(); NFCI
  Sanjay Patel · 2016-12-31 · 1 file changed, -5/+3
  We bail out on the first line if the assumption cache is not set, so there's
  no need to check it after that.
  llvm-svn: 290787
* [InstCombine][AVX-512] Teach InstCombine that llvm.x86.avx512.vcomi.sd and
  llvm.x86.avx512.vcomi.ss don't use the upper elements of their input.
  Craig Topper · 2016-12-31 · 1 file changed, -0/+2
  This was already done for the SSE/SSE2 version of the intrinsics.
  llvm-svn: 290776
* [InstCombine][AVX-512] When turning intrinsics with masking into native IR,
  don't emit a select if the mask is known to be all ones.
  Craig Topper · 2016-12-30 · 1 file changed, -9/+20
  This saves InstCombine the burden of having to optimize the select later.
  llvm-svn: 290774
* Add a comment for a todo in LoopUnroll post cleanup
  Philip Reames · 2016-12-30 · 1 file changed, -0/+5
  llvm-svn: 290769
* [CVP] Adjust iteration order to reduce the amount of work required
  Philip Reames · 2016-12-30 · 1 file changed, -3/+8
  CVP doesn't care about the order in which blocks are visited, but by using a
  pre-order traversal over the graph we can a) avoid visiting unreachable
  blocks and b) optimize as we go, so that analysis of later blocks produces
  slightly more precise results. I noticed this via inspection and don't have
  a concrete example which points to the issue.
  llvm-svn: 290760
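A minimal sketch of the traversal this change switches to, with hypothetical Block/Process names rather than CVP's actual code: a pre-order walk from the entry never reaches unreachable blocks, and each block is processed before its successors.

    #include <unordered_set>
    #include <vector>

    struct Block { std::vector<Block *> Succs; };

    // Pre-order walk from the entry block: unreachable blocks are never
    // pushed, and each block is processed before its successors, so facts
    // learned early feed the analysis of later blocks.
    template <typename Fn>
    void visitPreOrder(Block *Entry, Fn Process) {
      std::unordered_set<Block *> Seen;
      std::vector<Block *> Stack{Entry};
      while (!Stack.empty()) {
        Block *B = Stack.back();
        Stack.pop_back();
        if (!Seen.insert(B).second)
          continue;
        Process(B); // optimize this block before queueing its successors
        for (Block *S : B->Succs)
          Stack.push_back(S);
      }
    }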
* [NewGVN] Remove unneeded newline from assertion message.
  Davide Italiano · 2016-12-30 · 1 file changed, -1/+1
  llvm-svn: 290755
* [InstCombine] Address post-commit feedback
  David Majnemer · 2016-12-30 · 2 files changed, -2/+4
  llvm-svn: 290741
* [LICM] When promoting scalars, allow inserting stores to thread-local allocas.
  Michael Kuperstein · 2016-12-30 · 1 file changed, -1/+2
  This is similar to the allocfn case - if an alloca is not captured, then
  it's necessarily thread-local.
  Differential Revision: https://reviews.llvm.org/D28170
  llvm-svn: 290738
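A hypothetical function illustrating the property the change relies on: an alloca whose address never escapes cannot be observed by another thread, so its loop accesses can be promoted to a register.

    // 'Tmp' is a stack slot whose address never escapes the function (it
    // is not "captured"), so no other thread can read or write it. LICM
    // may therefore keep it in a register inside the loop and insert a
    // single store after the loop.
    int sumTo(int N) {
      int Tmp = 0;               // non-captured alloca
      for (int I = 0; I < N; ++I)
        Tmp += I;                // promotable: thread-local, no aliasing
      return Tmp;
    }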
* Use continuous boosting factor for complete unroll.
  Dehao Chen · 2016-12-30 · 1 file changed, -75/+32
  Summary: The current loop complete unroll algorithm checks whether unrolling
  completely will reduce the runtime by a certain percentage. If yes, it
  applies a fixed boosting factor to the threshold (by discounting cost). The
  problem with this approach is that the threshold changes abruptly. This
  patch makes the boosting factor a function of the runtime-reduction
  percentage, capped by a fixed threshold. In this way, the threshold changes
  continuously. The patch also simplifies the code by removing one parameter
  from UP.
  The patch only affects code-gen of two speccpu2006 benchmarks: 445.gobmk
  binary size decreases 0.08% with no performance change; 464.h264ref binary
  size increases 0.24% with no performance change.
  Reviewers: mzolotukhin, chandlerc
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D26989
  llvm-svn: 290737
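A sketch of the idea with illustrative names and formula, not the patch's exact code: the boost grows with the estimated dynamic-cost reduction and is capped, so the effective threshold varies continuously rather than jumping at a fixed cutoff.

    #include <algorithm>
    #include <cstdint>

    // Boost the unroll threshold in proportion to how much cheaper the
    // fully unrolled body is than the rolled dynamic cost, capped at
    // MaxBoostPercent (100 means no boost).
    unsigned getUnrollBoostPercent(uint64_t RolledDynamicCost,
                                   uint64_t UnrolledCost,
                                   unsigned MaxBoostPercent) {
      if (UnrolledCost == 0)
        return MaxBoostPercent;
      uint64_t Boost = 100 * RolledDynamicCost / UnrolledCost;
      return (unsigned)std::min<uint64_t>(Boost, MaxBoostPercent);
    }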
* [LICM] Remove unneeded tracking of whether changes were made. NFC.
  Michael Kuperstein · 2016-12-30 · 1 file changed, -9/+7
  "Changed" doesn't actually change within the loop, so there's no reason to
  keep track of it - we always return false during analysis and true after
  the transformation is made.
  llvm-svn: 290735
* [LICM] Make logic in promoteLoopAccessesToScalars easier to follow. NFC.
  Michael Kuperstein · 2016-12-30 · 1 file changed, -40/+47
  llvm-svn: 290734
* [InstCombine] More thoroughly canonicalize the position of zexts
  David Majnemer · 2016-12-30 · 2 files changed, -9/+120
  We correctly canonicalized (add (sext x), (sext y)) to (sext (add x, y))
  where possible. However, we didn't perform the same canonicalization for
  zexts or for muls.
  llvm-svn: 290733
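The equivalence behind the canonicalization, in C++ terms; the transform is only valid when the narrow add provably cannot wrap, which is the "where possible" condition above.

    #include <cstdint>

    // (add (zext x), (zext y)): extend first, then add wide.
    uint64_t addWide(uint32_t X, uint32_t Y) {
      return (uint64_t)X + (uint64_t)Y;
    }

    // (zext (add x, y)): add narrow, then extend. Equal to addWide only
    // when X + Y cannot wrap in 32 bits.
    uint64_t addNarrowThenExtend(uint32_t X, uint32_t Y) {
      return (uint64_t)(X + Y);
    }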
* [LICM] Compute exit blocks for promotion eagerly. NFC.
  Michael Kuperstein · 2016-12-29 · 1 file changed, -35/+36
  This moves the exit block and insertion point computation to be eager,
  instead of after seeing the first scalar we can promote. The cost is
  relatively small (the computation happens anyway, see discussion on D28147),
  the code is easier to follow, and we can bail out earlier if there's a
  catchswitch present.
  llvm-svn: 290729
* [LICM] Don't try to promote in loops where we have no chance to promote. NFC.
  Michael Kuperstein · 2016-12-29 · 1 file changed, -10/+6
  We would check whether we have a preheader *or* dedicated exit blocks, and
  go into the promotion loop. Then, for each alias set, we'd check if we have
  a preheader *and* dedicated exit blocks, and bail if not. Instead, bail
  immediately if we don't have both.
  llvm-svn: 290728
* [LICM] Only recompute LCSSA when we actually promoted something.
  Michael Kuperstein · 2016-12-29 · 1 file changed, -3/+6
  We want to recompute LCSSA only when we actually promoted a value. This
  means we only need to look at changes made by promotion when deciding
  whether to recompute it or not, not at regular sinking/hoisting. (This was
  what the code was documented as doing, just not what it did.) Hopefully NFC.
  llvm-svn: 290726
* NewGVN: Fix PR31491 by ensuring that we touch the right instructions. Change
  to one-based numbering so we can assert we don't cause the same bug again.
  Daniel Berlin · 2016-12-29 · 1 file changed, -11/+21
  llvm-svn: 290724
* [InstCombine] Use getVectorNumElements instead of explicitly casting to
  VectorType and calling getNumElements. NFC
  Craig Topper · 2016-12-29 · 1 file changed, -8/+7
  llvm-svn: 290707
* [InstCombine] Fix typo in comment. NFC
  Craig Topper · 2016-12-29 · 1 file changed, -1/+1
  llvm-svn: 290706
* [InstCombine] Use 32 bits instead of 64 bits for storing the number of
  elements in VectorType for a ShuffleVector. While there, use
  getVectorNumElements to avoid an explicit cast. NFC
  Craig Topper · 2016-12-29 · 1 file changed, -2/+2
  llvm-svn: 290705
* [InstCombine][X86] If the lowest element of a scalar intrinsic isn't used,
  make sure we add it to the worklist so we can DCE it sooner.
  Craig Topper · 2016-12-29 · 1 file changed, -6/+18
  We bypassed the intrinsic and returned the passthru operand, but we should
  also add the intrinsic to the worklist since it's now dead. This can allow
  DCE to find it sooner and remove it. Similar was done for InsertElement when
  the inserted element isn't demanded.
  llvm-svn: 290704
* NewGVN: Sort Dominator Tree in RPO order, and use that for generating order.
  Daniel Berlin · 2016-12-29 · 1 file changed, -4/+24
  Summary: The optimal iteration order for this problem is RPO order. We want
  to process as many preds of a backedge as we can before we process the
  backedge itself. At the same time, as we add predicate handling, we want to
  be able to touch instructions that are dominated by a given block by ranges
  (because a change in value numbering a predicate possibly affects all users
  we dominate that are using that predicate). If we don't do it this way, we
  can't do value inference over backedges (the paper covers this in depth).
  The newgvn branch currently overshoots the last part, and guarantees that it
  will touch *at least* the right set of instructions, but it does touch more.
  This is because the bitvector instruction ranges are currently generated in
  RPO order (so we take the max and the min of the ranges of dominated blocks,
  which means there are some in the middle we didn't have to touch that we
  did). We can do better by sorting the dominator tree and then just using
  dominator tree order, since the dominator tree then becomes a valid RPO
  ordering.
  As a preliminary, the dominator tree has some RPO guarantees, but not
  enough: it guarantees that for a given node, your idom must come before you
  in the RPO ordering, but it guarantees no relative RPO ordering for
  siblings; we add siblings in whatever order they appear in the module. So
  that is what we fix: we sort the children array of the domtree into RPO
  order, and then use the dominator tree for ordering instead of RPO.
  Note: this would help any other pass that iterates a forward problem in
  dominator tree order. Most of them are single pass. It will still maximize
  whatever result they compute. We could also build the dominator tree in
  this order, but our incremental updates would still put it out of sort
  order, and recomputing the sort order is almost as hard as general
  incremental updates of the domtree. Also note that the sorting does not
  affect any tests, etc.; nothing depends on domtree order, including the
  verifier and the equals functions for domtree nodes.
  How much could this matter, you ask? Here are the current numbers, generated
  by running NewGVN over all files in LLVM and counting how many cases
  converge after a given number of iterations. Note that once we propagate
  equalities, the differences go up by an order of magnitude or two (i.e.
  instead of 29, the max ends up in the thousands, since in the worst case we
  add a factor of N, where N is the number of branch predicates). So while it
  doesn't look that stark for the default ordering, it gets *much much* worse.
  There are also programs in the wild where the difference is already pretty
  stark (2 iterations vs. hundreds).
  RPO ordering: 759040 cases took 1 iteration, 112908 took 2.
  Default dominator tree ordering: 755081 took 1, 116234 took 2, 603 took 3,
  27 took 4, 2 took 5, 1 took 7.
  Dominator tree sorted: 759040 took 1, 112908 took 2 <yay!>.
  Really bad ordering (sort domtree siblings in postorder; not quite the worst
  possible, but yeah): 754008 took 1, 96642 took 2, 17266 took 3, 2598 took 4,
  798 took 5, 273 took 6, 186 took 7, 80 took 8, 42 took 9, 21 took 10,
  8 took 11, 6 took 12, 5 took 13, 2 took 14, 2 took 15, 3 took 16, 1 took 17,
  2 took 18, 1 took 20, 2 took 21, 1 took 22, 1 took 29.
  Reviewers: chandlerc, davide
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D28129
  llvm-svn: 290699
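A self-contained sketch of the sorting step, using illustrative structures rather than LLVM's DomTreeNode API: assign every block its RPO index, then sort each node's children by that index, after which a dominator-tree walk is itself a valid RPO.

    #include <algorithm>
    #include <unordered_map>
    #include <vector>

    struct DomNode {
      int Block;                       // illustrative block id
      std::vector<DomNode *> Children;
    };

    // RPOIndex maps each block to its position in a reverse post-order
    // walk of the CFG. A node's idom already precedes it in RPO; sorting
    // makes siblings mutually ordered too, so visiting children in order
    // yields a valid RPO.
    void sortDomTreeIntoRPO(DomNode *Root,
                            const std::unordered_map<int, unsigned> &RPOIndex) {
      std::vector<DomNode *> Work{Root};
      while (!Work.empty()) {
        DomNode *N = Work.back();
        Work.pop_back();
        std::sort(N->Children.begin(), N->Children.end(),
                  [&](const DomNode *A, const DomNode *B) {
                    return RPOIndex.at(A->Block) < RPOIndex.at(B->Block);
                  });
        for (DomNode *C : N->Children)
          Work.push_back(C);
      }
    }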
* Update equalsStoreHelper for the fact that only one branch can be true
  Daniel Berlin · 2016-12-29 · 1 file changed, -4/+5
  llvm-svn: 290697
* Revert "[NewGVN] replace emplace_back with push_back"Piotr Padlewski2016-12-281-7/+7
| | | | llvm-svn: 290692
* [NewGVN] replace emplace_back with push_back
  Piotr Padlewski · 2016-12-28 · 1 file changed, -7/+7
  emplace_back is not faster if it is equivalent to push_back. In these cases
  the emplaced value has the same type as the one stored in the container. It
  is ugly and it might even be slower (see Scott Meyers' presentation about
  emplacement).
  llvm-svn: 290685
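For contrast, the one pattern where emplace_back does buy something is constructing the element in place from constructor arguments instead of from an already-built value:

    #include <string>
    #include <vector>

    int main() {
      std::vector<std::string> V;
      std::string S = "abc";
      V.push_back(S);         // copies S; emplace_back(S) would be identical
      V.emplace_back(3, 'x'); // builds "xxx" in place from (count, char):
                              // the case where emplace_back can win
      return 0;
    }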
* [NewGVN] Simplify loop. NFC
  Piotr Padlewski · 2016-12-28 · 1 file changed, -4/+1
  llvm-svn: 290683
* [NewGVN] replace typedefs with usings
  Piotr Padlewski · 2016-12-28 · 1 file changed, -2/+2
  llvm-svn: 290680
* [NewGVN] NFC fixes
  Piotr Padlewski · 2016-12-28 · 1 file changed, -40/+36
  llvm-svn: 290679
* [NewGVN] Global sweep replacing NULL with nullptr. NFCI.
  Davide Italiano · 2016-12-28 · 1 file changed, -10/+10
  llvm-svn: 290670
* [NewGVN] Remove redundant code. NFCI.
  Davide Italiano · 2016-12-28 · 1 file changed, -2/+0
  llvm-svn: 290669
* [NewGVN] equals() for loads/stores is the same. Unify.
  Davide Italiano · 2016-12-28 · 1 file changed, -23/+13
  Differential Revision: https://reviews.llvm.org/D28116
  llvm-svn: 290667
* [PM] Teach the inliner's call graph update to handle inserting new edges
  when they are call edges at the leaf but may (transitively) be reached via
  ref edges.
  Chandler Carruth · 2016-12-28 · 1 file changed, -7/+9
  It turns out there is a simple rule: insert everything as a ref edge, which
  is a safe conservative default. Then we let the existing update logic handle
  promoting some of those to call edges.
  Note that it would be fairly cheap to make these call edges right away if
  that is desirable, by testing whether there is some existing call path from
  the source to the target. It just seemed like slightly more complexity in
  this code path that isn't strictly necessary. If anyone feels strongly about
  handling this differently, I'm happy to change it.
  llvm-svn: 290649
* [InstCombine] Remove a piece of a comment that said that InstCombiner
  contains pass infrastructure. That hasn't been true since r226618. NFC
  Craig Topper · 2016-12-28 · 1 file changed, -2/+1
  llvm-svn: 290648
* [InstCombine] Canonicalize insert splat sequences into an insert + shuffle
  Michael Kuperstein · 2016-12-28 · 1 file changed, -0/+57
  This adds a combine that canonicalizes a chain of inserts which broadcasts
  a value into a single insert + a splat shufflevector. This fixes PR31286.
  Differential Revision: https://reviews.llvm.org/D27992
  llvm-svn: 290641
* [sanitizer-coverage] sort the switch cases
  Kostya Serebryany · 2016-12-27 · 1 file changed, -0/+5
  llvm-svn: 290628
* [NewGVN] Simplify a bit by removing else after return. NFCI.
  Davide Italiano · 2016-12-27 · 1 file changed, -3/+3
  llvm-svn: 290615
* [MemCpyOpt] Don't sink LoadInst below possible clobber.
  Bryant Wong · 2016-12-27 · 1 file changed, -5/+11
  Differential Revision: https://reviews.llvm.org/D26811
  llvm-svn: 290611
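The hazard in C++ terms (illustrative): a load must not be sunk past a store that could write the same memory, unless the optimizer can prove the two locations don't alias.

    // If Dst may alias Src, sinking the load of *Src below the store to
    // *Dst changes the value loaded, so the reordering is only legal when
    // there is no intervening clobber.
    int sinkHazard(int *Src, int *Dst) {
      int Tmp = *Src; // load: must stay above the next store
      *Dst = 0;       // possible clobber: may alias Src
      return Tmp;
    }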
* Change a std::vector to SmallVector in NewGVN
  Daniel Berlin · 2016-12-27 · 1 file changed, -1/+1
  llvm-svn: 290596
* [PM] Add one of the features left out of the initial inliner patch:
  skipping indirectly recursive inline chains.
  Chandler Carruth · 2016-12-27 · 1 file changed, -7/+23
  To do this, we implicitly build an inline stack for each callsite and check
  prior to inlining that doing so would not form a cycle. This uses the exact
  same technique and even shares some code with the legacy PM inliner.
  This solution remains deeply unsatisfying to me because it means we cannot
  actually iterate the inliner externally. Doing so would not be able to
  easily detect and avoid such cycles. Some day I would very much like to
  have a solution that works without this internal state to detect cycles,
  but this is not that day.
  llvm-svn: 290590
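A sketch of the cycle check, with hypothetical record and field names rather than the pass's actual types: walk the chain of "inlined from" records for the call site and refuse to inline a callee that already appears in it.

    struct Function; // opaque; only compared by address here

    // Hypothetical per-callsite record: the function inlined at this step
    // and the record for the call site it was inlined out of, if any.
    struct InlineHistory {
      const Function *Callee;
      const InlineHistory *Parent;
    };

    // Inlining Callee at a call site whose history already contains
    // Callee would re-expand the same function along this chain (an
    // indirectly recursive inline), so the inliner skips it.
    bool wouldFormInlineCycle(const InlineHistory *H, const Function *Callee) {
      for (; H; H = H->Parent)
        if (H->Callee == Callee)
          return true;
      return false;
    }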
* [InstCombine][X86] Add DemandedElts support for 512-bit PMULDQ/PMULUDQ
  instructions
  Craig Topper · 2016-12-27 · 2 files changed, -2/+6
  PMULDQ/PMULUDQ vXi64 instructions only use the even-numbered v2Xi32 input
  elements, which SimplifyDemandedVectorElts should take advantage of. This
  builds on r290554, which added support for the 128- and 256-bit versions.
  llvm-svn: 290582
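The per-lane semantics being exploited, sketched in C++ with illustrative names: each 64-bit product reads only the low (even-indexed) 32-bit element of each input lane, so the odd elements are never demanded.

    #include <cstdint>

    // One 64-bit lane of PMULDQ: only the low 32 bits of each input are
    // read, sign-extended, and multiplied.
    int64_t pmuldqLane(uint64_t A, uint64_t B) {
      return (int64_t)(int32_t)(uint32_t)A * (int64_t)(int32_t)(uint32_t)B;
    }

    // PMULUDQ is the unsigned analogue: zero-extend the low halves.
    uint64_t pmuludqLane(uint64_t A, uint64_t B) {
      return (uint64_t)(uint32_t)A * (uint64_t)(uint32_t)B;
    }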
* [PM] Teach the inliner in the new PM to merge attributes after inlining.
  Chandler Carruth · 2016-12-27 · 1 file changed, -0/+3
  Also enable the new PM in the attributes test case, which caught this issue.
  llvm-svn: 290572
* [AVX-512][InstCombine] Teach InstCombine to turn masked scalar
  add/sub/mul/div with rounding intrinsics into normal IR operations if the
  rounding mode is CUR_DIRECTION.
  Craig Topper · 2016-12-27 · 1 file changed, -37/+42
  An earlier commit added support for unmasked scalar operations. At that time
  isel wouldn't generate an optimal sequence for masked operations, but that
  has now been fixed.
  llvm-svn: 290566