path: root/llvm/lib/Transforms/Vectorize
Commit message (Author, Age, Files changed, Lines -removed/+added)
...
* SLPVectorizer: limit the scheduling region size per basic block. (Erik Eckstein, 2015-09-30; 1 file, -9/+56)
  Usually large blocks are not a problem. But if a large block (> 10k instructions) contains many (potential) chains of vector instructions, and those chains are spread over a wide range of instructions, then scheduling becomes a compile-time problem. This change introduces a limit for the accumulated scheduling region size of a block. For real-world functions this limit will never be exceeded (it's about 10x larger than the maximum value seen in the test-suite and external test suite).
  llvm-svn: 248917
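A minimal sketch of the kind of guard described above; the struct, field names, and the limit are illustrative, not the patch's exact code:

```cpp
// Cap how many instructions the scheduling region may grow to cover within
// one basic block, so pathological blocks stay cheap to compile.
namespace {
constexpr int ScheduleRegionSizeLimit = 100000; // illustrative budget

struct BlockSchedulingSketch {
  int ScheduleRegionSize = 0;

  // Returns false once the budget is exhausted; the caller then simply
  // declines to vectorize the bundle instead of extending the region.
  bool extendRegion(int NumNewInstructions) {
    if (ScheduleRegionSize + NumNewInstructions > ScheduleRegionSizeLimit)
      return false;
    ScheduleRegionSize += NumNewInstructions;
    return true;
  }
};
} // namespace
```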
* [SCEV] Introduce ScalarEvolution::getOne and getZero. (Sanjoy Das, 2015-09-23; 1 file, -3/+2)
  Summary: It is fairly common to call SE->getConstant(Ty, 0) or SE->getConstant(Ty, 1); this change makes such uses a little bit briefer. I've refactored the call sites I could find easily to use getZero / getOne.
  Reviewers: hfinkel, majnemer, reames
  Subscribers: sanjoy, llvm-commits
  Differential Revision: http://reviews.llvm.org/D12947
  llvm-svn: 248362
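The same constant spelled both ways, as a small illustrative helper (not code from the patch itself):

```cpp
#include "llvm/Analysis/ScalarEvolution.h"
using namespace llvm;

// Returns the SCEV for 1 of the given type, showing the old and new spellings.
static const SCEV *makeOne(ScalarEvolution &SE, Type *Ty) {
  const SCEV *OldStyle = SE.getConstant(Ty, 1); // pre-patch spelling
  const SCEV *NewStyle = SE.getOne(Ty);         // helper added by this change
  (void)OldStyle;
  return NewStyle;
}
```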
* [LoopUtils,LV] Propagate fast-math flags on generated FCmp instructions (James Molloy, 2015-09-21; 1 file, -2/+4)
  We're currently losing any fast-math flags when synthesizing fcmps for min/max reductions. In LV, make sure we copy over the scalar inst's flags. In LoopUtils, we know we only ever match patterns with hasUnsafeAlgebra, so apply that to any synthesized ops.
  llvm-svn: 248201
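A hedged sketch of the pattern: synthesize the select-based max and keep the scalar instruction's fast-math flags on the new fcmp. The helper name and value names are illustrative.

```cpp
#include "llvm/IR/IRBuilder.h"
using namespace llvm;

// Emit cmp+select for a floating-point max reduction without dropping the
// nnan/ninf/fast flags carried by the original scalar instruction.
static Value *emitFMaxReduction(IRBuilder<> &B, Value *LHS, Value *RHS,
                                Instruction *ScalarInst) {
  Value *Cmp = B.CreateFCmpOGT(LHS, RHS, "rdx.cmp");
  if (auto *CmpI = dyn_cast<Instruction>(Cmp))
    CmpI->copyFastMathFlags(ScalarInst); // propagate the scalar inst's flags
  return B.CreateSelect(Cmp, LHS, RHS, "rdx.max");
}
```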
* [PM/AA] Rebuild LLVM's alias analysis infrastructure in a way compatible with the new pass manager, and no longer relying on analysis groups. (Chandler Carruth, 2015-09-09; 3 files, -12/+24)
  This builds essentially a ground-up new AA infrastructure stack for LLVM. The core ideas are the same as those used throughout the new pass manager: type erased polymorphism and direct composition. The design is as follows:
  - FunctionAAResults is a type-erasing alias analysis results aggregation interface to walk a single query across a range of results from different alias analyses. Currently this is function-specific as we always assume that aliasing queries are *within* a function.
  - AAResultBase is a CRTP utility providing stub implementations of various parts of the alias analysis result concept, notably in several cases in terms of other more general parts of the interface. This can be used to implement only a narrow part of the interface rather than the entire interface. This isn't really ideal; this logic should be hoisted into FunctionAAResults as currently it will cause a significant amount of redundant work, but it faithfully models the behavior of the prior infrastructure.
  - All the alias analysis passes are ported to be wrapper passes for the legacy PM and new-style analysis passes for the new PM with a shared result object. In some cases (most notably CFL), this is an extremely naive approach that we should revisit when we can specialize for the new pass manager.
  - BasicAA has been restructured to reflect that it is much more fundamentally a function analysis because it uses dominator trees and loop info that need to be constructed for each function.
  All of the references to getting alias analysis results have been updated to use the new aggregation interface. All the preservation and other pass management code has been updated accordingly.
  The way the FunctionAAResultsWrapperPass works is to detect the available alias analyses when run, and add them to the results object. This means that we should be able to continue to respect when various passes are added to the pipeline, for example adding CFL or adding TBAA passes should just cause their results to be available and to get folded into this. The exception to this rule is BasicAA which really needs to be a function pass due to using dominator trees and loop info. As a consequence, the FunctionAAResultsWrapperPass directly depends on BasicAA and always includes it in the aggregation.
  This has significant implications for preserving analyses. Generally, most passes shouldn't bother preserving FunctionAAResultsWrapperPass because rebuilding the results just updates the set of known AA passes. The exception to this rule are LoopPass instances which need to preserve all the function analyses that the loop pass manager will end up needing. This means preserving both BasicAAWrapperPass and the aggregating FunctionAAResultsWrapperPass.
  Now, when preserving an alias analysis, you do so by directly preserving that analysis. This is only necessary for non-immutable-pass-provided alias analyses though, and there are only three of interest: BasicAA, GlobalsAA (formerly GlobalsModRef), and SCEVAA. Usually BasicAA is preserved when needed because it (like DominatorTree and LoopInfo) is marked as a CFG-only pass.
  I've expanded GlobalsAA into the preserved set everywhere we previously were preserving all of AliasAnalysis, and I've added SCEVAA in the intersection of that with where we preserve SCEV itself.
  One significant challenge to all of this is that the CGSCC passes were actually using the alias analysis implementations by taking advantage of a pretty amazing set of loopholes in the old pass manager's analysis management code which allowed analysis groups to slide through in many cases. Moving away from analysis groups makes this problem much more obvious. To fix it, I've leveraged the flexibility the design of the new PM components provides to just directly construct the relevant alias analyses for the relevant functions in the IPO passes that need them. This is a bit hacky, but should go away with the new pass manager, and is already in many ways cleaner than the prior state.
  Another significant challenge is that various facilities of the old alias analysis infrastructure just don't fit any more. The most significant of these is the alias analysis 'counter' pass. That pass relied on the ability to snoop on AA queries at different points in the analysis group chain. Instead, I'm planning to build printing functionality directly into the aggregation layer. I've not included that in this patch merely to keep it smaller.
  Note that all of this needs a nearly complete rewrite of the AA documentation. I'm planning to do that, but I'd like to make sure the new design settles, and to flesh out a bit more of what it looks like in the new pass manager first.
  Differential Revision: http://reviews.llvm.org/D12080
  llvm-svn: 247167
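A hedged sketch of the preservation scheme the message describes: a pass now names the specific alias analyses it keeps alive instead of preserving the old "AliasAnalysis" group. The example pass itself is a made-up placeholder.

```cpp
#include "llvm/Analysis/BasicAliasAnalysis.h"
#include "llvm/Analysis/GlobalsModRef.h"
#include "llvm/Pass.h"
using namespace llvm;

namespace {
// Placeholder transform pass that does nothing but declare what it preserves.
struct ExamplePass : public FunctionPass {
  static char ID;
  ExamplePass() : FunctionPass(ID) {}
  bool runOnFunction(Function &) override { return false; }

  void getAnalysisUsage(AnalysisUsage &AU) const override {
    AU.setPreservesCFG();
    AU.addPreserved<BasicAAWrapperPass>();   // CFG-only, cheap to keep valid
    AU.addPreserved<GlobalsAAWrapperPass>(); // formerly GlobalsModRef
  }
};
} // namespace
char ExamplePass::ID = 0;
```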
* Rename ExitCount to BackedgeTakenCount, because that's what it is. (James Molloy, 2015-09-09; 1 file, -8/+9)
  We called a variable ExitCount, stored the backedge count in it, then redefined it to be the exit count again.
  llvm-svn: 247140
* Delay predication of stores until near the end of vector code generation (James Molloy, 2015-09-09; 1 file, -56/+28)
  Predicating stores requires creating extra blocks. It's much cleaner if we do this in one pass instead of mutating the CFG while writing vector instructions. Besides which we can make use of helper functions to update domtree for us, reducing the work we need to do.
  llvm-svn: 247139
* [LV] Don't bail to MiddleBlock if a runtime check fails, bail to ScalarPH instead (James Molloy, 2015-09-02; 1 file, -60/+27)
  We were bailing to two places if our runtime checks failed. If the initial overflow check failed, we'd go to ScalarPH. If any other check failed, we'd go to MiddleBlock. This caused us to have to have an extra PHI per induction and reduction as the vector loop's exit block was not dominated by its latch. There's no need to have this behavior - if we just always go to ScalarPH we can get rid of a bunch of complexity.
  llvm-svn: 246637
* [LV] Move some code around slightly to make the intent of the function more clear. NFC. (James Molloy, 2015-09-02; 1 file, -3/+1)
  llvm-svn: 246636
* [LV] Cleanup: Sink an IRBuilder closer to its uses. (James Molloy, 2015-09-02; 1 file, -10/+5)
  NFC.
  llvm-svn: 246635
* [LV] Refactor all runtime check emissions into helper functions. (James Molloy, 2015-09-02; 1 file, -100/+126)
  This reduces the complexity of createEmptyBlock() and will open the door to further refactoring. The test change is simply because we're now constant folding a trivial test.
  llvm-svn: 246634
* [LV] Pull creation of trip counts into a helper function. (James Molloy, 2015-09-02; 1 file, -63/+101)
  ... and do a tad of tidyup while we're at it. Because StartIdx must now be zero, there's no difference between Count and EndIdx.
  llvm-svn: 246633
* [LV] Factor the creation of the loop induction variable out of createEmptyLoop() (James Molloy, 2015-09-02; 1 file, -19/+43)
  It makes things easier to understand if this is in a helper method. This is part of my ongoing spaghetti-removal operation on createEmptyLoop.
  llvm-svn: 246632
* [LV] Never widen an induction variable. (James Molloy, 2015-09-02; 1 file, -115/+49)
  There's no need to widen canonical induction variables. It's just as efficient to create a *new*, wide, induction variable. Consider: if we widen an indvar, then we'll have to truncate it before its uses anyway (1 trunc). If we create a new indvar instead, we'll have to truncate that instead (1 trunc) [besides which IndVars should go and clean up our mess after us anyway on principle]. This lets us remove a ton of special-casing code.
  llvm-svn: 246631
* [LV] Switch to using canonical induction variables. (James Molloy, 2015-09-02; 1 file, -14/+8)
  Vectorized loops only ever have one induction variable. All induction PHIs from the scalar loop are rewritten to be in terms of this single indvar. We were trying very hard to pick an indvar that already existed, even if that indvar wasn't canonical (didn't start at zero). But trying so hard is really fruitless - creating a new, canonical, indvar only results in one extra add in the worst case and that add is trivially easy to push through the PHI out of the loop by instcombine. If we try and be less clever here and instead let instcombine clean up our mess (as we do in many other places in LV), we can remove unneeded complexity.
  llvm-svn: 246630
* Improve vectorization diagnostic messages and extend vectorize(enable) pragma. (Tyler Nowicki, 2015-08-27; 1 file, -15/+25)
  This patch changes the analysis diagnostics produced when loops with floating-point recurrences or memory operations are identified. The new messages say "cannot prove it is safe to reorder * operations; allow reordering by specifying #pragma clang loop vectorize(enable)". Depending on the type of diagnostic the message will include additional options such as ffast-math or __restrict__.
  This patch also allows the vectorize(enable) pragma to override the low pointer memory check threshold. When the hint is given a higher threshold is used. See the clang patch for the options produced for each diagnostic.
  llvm-svn: 246187
* [LoopVectorize] Add Support for Small Size Reductions. (Chad Rosier, 2015-08-27; 1 file, -18/+60)
  Unlike scalar operations, we can perform vector operations on element types that are smaller than the native integer types. We type-promote scalar operations if they are smaller than a native type (e.g., i8 arithmetic is promoted to i32 arithmetic on Arm targets). This patch detects and removes type-promotions within the reduction detection framework, enabling the vectorization of small size reductions.
  In the legality phase, we look through the ANDs and extensions that InstCombine creates during promotion, keeping track of the smaller type. In the profitability phase, we use the smaller type and ignore the ANDs and extensions in the cost model. Finally, in the code generation phase, we truncate the result of the reduction to allow InstCombine to rewrite the entire expression in the smaller type.
  This fixes PR21369.
  http://reviews.llvm.org/D12202
  Patch by Matt Simpson <mssimpso@codeaurora.org>!
  llvm-svn: 246149
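A hedged sketch of the "look through the promotion" step: peel off the and/zext pair that InstCombine inserts when it promotes i8/i16 arithmetic, so the reduction can be recognised in its original small type. The helper name is made up; the real legality code is more involved.

```cpp
#include <cstdint>
#include "llvm/IR/PatternMatch.h"
using namespace llvm;
using namespace llvm::PatternMatch;

// If V is `and (zext X), Mask` where Mask keeps only SmallBits bits
// (SmallBits < 64), return the unpromoted value X; otherwise return V.
static Value *lookThroughPromotion(Value *V, unsigned SmallBits) {
  Value *Inner;
  uint64_t Mask;
  if (match(V, m_And(m_ZExt(m_Value(Inner)), m_ConstantInt(Mask))) &&
      Mask == (UINT64_C(1) << SmallBits) - 1)
    return Inner;
  return V;
}
```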
* [LoopVectorize] Extract InductionInfo into a helper class... (James Molloy, 2015-08-27; 1 file, -128/+29)
  ... and move it into LoopUtils where it can be used by other passes, just like ReductionDescriptor. The API is very similar to ReductionDescriptor - that is, not very nice at all. Sorting these both out will come in a followup. NFC
  llvm-svn: 246145
* Improved printing of analysis diagnostics in the loop vectorizer. (Tyler Nowicki, 2015-08-27; 1 file, -18/+26)
  This patch ensures that every analysis diagnostic produced by the vectorizer will be printed if the loop has a vectorization hint on it. The condition has also been improved to prevent printing when a disabling hint is specified.
  llvm-svn: 246132
* The patch replaces the overflow check in loop vectorization with the minimum loop iterations check. (Wei Mi, 2015-08-25; 1 file, -22/+25)
  The loop minimum iterations check below ensures the loop has a large enough trip count so the generated vector loop will likely be executed, and it covers the overflow check.
  Differential Revision: http://reviews.llvm.org/D12107.
  llvm-svn: 245952
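A hedged sketch of what such a minimum-iterations guard looks like when emitted with IRBuilder; names and structure are illustrative, not the patch's exact code.

```cpp
#include "llvm/IR/IRBuilder.h"
using namespace llvm;

// Compare the trip count against VF*UF (the scalar iterations consumed by one
// vector-loop iteration); a true result means "too few iterations, take the
// scalar loop", which also subsumes the old overflow check.
static Value *emitMinIterationCheck(IRBuilder<> &B, Value *TripCount,
                                    unsigned VF, unsigned UF) {
  Value *Step = ConstantInt::get(TripCount->getType(), VF * UF);
  return B.CreateICmpULT(TripCount, Step, "min.iters.check");
}
```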
* Standardized 'failed' to 'Failed' in LoopVectorizationRequirements. (Tyler Nowicki, 2015-08-21; 1 file, -4/+4)
  llvm-svn: 245759
* [SLP] Propagate 'nontemporal' attribute into vectorized instructions. (Michael Zolotukhin, 2015-08-20; 1 file, -0/+3)
  llvm-svn: 245633
* [LoopVectorize] Propagate 'nontemporal' attribute into vectorized instructions. (Michael Zolotukhin, 2015-08-20; 1 file, -1/+2)
  llvm-svn: 245632
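Both of these commits carry the same metadata node across widening. A minimal sketch of the idea (the helper name is illustrative):

```cpp
#include "llvm/IR/Instruction.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Metadata.h"
using namespace llvm;

// Copy the !nontemporal node from the scalar memory access onto the vector
// access it was widened into, so the hint is not lost by vectorization.
static void propagateNontemporal(Instruction *Scalar, Instruction *Vector) {
  if (MDNode *NT = Scalar->getMetadata(LLVMContext::MD_nontemporal))
    Vector->setMetadata(LLVMContext::MD_nontemporal, NT);
}
```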
* Rename Instruction::dropUnknownMetadata() to dropUnknownNonDebugMetadata() (Adrian Prantl, 2015-08-20; 1 file, -1/+0)
  and make it always preserve debug locations, since all callers wanted this behavior anyway. This is addressing post-commit review feedback for r245589. NFC (inside the LLVM tree).
  llvm-svn: 245622
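A hedged usage sketch: with the rename, callers no longer need to list MD_dbg just to keep debug locations. The set of "known" kinds below is an arbitrary example.

```cpp
#include "llvm/IR/Instruction.h"
#include "llvm/IR/LLVMContext.h"
using namespace llvm;

// Strip every metadata kind except the ones we explicitly want to keep;
// debug locations survive regardless of the list.
static void stripToKnown(Instruction &I) {
  unsigned Known[] = {LLVMContext::MD_range, LLVMContext::MD_tbaa};
  I.dropUnknownNonDebugMetadata(Known);
}
```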
* Fix a bug that caused SimplifyCFG to drop DebugLocs. (Adrian Prantl, 2015-08-20; 1 file, -0/+1)
  Instruction::dropUnknownMetadata(KnownSet) is supposed to preserve all metadata in KnownSet, but the condition for DebugLocs was inverted. Most users of dropUnknownMetadata() actually worked around this by not adding LLVMContext::MD_dbg to their list of KnownIDs. This is now made explicit.
  llvm-svn: 245589
* [PM] Port ScalarEvolution to the new pass manager. (Chandler Carruth, 2015-08-17; 3 files, -11/+11)
  This change makes ScalarEvolution a stand-alone object and just produces one from a pass as needed. Making this work well requires making the object movable, using references instead of overwritten pointers in a number of places, and other refactorings.
  I've also wired it up to the new pass manager and added a RUN line to a test to exercise it under the new pass manager. This includes basic printing support much like with other analyses.
  But there is a big and somewhat scary change here. Prior to this patch ScalarEvolution was never *actually* invalidated!!! Re-running the pass just re-wired up the various other analyses and didn't remove any of the existing entries in the SCEV caches or clear out anything at all. This might seem OK as everything in SCEV that can, uses ValueHandles to track updates to the values that serve as SCEV keys. However, this still means that as we ran SCEV over each function in the module, we kept accumulating more and more SCEVs into the cache. At the end, we would have a SCEV cache with every value that we ever needed a SCEV for in the entire module!!! Yowzers. The releaseMemory routine would dump all of this, but that isn't really called during normal runs of the pipeline as far as I can see.
  To make matters worse, there *is* actually a key that we don't update with value handles -- there is a map keyed off of Loop*s. Because LoopInfo *does* release its memory from run to run, it is entirely possible to run SCEV over one function, then over another function, and then look up a Loop* from the second function but find an entry inserted for the first function! Ouch.
  To make matters still worse, there are plenty of updates that *don't* trip a value handle. It seems incredibly unlikely that today GVN or another pass that invalidates SCEV can update values in *just* such a way that a subsequent run of SCEV will incorrectly find lookups in a cache, but it is theoretically possible and would be a nightmare to debug.
  With this refactoring, I've fixed all this by actually destroying and recreating the ScalarEvolution object from run to run. Technically, this could increase the amount of malloc traffic we see, but then again it is also technically correct. ;] I don't actually think we're suffering from tons of malloc traffic from SCEV because if we were, the fact that we never clear the memory would seem more likely to have come up as an actual problem before now. So, I've made the simple fix here. If in fact there are serious issues with too much allocation and deallocation, I can work on a clever fix that preserves the allocations (while clearing the data) between each run, but I'd prefer to do that kind of optimization with a test case / benchmark that shows why we need such cleverness (and that can test that we actually make it faster). It's possible that this will make some things faster by making the SCEV caches have higher locality (due to being significantly smaller) so until there is a clear benchmark, I think the simple change is best.
  Differential Revision: http://reviews.llvm.org/D12063
  llvm-svn: 245193
* [PM/AA] Explicitly depend on TLI rather than getting it out of the AliasAnalysis. (Chandler Carruth, 2015-08-12; 1 file, -1/+7)
  Same as the other commits, the TLI access from an alias analysis is going away and isn't very clean -- it is better to explicitly mark the dependencies.
  llvm-svn: 244785
* fix minsize detection: minsize attribute implies optimizing for size (Sanjay Patel, 2015-08-11; 1 file, -7/+7)
  llvm-svn: 244617
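A hedged sketch of the corrected test: -Oz (minsize) must also count as optimizing for size, so both attributes are checked rather than OptimizeForSize alone. The helper name is illustrative.

```cpp
#include "llvm/IR/Function.h"
using namespace llvm;

// True when the function is compiled for size, either -Os or -Oz.
static bool optimizingForSize(const Function &F) {
  return F.hasFnAttribute(Attribute::OptimizeForSize) ||
         F.hasFnAttribute(Attribute::MinSize);
}
```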
* fix code that was accidentally commented out in previous commit (Sanjay Patel, 2015-08-11; 1 file, -2/+2)
  llvm-svn: 244610
* fix typos in comments; NFC (Sanjay Patel, 2015-08-11; 1 file, -5/+5)
  llvm-svn: 244609
* fix typo in comment; NFC (Sanjay Patel, 2015-08-11; 1 file, -1/+1)
  llvm-svn: 244607
* Print vectorization analysis when loop hint is specified. (Tyler Nowicki, 2015-08-11; 1 file, -16/+34)
  This patch and a related clang patch solve the problem of having to explicitly enable analysis when specifying a loop hint pragma to get the diagnostics. Passing AlwaysPrint as the pass name (see below) causes the front-end to print the diagnostic if the user has specified '-Rpass-analysis' without an '=<target-pass>'. Users of loop hints can pass that compiler option without having to specify the pass and they will get diagnostics for only those loops with loop hints.
  llvm-svn: 244555
* Moved LoopVectorizeHints and related functions before LoopVectorizationLegality and LoopVectorizationCostModel. (Tyler Nowicki, 2015-08-11; 1 file, -270/+270)
  llvm-svn: 244552
* Simplify processLoop() by moving loop hint verification into Hints::allowVectorization(). (Tyler Nowicki, 2015-08-11; 1 file, -26/+35)
  llvm-svn: 244550
* [LAA] Change name from addRuntimeCheck to addRuntimeChecks, NFC (Adam Nemet, 2015-08-11; 1 file, -1/+1)
  This was requested by Hal in D11205.
  llvm-svn: 244540
* Extend late diagnostics to include late test for runtime pointer checks. (Tyler Nowicki, 2015-08-10; 1 file, -14/+29)
  This patch moves checking the threshold of runtime pointer checks to the vectorization requirements (late diagnostics) and emits a diagnostic that informs the user the loop would be vectorized if not for exceeding the pointer-check threshold. Clang will also append the options that can be used to allow vectorization.
  llvm-svn: 244523
* Late evaluation of the fast-math vectorization requirement. (Tyler Nowicki, 2015-08-10; 1 file, -3/+62)
  This patch moves the verification of fast-math to just before vectorization is done. This way we can tell clang to append the command line options that would allow floating-point commutativity. Specifically those are enabling fast-math or specifying a loop hint.
  llvm-svn: 244489
* Modify diagnostic messages to clearly indicate why interleaving wasn't done. (Tyler Nowicki, 2015-08-10; 1 file, -22/+69)
  Sometimes interleaving is not beneficial, as determined by the cost-model, and sometimes it is disabled by a loop hint (by the user). This patch modifies the diagnostic messages to make it clear why interleaving wasn't done.
  llvm-svn: 244485
* [TTI] Add a hook for specifying per-target defaults for Interleaved Accesses (Silviu Baranga, 2015-08-10; 1 file, -2/+8)
  Summary: This adds a hook to TTI which enables us to selectively turn on by default interleaved access vectorization for targets on which we have performed the required benchmarking.
  Reviewers: rengolin
  Subscribers: rengolin, llvm-commits
  Differential Revision: http://reviews.llvm.org/D11901
  llvm-svn: 244449
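A hedged sketch of the consumer side of the hook: the vectorizer asks the target whether interleaved-access vectorization should be on by default (a command-line override, not shown, would still win).

```cpp
#include "llvm/Analysis/TargetTransformInfo.h"
using namespace llvm;

// Query the per-target default added by this change.
static bool useInterleavedAccesses(const TargetTransformInfo &TTI) {
  return TTI.enableInterleavedAccessVectorization();
}
```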
* Fix some comment typos. (Benjamin Kramer, 2015-08-08; 1 file, -7/+7)
  llvm-svn: 244402
* wrap OptSize and MinSize attributes for easier and consistent access (NFCI) (Sanjay Patel, 2015-08-04; 1 file, -0/+1)
  Create wrapper methods in the Function class for the OptimizeForSize and MinSize attributes. We want to hide the logic of "or'ing" them together when optimizing just for size (-Os). Currently, we are not consistent about this and rely on a front-end to always set OptimizeForSize (-Os) if MinSize (-Oz) is on. Thus, there are 18 FIXME changes here that should be added as follow-on patches with regression tests. This patch is NFC-intended: it just replaces existing direct accesses of the attributes by the equivalent wrapper call.
  Differential Revision: http://reviews.llvm.org/D11734
  llvm-svn: 243994
* [SLP vectorizer]: Choose the best consecutive candidate to pair with a store instruction. (Wei Mi, 2015-07-30; 1 file, -7/+18)
  The patch changes SLPVectorizer::vectorizeStores to choose the immediate succeeding or preceding candidate for a store instruction when it has multiple consecutive candidates. In this way it has a better chance to find more slp vectorization opportunities.
  Differential Revision: http://reviews.llvm.org/D10445
  llvm-svn: 243666
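A hedged sketch of the selection idea in plain C++: among stores consecutive with the current one, prefer the immediate neighbour over a farther candidate. Types and names are illustrative, not the pass's data structures.

```cpp
#include <cstdint>
#include <vector>

struct StoreCandidate {
  int64_t ElementIndex; // position of the store within the consecutive run
};

// Return the index of an immediately adjacent candidate (distance of exactly
// one element, before or after), or -1 if none exists.
static int pickNeighbour(const std::vector<StoreCandidate> &Candidates,
                         int64_t CurrentIndex) {
  for (size_t I = 0; I < Candidates.size(); ++I)
    if (Candidates[I].ElementIndex == CurrentIndex + 1 ||
        Candidates[I].ElementIndex == CurrentIndex - 1)
      return static_cast<int>(I);
  return -1;
}
```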
* Fix -Wextra-semi warnings. (Hans Wennborg, 2015-07-22; 1 file, -1/+1)
  Patch by Eugene Zelenko!
  Differential Revision: http://reviews.llvm.org/D11400
  llvm-svn: 242930
* [PM/AA] Remove the last of the legacy update API from AliasAnalysis as part of simplifying its interface and usage in preparation for porting to work with the new pass manager. (Chandler Carruth, 2015-07-22; 1 file, -6/+1)
  Note that this will likely expose that we have dead arguments, members, and maybe even pass requirements for AA. I'll be cleaning those up in separate patches. This just zaps the actual update API.
  Differential Revision: http://reviews.llvm.org/D11325
  llvm-svn: 242881
* [PM/AA] Switch to an early-exit. NFC. This was split out of another change because the diff is *useless*. (Chandler Carruth, 2015-07-22; 1 file, -36/+35)
  I assure you, I just switched to early-return in this function. Cleanup in preparation for my next commit, as requested in code review!
  llvm-svn: 242880
* Create a wrapper pass for BlockFrequencyInfo. (Wei Mi, 2015-07-14; 1 file, -3/+3)
  This is useful when we want to do block frequency analysis conditionally (e.g. only in PGO mode) but don't want to add one more pass dependence.
  Patch by congh. Approved by dexonsmith.
  Differential Revision: http://reviews.llvm.org/D11196
  llvm-svn: 242248
* [LAA] Lift RuntimePointerCheck out of LoopAccessInfo, NFC (Adam Nemet, 2015-07-14; 1 file, -8/+9)
  I am planning to add more nested classes inside RuntimePointerCheck so all this triple-nesting would be hard to follow. Also rename it to RuntimePointerChecking (i.e. append 'ing').
  llvm-svn: 242218
* Avoid using Loop::getSubLoopsVector. (Benjamin Kramer, 2015-07-13; 1 file, -1/+1)
  Passes should never modify it, just use the const version. While there, reduce copying in LoopInterchange. No functional change intended.
  llvm-svn: 242041
* Move getStrideFromPointer and friends from LoopVectorize to VectorUtils (Hal Finkel, 2015-07-11; 1 file, -137/+0)
  The following functions are moved from the LoopVectorizer to VectorUtils:
  - getGEPInductionOperand
  - stripGetElementPtr
  - getUniqueCastUse
  - getStrideFromPointer
  These used to be static functions in LoopVectorize, but will also be used by the upcoming loop versioning LICM transformation.
  Patch by Ashutosh Nema!
  llvm-svn: 241980
* Renamed some uses of unroll to interleave in the vectorizer. (Tyler Nowicki, 2015-07-11; 1 file, -91/+98)
  llvm-svn: 241971
* [TTI] BasicTTIImpl assumes no vector registers (Jingyue Wu, 2015-07-10; 1 file, -3/+8)
  Summary: Following the discussion on r241884, it's more reasonable to assume that a target has no vector registers by default instead of letting every such target override getNumberOfRegisters. Therefore, this patch modifies BasicTTIImpl::getNumberOfRegisters to return 0 when Vector is true, and partially reverts r241884 which modified NVPTXTTIImpl::getNumberOfRegisters.
  It also fixes a performance bug in LoopVectorizer. Even if a target has no vector registers, vectorization may still help ILP. So, we need both checks to be false before disabling loop vectorization altogether.
  Reviewers: hfinkel
  Subscribers: llvm-commits, jholewinski
  Differential Revision: http://reviews.llvm.org/D11108
  llvm-svn: 241942
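A hedged sketch of the "both checks" bail-out described above, written against the TTI API of this era (getNumberOfRegisters took a bool then); the exact guard in the pass may differ.

```cpp
#include "llvm/Analysis/TargetTransformInfo.h"
using namespace llvm;

// Only give up on the loop vectorizer when the target has neither vector
// registers nor any interleaving headroom to exploit for ILP.
static bool nothingToGain(const TargetTransformInfo &TTI) {
  return TTI.getNumberOfRegisters(/*Vector=*/true) == 0 &&
         TTI.getMaxInterleaveFactor(/*VF=*/1) < 2;
}
```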