path: root/llvm/lib/Analysis
Commit message (Author, Date, Files changed, Lines -/+)
* [SLP] Load sorting should not try to sort things that aren't loads. (Michael Kuperstein, 2017-02-27, 1 file, -0/+5)
    We may get a VL where the first element is a load, but the others aren't.
    Trying to sort such VLs can only lead to sorrow.

    llvm-svn: 296411
* [ValueTracking] Don't do an unchecked shift in ComputeNumSignBits (Sanjoy Das, 2017-02-25, 1 file, -2/+21)
    Summary:
    Previously we used to return a bogus result, 0, for IR like `ashr %val, -1`.

    I've also added an assert checking that `ComputeNumSignBits` at least
    returns 1. That assert found an already checked in test case where we
    were returning a bad result for `ashr %val, -1`.

    Fixes PR32045.

    Reviewers: spatel, majnemer
    Reviewed By: spatel, majnemer
    Subscribers: efriedma, mcrosier, llvm-commits
    Differential Revision: https://reviews.llvm.org/D30311

    llvm-svn: 296273
* [InlineCost] Move the code in isGEPOffsetConstant to a lambda. (Easwaran Raman, 2017-02-25, 1 file, -13/+9)
    Differential revision: https://reviews.llvm.org/D30112

    llvm-svn: 296208
* Fix Indentation. NFCI (Xin Tong, 2017-02-24, 1 file, -2/+2)
    llvm-svn: 296169
* [ORE] Remove ORE.emit{{.+}} functions (Adam Nemet, 2017-02-23, 1 file, -66/+0)
    Last use was killed in my previous patch. The preferred way is now to
    construct the remark, pipe things to it and pass it to ORE.emit.

    llvm-svn: 296019
* [LAA] Remove unused LoopAccessReport (Adam Nemet, 2017-02-23, 1 file, -15/+0)
    The need for this was removed when I converted everything to use the
    opt-remark classes directly with the streaming interface.

    llvm-svn: 296017
* [ModuleSummaryAnalysis] Don't crash when referencing unnamed globals. (Davide Italiano, 2017-02-22, 1 file, -0/+6)
    Instead, just be conservative, as these are infrequent enough.
    Thanks to Peter Collingbourne for the discussion about this on IRC.

    llvm-svn: 295861
* OptDiag: Add const to some interfaces that don't modify anything. NFC (Justin Bogner, 2017-02-22, 1 file, -3/+3)
    This needed a const_cast for the dominator tree recalculation in
    OptimizationRemarkEmitter, but we do that all over the place already
    and it's safe.

    llvm-svn: 295812
* [ValueTracking] Make poison propagation more aggressive (Sanjoy Das, 2017-02-22, 1 file, -49/+3)
    Summary:
    Motivation: fix PR31181 without regression (the actual fix is still in
    progress). However, the actual content of PR31181 is not relevant here.

    This change makes poison propagation more aggressive in the following cases:

    1. poison * Val == poison, for any Val. In particular, this changes existing
       intentional and documented behavior in these two cases:
       a. Val is 0
       b. Val is 2^k * N
    2. poison << Val == poison, for any Val
    3. getelementptr is poison if any input is poison

    I think all of these are justified (and are axiomatically true in the new
    poison / undef model):

    1a: we need poison * 0 to be poison to allow transforms like these:

        A * (B + C) ==> A * B + A * C

    If poison * 0 were 0 then the above transform could not be allowed, since
    e.g. we could have A = poison, B = 1, C = -1, making the LHS

        poison * (1 + -1) = poison * 0 = 0

    and the RHS

        poison * 1 + poison * -1 = poison + poison = poison

    1b: we need e.g. poison * 4 to be poison since we want to allow

        A * 4 ==> A + A + A + A

    If poison * 4 were a value with all of its bits poison except the last four,
    then we'd not be able to do this transform, since if A were poison the LHS
    would only be "partially" poison while the RHS would be "full" poison.

    2: Same reasoning as (1b); we'd like the following kinds of transforms to
    be legal:

        A << 1 ==> A + A

    Reviewers: majnemer, efriedma
    Subscribers: mcrosier, llvm-commits
    Differential Revision: https://reviews.llvm.org/D30185

    llvm-svn: 295809
* [ValueTracking] clang-format a section I'm about to touch; NFC (Sanjoy Das, 2017-02-21, 1 file, -64/+64)
    (Whitespace only change)

    llvm-svn: 295690
* [InstSimplify] add nsw/nuw (xor X, signbit), signbit --> X (Sanjay Patel, 2017-02-18, 1 file, -1/+11)
    The change to InstCombine in:
    https://reviews.llvm.org/D29729
    ...exposes this missing fold in InstSimplify, so adding this first to
    avoid a regression.

    llvm-svn: 295573
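    A rough illustration of the arithmetic identity behind the fold (not part
    of the commit; the nsw/nuw requirement is an IR-level detail not shown):

        #include <cassert>
        #include <cstdint>

        // XOR with the sign bit flips only the top bit; adding the sign bit
        // flips it back (mod 2^32), so the two operations cancel out.
        int main() {
          const uint32_t SignBit = 0x80000000u;
          for (uint32_t X : {0u, 1u, 0x7fffffffu, 0x80000000u, 0xdeadbeefu})
            assert(((X ^ SignBit) + SignBit) == X);
          return 0;
        }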
* Refactor instruction simplification code in visitors. NFC. (Easwaran Raman, 2017-02-18, 1 file, -88/+67)
    Several visitors check whether the operands of the instruction are
    constants, either directly or after looking them up in SimplifiedValues,
    check whether the result is a constant, and update the SimplifiedValues
    map. This refactoring splits that into a common function that does the
    checking of the operands and the updating of the SimplifiedValues table,
    and an instruction-specific part that is implemented by each instruction
    visitor as a lambda and passed to the common function.

    Differential revision: https://reviews.llvm.org/D30104

    llvm-svn: 295552
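    A minimal sketch of that pattern (illustrative names and signatures, not
    the actual InlineCost code): a shared helper looks up the operands in the
    cache and records the folded result, while each visitor supplies only the
    operation-specific folding as a lambda.

        #include <functional>
        #include <map>
        #include <optional>

        struct Value {};                       // stand-in for llvm::Value
        using Constant = long;                 // stand-in for llvm::Constant*

        static std::map<const Value *, Constant> SimplifiedValues;

        // Common part: fetch constant operands from the cache, run the
        // instruction-specific fold, and cache the result if it folded.
        static bool simplifyBinOp(
            const Value *Res, const Value *LHS, const Value *RHS,
            const std::function<std::optional<Constant>(Constant, Constant)> &Fold) {
          auto L = SimplifiedValues.find(LHS);
          auto R = SimplifiedValues.find(RHS);
          if (L == SimplifiedValues.end() || R == SimplifiedValues.end())
            return false;
          if (std::optional<Constant> C = Fold(L->second, R->second)) {
            SimplifiedValues[Res] = *C;        // record the folded constant
            return true;
          }
          return false;
        }

    A visitor for, say, an add would then call
    simplifyBinOp(&I, Op0, Op1, [](Constant A, Constant B) { return A + B; }).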
* OptDiag: Decouple backend diagnostics from debug info metadata (Justin Bogner, 2017-02-18, 1 file, -9/+8)
    This creates and uses a DiagnosticLocation type rather than using DebugLoc
    for this purpose in the backend diagnostics. This is NFC for now, but will
    allow us to create locations for diagnostics without having to create new
    metadata nodes when we don't have a DILocation.

    llvm-svn: 295519
* [LAA] Remove unused code (NFC) (Matthew Simpson, 2017-02-17, 1 file, -5/+0)
    llvm-svn: 295493
* AssumptionCache: Disable the verifier by default, move it behind a hidden cl::opt and verify from releaseMemory(). (Peter Collingbourne, 2017-02-15, 1 file, -5/+15)
    This is a short term solution to the problem that many passes currently
    fail to update the assumption cache. In the long term the verifier should
    not be controllable with a flag. We should either fix all passes to
    correctly update the assumption cache and enable the verifier
    unconditionally, or somehow arrange for the assumption list to be updated
    automatically by passes.

    Differential Revision: https://reviews.llvm.org/D30003

    llvm-svn: 295236
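    A hidden cl::opt of the kind described might look roughly like this (the
    flag name and description are illustrative, not necessarily what the
    patch uses):

        #include "llvm/Support/CommandLine.h"
        using namespace llvm;

        // Off by default and hidden from -help; intended for verification runs only.
        static cl::opt<bool> VerifyAssumptionCache(
            "verify-assumption-cache", cl::Hidden,
            cl::desc("Enable verification of assumption cache"),
            cl::init(false));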
* [LazyBFI] Fix typos (Adam Nemet, 2017-02-14, 1 file, -1/+1)
    llvm-svn: 295073
* [SCEV] Cache results during GetMinTrailingZeros query (Igor Laevsky, 2017-02-14, 1 file, -8/+22)
    Differential Revision: https://reviews.llvm.org/D29759

    llvm-svn: 295060
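    The general shape of such a query cache, with illustrative names (the real
    code presumably keys the map on the SCEV expression node):

        #include <cstdint>
        #include <unordered_map>

        struct SCEV;   // stand-in for the expression node type

        static std::unordered_map<const SCEV *, uint32_t> MinTrailingZerosCache;

        uint32_t computeMinTrailingZeros(const SCEV *S);  // original recursive computation

        // Memoized wrapper: shared sub-expressions are queried many times, so
        // remember the answer per expression node instead of recomputing it.
        uint32_t getMinTrailingZeros(const SCEV *S) {
          auto It = MinTrailingZerosCache.find(S);
          if (It != MinTrailingZerosCache.end())
            return It->second;
          uint32_t Result = computeMinTrailingZeros(S);
          MinTrailingZerosCache[S] = Result;
          return Result;
        }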
* [ValueTracking] use nonnull argument attribute to eliminate null checks (Sanjay Patel, 2017-02-12, 1 file, -5/+17)
    Enhancing value tracking's analysis of null-ness was suggested in D27855,
    so here's a first attempt at that.

    This is part of solving:
    https://llvm.org/bugs/show_bug.cgi?id=28430

    Differential Revision: https://reviews.llvm.org/D28204

    llvm-svn: 294897
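    In source terms, this is the kind of check that can now be folded away (a
    hedged C++ illustration; the IR-level mechanism is the nonnull parameter
    attribute):

        // Clang lowers the attribute to the IR 'nonnull' parameter attribute,
        // so value tracking can prove the comparison below is always false.
        __attribute__((nonnull)) int firstElement(int *p) {
          if (p == nullptr)     // provably false given the attribute
            return -1;
          return *p;
        }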
* [LV/LoopAccess] Check statically if an unknown dependence distance can be proven larger than the loop-count (Dorit Nuzman, 2017-02-12, 1 file, -6/+78)
    This fixes PR31098: Try to statically resolve data dependences whose
    compile-time-unknown distance can be proven larger than the loop-count,
    instead of resorting to runtime dependence checking (which is not always
    possible).

    For vectorization it is sufficient to prove that the dependence distance
    is >= VF; but in some cases we can prune unknown dependence distances
    early, even before selecting the VF, and without a runtime test, by
    comparing the distance against the loop iteration count. Since the
    vectorized code will be executed only if LoopCount >= VF, proving
    distance >= LoopCount also guarantees that distance >= VF. This check is
    also equivalent to the Strong SIV Test.

    Reviewers: mkuper, anemet, sanjoy
    Differential Revision: https://reviews.llvm.org/D28044

    llvm-svn: 294892
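    A made-up C++ example of the case this targets: the dependence distance n
    is unknown at compile time, but it equals the trip count, so the accesses
    cannot overlap within the loop and no runtime check is needed.

        // Distance between a[i] (read) and a[i + n] (write) is n, which is
        // >= the trip count n, hence >= VF for any vectorization factor that
        // will actually execute the vector loop.
        void shiftByTripCount(int *a, long n) {
          for (long i = 0; i < n; ++i)
            a[i + n] = a[i] + 1;
        }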
* IR: Function summary extensions for whole-program devirtualization pass. (Peter Collingbourne, 2017-02-10, 1 file, -21/+104)
    The summary information includes all uses of llvm.type.test and
    llvm.type.checked.load intrinsics that can be used to devirtualize calls,
    including any constant arguments for virtual constant propagation.

    Differential Revision: https://reviews.llvm.org/D29734

    llvm-svn: 294795
* [PM/LCG] Teach LCG to support spurious reference edges. (Chandler Carruth, 2017-02-09, 1 file, -1/+8)
    Somewhat amazingly, this only requires teaching it to clean them up when
    deleting a dead function from the graph. And we already have exactly the
    necessary data structures to do that in the parent RefSCCs.

    This allows ArgPromote to work in a much simpler way by merely letting
    reference edges linger in the graph after the causing IR is deleted. We
    will clean up these edges when we run any function pass over the IR, but
    don't remove them eagerly.

    This avoids all of the quadratic update issues both in the current pass
    manager and in my previous attempt with the new pass manager.

    Differential Revision: https://reviews.llvm.org/D29579

    llvm-svn: 294663
* [PM/LCG] Teach the LazyCallGraph how to replace a function without disturbing the graph or having to update edges. (Chandler Carruth, 2017-02-09, 2 files, -194/+216)
    This is motivated by porting argument promotion to the new pass manager.
    Because of how LLVM IR Function objects work, in order to change their
    signature a new object needs to be created. This is efficient and
    straightforward in the IR but previously was very hard to implement in
    LCG. We could easily replace the function a node in the graph represents.
    The challenging part is how to handle updating the edges in the graph.

    LCG previously used an edge to a raw function to represent a node that
    had not yet been scanned for calls and references. This was the core of
    its laziness. However, that model causes this kind of update to be very
    hard:
    1) The keys to lookup an edge need to be `Function*`s that would all need
       to be updated when we update the node.
    2) There will be some unknown number of edges that haven't transitioned
       from `Function*` edges to `Node*` edges.

    All of this complexity isn't necessary. Instead, we can always build a
    node around any function, always pointing edges at it and always using it
    as the key to lookup an edge. To maintain the laziness, we need to sink
    the *edges* of a node into a secondary object and explicitly model
    transitioning a node from empty to populated by scanning the function.
    This design seems much cleaner in a number of ways, but importantly there
    is now exactly *one* place where the `Function*` has to be updated!

    Some other cleanups that fall out of this include having something to
    model the *entry* edges more accurately. Rather than hand rolling parts
    of the node in the graph itself, we have an explicit `EdgeSequence` object
    that gives us exactly the functionality needed. We also have a consistent
    place to define the edge iterators and can use them for both the entry
    edges and the internal edges of the graph.

    The API used to model the separation between a node and its edges is
    intentionally very thin as most clients are expected to deal with nodes
    that have populated edges. We model this exactly as an optional does with
    an additional method to populate the edges when that is a reasonable
    thing for a client to do. This is based on API design suggestions from
    Richard Smith and David Blaikie; credit goes to them for helping pick how
    to model this without it being either too explicit or too implicit.

    The patch is somewhat noisy due to shifting around iterator types and new
    syntax for walking the edges of a node, but most of the functionality
    change is in the `Edge`, `EdgeSequence`, and `Node` types.

    Differential Revision: https://reviews.llvm.org/D29577

    llvm-svn: 294653
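    A minimal sketch (not the real LCG API; all names are illustrative) of the
    node/edge split described above: the node is always the lookup key and the
    only holder of the Function pointer, while its edges live in an
    EdgeSequence that is populated the first time the function is scanned.

        #include <optional>
        #include <vector>

        struct Function;                       // stand-in for llvm::Function
        class Node;

        struct Edge { Node *Target; };         // simplified: no call/ref distinction

        class EdgeSequence {
          std::vector<Edge> Edges;
        public:
          void add(Node *T) { Edges.push_back({T}); }
          auto begin() const { return Edges.begin(); }
          auto end() const { return Edges.end(); }
        };

        class Node {
          Function *F;                         // the one place to update on replacement
          std::optional<EdgeSequence> Edges;   // empty until the function body is scanned
        public:
          explicit Node(Function *F) : F(F) {}
          void replaceFunction(Function *NewF) { F = NewF; }  // edges stay valid
          EdgeSequence &populate() {
            if (!Edges) {
              Edges.emplace();
              // ... scan F's body and Edges->add(...) each callee / reference ...
            }
            return *Edges;
          }
        };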
* Drop graph_ prefix (Daniel Berlin, 2017-02-09, 1 file, -1/+1)
    llvm-svn: 294621
* GraphTraits: Add range versions of graph traits functions (graph_nodes, graph_children, inverse_graph_nodes, inverse_graph_children). (Daniel Berlin, 2017-02-09, 1 file, -4/+1)
    Summary:
    Convert all obvious node_begin/node_end and child_begin/child_end pairs
    to range based for.

    Sending for review in case someone has a good idea how to make
    graph_children able to be inferred. It looks like it would require
    changing GraphTraits to be two argument or something. I presume inference
    does not happen because it would have to check every GraphTraits in the
    world to see if the noderef types matched.

    Note: This change was 3-staged with clang as well, which uses
    Dominators/etc from LLVM.

    Reviewers: chandlerc, tstellarAMD, dblaikie, rsmith
    Subscribers: arsenm, llvm-commits, nhaehnle
    Differential Revision: https://reviews.llvm.org/D29767

    llvm-svn: 294620
* LVI: Fix use-of-uninitialized-value after r294463 (Vitaly Buka, 2017-02-09, 1 file, -1/+1)
    BlockValueStack can be reallocated, making reference e invalid.

    llvm-svn: 294572
* LVI: Add a per-value worklist limit to LazyValueInfo. (Daniel Berlin, 2017-02-08, 1 file, -6/+36)
    Summary:
    LVI is now depth first, which is optimal for iteration strategy in terms
    of work per call. However, the way the results get cached means it can
    still go very badly N^2 or worse right now.

    The overdefined cache is per-block, because LVI wants to try to get
    different results for the same name in different blocks (IE solve the
    problem PredicateInfo solves). This means even if we discover a value is
    overdefined after going very deep, it doesn't cache this information,
    causing it to end up trying to rediscover it again and again. The same is
    true for values along the way. In practice, overdefined anywhere should
    mean overdefined everywhere (this is how, for example, SCCP works).

    Until we get around to reworking the overdefined cache, we need to limit
    the worklist size we process. Note that permanently reverting the DFS
    strategy exploration seems the wrong strategy (temporarily seems fine if
    we really want). BFS is clearly the wrong approach, it just gets luckier
    on some testcases. It's also very hard to design an effective throttle
    for BFS. For DFS, the throttle is directly related to the depth of the
    CFG. So really deep CFGs will get cut off, smaller ones will not. As the
    CFG simplifies, you get better results. In BFS, the limit is related to
    the fan-out times average block size, which is harder to reason about or
    make good choices for.

    A bug is being filed about the overdefined cache, but it will require
    major surgery to fix it (plumbing PredicateInfo through CVP or LVI).

    Note: I did not make this number configurable because I'm not sure anyone
    really needs to tweak this knob. We run CVP 3 times. On the testcases I
    have, the slow ones happen in the middle, where CVP is doing cleanup work
    other things are effective at. Over the course of 3 runs, we don't seem
    to have any real loss of performance.

    I haven't gotten a minimized testcase yet, but just imagine in your head
    a testcase where, going *up* the CFG, you have branches, one of which
    leads 50000 blocks deep, and the other, to something where the answer is
    overdefined immediately. BFS would discover the overdefined faster than
    DFS, but do more work to do so. In practice, the right answer is "once
    DFS discovers overdefined for a value, stop trying to get more info about
    that value" (and so, DFS would normally cache the overdefined results for
    every value it passed through in those 50k blocks, and never do that work
    again. But it doesn't, because of the naming problem).

    Reviewers: chandlerc, djasper
    Subscribers: llvm-commits
    Differential Revision: https://reviews.llvm.org/D29715

    llvm-svn: 294463
* Revert r293017 and fix the actual underlying issue. (Chandler Carruth, 2017-02-07, 2 files, -2/+3)
    The patch committed in r293017, as discussed on the list, doesn't really
    make sense but was causing an actual issue to go away.

    The issue turns out to be that in one place the extra template arguments
    were dropped from the OuterAnalysisManagerProxy. This in turn caused the
    types used in one set of places to access the key to be completely
    different from the types used in another set of places for both Loop and
    CGSCC cases where there are extra arguments.

    I have literally no idea how anything seemed to work with this bug in
    place. It blows my mind. But it did, except for mingw64 in a DLL build.

    I've added a really handy static assert that helps ensure we don't break
    this in the future. It immediately diagnoses the issue with a compile
    failure and a very clear error message. Much better than staring at
    backtraces on a build bot. =]

    llvm-svn: 294267
* [LVI] Switch from BFS to DFS exploration order (Philip Reames, 2017-02-07, 1 file, -14/+16)
    This patch changes the order in which LVI explores previously unexplored
    paths.

    Previously, the code used a BFS strategy where each unexplored input was
    added to the search queue before any of them were explored. This has the
    effect of causing all inputs to be explored before returning to
    re-evaluate the merge point (non-local or phi node). This has the
    unfortunate property of doing redundant work if one of the inputs to the
    merge is found to be overdefined (i.e. unanalysable). If any input is
    overdefined, the result of the merge will be too, regardless of the
    values of the other inputs.

    The new code uses a DFS strategy where we re-evaluate the merge after
    evaluating each input. If we discover an overdefined input, we
    immediately return without exploring the other inputs.

    We have reports of large (4-10x) improvements of compile time with this
    patch and some reports of more precise analysis results as well. See the
    review discussion for details. The original motivating case was pr10584.

    Differential Revision: https://reviews.llvm.org/D28190

    llvm-svn: 294264
* [PM/LCG] Fix the no-asserts build after r294227. Sorry for the noise. (Chandler Carruth, 2017-02-06, 1 file, -0/+2)
    llvm-svn: 294235
* [PM/LCG] Remove the lazy RefSCC formation from the LazyCallGraph during iteration. (Chandler Carruth, 2017-02-06, 2 files, -171/+118)
    The lazy formation of RefSCCs isn't really the most important part of the
    laziness here -- that has to do with walking the functions themselves --
    and isn't essential to maintain. Originally, there were incremental
    update algorithms that relied on updates happening predominantly near the
    most recent RefSCC formed, but those have been replaced with ones that
    have much tighter general case bounds at this point. We do still perform
    asserts that only scale well due to this incrementality, but those are
    easy to place behind EXPENSIVE_CHECKS.

    Removing this simplifies the entire analysis by having a single up-front
    step that builds all of the RefSCCs in a direct Tarjan walk. We can even
    easily replace this with other or better algorithms at will and with much
    less confusion now that there is no iterator-based incremental logic
    involved. This removes a lot of complexity from LCG.

    Another advantage of moving in this direction is that it simplifies
    testing the system substantially as we no longer have to worry about
    observing and mutating the graph half-way through the RefSCC formation.

    We still need a somewhat special iterator for RefSCCs because we want the
    iterator to remain stable in the face of graph updates. However, this now
    merely involves relative indexing to the current RefSCC's position in the
    sequence, which isn't too hard.

    Differential Revision: https://reviews.llvm.org/D29381

    llvm-svn: 294227
* [ValueTracking] emit a remark when we detect a conflicting assumption (PR31809) (Sanjay Patel, 2017-02-06, 2 files, -14/+27)
    This is a follow-up to D29395 where we try to be good citizens and let
    the user know that we've probably gone off the rails.

    This should allow us to resolve:
    https://llvm.org/bugs/show_bug.cgi?id=31809

    Differential Revision: https://reviews.llvm.org/D29404

    llvm-svn: 294208
* [SCEV] limit recursion depth and operands number in getAddExpr (Daniil Fukalov, 2017-02-06, 1 file, -19/+39)
    For a quite big function with source like

        %add = add nsw i32 %mul, %conv
        %mul1 = mul nsw i32 %add, %conv
        %add2 = add nsw i32 %mul1, %add
        %mul3 = mul nsw i32 %add2, %add
        ; repeat a couple of thousand times

    that can be produced by loop unrolling, getAddExpr() tries to recursively
    construct the SCEV and runs for an almost unbounded time.

    Added a recursion depth restriction (with a new parameter to set it).

    Reviewers: sanjoy
    Subscribers: hfinkel, llvm-commits, mzolotukhin
    Differential Revision: https://reviews.llvm.org/D28158

    llvm-svn: 294181
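    The general shape of such a cutoff, with illustrative names (the actual
    limit and its controlling parameter come from the patch's new option):

        #include <vector>

        struct SCEV;                                // stand-in for the expression type

        constexpr unsigned MaxAddExprDepth = 32;    // illustrative value only

        const SCEV *makePlainAddExpr(const std::vector<const SCEV *> &Ops);  // no folding
        const SCEV *foldAddOperands(std::vector<const SCEV *> &Ops, unsigned Depth);

        const SCEV *getAddExpr(std::vector<const SCEV *> &Ops, unsigned Depth = 0) {
          // Past the depth limit, stop simplifying and just build the node;
          // this bounds the otherwise explosive work on deep add/mul chains.
          if (Depth > MaxAddExprDepth)
            return makePlainAddExpr(Ops);
          // Nested getAddExpr/getMulExpr calls made while folding get Depth + 1.
          return foldAddOperands(Ops, Depth + 1);
        }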
* [SLP] Make sortMemAccesses explicitly return an error. NFC. (Michael Kuperstein, 2017-02-03, 1 file, -12/+10)
    llvm-svn: 294029
* [SLP] Use SCEV to sort memory accesses. (Michael Kuperstein, 2017-02-03, 1 file, -17/+34)
    This generalizes memory access sorting to use differences between SCEVs,
    instead of relying on constant offsets. That allows us to properly do SLP
    vectorization of non-sequentially ordered loads within loop bodies.

    Differential Revision: https://reviews.llvm.org/D29425

    llvm-svn: 294027
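    A made-up loop body of the kind this enables: the four loads are adjacent
    in memory but not in program order, and their pointers vary with i, so
    comparing SCEV differences (rather than constant offsets from a fixed
    base) is what recovers their relative positions 2, 0, 3, 1.

        void sumGroups(const int *a, int *out, int n) {
          for (int i = 0; i < n; ++i)
            out[i] = a[4 * i + 2] + a[4 * i] + a[4 * i + 3] + a[4 * i + 1];
        }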
* Revert "[ThinLTO] Add an auto-hide feature"Mehdi Amini2017-02-031-8/+4
| | | | | | | | | This reverts commit r293970. After more discussion, this belongs to the linker side and there is no added value to do it at this level. llvm-svn: 293993
* [ThinLTO] Add an auto-hide feature (Mehdi Amini, 2017-02-03, 1 file, -4/+8)
    When a symbol is not exported outside of the DSO, it can be hidden.
    Usually we try to internalize as much as possible, but it is not always
    possible; for instance, a symbol can be referenced outside of the LTO
    unit, or there can be cross-module references in ThinLTO.

    This is a recommit of r293912 after fixing build failures, and a recommit
    of r293918 after fixing LLD tests.

    Differential Revision: https://reviews.llvm.org/D28978

    llvm-svn: 293970
* Revert "[ThinLTO] Add an auto-hide feature"Mehdi Amini2017-02-021-8/+4
| | | | | | This reverts commit r293918, one lld test does not pass. llvm-svn: 293961
* [PGO] internal option cleanups (Xinliang David Li, 2017-02-02, 1 file, -1/+12)
    1. Added comments for options.
    2. Added missing option cl::desc field.
    3. Unified function filter option for graph viewing. Now PGO
       count/raw-counts share the same filter option: -view-bfi-func-name=.

    llvm-svn: 293938
* [PGO] make graph view internal options available for all builds (Xinliang David Li, 2017-02-02, 1 file, -10/+0)
    Differential Revision: https://reviews.llvm.org/D29259

    llvm-svn: 293921
* [ThinLTO] Add an auto-hide feature (Mehdi Amini, 2017-02-02, 1 file, -4/+8)
    When a symbol is not exported outside of the DSO, it can be hidden.
    Usually we try to internalize as much as possible, but it is not always
    possible; for instance, a symbol can be referenced outside of the LTO
    unit, or there can be cross-module references in ThinLTO.

    This is a recommit of r293912 after fixing build failures.

    Differential Revision: https://reviews.llvm.org/D28978

    llvm-svn: 293918
* Revert "[ThinLTO] Add an auto-hide feature"Mehdi Amini2017-02-021-8/+4
| | | | | | This reverts r293912, bots are broken. llvm-svn: 293914
* [ThinLTO] Add an auto-hide feature (Mehdi Amini, 2017-02-02, 1 file, -4/+8)
    When a symbol is not exported outside of the DSO, it can be hidden.
    Usually we try to internalize as much as possible, but it is not always
    possible; for instance, a symbol can be referenced outside of the LTO
    unit, or there can be cross-module references in ThinLTO.

    Differential Revision: https://reviews.llvm.org/D28978

    llvm-svn: 293912
* [JumpThread] Enhance finding partial redundant loads by continuing scanning single predecessor (Jun Bum Lim, 2017-02-02, 1 file, -1/+5)
    Summary:
    While scanning predecessors to find an available loaded value, if the
    predecessor has a single predecessor, we can continue scanning through
    the single predecessor.

    Reviewers: mcrosier, rengolin, reames, davidxl, haicheng
    Reviewed By: rengolin
    Subscribers: zzheng, llvm-commits
    Differential Revision: https://reviews.llvm.org/D29200

    llvm-svn: 293896
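    A made-up example of the shape this helps: the load in the innermost block
    is partially redundant with the load in the entry block, but finding it
    requires continuing the scan through single-predecessor blocks rather
    than stopping after one level.

        void use(int);

        void scanThroughSinglePred(int *p, bool c1, bool c2) {
          int a = *p;       // *p is available from here on
          use(a);
          if (c1) {
            if (c2) {
              use(*p);      // redundant with the entry-block load; the walk from
                            // this block passes through single-predecessor blocks
            }
          }
        }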
* [LV] Also port failure remarks to new OptimizationRemarkEmitter API (Adam Nemet, 2017-02-02, 1 file, -1/+3)
    llvm-svn: 293866
* [ValueTracking] remove a FIXME for something we don't want to do; NFC (Sanjay Patel, 2017-02-01, 1 file, -4/+0)
    The comment was added with:
    https://reviews.llvm.org/rL293773
    ...but there would be a cost to implement this and possibly no payoff.

    llvm-svn: 293823
* [LV] Move interleaved access helper functions to VectorUtils (NFC) (Matthew Simpson, 2017-02-01, 1 file, -0/+85)
    This patch moves some helper functions related to interleaved access
    vectorization out of LoopVectorize.cpp and into VectorUtils.cpp. We would
    like to use these functions in a follow-on patch that improves
    interleaved load and store lowering in (ARM/AArch64)ISelLowering.cpp.
    One of the functions was already duplicated there and has been removed.

    Differential Revision: https://reviews.llvm.org/D29398

    llvm-svn: 293788
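    For reference, the canonical shape of an interleaved access group that
    these helpers describe (a made-up stride-2 example):

        // a[2*i] and a[2*i + 1] form a stride-2 interleave group: the
        // vectorized loop loads a wide contiguous block and de-interleaves
        // the even/odd lanes (with shuffles, or ld2/vld2-style instructions
        // on AArch64/ARM).
        void splitEvenOdd(const int *a, int *evens, int *odds, int n) {
          for (int i = 0; i < n; ++i) {
            evens[i] = a[2 * i];
            odds[i]  = a[2 * i + 1];
          }
        }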
* [ValueTracking] avoid crashing from bad assumptions (PR31809) (Sanjay Patel, 2017-02-01, 1 file, -0/+17)
    A program may contain llvm.assume info that disagrees with other
    analysis. This may be caused by UB in the program, so we must not crash
    because of that.

    As noted in the code comments:
    https://llvm.org/bugs/show_bug.cgi?id=31809
    ...we can do better, but this at least avoids the assert/crash in the
    bug report.

    Differential Revision: https://reviews.llvm.org/D29395

    llvm-svn: 293773
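    A C++ reduction of the general situation (hedged; the bug report contains
    the actual IR): two assumptions that cannot both hold. Such code has
    undefined behavior, but the compiler must still not assert while
    analyzing it.

        int contradictory(int x) {
          __builtin_assume(x == 1);   // lowers to llvm.assume
          __builtin_assume(x == 2);   // conflicts with the first assumption
          return x;                   // value tracking sees impossible known bits here
        }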
* [SCEV] Simplify/generalize howFarToZero solving. (Eli Friedman, 2017-01-31, 1 file, -59/+9)
    Make SolveLinEquationWithOverflow take the start as a SCEV, so we can
    solve more cases. With that implemented, get rid of the special case for
    powers of two.

    The additional functionality probably isn't particularly useful, but it
    might help a little for certain cases involving pointer arithmetic.

    Differential Revision: https://reviews.llvm.org/D28884

    llvm-svn: 293576
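    For context, the congruence being solved is roughly the following (a
    sketch of the standard construction; the exact handling in SCEV may
    differ in details):

        \[
          A X \equiv B \pmod{2^{n}}, \qquad A = 2^{k} A' \quad (A' \text{ odd})
        \]

    A solution exists only when 2^k divides B, and then

        \[
          X \equiv \frac{B}{2^{k}} \, (A')^{-1} \pmod{2^{\,n-k}},
        \]

    using the multiplicative inverse of the odd part of A; this covers the
    power-of-two stride case without special handling.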
* NVPTX: Refactor NVPTXInferAddressSpaces to check TTI (Matt Arsenault, 2017-01-30, 1 file, -0/+4)
    Add a new TTI hook for getting the generic address space value.

    llvm-svn: 293563
* [ValueTracking] clean up lookThroughCast; NFCI (Sanjay Patel, 2017-01-29, 1 file, -42/+48)
    1. Use auto with dyn_cast.
    2. Don't use else after return.
    3. Convert chain of 'else if' to switch.
    4. Improve variable names.

    llvm-svn: 293432
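    The first two points, illustrated on a hypothetical LLVM-style helper
    (not the actual lookThroughCast body):

        #include "llvm/IR/Instructions.h"
        using namespace llvm;

        // 1. 'auto' with dyn_cast avoids repeating the type name.
        // 2. No 'else' after a 'return'.
        static Value *stripOneCast(Value *V) {
          if (auto *CI = dyn_cast<CastInst>(V))
            return CI->getOperand(0);
          return V;   // not a cast: nothing to look through
        }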