path: root/llvm/test/Analysis/ScalarEvolution
Commit message | Author | Date | Files | Lines
* [SCEV] Fix PR25369 (Sanjoy Das, 2015-11-02, 1 file, -0/+78)
  Have `getConstantEvolutionLoopExitValue` work correctly with multiple entry loops. As far as I can tell, `getConstantEvolutionLoopExitValue` never did the right thing for multiple entry loops; and before r249712 it would silently return an incorrect answer. r249712 changed SCEV to fail an assert on a multiple entry loop, and this change fixes the underlying issue.
  llvm-svn: 251770
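For context, a minimal C++ sketch of a multiple entry loop (my illustration, not the committed test; the names are made up): the block labelled `body` can be reached both through the loop header and through the `goto`, which is the situation `getConstantEvolutionLoopExitValue` previously mishandled.

```
// Sketch only: a loop body with two entry edges.  Asking SCEV for a
// constant loop-exit value of such a region is what this fix addresses.
int f(bool enter_sideways) {
  int i = 0, sum = 0;
  if (enter_sideways)
    goto body;                  // second entry edge into the loop body
  for (i = 0; i < 4; ++i) {
  body:
    sum += 1;
  }
  return sum;
}
```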
* [SCEV] Don't create SCEV expressions that break LCSSA (Sanjoy Das, 2015-10-31, 1 file, -0/+24)
  Prevent `createNodeFromSelectLikePHI` from creating SCEV expressions that break LCSSA. A better fix for the same issue is to teach SCEVExpander to not break LCSSA by inserting PHI nodes at appropriate places. That's planned for the future. Fixes PR25360.
  llvm-svn: 251756
* [SCEV] Generalize the SCEV algorithm for creating expressions for PHI nodes (Silviu Baranga, 2015-10-30, 1 file, -0/+59)
  Summary: When forming expressions for phi nodes having an incoming value from outside the loop A and a value coming from the previous iteration B, we were forming an AddRec if:
  - B was an AddRec
  - the value A was equal to the value for B at iteration -1 (or equal to the value of B shifted by one iteration, at iteration 0)
  In this case, we were computing the expression to be the expression of B, shifted by one iteration.
  This change generalizes the logic above by removing the restriction that B needs to be an AddRec. For this we introduce two expression rewriters that allow us to
  - shift an expression by one iteration
  - get the value of an expression at iteration 0
  This allows us to get SCEV expressions for PHI nodes when these expressions are not AddRecExprs.
  Reviewers: sanjoy
  Subscribers: llvm-commits, sanjoy
  Differential Revision: http://reviews.llvm.org/D14175
  llvm-svn: 251700
* [SCEV] Compute max backedge count for loops with "shift ivs" (Sanjoy Das, 2015-10-28, 1 file, -0/+164)
  This teaches SCEV to compute //max// backedge taken counts for loops like
      for (int i = k; i != 0; i >>>= 1)
        whatever();
  SCEV cannot yet represent the exact backedge count for these loops, and this patch does not change that. This is really geared towards teaching SCEV that loops like the above are *not* infinite.
  llvm-svn: 251558
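For context (my example, not from the commit): `>>>=` above is logical-shift notation; the C++ equivalent is an unsigned IV halved each iteration. Even when the exact trip count is unknown, it is bounded by the bit width, which is the "max backedge taken count" the change computes.

```
#include <cstdint>

// Sketch: whatever the unknown start value k is, a 32-bit value shifted
// right by one each iteration reaches zero after at most 32 iterations,
// so the loop is provably finite even though its exact count is unknown.
uint32_t count_halvings(uint32_t k) {
  uint32_t n = 0;
  for (uint32_t i = k; i != 0; i >>= 1)
    ++n;                        // runs at most 32 times
  return n;
}
```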
* [SCEV] Commute zero extends through <nuw> additions (Sanjoy Das, 2015-10-22, 1 file, -0/+37)
  llvm-svn: 251052
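An illustrative note, not part of the commit: the rewrite is justified because `zext(a + b)` equals `zext(a) + zext(b)` whenever the narrow addition cannot wrap unsigned (the `<nuw>` case). A minimal C++ check of that identity:

```
#include <cassert>
#include <cstdint>

// Sketch: if the 32-bit addition a + b does not wrap (the <nuw> case),
// widening the sum equals summing the widened operands -- the identity
// behind commuting zext through a <nuw> add.
void check(uint32_t a, uint32_t b) {
  if (b <= UINT32_MAX - a) {                         // a + b cannot wrap
    uint64_t widened_sum = static_cast<uint64_t>(a + b);
    uint64_t sum_of_widened =
        static_cast<uint64_t>(a) + static_cast<uint64_t>(b);
    assert(widened_sum == sum_of_widened);
  }
}
```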
* [SCEV] Commute sign extends through nsw additions (Sanjoy Das, 2015-10-22, 1 file, -0/+41)
  Summary: Depends on D13613.
  Reviewers: atrick, hfinkel, reames, nlewycky
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D13685
  llvm-svn: 251049
* [SCEV] Mark AddExprs as nsw or nuw if legal (Sanjoy Das, 2015-10-22, 3 files, -6/+50)
  Summary: This uses `ScalarEvolution::getRange` and not potentially control dependent `nsw` and `nuw` bits on the arithmetic instruction.
  Reviewers: atrick, hfinkel, nlewycky
  Subscribers: llvm-commits, sanjoy
  Differential Revision: http://reviews.llvm.org/D13613
  llvm-svn: 251048
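A hedged example (mine, not from the patch) of the range-based reasoning this enables: when operand ranges are small, the add provably cannot wrap even if the IR instruction carries no nsw/nuw flags.

```
#include <cstdint>

// Sketch: both operands are masked into [0, 255], so the sum lies in
// [0, 510].  A range-based analysis can mark the resulting add expression
// nuw/nsw regardless of the flags on the original instruction.
uint32_t sum_of_bytes(uint32_t x, uint32_t y) {
  uint32_t a = x & 0xff;
  uint32_t b = y & 0xff;
  return a + b;   // provably no wrap, by the operand ranges alone
}
```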
* Revert "Revert "This patch builds on top of D13378 to handle constant ↵Mehdi Amini2015-10-071-0/+51
| | | | | | | | | | condition."" This reverts commit r249528 and reapply r249431. The fix for the fallout has been commited in r249575. From: Mehdi Amini <mehdi.amini@apple.com> llvm-svn: 249581
* Revert "This patch builds on top of D13378 to handle constant condition."James Molloy2015-10-071-51/+0
| | | | | | This reverts commit r249431. This caused failures in sqlite3: http://lab.llvm.org:8011/builders/clang-native-arm-lnt/builds/14453 llvm-svn: 249528
* This patch builds on top of D13378 to handle constant condition. (Mehdi Amini, 2015-10-06, 1 file, -0/+51)
  With this patch, clang -O3 optimizes correctly, providing a > 1000x speedup on this artificial benchmark:
      for (a=0; a<n; a++)
        for (b=0; b<n; b++)
          for (c=0; c<n; c++)
            for (d=0; d<n; d++)
              for (e=0; e<n; e++)
                for (f=0; f<n; f++)
                  x++;
  From test-suite/SingleSource/Benchmarks/Shootout/nestedloop.c
  Reviewers: sanjoyd
  Differential Revision: http://reviews.llvm.org/D13390
  From: Mehdi Amini <mehdi.amini@apple.com>
  llvm-svn: 249431
* [SCEV] Recognize simple br-phi patterns (Sanjoy Das, 2015-10-02, 1 file, -0/+104)
  Summary: Teach SCEV to match patterns like
  ```
  br %cond, label %left, label %right
  left:
    br label %merge
  right:
    br label %merge
  merge:
    V = phi [ %x, %left ], [ %y, %right ]
  ```
  as "select %cond, %x, %y". Before this SCEV would match PHI nodes exclusively to add recurrences. This addresses PR25005.
  Reviewers: joker.eph, joker-eph, atrick
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D13378
  llvm-svn: 249211
* [SCEV] Reapply 'Teach isLoopBackedgeGuardedByCond to exploit trip counts' (Sanjoy Das, 2015-09-25, 1 file, -1/+1)
  Summary: If the trip count of a specific backedge is `N`, then we know that backedge is effectively guarded by the condition `{0,+,1} u< N`. This change teaches SCEV to use this condition to prove things in `isLoopBackedgeGuardedByCond`.
  Depends on D12948
  Depends on D12949
  The original checkin, r248608, had to be backed out due to an issue with an ObjCXX unit test. That issue is now fixed, so re-landing.
  Reviewers: atrick, reames, majnemer, hfinkel
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D12950
  llvm-svn: 248638
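A hedged C++ illustration (not the committed test) of what the `{0,+,1} u< N` guard buys: even when the exit test is an inequality rather than a comparison, the trip count implies the canonical IV is unsigned-less-than N on every taken backedge.

```
#include <cassert>
#include <cstddef>

// Sketch: the exit test is i != n, but because the backedge's trip count
// is n, SCEV may reason as if the backedge were guarded by {0,+,1} u< n,
// which is enough to prove the bound asserted below.
void fill(unsigned char *buf, std::size_t n) {
  for (std::size_t i = 0; i != n; ++i) {
    assert(i < n);   // implied by the trip-count-based backedge guard
    buf[i] = 0;
  }
}
```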
* Revert two SCEV changes that caused test failures in clang. (Sanjoy Das, 2015-09-25, 1 file, -1/+1)
  r248606: "[SCEV] Exploit A < B => (A+K) < (B+K) when possible"
  r248608: "[SCEV] Teach isLoopBackedgeGuardedByCond to exploit trip counts."
  llvm-svn: 248614
* [SCEV] Teach isLoopBackedgeGuardedByCond to exploit trip counts. (Sanjoy Das, 2015-09-25, 1 file, -1/+1)
  Summary: If the trip count of a specific backedge is `N`, then we know that backedge is effectively guarded by the condition `{0,+,1} u< N`. This change teaches SCEV to use this condition to prove things in `isLoopBackedgeGuardedByCond`.
  Depends on D12948
  Depends on D12949
  Reviewers: atrick, reames, majnemer, hfinkel
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D12950
  llvm-svn: 248608
* [ScalarEvolution] Fix PR24757. (Sanjoy Das, 2015-09-10, 1 file, -0/+35)
  Summary: PR24757 was caused by some incorrect math in `ScalarEvolution::HowFarToZero` -- the smallest unsigned solution for X in 2^N * A = 2^N * X is not necessarily A.
  Reviewers: atrick, majnemer, meheff
  Subscribers: llvm-commits, sanjoy
  Differential Revision: http://reviews.llvm.org/D12721
  llvm-svn: 247242
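A worked example (mine, not from the commit) of why that equation has more than one solution: multiplying by a power of two discards high bits, so X = A is not necessarily the smallest unsigned solution.

```
#include <cassert>
#include <cstdint>

// Sketch: in 8-bit arithmetic, 2*A == 2*X holds for both X = A and
// X = A + 128, because multiplying by 2 throws away the top bit.
// The smallest unsigned solution is therefore not always A itself.
int main() {
  uint8_t a = 200;                              // A
  uint8_t x = static_cast<uint8_t>(a + 128);    // X = 72
  assert(static_cast<uint8_t>(2 * a) == static_cast<uint8_t>(2 * x));
  assert(x < a);                                // a smaller solution than A
  return 0;
}
```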
* ScalarEvolution assume hanging bugfix (Piotr Padlewski, 2015-09-09, 1 file, -0/+139)
  http://reviews.llvm.org/D12719
  llvm-svn: 247184
* [PM/AA] Rebuild LLVM's alias analysis infrastructure in a way compatible with the new pass manager, and no longer relying on analysis groups (Chandler Carruth, 2015-09-09, 1 file, -1/+1)
  This builds essentially a ground-up new AA infrastructure stack for LLVM. The core ideas are the same that are used throughout the new pass manager: type erased polymorphism and direct composition. The design is as follows:
  - FunctionAAResults is a type-erasing alias analysis results aggregation interface to walk a single query across a range of results from different alias analyses. Currently this is function-specific as we always assume that aliasing queries are *within* a function.
  - AAResultBase is a CRTP utility providing stub implementations of various parts of the alias analysis result concept, notably in several cases in terms of other more general parts of the interface. This can be used to implement only a narrow part of the interface rather than the entire interface. This isn't really ideal, this logic should be hoisted into FunctionAAResults as currently it will cause a significant amount of redundant work, but it faithfully models the behavior of the prior infrastructure.
  - All the alias analysis passes are ported to be wrapper passes for the legacy PM and new-style analysis passes for the new PM with a shared result object. In some cases (most notably CFL), this is an extremely naive approach that we should revisit when we can specialize for the new pass manager.
  - BasicAA has been restructured to reflect that it is much more fundamentally a function analysis because it uses dominator trees and loop info that need to be constructed for each function.
  All of the references to getting alias analysis results have been updated to use the new aggregation interface. All the preservation and other pass management code has been updated accordingly.
  The way the FunctionAAResultsWrapperPass works is to detect the available alias analyses when run, and add them to the results object. This means that we should be able to continue to respect when various passes are added to the pipeline, for example adding CFL or adding TBAA passes should just cause their results to be available and to get folded into this. The exception to this rule is BasicAA which really needs to be a function pass due to using dominator trees and loop info. As a consequence, the FunctionAAResultsWrapperPass directly depends on BasicAA and always includes it in the aggregation.
  This has significant implications for preserving analyses. Generally, most passes shouldn't bother preserving FunctionAAResultsWrapperPass because rebuilding the results just updates the set of known AA passes. The exception to this rule are LoopPass instances which need to preserve all the function analyses that the loop pass manager will end up needing. This means preserving both BasicAAWrapperPass and the aggregating FunctionAAResultsWrapperPass.
  Now, when preserving an alias analysis, you do so by directly preserving that analysis. This is only necessary for non-immutable-pass-provided alias analyses though, and there are only three of interest: BasicAA, GlobalsAA (formerly GlobalsModRef), and SCEVAA. Usually BasicAA is preserved when needed because it (like DominatorTree and LoopInfo) is marked as a CFG-only pass. I've expanded GlobalsAA into the preserved set everywhere we previously were preserving all of AliasAnalysis, and I've added SCEVAA in the intersection of that with where we preserve SCEV itself.
  One significant challenge to all of this is that the CGSCC passes were actually using the alias analysis implementations by taking advantage of a pretty amazing set of loopholes in the old pass manager's analysis management code which allowed analysis groups to slide through in many cases. Moving away from analysis groups makes this problem much more obvious. To fix it, I've leveraged the flexibility the design of the new PM components provides to just directly construct the relevant alias analyses for the relevant functions in the IPO passes that need them. This is a bit hacky, but should go away with the new pass manager, and is already in many ways cleaner than the prior state.
  Another significant challenge is that various facilities of the old alias analysis infrastructure just don't fit any more. The most significant of these is the alias analysis 'counter' pass. That pass relied on the ability to snoop on AA queries at different points in the analysis group chain. Instead, I'm planning to build printing functionality directly into the aggregation layer. I've not included that in this patch merely to keep it smaller.
  Note that all of this needs a nearly complete rewrite of the AA documentation. I'm planning to do that, but I'd like to make sure the new design settles, and to flesh out a bit more of what it looks like in the new pass manager first.
  Differential Revision: http://reviews.llvm.org/D12080
  llvm-svn: 247167
* [PM] Port ScalarEvolution to the new pass manager. (Chandler Carruth, 2015-08-17, 1 file, -2/+3)
  This change makes ScalarEvolution a stand-alone object and just produces one from a pass as needed. Making this work well requires making the object movable, using references instead of overwritten pointers in a number of places, and other refactorings.
  I've also wired it up to the new pass manager and added a RUN line to a test to exercise it under the new pass manager. This includes basic printing support much like with other analyses.
  But there is a big and somewhat scary change here. Prior to this patch ScalarEvolution was never *actually* invalidated!!! Re-running the pass just re-wired up the various other analyses and didn't remove any of the existing entries in the SCEV caches or clear out anything at all. This might seem OK as everything in SCEV that can do so uses ValueHandles to track updates to the values that serve as SCEV keys. However, this still means that as we ran SCEV over each function in the module, we kept accumulating more and more SCEVs into the cache. At the end, we would have a SCEV cache with every value that we ever needed a SCEV for in the entire module!!! Yowzers. The releaseMemory routine would dump all of this, but that isn't really called during normal runs of the pipeline as far as I can see.
  To make matters worse, there *is* actually a key that we don't update with value handles -- there is a map keyed off of Loop*s. Because LoopInfo *does* release its memory from run to run, it is entirely possible to run SCEV over one function, then over another function, and then look up a Loop* from the second function but find an entry inserted for the first function! Ouch.
  To make matters still worse, there are plenty of updates that *don't* trip a value handle. It seems incredibly unlikely that today GVN or another pass that invalidates SCEV can update values in *just* such a way that a subsequent run of SCEV will incorrectly find lookups in a cache, but it is theoretically possible and would be a nightmare to debug.
  With this refactoring, I've fixed all this by actually destroying and recreating the ScalarEvolution object from run to run. Technically, this could increase the amount of malloc traffic we see, but then again it is also technically correct. ;] I don't actually think we're suffering from tons of malloc traffic from SCEV because if we were, the fact that we never clear the memory would seem more likely to have come up as an actual problem before now. So, I've made the simple fix here. If in fact there are serious issues with too much allocation and deallocation, I can work on a clever fix that preserves the allocations (while clearing the data) between each run, but I'd prefer to do that kind of optimization with a test case / benchmark that shows why we need such cleverness (and that can test that we actually make it faster). It's possible that this will make some things faster by making the SCEV caches have higher locality (due to being significantly smaller), so until there is a clear benchmark, I think the simple change is best.
  Differential Revision: http://reviews.llvm.org/D12063
  llvm-svn: 245193
* [SCEV] Apply NSW and NUW flags via poison value analysis for sub, mul and shl (Bjarke Hammersholt Roune, 2015-08-14, 2 files, -1/+235)
  Summary: http://reviews.llvm.org/D11212 made Scalar Evolution able to propagate NSW and NUW flags from instructions to SCEVs for add instructions. This patch expands that to sub, mul and shl instructions.
  This change makes LSR able to generate pointer induction variables for loops like these, where the index is 32 bit and the pointer is 64 bit:
      for (int i = 0; i < numIterations; ++i)
        sum += ptr[i - offset];
      for (int i = 0; i < numIterations; ++i)
        sum += ptr[i * stride];
      for (int i = 0; i < numIterations; ++i)
        sum += ptr[3 * (i << 7)];
  Reviewers: atrick, sanjoy
  Subscribers: sanjoy, majnemer, hfinkel, llvm-commits, meheff, jingyue, eliben
  Differential Revision: http://reviews.llvm.org/D11860
  llvm-svn: 245118
* [SCEV] Apply NSW and NUW flags via poison value analysis (Jingyue Wu, 2015-07-28, 1 file, -0/+358)
  Summary: Make Scalar Evolution able to propagate NSW and NUW flags from instructions to SCEVs in some cases. This is based on reasoning about when poison from instructions with these flags would trigger undefined behavior. This gives a 13% speed-up on some Eigen3-based Google-internal microbenchmarks for NVPTX.
  There does not seem to be clear agreement about when poison should be considered to propagate through instructions. In this analysis, poison propagates only in cases where that should be uncontroversial.
  This change makes LSR able to create induction variables for expressions like &ptr[i + offset] for loops like this:
      for (int i = 0; i < limit; ++i) {
        sum += ptr[i + offset];
      }
  Here ptr is a 64 bit pointer and offset is a 32 bit integer. For NVPTX, LSR currently creates an induction variable for i + offset instead, which is not as fast. Improving this situation is what brings the 13% speed-up on some Eigen3-based Google-internal microbenchmarks for NVPTX.
  There are more details in this discussion on llvmdev.
  June: http://lists.cs.uiuc.edu/pipermail/llvmdev/2015-June/thread.html#87234
  July: http://lists.cs.uiuc.edu/pipermail/llvmdev/2015-July/thread.html#87392
  Patch by Bjarke Roune
  Reviewers: eliben, atrick, sanjoy
  Subscribers: majnemer, hfinkel, jingyue, meheff, llvm-commits
  Differential Revision: http://reviews.llvm.org/D11212
  llvm-svn: 243460
* [opaque pointer type] Add textual IR support for explicit type parameter to the call instruction (David Blaikie, 2015-04-16, 4 files, -4/+4)
  See r230786 and r230794 for similar changes to gep and load respectively.
  Call is a bit different because it often doesn't have a single explicit type - usually the type is deduced from the arguments, and just the return type is explicit. In those cases there's no need to change the IR. When that's not the case, the IR usually contains the pointer type of the first operand - but since typed pointers are going away, that representation is insufficient so I'm just stripping the "pointerness" of the explicit type away.
  This does make the IR a bit weird - it /sort of/ reads like the type of the first operand: "call void () %x(" but %x is actually of type "void ()*" and will eventually be just of type "ptr". But this seems not too bad and I don't think it would benefit from repeating the type ("void (), void () * %x(" and then eventually "void (), ptr %x(") as has been done with gep and load.
  This also has a side benefit: since the explicit type is no longer a pointer, there's no ambiguity between an explicit type and a function that returns a function pointer. Previously this case needed an explicit type (eg: a function returning a void() function was written as "call void () () * @x(" rather than "call void () * @x(" because of the ambiguity between a function returning a pointer to a void() function and a function returning void). No ambiguity means even function pointer return types can just be written alone, without writing the whole function's type. This leaves /only/ the varargs case where the explicit type is required.
  Given the special type syntax in call instructions, the regex-fu used for migration was a bit more involved in its own unique way (as every one of these is) so here it is. Use it in conjunction with the apply.sh script and associated find/xargs commands I've provided in r230786 to migrate your out of tree tests. Do let me know if any of this doesn't cover your cases & we can iterate on a more general script/regexes to help others with out of tree tests.
  About 9 test cases couldn't be automatically migrated - half of those were functions returning function pointers, where I just had to manually delete the function argument types now that we didn't need an explicit function type there. The other half were typedefs of function types used in calls - just had to manually drop the * from those.
      import fileinput
      import sys
      import re
      pat = re.compile(r'((?:=|:|^|\s)call\s(?:[^@]*?))(\s*$|\s*(?:(?:\[\[[a-zA-Z0-9_]+\]\]|[@%](?:(")?[\\\?@a-zA-Z0-9_.]*?(?(3)"|)|{{.*}}))(?:\(|$)|undef|inttoptr|bitcast|null|asm).*$)')
      addrspace_end = re.compile(r"addrspace\(\d+\)\s*\*$")
      func_end = re.compile("(?:void.*|\)\s*)\*$")
      def conv(match, line):
        if not match or re.search(addrspace_end, match.group(1)) or not re.search(func_end, match.group(1)):
          return line
        return line[:match.start()] + match.group(1)[:match.group(1).rfind('*')].rstrip() + match.group(2) + line[match.end():]
      for line in sys.stdin:
        sys.stdout.write(conv(re.search(pat, line), line))
  llvm-svn: 235145
* [SCEV] Look at backedge dominating conditions (re-land r233447). (Sanjoy Das, 2015-04-01, 1 file, -0/+55)
  Summary: This change teaches ScalarEvolution::isLoopBackedgeGuardedByCond to look at edges within the loop body that dominate the latch. We don't do an exhaustive search for all possible edges, but only a quick walk up the dom tree.
  This re-lands r233447. r233447 was reverted because it caused massive compile-time regressions. This change has a fix for the same issue.
  llvm-svn: 233829
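A hedged sketch (mine, not from the change) of a condition that dominates the latch without being the loop's exit test: the early-exit guard below is checked before every backedge, so it can be used when reasoning about values on the backedge.

```
// Sketch: the `if (i >= bound) break;` test dominates the loop latch, so
// every taken backedge additionally implies i < bound.  Walking up the
// dom tree from the latch is how such conditions are found.
int sum_prefix(const int *a, int n, int bound) {
  int sum = 0;
  for (int i = 0; i != n; ++i) {
    if (i >= bound)
      break;            // dominates the latch; backedge implies i < bound
    sum += a[i];
  }
  return sum;
}
```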
* Revert "[SCEV] Look at backedge dominating conditions."Daniel Jasper2015-03-301-55/+0
| | | | | | | This leads to terribly slow compile times under MSAN. More discussion on the commit thread of r233447. llvm-svn: 233529
* [SCEV] Look at backedge dominating conditions. (Sanjoy Das, 2015-03-27, 1 file, -0/+55)
  Summary: This change teaches ScalarEvolution::isLoopBackedgeGuardedByCond to look at edges within the loop body that dominate the latch. We don't do an exhaustive search for all possible edges, but only a quick walk up the dom tree.
  Reviewers: atrick, hfinkel
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D8627
  llvm-svn: 233447
* [SCEV] Revert bailout added in r75511. (Sanjoy Das, 2015-03-26, 1 file, -0/+45)
  Summary: With the introduction of MarkPendingLoopPredicates in r157092, I don't think the bailout is needed anymore.
  Reviewers: atrick, nicholas
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D8624
  llvm-svn: 233296
* When simplifying a SCEV truncate by distributing, consider it a simplification to replace a cast, even if we end up with a trunc around the term. (Nick Lewycky, 2015-03-20, 1 file, -0/+12)
  Fixes PR22960!
  llvm-svn: 232794
* [SCEV] Make isImpliedCond smarter. (Sanjoy Das, 2015-03-18, 1 file, -0/+30)
  Summary: This change teaches isImpliedCond to infer things like "X sgt 0" => "X - 1 sgt -1". The `ConstantRange` class has the logic to do the heavy lifting, this change simply gets ScalarEvolution to exploit that when reasonable.
  Depends on D8345
  Reviewers: atrick
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D8346
  llvm-svn: 232576
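For intuition (my example, not the committed test): the implication matters when a fact proved about X must be transferred to an offset of X, e.g. an index derived from a guarded value.

```
#include <cassert>

// Sketch: from the guard x > 0 ("X sgt 0") it follows that x - 1 > -1,
// i.e. the index below is non-negative.  Deriving the second comparison
// from the first is the kind of inference described above.
int last_before(const int *a, int x) {
  if (x > 0) {
    assert(x - 1 > -1);   // implied: X sgt 0  =>  X - 1 sgt -1
    return a[x - 1];
  }
  return 0;
}
```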
* [SCEV] Fix PR22856. (Sanjoy Das, 2015-03-13, 1 file, -0/+33)
  Summary: ScalarEvolutionExpander assumes that the header block of a loop is a legal place to have a use for a phi node. This is true only for phis that are either in the header or dominate the header block, but it is not true for phi nodes that are strictly internal to the loop body.
  This change teaches ScalarEvolutionExpander to place uses of PHI nodes in the basic block the PHI nodes belong to. This is always legal, and `hoistIVInc` ensures that the said position dominates `IsomorphicInc`.
  Reviewers: atrick
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D8311
  llvm-svn: 232189
* [opaque pointer type] Add textual IR support for explicit type parameter to gep operator (David Blaikie, 2015-03-13, 5 files, -7/+7)
  Similar to gep (r230786) and load (r230794) changes.
  A similar migration script can be used to update test cases, which successfully migrated all of LLVM and Polly, but about 4 test cases needed manual changes in Clang.
  (This script will read the contents of stdin and massage it into stdout - wrap it in the 'apply.sh' script shown in previous commits + xargs to apply it over a large set of test cases.)
      import fileinput
      import sys
      import re
      rep = re.compile(r"(getelementptr(?:\s+inbounds)?\s*\()((<\d*\s+x\s+)?([^@]*?)(|\s*addrspace\(\d+\))\s*\*(?(3)>)\s*)(?=$|%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|zeroinitializer|<|\[\[[a-zA-Z]|\{\{)", re.MULTILINE | re.DOTALL)
      def conv(match):
        line = match.group(1)
        line += match.group(4)
        line += ", "
        line += match.group(2)
        return line
      line = sys.stdin.read()
      off = 0
      for match in re.finditer(rep, line):
        sys.stdout.write(line[off:match.start()])
        sys.stdout.write(conv(match))
        off = match.end()
      sys.stdout.write(line[off:])
  llvm-svn: 232184
* When forming an addrec out of a phi don't just look at the last computation and steal its flags for our own, there may be other computations in the middle. (Nick Lewycky, 2015-03-13, 1 file, -1/+63)
  Check whether the LHS of the computation is the phi itself and then we know it's safe to steal the flags. Fixes PR22795.
  There's a missed optimization opportunity where we could look at the full chain of computation and take the intersection of the flags instead of only looking one instruction deep.
  llvm-svn: 232134
* [SCEV] Unify getUnsignedRange and getSignedRange (Sanjoy Das, 2015-03-09, 1 file, -0/+39)
  Summary: This removes some duplicated code, and also helps optimization: e.g. in the test case added, `%idx ULT 128` in `@x` is not currently optimized to `true` by `-indvars` but will be, after this change.
  The only functional change in this commit is that for add recurrences, ScalarEvolution::getRange will be more aggressive -- computing the unsigned (resp. signed) range for a SCEVAddRecExpr will now look at the NSW (resp. NUW) bits and check for signed (resp. unsigned) overflow. This can be a strict improvement in some cases (such as the attached test case), and should be no worse in other cases.
  Reviewers: atrick, nlewycky
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D8142
  llvm-svn: 231709
* [SCEV] Add a `scalar-evolution-print-constant-ranges' option (Sanjoy Das, 2015-03-09, 8 files, -20/+20)
  Summary: Unused in this commit, but will be used in a subsequent change (D8142) by a FileCheck test.
  Reviewers: atrick
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D8143
  llvm-svn: 231708
* [SCEV] make SCEV smarter about proving no-wrap. (Sanjoy Das, 2015-03-04, 1 file, -0/+44)
  Summary: Teach SCEV to prove no overflow for an add recurrence by proving something about the range of another add recurrence a loop-invariant distance away from it.
  Reviewers: atrick, hfinkel
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D7980
  llvm-svn: 231305
* Make DataLayout Non-Optional in the Module (Mehdi Amini, 2015-03-04, 1 file, -7/+7)
  Summary: DataLayout keeps the string used for its creation. As a side effect it is no longer needed in the Module. This is "almost" NFC: the string is no longer canonicalized, so you can't rely on two "equal" DataLayouts having the same string returned by getStringRepresentation().
  Get rid of DataLayoutPass: the DataLayout is in the Module. The DataLayout is "per-module"; let's enforce this by not duplicating it more than necessary. One more step toward non-optionality of the DataLayout in the module.
  Make DataLayout Non-Optional in the Module: Module->getDataLayout() will never return nullptr anymore.
  Reviewers: echristo
  Subscribers: resistor, llvm-commits, jholewinski
  Differential Revision: http://reviews.llvm.org/D7992
  From: Mehdi Amini <mehdi.amini@apple.com>
  llvm-svn: 231270
* [opaque pointer type] Add textual IR support for explicit type parameter to load instruction (David Blaikie, 2015-02-27, 31 files, -73/+73)
  Essentially the same as the GEP change in r230786. A similar migration script can be used to update test cases, though a few more test case improvements/changes were required this time around (r229269-r229278):
      import fileinput
      import sys
      import re
      pat = re.compile(r"((?:=|:|^)\s*load (?:atomic )?(?:volatile )?(.*?))(| addrspace\(\d+\) *)\*($| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$)")
      for line in sys.stdin:
        sys.stdout.write(re.sub(pat, r"\1, \2\3*\4", line))
  Reviewers: rafael, dexonsmith, grosser
  Differential Revision: http://reviews.llvm.org/D7649
  llvm-svn: 230794
* [opaque pointer type] Add textual IR support for explicit type parameter to getelementptr instruction (David Blaikie, 2015-02-27, 31 files, -115/+115)
  One of several parallel first steps to remove the target type of pointers, replacing them with a single opaque pointer type. This adds an explicit type parameter to the gep instruction so that when the first parameter becomes an opaque pointer type, the type to gep through is still available to the instructions.
  * This doesn't modify gep operators, only instructions (operators will be handled separately).
  * Textual IR changes only. Bitcode (including upgrade) and changing the in-memory representation will be in separate changes.
  * geps of vectors are transformed as:
        getelementptr <4 x float*> %x, ...
        ->getelementptr float, <4 x float*> %x, ...
    Then, once the opaque pointer type is introduced, this will ultimately look like:
        getelementptr float, <4 x ptr> %x
    with the unambiguous interpretation that it is a vector of pointers to float.
  * address spaces remain on the pointer, not the type:
        getelementptr float addrspace(1)* %x
        ->getelementptr float, float addrspace(1)* %x
    Then, eventually:
        getelementptr float, ptr addrspace(1) %x
  Importantly, the massive amount of test case churn has been automated by the same crappy python code. I had to manually update a few test cases that wouldn't fit the script's model (r228970, r229196, r229197, r229198).
  The python script just massages stdin and writes the result to stdout; I then wrapped that in a shell script to handle replacing files, then using the usual find+xargs to migrate all the files.
  update.py:
      import fileinput
      import sys
      import re
      ibrep = re.compile(r"(^.*?[^%\w]getelementptr inbounds )(((?:<\d* x )?)(.*?)(| addrspace\(\d\)) *\*(|>)(?:$| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$))")
      normrep = re.compile(r"(^.*?[^%\w]getelementptr )(((?:<\d* x )?)(.*?)(| addrspace\(\d\)) *\*(|>)(?:$| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$))")
      def conv(match, line):
        if not match:
          return line
        line = match.groups()[0]
        if len(match.groups()[5]) == 0:
          line += match.groups()[2]
        line += match.groups()[3]
        line += ", "
        line += match.groups()[1]
        line += "\n"
        return line
      for line in sys.stdin:
        if line.find("getelementptr ") == line.find("getelementptr inbounds"):
          if line.find("getelementptr inbounds") != line.find("getelementptr inbounds ("):
            line = conv(re.match(ibrep, line), line)
        elif line.find("getelementptr ") != line.find("getelementptr ("):
          line = conv(re.match(normrep, line), line)
        sys.stdout.write(line)
  apply.sh:
      for name in "$@"
      do
        python3 `dirname "$0"`/update.py < "$name" > "$name.tmp" && mv "$name.tmp" "$name"
        rm -f "$name.tmp"
      done
  The actual commands:
    From llvm/src: find test/ -name *.ll | xargs ./apply.sh
    From llvm/src/tools/clang: find test/ -name *.mm -o -name *.m -o -name *.cpp -o -name *.c | xargs -I '{}' ../../apply.sh "{}"
    From llvm/src/tools/polly: find test/ -name *.ll | xargs ./apply.sh
  After that, check-all (with llvm, clang, clang-tools-extra, lld, compiler-rt, and polly all checked out).
  The extra 'rm' in the apply.sh script is due to a few files in clang's test suite using interesting unicode stuff that my python script was throwing exceptions on. None of those files needed to be migrated, so it seemed sufficient to ignore those cases.
  Reviewers: rafael, dexonsmith, grosser
  Differential Revision: http://reviews.llvm.org/D7636
  llvm-svn: 230786
* Bugfix: SCEVExpander incorrectly marks increment operations as no-wrap (Sanjoy Das, 2015-02-25, 3 files, -1/+132)
  (The change was landed in r230280 and caused the regression PR22674. This version contains a fix and a test-case for PR22674.)
  When emitting the increment operation, SCEVExpander marks the operation as nuw or nsw based on the flags on the preincrement SCEV. This is incorrect because, for instance, it is possible that {-6,+,1} is <nuw> while {-6,+,1}+1 = {-5,+,1} is not. This change teaches SCEV to mark the increment as nuw/nsw only if it can explicitly prove that the increment operation won't overflow.
  Apart from the attached test case, another (more realistic) manifestation of the bug can be seen in Transforms/IndVarSimplify/pr20680.ll.
  Differential Revision: http://reviews.llvm.org/D7778
  llvm-svn: 230533
* Revert r230280: "Bugfix: SCEVExpander incorrectly marks increment operations as no-wrap" (Hans Wennborg, 2015-02-24, 2 files, -31/+1)
  This caused PR22674, failing this assert:
      Instructions.h:2281: llvm::Value* llvm::PHINode::getOperand(unsigned int) const: Assertion `i_nocapture < OperandTraits<PHINode>::operands(this) && "getOperand() out of range!"' failed.
  llvm-svn: 230341
* Fix bug 22641 (Sanjoy Das, 2015-02-24, 3 files, -36/+31)
  The bug was a result of getPreStartForExtend interpreting nsw/nuw flags on an add recurrence more strongly than is legal. {S,+,X}<nsw> implies S+X is nsw only if the backedge of the loop is taken at least once.
  NOTE: I had accidentally committed an unrelated change with the commit message of this change in r230275 (r230275 was reverted in r230279). This is the correct change for this commit message.
  Differential Revision: http://reviews.llvm.org/D7808
  llvm-svn: 230291
* Bugfix: SCEVExpander incorrectly marks increment operations as no-wrap (Sanjoy Das, 2015-02-23, 2 files, -1/+31)
  When emitting the increment operation, SCEVExpander marks the operation as nuw or nsw based on the flags on the preincrement SCEV. This is incorrect because, for instance, it is possible that {-6,+,1} is <nuw> while {-6,+,1}+1 = {-5,+,1} is not. This change teaches SCEV to mark the increment as nuw/nsw only if it can explicitly prove that the increment operation won't overflow.
  Apart from the attached test case, another (more realistic) manifestation of the bug can be seen in Transforms/IndVarSimplify/pr20680.ll.
  NOTE: this change was landed with an incorrect commit message in rL230275 and was reverted for that reason in rL230279. This commit message is the correct one.
  Differential Revision: http://reviews.llvm.org/D7778
  llvm-svn: 230280
* Revert 230275. (Sanjoy Das, 2015-02-23, 2 files, -31/+1)
  230275 got committed with an incorrect commit message due to a mixup on my side. Will re-land in a few moments with the correct commit message.
  llvm-svn: 230279
* Fix bug 22641 (Sanjoy Das, 2015-02-23, 2 files, -1/+31)
  The bug was a result of getPreStartForExtend interpreting nsw/nuw flags on an add recurrence more strongly than is legal. {S,+,X}<nsw> implies S+X is nsw only if the backedge of the loop is taken at least once.
  Differential Revision: http://reviews.llvm.org/D7808
  llvm-svn: 230275
* Generalize getExtendAddRecStart to work with both sign and zero extensions. (Sanjoy Das, 2015-02-18, 1 file, -0/+97)
  This change also removes `DEBUG(dbgs() << "SCEV: untested prestart overflow check\n");` because that case has a unit test now.
  Differential Revision: http://reviews.llvm.org/D7645
  llvm-svn: 229600
* Bugfix: SCEV incorrectly marks certain add recurrences as nsw (Sanjoy Das, 2015-02-09, 1 file, -0/+26)
  When creating a scev for sext({X,+,Y}), scev checks if the expression is equivalent to {sext X,+,zext Y}. If it can prove that, it also tags the original {X,+,Y} as <nsw>, which is not correct.
  In the test case I run `-scalar-evolution` twice because the bug manifests only once SCEV has run through and seen the `sext` expressions (and then does an in-place mutation on {X,+,Y}).
  Differential Revision: http://reviews.llvm.org/D7495
  llvm-svn: 228586
* Allow ScalarEvolution to catch more min/max cases (Johannes Doerfert, 2015-02-09, 1 file, -0/+53)
  For the attached test case different types are used in the ICmpInst and SelectInst that represent the min/max expressions. However, if the ICmpInst type is smaller a comparison with the sign/zero extended operands would have yielded the same result. This situation might arise after the instruction combination pass was applied.
  Differential Revision: http://reviews.llvm.org/D7338
  llvm-svn: 228572
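A hedged C++ illustration (mine) of the pattern being matched: the comparison is done on a narrower type than the selected values, yet the result is still a max, because sign extension preserves the ordering.

```
#include <cstdint>

// Sketch: the compare is on 16-bit values while the select produces their
// 32-bit sign extensions.  Comparing the extended operands would yield
// the same result, so this is still smax(a, b).
int32_t wide_max(int16_t a, int16_t b) {
  int32_t wa = a;                 // sign-extended operands
  int32_t wb = b;
  return (a > b) ? wa : wb;       // narrow compare, wide select
}
```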
* Bugfix: ScalarEvolution incorrectly assumes that the start of certain add recurrences don't overflow (Sanjoy Das, 2015-02-08, 1 file, -0/+116)
  This change makes the optimization more restrictive. It still assumes that an overflowing `add nsw` is undefined behavior; and this change will need revisiting once we have a consistent semantics for poison values.
  Differential Revision: http://reviews.llvm.org/D7331
  llvm-svn: 228552
* Make ScalarEvolution less aggressive with respect to no-wrap flags. (Sanjoy Das, 2015-01-22, 1 file, -0/+41)
  ScalarEvolution currently lowers a subtraction recurrence to an add recurrence with the same no-wrap flags as the subtraction. This is incorrect because `sub nsw X, Y` is not the same as `add nsw X, -Y` and `sub nuw X, Y` is not the same as `add nuw X, -Y`. This patch fixes the issue, and adds two test cases demonstrating the bug.
  Differential Revision: http://reviews.llvm.org/D7081
  llvm-svn: 226755
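A concrete counterexample (mine, not from the patch) for the unsigned case: a subtraction that does not wrap can correspond to an addition of the negated operand that does wrap, so the flag cannot simply be copied.

```
#include <cstdint>
#include <cstdio>

// Sketch: with X = 5 and Y = 3, X - Y does not wrap (5 >= 3), but the
// rewritten form X + (-Y) computes 5 + 0xFFFFFFFD, which does wrap in
// 32-bit unsigned arithmetic.  "No unsigned wrap" on the sub therefore
// does not justify "no unsigned wrap" on the add.
int main() {
  uint32_t x = 5, y = 3;
  uint32_t sub = x - y;        // 2, no unsigned wrap
  uint32_t neg_y = 0u - y;     // 0xFFFFFFFD
  uint32_t add = x + neg_y;    // same value 2, but the addition wrapped
  std::printf("%u %u\n", sub, add);
  return 0;
}
```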
* Fix PR22179. (Sanjoy Das, 2015-01-10, 1 file, -0/+28)
  We were incorrectly inferring nsw for certain SCEVs. We can be more aggressive here (see Richard Smith's comment on http://llvm.org/bugs/show_bug.cgi?id=22179) but this change just focuses on correctness.
  Differential Revision: http://reviews.llvm.org/D6914
  llvm-svn: 225591
* IR: Make metadata typeless in assembly (Duncan P. N. Exon Smith, 2014-12-15, 1 file, -1/+1)
  Now that `Metadata` is typeless, reflect that in the assembly. These are the matching assembly changes for the metadata/value split in r223802.
  - Only use the `metadata` type when referencing metadata from a call intrinsic -- i.e., only when it's used as a `Value`.
  - Stop pretending that `ValueAsMetadata` is wrapped in an `MDNode` when referencing it from call intrinsics.
  So, assembly like this:
      define @foo(i32 %v) {
        call void @llvm.foo(metadata !{i32 %v}, metadata !0)
        call void @llvm.foo(metadata !{i32 7}, metadata !0)
        call void @llvm.foo(metadata !1, metadata !0)
        call void @llvm.foo(metadata !3, metadata !0)
        call void @llvm.foo(metadata !{metadata !3}, metadata !0)
        ret void, !bar !2
      }
      !0 = metadata !{metadata !2}
      !1 = metadata !{i32* @global}
      !2 = metadata !{metadata !3}
      !3 = metadata !{}
  turns into this:
      define @foo(i32 %v) {
        call void @llvm.foo(metadata i32 %v, metadata !0)
        call void @llvm.foo(metadata i32 7, metadata !0)
        call void @llvm.foo(metadata i32* @global, metadata !0)
        call void @llvm.foo(metadata !3, metadata !0)
        call void @llvm.foo(metadata !{!3}, metadata !0)
        ret void, !bar !2
      }
      !0 = !{!2}
      !1 = !{i32* @global}
      !2 = !{!3}
      !3 = !{}
  I wrote an upgrade script that handled almost all of the tests in llvm and many of the tests in cfe (even handling many `CHECK` lines). I've attached it (or will attach it in a moment if you're speedy) to PR21532 to help everyone update their out-of-tree testcases.
  This is part of PR21532.
  llvm-svn: 224257
* ScalarEvolution: HowFarToZero was wrongly using signed division (David Majnemer, 2014-11-16, 1 file, -2/+2)
  HowFarToZero was supposed to use unsigned division in order to calculate the backedge taken count. However, SCEVDivision::divide performs signed division. Unless I am mistaken, no users of SCEVDivision actually want signed arithmetic: switch to udiv and urem.
  This fixes PR21578.
  llvm-svn: 222093
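A small numeric illustration (mine, not from the commit) of why the signedness of the division matters when the dividend has its top bit set:

```
#include <cstdint>
#include <cstdio>

// Sketch: the same 32-bit bit pattern divided by 2 gives very different
// results under unsigned and signed division.  A backedge-taken count is
// a count of iterations, so it must use unsigned division (udiv/urem).
int main() {
  uint32_t distance = 0xFFFFFFFAu;            // as a signed value: -6
  uint32_t stride = 2;
  uint32_t udiv_result = distance / stride;   // 0x7FFFFFFD
  int32_t sdiv_result =
      static_cast<int32_t>(distance) / static_cast<int32_t>(stride);   // -3
  std::printf("%u %d\n", udiv_result, sdiv_result);
  return 0;
}
```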