path: root/llvm/lib/Analysis
Commit message | Author | Age | Files | Lines
* Reintroduce InlineCostAnalyzer::getInlineCost() variant with explicit calleeDavid Chisnall2012-04-061-1/+4
| | | | | | | | parameter until we have a more sensible API for doing the same thing. Reviewed by Chandler. llvm-svn: 154180
* Always compute all the bits in ComputeMaskedBits.Rafael Espindola2012-04-043-162/+102
| | | | | | | | This allows us to keep passing reduced masks to SimplifyDemandedBits, but know about all the bits if SimplifyDemandedBits fails. This allows instcombine to simplify cases like the one in the included testcase. llvm-svn: 154011
* Add a line number for the scope of the function (starting at the firstEric Christopher2012-04-032-2/+10
| | | | | | | | | | brace) so that we get more accurate line number information about the declaration of a given function and the line where the function first starts. Part of rdar://11026482 llvm-svn: 153916
* Teach CodeGen's version of computeMaskedBits to understand the range metadata.Rafael Espindola2012-03-311-2/+2
This is the CodeGen equivalent of r153747. I tested that there is no noticeable performance difference with any combination of -O0/-O2/-g when compiling gcc as a single compilation unit. llvm-svn: 153817
* Fix a typo reported in IRC by someone reviewing this code.Chandler Carruth2012-03-311-1/+1
| | | | llvm-svn: 153815
* Remove a bunch of empty, dead, and no-op methods from all of theseChandler Carruth2012-03-311-10/+0
| | | | | | | | | | interfaces. These methods were used in the old inline cost system where there was a persistent cache that had to be updated, invalidated, and cleared. We're now doing more direct computations that don't require this intricate dance. Even if we resume some level of caching, it would almost certainly have a simpler and more narrow interface than this. llvm-svn: 153813
* Initial commit for the rewrite of the inline cost analysis to operateChandler Carruth2012-03-312-583/+946
on a per-callsite walk of the called function's instructions, in breadth-first order over the potentially reachable set of basic blocks. This is a major shift in how inline cost analysis works to improve the accuracy and rationality of inlining decisions. A brief outline of the algorithm this moves to:
- Build a simplification mapping based on the callsite arguments to the function arguments.
- Push the entry block onto a worklist of potentially-live basic blocks.
- Pop the first block off of the *front* of the worklist (for breadth-first ordering) and walk its instructions using a custom InstVisitor.
- For each instruction's operands, re-map them based on the simplification mappings available for the given callsite.
- Compute any simplification possible of the instruction after re-mapping, and store that back into the simplification mapping.
- Compute any bonuses, costs, or other impacts of the instruction on the cost metric.
- When the terminator is reached, replace any conditional value in the terminator with any simplifications from the mapping we have, and add any successors which are not proven to be dead from these simplifications to the worklist.
- Pop the next block off of the front of the worklist, and repeat.
- As soon as the cost of inlining exceeds the threshold for the callsite, stop analyzing the function in order to bound cost.
The primary goal of this algorithm is to perfectly handle dead code paths. We do not want any code in trivially dead code paths to impact inlining decisions. The previous metric was *extremely* flawed here, and would always subtract the average cost of two successors of a conditional branch when it was proven to become an unconditional branch at the callsite. There was no handling of wildly different costs between the two successors, which would cause inlining when the path actually taken was too large, and no inlining when the path actually taken was trivially simple. There was also no handling of the code *path*, only the immediate successors. These problems vanish completely now. See the added regression tests for the shiny new features -- we skip recursive function calls, SROA-killing instructions, and high cost complex CFG structures when dead at the callsite being analyzed.
Switching to this algorithm required refactoring the inline cost interface to accept the actual threshold rather than simply returning a single cost. The resulting interface is pretty bad, and I'm planning to do lots of interface cleanup after this patch. Several other refactorings fell out of this, but I've tried to minimize them for this patch. =/ There is still more cleanup that can be done here. Please point out anything that you see in review.
I've worked really hard to try to mirror at least the spirit of all of the previous heuristics in the new model. It's not clear that they are all correct any more, but I wanted to minimize the change in this single patch, it's already a bit ridiculous.
One heuristic that is *not* yet mirrored is to allow inlining of functions with a dynamic alloca *if* the caller has a dynamic alloca. I will add this back, but I think the most reasonable way requires changes to the inliner itself rather than just the cost metric, and so I've deferred this for a subsequent patch. The test case is XFAIL-ed until then.
As mentioned in the review mail, this seems to make Clang run about 1% to 2% faster in -O0, but makes its binary size grow by just under 4%. I've looked into the 4% growth, and it can be fixed, but requires changes to other parts of the inliner. llvm-svn: 153812
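A minimal standalone sketch of the breadth-first, threshold-bounded walk described above; the Block model, the worthInlining name, and the integer costs are hypothetical stand-ins, not the actual InlineCost code:

#include <deque>
#include <set>
#include <vector>

// Hypothetical, simplified model of a callee after callsite simplification:
// each block carries per-instruction costs and the successor blocks that the
// simplified terminator can still reach.
struct Block {
  std::vector<int> InstCosts;
  std::vector<int> LiveSuccs;
};

// Breadth-first, per-callsite walk: start at the entry block, pop from the
// *front* of the worklist, and stop as soon as the threshold is exceeded.
// Blocks never reached through a live successor edge contribute nothing, so
// trivially dead code paths cannot influence the inlining decision.
bool worthInlining(const std::vector<Block> &Callee, int Threshold) {
  std::deque<int> Worklist = {0}; // entry block first
  std::set<int> Visited = {0};
  int Cost = 0;
  while (!Worklist.empty()) {
    const Block &BB = Callee[Worklist.front()];
    Worklist.pop_front();
    for (int C : BB.InstCosts) {
      Cost += C;
      if (Cost > Threshold)
        return false; // bound the analysis: give up as soon as it is too big
    }
    for (int S : BB.LiveSuccs)
      if (Visited.insert(S).second) // each potentially-live block only once
        Worklist.push_back(S);
  }
  return true;
}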
* Add computeMaskedBitsLoad back, as it was the change to instsimplify thatRafael Espindola2012-03-301-0/+26
| | | | | | caused the slowdown last time. llvm-svn: 153747
* Lowercase the tag name to match the rest of dwarf.Eric Christopher2012-03-292-3/+3
| | | | llvm-svn: 153691
* Add support for objc property decls according to the page at:Eric Christopher2012-03-292-3/+21
| | | | | | | | | | http://llvm.org/docs/SourceLevelDebugging.html#objcproperty including type and DECL. Expand the metadata needed accordingly. rdar://11144023 llvm-svn: 153639
* Handle intrinsics in GlobalsModRef. Fixes pr12351.Rafael Espindola2012-03-281-0/+6
| | | | llvm-svn: 153604
* Revert r153521 as it's causing large regressions on the nightly testers.Chad Rosier2012-03-282-41/+0
Original commit message for r153521 (aka r153423): Use the new range metadata in computeMaskedBits and add a new optimization to instruction simplify that lets us remove an and when loading a boolean value. llvm-svn: 153587
* Reapply r153423; the original commit was fine. The failing test, distray, had Chad Rosier2012-03-272-0/+41
undefined behavior, which Rafael was kind enough to fix. Original commit message for r153423: Use the new range metadata in computeMaskedBits and add a new optimization to instruction simplify that lets us remove an and when loading a boolean value. llvm-svn: 153521
* SCEV fix: Handle loop invariant loads.Andrew Trick2012-03-261-1/+5
| | | | | | Fixes PR11882: NULL dereference in ComputeLoadConstantCompareExitLimit. llvm-svn: 153480
* Revert r153423 as this is causing failures on our internal nightly testers.Chad Rosier2012-03-262-41/+0
| | | | | | | | Original commit message: Use the new range metadata in computeMaskedBits and add a new optimization to instruction simplify that lets us remove an and when loading a boolean value. llvm-svn: 153452
* Use the new range metadata in computeMaskedBits and add a new optimization toRafael Espindola2012-03-262-0/+41
instruction simplify that lets us remove an and when loading a boolean value. llvm-svn: 153423
* Teach instsimplify how to simplify comparisons of pointers which areChandler Carruth2012-03-251-1/+45
| | | | | | | constant-offsets of a common base using the generic GEP-walking logic I added for computing pointer differences in the same situation. llvm-svn: 153419
* Switch the pointer-difference simplification logic to only work withChandler Carruth2012-03-251-1/+1
| | | | | | | | | | | | inbounds GEPs. This isn't really necessary for simplifying pointer differences, but I'm planning to re-use the same code to simplify pointer comparisons where it is necessary. Since real code almost exclusively uses inbounds GEPs, it doesn't seem worth it to support the extra complexity of turning it on and off. If anyone would like that back, feel free to shout. Note that instcombine will still catch any of these patterns. llvm-svn: 153418
* Try to harden the recursive simplification still further. This is againChandler Carruth2012-03-241-7/+8
| | | | | | | | | | | | | | | | | | | | | | | | | | | | spotted by inspection, and I've crafted no test case that triggers it on my machine, but some of the windows builders are hitting what looks like memory corruption, so *something* is amiss here. This patch takes a more generalized approach to eliminating double-visits. Imagine code such as: %x = ... %y = add %x, 1 %z = add %x, %y You can imagine that if we simplify %x, we would add %y and %z to the list. If the use-chain order happens to cause us to add them in reverse order, we could pull %y off first, and simplify it, adding %z to the list. We now have %z on the list twice, and will reference it after it is deleted. Currently, all my test cases happen to not trigger this, likely due to the use-chain ordering, but there seems no guarantee that such a situation could not occur, so we should handle it correctly. Again, if anyone knows how to craft a testcase that actually triggers this, please let me know. llvm-svn: 153397
* Don't add the instruction about to be RAUW'ed and erased to theChandler Carruth2012-03-241-2/+4
| | | | | | | | | worklist. This can happen in theory when an instruction uses itself, such as a PHI node. This was spotted by inspection, and unfortunately I've not been able to come up with a test case that would trigger it. If anyone has ideas, let me know... llvm-svn: 153396
* Refactor the interface for recursively simplifying instructions to be a tadChandler Carruth2012-03-241-47/+71
| | | | | | | | | | | | | | | | | | | | | | | | | bit simpler by handling a common case explicitly. Also, refactor the implementation to use a worklist based walk of the recursive users, rather than trying to use value handles to detect and recover from RAUWs during the recursive descent. This fixes a very subtle bug in the previous implementation where degenerate control flow structures could cause mutually recursive instructions (PHI nodes) to collapse in just such a way that From became equal to To after some amount of recursion. At that point, we hit the inf-loop that the assert at the top attempted to guard against. This problem is defined away when not using value handles in this manner. There are lots of comments claiming that the WeakVH will protect against just this sort of error, but they're not accurate about the actual implementation of WeakVHs, which do still track RAUWs. I don't have any test case for the bug this fixes because it requires running the recursive simplification on unreachable phi nodes. I've no way to either run this or easily write an input that triggers it. It was found when using instruction simplification inside the inliner when running over the nightly test-suite. llvm-svn: 153393
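A rough standalone illustration of this worklist-based user walk; the Node type and the trySimplify hook are hypothetical, not the real InstructionSimplify API, and the Queued set also guards against the double-visit hazard discussed two commits above:

#include <deque>
#include <set>
#include <vector>

// Hypothetical instruction node; Users and the Dead flag stand in for LLVM's
// use lists and for instructions erased after RAUW.
struct Node {
  std::vector<Node *> Users;
  bool Dead = false;
  // Stub standing in for "simplify this instruction and RAUW it"; the real
  // code would return true (and mark the node dead) when something changed.
  bool trySimplify() { return false; }
};

// Iterative, worklist-based walk of the users of a just-replaced value.
// There is no recursion to get confused by collapsing PHI cycles, no value
// handles needed to track RAUWs, and a node can never be queued twice even
// if it is reachable through several use chains.
void simplifyUsersOf(Node *Replaced) {
  std::deque<Node *> Worklist(Replaced->Users.begin(), Replaced->Users.end());
  std::set<Node *> Queued(Worklist.begin(), Worklist.end());
  while (!Worklist.empty()) {
    Node *N = Worklist.front();
    Worklist.pop_front();
    if (N->Dead)
      continue; // already erased by an earlier simplification
    if (!N->trySimplify())
      continue; // nothing changed, so its users are unaffected
    for (Node *U : N->Users)
      if (!U->Dead && Queued.insert(U).second)
        Worklist.push_back(U);
  }
}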
* Take out the debug info probe stuff. It's making some changes toEric Christopher2012-03-231-21/+2
| | | | | | | the PassManager annoying and should be reimplemented as a decorator on top of existing passes (as should the timing data). llvm-svn: 153305
* Cleanup IVUsers::addUsersIfInteresting.Andrew Trick2012-03-221-12/+15
| | | | | | | Keep the public interface clean, even though LLVM proper does not currently use it. llvm-svn: 153263
* Teach instsimplify to gracefully degrade in the presence of instructionsChandler Carruth2012-03-211-0/+6
not attached to a basic block or function. There are conservatively correct answers in these cases, and this makes the analysis more useful in contexts where we have a partially formed bit of IR. I don't have any way to test this directly... suggestions welcome here, but I'm not seeing anything sadly. I only found this using a subsequent patch to the inliner which runs instsimplify on partially inlined instructions, and even then only on a quite large program. I never got a reasonable testcase out of it, and anything I do get is likely to be quite fragile due to requiring an interaction of two different passes, and the only result being a segfault if it goes wrong. llvm-svn: 153176
* LSR: teach isSimplifiedLoopNest to handle PHI IVUsers.Andrew Trick2012-03-201-1/+8
| | | | llvm-svn: 153132
* LSR: fix IVUsers isSimplifiedLoopNest to perform a full domtree walkAndrew Trick2012-03-201-19/+23
| | | | | | | | | instead of skipping the current loop. My prior fix was incomplete because of an overzealous compile-time optimization: Better fix for: <rdar://problem/11049788> Segmentation fault: 11 in LoopStrengthReduce llvm-svn: 153131
* Factor out the multiply analysis code in ComputeMaskedBits and apply it to theNick Lewycky2012-03-181-62/+76
| | | | | | | | overflow checking multiply intrinsic as well. Add a test for this, updating the test from grep to FileCheck. llvm-svn: 153028
* Start removing the use of an ad-hoc 'never inline' set and insteadChandler Carruth2012-03-161-8/+4
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | directly query the function information which this set was representing. This simplifies the interface of the inline cost analysis, and makes the always-inline pass significantly more efficient. Previously, always-inline would first make a single set of every function in the module *except* those marked with the always-inline attribute. It would then query this set at every call site to see if the function was a member of the set, and if so, refuse to inline it. This is quite wasteful. Instead, simply check the function attribute directly when looking at the callsite. The normal inliner also had similar redundancy. It added every function in the module with the noinline attribute to its set to ignore, even though inside the cost analysis function we *already tested* the noinline attribute and produced the same result. The only tricky part of removing this is that we have to be able to correctly remove only the functions inlined by the always-inline pass when finalizing, which requires a bit of a hack. Still, much less of a hack than the set of all non-always-inline functions was. While I was touching this function, I switched a heavy-weight set to a vector with sort+unique. The algorithm already had a two-phase insert and removal pattern, we were just needlessly paying the uniquing cost on every insert. This probably speeds up some compiles by a small amount (-O0 compiles with lots of always-inline, so potentially heavy libc++ users), but I've not tried to measure it. I believe there is no functional change here, but yell if you spot one. None are intended. Finally, the direction this is going in is to greatly simplify the inline cost query interface so that we can replace its implementation with a much more clever one. Along the way, all the APIs get simplified, so it seems incrementally good. llvm-svn: 152903
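The set-to-vector change mentioned above is the standard two-phase sort+unique pattern; a generic sketch (not the pass's actual code, and using int elements purely for illustration):

#include <algorithm>
#include <vector>

// Two-phase insert/remove pattern: collect candidates with plain push_back
// (duplicates allowed), then pay the uniquing cost once with sort + unique,
// instead of hashing/uniquing on every single insert the way a set would.
void dedupInPlace(std::vector<int> &Items) {
  std::sort(Items.begin(), Items.end());
  Items.erase(std::unique(Items.begin(), Items.end()), Items.end());
}

// Lookups afterwards are binary searches over the sorted vector.
bool contains(const std::vector<int> &SortedItems, int X) {
  return std::binary_search(SortedItems.begin(), SortedItems.end(), X);
}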
* Pull the implementation of the code metrics out of the inline costChandler Carruth2012-03-163-158/+177
| | | | | | | | | | | analysis implementation. The header was already separated. Also cleanup all the comments in the header to follow a nice modern doxygen form. There is still plenty of cruft here, but some of that will fall out in subsequent refactorings and this was an easy step in the right direction. No functionality changed here. llvm-svn: 152898
* LSR fix: Add isSimplifiedLoopNest to IVUsers analysis.Andrew Trick2012-03-161-5/+41
| | | | | | | | | | | | | | Only record IVUsers that are dominated by simplified loop headers. Otherwise SCEVExpander will crash while looking for a preheader. I previously tried to work around this in LSR itself, but that was insufficient. This way, LSR can continue to run if some uses are not in simple loops, as long as we don't attempt to analyze those users. Fixes <rdar://problem/11049788> Segmentation fault: 11 in LoopStrengthReduce llvm-svn: 152892
* Do the right thing on NULL uint64 fields.Eric Christopher2012-03-161-1/+1
| | | | | | | | Patch by Clemens Hammacher! Fixes PR12243 llvm-svn: 152880
* Type sizes and field offsets inside structs are unsigned. This is a highlyDuncan Sands2012-03-151-4/+2
| | | | | | | theoretical fix since it only matters for types with >= 2^63 bits (!) and also only matters if pointers have more than 64 bits, which is not supported anyway. llvm-svn: 152831
* Make the swap code here a bit more obvious what it's doing... We'reChandler Carruth2012-03-151-1/+1
| | | | | | | essentially sorting the pair's arguments. I'd love to actually call sort here, but I'm just not that crazy. ;] llvm-svn: 152764
* Don't assume that the arguments are processed in some particular order.Chandler Carruth2012-03-151-2/+4
| | | | | | | This appears to not be the case with dragonegg at least in some contexts. Hopefully will fix the bootstrap assert failure there. llvm-svn: 152763
* Remove all remnants of partial specialization in the cost computationChandler Carruth2012-03-151-69/+0
| | | | | | side of things. This is all dead code. llvm-svn: 152759
* Extend the inline cost calculation to account for bonuses due toChandler Carruth2012-03-141-3/+86
correlated pairs of pointer arguments at the callsite. This is designed to recognize the common C++ idiom of begin/end pointer pairs when the end pointer is a constant offset from the begin pointer. With the C-based idiom of a pointer and size, the inline cost saw the constant size calculation, and this provides the same level of information for begin/end pairs. In order to propagate this information we have to search for candidate operations on a pair of pointer function arguments (or derived from them) which would be simplified if the pointers had a known constant offset. Then the callsite analysis looks for such pointer pairs in the argument list, and applies the appropriate bonus. This helps LLVM detect that half of bounds-checked STL algorithms (such as hash_combine_range, and some hybrid sort implementations) disappear when inlined with a constant size input. However, it's not a complete fix due to the inaccuracy of our cost metric for constants in general. I'm looking into that next. Benchmarks showed no significant code size change, and very minor performance changes. However, specific code such as hashing is showing significantly cleaner inlining decisions. llvm-svn: 152752
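In rough form, the bonus scans the callsite's pointer arguments for a pair sharing an underlying object with known constant offsets; a simplified standalone sketch in which the PtrArg model and function name are hypothetical, not the real analysis:

#include <cstddef>
#include <cstdint>
#include <optional>
#include <vector>

// Hypothetical view of a callsite argument: which underlying object the
// pointer is based on, and its byte offset from that object if the offset
// folded to a constant (e.g. `v` and `v + N`).
struct PtrArg {
  int BaseId;
  std::optional<int64_t> ConstOff;
};

// Look for a begin/end-style pair: two pointer arguments with the same base
// and constant offsets. For such a pair the callee's size/trip-count math
// folds to a constant after inlining, which earns the callsite a cost bonus.
bool hasConstantOffsetPtrPair(const std::vector<PtrArg> &Args) {
  for (std::size_t I = 0; I < Args.size(); ++I)
    for (std::size_t J = I + 1; J < Args.size(); ++J)
      if (Args[I].BaseId == Args[J].BaseId && Args[I].ConstOff &&
          Args[J].ConstOff)
        return true; // end - begin == *Args[J].ConstOff - *Args[I].ConstOff
  return false;
}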
* Refactor the inline cost bonus calculation for constants to useChandler Carruth2012-03-141-20/+26
| | | | | | | | a worklist rather than a recursive call. No functionality changed. llvm-svn: 152706
* enhance jump threading to preserve TBAA information when PRE'ing loads,Chris Lattner2012-03-131-3/+13
| | | | | | | fixing rdar://11039258, an issue that came up when inspecting clang's bootstrapped codegen. llvm-svn: 152635
* Generalize the "trunc(ptrtoint(x)) - trunc(ptrtoint(y)) ->Duncan Sands2012-03-131-14/+34
| | | | | | trunc(ptrtoint(x-y))" optimization introduced by Chandler. llvm-svn: 152626
* Uniformize the InstructionSimplify interface by ensuring that all routinesDuncan Sands2012-03-132-337/+278
| | | | | | | | take a TargetLibraryInfo parameter. Internally, rather than passing TD, TLI and DT parameters around all over the place, introduce a struct for holding them. llvm-svn: 152623
* Fix regression from r151466: we can't replace uses of an instruction ↵Eli Friedman2012-03-131-3/+7
| | | | | | reachable from the entry block with uses of an instruction not reachable from the entry block. PR12231. llvm-svn: 152595
* Address some review comments from Duncan. This moves the iterativeChandler Carruth2012-03-131-32/+23
offset accumulation to use a boring APInt instead of ConstantExprs. I didn't go all the way to an 'int64_t' because I wanted APInt to handle any magic required to properly wrap the arithmetic when the pointer width is <64 bits. If there is a significant penalty from using APInt here, first off WTF, and secondly let me know and I'll do the math by hand. I've left one layer still operating w/ ConstantExpr because it makes the interface quite a bit simpler, and that one isn't iterative so has much lower cost. I suppose this may potentially speed up some strange compilation situations, but I don't really expect much. It should have no functional impact either way. llvm-svn: 152590
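The reason for APInt rather than int64_t is the wrapping behaviour at the pointer width; a small standalone illustration of that wrapping, with a plain uint64_t plus a mask standing in for APInt's fixed-width arithmetic:

#include <cstdint>
#include <vector>

// Accumulate GEP byte offsets modulo 2^PointerBits. APInt gets this wrapping
// for free by doing fixed-width arithmetic at the pointer width; with a raw
// int64_t the wrap point would silently be 64 bits regardless of the target.
uint64_t accumulateOffsets(const std::vector<uint64_t> &Offsets,
                           unsigned PointerBits) {
  const uint64_t Mask =
      PointerBits >= 64 ? ~0ULL : ((1ULL << PointerBits) - 1);
  uint64_t Total = 0;
  for (uint64_t O : Offsets)
    Total = (Total + O) & Mask; // wrap at the pointer width, not at 64 bits
  return Total;
}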
* Teach instsimplify how to constant fold pointer differences.Chandler Carruth2012-03-121-0/+122
Typically instcombine has handled this, but pointer differences show up in several contexts where we would like to get constant folding, and cannot afford to run instcombine. Specifically, I'm working on improving the constant folding of arguments used in inline cost analysis with instsimplify. Doing this in instsimplify implies some algorithm changes. We have to handle multiple layers of all-constant GEPs because instsimplify cannot fold them into a single GEP the way instcombine can. Also, we're only interested in all-constant GEPs. The result is that this doesn't really replace the instcombine logic, it's just complementary and focused on constant folding. Reviewed on IRC by Benjamin Kramer. llvm-svn: 152555
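Conceptually the new simplification strips constant-offset GEP layers from each pointer and folds the subtraction when both reach the same base; a standalone sketch under a hypothetical value model (PtrNode and simplifyPtrDiff are illustrative names, not the instsimplify code itself):

#include <cstdint>
#include <optional>

// Hypothetical value model: a pointer is either an underlying object, or a
// GEP off another pointer whose operands all folded to one constant offset.
struct PtrNode {
  const PtrNode *GEPBase = nullptr; // null => underlying object
  int64_t ConstOff = 0;             // byte offset when GEPBase is non-null
};

// Strip any number of all-constant GEP layers, accumulating the offset.
// (instcombine would have merged these into one GEP; instsimplify cannot, so
// it has to walk the layers one at a time.)
static const PtrNode *stripConstantOffsets(const PtrNode *P, int64_t &Off) {
  while (P->GEPBase) {
    Off += P->ConstOff;
    P = P->GEPBase;
  }
  return P;
}

// Fold (ptrtoint p) - (ptrtoint q) to a constant when both pointers bottom
// out at the same base through constant offsets; otherwise do nothing.
std::optional<int64_t> simplifyPtrDiff(const PtrNode *P, const PtrNode *Q) {
  int64_t OffP = 0, OffQ = 0;
  if (stripConstantOffsets(P, OffP) != stripConstantOffsets(Q, OffQ))
    return std::nullopt;
  return OffP - OffQ;
}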
* llvm::SwitchInstStepan Dyatkovskiy2012-03-111-1/+1
| | | | | | | Renamed methods caseBegin, caseEnd and caseDefault with case_begin, case_end, and case_default. Added some notes relative to case iterators. llvm-svn: 152532
* Make helper static, so it can be inlined into its sole caller.Benjamin Kramer2012-03-101-3/+3
| | | | llvm-svn: 152515
* As Duncan pointed out, pointers tend not to be in floating point ↵Bill Wendling2012-03-101-6/+6
| | | | | | format...for now. llvm-svn: 152499
* Make this transformation slightly less aggressive and more correct.Bill Wendling2012-03-101-8/+18
| | | | | | | | | | | The 'CmpInst::isFalseWhenEqual' function returns 'false' for values other than simply equality. For instance, it returns 'false' for <= or >=. This isn't the correct behavior for this transformation, which is checking for strict equality and non-equality. It was causing the gcc.c-torture/execute/frame-address.c test to fail because it would completely (and incorrectly) optimize a whole function into a 'ret i32 0'. llvm-svn: 152497
* Refactor some methods to look through bitcasts and GEPs on pointers intoChandler Carruth2012-03-101-25/+3
| | | | | | | | | | a common collection of methods on Value, and share their implementation. We had two variations in two different places already, and I need the third variation for inline cost estimation. Reviewed by Duncan Sands on IRC, but further comments here welcome. llvm-svn: 152490
* Factor out the analysis of addition and subtraction in ComputeMaskedBits. ReuseNick Lewycky2012-03-091-83/+123
| | | | | | it to analyze extractvalue(llvm.[us](add|sub).with.overflow.*) intrinsics! llvm-svn: 152398
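The factored-out addition analysis can be pictured with a plain-uint64_t sketch of the standard known-bits trick: add the two extreme sums (unknown bits taken as all ones, then as all zeros), recover where the carries are known, and keep the result bits where operand bits and carry are all known. The Known struct and knownAdd name are stand-ins that mirror the shape of the APInt code, not its exact form:

#include <cstdint>

// Known bits of a 64-bit value: Zero and One are disjoint masks of the bits
// proven to be 0 or 1; a bit set in neither mask is unknown.
struct Known {
  uint64_t Zero = 0, One = 0;
};

Known knownAdd(Known L, Known R) {
  uint64_t MaxSum = ~L.Zero + ~R.Zero; // largest possible sum (unknowns = 1)
  uint64_t MinSum = L.One + R.One;     // smallest possible sum (unknowns = 0)
  // Carries of the two extreme sums, recovered bitwise with the xor identity
  // carry = sum ^ lhs ^ rhs.
  uint64_t CarryKnownZero = ~(MaxSum ^ L.Zero ^ R.Zero); // no carry even at max
  uint64_t CarryKnownOne = MinSum ^ L.One ^ R.One;       // carry even at min
  // A result bit is known where both operand bits and the carry are known.
  uint64_t KnownMask = (CarryKnownZero | CarryKnownOne) &
                       (L.Zero | L.One) & (R.Zero | R.One);
  Known Out;
  Out.Zero = ~MaxSum & KnownMask;
  Out.One = MinSum & KnownMask;
  return Out;
}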
* Undo a previous restriction on the inline cost calculation which NickChandler Carruth2012-03-091-107/+146
introduced. Specifically, there are cost reductions for all constant-operand icmp instructions against an alloca, regardless of whether the alloca will in fact be eligible for SROA. That means we don't want to abort the icmp reduction computation when we abort the SROA reduction computation. That in turn frees us from the need to keep a separate worklist and defer the ICmp calculations. Use this new-found freedom and some judicious function boundaries to factor the innards of computing the cost factor of any given instruction out of the loop over the instructions and into static helper functions. This greatly simplifies the code, and hopefully makes it more clear what is happening here. Reviewed by Eric Christopher. There is some concern that we'd like to ensure this doesn't get out of hand, and I plan to benchmark the effects of this change over the next few days along with some further fixes to the inline cost. llvm-svn: 152368