path: root/llvm/lib/Analysis
Commit message | Author | Age | Files | Lines
* MemDerefPrinter: Require DataLayoutPass for higher accuracyRamkumar Ramachandra2015-02-091-3/+12
| | | | | | | | | | Without a valid data layout, dereferenceable(N) doesn't get parsed or propagated. Since this is the key item we are testing, add a dependency on the pass. Differential Revision: http://reviews.llvm.org/D7508 llvm-svn: 228611
* MemDepPrinter: cleanup a few loops (NFC)Ramkumar Ramachandra2015-02-091-9/+8
| | | | | | | | | Make use of the newly introduced inst_range to clean up two loops. Clean up a third one while at it. Differential Revision: http://reviews.llvm.org/D7455 llvm-svn: 228596
* Bugfix: SCEV incorrectly marks certain add recurrences as nswSanjoy Das2015-02-091-2/+10
| | | | | | | | | | | | | | When creating a scev for sext({X,+,Y}), scev checks if the expression is equivalent to {sext X,+,zext Y}. If it can prove that, it also tags the original {X,+,Y} as <nsw>, which is not correct. In the test case, I run `-scalar-evolution` twice because the bug manifests only once SCEV has run through and seen the `sext` expressions (and then does an in-place mutation on {X,+,Y}). Differential Revision: http://reviews.llvm.org/D7495 llvm-svn: 228586
* Allow ScalarEvolution to catch more min/max casesJohannes Doerfert2015-02-091-23/+25
| | | | | | | | | | | | | For the attached test case, different types are used in the ICmpInst and SelectInst that represent the min/max expressions. However, if the ICmpInst type is smaller, a comparison with the sign/zero-extended operands would have yielded the same result. This situation might arise after the instruction combination pass has been applied. Differential Revision: http://reviews.llvm.org/D7338 llvm-svn: 228572
* Bugfix: ScalarEvolution incorrectly assumes that the start of certainSanjoy Das2015-02-081-1/+18
| | | | | | | | | | | | | add recurrences don't overflow. This change makes the optimization more restrictive. It still assumes that an overflowing `add nsw` is undefined behavior; and this change will need revisiting once we have a consistent semantics for poison values. Differential Revision: http://reviews.llvm.org/D7331 llvm-svn: 228552
* Correctly combine alias.scope metadata by a union instead of intersectingBjorn Steinbrink2015-02-081-2/+2
| | | | | | | | | | | | | | | | | Summary: The alias.scope metadata represents sets of things an instruction might alias with. When generically combining the metadata from two instructions the result must be the union of the original sets, because the new instruction might alias with anything any of the original instructions aliased with. Reviewers: hfinkel Subscribers: llvm-commits Differential Revision: http://reviews.llvm.org/D7490 llvm-svn: 228525
* ValueTracking: Make isBytewiseValue simpler and more powerful at the same time.Benjamin Kramer2015-02-071-19/+9
| | | | | | | Turns out there is a simpler way than binary decomposition of checking that all bytes in a word are equal. llvm-svn: 228503
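A minimal C++ sketch of one such simpler check (illustrative only, not the committed LLVM code): a word consists of identical bytes exactly when rotating it by eight bits reproduces it, so no recursive halving of the value is needed.

    #include <cassert>
    #include <cstdint>

    // All eight bytes of V are identical iff a rotate-left by one byte is a no-op.
    static bool allBytesEqual(uint64_t V) {
      uint64_t Rotated = (V << 8) | (V >> 56);
      return Rotated == V;
    }

    int main() {
      assert(allBytesEqual(0x4242424242424242ULL));
      assert(!allBytesEqual(0x4242424242424243ULL));
      assert(allBytesEqual(0));
      return 0;
    }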
* [BasicAA] Try to disambiguate GEPs through arrays of structs intoAhmed Bougacha2015-02-071-0/+104
| | | | | | | | | | | | | | | | | | | | | | | | | | | different fields. We can show that two GEPs off of the same (possibly multidimensional) array of structs, into different fields, can't alias. Quoting: For two GEPOperators GEP1 and GEP2, if we find that: - both GEPs begin indexing from the exact same pointer; - the last indices in both GEPs are constants, indexing into a struct; - said indices are different, hence, the pointed-to fields are different; - and both GEPs only index through arrays prior to that; this lets us determine that the struct that GEP1 indexes into and the struct that GEP2 indexes into must either precisely overlap or be completely disjoint. Because they cannot partially overlap, indexing into different non-overlapping fields of the struct will never alias. The other BasicAA::aliasGEP rules worked in some cases, but not all (for example, the i32x3 struct in the testcase). We can add this simple ad-hoc rule to complement them. rdar://19717375 Differential Revision: http://reviews.llvm.org/D7453 llvm-svn: 228498
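The rule can be pictured with a hypothetical C++ analogue (not the commit's test case): two addresses formed by walking the same array of structs but ending at different fields can never overlap, whatever the array indices are.

    struct S { int a, b, c; };            // an "i32x3"-shaped struct

    // Both pointers index only through arrays before the final, distinct
    // constant field index, so they can never alias -- even when I == J.
    int *fieldA(S *Arr, long I) { return &Arr[I].a; }
    int *fieldC(S *Arr, long J) { return &Arr[J].c; }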
* SCEV: Compress disposition pairs.Benjamin Kramer2015-02-071-18/+18
| | | | | | | Composing DenseMaps and SmallVectors is still somewhat suboptimal, but this at least halves the size of the vector elements. NFC. llvm-svn: 228497
* [InstSimplify] Add SimplifyFPBinOp function.Michael Zolotukhin2015-02-062-1/+37
| | | | | | | | | | | | | | | | | | | | | | | | | | It is a variation of SimplifyBinOp, but it takes into account FastMathFlags. It is needed in the inliner and loop unroller to accurately predict the transformation's outcome (previously we dropped the flags and were too conservative in some cases). Example: float foo(float *a, float b) { float r; if (a[1] * b) r = /* a lot of expensive computations */; else r = 1; return r; } float boo(float *a) { return foo(a, 0.0); } Without this patch, we don't inline 'foo' into 'boo'. llvm-svn: 228432
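To see why the flags matter (a hedged sketch with assumed values, not code from the patch): under strict IEEE semantics a[1] * 0.0f cannot be folded to 0.0f, so the branch in foo cannot be simplified; only when NaNs, infinities and signed zeros may be ignored does the multiply fold away and the expensive side become dead.

    #include <cstdio>
    #include <limits>

    int main() {
      float inf = std::numeric_limits<float>::infinity();
      float nan = std::numeric_limits<float>::quiet_NaN();
      // Counterexamples that block the fold without fast-math flags:
      std::printf("inf  * 0.0f  = %f\n", inf * 0.0f);    // NaN, not 0.0
      std::printf("nan  * 0.0f  = %f\n", nan * 0.0f);    // NaN, not 0.0
      std::printf("-1.0f * 0.0f = %f\n", -1.0f * 0.0f);  // -0.0, not +0.0
      return 0;
    }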
* [LV] Move addRuntimeCheck to LoopAccessAnalysisAdam Nemet2015-02-061-0/+106
| | | | | | | | | | | | | This will allow it to be shared with the new Loop Distribution pass. getFirstInst is currently duplicated across LoopVectorize.cpp and LoopAccessAnalysis.cpp. This is a short-term work-around until we figure out a better solution. NFC. (The code moved is adjusted a bit for the name of the Loop member and that PtrRtCheck is now a reference rather than a pointer.) llvm-svn: 228418
* Whitespace.Chad Rosier2015-02-061-2/+0
| | | | llvm-svn: 228397
* Introduce print-memderefs to test isDereferenceablePointerRamkumar Ramachandra2015-02-063-0/+63
| | | | | | | | | | Since testing the function indirectly is tricky, introduce a direct print-memderefs pass, in the same spirit as print-memdeps, which prints dereferenceability information matched by FileCheck. Differential Revision: http://reviews.llvm.org/D7075 llvm-svn: 228369
* Value soft float calls as more expensive in the inliner.Cameron Esfahani2015-02-052-0/+23
| | | | | | | | | | | | | | Summary: When evaluating floating point instructions in the inliner, ask the TTI whether it is an expensive operation. By default, it's not an expensive operation. This keeps the default behavior the same as before. The ARM TTI has been updated to return back TCC_Expensive for targets which don't have hardware floating point. Reviewers: chandlerc, echristo Reviewed By: echristo Subscribers: t.p.northover, aemerson, llvm-commits Differential Revision: http://reviews.llvm.org/D6936 llvm-svn: 228263
* ValueTracking: Make isSafeToSpeculativelyExecute a little cleanerDavid Majnemer2015-02-011-14/+14
| | | | | | No functional change intended. llvm-svn: 227760
* [LoopVectorize] Move LoopAccessAnalysis to its own moduleAdam Nemet2015-02-012-0/+1085
| | | | | | | | | | | | | | | | | | | | Other than moving code and adding the boilerplate for the new files, the code being moved is unchanged. There are a few global functions that are shared with the rest of the LoopVectorizer. I moved these to the new module as well (emitLoopAnalysis, stripIntegerCast, replaceSymbolicStrideSCEV) along with the Report class used by emitLoopAnalysis. There is probably room for further improvement in this area. I kept DEBUG_TYPE "loop-vectorize" because it's used as the PassName with emitOptimizationRemarkAnalysis. This will obviously have to change. NFC. This is part of the patchset that splits out the memory dependence logic from LoopVectorizationLegality into a new class LoopAccessAnalysis. LoopAccessAnalysis will be used by the new Loop Distribution pass. llvm-svn: 227756
* [multiversion] Kill FunctionTargetTransformInfo, TTI itself is nowChandler Carruth2015-02-012-51/+0
| | | | | | per-function and supports the exact desired interface. llvm-svn: 227743
* [multiversion] Remove the function parameter from the unrollingChandler Carruth2015-02-011-2/+2
| | | | | | preferences interface on TTI now that all of TTI is per-function. llvm-svn: 227741
* [multiversion] Implement the old pass manager's TTI wrapper pass inChandler Carruth2015-02-011-5/+10
| | | | | | | | | | | | | | | | | | | | | | | terms of the new pass manager's TargetIRAnalysis. Yep, this is one of the nicer bits of the new pass manager's design. Passes can in many cases operate in a vacuum and so we can just nest things when convenient. This is particularly convenient here as I can now consolidate all of the TargetMachine logic on this analysis. The most important change here is that this pushes the function we need TTI for all the way into the TargetMachine, and re-creates the TTI object for each function rather than re-using one across functions. We're now prepared to teach the targets to produce function-specific TTI objects with specific subtargets cached, etc. One piece of feedback I'd love here is whether it's worth renaming any of this stuff. None of the names really seem that awesome to me at this point, but TargetTransformInfoWrapperPass is particularly ... odd. TargetIRAnalysisWrapper might make more sense. I would want to do that rename separately anyways, but let me know what you think. llvm-svn: 227731
* [multiversion] Thread a function argument through all the callers of theChandler Carruth2015-02-013-4/+4
| | | | | | | | | | | | | | getTTI method used to get an actual TTI object. No functionality changed. This just threads the argument and ensures code like the inliner can correctly look up the callee's TTI rather than using a fixed one. The next change will use this to implement per-function subtarget usage by TTI. The changes after that should eliminate the need for FTTI as that will have become the default. llvm-svn: 227730
* [PM] Port TTI to the new pass manager, introducing a TargetIRAnalysis toChandler Carruth2015-02-011-0/+17
| | | | | | | | | | | | | | produce it. This adds a function to the TargetMachine that produces this analysis via a callback for each function. This in turn paves the way to produce a *different* TTI per-function with the correct subtarget cached. I've also done the necessary wiring in the opt tool to thread the target machine down and make it available to the pass registry so that we can construct this analysis from a target machine when available. llvm-svn: 227721
* [PM] Switch the TargetMachine interface from accepting a pass managerChandler Carruth2015-01-311-13/+17
| | | | | | | | | | | | | | | | | | | | | | | base which it adds a single analysis pass to, to instead return the type erased TargetTransformInfo object constructed for that TargetMachine. This removes all of the pass variants for TTI. There is now a single TTI *pass* in the Analysis layer. All of the Analysis <-> Target communication is through the TTI's type erased interface itself. While the diff is large here, it is nothing more than code motion to make types available in a header file for use in a different source file within each target. I've tried to keep all the doxygen comments and file boilerplate in line with this move, but let me know if I missed anything. With this in place, the next step to making TTI work with the new pass manager is to introduce a really simple new-style analysis that produces a TTI object via a callback into this routine on the target machine. Once we have that, we'll have the building blocks necessary to accept a function argument as well. llvm-svn: 227685
* [PM] Change the core design of the TTI analysis to use a polymorphicChandler Carruth2015-01-315-527/+117
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | type erased interface and a single analysis pass rather than an extremely complex analysis group. The end result is that the TTI analysis can contain a type erased implementation that supports the polymorphic TTI interface. We can build one from a target-specific implementation or from a dummy one in the IR. I've also factored all of the code into "mix-in"-able base classes, including CRTP base classes to facilitate calling back up to the most specialized form when delegating horizontally across the surface. These aren't as clean as I would like and I'm planning to work on cleaning some of this up, but I wanted to start by putting into the right form. There are a number of reasons for this change, and this particular design. The first and foremost reason is that an analysis group is complete overkill, and the chaining delegation strategy was so opaque, confusing, and high overhead that TTI was suffering greatly for it. Several of the TTI functions had failed to be implemented in all places because of the chaining-based delegation making there be no checking of this. A few other functions were implemented with incorrect delegation. The message to me was very clear working on this -- the delegation and analysis group structure was too confusing to be useful here. The other reason of course is that this is *much* more natural fit for the new pass manager. This will lay the ground work for a type-erased per-function info object that can look up the correct subtarget and even cache it. Yet another benefit is that this will significantly simplify the interaction of the pass managers and the TargetMachine. See the future work below. The downside of this change is that it is very, very verbose. I'm going to work to improve that, but it is somewhat an implementation necessity in C++ to do type erasure. =/ I discussed this design really extensively with Eric and Hal prior to going down this path, and afterward showed them the result. No one was really thrilled with it, but there doesn't seem to be a substantially better alternative. Using a base class and virtual method dispatch would make the code much shorter, but as discussed in the update to the programmer's manual and elsewhere, a polymorphic interface feels like the more principled approach even if this is perhaps the least compelling example of it. ;] Ultimately, there is still a lot more to be done here, but this was the huge chunk that I couldn't really split things out of because this was the interface change to TTI. I've tried to minimize all the other parts of this. The follow up work should include at least: 1) Improving the TargetMachine interface by having it directly return a TTI object. Because we have a non-pass object with value semantics and an internal type erasure mechanism, we can narrow the interface of the TargetMachine to *just* do what we need: build and return a TTI object that we can then insert into the pass pipeline. 2) Make the TTI object be fully specialized for a particular function. This will include splitting off a minimal form of it which is sufficient for the inliner and the old pass manager. 3) Add a new pass manager analysis which produces TTI objects from the target machine for each function. This may actually be done as part of #2 in order to use the new analysis to implement #2. 
4) Work on narrowing the API between TTI and the targets so that it is easier to understand and less verbose to type erase. 5) Work on narrowing the API between TTI and its clients so that it is easier to understand and less verbose to forward. 6) Try to improve the CRTP-based delegation. I feel like this code is just a bit messy and exacerbating the complexity of implementing the TTI in each target. Many thanks to Eric and Hal for their help here. I ended up blocked on this somewhat more abruptly than I expected, and so I appreciate getting it sorted out very quickly. Differential Revision: http://reviews.llvm.org/D7293 llvm-svn: 227669
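A generic C++ sketch of the type-erasure shape described above (names invented for illustration; this is not LLVM's actual TTI interface): a single value-semantic object owns a polymorphic "Concept", and either a target-provided struct or an IR-only default fills it in, so no analysis group is needed.

    #include <memory>
    #include <utility>

    class CostModel {
      struct Concept {                             // type-erased interface
        virtual ~Concept() = default;
        virtual unsigned getOpCost(unsigned Opcode) const = 0;
      };
      template <typename ImplT> struct Model final : Concept {
        ImplT Impl;
        explicit Model(ImplT I) : Impl(std::move(I)) {}
        unsigned getOpCost(unsigned Opcode) const override {
          return Impl.getOpCost(Opcode);           // delegate to the wrapped impl
        }
      };
      std::unique_ptr<Concept> C;

    public:
      template <typename ImplT>
      explicit CostModel(ImplT Impl)
          : C(std::make_unique<Model<ImplT>>(std::move(Impl))) {}
      unsigned getOpCost(unsigned Opcode) const { return C->getOpCost(Opcode); }
    };

    // An IR-only fallback; a target would supply its own struct with the same
    // member functions and better answers.
    struct DefaultCostImpl {
      unsigned getOpCost(unsigned) const { return 1; }
    };

    int main() { return CostModel(DefaultCostImpl{}).getOpCost(42) == 1 ? 0 : 1; }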
* Fold fcmp in cases where value is provably non-negative. By Arch Robison.Elena Demikhovsky2015-01-282-0/+67
| | | | | | | | | | | | | This patch folds fcmp in some cases of interest in Julia. The patch adds a function CannotBeOrderedLessThanZero that returns true if a value is provably not less than zero. I.e. the function returns true if the value is provably -0, +0, positive, or a NaN. The patch extends InstructionSimplify.cpp to fold instances of fcmp where: - the predicate is olt or uge - the first operand is provably not less than zero - the second operand is zero The motivation for handling these cases is optimizing away domain checks for sqrt in Julia for common idioms such as sqrt(x*x+y*y). http://reviews.llvm.org/D6972 llvm-svn: 227298
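The same argument in plain C++ (an illustrative sketch, not the patch itself): a squared value is either non-negative or NaN, so the ordered less-than-zero test is always false and the unordered greater-or-equal test is always true, which is exactly what lets the sqrt domain check disappear.

    #include <cassert>
    #include <cmath>
    #include <limits>

    // For any float x, x*x is >= 0 or NaN, so:
    //   "x*x < 0"            (ordered, like olt) is always false -- NaN compares false;
    //   ">= 0 or unordered"  (like uge)          is always true.
    static void check(float x) {
      float xx = x * x;
      assert(!(xx < 0.0f));
      assert(xx >= 0.0f || std::isnan(xx));
    }

    int main() {
      check(3.0f);
      check(-0.0f);
      check(std::numeric_limits<float>::infinity());
      check(std::numeric_limits<float>::quiet_NaN());
      return 0;
    }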
* Move EH personality type classification to Analysis/LibCallSemantics.hReid Kleckner2015-01-281-0/+16
| | | | | | | | | | | | | | Summary: Also add enum types for __C_specific_handler and _CxxFrameHandler3 for which we know a few things. Reviewers: majnemer Subscribers: llvm-commits Differential Revision: http://reviews.llvm.org/D7214 llvm-svn: 227284
* Commoning of target specific load/store intrinsics in Early CSE.Chad Rosier2015-01-261-0/+19
| | | | | | | Phabricator revision: http://reviews.llvm.org/D7121 Patch by Sanjin Sijaric <ssijaric@codeaurora.org>! llvm-svn: 227149
* Refine memory dependence's notion of volatile semanticsPhilip Reames2015-01-261-16/+27
| | | | | | | | | | According to my reading of the LangRef, volatiles are only ordered with respect to other volatiles. It is entirely legal and profitable to forward unrelated loads over the volatile load. This patch implements this for GVN by refining the transition rules MemoryDependenceAnalysis uses when encountering a volatile. The added test cases show where the extra flexibility is profitable for local dependence optimizations. I have a related change (227110) which will extend this to non-local dependence (i.e. PRE), but that's essentially orthogonal to the semantic change in this patch. I have tested the two together and can confirm that PRE works over a volatile load with both changes. I will be submitting a PRE w/volatiles test case separately in the near future. Differential Revision: http://reviews.llvm.org/D6901 llvm-svn: 227112
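A small C++ illustration of the semantics being relied on (an assumed example, not one of the added tests): the volatile access itself must stay, but it does not order unrelated non-volatile loads, so the second load of g may be satisfied by forwarding the first.

    int g;                // ordinary global
    volatile int device;  // volatile location, e.g. a device register

    int readTwice() {
      int a = g;
      int v = device;     // must be preserved and ordered w.r.t. other volatiles
      int b = g;          // unrelated to the volatile: GVN may forward 'a' here
      return a + b + v;
    }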
* Pass QueryInst down through non-local dependency calculationPhilip Reames2015-01-261-8/+13
| | | | | | | | | | | | | | This change is mostly motivated by exposing information about the original query instruction to the actual scanning work in getPointerDependencyFrom when used by GVN PRE. In a follow up change, I will use this to be more precise with regards to the semantics of volatile instructions encountered in the scan of a basic block. Worth noting, is that this change (despite appearing quite simple) is not semantically preserving. By providing more information to the helper routine, we allow some optimizations to kick in that weren't previously able to (when called from this code path.) In particular, we see that treatment of !invariant.load becomes more precise. In theory, we might see a difference with an ordered/atomic instruction as well, but I'm having a hard time actually finding a test case which shows that. Test wise, I've included new tests for !invariant.load which illustrate this difference. I've also included some updated TBAA tests which highlight that this change isn't needed for that optimization to kick in - it's handled inside alias analysis itself. Eventually, it would be nice to factor the !invariant.load handling inside alias analysis as well. Differential Revision: http://reviews.llvm.org/D6895 llvm-svn: 227110
* Fix incorrect partial aliasingDaniel Berlin2015-01-261-7/+20
| | | | | | Update testcases llvm-svn: 227099
* Fix delegationDaniel Berlin2015-01-261-2/+5
| | | | llvm-svn: 227098
* Implemented cost model for masked load/store operations.Elena Demikhovsky2015-01-251-0/+12
| | | | llvm-svn: 227035
* [PM] Rework how the TargetLibraryInfo pass integrates with the new passChandler Carruth2015-01-241-15/+40
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | manager to support the actual uses of it. =] When I ported instcombine to the new pass manager I discover that it didn't work because TLI wasn't available in the right places. This is a somewhat surprising and/or subtle aspect of the new pass manager design that came up before but I think is useful to be reminded of: While the new pass manager *allows* a function pass to query a module analysis, it requires that the module analysis is already run and cached prior to the function pass manager starting up, possibly with a 'require<foo>' style utility in the pass pipeline. This is an intentional hurdle because using a module analysis from a function pass *requires* that the module analysis is run prior to entering the function pass manager. Otherwise the other functions in the module could be in who-knows-what state, etc. A somewhat surprising consequence of this design decision (at least to me) is that you have to design a function pass that leverages a module analysis to do so as an optional feature. Even if that means your function pass does no work in the absence of the module analysis, you have to handle that possibility and remain conservatively correct. This is a natural consequence of things being able to invalidate the module analysis and us being unable to re-run it. And it's a generally good thing because it lets us reorder passes arbitrarily without breaking correctness, etc. This ends up causing problems in one case. What if we have a module analysis that is *definitionally* impossible to invalidate. In the places this might come up, the analysis is usually also definitionally trivial to run even while other transformation passes run on the module, regardless of the state of anything. And so, it follows that it is natural to have a hard requirement on such analyses from a function pass. It turns out, that TargetLibraryInfo is just such an analysis, and InstCombine has a hard requirement on it. The approach I've taken here is to produce an analysis that models this flexibility by making it both a module and a function analysis. This exposes the fact that it is in fact safe to compute at any point. We can even make it a valid CGSCC analysis at some point if that is useful. However, we don't want to have a copy of the actual target library info state for each function! This state is specific to the triple. The somewhat direct and blunt approach here is to turn TLI into a pimpl, with the state and mutators in the implementation class and the query routines primarily in the wrapper. Then the analysis can lazily construct and cache the implementations, keyed on the triple, and on-demand produce wrappers of them for each function. One minor annoyance is that we will end up with a wrapper for each function in the module. While this is a bit wasteful (one pointer per function) it seems tolerable. And it has the advantage of ensuring that we pay the absolute minimum synchronization cost to access this information should we end up with a nice parallel function pass manager in the future. We could look into trying to mark when analysis results are especially cheap to recompute and more eagerly GC-ing the cached results, or we could look at supporting a variant of analyses whose results are specifically *not* cached and expected to just be used and discarded by the consumer. 
Either way, these seem like incremental enhancements that should happen when we start profiling the memory and CPU usage of the new pass manager and not before. The other minor annoyance is that if we end up using the TLI in both a module pass and a function pass, those will be produced by two separate analyses, and thus will point to separate copies of the implementation state. While a minor issue, I dislike this and would like to find a way to cleanly allow a single analysis instance to be used across multiple IR unit managers. But I don't have a good solution to this today, and I don't want to hold up all of the work waiting to come up with one. This too seems like a reasonable thing to incrementally improve later. llvm-svn: 226981
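A rough C++ sketch of the pimpl-plus-cache shape described above (hypothetical names, not LLVM's actual TargetLibraryInfo classes): the heavyweight per-triple state lives in one lazily built, cached Impl, and each function only ever gets a one-pointer wrapper over it.

    #include <map>
    #include <memory>
    #include <string>

    struct LibInfoImpl {
      // ...per-triple tables describing which library calls are available...
    };

    class LibInfo {                        // cheap per-function query wrapper
      const LibInfoImpl *Impl;
    public:
      explicit LibInfo(const LibInfoImpl &I) : Impl(&I) {}
      const LibInfoImpl &impl() const { return *Impl; }  // queries forward here
    };

    class LibInfoAnalysis {                // builds and caches one Impl per triple
      std::map<std::string, std::unique_ptr<LibInfoImpl>> Cache;
    public:
      LibInfo getForTriple(const std::string &Triple) {
        std::unique_ptr<LibInfoImpl> &Slot = Cache[Triple];
        if (!Slot)
          Slot.reset(new LibInfoImpl());   // computed once, shared by all functions
        return LibInfo(*Slot);
      }
    };

    int main() {
      LibInfoAnalysis A;
      LibInfo L = A.getForTriple("x86_64-unknown-linux-gnu");
      (void)L;
      return 0;
    }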
* [PM] Actually add the new pass manager support for the assumption cache.Chandler Carruth2015-01-221-0/+15
| | | | | | | | | | | | | | | I had already factored this analysis specifically to enable doing this, but hadn't actually committed the necessary wiring to get at this from the new pass manager. This also nicely shows how the separate cache object can be directly managed by the new pass manager. This analysis didn't have any direct tests and so I've added a printer pass and a boring test case. I chose to print the i1 value which is being assumed rather than the call to llvm.assume as that seems much more useful for testing... but suggestions on an even better printing strategy welcome. My main goal was to make sure things actually work. =] llvm-svn: 226868
* Intrinsics: introduce llvm_any_ty aka ValueType AnyRamkumar Ramachandra2015-01-221-0/+1
| | | | | | | | | | | | | | | Specifically, gc.result benefits from this greatly. Instead of: gc.result.int.* gc.result.float.* gc.result.ptr.* ... We now have a gc.result.* that can specialize to literally any type. Differential Revision: http://reviews.llvm.org/D7020 llvm-svn: 226857
* Make ScalarEvolution less aggressive with respect to no-wrap flags.Sanjoy Das2015-01-221-8/+7
| | | | | | | | | | | | ScalarEvolution currently lowers a subtraction recurrence to an add recurrence with the same no-wrap flags as the subtraction. This is incorrect because `sub nsw X, Y` is not the same as `add nsw X, -Y` and `sub nuw X, Y` is not the same as `add nuw X, -Y`. This patch fixes the issue, and adds two test cases demonstrating the bug. Differential Revision: http://reviews.llvm.org/D7081 llvm-svn: 226755
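A concrete illustration of why the flags cannot simply be transferred (values chosen for illustration only): at the type's minimum value, negating the right-hand side changes which operation overflows, so the sub and the rewritten add disagree about wrapping.

    #include <cstdint>
    #include <cstdio>

    int main() {
      // Signed, in i8 terms: with X = 0 and Y = INT8_MIN (-128),
      //   "sub nsw X, Y" computes 0 - (-128) = 128, which overflows i8,
      //   while the rewrite needs -Y = 128, which i8 cannot even represent,
      //   and "add nsw 0, -128" (the wrapped negation) does not overflow at all.
      int X = 0, Y = INT8_MIN;
      std::printf("X - Y = %d, -Y = %d (i8 range is [-128, 127])\n", X - Y, -Y);

      // Unsigned, in i8 terms: "sub nuw 0, 1" wraps below zero, but the
      // rewritten "add nuw 0, 255" stays in range, so nuw cannot be copied either.
      return 0;
    }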
* Fixed a bug with how we determine bitset indices.George Burgess IV2015-01-211-1/+1
| | | | llvm-svn: 226671
* [PM] Port LoopInfo to the new pass manager, adding both a LoopAnalysisChandler Carruth2015-01-201-0/+21
| | | | | | | | | | | | | | pass and a LoopPrinterPass with the expected associated wiring. I've added a RUN line to the only test case (!!!) we have that actually prints loops. Everything seems to be working. This is somewhat exciting as this is the first analysis using another analysis to go in for the new pass manager. =D I also believe it is the last analysis necessary for porting instcombine, but of course I may yet discover more. llvm-svn: 226560
* [PM] Now that LoopInfo isn't in the Pass type hierarchy, it is muchChandler Carruth2015-01-181-26/+11
| | | | | | | | | | cleaner to derive from the generic base. This removes a ton of boilerplate code and somewhat strange and pointless indirections. It also removes a bunch of the previously needed friend declarations. To fully remove these, I also lifted the verify logic into the generic LoopInfoBase, which seems good anyways -- it is generic and useful logic even for the machine side. llvm-svn: 226385
* [PM] Cleanup more warnings my refactoring exposed where now we haveChandler Carruth2015-01-171-0/+2
| | | | | | | | | unused variables in a no-asserts build. I've fixed this by putting the entire loop behind an #ifndef as it contains nothing other than asserts. llvm-svn: 226377
* [PM] Split the LoopInfo object apart from the legacy pass, creatingChandler Carruth2015-01-1710-50/+56
| | | | | | | | | | a LoopInfoWrapperPass to wire the object up to the legacy pass manager. This switches all the clients of LoopInfo over and paves the way to port LoopInfo to the new pass manager. No functionality change is intended with this iteration. llvm-svn: 226373
* [PM] Port TargetLibraryInfo to the new pass manager, provided by theChandler Carruth2015-01-151-1/+22
| | | | | | | | | | | | | | | | | | | TargetLibraryAnalysis pass. There are actually no direct tests of this already in the tree. I've added the most basic test that the pass manager bits themselves work, and the TLI object produced will be tested by upcoming patches as they port passes which rely on TLI. This is starting to point out the awkwardness of the invalidate API -- it seems poorly fitting on the *result* object. I suspect I will change it to live on the analysis instead, but that's not for this change, and I'd rather have a few more passes ported in order to have more experience with how this plays out. I believe there is only one more analysis required in order to start porting instcombine. =] llvm-svn: 226160
* [PM] Separate the TargetLibraryInfo object from the immutable pass.Chandler Carruth2015-01-156-31/+44
| | | | | | | | | | | | | | The pass is really just a means of accessing a cached instance of the TargetLibraryInfo object, and this way we can re-use that object for the new pass manager as its result. Lots of delta, but nothing interesting happening here. This is the common pattern that is developing to allow analyses to live in both the old and new pass manager -- a wrapper pass in the old pass manager emulates the separation intrinsic to the new pass manager between the result and pass for analyses. llvm-svn: 226157
* Update libdeps since TLI was moved from Target to Analysis in r226078.NAKAMURA Takumi2015-01-151-1/+1
| | | | llvm-svn: 226126
* [PM] Move TargetLibraryInfo into the Analysis library.Chandler Carruth2015-01-159-7/+762
| | | | | | | | | | | | | | | | While the term "Target" is in the name, it doesn't really have to do with the LLVM Target library -- this isn't an abstraction which LLVM targets generally need to implement or extend. It has much more to do with modeling the various runtime libraries on different OSes and with different runtime environments. The "target" in this sense is the more general sense of a target of cross compilation. This is in preparation for porting this analysis to the new pass manager. No functionality changed, and updates inbound for Clang and Polly. llvm-svn: 226078
* For PR21145: recognise a builtin call to a known deallocation function even ifRichard Smith2015-01-151-1/+1
| | | | | | | | it's defined in the current module. Clang generates this situation for the C++14 sized deallocation functions, because it generates a weak definition in case one isn't provided by the C++ runtime library. llvm-svn: 226069
* [cleanup] Re-sort all the #include lines in LLVM usingChandler Carruth2015-01-148-11/+10
| | | | | | | | | | | utils/sort_includes.py. I clearly haven't done this in a while, so more changed than usual. This even uncovered a missing include from the InstrProf library that I've added. No functionality changed here, just mechanical cleanup of the include order. llvm-svn: 225974
* Revert r225854: [PM] Move the LazyCallGraph printing functionality toChandler Carruth2015-01-141-38/+36
| | | | | | | | | | | | | | | a print method. This was formulated on a bad idea, but sadly I didn't uncover how bad this was until I got further down the path. I had hoped that we could provide a low boilerplate way of printing analyses, but it just doesn't seem like this really fits the needs of the analyses. Not all analyses really want to do printing, and those that do don't all use the same interface. Instead, with the new pass manager let's just take advantage of the fact that creating an explicit printer pass like the LCG has is pretty low boilerplate already and rely on that for testing. llvm-svn: 225861
* [PM] Move the LazyCallGraph printing functionality to a print method.Chandler Carruth2015-01-131-36/+38
| | | | | | | | I'm adding generic analysis printing utility pass support which will require such a method (or a specialization) so this will let the existing printing logic satisfy that. llvm-svn: 225854
* [PM] Remove the defunct CGSCC-specific debug flag.Chandler Carruth2015-01-131-4/+0
| | | | | | | | Even before I sunk the debug flag into the opt tool this had been made obsolete by factoring the pass and analysis managers into a single set of templates that all used the core flag. No functionality changed here. llvm-svn: 225842
* [PM] Refactor the new pass manager to use a single template to implementChandler Carruth2015-01-131-32/+0
| | | | | | | | | | | | | | | | | | | | | | | the generic functionality of the pass managers themselves. In the new infrastructure, the pass "manager" isn't actually interesting at all. It just pipelines a single chunk of IR through N passes. We don't need to know anything about the IR or the passes to do this really and we can replace the 3 implementations of the exact same functionality with a single generic PassManager template, complementing the single generic AnalysisManager template. I've left typedefs in place to give convenient names to the various obvious instantiations of the template. With this, I think I've nuked almost all of the redundant logic in the managers, and I think the overall design is actually simpler for having single templates that clearly indicate there is no special logic here. The logging is made somewhat more annoying by this change, but I don't think the difference is worth having heavy-weight traits to help log things. llvm-svn: 225783