path: root/llvm/lib/Analysis
...
* [Loop Vectorizer] Consecutive memory access - fixed and simplified (Elena Demikhovsky, 2016-09-18; 1 file, -4/+4)
  Amended consecutive memory access detection in the Loop Vectorizer: loads and
  stores without a preceding GEP instruction were not handled properly.
  Differential Revision: https://reviews.llvm.org/D20789
  llvm-svn: 281853
* [InstCombine] allow vector types for constant folding / computeKnownBits (PR24942) (Sanjay Patel, 2016-09-16; 1 file, -3/+4)
  computeKnownBits() already works for integer vectors, so allow vector types
  when calling that from InstCombine. I don't think the change to use m_APInt
  in computeKnownBits is strictly necessary because we do check for
  ConstantVector later, but it's more efficient to handle the splat case
  without needing to loop on vector elements.
  This should work with InstSimplify, but doesn't yet, so I made that a FIXME
  comment on the test for PR24942: https://llvm.org/bugs/show_bug.cgi?id=24942
  Differential Revision: https://reviews.llvm.org/D24677
  llvm-svn: 281777
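
  As an illustration (a hypothetical case, not one of the patch's tests), with
  vector types allowed the known-bits query can prove a splat comparison:

      define <2 x i1> @known_bits_splat(<2 x i32> %x) {
        ; the lshr leaves only the top two bits, so every lane is < 4
        %shr = lshr <2 x i32> %x, <i32 30, i32 30>
        ; known bits on the vector lets this fold to <2 x i1> <i1 true, i1 true>
        %cmp = icmp ult <2 x i32> %shr, <i32 4, i32 4>
        ret <2 x i1> %cmp
      }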
* Reapplying r278731 after fixing the problem that caused it to be reverted. (David L Kreitzer, 2016-09-16; 1 file, -15/+100)
  Enhance SCEV to compute the trip count for some loops with unknown stride.
  Patch by Pankaj Chawla
  Differential Revision: https://reviews.llvm.org/D22377
  llvm-svn: 281732
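
  A sketch of the kind of loop this targets (hypothetical IR; the stride %s is
  only known positive through the wrap flags and loop guard, not as a constant,
  and whether SCEV computes a trip count for this exact shape depends on the
  conditions the patch requires):

      define void @unknown_stride(i32* %a, i32 %n, i32 %s) {
      entry:
        br label %loop
      loop:
        %i = phi i32 [ 0, %entry ], [ %i.next, %loop ]
        %p = getelementptr inbounds i32, i32* %a, i32 %i
        store i32 0, i32* %p
        %i.next = add nuw nsw i32 %i, %s   ; stride is not a compile-time constant
        %cmp = icmp slt i32 %i.next, %n
        br i1 %cmp, label %loop, label %exit
      exit:
        ret void
      }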
* [LCG] Redesign the lazy post-order iteration mechanism for the LazyCallGraph (Chandler Carruth, 2016-09-16; 1 file, -103/+127)
  Redesign the mechanism to support repeated, stable iterations, even in the
  face of graph updates. This is particularly important to allow the CGSCC pass
  manager to walk the RefSCCs (and thus everything else) in a module more than
  once. Lots of unittests and other tests were hard or impossible to write
  because repeated CGSCC pass managers which didn't invalidate the
  LazyCallGraph would conclude the module was empty after the first one. =[
  Really, really bad.

  The interesting thing is that in many ways this simplifies the code. We can
  now re-use the same code for handling reference edge insertion updates of the
  RefSCC graph as we use for handling call edge insertion updates of the SCC
  graph. Outside of adapting to the shared logic for this (which isn't trivial,
  but is *much* simpler than the DFS it replaces!), the new code involves
  putting newly created RefSCCs when deleting a reference edge into the cached
  list in the correct way, and re-formulating the iterator to be stable and
  effective even in the face of these kinds of updates.

  I've updated the unittests for the LazyCallGraph to re-iterate the postorder
  sequence and verify that this all works. We even check for using alternating
  iterators to trigger the lazy formation of RefSCCs after mutation has
  occurred.

  It's worth noting that there are a reasonable number of likely
  simplifications we can make past this. It isn't clear that we need to keep
  the "LeafRefSCCs" around any more. But I've not removed that mostly because I
  want this to be a more isolated change.

  Differential Revision: https://reviews.llvm.org/D24219
  llvm-svn: 281716
* [PM] Port CFGViewer and CFGPrinter to the new Pass Manager (Sriraman Tallam, 2016-09-15; 2 files, -50/+69)
  Differential Revision: https://reviews.llvm.org/D24592
  llvm-svn: 281640
* Add some shortcuts in LazyValueInfo to reduce compile time of Correlated Value Propagation. (Wei Mi, 2016-09-15; 1 file, -0/+29)
  The patch is to partially fix PR10584. Correlated Value Propagation queries
  LVI to check non-null for pointer params of each callsite. If we know the def
  of param is an alloca instruction, we know it is non-null and can return
  early from LVI. Similarly, CVP queries LVI to check whether pointer for each
  mem access is constant. If the def of the pointer is an alloca instruction,
  we know it is not a constant pointer. These shortcuts can reduce the cost of
  CVP significantly.
  Differential Revision: https://reviews.llvm.org/D18066
  llvm-svn: 281586
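
  For illustration (hypothetical IR, not from the patch), the non-null shortcut
  applies to a case like this:

      declare void @callee(i32*)

      define void @caller() {
        %p = alloca i32
        ; CVP asks LVI whether %p is non-null at this call site; because the
        ; def of %p is an alloca, LVI can answer immediately instead of doing
        ; the usual lazy block-by-block walk
        call void @callee(i32* %p)
        ret void
      }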
* Create a getelementptr instead of sub expr for ValueOffsetPair if the value is a pointer. (Wei Mi, 2016-09-14; 1 file, -3/+22)
  This patch is to fix PR30213. When expanding an expr based on
  ValueOffsetPair, if the value is of pointer type, we can only create a
  getelementptr instead of sub expr.
  Differential Revision: https://reviews.llvm.org/D24088
  llvm-svn: 281439
* [ConstantFold] Improve the bitcast folding logic for constant vectors. (Andrea Di Biagio, 2016-09-13; 1 file, -2/+13)
  The constant folder didn't know how to always fold bitcasts of constant
  integer vectors. In particular, it was unable to handle the case where a
  constant vector had some undef elements, and the resulting (i.e. bitcasted)
  vector type had more elements than the original vector type.

  Example:
    %cast = bitcast <2 x i64> <i64 undef, i64 2> to <4 x i32>
  On a little endian target, %cast could have been folded to:
    <4 x i32> <i32 undef, i32 undef, i32 2, i32 0>

  This patch improves the folding logic by teaching how to correctly propagate
  undef elements in the folded vector.
  Differential Revision: https://reviews.llvm.org/D24301
  llvm-svn: 281343
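
  Restating the example from the message as a small function (little-endian
  target assumed; the folded result is the one quoted above):

      define <4 x i32> @fold_bitcast() {
        %cast = bitcast <2 x i64> <i64 undef, i64 2> to <4 x i32>
        ; can now fold to: <4 x i32> <i32 undef, i32 undef, i32 2, i32 0>
        ret <4 x i32> %cast
      }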
* [LVI] Complete the abstraction of the cache layer [NFCI] (Philip Reames, 2016-09-12; 1 file, -72/+94)
  Convert the previously introduced is-a relationship between the LVICache and
  LVIImpl classes into a has-a relationship and hide all the implementation
  details of the cache from the lazy query layer.

  The only slightly concerning change here is removing the addition of a
  queried block into the SeenBlock set in LVIImpl::getBlockValue. As far as I
  can tell, this was effectively dead code. I think it *used* to be the case
  that getCachedValueInfo wasn't const and might end up inserting elements in
  the cache during lookup. That's no longer true and hasn't been for a while.
  I did fix up the const usage to make that more obvious.
  llvm-svn: 281272
* [LVI] Sink a couple more cache manipulation routines into the cache itself [NFCI] (Philip Reames, 2016-09-12; 1 file, -36/+45)
  The only interesting bit here is the refactor of the handle callback and even
  that's pretty straight-forward.
  llvm-svn: 281267
* [LVI] Abstract out the actual cache logic [NFCI] (Philip Reames, 2016-09-12; 1 file, -89/+97)
  Separate the caching logic from the implementation of the lazy analysis. For
  the moment, the lazy analysis impl has an is-a relationship with the cache;
  this will change to a has-a relationship shortly. This was done as two steps
  merely to keep the changes simple and the diff understandable.
  llvm-svn: 281266
* Add handling of !invariant.load to PropagateMetadata. (Justin Lebar, 2016-09-11; 1 file, -6/+6)
  Summary: This will let e.g. the load/store vectorizer propagate this metadata
  appropriately.
  Reviewers: arsenm
  Subscribers: tra, jholewinski, hfinkel, mzolotukhin
  Differential Revision: https://reviews.llvm.org/D23479
  llvm-svn: 281153
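
  A sketch (hypothetical IR, not from the patch) of the metadata being kept
  when adjacent loads are merged:

      declare void @use(i32, i32)

      define void @adjacent_loads(i32* %p) {
        %q = getelementptr i32, i32* %p, i64 1
        %a = load i32, i32* %p, align 8, !invariant.load !0
        %b = load i32, i32* %q, align 4, !invariant.load !0
        ; if the load/store vectorizer merges %a and %b into one <2 x i32>
        ; load, PropagateMetadata can now keep !invariant.load on the result
        call void @use(i32 %a, i32 %b)
        ret void
      }

      !0 = !{}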
* Do not widen load for different variable in GVN. (Dehao Chen, 2016-09-09; 1 file, -37/+1)
  Summary: Widening load in GVN is too early because it will block other
  optimizations like PRE, LICM. https://llvm.org/bugs/show_bug.cgi?id=29110

  The SPECCPU2006 benchmark impact of this patch:
  Reference: o2_nopatch
  (1): o2_patched

  Benchmark                          Base:Reference      (1)
  -------------------------------------------------------------
  spec/2006/fp/C++/444.namd               25.2          -0.08%
  spec/2006/fp/C++/447.dealII             45.92         +1.05%
  spec/2006/fp/C++/450.soplex             41.7          -0.26%
  spec/2006/fp/C++/453.povray             35.65         +1.68%
  spec/2006/fp/C/433.milc                 23.79         +0.42%
  spec/2006/fp/C/470.lbm                  41.88         -1.12%
  spec/2006/fp/C/482.sphinx3              47.94         +1.67%
  spec/2006/int/C++/471.omnetpp           22.46         -0.36%
  spec/2006/int/C++/473.astar             21.19         +0.24%
  spec/2006/int/C++/483.xalancbmk         36.09         -0.11%
  spec/2006/int/C/400.perlbench           33.28         +1.35%
  spec/2006/int/C/401.bzip2               22.76         -0.04%
  spec/2006/int/C/403.gcc                 32.36         +0.12%
  spec/2006/int/C/429.mcf                 41.04         -0.41%
  spec/2006/int/C/445.gobmk               26.94         +0.04%
  spec/2006/int/C/456.hmmer               24.5          -0.20%
  spec/2006/int/C/458.sjeng               28            -0.46%
  spec/2006/int/C/462.libquantum          55.25         +0.27%
  spec/2006/int/C/464.h264ref             45.87         +0.72%
  geometric mean                                        +0.23%

  For most benchmarks, it's a wash, but we do see stable improvements on some
  benchmarks, e.g. 447, 453, 482, 400.

  Reviewers: davidxl, hfinkel, dberlin, sanjoy, reames
  Subscribers: gberry, junbuml
  Differential Revision: https://reviews.llvm.org/D24096
  llvm-svn: 281074
* [LCG] Clean up and make NDEBUG verify calls more rigorous with make_scope_exit (Chandler Carruth, 2016-09-04; 1 file, -32/+38)
  Use make_scope_exit now that we have that utility. This makes the code much
  more clear and readable by isolating the check. It also makes it easy to go
  through and make sure all the interesting update routines have a start and
  end verify so we don't slowly let the graph drift into an invalid state.
  llvm-svn: 280619
* [LCG] A NFC refactoring to extract the logic for doing a postorder-sequence based update after edge insertion into a generic helper function. (Chandler Carruth, 2016-09-04; 1 file, -111/+184)
  This separates the SCC-specific logic into two fairly simple lambdas and
  extracts the rest into a generic helper template function. I think this is a
  net win on its own merits because it disentangles different pieces of the
  algorithm. Now there is one place that does the two-step partition to
  identify a set of newly connected components and at the same time update the
  postorder sequence.

  However, I'm also hoping to re-use this in an upcoming patch to update a
  cached post-order sequence of RefSCCs when doing the analogous update to the
  RefSCC graph, and I don't want to have two copies.

  The diff is quite messy but this really is just moving things around and
  making types generic rather than specific.
  llvm-svn: 280618
* Simplify code a bit. No functional change intended. (Andrea Di Biagio, 2016-09-02; 1 file, -15/+16)
  We don't need to call `GetCompareTy(LHS)' every single time true or false is
  returned from function SimplifyFCmpInst as suggested by Sanjay in review
  D24142.
  llvm-svn: 280491
* [instsimplify] Fix incorrect folding of an ordered fcmp with a vector of all NaN. (Andrea Di Biagio, 2016-09-02; 1 file, -1/+1)
  This patch fixes a crash caused by an incorrect folding of an ordered
  comparison between a packed floating point vector and a splat vector of NaN.

  An ordered comparison between a vector and a constant vector of NaN, should
  always be folded into a constant vector where each element is i1 false.

  Since revision 266175, SimplifyFCmpInst folds the ordered fcmp into a scalar
  'false'. Later on, this would cause an assertion failure, since the value
  type of the folded value doesn't match the expected value type of the uses
  of the original instruction: "Assertion failed: New->getType() == getType()
  && "replaceAllUses of value with new value of different type!".

  This patch fixes the issue and adds a test case to the already existing test
  InstSimplify/floating-point-compares.ll.
  Differential Revision: https://reviews.llvm.org/D24143
  llvm-svn: 280488
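
  An illustrative vector case (similar in spirit to the added test in
  InstSimplify/floating-point-compares.ll, but written from scratch here):

      define <2 x i1> @ord_with_nan_splat(<2 x float> %x) {
        ; an ordered compare against NaN is always false; the fold must
        ; produce <2 x i1> zeroinitializer, not a scalar i1 false
        %cmp = fcmp ord <2 x float> %x, <float 0x7FF8000000000000, float 0x7FF8000000000000>
        ret <2 x i1> %cmp
      }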
* [LoopInfo] Add verification by recomputation. (Michael Zolotukhin, 2016-08-31; 1 file, -3/+6)
  Summary: Current implementation of LI verifier isn't ideal and fails to
  detect some cases when LI is incorrect. For instance, it checks that all
  recorded loops are in a correct form, but it has no way to check if there
  are no more other (unrecorded in LI) loops in the function. This patch adds
  a way to detect such bugs.
  Reviewers: chandlerc, sanjoy, hfinkel
  Subscribers: llvm-commits, silvas, mzolotukhin
  Differential Revision: https://reviews.llvm.org/D23437
  llvm-svn: 280280
* Fix indent. NFC. (Chad Rosier, 2016-08-31; 1 file, -2/+2)
  llvm-svn: 280270
* s/static inline/static/ for headers I have changed in r279475. NFC. (Tim Shen, 2016-08-31; 1 file, -1/+1)
  llvm-svn: 280257
* [Loads] Properly populate the visited set in isDereferenceableAndAlignedPointer (David Majnemer, 2016-08-31; 1 file, -2/+5)
  There were paths where we wouldn't populate the visited set, causing us to
  recurse forever if an SSA variable was defined in terms of itself.
  This fixes PR30210.
  llvm-svn: 280191
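
  A reduced sketch of the failure mode (hypothetical, not the actual PR30210
  reproducer): in unreachable code an instruction may legally use itself, so
  without recording values in the visited set the walk up the pointer operand
  never terminates:

      define i8 @self_referential(i1 %c) {
      entry:
        ret i8 0

      dead:                ; unreachable block, so the self-use below is legal
        %p = getelementptr i8, i8* %p, i64 1
        %v = load i8, i8* %p, align 1
        br label %dead
      }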
* Fixup r279618, instantiate *AnalysisManagerProxy<*AnalysisManager,LazyCallGraph::SCC>, instead of *AnalysisManagerProxy<*AnalysisManager,LazyCallGraph::SCC,LazyCallGraph&>, for PassID. (NAKAMURA Takumi, 2016-08-30; 1 file, -2/+2)
  Otherwise they were not instantiated as expected:
    llvm::InnerAnalysisManagerProxy<llvm::AnalysisManager<llvm::Function>, llvm::LazyCallGraph::SCC>::PassID
    llvm::InnerAnalysisManagerProxy<llvm::AnalysisManager<llvm::Function>, llvm::LazyCallGraph::SCC>::PassID
  llvm-svn: 280105
* NFC: add early exit in ModuleSummaryAnalysis (Piotr Padlewski, 2016-08-30; 1 file, -29/+32)
  Summary: Changed this code because it was not very readable. The one
  question that came up after changing it is: should we count calls to
  intrinsics? We don't add them to the caller summary, so maybe we shouldn't
  count them either?
  Reviewers: tejohnson, eraman, mehdi_amini
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D23949
  llvm-svn: 280036
* Fix a thinko in r278189. (Easwaran Raman, 2016-08-29; 1 file, -1/+1)
  llvm-svn: 280008
* [Loop Vectorizer] Fixed memory conflict checks. (Elena Demikhovsky, 2016-08-28; 1 file, -3/+29)
  Fixed a bug in the run-time checks for possible memory conflicts inside a
  loop. The bug was in the Low <-> High boundary calculation: the High
  boundary should be calculated as "last memory access pointer + element
  size".
  Differential revision: https://reviews.llvm.org/D23176
  llvm-svn: 279930
* [Inliner] Report when inlining fails because callee's def is unavailable (Adam Nemet, 2016-08-26; 1 file, -10/+13)
  Summary: This is obviously an interesting case because it may motivate code
  restructuring or LTO.

  Reporting this requires instantiation of ORE in the loop where the call
  sites are first gathered. I've checked compile-time overhead *with*
  -Rpass-with-hotness and the worst slow-down was 6% in mcf and quickly
  tailing off. As before, without -Rpass-with-hotness there is no overhead.

  Because this could be a pretty noisy diagnostic, it is currently qualified
  as 'verbose'. As of this patch, 'verbose' diagnostics are only emitted with
  -Rpass-with-hotness, i.e. when the output is expected to be filtered.

  Reviewers: eraman, chandlerc, davidxl, hfinkel
  Subscribers: tejohnson, Prazek, davide, llvm-commits
  Differential Revision: https://reviews.llvm.org/D23415
  llvm-svn: 279860
* Limit the number of instructions per block examined by dead store elimination (Bob Haarman, 2016-08-26; 1 file, -6/+17)
  Summary: Dead store elimination gets very expensive when large numbers of
  instructions need to be analyzed. This patch limits the number of
  instructions analyzed per store to the value of the memdep-block-scan-limit
  parameter (which defaults to 100). This resulted in no observed difference
  in performance of the generated code, and no change in the statistics for
  the dead store elimination pass, but improved compilation time on some files
  by more than an order of magnitude.
  Reviewers: dexonsmith, bruno, george.burgess.iv, dberlin, reames, davidxl
  Subscribers: davide, chandlerc, dberlin, davidxl, eraman, tejohnson, mbodart, llvm-commits
  Differential Revision: https://reviews.llvm.org/D15537
  llvm-svn: 279833
* Update a comment. (George Burgess IV, 2016-08-25; 1 file, -3/+2)
  r279696, which changed `LLVM_CONSTEXPR AliasAttr` to `const AliasAttr`, made
  this comment make less sense.
  llvm-svn: 279699
* Make some LLVM_CONSTEXPR variables const. NFC. (George Burgess IV, 2016-08-25; 4 files, -21/+18)
  This patch changes LLVM_CONSTEXPR variable declarations to const variable
  declarations, since LLVM_CONSTEXPR expands to nothing if the current
  compiler doesn't support constexpr. In all of the changed cases, it looks
  like the code intended the variable to be const instead of
  sometimes-constexpr sometimes-not.
  llvm-svn: 279696
* Fix some Clang-tidy modernize-use-using and Include What You Use warnings; other minor fixes. (Eugene Zelenko, 2016-08-25; 2 files, -11/+34)
  Differential revision: https://reviews.llvm.org/D23861
  llvm-svn: 279695
* The patch improves ValueTracking on left shift with nsw flag. (Evgeny Stupachenko, 2016-08-24; 1 file, -5/+23)
  Summary: The patch fixes PR28946.
  Reviewers: majnemer, sanjoy
  Differential Revision: http://reviews.llvm.org/D23296
  From: Li Huang
  llvm-svn: 279684
* [PM] Introduce basic update capabilities to the new PM's CGSCC pass manager, including both plumbing and logic to handle function pass updates. (Chandler Carruth, 2016-08-24; 2 files, -28/+361)
  There are three fundamentally tied changes here:
  1) Plumbing *some* mechanism for updating the CGSCC pass manager as the CG
     changes while passes are running.
  2) Changing the CGSCC pass manager infrastructure to have support for the
     underlying graph to mutate mid-pass run.
  3) Actually updating the CG after function passes run.

  I can separate them if necessary, but I think it's really useful to have
  them together as the needs of #3 drove #2, and that in turn drove #1.

  The plumbing technique is to extend the "run" method signature with extra
  arguments. We provide the call graph that intrinsically is available as it
  is the basis of the pass manager's IR units, and an output parameter that
  records the results of updating the call graph during an SCC pass's run.
  Note that "...UpdateResult" isn't a *great* name here... suggestions very
  welcome.

  I tried a pretty frustrating number of different data structures and such
  for the innards of the update result. Every other one failed for one reason
  or another. Sometimes I just couldn't keep the layers of complexity right in
  my head. The thing that really worked was to just directly provide access to
  the underlying structures used to walk the call graph so that their updates
  could be informed by the *particular* nature of the change to the graph.

  The technique for how to make the pass management infrastructure cope with
  mutating graphs was also something that took a really, really large number
  of iterations to get to a place where I was happy. Here are some of the
  considerations that drove the design:

  - We operate at three levels within the infrastructure: RefSCC, SCC, and
    Node. In each case, we are working bottom up and so we want to continue to
    iterate on the "lowest" node as the graph changes. Look at how we iterate
    over nodes in an SCC running function passes as those function passes
    mutate the CG. We continue to iterate on the "lowest" SCC, which is the
    one that continues to contain the function just processed.

  - The call graph structure re-uses SCCs (and RefSCCs) during mutation events
    for the *highest* entry in the resulting new subgraph, not the lowest.
    This means that it is necessary to continually update the current SCC or
    RefSCC as it shifts. This is really surprising and subtle, and took a long
    time for me to work out. I actually tried changing the call graph to
    provide the opposite behavior, and it breaks *EVERYTHING*. The graph
    update algorithms are really deeply tied to this particular pattern.

  - When SCCs or RefSCCs are split apart and refined and we continually re-pin
    our processing to the bottom one in the subgraph, we need to enqueue the
    newly formed SCCs and RefSCCs for subsequent processing. Queuing them
    presents a few challenges:
    1) SCCs and RefSCCs use wildly different iteration strategies at a high
       level. We end up needing to converge them on worklist approaches that
       can be extended in order to be able to handle the mutations.
    2) The order of the enqueuing needs to remain bottom-up post-order so that
       we don't get surprising order of visitation for things like the
       inliner.
    3) We need the worklists to have set semantics so we don't duplicate
       things endlessly. We don't need a *persistent* set though because we
       always keep processing the bottom node!!!! This is super, super
       surprising to me and took a long time to convince myself this is
       correct, but I'm pretty sure it is... Once we sink down to the bottom
       node, we can't re-split out the same node in any way, and the postorder
       of the current queue is fixed and unchanging.
    4) We need to make sure that the "current" SCC or RefSCC actually gets
       enqueued here such that we re-visit it because we continue processing a
       *new*, *bottom* SCC/RefSCC.

  - We also need the ability to *skip* SCCs and RefSCCs that get merged into a
    larger component. We even need the ability to skip *nodes* from an SCC
    that are no longer part of that SCC.

  This led to the design you see in the patch which uses SetVector-based
  worklists. The RefSCC worklist is always empty until an update occurs and is
  just used to handle those RefSCCs created by updates as the others don't
  even exist yet and are formed on-demand during the bottom-up walk. The SCC
  worklist is pre-populated from the RefSCC, and we push new SCCs onto it and
  blacklist existing SCCs on it to get the desired processing. We then
  *directly* update these when updating the call graph as I was never able to
  find a satisfactory abstraction around the update strategy.

  Finally, we need to compute the updates for function passes. This is mostly
  used as an initial customer of all the update mechanisms to drive their
  design to at least cover some real set of use cases. There are a bunch of
  interesting things that came out of doing this:

  - It is really nice to do this a function at a time because that function is
    likely hot in the cache. This means we want even the function pass adaptor
    to support online updates to the call graph!

  - To update the call graph after arbitrary function pass mutations is quite
    hard. We have to build a fairly comprehensive set of data structures and
    then process them. Fortunately, some of this code is related to the code
    for building the call graph in the first place. Unfortunately, very little
    of it makes any sense to share because the nature of what we're doing is
    so very different. I've factored out the one part that made sense at
    least.

  - We need to transfer these updates into the various structures for the
    CGSCC pass manager. Once those were more sanely worked out, this became
    relatively easier. But some of those needs necessitated changes to the
    LazyCallGraph interface to make it significantly easier to extract the
    changed SCCs from an update operation.

  - We also need to update the CGSCC analysis manager as the shape of the
    graph changes. When an SCC is merged away we need to clear analyses
    associated with it from the analysis manager, which we didn't have support
    for in the analysis manager infrastructure. New SCCs are easy! But then we
    have the case that the original SCC has its shape changed but remains in
    the call graph. There we need to *invalidate* the analyses associated with
    it.

  - We also need to invalidate analyses after we *finish* processing an SCC.
    But the analyses we need to invalidate here are *only those for the newly
    updated SCC*!!! Because we only continue processing the bottom SCC, if we
    split SCCs apart the original one gets invalidated once when its shape
    changes and is not processed farther so its analyses will be correct. It
    is the bottom SCC which continues being processed and needs to have the
    "normal" invalidation done based on the preserved analyses set.

  All of this is mostly background and context for the changes here.

  Many thanks to all the reviewers who helped here. Especially Sanjoy who
  caught several interesting bugs in the graph algorithms, David, Sean, and
  others who all helped with feedback.

  Differential Revision: http://reviews.llvm.org/D21464
  llvm-svn: 279618
* [ValueTracking] Use a function_ref to avoid multiple instantiations (David Majnemer, 2016-08-23; 1 file, -5/+5)
  No functional change intended, this should just be a code size improvement.
  llvm-svn: 279563
* [InstSimplify] allow icmp with constant folds for splat vectors, part 2 (Sanjay Patel, 2016-08-23; 1 file, -83/+77)
  Completes the m_APInt changes for simplifyICmpWithConstant().
  Other commits in this series:
  https://reviews.llvm.org/rL279492
  https://reviews.llvm.org/rL279530
  https://reviews.llvm.org/rL279534
  https://reviews.llvm.org/rL279538
  llvm-svn: 279543
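
  One example of the kind of splat-vector fold this enables (hypothetical, not
  necessarily one of the commit's tests):

      define <2 x i1> @ult_zero_splat(<2 x i8> %x) {
        ; nothing is unsigned-less-than zero, so with the m_APInt splat
        ; handling this can fold to <2 x i1> zeroinitializer
        %cmp = icmp ult <2 x i8> %x, zeroinitializer
        ret <2 x i1> %cmp
      }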
* [InstSimplify] allow icmp with constant folds for splat vectors, part 1 (Sanjay Patel, 2016-08-23; 1 file, -6/+10)
  llvm-svn: 279538
* [InstSimplify] add helper function for SimplifyICmpInst(); NFCI (Sanjay Patel, 2016-08-22; 1 file, -133/+143)
  And add a FIXME because the helper excludes folds for vectors. It's not
  clear yet how many of these are actually testable (and therefore necessary?)
  because later analysis uses computeKnownBits and other methods to catch many
  of these cases.
  llvm-svn: 279492
* [GraphTraits] Replace all NodeType usage with NodeRef (Tim Shen, 2016-08-22; 2 files, -12/+6)
  This should finish the GraphTraits migration.
  Differential Revision: http://reviews.llvm.org/D23730
  llvm-svn: 279475
* Revert -r278267 [ValueTracking] An improvement to IR ValueTracking on Non-negative Integers (Artur Pilipenko, 2016-08-22; 1 file, -37/+1)
  This change caused a performance regression on
  MultiSource/Benchmarks/TSVC/Symbolics-flt/Symbolics-flt from LNT and some
  other benchmarks. See https://reviews.llvm.org/D18777 for details.
  llvm-svn: 279433
* [GraphTraits] Make nodes_iterator dereference to NodeType*/NodeRef (Tim Shen, 2016-08-19; 1 file, -3/+3)
  Currently nodes_iterator may dereference to a NodeType* or a NodeType&. Make
  them all dereference to NodeType*, which is NodeRef later.
  Differential Revision: https://reviews.llvm.org/D23704
  Differential Revision: https://reviews.llvm.org/D23705
  llvm-svn: 279326
* [AliasSetTracker] Degrade AliasSetTracker when may-alias sets get too large. (Michael Kuperstein, 2016-08-19; 1 file, -9/+116)
  Repeated inserts into AliasSetTracker have quadratic behavior - inserting a
  pointer into AST is linear, since it requires walking over all "may" alias
  sets and running an alias check vs. every pointer in the set. We can avoid
  this by tracking the total number of pointers in "may" sets, and when that
  number exceeds a threshold, declare the tracker "saturated". This lumps all
  pointers into a single "may" set that aliases every other pointer.

  (This is a stop-gap solution until we migrate to MemorySSA)

  This fixes PR28832.
  Differential Revision: https://reviews.llvm.org/D23432
  llvm-svn: 279274
* [PM] Rework the new PM support for building the ModuleSummaryIndex to directly produce the index as the value type result. (Chandler Carruth, 2016-08-19; 1 file, -33/+30)
  This requires making the index movable, which is straightforward. It greatly
  simplifies things by allowing us to completely avoid the builder API and the
  layers of abstraction inherent there. Instead both pass managers can
  directly construct these when run by value. They still won't be constructed
  truly eagerly thanks to the optional in the legacy PM. The code that
  directly builds the index can also just share a direct function.

  A notable change here is that the result type of the analysis for the new PM
  is no longer a reference type. This was really problematic when making
  changes to how we handle result types to make our interface requirements
  *much* more strict and precise. But I think this is an overall improvement.

  Differential Revision: https://reviews.llvm.org/D23701
  llvm-svn: 279216
* [Assumptions] Make collecting ephemeral values not quadratic in the number of assume intrinsics. (Chandler Carruth, 2016-08-18; 1 file, -23/+38)
  The classical way to have a cache-friendly vector style container when we
  need queue semantics for BFS instead of stack semantics for DFS is to use an
  ever-growing vector and an index. Erasing from the front requires O(size)
  work, and unless we expect the worklist to grow *very* large, it's probably
  cheaper to just grow and race down the list.

  But that makes it even worse that we're putting the assume intrinsics in
  this list at all. We end up looking at the (by definition empty) use list to
  see if they're ephemeral (when we've already put them in that set), etc.
  Instead, directly populate the worklist with the operands when we mark the
  assume intrinsics as ephemeral. Also, test the visited set *before* putting
  things into the worklist so we don't accumulate the same value in the list
  100s of times.

  It would be nice to use a set-vector for this but I think it's useful to
  test the set earlier to avoid repeatedly querying whether the same
  instruction is safe to speculate.

  Hopefully with these changes the number of values pushed onto the worklist
  is smaller, and we avoid quadratic work by letting it grow as necessary.

  Differential Revision: https://reviews.llvm.org/D23396
  llvm-svn: 279099
* SCEV: Don't assert about non-SCEV-able value in isSCEVExprNeverPoison() (PR28932) (Hans Wennborg, 2016-08-17; 1 file, -0/+4)
  Differential Revision: https://reviews.llvm.org/D23594
  llvm-svn: 278999
* Replace a few more "fall through" comments with LLVM_FALLTHROUGH (Justin Bogner, 2016-08-17; 5 files, -9/+13)
  Follow up to r278902. I had missed "fall through", with a space.
  llvm-svn: 278970
* [GraphWriter] Change GraphWriter to use NodeRef in GraphTraits (Tim Shen, 2016-08-17; 1 file, -0/+1)
  Summary: This is part of the "NodeType* -> NodeRef" migration. Notice that
  since GraphWriter prints object address as identity, I added a static_assert
  on NodeRef to be a pointer type.
  Reviewers: dblaikie
  Subscribers: llvm-commits, MatzeB
  Differential Revision: https://reviews.llvm.org/D23580
  llvm-svn: 278966
* [LoopStrengthReduce] Refactoring and addition of a new target cost function. (Jonas Paulsson, 2016-08-17; 1 file, -0/+5)
  Refactored so that a LSRUse owns its fixups, as opposed to letting the
  LSRInstance own them. This makes it easier to rate formulas for LSRUses,
  since the fixups are available directly. The Offsets vector has been removed
  since it was no longer necessary.

  New target hook isFoldableMemAccessOffset(), which is used during formula
  rating. For SystemZ, this is useful to express that loads and stores with
  float or vector types with a big/negative offset should be avoided in loops.
  Without this, LSR will generate a lot of negative offsets that would require
  extra instructions for loading the address.

  Updated tests: test/CodeGen/SystemZ/loop-01.ll
  Reviewed by: Quentin Colombet and Ulrich Weigand.
  https://reviews.llvm.org/D19152
  llvm-svn: 278927
* Replace "fallthrough" comments with LLVM_FALLTHROUGH (Justin Bogner, 2016-08-17; 2 files, -9/+9)
  This is a mechanical change of comments in switches like fallthrough,
  fall-through, or fall-thru to use the LLVM_FALLTHROUGH macro instead.
  llvm-svn: 278902
* ObjCARC: Don't increment or dereference end() when scanning args (Duncan P. N. Exon Smith, 2016-08-17; 1 file, -33/+37)
  When there's only one argument and it doesn't match one of the known
  functions, return ARCInstKind::CallOrUser rather than falling through to the
  two argument case. The old behaviour both incremented past and dereferenced
  end().
  llvm-svn: 278881
* Revert "Enhance SCEV to compute the trip count for some loops with unknown stride." (Reid Kleckner, 2016-08-16; 1 file, -77/+4)
  This reverts commit r278731. It caused http://crbug.com/638314
  llvm-svn: 278853
* [InstSimplify] Fold gep (gep V, C), (xor V, -1) to C-1 (David Majnemer, 2016-08-16; 1 file, -1/+7)
  llvm-svn: 278779