path: root/llvm/lib/Analysis/LazyCallGraph.cpp
Commit message | Author | Age | Files | Lines
* Revert "[VectorUtils] Introduce the Vector Function Database (VFDatabase)."Francesco Petrogalli2019-12-131-6/+2
| | | | | | | | | This reverts commit 0be81968a283fd4161cb9ac9748d5ed200926292. The VFDatabase needs some rework to be able to handle vectorization and subsequent scalarization of intrinsics in out-of-tree versions of the compiler. For more details, see the discussion in https://reviews.llvm.org/D67572.
* [VectorUtils] Introduce the Vector Function Database (VFDatabase). (Francesco Petrogalli, 2019-12-10, 1 file, -2/+6)
This patch introduces the VFDatabase, the framework proposed in http://lists.llvm.org/pipermail/llvm-dev/2019-June/133484.html. [*]

In this patch the VFDatabase is used to bridge the TargetLibraryInfo (TLI) calls that were previously used to query for the availability of vector counterparts of scalar functions.

The VFISAKind field `ISA` of VFShape has been moved into VFInfo, under the assumption that different vector ISAs may provide the same vector signature. At the moment, the vectorizer accepts any of the available ISAs as long as the signature provided by the VFDatabase matches the one expected in the vectorization process. For example, when targeting AVX or AVX2, which both have 256-bit registers, the IR signature of the two vector functions associated with the two ISAs is the same. The `getVectorizedFunction` method at the moment returns the first available match. We will need to add more heuristics to the search system to decide which of the available versions (TLI, AVX, AVX2, ...) the system should prefer, when multiple versions with the same VFShape are present.

Some of the code in this patch is based on the work done by Sumedh Arani in https://reviews.llvm.org/D66025.

[*] Notice that in the proposal the VFDatabase was called SVFS. The name VFDatabase is more in line with LLVM recommendations for naming classes and variables.

Differential Revision: https://reviews.llvm.org/D67572
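As context for the entry above, a rough sketch of how a client such as the vectorizer might query such a database for a scalar call site. The names (VFDatabase, VFInfo, getMappings) are taken from the commit message; the exact signatures are assumptions rather than verified API, and the patch itself was reverted in the newer commit above.

    #include "llvm/Analysis/VectorUtils.h"
    #include "llvm/IR/Instructions.h"

    using namespace llvm;

    // Hedged sketch: ask the (assumed) VFDatabase which vector variants it
    // knows about for a given scalar call site. Each VFInfo pairs a vector
    // signature (VFShape) with the ISA that provides it; a real client would
    // pick the best match among them.
    static bool hasVectorVariant(CallInst &CI) {
      VFDatabase VFDB(CI); // per-call-site database
      return !VFDB.getMappings(CI).empty();
    }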
* Second attempt to add iterator_range::empty() (Jordan Rose, 2019-10-07, 1 file, -1/+1)
Doing this makes MSVC complain that `empty(someRange)` could refer to either C++17's std::empty or LLVM's llvm::empty, which previously we avoided via SFINAE because std::empty is defined in terms of an empty member rather than begin and end. So, switch callers over to the new method as it is added.

https://reviews.llvm.org/D68439

llvm-svn: 373935
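A small sketch of the member this commit adds, as of this point in the tree; the helper function is illustrative:

    #include "llvm/ADT/iterator_range.h"
    #include <vector>

    // Prefer the member empty() on an iterator_range; the free-standing
    // spelling empty(R) is what MSVC reports as ambiguous with std::empty.
    static bool hasElements(const std::vector<int> &V) {
      auto R = llvm::make_range(V.begin(), V.end());
      return !R.empty(); // iterator_range::empty(), added by this commit
    }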
* Change TargetLibraryInfo analysis passes to always require Function (Teresa Johnson, 2019-09-07, 1 file, -2/+3)
Summary:
This is the first change to enable the TLI to be built per-function so that -fno-builtin* handling can be migrated to use function attributes. See discussion on D61634 for background. This is an enabler for fixing handling of these options for LTO, for example.

This change should not affect behavior, as the provided function is not yet used to build a specifically per-function TLI, but rather enables that migration.

Most of the changes were very mechanical, e.g. passing a Function to the legacy analysis pass's getTLI interface, or in Module level cases, adding a callback. This is similar to the way the per-function TTI analysis works.

There was one place where we were looking for builtins but not in the context of a specific function. See FindCXAAtExit in lib/Transforms/IPO/GlobalOpt.cpp. I'm somewhat concerned my workaround could provide the wrong behavior in some corner cases. Suggestions welcome.

Reviewers: chandlerc, hfinkel

Subscribers: arsenm, dschuff, jvesely, nhaehnle, mehdi_amini, javed.absar, sbc100, jgravelle-google, eraman, aheejin, steven_wu, george.burgess.iv, dexonsmith, jfb, asbirlea, gchatelet, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D66428

llvm-svn: 371284
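A sketch of the per-function usage this change enables in a new-pass-manager function pass; the pass itself is illustrative, while the analysis accessors are the standard new-PM API:

    #include "llvm/Analysis/TargetLibraryInfo.h"
    #include "llvm/IR/Function.h"
    #include "llvm/IR/InstIterator.h"
    #include "llvm/IR/InstrTypes.h"
    #include "llvm/IR/PassManager.h"

    using namespace llvm;

    // Hypothetical pass asking for a TargetLibraryInfo scoped to the function
    // being visited (the per-function form this change works toward).
    struct CountLibCallsPass : PassInfoMixin<CountLibCallsPass> {
      PreservedAnalyses run(Function &F, FunctionAnalysisManager &AM) {
        TargetLibraryInfo &TLI = AM.getResult<TargetLibraryAnalysis>(F);
        unsigned NumLibCalls = 0;
        for (Instruction &I : instructions(F))
          if (auto *CB = dyn_cast<CallBase>(&I))
            if (Function *Callee = CB->getCalledFunction()) {
              LibFunc LF;
              if (TLI.getLibFunc(*Callee, LF) && TLI.has(LF))
                ++NumLibCalls; // recognized, available library function
            }
        (void)NumLibCalls;
        return PreservedAnalyses::all();
      }
    };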
* Revert "[CallGraph] Refine call graph for indirect calls with !callees metadata"Benjamin Kramer2019-08-161-4/+2
| | | | | | | This reverts commit r369025. Crashes clang, test case is on the mailing list. llvm-svn: 369096
* [CallGraph] Refine call graph for indirect calls with !callees metadata (Mark Lacey, 2019-08-15, 1 file, -2/+4)
For indirect call sites having a small set of possible callees, !callees metadata can be used to indicate what those callees are. This patch updates the call graph and lazy call graph analyses so that they consider this metadata when encountering call sites.

For the call graph, it adds a new external call graph node to the graph for each unique !callees metadata node. A call graph edge connects an indirect call site with the external node associated with the !callees metadata that is attached to it. And there is an edge from this external node to each of the callees indicated by the metadata.

Similarly, for the lazy call graph, the patch adds Ref edges from a caller to the possible callees indicated by the metadata.

The primary purpose of the patch is to facilitate iterating over the functions in a module such that all of the callees indicated by a given !callees metadata node will be visited prior to the functions containing call sites annotated by that node. This property is required by optimizations performing a bottom-up traversal of the SCC DAG. For example, the inliner can be made to inline through an indirect call. If the call site is annotated with !callees metadata, this patch ensures that the inliner will have visited all of the callees prior to the caller, allowing it to reliably compute the cost of inlining one or more of the potential callees.

Original patch by @mssimpso. I've made some small changes to get it to apply, build, and pass tests on the top of tree, as well as some minor tweaks to formatting and functionality.

Subscribers: mehdi_amini, hiraditya, llvm-commits, mssimpso

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D39339

llvm-svn: 369025
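A hedged sketch of how a client could read the possible callees off an annotated call site; the helper below is hypothetical and not part of this patch:

    #include "llvm/ADT/SmallVector.h"
    #include "llvm/IR/Function.h"
    #include "llvm/IR/InstrTypes.h"
    #include "llvm/IR/Metadata.h"

    using namespace llvm;

    // Hypothetical helper: collect the functions named by !callees metadata
    // attached to an indirect call site, if any.
    static SmallVector<Function *, 4> possibleCallees(const CallBase &CB) {
      SmallVector<Function *, 4> Callees;
      if (MDNode *MD = CB.getMetadata("callees"))
        for (const MDOperand &Op : MD->operands())
          if (Function *F = mdconst::dyn_extract_or_null<Function>(Op))
            Callees.push_back(F);
      return Callees;
    }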
* Change two unnecessary uses of llvm::size(C) to C.size() (Fangrui Song, 2019-08-06, 1 file, -4/+2)
llvm-svn: 368011
* [LCG] Add aliased functions as LCG roots (Guozhi Wei, 2019-04-05, 1 file, -0/+13)
The current LCG doesn't consider aliased functions, so if an internal function has a public alias it is not added to the call graph SCC even though it is still reachable from outside through the alias. This patch adds aliased functions to the SCC.

Differential Revision: https://reviews.llvm.org/D59898

llvm-svn: 357795
* Update the file headers across all of the LLVM projects in the monorepo (Chandler Carruth, 2019-01-19, 1 file, -4/+3)
to reflect the new license.

We understand that people may be surprised that we're moving the header entirely to discuss the new license. We checked this carefully with the Foundation's lawyer and we believe this is the correct approach.

Essentially, all code in the project is now made available by the LLVM project under our new license, so you will see that the license headers include that license only. Some of our contributors have contributed code under our old license, and accordingly, we have retained a copy of our old license notice in the top-level files in each project and repository.

llvm-svn: 351636
* ADT/STLExtras: Introduce llvm::empty; NFC (Matthias Braun, 2018-10-31, 1 file, -1/+1)
This is modeled after C++17 std::empty().

Differential Revision: https://reviews.llvm.org/D53909

llvm-svn: 345679
* [STLExtras] Add size() for ranges, and remove distance() (Vedant Kumar, 2018-05-16, 1 file, -3/+3)
r332057 introduced distance() for ranges. Based on post-commit feedback, this renames distance() to size(). The new size() is also only enabled when the operation is O(1).

Differential Revision: https://reviews.llvm.org/D46976

llvm-svn: 332551
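A small sketch of the O(1) constraint being described: llvm::size() only accepts ranges with random-access iterators, such as a block's successor range. The predicate itself is illustrative:

    #include "llvm/ADT/STLExtras.h"
    #include "llvm/IR/BasicBlock.h"
    #include "llvm/IR/CFG.h"

    using namespace llvm;

    // successors() yields a random-access range, so llvm::size() is O(1) here.
    static bool hasMultipleSuccessors(const BasicBlock &BB) {
      return llvm::size(successors(&BB)) > 1;
    }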
* Rename DEBUG macro to LLVM_DEBUG. (Nicola Zaghen, 2018-05-14, 1 file, -9/+10)
The DEBUG() macro is very generic so it might clash with other projects. The renaming was done as follows:
- git grep -l 'DEBUG' | xargs sed -i 's/\bDEBUG\s\?(/LLVM_DEBUG(/g'
- git diff -U0 master | ../clang/tools/clang-format/clang-format-diff.py -i -p1 -style LLVM
- Manual change to APInt
- Manually change the docs, as the regex doesn't match them.

In the transition period the DEBUG() macro is still present and aliased to the LLVM_DEBUG() one.

Differential Revision: https://reviews.llvm.org/D43624

llvm-svn: 332240
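A minimal sketch of the renamed macro in use; the DEBUG_TYPE string and helper are illustrative:

    #include "llvm/IR/Function.h"
    #include "llvm/Support/Debug.h"
    #include "llvm/Support/raw_ostream.h"

    #define DEBUG_TYPE "my-pass"

    // The body is compiled out in release builds and gated at run time by
    // -debug / -debug-only=my-pass in asserts builds.
    static void reportVisit(const llvm::Function &F) {
      LLVM_DEBUG(llvm::dbgs() << "visiting " << F.getName() << "\n");
    }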
* [STLExtras] Add distance() for ranges, pred_size(), and succ_size() (Vedant Kumar, 2018-05-10, 1 file, -4/+3)
This commit adds a wrapper for std::distance() which works with ranges. As it would be a common case to write `distance(predecessors(BB))`, this also introduces `pred_size()` and `succ_size()` helpers to make that easier to write.

Differential Revision: https://reviews.llvm.org/D46668

llvm-svn: 332057
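A small sketch using the new helpers; the predicate is illustrative:

    #include "llvm/IR/BasicBlock.h"
    #include "llvm/IR/CFG.h"

    using namespace llvm;

    // pred_size(BB) reads more directly than
    // std::distance(pred_begin(BB), pred_end(BB)).
    static bool isStraightLineBlock(const BasicBlock &BB) {
      return pred_size(&BB) <= 1 && succ_size(&BB) <= 1;
    }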
* IWYU for llvm-config.h in llvm, additions. (Nico Weber, 2018-04-30, 1 file, -0/+1)
See r331124 for how I made a list of files missing the include. I then ran this Python script:

    for f in open('filelist.txt'):
        f = f.strip()
        fl = open(f).readlines()
        found = False
        for i in xrange(len(fl)):
            p = '#include "llvm/'
            if not fl[i].startswith(p):
                continue
            if fl[i][len(p):] > 'Config':
                fl.insert(i, '#include "llvm/Config/llvm-config.h"\n')
                found = True
                break
        if not found:
            print 'not found', f
        else:
            open(f, 'w').write(''.join(fl))

and then looked through everything with `svn diff | diffstat -l | xargs -n 1000 gvim -p` and tried to fix include ordering and whatnot.

No intended behavior change.

llvm-svn: 331184
* Fix more spelling mistakes in comments of LLVM Analysis passes (Vedant Kumar, 2018-03-02, 1 file, -5/+5)
Patch by Reshabh Sharma!

Differential Revision: https://reviews.llvm.org/D43939

llvm-svn: 326601
* Fix typos of occurred and occurrence (Malcolm Parsons, 2018-01-24, 1 file, -1/+1)
llvm-svn: 323318
* Reverting r315590; it did not include changes for llvm-tblgen, which is causing link errors for several people. (Aaron Ballman, 2017-10-15, 1 file, -3/+3)
Error LNK2019 unresolved external symbol "public: void __cdecl `anonymous namespace'::MatchableInfo::dump(void)const " (?dump@MatchableInfo@?A0xf4f1c304@@QEBAXXZ) referenced in function "public: void __cdecl `anonymous namespace'::AsmMatcherEmitter::run(class llvm::raw_ostream &)" (?run@AsmMatcherEmitter@?A0xf4f1c304@@QEAAXAEAVraw_ostream@llvm@@@Z) llvm-tblgen D:\llvm\2017\utils\TableGen\AsmMatcherEmitter.obj 1

llvm-svn: 315854
* [dump] Remove NDEBUG from test to enable dump methods [NFC] (Don Hinton, 2017-10-12, 1 file, -3/+3)
Summary:
Add LLVM_FORCE_ENABLE_DUMP cmake option, and use it along with LLVM_ENABLE_ASSERTIONS to set LLVM_ENABLE_DUMP.

Remove NDEBUG and only use LLVM_ENABLE_DUMP to enable dump methods.

Move definition of LLVM_ENABLE_DUMP from config.h to llvm-config.h so it'll be picked up by public headers.

Differential Revision: https://reviews.llvm.org/D38406

llvm-svn: 315590
* [Analysis] Fix some Clang-tidy modernize-use-using and Include What You Use warnings; other minor fixes (NFC). (Eugene Zelenko, 2017-08-11, 1 file, -6/+20)
llvm-svn: 310766
* [LCG] Fix an assert in an on-scope-exit lambda that checked the contents (Chandler Carruth, 2017-08-10, 1 file, -7/+9)
of the returned value.

Checking the returned value from inside of a scoped exit isn't actually valid. It happens to work when NRVO fires and the stars align, which they reliably do with Clang but don't, for example, on MSVC builds.

llvm-svn: 310547
* [LCG] Completely remove the map-based association of post-order numbers (Chandler Carruth, 2017-08-09, 1 file, -35/+34)
to Nodes when removing ref edges from a RefSCC.

This map-based association turns out to be pretty expensive for large RefSCCs and pointless, as we already have embedded data members inside nodes that we use to track the DFS state. We can reuse one of those and the map becomes unnecessary.

This also fuses the update of those numbers into the scan across the pending stack of nodes so that we don't walk the nodes twice during the DFS.

With this I expect the new PM to be faster than the old PM for the test case I have been optimizing. That said, it also seems simpler and more direct in many ways. The side storage was always pretty awkward.

The last remaining hot-spot in the profile of the LCG once this is done will be the edge iterator walk in the DFS. I'll take a look at improving that next.

llvm-svn: 310456
* [LCG] Special case when removing a ref edge from a RefSCC leaves (Chandler Carruth, 2017-08-09, 1 file, -12/+29)
that RefSCC still connected.

This is common and can be handled much more efficiently. As soon as we know we've covered every node in the RefSCC with the DFS, we can simply reset our state and return. This avoids numerous data structure updates and other complexity.

On top of other changes, this appears to get the new PM back to parity with the old PM for a large protocol buffer message source code. The dense map updates are very hot in this function.

llvm-svn: 310451
* [LCG] Switch one of the update methods for the LazyCallGraph to support (Chandler Carruth, 2017-08-09, 1 file, -100/+57)
limited batch updates.

Specifically, allow removing multiple reference edges starting from a common source node. There are a few constraints that play into supporting this form of batching:

1) The way updates occur during the CGSCC walk, about the most we can functionally batch together are those with a common source node. This also makes the batching simpler to implement, so it seems a worthwhile restriction.
2) The far and away hottest function for large C++ files I measured (generated code for protocol buffers) showed a huge amount of time was spent removing ref edges specifically, so it seems worth focusing there.
3) The algorithm for removing ref edges is very amenable to this restricted batching. There is just API and implementation special-casing for the non-batch case that gets in the way. Once removed, supporting batches is nearly trivial.

This does modify the API in an interesting way -- now, we only preserve the target RefSCC when the RefSCC structure is unchanged. In the face of any splits, we create brand new RefSCC objects. However, all of the users I could find were OK with it. Only the unittest needed interesting updates here.

How much does batching these updates help? I instrumented the compiler when run over a very large generated source file for a protocol buffer and found that the majority of updates are intrinsically updating one function at a time. However, nearly 40% of the total ref edges removed are removed as part of a batch of removals greater than one, so these are the cases batching can help with.

When compiling the IR for this file with 'opt' and 'O3', this patch reduces the total time by 8-9%.

Differential Revision: https://reviews.llvm.org/D36352

llvm-svn: 310450
* [LCG] Remove yet another variable only used inside of asserts. (Chandler Carruth, 2017-08-05, 1 file, -3/+3)
llvm-svn: 310174
* [LCG] Fold otherwise unused variable into assert. (Benjamin Kramer, 2017-08-05, 1 file, -3/+2)
No functionality change intended.

llvm-svn: 310173
* [LCG] Completely remove the parent set and leaf tracking for RefSCCs. (Chandler Carruth, 2017-08-05, 1 file, -176/+3)
After the previous series of patches, this is now trivial and deletes a pretty astonishing amount of complexity. This has been a long time coming, as the move toward a PO sequence of RefSCCs started eroding the underlying use cases for this half of the data structure.

Among the biggest advantages here is that now there aren't two independent data structures that need to stay in sync.

Some of my profiling has also indicated that updating the parent sets was among the most expensive parts of the lazy call graph. Eliminating it wholesale is likely to be a nice win in terms of compile time.

Last but not least, I had previously discussed with some folks keeping it around for asserts and other correctness checking, but once the fundamentals of the parent and child checking were implemented without the parent sets, their value in correctness checking was tiny and nowhere near worth the cost of the complexity required to keep everything up-to-date.

llvm-svn: 310171
* [LCG] Re-implement the basic isParentOf, isAncestorOf, isChildOf, and (Chandler Carruth, 2017-08-05, 1 file, -10/+37)
isDescendantOf methods on RefSCCs in terms of the forward edges rather than the parent sets.

This is technically slower, but probably not interestingly slower, and all of these routines were already so expensive that they're guarded behind both !NDEBUG and EXPENSIVE_CHECKS.

This removes another non-critical usage of parent sets.

I've also added some comments to try and help clarify to any potential users the costs of these routines. They're mostly useful for debugging, asserts, or other queries.

llvm-svn: 310170
* [LCG] Add the concept of a "dead" node and use it to avoid a complex (Chandler Carruth, 2017-08-05, 1 file, -8/+1)
walk over the parent set.

When removing a single function from the call graph, we previously would walk the entire RefSCC's parent set and then walk every outgoing edge just to find the ones to remove. In addition to this being quite high complexity in theory, it is also the last fundamental use of the parent sets.

With this change, when we remove a function we transform the node containing it to be recognizably "dead" and then teach the edge iterators to recognize edges to such nodes and skip them the same way they skip null edges.

We can't move fully to using "dead" nodes -- when disconnecting two live nodes we need to null out the edge. But the complexity this adds to the edge sequence isn't too bad and the simplification of lazily handling this seems like a significant win.

llvm-svn: 310169
* [LCG] Replace an implicit bool operator with a named function. (NFC) (Chandler Carruth, 2017-08-05, 1 file, -2/+2)
The definition of 'false' here was already pretty vague and debatable, and I'm about to add another potential 'false' that would actually make much more sense in a bool operator. Especially given how rarely this is used, a nicely named method seems better.

llvm-svn: 310165
* [LCG] When removing a dead function and clearing out the data (Chandler Carruth, 2017-08-05, 1 file, -0/+2)
structures, actually null out the graph pointers as well. We won't ever update these, and we certainly shouldn't be calling any methods on them, so it seems good to defensively nuke them.

llvm-svn: 310164
* [LCG] Rather than walking the directed graph structure to update graph (Chandler Carruth, 2017-08-05, 1 file, -14/+4)
pointers in node objects, just walk the map from function to node. It doesn't have stable ordering, but works just as well and is much simpler. We don't need ordering when just updating internal pointers.

llvm-svn: 310163
* [LCG] Remove the complex walk of the parent sets to update graph (Chandler Carruth, 2017-08-05, 1 file, -11/+2)
pointers. This is completely unnecessary as we have a trivial list of RefSCCs now that we can walk.

llvm-svn: 310162
* [LCG] Remove the use of the parent sets to compute connectivity when (Chandler Carruth, 2017-08-05, 1 file, -16/+14)
merging RefSCCs. The logic to directly use the reference edges is simpler and not substantially slower (despite the comments to the contrary) because this is not actually an especially hot part of LCG in practice.

llvm-svn: 310161
* [PM/LCG] Follow-up fix to r308088 to handle deletion of library (Chandler Carruth, 2017-07-19, 1 file, -1/+6)
functions.

In the prior commit, we provide ordering to the LCG between functions and library function definitions that they might begin to call through transformations. But we still would delete these library functions from the call graph if they became dead during inlining.

While this immediately crashed, it also exposed a loss of information. We shouldn't remove definitions of library functions that can still usefully participate in the LCG-powered CGSCC optimization process. If new call edges are formed, we want to have definitions to be called.

We can still remove these functions if truly dead using global-dce, etc., but removing them during the CGSCC walk is premature.

This fixes a crash in the new PM when optimizing some unusual libraries that end up with "internal" lib functions, such as the code in the "R" language's libraries.

llvm-svn: 308417
* [PM/LCG] Teach the LazyCallGraph to maintain reference edges from every (Chandler Carruth, 2017-07-15, 1 file, -8/+36)
function to every defined function known to LLVM as a library function.

LLVM can introduce calls to these functions either by replacing other library calls or by recognizing patterns (such as memset_pattern or vector math patterns) and replacing those with calls. When these library functions are actually defined in the module, we need to have reference edges to them initially so that we visit them during the CGSCC walk in the right order and can effectively rebuild the call graph afterward.

This was discovered when building code with Fortify enabled, as that is a common case of both inline definitions of library calls and simplifications of code into calling them.

This can in extreme cases of LTO-ing with libc introduce *many* more reference edges. I discussed a bunch of different options with folks but all of them are unsatisfying. They either make the graph operations substantially more complex even when there are *no* defined libfuncs, or they introduce some other complexity into the callgraph. So this patch goes with the simplest possible solution of actual synthetic reference edges. If this proves to be a memory problem, I'm happy to implement one of the clever techniques to save memory here.

llvm-svn: 308088
* [PM] Fix a nasty bug in the new PM where we failed to properly (Chandler Carruth, 2017-07-09, 1 file, -7/+13)
invalidate analyses when merging SCCs.

While I've added a bunch of testing of this, it takes something much more like the inliner to really trigger this, as you need to have partially-analyzed SCCs with updates at just the right time. So I've added a direct test for this using the inliner and verifying the domtree. Without the changes here, this test ends up finding a stale dominator tree.

However, to handle this properly, we need to invalidate analyses *before* merging the SCCs. After talking to Philip and Sanjoy about this they convinced me this was the right approach. To do this, we need a callback mechanism when merging SCCs so we can observe the cycle that will be merged before the merge happens. This API update ended up being surprisingly easy.

With this commit, the new PM passes the test-suite again. It hadn't since MemorySSA was enabled for EarlyCSE, as that also will find this bug very quickly.

llvm-svn: 307498
* Sort the remaining #include lines in include/... and lib/.... (Chandler Carruth, 2017-06-06, 1 file, -2/+1)
I did this a long time ago with a janky python script, but now clang-format has built-in support for this. I fed clang-format every line with a #include and let it re-sort things according to the precise LLVM rules for include ordering baked into clang-format these days.

I've reverted a number of files where the results of sorting includes isn't healthy. Either places where we have legacy code relying on particular include ordering (where possible, I'll fix these separately) or where we have particular formatting around #include lines that I didn't want to disturb in this patch.

This patch is *entirely* mechanical. If you get merge conflicts or anything, just ignore the changes in this patch and run clang-format over your #include lines in the files.

Sorry for any noise here, but it is important to keep these things stable. I was seeing an increasing number of patches with irrelevant re-ordering of #include lines because clang-format was used. This patch at least isolates that churn, makes it easy to skip when resolving conflicts, and gets us to a clean baseline (again).

llvm-svn: 304787
* [LCG] Fix EXPENSIVE_CHECKS typo. NFC (Francis Visoiu Mistrih, 2017-02-28, 1 file, -5/+5)
Differential Revision: https://reviews.llvm.org/D30434

llvm-svn: 296500
* [PM/LCG] Teach LCG to support spurious reference edges. (Chandler Carruth, 2017-02-09, 1 file, -1/+8)
Somewhat amazingly, this only requires teaching it to clean them up when deleting a dead function from the graph. And we already have exactly the necessary data structures to do that in the parent RefSCCs.

This allows ArgPromote to work in a much simpler way by merely letting reference edges linger in the graph after the causing IR is deleted. We will clean up these edges when we run any function pass over the IR, but we don't remove them eagerly.

This avoids all of the quadratic update issues both in the current pass manager and in my previous attempt with the new pass manager.

Differential Revision: https://reviews.llvm.org/D29579

llvm-svn: 294663
* [PM/LCG] Teach the LazyCallGraph how to replace a function without (Chandler Carruth, 2017-02-09, 1 file, -167/+189)
disturbing the graph or having to update edges.

This is motivated by porting argument promotion to the new pass manager. Because of how LLVM IR Function objects work, in order to change their signature a new object needs to be created. This is efficient and straightforward in the IR but previously was very hard to implement in LCG. We could easily replace the function a node in the graph represents. The challenging part is how to handle updating the edges in the graph.

LCG previously used an edge to a raw function to represent a node that had not yet been scanned for calls and references. This was the core of its laziness. However, that model causes this kind of update to be very hard:

1) The keys to lookup an edge need to be `Function*`s that would all need to be updated when we update the node.
2) There will be some unknown number of edges that haven't transitioned from `Function*` edges to `Node*` edges.

All of this complexity isn't necessary. Instead, we can always build a node around any function, always pointing edges at it and always using it as the key to lookup an edge. To maintain the laziness, we need to sink the *edges* of a node into a secondary object and explicitly model transitioning a node from empty to populated by scanning the function. This design seems much cleaner in a number of ways, but importantly there is now exactly *one* place where the `Function*` has to be updated!

Some other cleanups that fall out of this include having something to model the *entry* edges more accurately. Rather than hand-rolling parts of the node in the graph itself, we have an explicit `EdgeSequence` object that gives us exactly the functionality needed. We also have a consistent place to define the edge iterators and can use them for both the entry edges and the internal edges of the graph.

The API used to model the separation between a node and its edges is intentionally very thin, as most clients are expected to deal with nodes that have populated edges. We model this exactly as an optional does, with an additional method to populate the edges when that is a reasonable thing for a client to do. This is based on API design suggestions from Richard Smith and David Blaikie; credit goes to them for helping pick how to model this without it being either too explicit or too implicit.

The patch is somewhat noisy due to shifting around iterator types and new syntax for walking the edges of a node, but most of the functionality change is in the `Edge`, `EdgeSequence`, and `Node` types.

Differential Revision: https://reviews.llvm.org/D29577

llvm-svn: 294653
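A hedged sketch of the Node/EdgeSequence split described above: a node always exists for a function, and its edges are populated explicitly before being walked. Member names follow the description in the commit message; treat the exact signatures as assumptions:

    #include "llvm/Analysis/LazyCallGraph.h"

    using namespace llvm;

    // Count the call (non-ref) edges leaving F's node, populating lazily.
    static unsigned countCallEdges(LazyCallGraph &CG, Function &F) {
      LazyCallGraph::Node &N = CG.get(F);                 // node always exists
      LazyCallGraph::EdgeSequence &Edges = N.populate();  // scan F on demand
      unsigned NumCalls = 0;
      for (const LazyCallGraph::Edge &E : Edges)
        if (E.isCall())
          ++NumCalls;
      return NumCalls;
    }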
* [PM/LCG] Fix the no-asserts build after r294227. Sorry for the noise. (Chandler Carruth, 2017-02-06, 1 file, -0/+2)
llvm-svn: 294235
* [PM/LCG] Remove the lazy RefSCC formation from the LazyCallGraph during (Chandler Carruth, 2017-02-06, 1 file, -171/+117)
iteration.

The lazy formation of RefSCCs isn't really the most important part of the laziness here -- that has to do with walking the functions themselves -- and isn't essential to maintain. Originally, there were incremental update algorithms that relied on updates happening predominantly near the most recent RefSCC formed, but those have been replaced with ones that have much tighter general case bounds at this point. We do still perform asserts that only scale well due to this incrementality, but those are easy to place behind EXPENSIVE_CHECKS.

Removing this simplifies the entire analysis by having a single up-front step that builds all of the RefSCCs in a direct Tarjan walk. We can even easily replace this with other or better algorithms at will and with much less confusion now that there is no iterator-based incremental logic involved. This removes a lot of complexity from LCG.

Another advantage of moving in this direction is that it simplifies testing the system substantially, as we no longer have to worry about observing and mutating the graph half-way through the RefSCC formation.

We still need a somewhat special iterator for RefSCCs because we want the iterator to remain stable in the face of graph updates. However, this now merely involves relative indexing to the current RefSCC's position in the sequence, which isn't too hard.

Differential Revision: https://reviews.llvm.org/D29381

llvm-svn: 294227
* Cleanup dump() functions. (Matthias Braun, 2017-01-28, 1 file, -3/+9)
We had various variants of defining dump() functions in LLVM. Normalize them (this should just consistently implement the things discussed in http://lists.llvm.org/pipermail/cfe-dev/2014-January/034323.html).

For reference:
- Public headers should just declare the dump() method but not use LLVM_DUMP_METHOD or #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
- The definition of a dump method should look like this:

    #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
    LLVM_DUMP_METHOD void MyClass::dump() {
      // print stuff to dbgs()...
    }
    #endif

llvm-svn: 293359
* [PM] Teach the CGSCC's CG update utility to more carefully invalidate (Chandler Carruth, 2016-12-28, 1 file, -10/+29)
analyses when we're about to break apart an SCC.

We can't wait until after breaking apart the SCC to invalidate things:

1) Which SCC do we then invalidate? All of them?
2) Even if we invalidate all of them, a newly created SCC may not have a proxy that will convey the invalidation to functions!

Previously we only invalidated one of the SCCs, and too late. This led to stale analyses remaining in the cache. And because the caching strategy actually works, they would get used and chaos would ensue.

Doing invalidation early is somewhat pessimizing though if we *know* that the SCC structure won't change. So it turns out that the design to make the mutation API force the caller to know the *kind* of mutation in advance was indeed 100% correct and we didn't do enough of it. So this change also splits two cases of switching a call edge to a ref edge into two separate APIs so that callers can clearly test for this and take the easy path without invalidating when appropriate. This is particularly important in this case as we expect most inlines to be between functions in separate SCCs, and so the common case is that we don't have to so aggressively invalidate analyses.

The LCG API change in turn needed some basic cleanups and better testing in its unittest. No interesting functionality changed there other than more coverage of the returned sequence of SCCs.

While this seems like an obvious improvement over the current state, I'd like to revisit the core concept of invalidating within the CG-update layer at all. I'm wondering if we would be better served forcing the callers to handle the invalidation beforehand in the cases that they can handle it. An interesting example is when we want to teach the inliner to *update and preserve* analyses. But we can cross that bridge when we get there.

With this patch, the new pass manager can build all of the LLVM test suite at -O3 and everything passes. =D I haven't bootstrapped yet and I'm sure there are still plenty of bugs, but this gives a nice baseline so I'm going to increasingly focus on fleshing out the missing functionality, especially the bits that are just turned off right now in order to let us establish this baseline.

llvm-svn: 290664
* [LCG] Teach the ref edge removal to handle a ref edge that is trivial (Chandler Carruth, 2016-12-28, 1 file, -1/+7)
due to a call cycle. This actually crashed the ref removal before.

I've added a unittest that covers this kind of interesting graph structure and mutation.

llvm-svn: 290645
* [LCG] Minor cleanup to the LCG walk over a function, NFC. (Chandler Carruth, 2016-12-09, 1 file, -19/+22)
This just hoists the check for declarations up a layer, which allows various sets used in the walk to be smaller. It also moves the relevant comments to match, and catches a few other cleanups in this code.

llvm-svn: 289163
* [LCG] Add basic verification of the parent set and fix bugs it uncovers. (Chandler Carruth, 2016-12-07, 1 file, -4/+23)
The existing unittests actually cover this now that we verify things.

llvm-svn: 288875
* [LCG] Add some much needed asserts and verify runs to uncover (Chandler Carruth, 2016-12-06, 1 file, -4/+12)
a hilarious bug and fix it.

We somehow were never verifying the RefSCCs newly formed when splitting an existing one apart, and when verifying them we weren't really checking the SCC indices mapping effectively. If we had been, it would have been blindingly obvious that right after putting something into `RC.SCCs` we should update `RC.SCCIndices` instead of `SCCIndices`, which we were about to clear and rebuild anyways. =[

Anyways, this is thoroughly covered by existing tests now that we actually verify things properly.

llvm-svn: 288795
* [PM] Change the static object whose address is used to uniquely identify (Chandler Carruth, 2016-11-23, 1 file, -1/+1)
analyses to have a common type which is enforced rather than using a char object and a `void *` type when used as an identifier.

This has a number of advantages. First, it at least helps some of the confusion raised in Justin Lebar's code review of why `void *` was being used everywhere by having a stronger type that connects to documentation about this.

However, perhaps more importantly, it addresses a serious issue where the alignment of these pointer-like identifiers was unknown. This made it hard to use them in pointer-like data structures. We were already dodging this in dangerous ways to create the "all analyses" entry. In a subsequent patch I attempted to use these with TinyPtrVector and things fell apart in a very bad way.

And it isn't just a compile time or type system issue. Worse than that, the actual alignment of these pointer-like opaque identifiers wasn't guaranteed to be a useful alignment as they were just characters.

This change introduces a type to use as the "key" object whose address forms the opaque identifier. This both forces the objects to have proper alignment, and provides type checking that we get it right everywhere. It also makes the types somewhat less mysterious than `void *`.

We could go one step further and introduce a truly opaque pointer-like type to return from the `ID()` static function rather than returning `AnalysisKey *`, but that didn't seem to be a clear win, so this is just the initial change to get to a reliably typed and aligned object serving as a key for all the analyses.

Thanks to Richard Smith and Justin Lebar for helping pick plausible names and avoid making this refactoring many times. =] And thanks to Sean for the super fast review!

While here, I've tried to move away from the "PassID" nomenclature entirely as it wasn't really helping and is overloaded with old pass manager constructs. Now we have IDs for analyses, and key objects whose address can be used as IDs. Where possible and clear I've shortened this to just "ID". In a few places I kept "AnalysisID" to make it clear what was being identified.

Differential Revision: https://reviews.llvm.org/D27031

llvm-svn: 287783
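A minimal sketch of the resulting idiom for declaring an analysis; the analysis itself is illustrative, while the static AnalysisKey member is the pattern this change establishes:

    #include "llvm/IR/BasicBlock.h"
    #include "llvm/IR/Function.h"
    #include "llvm/IR/PassManager.h"

    using namespace llvm;

    // Each analysis owns a static AnalysisKey; its address identifies the
    // analysis and is properly typed and aligned, unlike the old char-based IDs.
    class InstrCountAnalysis : public AnalysisInfoMixin<InstrCountAnalysis> {
      friend AnalysisInfoMixin<InstrCountAnalysis>;
      static AnalysisKey Key;

    public:
      struct Result {
        unsigned NumInstructions = 0;
      };

      Result run(Function &F, FunctionAnalysisManager &) {
        Result R;
        for (const BasicBlock &BB : F)
          R.NumInstructions += BB.size();
        return R;
      }
    };

    AnalysisKey InstrCountAnalysis::Key;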
* [LCG] Add a previously missing assert about the relationship of RefSCCs. (Chandler Carruth, 2016-11-22, 1 file, -0/+7)
No intended change, everything seems to be in working order already.

llvm-svn: 287705