path: root/llvm/unittests/Analysis
Commit message | Author | Age | Files | Lines
* [SCEV] Ensure ScalarEvolution::createAddRecFromPHIWithCastsImpl properly
  handles out of range truncations of the start and accum values
  Daniel Neilson | 2017-09-05 | 1 | -0/+55

  Summary:
  When constructing the predicate P1 in ScalarEvolution::createAddRecFromPHIWithCastsImpl()
  it is possible for the PHISCEV from which the predicate is constructed to be a
  SCEVConstant instead of a SCEVAddRec. If this happens, then the
  cast<SCEVAddRec>(PHISCEV) in the code will assert.

  Such a PHISCEV is possible if either the start value or the accumulator value
  is a constant value that is not equal to its truncated value, and if the
  truncated value is zero.

  This patch adds tests that demonstrate the cast<> assertion, and fixes this
  problem by checking whether the PHISCEV is a constant before constructing the
  P1 predicate; if it is, then P1 is equivalent to one of P2 or P3. Additionally,
  if we know that the start value or accumulator value are constants then we
  check whether the P2 and/or P3 predicates are known false at compile time; if
  either is, then we bail out of constructing the AddRec.

  Reviewers: sanjoy, mkazantsev, silviu.baranga
  Reviewed By: mkazantsev
  Subscribers: mkazantsev, llvm-commits
  Differential Revision: https://reviews.llvm.org/D37265
  llvm-svn: 312568
* [PM] Switch the CGSCC debug messages to use the standard LLVM debug
  printing techniques with a DEBUG_TYPE controlling them.
  Chandler Carruth | 2017-08-11 | 1 | -4/+4

  It was a mistake to start re-purposing the pass manager `DebugLogging` variable
  for generic debug printing -- those logs are intended to be very minimal and
  primarily used for testing. More detailed and comprehensive logging doesn't
  make sense there (it would only make for brittle tests). Moreover, we kept
  forgetting to propagate the `DebugLogging` variable to various places making it
  also ineffective and/or unavailable. Switching to `DEBUG_TYPE` makes this a
  non-issue.

  llvm-svn: 310695
* [LCG] Switch one of the update methods for the LazyCallGraph to support
  limited batch updates.
  Chandler Carruth | 2017-08-09 | 1 | -23/+94

  Specifically, allow removing multiple reference edges starting from a common
  source node. There are a few constraints that play into supporting this form
  of batching:

  1) The way updates occur during the CGSCC walk, about the most we can
     functionally batch together are those with a common source node. This also
     makes the batching simpler to implement, so it seems a worthwhile
     restriction.
  2) The far and away hottest function for large C++ files I measured (generated
     code for protocol buffers) showed a huge amount of time was spent removing
     ref edges specifically, so it seems worth focusing there.
  3) The algorithm for removing ref edges is very amenable to this restricted
     batching. There are just both API and implementation special casing for the
     non-batch case that gets in the way. Once removed, supporting batches is
     nearly trivial.

  This does modify the API in an interesting way -- now, we only preserve the
  target RefSCC when the RefSCC structure is unchanged. In the face of any
  splits, we create brand new RefSCC objects. However, all of the users were OK
  with it that I could find. Only the unittest needed interesting updates here.

  How much does batching these updates help? I instrumented the compiler when
  run over a very large generated source file for a protocol buffer and found
  that the majority of updates are intrinsically updating one function at a
  time. However, nearly 40% of the total ref edges removed are removed as part
  of a batch of removals greater than one, so these are the cases batching can
  help with.

  When compiling the IR for this file with 'opt' and 'O3', this patch reduces
  the total time by 8-9%.

  Differential Revision: https://reviews.llvm.org/D36352
  llvm-svn: 310450
* Avoid comparison between signed and unsigned in SCEVExitLimitForget tests
  Max Kazantsev | 2017-08-04 | 1 | -3/+3

  llvm-svn: 310029
* Fix SCEVExitLimitForget tests to make Sanitizer happy
  Max Kazantsev | 2017-08-04 | 1 | -2/+9

  llvm-svn: 310023
* Do not want to use BFI to get profile count for sample pgo
  Dehao Chen | 2017-08-03 | 1 | -3/+7

  Summary:
  For SamplePGO, we already record the callsite count in the call instruction
  itself. So we do not want to use BFI to get profile count as it is less
  accurate.

  Reviewers: tejohnson, davidxl, eraman
  Reviewed By: eraman
  Subscribers: sanjoy, llvm-commits, mehdi_amini
  Differential Revision: https://reviews.llvm.org/D36025
  llvm-svn: 309964
* Fix use after free in unit test.
  Benjamin Kramer | 2017-08-03 | 1 | -6/+6

  llvm-svn: 309952
* Removed unused variables from unit test
  Max Kazantsev | 2017-08-03 | 1 | -2/+0

  llvm-svn: 309929
* [SCEV] Re-enable "Cache results of computeExitLimit"Max Kazantsev2017-08-031-0/+160
| | | | | | | | | | The patch rL309080 was reverted because it did not clean up the cache on "forgetValue" method call. This patch re-enables this change, adds the missing check and introduces two new unit tests that make sure that the cache is cleaned properly. Differential Revision: https://reviews.llvm.org/D36087 llvm-svn: 309925
* Allow None as a MemoryLocation to getModRefInfo
  Alina Sbirlea | 2017-08-01 | 1 | -6/+6

  Summary:
  Adding part of the changes in D30369 (needed to make progress):
  Current patch updates AliasAnalysis and MemoryLocation, but does _not_ clean
  up MemorySSA.

  Original summary from D30369, by dberlin:

  Currently, we have instructions which affect memory but have no memory
  location. If you call, for example, MemoryLocation::get on a fence, it
  asserts. This means things specifically have to avoid that. It also means we
  end up with a copy of each API, one taking a memory location, one not.

  This starts to fix that.

  We add MemoryLocation::getOrNone as a new call, and reimplement the old
  asserting version in terms of it.

  We make MemoryLocation optional in the (Instruction, MemoryLocation) version
  of getModRefInfo, and kill the old one argument version in favor of passing
  None (it had one caller). Now both can handle fences because you can just use
  MemoryLocation::getOrNone on an instruction and it will return a correct
  answer.

  We use all this to clean up part of MemorySSA that had to handle this
  difference.

  Note that literally every actual getModRefInfo interface we have could be made
  private and replaced with:

    getModRefInfo(Instruction, Optional<MemoryLocation>)
  and
    getModRefInfo(Instruction, Optional<MemoryLocation>, Instruction, Optional<MemoryLocation>)

  and delegating to the right ones, if we wanted to. I have not attempted to do
  this yet.

  Reviewers: dberlin, davide, dblaikie
  Subscribers: sanjoy, hfinkel, chandlerc, llvm-commits
  Differential Revision: https://reviews.llvm.org/D35441
  llvm-svn: 309641
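  As a rough illustration of the API shape described above (a sketch, not the
  actual unittest code; the helper name queryModRef is made up):

    #include "llvm/ADT/Optional.h"
    #include "llvm/Analysis/AliasAnalysis.h"
    #include "llvm/IR/Instruction.h"

    // Query mod/ref info for an arbitrary instruction, including a fence,
    // without tripping the assertion in MemoryLocation::get.
    static llvm::ModRefInfo queryModRef(llvm::AAResults &AA,
                                        const llvm::Instruction &I) {
      // getOrNone returns None for instructions (e.g. fences) that have no
      // memory location of their own.
      llvm::Optional<llvm::MemoryLocation> Loc =
          llvm::MemoryLocation::getOrNone(&I);
      // The (Instruction, Optional<MemoryLocation>) overload accepts None.
      return AA.getModRefInfo(&I, Loc);
    }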
* [PM/LCG] Teach the LazyCallGraph to maintain reference edges from every
  function to every defined function known to LLVM as a library function.
  Chandler Carruth | 2017-07-15 | 2 | -20/+29

  LLVM can introduce calls to these functions either by replacing other library
  calls or by recognizing patterns (such as memset_pattern or vector math
  patterns) and replacing those with calls. When these library functions are
  actually defined in the module, we need to have reference edges to them
  initially so that we visit them during the CGSCC walk in the right order and
  can effectively rebuild the call graph afterward.

  This was discovered when building code with Fortify enabled as that is a
  common case of both inline definitions of library calls and simplifications
  of code into calling them.

  This can in extreme cases of LTO-ing with libc introduce *many* more reference
  edges. I discussed a bunch of different options with folks but all of them are
  unsatisfying. They either make the graph operations substantially more complex
  even when there are *no* defined libfuncs, or they introduce some other
  complexity into the callgraph. So this patch goes with the simplest possible
  solution of actual synthetic reference edges. If this proves to be a memory
  problem, I'm happy to implement one of the clever techniques to save memory
  here.

  llvm-svn: 308088
* [PM] Fix a silly bug in my recent update to the CG update logic.
  Chandler Carruth | 2017-07-12 | 1 | -6/+6

  I used the wrong variable to update. This was even covered by a unittest
  I wrote, and the comments for the unittest were correct (if confusing) but the
  test itself just matched the buggy behavior. =[

  llvm-svn: 307764
* Enhance synchscope representation
  Konstantin Zhuravlyov | 2017-07-11 | 1 | -2/+3

  OpenCL 2.0 introduces the notion of memory scopes in atomic operations to
  global and local memory. These scopes restrict how synchronization is
  achieved, which can result in improved performance.

  This change extends existing notion of synchronization scopes in LLVM to
  support arbitrary scopes expressed as target-specific strings, in addition to
  the already defined scopes (single thread, system).

  The LLVM IR and MIR syntax for expressing synchronization scopes has changed
  to use *syncscope("<scope>")*, where <scope> can be "singlethread" (this
  replaces *singlethread* keyword), or a target-specific name. As before, if the
  scope is not specified, it defaults to CrossThread/System scope.

  Implementation details:
  - Mapping from synchronization scope name/string to synchronization scope id
    is stored in LLVM context;
  - CrossThread/System and SingleThread scopes are pre-defined to efficiently
    check for known scopes without comparing strings;
  - Synchronization scope names are stored in SYNC_SCOPE_NAMES_BLOCK in
    the bitcode.

  Differential Revision: https://reviews.llvm.org/D21723
  llvm-svn: 307722
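  As a hedged illustration of the resulting C++ API (not code from this commit;
  the "agent" scope name is just an example), atomic fences at the pre-defined
  and target-specific scopes can be created like this:

    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/LLVMContext.h"

    void emitFences(llvm::IRBuilder<> &B, llvm::LLVMContext &Ctx) {
      using namespace llvm;
      // Pre-defined scopes: SingleThread, and System (the default).
      B.CreateFence(AtomicOrdering::SequentiallyConsistent, SyncScope::SingleThread);
      B.CreateFence(AtomicOrdering::SequentiallyConsistent, SyncScope::System);
      // A target-specific scope name is mapped to an ID stored in the context.
      SyncScope::ID Agent = Ctx.getOrInsertSyncScopeID("agent");
      B.CreateFence(AtomicOrdering::SequentiallyConsistent, Agent);
    }

  The last fence corresponds to IR such as fence syncscope("agent") seq_cst.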
* CGSCCPassManagerTest.cpp: Fix warnings. [-Wunused-variable]
  NAKAMURA Takumi | 2017-07-09 | 1 | -0/+2

  llvm-svn: 307511
* [PM] Fix a nasty bug in the new PM where we failed to properly
  invalidate analyses when merging SCCs.
  Chandler Carruth | 2017-07-09 | 1 | -16/+17

  While I've added a bunch of testing of this, it takes something much more like
  the inliner to really trigger this as you need to have partially-analyzed SCCs
  with updates at just the right time. So I've added a direct test for this
  using the inliner and verifying the domtree. Without the changes here, this
  test ends up finding a stale dominator tree.

  However, to handle this properly, we need to invalidate analyses *before*
  merging the SCCs. After talking to Philip and Sanjoy about this they convinced
  me this was the right approach. To do this, we need a callback mechanism when
  merging SCCs so we can observe the cycle that will be merged before the merge
  happens. This API update ended up being surprisingly easy.

  With this commit, the new PM passes the test-suite again. It hadn't since
  MemorySSA was enabled for EarlyCSE as that also will find this bug very
  quickly.

  llvm-svn: 307498
* [PM] Add unittesting of the call graph update logic with complex
  dependencies between analyses.
  Chandler Carruth | 2017-07-09 | 1 | -0/+166

  This uncovers even more issues with the proxies and the splitting apart of
  SCCs which are fixed in this patch. I discovered this while trying to add more
  rigorous testing for a change I'm making to the call graph update invalidation
  logic.

  llvm-svn: 307497
* [PM] Finish implementing and fix a chain of bugs uncovered by testing
  the invalidation propagation logic from an SCC to a Function.
  Chandler Carruth | 2017-07-09 | 1 | -6/+24

  I wrote the infrastructure to test this but didn't actually use it in the unit
  test where it was designed to be used. =[ My bad. Once I actually added it to
  the test case I discovered that it also hadn't been properly implemented, so
  I've implemented it. The logic in the FAM proxy for an SCC pass to propagate
  invalidation follows the same ideas as the FAM proxy for a Module pass, but
  the implementation is a bit different to reflect the fact that it is
  forwarding just for an SCC.

  However, implementing this correctly uncovered a surprising "bug" (it was
  conservatively correct but relatively very expensive) in how we handle
  invalidation when splitting one SCC into multiple SCCs. We did an eager
  invalidation when in reality we should be deferring invalidation for the
  *current* SCC to the CGSCC pass manager and just invalidating the newly
  constructed SCCs. Otherwise we end up invalidating too much too soon. This was
  exposed by the inliner test case that I've updated. Now, we invalidate *just*
  the split off '(test1_f)' SCC when doing the CG update, and then the inliner
  finishes and invalidates the '(test1_g, test1_h)' SCC's analyses.

  The first few attempts at fixing this hit still more bugs, but all of those
  are covered by existing tests. For example, the inliner should also preserve
  the FAM proxy to avoid unnecessary invalidation, and this is safe because the
  CG update routines it uses handle any necessary adjustments to the FAM proxy.

  Finally, the unittests for the CGSCC pass manager needed a bunch of updates
  where we weren't correctly preserving the FAM proxy because it hadn't been
  fully implemented and failing to preserve it didn't matter.

  Note that this doesn't yet fix the current crasher due to MemSSA finding a
  stale dominator tree, but without this the fix to that crasher doesn't really
  make any sense when testing because it relies on the proxy behavior.

  llvm-svn: 307487
* [AST] Fix a bug in aliasesUnknownInst. Make sure we are comparing the
  unknown instructions in the alias set and the instruction we are interested in.
  Xin Tong | 2017-06-25 | 2 | -0/+88

  Summary:
  Make sure we are comparing the unknown instructions in the alias set and the
  instruction we are interested in. I believe this is clearly a bug (missed
  opportunity). I can also add some test cases if desired.

  Reviewers: hfinkel, davide, dberlin
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D34597
  llvm-svn: 306241
* GlobalsModRef: Ensure optnone+readonly/readnone attributes are respected
  David Blaikie | 2017-06-07 | 1 | -5/+19

  llvm-svn: 304945
* [mssa] Fix case when there is no definition in a block prior to an inserted use.
  Alina Sbirlea | 2017-06-07 | 1 | -0/+46

  Summary:
  Check that the first access before the one being tested is valid.
  Before this patch, if there was no definition prior to the Use being tested,
  the first time Iter was dereferenced, it hit the sentinel.

  Reviewers: dberlin, gbiv
  Subscribers: sanjoy, Prazek, llvm-commits
  Differential Revision: https://reviews.llvm.org/D33950
  llvm-svn: 304926
* GlobalsModRef+OptNone: Don't prove readnone/other properties from an optnone function
  David Blaikie | 2017-06-06 | 2 | -0/+42

  Seems like at least one reasonable interpretation of optnone is that the
  optimizer never "looks inside" a function. This fix is consistent with that
  interpretation.

  Specifically this came up in the situation:

    f3 calls f2 calls f1
    f2 is always_inline
    f1 is optnone

  The application of readnone to f1 (& thus to f2) caused the inliner to kill
  the call to f2 as being trivially dead (without even checking the cost
  function, as it happens - not sure if that's also a bug).

  llvm-svn: 304833
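  A hedged C++ sketch of the f3 -> f2 -> f1 situation from the message, written
  with the Clang attribute spellings (illustrative only, not the test added by
  this commit):

    // f1 is optnone: the optimizer is not supposed to "look inside" it, so
    // GlobalsModRef must not infer readnone for it (or, transitively, for f2).
    __attribute__((optnone, noinline)) int f1() { return 42; }

    // f2 is always_inline and only calls f1.
    __attribute__((always_inline)) inline int f2() { return f1(); }

    // Before this fix, marking f1 readnone let the inliner drop the call chain
    // in f3 as trivially dead.
    int f3() { return f2(); }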
* Re-sort #include lines for unittests. This uses a slightly modified
  clang-format (https://reviews.llvm.org/D33932) to keep primary headers at the
  top and handle new utility headers like 'gmock' consistently with other
  utility headers.
  Chandler Carruth | 2017-06-06 | 8 | -9/+8

  No other change was made. I did no manual edits, all of this is clang-format.

  This should allow other changes to have more clear and focused diffs, and is
  especially motivated by moving some headers into more focused libraries.

  llvm-svn: 304786
* [OrderedBasicBlock] Return false for comesBefore(A, A)
  Benjamin Kramer | 2017-06-02 | 2 | -2/+61

  So far it would return true for the first uncached query, then cached queries
  return false.

  llvm-svn: 304545
* ScalarEvolution unit test: fix typo that breaks check-all
  Gor Nishanov | 2017-05-27 | 1 | -1/+1

  llvm-svn: 304063
* [SCEVExpander] Try harder to avoid introducing inttoptr
  Keno Fischer | 2017-05-27 | 1 | -5/+83

  Summary:
  This fixes introduction of an incorrect inttoptr/ptrtoint pair in the included
  test case which makes use of non-integral pointers. I suspect there are more
  cases like this left, but this takes care of the one I was seeing at the
  moment.

  Reviewers: sanjoy
  Subscribers: mzolotukhin, llvm-commits
  Differential Revision: https://reviews.llvm.org/D33129
  llvm-svn: 304058
* Revert "Add pthread_self function prototype and make it speculatable."Xin Tong2017-05-211-5/+3
| | | | | | | | This reverts commit 143d7445b5dfa2f6d6c45bdbe0433d9fc531be21. Build breaking llvm-svn: 303496
* Add pthread_self function prototype and make it speculatable.
  Xin Tong | 2017-05-20 | 1 | -3/+5

  Summary:
  This allows pthread_self to be pulled out of a loop by LICM.

  Reviewers: hfinkel, arsenm, davide
  Reviewed By: davide
  Subscribers: davide, wdng, llvm-commits
  Differential Revision: https://reviews.llvm.org/D32782
  llvm-svn: 303495
* Add hasProfileSummary and has{Sample|Instrumentation}Profile methods
  Easwaran Raman | 2017-05-16 | 1 | -0/+8

  ProfileSummaryInfo already checks whether the module has a sample profile when
  determining profile counts. This will also be useful in the inliner to clean
  up threshold updates.

  llvm-svn: 303204
* [TLI] Add declarations for various math header file routines from
  math-finite.h that create '__<func>_finite' as functions
  Andrew Kaylor | 2017-05-12 | 1 | -0/+46

  Patch by Chris Chrulski

  Differential Revision: https://reviews.llvm.org/D31787
  llvm-svn: 302955
* Restrict call metadata based hotness detection to Sample PGO mode
  Teresa Johnson | 2017-05-11 | 1 | -0/+6

  Summary:
  Don't use the metadata on call instructions for determining hotness unless we
  are in sample PGO mode, where it is needed because profile counts are not
  accurate. In instrumentation mode this is not necessary and does more harm
  than good when calls have VP metadata that hasn't been properly scaled after
  transformations or dropped after constant prop based devirtualization (both
  should be fixed, but we don't need to do this in the first place for
  instrumentation PGO).

  This required adjusting a number of tests to distinguish between sample and
  instrumentation PGO handling, and to add in profile summary metadata so that
  getProfileCount can get the summary.

  Reviewers: davidxl, danielcdh
  Subscribers: aemerson, rengolin, mehdi_amini, Prazek, llvm-commits
  Differential Revision: https://reviews.llvm.org/D32877
  llvm-svn: 302844
* TargetLibraryInfo: Introduce wcslen
  Matthias Braun | 2017-05-05 | 1 | -0/+1

  wcslen is part of the C99 and C++98 standards.
  - This introduces the function to TargetLibraryInfo.
  - Also set attributes for wcslen in llvm::inferLibFuncAttributes().

  Differential Revision: https://reviews.llvm.org/D32837
  llvm-svn: 302278
* Teach SCEV normalization to de/normalize non-affine add recs
  Sanjoy Das | 2017-04-25 | 1 | -1/+98

  Summary:
  Before this change, SCEV Normalization would incorrectly normalize non-affine
  add recurrences. To work around this there was (still is) a check in place to
  make sure we only tried to normalize affine add recurrences.

  We recently found a bug in aforementioned check to bail out of normalizing
  non-affine add recurrences. However, instead of fixing the bailout, I have
  decided to teach SCEV normalization to work correctly with non-affine add
  recurrences, making the bailout unnecessary (I'll remove it in a subsequent
  change).

  I've also added some unit tests (which would have failed before this change).

  Reviewers: atrick, sunfish, efriedma
  Reviewed By: atrick
  Subscribers: mcrosier, mzolotukhin, llvm-commits
  Differential Revision: https://reviews.llvm.org/D32104
  llvm-svn: 301281
* [SCEV] Add a local cache for getZeroExtendExpr and getSignExtendExpr to
  prevent the exponential behavior.
  Wei Mi | 2017-04-17 | 1 | -0/+90

  The patch is to fix PR32043. Functions getZeroExtendExpr and getSignExtendExpr
  may call themselves recursively more than once. This is potentially a 2^N
  complexity behavior. The exponential behavior was not commonly exposed before
  because of existing global cache mechanisms like UniqueSCEVs or some early
  return mechanism when flags FlagNSW or FlagNUW are seen. However, we still
  have cases which can expose the exponential behavior, like the case in
  PR32043, so we add a local cache in getZeroExtendExpr and getSignExtendExpr.
  If the input of the functions -- the SCEV and type pair -- has been seen
  before, we can find the extended expression directly in the local cache.

  Differential Revision: https://reviews.llvm.org/D30350
  llvm-svn: 300494
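  The caching idea can be sketched generically (this is not the actual
  ScalarEvolution code; the types and names below are illustrative stand-ins):
  memoize the recursive computation on the (SCEV, Type) pair so a sub-expression
  reached along many paths is only processed once.

    #include "llvm/ADT/DenseMap.h"
    #include <utility>

    struct SCEV { int Depth = 0; };  // stand-in for llvm::SCEV
    struct Type {};                  // stand-in for llvm::Type

    using ExtendCacheTy = llvm::DenseMap<std::pair<const SCEV *, Type *>, int>;

    static int zeroExtendCost(const SCEV *S, Type *Ty, ExtendCacheTy &Cache) {
      auto Key = std::make_pair(S, Ty);
      auto It = Cache.find(Key);
      if (It != Cache.end())
        return It->second;           // cache hit: no repeated recursion
      int Result = S->Depth;         // placeholder for the real recursive work
      Cache.insert({Key, Result});   // remember the answer for this (SCEV, Type)
      return Result;
    }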
* Generalize SCEV's unit testing helper a bit
  Sanjoy Das | 2017-04-14 | 1 | -10/+11

  llvm-svn: 300379
* [ValueTracking] Avoid undefined behavior in unittest by not making a named
  ArrayRef from a std::initializer_list
  Craig Topper | 2017-04-14 | 1 | -1/+1

  One of the ValueTracking unittests creates a named ArrayRef initialized by a
  std::initializer_list. The underlying array for an std::initializer_list is
  only guaranteed to have a lifetime as long as the initializer_list object
  itself. So this can leave the ArrayRef pointing at an array that no longer
  exists. This fixes this to just create an explicit array instead of an
  ArrayRef.

  Differential Revision: https://reviews.llvm.org/D32089
  llvm-svn: 300354
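  For context, a hedged, generic illustration of the lifetime pitfall described
  above (not the actual unittest code):

    #include "llvm/ADT/ArrayRef.h"

    void example() {
      // BAD: the temporary std::initializer_list (and its backing array) only
      // lives until the end of this statement, so Bad is left dangling.
      llvm::ArrayRef<int> Bad = {4, 8, 15, 16};

      // OK: the named array outlives the ArrayRef that views it.
      static const int Storage[] = {4, 8, 15, 16};
      llvm::ArrayRef<int> Good(Storage);
      (void)Bad;
      (void)Good;
    }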
* Add a unit test for SCEV Normalization
  Sanjoy Das | 2017-04-14 | 1 | -0/+63

  llvm-svn: 300332
* MemorySSA: Move to Analysis, from Transforms/Utils. It's used as
  Analysis, it has Analysis passes, and once NewGVN is made an Analysis,
  this removes the cross dependency from Analysis to Transform/Utils. NFC.
  Daniel Berlin | 2017-04-11 | 2 | -0/+866

  llvm-svn: 299980
* Allow DataLayout to specify addrspace for allocas.
  Matt Arsenault | 2017-04-10 | 1 | -1/+3

  LLVM makes several assumptions about address space 0. However, alloca is
  presently constrained to always return this address space. There's no real way
  to avoid using alloca, so without this there is no way to opt out of these
  assumptions.

  The problematic assumptions include:
  - That the pointer size used for the stack is the same size as the code size
    pointer, which is also the maximum sized pointer.
  - That 0 is an invalid, non-dereferenceable pointer value.

  These are problems for AMDGPU because alloca is used to implement the private
  address space, which uses a 32-bit index as the pointer value. Other pointers
  are 64-bit and behave more like LLVM's notion of generic address space. By
  changing the address space used for allocas, we can change our generic pointer
  type to be LLVM's generic pointer type which does have similar properties.

  llvm-svn: 299888
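  As a hedged example of the mechanism (the layout string below is a simplified,
  made-up example and is not from the commit): the data layout string gains an
  alloca-address-space specifier that passes can then query.

    #include "llvm/IR/DataLayout.h"

    void demo() {
      // "A5" requests address space 5 for allocas.
      llvm::DataLayout DL("e-p:64:64-i64:64-A5");
      unsigned AllocaAS = DL.getAllocaAddrSpace(); // 5 for the layout above
      (void)AllocaAS;
    }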
* Fix signed/unsigned comparison warnings
  Simon Pilgrim | 2017-03-11 | 1 | -2/+2

  llvm-svn: 297561
* Refactor the PSI to extract getCallSiteCount and remove checks for profile type.
  Dehao Chen | 2017-03-10 | 1 | -6/+0

  Summary:
  There is no need to check profile count as only CallInst will have metadata
  attached.

  Reviewers: eraman
  Reviewed By: eraman
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D30799
  llvm-svn: 297500
* [SCEV] Decrease the recursion threshold for CompareValueComplexity
  Sanjoy Das | 2017-03-05 | 1 | -1/+37

  Fixes PR32142.

  r287232 accidentally increased the recursion threshold for
  CompareValueComplexity from 2 to 32. This change reverses that change by
  introducing a separate flag for CompareValueComplexity's threshold.

  llvm-svn: 296992
* Fix signed-unsigned comparison warning
  Sanjoy Das | 2017-02-25 | 1 | -1/+1

  llvm-svn: 296274
* [ValueTracking] Don't do an unchecked shift in ComputeNumSignBits
  Sanjoy Das | 2017-02-25 | 1 | -0/+19

  Summary:
  Previously we used to return a bogus result, 0, for IR like `ashr %val, -1`.

  I've also added an assert checking that `ComputeNumSignBits` at least returns
  1. That assert found an already checked in test case where we were returning a
  bad result for `ashr %val, -1`.

  Fixes PR32045.

  Reviewers: spatel, majnemer
  Reviewed By: spatel, majnemer
  Subscribers: efriedma, mcrosier, llvm-commits
  Differential Revision: https://reviews.llvm.org/D30311
  llvm-svn: 296273
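  The underlying C++ hazard is easy to reproduce outside of ValueTracking; a
  hedged, generic illustration of why the shift amount must be range-checked
  first:

    #include <cstdint>

    // Shifting by an amount >= the bit width of the operand is undefined
    // behavior in C++ (a shift amount of -1, reinterpreted as unsigned, is such
    // a value), so the amount has to be checked before the shift is performed.
    uint64_t maskForShift(uint64_t ShiftAmt, unsigned BitWidth /* <= 64 */) {
      if (ShiftAmt >= BitWidth)
        return 0;                       // bail out instead of shifting out of range
      return uint64_t(1) << ShiftAmt;   // in range, well defined
    }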
* [PM/LCG] Teach LCG to support spurious reference edges.
  Chandler Carruth | 2017-02-09 | 1 | -0/+81

  Somewhat amazingly, this only requires teaching it to clean them up when
  deleting a dead function from the graph. And we already have exactly the
  necessary data structures to do that in the parent RefSCCs.

  This allows ArgPromote to work in a much simpler way by merely letting
  reference edges linger in the graph after the causing IR is deleted. We will
  clean up these edges when we run any function pass over the IR, but don't
  remove them eagerly.

  This avoids all of the quadratic update issues both in the current pass
  manager and in my previous attempt with the new pass manager.

  Differential Revision: https://reviews.llvm.org/D29579
  llvm-svn: 294663
* [PM/LCG] Teach the LazyCallGraph how to replace a function without
  disturbing the graph or having to update edges.
  Chandler Carruth | 2017-02-09 | 1 | -81/+176

  This is motivated by porting argument promotion to the new pass manager.
  Because of how LLVM IR Function objects work, in order to change their
  signature a new object needs to be created. This is efficient and straight
  forward in the IR but previously was very hard to implement in LCG. We could
  easily replace the function a node in the graph represents. The challenging
  part is how to handle updating the edges in the graph.

  LCG previously used an edge to a raw function to represent a node that had not
  yet been scanned for calls and references. This was the core of its laziness.
  However, that model causes this kind of update to be very hard:
  1) The keys to lookup an edge need to be `Function*`s that would all need to
     be updated when we update the node.
  2) There will be some unknown number of edges that haven't transitioned from
     `Function*` edges to `Node*` edges.

  All of this complexity isn't necessary. Instead, we can always build a node
  around any function, always pointing edges at it and always using it as the
  key to lookup an edge. To maintain the laziness, we need to sink the *edges*
  of a node into a secondary object and explicitly model transitioning a node
  from empty to populated by scanning the function. This design seems much
  cleaner in a number of ways, but importantly there is now exactly *one* place
  where the `Function*` has to be updated!

  Some other cleanups that fall out of this include having something to model
  the *entry* edges more accurately. Rather than hand rolling parts of the node
  in the graph itself, we have an explicit `EdgeSequence` object that gives us
  exactly the functionality needed. We also have a consistent place to define
  the edge iterators and can use them for both the entry edges and the internal
  edges of the graph.

  The API used to model the separation between a node and its edges is
  intentionally very thin as most clients are expected to deal with nodes that
  have populated edges. We model this exactly as an optional does with an
  additional method to populate the edges when that is a reasonable thing for a
  client to do. This is based on API design suggestions from Richard Smith and
  David Blaikie, credit goes to them for helping pick how to model this without
  it being either too explicit or too implicit.

  The patch is somewhat noisy due to shifting around iterator types and new
  syntax for walking the edges of a node, but most of the functionality change
  is in the `Edge`, `EdgeSequence`, and `Node` types.

  Differential Revision: https://reviews.llvm.org/D29577
  llvm-svn: 294653
* [SCEV] Scale back the test added in r294181 as it goes quadratic in SCEV.
  Chandler Carruth | 2017-02-06 | 1 | -1/+5

  This test was immediately the slowest test in 'check-llvm' even in an
  optimized build and was driving up the total test time by 50% for me.

  Sanjoy has filed a PR about the quadratic behavior in SCEV but it is also
  concerning that the test still passes given that r294181 added a threshold at
  32 to SCEV. I've followed up on the original patch to figure out how this test
  should work long-term, but for now I want to get check-llvm to be fast again.

  llvm-svn: 294241
* [PM/LCG] Remove the lazy RefSCC formation from the LazyCallGraph during
  iteration.
  Chandler Carruth | 2017-02-06 | 1 | -267/+17

  The lazy formation of RefSCCs isn't really the most important part of the
  laziness here -- that has to do with walking the functions themselves -- and
  isn't essential to maintain. Originally, there were incremental update
  algorithms that relied on updates happening predominantly near the most recent
  RefSCC formed, but those have been replaced with ones that have much tighter
  general case bounds at this point. We do still perform asserts that only scale
  well due to this incrementality, but those are easy to place behind
  EXPENSIVE_CHECKS.

  Removing this simplifies the entire analysis by having a single up-front step
  that builds all of the RefSCCs in a direct Tarjan walk. We can even easily
  replace this with other or better algorithms at will and with much less
  confusion now that there is no iterator-based incremental logic involved. This
  removes a lot of complexity from LCG.

  Another advantage of moving in this direction is that it simplifies testing
  the system substantially as we no longer have to worry about observing and
  mutating the graph half-way through the RefSCC formation.

  We still need a somewhat special iterator for RefSCCs because we want the
  iterator to remain stable in the face of graph updates. However, this now
  merely involves relative indexing to the current RefSCC's position in the
  sequence which isn't too hard.

  Differential Revision: https://reviews.llvm.org/D29381
  llvm-svn: 294227
* [SCEV] limit recursion depth and operands number in getAddExpr
  Daniil Fukalov | 2017-02-06 | 1 | -0/+28

  For a quite big function with source like

    %add = add nsw i32 %mul, %conv
    %mul1 = mul nsw i32 %add, %conv
    %add2 = add nsw i32 %mul1, %add
    %mul3 = mul nsw i32 %add2, %add
    ; repeat a couple of thousand times

  that can be produced by loop unroll, getAddExpr() tries to recursively
  construct SCEV and runs for an almost infinite time.

  Added a recursion depth restriction (with a new parameter to set it).

  Reviewers: sanjoy
  Subscribers: hfinkel, llvm-commits, mzolotukhin
  Differential Revision: https://reviews.llvm.org/D28158
  llvm-svn: 294181
* [Analysis] Add LibFunc_ prefix to enums in TargetLibraryInfo. (NFC)
  David L. Jones | 2017-01-23 | 1 | -4/+4

  Summary:
  The LibFunc::Func enum holds enumerators named for libc functions.
  Unfortunately, there are real situations, including libc implementations,
  where function names are actually macros (musl uses "#define fopen64 fopen",
  for example; any other transitively visible macro would have similar effects).

  Strictly speaking, a conforming C++ Standard Library should provide any such
  macros as functions instead (via <cstdio>). However, there are some "library"
  functions which are not part of the standard, and thus not subject to this
  rule (fopen64, for example). So, in order to be both portable and consistent,
  the enum should not use the bare function names.

  The old enum naming used a namespace LibFunc and an enum Func, with bare
  enumerators. This patch changes LibFunc to be an enum with enumerators
  prefixed with "LibFunc_". (Unfortunately, a scoped enum is not sufficient to
  override macros.)

  There are additional changes required in clang.

  Reviewers: rsmith
  Subscribers: mehdi_amini, mzolotukhin, nemanjai, llvm-commits
  Differential Revision: https://reviews.llvm.org/D28476
  llvm-svn: 292848
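  A hedged before/after sketch of what the renaming means for callers of
  TargetLibraryInfo (illustrative usage, not code from the patch):

    #include "llvm/Analysis/TargetLibraryInfo.h"

    bool hasFopen64(const llvm::TargetLibraryInfo &TLI) {
      // Old spelling (broken when libc defines fopen64 as a macro):
      //   TLI.has(LibFunc::fopen64);
      // New spelling, immune to such macros:
      return TLI.has(llvm::LibFunc_fopen64);
    }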
* [LoopInfo] Add helper methods to compute two useful orderings of the
  loops in a function.
  Chandler Carruth | 2017-01-20 | 1 | -0/+75

  These are relatively confusing to talk about and compute correctly so it seems
  really good to write down their implementation in one place. I've replaced one
  place we needed this in the loop PM infrastructure and I have another place in
  a pending patch that wants it.

  We can't quite use this for the core loop PM walk because there we're
  sometimes working on a sub-forest.

  I'll add the expected unittests before committing this but wanted to make sure
  folks were happy with these names / comments.

  Credit goes to Richard Smith for the idea for naming the order where siblings
  are in reverse program order but the tree traversal remains preorder.

  Differential Revision: https://reviews.llvm.org/D28932
  llvm-svn: 292569
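  The two orderings can be consumed like this (a sketch assuming the helpers
  ended up named getLoopsInPreorder and getLoopsInReverseSiblingPreorder, as
  they are in LoopInfo today):

    #include "llvm/Analysis/LoopInfo.h"

    void visitLoops(llvm::LoopInfo &LI) {
      // Tree preorder over the whole loop forest: outer loops before children.
      for (llvm::Loop *L : LI.getLoopsInPreorder())
        (void)L; // process outer-before-inner
      // Same preorder, but siblings are visited in reverse program order.
      for (llvm::Loop *L : LI.getLoopsInReverseSiblingPreorder())
        (void)L;
    }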