path: root/llvm/lib/Analysis
Commit message | Author | Date | Files | Lines
* [ThinLTO] Use DenseSet instead of SmallPtrSet for holding GUIDs | Teresa Johnson | 2017-01-05 | 1 | -4/+4
    Should fix some more bot failures from r291108. This should have been a DenseSet, since GUID is not a pointer type. It caused some bots to fail, but for some reason I wasn't getting a build failure.
    llvm-svn: 291115
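    A minimal sketch of the container change this commit describes. DenseSet, SmallPtrSet, and GlobalValue::GUID are real LLVM types; the variable and helper names below are hypothetical, used only for illustration.

        #include "llvm/ADT/DenseSet.h"
        #include "llvm/IR/GlobalValue.h"

        // SmallPtrSet requires pointer keys, but GlobalValue::GUID is a
        // uint64_t typedef, so DenseSet is the appropriate set type here.
        llvm::DenseSet<llvm::GlobalValue::GUID> ReferencedGUIDs;

        // Hypothetical helper: record that a GUID was referenced.
        void noteReference(llvm::GlobalValue::GUID G) {
          ReferencedGUIDs.insert(G);
        }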
* [ThinLTO] Subsume all importing checks into a single flag | Teresa Johnson | 2017-01-05 | 1 | -26/+71
    Summary: This adds a new summary flag NotEligibleToImport that subsumes several existing flags (NoRename, HasInlineAsmMaybeReferencingInternal and IsNotViableToInline). It also subsumes the checking of references on the summary that was being done during the thin link by eligibleForImport() for each candidate. It is much more efficient to do that checking once during the per-module summary build and record it in the summary.
    Reviewers: mehdi_amini
    Subscribers: llvm-commits
    Differential Revision: https://reviews.llvm.org/D28169
    llvm-svn: 291108
* Currently isLikelyComplexAddressComputation tries to figure out if the given stride seems to be 'complex' and needs some extra cost for address computation handling | Mohammed Agabaria | 2017-01-05 | 1 | -2/+3
    This code seems to be target dependent, which may not be the same for all targets. Passed the decision whether the given stride is complex or not to the target by sending stride information via SCEV to getAddressComputationCost instead of 'IsComplex'. Specifically, on X86 targets we don't see any significant address computation cost for strided accesses in general.
    Differential Revision: https://reviews.llvm.org/D27518
    llvm-svn: 291106
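    A rough sketch of how a cost query of this shape might be invoked. The exact getAddressComputationCost parameter list is an assumption based on the description above (it has changed across LLVM versions), and the wrapper function and argument names are hypothetical.

        #include "llvm/Analysis/ScalarEvolution.h"
        #include "llvm/Analysis/TargetTransformInfo.h"

        // Instead of a boolean IsComplex, the stride is communicated to the
        // target as a SCEV so each target can price the address computation.
        auto queryAddrCost(const llvm::TargetTransformInfo &TTI,
                           llvm::Type *AccessTy, llvm::ScalarEvolution *SE,
                           const llvm::SCEV *PtrSCEV) {
          return TTI.getAddressComputationCost(AccessTy, SE, PtrSCEV);
        }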
* [ValueTracking] remove stale comments; NFC | Sanjay Patel | 2017-01-02 | 1 | -6/+0
    The checks were improved with: https://reviews.llvm.org/rL290194
    llvm-svn: 290826
* AVX-512 Loop Vectorizer: Cost calculation for interleave load/store patterns. | Elena Demikhovsky | 2017-01-02 | 1 | -0/+32
    The X86 target does not provide any target-specific cost calculation for interleave patterns. It uses the common target-independent calculation, which gives very high numbers. As a result, the scalar version is chosen in many cases. The situation on AVX-512 is even worse, since we have 3-src shuffles that significantly reduce the cost. In this patch I calculate the cost on AVX-512. It will allow comparing the interleave pattern with gather/scatter and choosing a better solution (PR31426).
    * Shuffle-broadcast cost will be changed in Simon's upcoming patch.
    Differential Revision: https://reviews.llvm.org/D28118
    llvm-svn: 290810
* Fix an issue with isGuaranteedToTransferExecutionToSuccessor | Sanjoy Das | 2016-12-31 | 1 | -6/+20
    I'm not sure if this was intentional, but today isGuaranteedToTransferExecutionToSuccessor returns true for readonly and argmemonly calls that may throw. This commit changes the function to not implicitly infer nounwind this way.
    Even if we eventually specify readonly calls as not throwing, isGuaranteedToTransferExecutionToSuccessor is not the best place to infer that. We should instead teach FunctionAttrs or some other such pass to tag readonly functions / calls as nounwind instead.
    llvm-svn: 290794
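    A hedged sketch of the distinction the commit draws: a call that cannot write memory can still unwind, so readonly alone must not imply nounwind. This is an illustration only, not the real ValueTracking implementation; the function name is hypothetical, while doesNotThrow()/doesNotReturn() are real CallInst queries.

        #include "llvm/IR/Instructions.h"

        // Illustrative check: only an explicit nounwind (doesNotThrow) plus the
        // ability to return lets us claim execution reaches the successor.
        // readonly/argmemonly say nothing about unwinding, so they are ignored.
        bool callTransfersToSuccessor(const llvm::CallInst &CI) {
          return CI.doesNotThrow() && !CI.doesNotReturn();
        }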
* Avoid const_cast; NFC | Sanjoy Das | 2016-12-31 | 1 | -2/+3
    llvm-svn: 290793
* [ValueTracking] make dominator tree requirement explicit for isKnownNonNullFromDominatingCondition(); NFCI | Sanjay Patel | 2016-12-31 | 1 | -1/+6
    I don't think this hole is currently exposed, but I crashed regression tests for jump-threading and loop-vectorize after I added calls to isKnownNonNullAt() in InstSimplify as part of trying to solve PR28430: https://llvm.org/bugs/show_bug.cgi?id=28430
    That's because they call into value tracking with a context instruction, but no other parts of the query structure filled in.
    For more background, see the discussion in: https://reviews.llvm.org/D27855
    llvm-svn: 290786
* [LVI] Remove count/erase idiom in favor of checking result value of erase | Philip Reames | 2016-12-30 | 1 | -6/+2
    Minor compile time win. Avoids an additional O(N) scan in the case where we are removing an element and costs nothing when we aren't.
    llvm-svn: 290768
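    A minimal sketch of the idiom change this commit describes, with a hypothetical container and key. DenseSet::erase already reports whether an element was removed, so the separate count() lookup is redundant.

        #include "llvm/ADT/DenseSet.h"

        void removeIfPresent(llvm::DenseSet<int> &Seen, int Key) {
          // Before: two lookups.
          //   if (Seen.count(Key))
          //     Seen.erase(Key);

          // After: erase() returns true iff something was removed, so one call
          // both removes the element and tells us whether it was present.
          if (Seen.erase(Key)) {
            // ... react to the removal here if needed ...
          }
        }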
* [MemDep] Handle gep with zeros for invariant.group | Piotr Padlewski | 2016-12-30 | 1 | -20/+39
    Summary: gep 0, 0 is equivalent to bitcast. LLVM canonicalizes it to getelementptr because SROA can then handle it. A simple case like

        void g(A &a) {
          z(a);
          if (glob)
            a.foo();
        }
        void testG() {
          A a;
          g(a);
        }

    was not devirtualized with -fstrict-vtable-pointers because of the lack of handling for gep 0 in Memory Dependence Analysis.
    Reviewers: dberlin, nlewycky, chandlerc
    Subscribers: llvm-commits
    Differential Revision: https://reviews.llvm.org/D28126
    llvm-svn: 290763
* [LVI] Manually hoist computation from loop | Philip Reames | 2016-12-30 | 1 | -7/+12
    Minor compile time win. Not known to be a hot spot, just something I noticed while reading.
    llvm-svn: 290759
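    A generic sketch of the kind of manual hoisting described above, not the actual LVI code; the loop body, helpers, and values are all hypothetical.

        #include <vector>

        void example(const std::vector<int> &Items) {
          auto computeInvariant = [] { return 42; };           // hypothetical invariant computation
          auto process = [](int Item, int Inv) { (void)Item; (void)Inv; };

          // Before: computeInvariant() was re-evaluated on every iteration.
          // After: compute the loop-invariant value once, outside the loop.
          const int Inv = computeInvariant();
          for (int Item : Items)
            process(Item, Inv);
        }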
* [PM] Teach the CGSCC's CG update utility to more carefully invalidate | Chandler Carruth | 2016-12-28 | 2 | -20/+66
    analyses when we're about to break apart an SCC.
    We can't wait until after breaking apart the SCC to invalidate things:
    1) Which SCC do we then invalidate? All of them?
    2) Even if we invalidate all of them, a newly created SCC may not have a proxy that will convey the invalidation to functions!
    Previously we only invalidated one of the SCCs and too late. This led to stale analyses remaining in the cache. And because the caching strategy actually works, they would get used and chaos would ensue.
    Doing invalidation early is somewhat pessimizing though if we *know* that the SCC structure won't change. So it turns out that the design to make the mutation API force the caller to know the *kind* of mutation in advance was indeed 100% correct and we didn't do enough of it. So this change also splits two cases of switching a call edge to a ref edge into two separate APIs so that callers can clearly test for this and take the easy path without invalidating when appropriate. This is particularly important in this case as we expect most inlines to be between functions in separate SCCs and so the common case is that we don't have to so aggressively invalidate analyses.
    The LCG API change in turn needed some basic cleanups and better testing in its unittest. No interesting functionality changed there other than more coverage of the returned sequence of SCCs.
    While this seems like an obvious improvement over the current state, I'd like to revisit the core concept of invalidating within the CG-update layer at all. I'm wondering if we would be better served forcing the callers to handle the invalidation beforehand in the cases that they can handle it. An interesting example is when we want to teach the inliner to *update and preserve* analyses. But we can cross that bridge when we get there.
    With this patch, the new pass manager can build all of the LLVM test suite at -O3 and everything passes. =D I haven't bootstrapped yet and I'm sure there are still plenty of bugs, but this gives a nice baseline so I'm going to increasingly focus on fleshing out the missing functionality, especially the bits that are just turned off right now in order to let us establish this baseline.
    llvm-svn: 290664
* [LCG] Teach the ref edge removal to handle a ref edge that is trivial | Chandler Carruth | 2016-12-28 | 1 | -1/+7
    due to a call cycle. This actually crashed the ref removal before. I've added a unittest that covers this kind of interesting graph structure and mutation.
    llvm-svn: 290645
* [PM] Teach MemDep to invalidate its result object when its cached | Chandler Carruth | 2016-12-27 | 1 | -0/+18
    analysis handles become invalid. Add a test case for its invalidation logic.
    llvm-svn: 290620
* [PM] Remove a pointless optimization. | Chandler Carruth | 2016-12-27 | 2 | -6/+0
    There is no need to do this within an analysis. That method shouldn't even be reached if this predicate holds as the actual useful optimization is in the analysis manager itself.
    llvm-svn: 290614
* [ThinLTO] Fix "||" vs "|" mixup. | Teresa Johnson | 2016-12-27 | 1 | -1/+1
    The effect of the bug was that we would incorrectly create summaries for global and weak values defined in module asm (since we were essentially testing for bit 1 which is SF_Undefined, and the RecordStreamer ignores local undefined references). This would have resulted in conservatively disabling importing of anything referencing globals and weaks defined in module asm. Added these cases to the test which now fails without this bug fix.
    Fixes PR31459.
    llvm-svn: 290610
* [MemDep] Operand visited twice bugfix | Piotr Padlewski | 2016-12-27 | 1 | -0/+1
    Because the operand was not marked as seen, it was visited twice. It doesn't change the behavior of the optimization, it just saves a redundant visit, so no test changes.
    llvm-svn: 290607
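    A generic worklist sketch of the "mark as seen before pushing" pattern this fix restores. It is not the MemDep code itself; the function name and visit callback are hypothetical, while the ADT containers and IR types are real LLVM APIs.

        #include "llvm/ADT/SmallPtrSet.h"
        #include "llvm/ADT/SmallVector.h"
        #include "llvm/IR/User.h"
        #include "llvm/IR/Value.h"
        #include "llvm/Support/Casting.h"

        // Marking a value as seen at push time (not pop time) guarantees each
        // operand is enqueued, and therefore visited, at most once.
        void walkOnce(llvm::Value *Root, void (*Visit)(llvm::Value *)) {
          llvm::SmallPtrSet<llvm::Value *, 16> Seen;
          llvm::SmallVector<llvm::Value *, 16> Worklist;
          if (Seen.insert(Root).second)
            Worklist.push_back(Root);
          while (!Worklist.empty()) {
            llvm::Value *V = Worklist.pop_back_val();
            Visit(V);
            if (auto *U = llvm::dyn_cast<llvm::User>(V))
              for (llvm::Value *Op : U->operands())
                if (Seen.insert(Op).second) // skip operands already enqueued
                  Worklist.push_back(Op);
          }
        }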
* [PM] Teach BasicAA how to invalidate its result object. | Chandler Carruth | 2016-12-27 | 1 | -0/+18
    This requires custom handling because BasicAA caches handles to other analyses and so it needs to trigger indirect invalidation.
    This fixes one of the common crashes when using the new PM in real pipelines. I've also tweaked a regression test to check that we are at least handling the most immediate case.
    I'm going to work at re-structuring this test some to both scale better (rather than all being in one file) and check more invalidation paths in a follow-up commit, but I wanted to get the basic bug fix in place.
    llvm-svn: 290603
* [PM] Teach the AAManager and AAResults layer (the worst offender for | Chandler Carruth | 2016-12-27 | 1 | -1/+21
    inter-analysis dependencies) to use the new invalidation infrastructure.
    This teaches it to invalidate itself when any of the peer function AA results that it uses become invalid. We do this by just tracking the originating IDs. I've kept it in a somewhat clunky API since some users of AAResults are outside the new PM right now. We can clean this API up if/when those users go away.
    Secondly, it uses the registration on the outer analysis manager proxy to trigger deferred invalidation when a module analysis result becomes invalid.
    I've included test cases that specifically try to trigger use-after-free in both of these cases and they would crash or hang pretty horribly for me even without ASan. Now they work nicely.
    The `InvalidateAnalysis` utility pass required some tweaking to be useful in this context and it still is pretty garbage. I'd like to switch it back to the previous implementation and teach the explicit invalidate method on the AnalysisManager to take care of correctly triggering indirect invalidation, but I wanted to go ahead and send this out so folks could see how all of this stuff works together in practice. And, you know, that it does actually work. =]
    Differential Revision: https://reviews.llvm.org/D27205
    llvm-svn: 290595
* [PM] Introduce the facilities for registering cross-IR-unit dependencies | Chandler Carruth | 2016-12-27 | 2 | -6/+47
    that require deferred invalidation.
    This handles the other real-world invalidation scenario that we have cases of: a function analysis which caches references to a module analysis. We currently do this in the AA aggregation layer and might well do this in other places as well.
    Since this is relatively rare, the technique is somewhat more cumbersome. Analyses need to register themselves when accessing the outer analysis manager's proxy. This proxy is already necessarily present to allow access to the outer IR unit's analyses. By registering here we can track and trigger invalidation when that outer analysis goes away.
    To make this work we need to enhance the PreservedAnalyses infrastructure to support a (slightly) more explicit model for "sets" of analyses, and allow abandoning a single specific analysis even when a set covering that analysis is preserved. That allows us to describe the scenario of preserving all Function analyses *except* for the one where deferred invalidation has triggered.
    We also need to teach the invalidator API to support direct ID calls instead of always going through a template to dispatch so that we can just record the ID mapping.
    I've introduced testing of all of this both for simple module<->function cases as well as for more complex cases involving a CGSCC layer.
    Much like the previous patch I've not tried to fully update the loop pass management layer because that layer is due to be heavily reworked to use similar techniques to the CGSCC to handle updates. As that happens, we'll have a better testing basis for adding support like this.
    Many thanks to both Justin and Sean for the extensive reviews on this to help bring the API design and documentation into a better state.
    Differential Revision: https://reviews.llvm.org/D27198
    llvm-svn: 290594
* [Analysis] Ignore `nobuiltin` on `allocsize` function calls. | George Burgess IV | 2016-12-27 | 1 | -10/+15
    We currently ignore the `allocsize` attribute on function calls with the `nobuiltin` attribute when trying to lower `@llvm.objectsize`. We shouldn't care about `nobuiltin` here: `allocsize` is explicitly added by the user, not inferred based on a function's symbol.
    llvm-svn: 290588
* [Analysis] Refactor as promised in r290397. | George Burgess IV | 2016-12-27 | 1 | -16/+22
    This also makes us no longer check for `allocsize` on intrinsic calls. This shouldn't matter, since intrinsics should provide the information we get from `allocsize` on their own.
    llvm-svn: 290585
* [InstCombiner] Simplify lib calls to `round{,f}` | Bryant Wong | 2016-12-26 | 1 | -0/+6
    Differential Revision: https://reviews.llvm.org/D28110
    llvm-svn: 290542
* [AliasAnalysis] Teach BasicAA about memcpy. | Bryant Wong | 2016-12-25 | 1 | -0/+25
    Differential Revision: https://reviews.llvm.org/D27034
    llvm-svn: 290526
* [MemDep] NFC changes | Piotr Padlewski | 2016-12-23 | 1 | -2/+1
    llvm-svn: 290428
* Don't consider allocsize functions to be allocation functions. | George Burgess IV | 2016-12-23 | 1 | -23/+28
    This patch fixes some ASAN unittest failures on FreeBSD. See the cfe-commits email thread for r290169 for more on those.
    According to the LangRef, the allocsize attribute only tells us about the number of bytes that exist at the memory location pointed to by the return value of a function. It does not necessarily mean that the function will only ever allocate. So, we need to be very careful about treating functions with allocsize as general allocation functions. This patch makes us fully conservative in this regard, though I suspect that we have room to be a bit more aggressive if we want.
    This has a FIXME that can be fixed by a relatively straightforward refactor; I just wanted to keep this patch minimal. If this sticks, I'll come back and fix it in a few days.
    llvm-svn: 290397
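    A hedged sketch of the conservative distinction described above: `allocsize` only describes the size of the object behind the returned pointer, so its presence must not be treated as proof that the callee behaves like an allocator. The helper name is hypothetical; Attribute::AllocSize and hasFnAttribute are real LLVM APIs.

        #include "llvm/IR/Function.h"

        // True if the function carries a size hint for its return value.
        // Note: this is only a size hint; deciding "is this an allocation
        // function?" must rely on separate allocator knowledge, not on this.
        bool hasAllocSizeHint(const llvm::Function &F) {
          return F.hasFnAttribute(llvm::Attribute::AllocSize);
        }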
* [PM] Remove now-dead extern template and explicit instantiation | Chandler Carruth | 2016-12-22 | 1 | -2/+0
    declarations. We're using a custom class here instead of the helper template; these bits just didn't get deleted when the other bits did get deleted.
    This was found by a really nice MSVC warning about explicitly instantiating a template where some member functions aren't defined and thus can't be instantiated.
    llvm-svn: 290327
* [PM] Introduce a reasonable port of the main per-module pass pipeline | Chandler Carruth | 2016-12-22 | 1 | -0/+1
    from the old pass manager in the new one.
    I'm not trying to support (initially) the numerous options that are currently available to customize the pass pipeline. If we end up really wanting them, we can add them later, but I suspect many are no longer interesting. The simplicity of omitting them will help a lot as we sort out what the pipeline should look like in the new PM.
    I've also documented to the best of my ability *why* each pass or group of passes is used so that reading the pipeline is more helpful. In many cases I think we have some questionable choices of ordering and I've left FIXME comments in place so we know what to come back and revisit going forward. But for now, I've left it as similar to the current pipeline as I could.
    Lastly, I've had to comment out several places where passes are not ported to the new pass manager or where the loop pass infrastructure is not yet ready. I did at least fix a few bugs in the loop pass infrastructure uncovered by running the full pipeline, but I didn't want to go too far in this patch -- I'll come back and re-enable these as the infrastructure comes online. But I'd like to keep the comments in place because I don't want to lose track of which passes need to be enabled and where they go.
    One thing that seemed like a significant API improvement was to require that we don't build pipelines for O0. It seems to have no real benefit.
    I've also switched back to returning pass managers by value as at this API layer it feels much more natural to me for composition. But if others disagree, I'm happy to go back to an output parameter.
    I'm not 100% happy with the testing strategy currently, but it seems at least OK. I may come back and try to refactor or otherwise improve this in subsequent patches but I wanted to at least get a good starting point in place.
    Differential Revision: https://reviews.llvm.org/D28042
    llvm-svn: 290325
* IR: Function summary representation for type tests. | Peter Collingbourne | 2016-12-21 | 1 | -6/+28
    Each function summary has an attached list of type identifier GUIDs. The idea is that during the regular LTO phase we would match these GUIDs to type identifiers defined by the regular LTO module and store the resolutions in a top-level "type identifier summary" (which will be implemented separately).
    Differential Revision: https://reviews.llvm.org/D27967
    llvm-svn: 290280
* TypeMetadataUtils: Simplify; spotted by Mehdi. | Peter Collingbourne | 2016-12-21 | 1 | -2/+1
    llvm-svn: 290264
* [ConstantFolding] Fix vector GEPs harder | Michael Kuperstein | 2016-12-21 | 1 | -3/+6
    For vector GEPs, CastGEPIndices can end up in an infinite recursion, because we compare the vector type to the scalar pointer type, find them different, and then try to cast a type to itself.
    Differential Revision: https://reviews.llvm.org/D28009
    llvm-svn: 290260
* [CostModel] Pass shuffle mask args with ArrayRef. NFCI. | Simon Pilgrim | 2016-12-21 | 1 | -2/+2
    llvm-svn: 290257
* [Analysis] Centralize objectsize lowering logic. | George Burgess IV | 2016-12-20 | 1 | -0/+30
    We're currently doing nearly the same thing for @llvm.objectsize in three different places: two of them are missing checks for overflow, and one of them could subtly break if InstCombine gets much smarter about removing alloc sites. Seems like a good idea to not do that.
    llvm-svn: 290214
* [SCEV] Be less conservative when extending bitwidths for computing ranges. | Michael Zolotukhin | 2016-12-20 | 1 | -7/+6
    Summary: In getRangeForAffineAR we compute ranges for affine exprs E = A + B*C, where ranges for A, B, and C are known. To avoid overflow, we need to operate on a bigger bitwidth, and originally we chose 2*x+1 for this (x being the original bitwidth). However, it is safe to use just 2*x:

        A + B*C <= (2^x - 1) + (2^x - 1)*(2^x - 1)
                 = 2^x - 1 + 2^2x - 2^x - 2^x + 1
                 = 2^2x - 2^x
                <= 2^2x - 1

    Unnecessary extending of bitwidths results in noticeable slowdowns: ranges perform arithmetic operations using APInt, which are much slower when bitwidths are bigger than 64.
    Reviewers: sanjoy, majnemer, chandlerc
    Subscribers: llvm-commits
    Differential Revision: https://reviews.llvm.org/D27795
    llvm-svn: 290211
* IR: Eliminate non-determinism in the module summary analysis. | Peter Collingbourne | 2016-12-20 | 1 | -25/+23
    Also make the summary ref and call graph vectors immutable. This means a smaller API surface and fewer places to audit for non-determinism.
    Differential Revision: https://reviews.llvm.org/D27875
    llvm-svn: 290200
* Use MaxDepth instead of repeating its value | Matt Arsenault | 2016-12-20 | 1 | -3/+3
    llvm-svn: 290194
* Replace std::find_if with llvm::find_if. NFC. | George Burgess IV | 2016-12-20 | 1 | -5/+4
    llvm-svn: 290190
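    A minimal sketch of the range-based helper this commit switches to: llvm::find_if from STLExtras.h takes a range plus predicate instead of an iterator pair. The container and predicate below are hypothetical.

        #include "llvm/ADT/STLExtras.h"
        #include <vector>

        int findFirstNegative(const std::vector<int> &Vals) {
          // Before: std::find_if(Vals.begin(), Vals.end(), pred);
          // After: the range-based wrapper states the intent more directly.
          auto It = llvm::find_if(Vals, [](int V) { return V < 0; });
          return It == Vals.end() ? 0 : *It;
        }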
* [PM] Rework a loop in the CGSCC update logic to be more conservative and | Chandler Carruth | 2016-12-20 | 1 | -7/+11
    clear. The current RefSCC can occur in exactly one position so we should just enforce that and leverage the property rather than checking for it anywhere. This addresses review comments made on another patch.
    llvm-svn: 290162
* [PM] Provide an initial, minimal port of the inliner to the new pass manager. | Chandler Carruth | 2016-12-20 | 1 | -3/+3
    This doesn't implement *every* feature of the existing inliner, but tries to implement the most important ones for building a functional optimization pipeline and beginning to sort out bugs, regressions, and other problems.
    Notable, but intentional omissions:
    - No alloca merging support. Why? Because it isn't clear we want to do this at all. Active discussion and investigation is going on to remove it, so for simplicity I omitted it.
    - No support for trying to iterate on "internally" devirtualized calls. Why? Because it adds what I suspect is inappropriate coupling for little or no benefit. We will have an outer iteration system that tracks devirtualization including that from function passes and iterates already. We should improve that rather than approximate it here.
    - Optimization remarks. Why? Purely to make the patch smaller, no other reason at all.
    The last one I'll probably work on almost immediately. But I wanted to skip it in the initial patch to try to focus the change as much as possible as there is already a lot of code moving around and both of these *could* be skipped without really disrupting the core logic.
    A summary of the different things happening here:
    1) Adding the usual new PM class and rigging.
    2) Fixing minor underlying assumptions in the inline cost analysis or inline logic that don't generally hold in the new PM world.
    3) Adding the core pass logic which is in essence a loop over the calls in the nodes in the call graph. This is a bit duplicated from the old inliner, but only a handful of lines could realistically be shared. (I tried at first, and it really didn't help anything.) All told, this is only about 100 lines of code, and most of that is the mechanics of wiring up analyses from the new PM world.
    4) Updating the LazyCallGraph (in the new PM) based on the *newly inlined* calls and references. This is very minimal because we cannot form cycles.
    5) When inlining removes the last use of a function, eagerly nuking the body of the function so that any "one use remaining" inline cost heuristics are immediately refined, and queuing these functions to be completely deleted once inlining is complete and the call graph updated to reflect that they have become dead.
    6) After all the inlining for a particular function, updating the LazyCallGraph and the CGSCC pass manager to reflect the function-local simplifications that are done immediately and internally by the inline utilities. These are the exact same fundamental set of CG updates done by arbitrary function passes.
    7) Adding a bunch of test cases to specifically target CGSCC and other subtle aspects in the new PM world.
    Many thanks to the careful review from Easwaran and Sanjoy and others!
    Differential Revision: https://reviews.llvm.org/D24226
    llvm-svn: 290161
* [IR] Remove the DIExpression field from DIGlobalVariable. | Adrian Prantl | 2016-12-20 | 1 | -1/+2
    This patch implements PR31013 by introducing a DIGlobalVariableExpression that holds a pair of DIGlobalVariable and DIExpression.
    Currently, DIGlobalVariable holds a DIExpression. This is not the best way to model this:
    (1) The DIGlobalVariable should describe the source level variable, not how to get to its location.
    (2) It makes it unsafe/hard to update the expressions when we call replaceExpression on the DIGlobalVariable.
    (3) It makes it impossible to represent a global variable that is in more than one location (e.g., a variable with multiple DW_OP_LLVM_fragment-s).
    We also moved away from attaching the DIExpression to DILocalVariable for the same reasons.
    This reapplies r289902 with additional testcase upgrades and a change to the Bitcode record for DIGlobalVariable that makes upgrading the old format unambiguous also for variables without DIExpressions.
    <rdar://problem/29250149>
    https://llvm.org/bugs/show_bug.cgi?id=31013
    Differential Revision: https://reviews.llvm.org/D26769
    llvm-svn: 290153
* Add files I seem to have dropped in my revert (r290086). | Daniel Jasper | 2016-12-19 | 1 | -0/+140
    Sorry!
    llvm-svn: 290087
* Revert @llvm.assume with operand bundles (r289755-r289757) | Daniel Jasper | 2016-12-19 | 14 | -420/+465
    This creates non-linear behavior in the inliner (see more details in r289755's commit thread).
    llvm-svn: 290086
* Retry: [BPI] Use a safer constructor to calculate branch probabilities | Vedant Kumar | 2016-12-17 | 1 | -12/+12
    BPI may trigger signed overflow UB while computing branch probabilities for cold calls or to unreachables. For example, with our current choice of weights, we'll crash if there are >= 2^12 branches to an unreachable.
    Use a safer BranchProbability constructor which is better at handling fractions with large denominators.
    Changes since the initial commit:
    - Use explicit casts to ensure that multiplication operands are 64-bit ints.
    rdar://problem/29368161
    Differential Revision: https://reviews.llvm.org/D27862
    llvm-svn: 290022
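    A hedged sketch of the safer construction path the commit describes. BranchProbability::getBranchProbability(uint64_t, uint64_t) is a real factory (it is the one named in the assertion quoted in the revert below); the wrapper function and the weight arguments are illustrative only.

        #include "llvm/Support/BranchProbability.h"
        #include <cstdint>

        // Forcing 64-bit operands and using the factory avoids the signed/32-bit
        // overflow that raw weight arithmetic could hit with large denominators.
        // Caller must ensure ColdWeight <= TotalWeight (probability <= 1).
        llvm::BranchProbability coldEdgeProbability(uint64_t ColdWeight,
                                                    uint64_t TotalWeight) {
          return llvm::BranchProbability::getBranchProbability(ColdWeight,
                                                               TotalWeight);
        }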
* Revert "[BPI] Use a safer constructor to calculate branch probabilities" | Vedant Kumar | 2016-12-17 | 1 | -12/+12
    This reverts commit r290016. It breaks this bot, even though the test passes locally: http://bb.pgr.jp/builders/ninja-x64-msvc-RA-centos6/builds/32956/

        AnalysisTests: /home/bb/ninja-x64-msvc-RA-centos6/llvm-project/llvm/lib/Support/BranchProbability.cpp:52: static llvm::BranchProbability llvm::BranchProbability::getBranchProbability(uint64_t, uint64_t): Assertion `Numerator <= Denominator && "Probability cannot be bigger than 1!"' failed.

    llvm-svn: 290019
* [BPI] Use a safer constructor to calculate branch probabilities | Vedant Kumar | 2016-12-17 | 1 | -12/+12
    BPI may trigger signed overflow UB while computing branch probabilities for cold calls or to unreachables. For example, with our current choice of weights, we'll crash if there are >= 2^12 branches to an unreachable.
    Use a safer BranchProbability constructor which is better at handling fractions with large denominators.
    rdar://problem/29368161
    Differential Revision: https://reviews.llvm.org/D27862
    llvm-svn: 290016
* ModuleSummaryAnalysis: Remove some duplicate code. NFCI. | Peter Collingbourne | 2016-12-16 | 1 | -5/+0
    llvm-svn: 290003
* Revert "[IR] Remove the DIExpression field from DIGlobalVariable." | Adrian Prantl | 2016-12-16 | 1 | -2/+1
    This reverts commit 289920 (again). I forgot to implement a Bitcode upgrade for the case where a DIGlobalVariable has no DIExpression. Unfortunately it is not possible to safely upgrade these variables without adding a flag to the bitcode record indicating which version they are.
    My plan of record is to roll the planned follow-up patch that adds a unit: field to DIGlobalVariable into this patch before recommitting. This way we only need one Bitcode upgrade for both changes (with a version flag in the bitcode record to safely distinguish the record formats).
    Sorry for the churn!
    llvm-svn: 289982
* [IR] Remove the DIExpression field from DIGlobalVariable. | Adrian Prantl | 2016-12-16 | 1 | -1/+2
    This patch implements PR31013 by introducing a DIGlobalVariableExpression that holds a pair of DIGlobalVariable and DIExpression.
    Currently, DIGlobalVariable holds a DIExpression. This is not the best way to model this:
    (1) The DIGlobalVariable should describe the source level variable, not how to get to its location.
    (2) It makes it unsafe/hard to update the expressions when we call replaceExpression on the DIGlobalVariable.
    (3) It makes it impossible to represent a global variable that is in more than one location (e.g., a variable with multiple DW_OP_LLVM_fragment-s).
    We also moved away from attaching the DIExpression to DILocalVariable for the same reasons.
    This reapplies r289902 with additional testcase upgrades.
    <rdar://problem/29250149>
    https://llvm.org/bugs/show_bug.cgi?id=31013
    Differential Revision: https://reviews.llvm.org/D26769
    llvm-svn: 289920
* Revert "[IR] Remove the DIExpression field from DIGlobalVariable." | Adrian Prantl | 2016-12-16 | 1 | -2/+1
    This reverts commit 289902 while investigating bot breakage.
    llvm-svn: 289906
* [IR] Remove the DIExpression field from DIGlobalVariable. | Adrian Prantl | 2016-12-16 | 1 | -1/+2
    This patch implements PR31013 by introducing a DIGlobalVariableExpression that holds a pair of DIGlobalVariable and DIExpression.
    Currently, DIGlobalVariable holds a DIExpression. This is not the best way to model this:
    (1) The DIGlobalVariable should describe the source level variable, not how to get to its location.
    (2) It makes it unsafe/hard to update the expressions when we call replaceExpression on the DIGlobalVariable.
    (3) It makes it impossible to represent a global variable that is in more than one location (e.g., a variable with multiple DW_OP_LLVM_fragment-s).
    We also moved away from attaching the DIExpression to DILocalVariable for the same reasons.
    <rdar://problem/29250149>
    https://llvm.org/bugs/show_bug.cgi?id=31013
    Differential Revision: https://reviews.llvm.org/D26769
    llvm-svn: 289902