path: root/llvm/lib/Transforms/Utils/InlineFunction.cpp
Commit message (Author, Date, Files, Lines changed)
* Inliner: Don't mark swifterror allocas with lifetime markers (Arnold Schwaighofer, 2016-09-09, 1 file, -0/+3)
  This would create a bitcast use which fails the verifier: swifterror values may only be used by loads, stores, and as function arguments.
  rdar://28233244
  llvm-svn: 281114
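  For illustration only (this sketch is not part of the commit; the names are invented), the kind of callee whose swifterror alloca must not receive lifetime markers during inlining looks roughly like this:
    %swift.error = type opaque
    declare float @use_error(%swift.error** swifterror)

    define float @callee() {
    entry:
      %error = alloca swifterror %swift.error*
      store %swift.error* null, %swift.error** %error
      %r = call float @use_error(%swift.error** swifterror %error)
      ret float %r
    }
  Emitting llvm.lifetime.start/end for %error when @callee is inlined would require a bitcast of %error to i8*, and any bitcast use of a swifterror value is rejected by the verifier.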
* Fix inliner funclet unwind memoization (Joseph Tremoulet, 2016-09-04, 1 file, -7/+79)
  Summary: The inliner may need to determine where a given funclet unwinds to, and this determination may depend on other funclets throughout the funclet tree. The code that performs this walk in getUnwindDestToken memoizes results to avoid redundant computations. In the case that a funclet's unwind destination is derived from its ancestor, there's code to walk back down the tree from the ancestor updating the memo map of its descendants to record the unwind destination. This change fixes that code to account for the case that some descendant has a different unwind destination, which can happen if that unwind dest is a descendant of the EHPad being queried and thus didn't determine its unwind destination.
  Also update test inline-funclets.ll, which is supposed to cover such scenarios, to include a case that fails an assertion without this fix but passes with it.
  Fixes PR29151.
  Reviewers: majnemer
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D24117
  llvm-svn: 280610
* [PM] Port the always inliner to the new pass manager in a much more (Chandler Carruth, 2016-08-17, 1 file, -1/+1)
  minimal and boring form than the old pass manager's version.
  This pass does the very minimal amount of work necessary to inline functions declared as always-inline. It doesn't support a wide array of things that the legacy pass manager did support, but is also ... about 20 lines of code. So it has that going for it.
  Notably, things this doesn't support:
  - Array alloca merging
    - To support the above, bottom-up inlining with careful history tracking and call graph updates
  - DCE of the functions that become dead after this inlining.
  - Inlining through call instructions with the always_inline attribute. Instead, it focuses on inlining functions with that attribute.
  The first I've omitted because I'm hoping to just turn it off for the primary pass manager. If that doesn't pan out, I can add it here but it will be reasonably expensive to do so.
  The second should really be handled by running global-dce after the inliner. I don't want to re-implement the non-trivial logic necessary to do comdat-correct DCE of functions. This means the -O0 pipeline will have to be at least 'always-inline,global-dce', but that seems reasonable to me. If others are seriously worried about this I'd like to hear about it and understand why. Again, this is all solvable by factoring that logic into a utility and calling it here, but I'd like to wait to do that until there is a clear reason why the existing pass-based factoring won't work.
  The final point is a serious one. I can fairly easily add support for this, but it seems both costly and a confusing construct for the use case of the always inliner running at -O0. This attribute can of course still impact the normal inliner easily (although I find that a questionable re-use of the same attribute). I've started a discussion to sort out what semantics we want here and based on that can figure out if it makes sense to have this complexity at O0 or not.
  One other advantage of this design is that it should be quite a bit faster due to checking for whether the function is a viable candidate for inlining exactly once per function instead of doing it for each call site.
  Anyways, hopefully a reasonable starting point for this pass.
  Differential Revision: https://reviews.llvm.org/D23299
  llvm-svn: 278896
* Preserve the assumption cache more often (David Majnemer, 2016-08-16, 1 file, -12/+22)
  We were clearing it out in LoopUnswitch and InlineFunction instead of attempting to preserve it.
  llvm-svn: 278860
* [Inliner] Don't treat inalloca allocas as static (Reid Kleckner, 2016-08-12, 1 file, -3/+10)
  They aren't static, and moving them to the entry block across something else will only result in tears.
  Root cause of http://crbug.com/636558.
  llvm-svn: 278571
* Avoid using a raw AssumptionCacheTracker in various inliner functions. (Sean Silva, 2016-07-23, 1 file, -5/+5)
  This unblocks the new PM part of River's patch in https://reviews.llvm.org/D22706
  Conveniently, this same change was needed for D21921 and so these changes are just spun out from there.
  llvm-svn: 276515
* Apply clang-tidy's modernize-loop-convert to most of lib/Transforms. (Benjamin Kramer, 2016-06-26, 1 file, -13/+10)
  Only minor manual fixes. No functionality change intended.
  llvm-svn: 273808
* Revert "[SimplifyCFG] Stop inserting calls to llvm.trap for UB"David Majnemer2016-06-251-1/+1
| | | | | | This reverts commit r273778, it seems to break UBSan :/ llvm-svn: 273779
* [SimplifyCFG] Stop inserting calls to llvm.trap for UB (David Majnemer, 2016-06-25, 1 file, -1/+1)
  SimplifyCFG had logic to insert calls to llvm.trap for two very particular IR patterns: stores and invokes of undef/null.
  While InstCombine canonicalizes certain undefined behavior IR patterns to stores of undef, phase ordering means that this cannot be relied upon in general. There are much better tools than llvm.trap: UBSan and ASan.
  N.B. I could be argued into reverting this change if a clear argument is made as to why it is important that we synthesize llvm.trap for stores; I'd be hard pressed to see why it'd be useful for invokes...
  llvm-svn: 273778
* Switch more loops to be range-based (David Majnemer, 2016-06-24, 1 file, -2/+1)
  This makes the code a little more concise; no functional change is intended.
  llvm-svn: 273644
* Run clang-tidy's performance-unnecessary-copy-initialization over LLVM. (Benjamin Kramer, 2016-06-12, 1 file, -1/+1)
  No functionality change intended.
  llvm-svn: 272516
* Pass DebugLoc and SDLoc by const ref. (Benjamin Kramer, 2016-06-12, 1 file, -1/+2)
  This used to be free; copying and moving DebugLocs became expensive after the metadata rewrite. Passing by reference eliminates a ton of track/untrack operations. No functionality change intended.
  llvm-svn: 272512
* All llvm.deoptimize declarations must use the same calling convention (Sanjoy Das, 2016-05-12, 1 file, -1/+7)
  This new verifier rule lets us unambiguously pick a calling convention when creating a new declaration for `@llvm.experimental.deoptimize.<ty>`. It is also congruent with our lowering strategy -- since all calls to `@llvm.experimental.deoptimize` are lowered to calls to `__llvm_deoptimize`, it is reasonable to enforce a unique calling convention.
  Some of the tests that were breaking this verifier rule have had to be split up into different .ll files.
  The inliner was violating this rule as well, and has been fixed to avoid producing invalid IR.
  llvm-svn: 269261
* [Inliner] Preserve llvm.mem.parallel_loop_access metadata (Hal Finkel, 2016-04-28, 1 file, -0/+31)
  When inlining a call site with llvm.mem.parallel_loop_access metadata, this metadata needs to be propagated to all cloned memory-accessing instructions. Otherwise, inlining parts of the loop body will invalidate the annotation.
  With this functionality, we now vectorize the following as expected:
    void Body(int *res, int *c, int *d, int *p, int i) {
      res[i] = (p[i] == 0) ? res[i] : res[i] + d[i];
    }
    void Test(int *res, int *c, int *d, int *p, int n) {
      int i;
    #pragma clang loop vectorize(assume_safety)
      for (i = 0; i < 1600; i++) {
        Body(res, c, d, p, i);
      }
    }
  llvm-svn: 267949
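  As an illustrative sketch (not from the commit; names invented), the annotation being preserved looks like this in IR -- every cloned load/store must keep pointing at the loop's metadata node:
    define void @copy(i32* %p, i32* %q, i64 %n) {
    entry:
      br label %loop
    loop:
      %i = phi i64 [ 0, %entry ], [ %i.next, %loop ]
      %src = getelementptr inbounds i32, i32* %p, i64 %i
      %dst = getelementptr inbounds i32, i32* %q, i64 %i
      %v = load i32, i32* %src, align 4, !llvm.mem.parallel_loop_access !0
      store i32 %v, i32* %dst, align 4, !llvm.mem.parallel_loop_access !0
      %i.next = add nuw nsw i64 %i, 1
      %done = icmp eq i64 %i.next, %n
      br i1 %done, label %exit, label %loop, !llvm.loop !0
    exit:
      ret void
    }
    !0 = distinct !{!0}
  If those accesses instead came from an inlined Body() call whose call site carried the metadata, the inliner now copies !llvm.mem.parallel_loop_access onto the cloned instructions so the loop stays marked parallel.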
* Maintain calling convention when inlining calls to llvm.deoptimize (Sanjoy Das, 2016-04-09, 1 file, -1/+3)
  The behavior here was buggy -- we'd forget the calling convention after inlining a callsite calling llvm.deoptimize.
  llvm-svn: 265867
* Don't insert stackrestore on deoptimizing returns (Sanjoy Das, 2016-04-01, 1 file, -2/+4)
  They're not necessary (since the stack pointer is trivially restored on return), and the way LLVM inserts the stackrestore calls breaks the IR (we get a stackrestore between the deoptimize call and the return).
  llvm-svn: 265101
* Don't insert lifetime end markers on deoptimizing returns (Sanjoy Das, 2016-04-01, 1 file, -2/+5)
  They're not necessary (since the lifetime of the alloca is trivially over due to the return), and the way LLVM inserts the lifetime.end markers breaks the IR (we get a lifetime end marker between the deoptimize call and the return).
  llvm-svn: 265100
* Introduce a @llvm.experimental.guard intrinsic (Sanjoy Das, 2016-03-31, 1 file, -5/+7)
  Summary: As discussed on llvm-dev[1]. This change adds the basic boilerplate code around having this intrinsic in LLVM:
  - Changes in Intrinsics.td, and the IR Verifier
  - A lowering pass to lower @llvm.experimental.guard to normal control flow
  - Inliner support
  [1]: http://lists.llvm.org/pipermail/llvm-dev/2016-February/095523.html
  Reviewers: reames, atrick, chandlerc, rnk, JosephTremoulet, echristo
  Subscribers: mcrosier, llvm-commits
  Differential Revision: http://reviews.llvm.org/D18527
  llvm-svn: 264976
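  A minimal usage sketch (not from the commit; %cond and %state are invented names):
    declare void @llvm.experimental.guard(i1, ...)

    define void @maybe_deopt(i1 %cond, i32 %state) {
      ; If %cond is false at run time, execution deoptimizes using the state
      ; captured in the "deopt" operand bundle.
      call void (i1, ...) @llvm.experimental.guard(i1 %cond) [ "deopt"(i32 %state) ]
      ret void
    }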
* Introduce @llvm.experimental.deoptimize (Sanjoy Das, 2016-03-11, 1 file, -1/+64)
  Summary: This intrinsic, together with deoptimization operand bundles, allows frontends to express transfer of control and frame-local state from one (typically more specialized, hence faster) version of a function into another (typically more generic, hence slower) version.
  In languages with a fully integrated managed runtime this intrinsic can be used to implement "uncommon trap" like functionality. In unmanaged languages like C and C++, this intrinsic can be used to represent the slow paths of specialized functions.
  Note: this change does not address how `@llvm.experimental_deoptimize` is lowered. That will be done in a later change.
  Reviewers: chandlerc, rnk, atrick, reames
  Subscribers: llvm-commits, kmod, mjacob, maksfb, mcrosier, JosephTremoulet
  Differential Revision: http://reviews.llvm.org/D17732
  llvm-svn: 263281
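  A minimal usage sketch (not from the commit; the function and values are invented):
    declare i32 @llvm.experimental.deoptimize.i32(...)

    define i32 @specialized(i32 %x, i32 %frame_state) {
      ; Transfers control, plus the frame-local state in the "deopt" bundle, to
      ; the deoptimization continuation; the call must be immediately followed
      ; by a ret of its result.
      %r = call i32 (...) @llvm.experimental.deoptimize.i32(i32 %x) [ "deopt"(i32 %frame_state) ]
      ret i32 %r
    }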
* Revert revisions 262636, 262643, 262679, and 262682. (Easwaran Raman, 2016-03-08, 1 file, -6/+3)
  llvm-svn: 262883
* Fix a use-after-free bug introduced in r262636 (Easwaran Raman, 2016-03-04, 1 file, -1/+4)
  llvm-svn: 262679
* Infrastructure for PGO enhancements in inliner (Easwaran Raman, 2016-03-03, 1 file, -2/+2)
  This patch provides the following infrastructure for PGO enhancements in inliner:
  - Enable the use of block level profile information in inliner
  - Incremental update of block frequency information during inlining
  - Update the function entry counts of callees when they get inlined into callers.
  Differential Revision: http://reviews.llvm.org/D16381
  llvm-svn: 262636
* [WinEH] Don't inline an 'unwinds to caller' cleanupret into funclets which locally unwind (David Majnemer, 2016-02-23, 1 file, -0/+21)
  It is problematic if the inlinee has a cleanupret which unwinds to caller and we inline it into a call site which doesn't unwind.
  If the funclet unwinds anywhere other than to the caller, then we will give the funclet two unwind destinations. This will result in a verifier failure.
  Seeing as how the caller wasn't an invoke (which would locally unwind) and that the funclet cannot unwind to caller, we must conclude that an 'unwind to caller' cleanupret is dynamically unreachable.
  This fixes PR26698.
  Differential Revision: http://reviews.llvm.org/D17536
  llvm-svn: 261656
* [Inliner/WinEH] Honor implicit nounwinds (Joseph Tremoulet, 2016-01-20, 1 file, -17/+314)
  Summary: Funclet EH tables require that a given funclet have only one unwind destination for exceptional exits. The verifier will therefore reject e.g. two cleanuprets with different unwind dests for the same cleanup, or two invokes exiting the same funclet but to different unwind dests.
  Because catchswitch has no 'nounwind' variant, and because IR producers are not *required* to annotate calls which will not unwind as 'nounwind', it is legal to nest a call or an "unwind to caller" catchswitch within a funclet pad that has an unwind destination other than caller; it is undefined behavior for such a call or catchswitch to unwind.
  Normally when inlining an invoke, calls in the inlined sequence are rewritten to invokes that unwind to the callsite invoke's unwind destination, and "unwind to caller" catchswitches in the inlined sequence are rewritten to unwind to the callsite invoke's unwind destination. However, if such a call or "unwind to caller" catchswitch is located in a callee funclet that has another exceptional exit with an unwind destination within the callee, applying the normal transformation would give that callee funclet multiple unwind destinations for its exceptional exits. There would be no way for EH table generation to determine which is the "true" exit, and the verifier would reject the function accordingly.
  Add logic to the inliner to detect these cases and leave such calls and "unwind to caller" catchswitches as calls and "unwind to caller" catchswitches in the inlined sequence.
  This fixes PR26147.
  Reviewers: rnk, andrew.w.kaylor, majnemer
  Subscribers: alexcrichton, llvm-commits
  Differential Revision: http://reviews.llvm.org/D16319
  llvm-svn: 258273
* [OperandBundles] Copy DebugLoc with calls/invokes (Joseph Tremoulet, 2016-01-14, 1 file, -1/+0)
  Summary: The overloads of CallInst::Create and InvokeInst::Create that are used to adjust operand bundles purport to create a new instruction "identical in every way except [for] the operand bundles", so copy the DebugLoc along with everything else.
  Reviewers: sanjoy, majnemer
  Subscribers: majnemer, dblaikie, llvm-commits
  Differential Revision: http://reviews.llvm.org/D16157
  llvm-svn: 257745
* hasNUses(0) == use_empty() ; NFCI (Sanjay Patel, 2016-01-13, 1 file, -4/+3)
  Also, improve variable name and remove unnecessary braces.
  llvm-svn: 257687
* rangify; NFCI (Sanjay Patel, 2016-01-13, 1 file, -6/+5)
  llvm-svn: 257677
* Nonnull elements in OperandBundleCallSites are not all Instructions (Sanjoy Das, 2015-12-19, 1 file, -3/+2)
  `CloneAndPruneIntoFromInst` sometimes RAUW's dead instructions with `undef` before erasing them (to avoid deleting instructions that still have uses). This changes the `WeakVH` in `OperandBundleCallSites` to hold an `undef`, and we need to guard against this eventuality in `llvm::InlineFunction`.
  llvm-svn: 256110
* [WinEH] Use operand bundles to describe call sites (David Majnemer, 2015-12-15, 1 file, -32/+61)
  SimplifyCFG allows tail merging with code which terminates in unreachable which, in turn, makes it possible for an invoke to end up in a funclet which it was not originally part of.
  Using operand bundles on invokes allows us to determine whether or not an invoke was part of a funclet in the source program.
  Furthermore, it allows us to unambiguously answer questions about the legality of inlining into call sites which the personality may have trouble with.
  Differential Revision: http://reviews.llvm.org/D15517
  llvm-svn: 255674
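  A rough sketch of the "funclet" bundle in use (not taken from the commit; function names are invented, and the syntax follows the reformulated funclet IR):
    declare void @may_throw()
    declare void @do_cleanup()
    declare i32 @__CxxFrameHandler3(...)

    define void @caller() personality i32 (...)* @__CxxFrameHandler3 {
    entry:
      invoke void @may_throw()
              to label %done unwind label %cleanup
    cleanup:
      %cp = cleanuppad within none []
      ; The bundle records that this call executes inside the %cp funclet even
      ; if later transforms obscure the surrounding block structure.
      call void @do_cleanup() [ "funclet"(token %cp) ]
      cleanupret from %cp unwind to caller
    done:
      ret void
    }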
* [IR] Remove terminatepad (David Majnemer, 2015-12-14, 1 file, -13/+2)
  It turns out that terminatepad gives little benefit over a cleanuppad which calls the termination function. This is not sufficient to implement fully generic filters but MSVC doesn't support them which makes terminatepad a little over-designed.
  Depends on D15478.
  Differential Revision: http://reviews.llvm.org/D15479
  llvm-svn: 255522
* [IR] Reformulate LLVM's EH funclet IR (David Majnemer, 2015-12-12, 1 file, -34/+108)
  While we have successfully implemented a funclet-oriented EH scheme on top of LLVM IR, our scheme has some notable deficiencies:
  - catchendpad and cleanupendpad are necessary in the current design but they are difficult to explain to others, even to seasoned LLVM experts.
  - catchendpad and cleanupendpad are optimization barriers. They cannot be split and force all potentially throwing call-sites to be invokes. This has a noticeable effect on the quality of our code generation.
  - catchpad, while similar in some aspects to invoke, is fairly awkward. It is unsplittable, starts a funclet, and has control flow to other funclets.
  - The nesting relationship between funclets is currently a property of control flow edges. Because of this, we are forced to carefully analyze the flow graph to see if there might potentially exist illegal nesting among funclets. While we have logic to clone funclets when they are illegally nested, it would be nicer if we had a representation which forbade them upfront.
  Let's clean this up a bit by doing the following:
  - Instead, make catchpad more like cleanuppad and landingpad: no control flow, just a bunch of simple operands; catchpad would be splittable.
  - Introduce catchswitch, a control flow instruction designed to model the constraints of funclet oriented EH.
  - Make funclet scoping explicit by having funclet instructions consume the token produced by the funclet which contains them.
  - Remove catchendpad and cleanupendpad. Their presence can be inferred implicitly using coloring information.
  N.B. The state numbering code for the CLR has been updated but the veracity of its output cannot be spoken for. An expert should take a look to make sure the results are reasonable.
  Reviewers: rnk, JosephTremoulet, andrew.w.kaylor
  Differential Revision: http://reviews.llvm.org/D15139
  llvm-svn: 255422
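  A small sketch of the reformulated representation (illustrative only; names invented):
    declare void @may_throw()
    declare i32 @__CxxFrameHandler3(...)

    define void @demo() personality i32 (...)* @__CxxFrameHandler3 {
    entry:
      invoke void @may_throw()
              to label %cont unwind label %dispatch
    dispatch:
      ; catchswitch models the dispatch and lists its handlers explicitly.
      %cs = catchswitch within none [label %handler] unwind to caller
    handler:
      ; catchpad produces a token; instructions belonging to the funclet
      ; (including catchret) consume it, making the nesting explicit.
      %cp = catchpad within %cs [i8* null, i32 64, i8* null]
      catchret from %cp to label %cont
    cont:
      ret void
    }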
* Add arg_begin() and arg_end() to CallInst and InvokeInst; NFCI (Sanjoy Das, 2015-12-10, 1 file, -3/+2)
  - This simplifies the CallSite class, arg_begin / arg_end are now simple wrapper getters.
  - In several places, we were creating CallSite instances solely to call arg_begin and arg_end. With this change, that's no longer required.
  llvm-svn: 255226
* Use WeakVH to keep track of calls with operand bundles in CloneCodeInfo (Sanjoy Das, 2015-12-09, 1 file, -1/+3)
  `CloneAndPruneIntoFromInst` can DCE instructions after cloning them into the new function, and so an AssertingVH is too strong. This change switches CloneCodeInfo to use a std::vector<WeakVH>.
  llvm-svn: 255148
* [OperandBundles] Remove unnecessary constructor (Sanjoy Das, 2015-12-08, 1 file, -1/+1)
  The StringRef constructor is unnecessary (since we're converting to std::string anyway), and having it requires an explicit call to StringRef's or std::string's constructor.
  llvm-svn: 255000
* [OperandBundles] Extract duplicated code into a helper function, NFC (Sanjoy Das, 2015-11-25, 1 file, -5/+1)
  llvm-svn: 254047
* [Utils] Put includes in correct order. NFC. (Weiming Zhao, 2015-11-24, 1 file, -1/+1)
  Summary: Followed the guidelines in: http://llvm.org/docs/CodingStandards.html#include-style
  However, I noticed that uppercase named headers come before lowercase ones throughout the codebase. So kept them as is.
  Patch by Mandeep Singh Grang <mgrang@codeaurora.org>
  Reviewers: majnemer, davide, jmolloy, atrick
  Subscribers: sanjoy
  Differential Revision: http://reviews.llvm.org/D14939
  llvm-svn: 254005
* Revert "Change memcpy/memset/memmove to have dest and source alignments."Pete Cooper2015-11-191-1/+1
| | | | | | | | | | This reverts commit r253511. This likely broke the bots in http://lab.llvm.org:8011/builders/clang-ppc64-elf-linux2/builds/20202 http://bb.pgr.jp/builders/clang-3stage-i686-linux/builds/3787 llvm-svn: 253543
* Change memcpy/memset/memmove to have dest and source alignments. (Pete Cooper, 2015-11-18, 1 file, -1/+1)
  Note, this was reviewed (and more details are in) http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312083.html
  These intrinsics currently have an explicit alignment argument which is required to be a constant integer. It represents the alignment of the source and dest, and so must be the minimum of those.
  This change allows source and dest to each have their own alignments by using the alignment attribute on their arguments. The alignment argument itself is removed.
  There are a few places in the code for which the code needs to be checked by an expert as to whether using only src/dest alignment is safe. For those places, they currently take the minimum of src/dest alignments which matches the current behaviour.
  For example, code which used to read:
    call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dest, i8* %src, i32 500, i32 8, i1 false)
  will now read:
    call void @llvm.memcpy.p0i8.p0i8.i32(i8* align 8 %dest, i8* align 8 %src, i32 500, i1 false)
  For out of tree owners, I was able to strip alignment from calls using sed by replacing:
    (call.*llvm\.memset.*)i32\ [0-9]*\,\ i1 false\)
  with:
    $1i1 false)
  and similarly for memmove and memcpy. I then added back in alignment to test cases which needed it.
  A similar commit will be made to clang which actually has many differences in alignment as now IRBuilder can generate different source/dest alignments on calls.
  In IRBuilder itself, a new argument was added. Instead of calling:
    CreateMemCpy(Dst, Src, getInt64(Size), DstAlign, /* isVolatile */ false)
  you now call
    CreateMemCpy(Dst, Src, getInt64(Size), DstAlign, SrcAlign, /* isVolatile */ false)
  There is a temporary class (IntegerAlignment) which takes the source alignment and rejects implicit conversion from bool. This is to prevent isVolatile here from passing its default parameter to the source alignment.
  Note, changes in future can now be made to codegen. I didn't change anything here, but this change should enable better memcpy code sequences.
  Reviewed by Hal Finkel.
  llvm-svn: 253511
* [OperandBundles] Tighten OperandBundleDef's interface; NFC (Sanjoy Das, 2015-11-18, 1 file, -1/+1)
  llvm-svn: 253446
* Teach the inliner to track deoptimization state (Sanjoy Das, 2015-11-18, 1 file, -5/+80)
  Summary: This change teaches LLVM's inliner to track and suitably adjust deoptimization state (tracked via deoptimization operand bundles) as it inlines through call sites. The operation is described in more detail in the LangRef changes.
  Reviewers: reames, majnemer, chandlerc, dexonsmith
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D14552
  llvm-svn: 253438
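  Roughly, the composition works like this (a sketch, not from the commit; @checkpoint and the i32 "state" values are invented):
    declare void @checkpoint(...)

    define void @callee() {
      ; deoptimization state local to @callee
      call void (...) @checkpoint() [ "deopt"(i32 1) ]
      ret void
    }

    define void @caller() {
      ; deoptimization state at the call site being inlined through
      call void @callee() [ "deopt"(i32 2) ]
      ret void
    }
  After inlining @callee into @caller, the cloned call site carries both frames, with the caller's state first, roughly: call void (...) @checkpoint() [ "deopt"(i32 2, i32 1) ].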
* [Inliner] Don't inline through callsites with operand bundles (Sanjoy Das, 2015-10-23, 1 file, -0/+4)
  Summary: This change teaches the LLVM inliner to not inline through callsites with unknown operand bundles. Currently all operand bundles are "unknown" operand bundles but in the near future we will add support for inlining through some select kinds of operand bundles.
  Reviewers: reames, chandlerc, majnemer
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D14001
  llvm-svn: 251141
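  For example (a sketch, not from the commit; "frob" is a made-up bundle tag), a call site like the following is now left alone by the inliner rather than having its bundle dropped:
    define i32 @callee(i32 %x) {
      %y = add i32 %x, 1
      ret i32 %y
    }

    define i32 @caller(i32 %x, i32 %t) {
      %r = call i32 @callee(i32 %x) [ "frob"(i32 %t) ]
      ret i32 %r
    }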
* [InlineFunction] Correctly inline TerminatePadInst (David Majnemer, 2015-10-13, 1 file, -5/+10)
  We forgot to append the terminatepad's arguments which resulted in us treating the old terminatepad as an argument to the new terminatepad, causing us to crash immediately. Instead, add the old terminatepad's arguments to the new terminatepad.
  This fixes PR25155.
  llvm-svn: 250234
* TransformUtils: Remove implicit ilist iterator conversions, NFC (Duncan P. N. Exon Smith, 2015-10-13, 1 file, -42/+46)
  Continuing the work from last week to remove implicit ilist iterator conversions. First related commit was probably r249767, with some more motivation in r249925. This edition gets LLVMTransformUtils compiling without the implicit conversions.
  No functional change intended.
  llvm-svn: 250142
* Fix Clang-tidy modernize-use-nullptr warnings in source directories and generated files; other minor cleanups. (Hans Wennborg, 2015-10-06, 1 file, -3/+3)
  Patch by Eugene Zelenko!
  Differential Revision: http://reviews.llvm.org/D13321
  llvm-svn: 249482
* [Inline] Use AssumptionCache from the right Function (Vedant Kumar, 2015-09-23, 1 file, -1/+1)
  This changes the behavior of AddAlignmentAssumptions to match its comment. I.e., prove the asserted alignment in the context of the caller, not the callee.
  Thanks to Mehdi Amini for seeing the issue here! Also to Artur Pilipenko who also saw a fix for the issue.
  rdar://22521387
  Differential Revision: http://reviews.llvm.org/D12997
  llvm-svn: 248390
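  For context (an illustrative sketch, not from the commit; value names invented): when a callee parameter is declared "i8* align 32 %p", inlining emits an alignment assumption along these lines, and this fix checks whether it is already provable using the caller's assumption cache and context rather than the callee's:
    declare void @llvm.assume(i1)

    define void @caller_after_inline(i8* %ptr) {
      %ptrint = ptrtoint i8* %ptr to i64
      %maskedptr = and i64 %ptrint, 31
      %maskcond = icmp eq i64 %maskedptr, 0
      call void @llvm.assume(i1 %maskcond)
      ; ... inlined body of the callee ...
      ret void
    }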
* [PM/AA] Rebuild LLVM's alias analysis infrastructure in a way compatible (Chandler Carruth, 2015-09-09, 1 file, -9/+9)
  with the new pass manager, and no longer relying on analysis groups.
  This builds essentially a ground-up new AA infrastructure stack for LLVM. The core ideas are the same that are used throughout the new pass manager: type erased polymorphism and direct composition. The design is as follows:
  - FunctionAAResults is a type-erasing alias analysis results aggregation interface to walk a single query across a range of results from different alias analyses. Currently this is function-specific as we always assume that aliasing queries are *within* a function.
  - AAResultBase is a CRTP utility providing stub implementations of various parts of the alias analysis result concept, notably in several cases in terms of other more general parts of the interface. This can be used to implement only a narrow part of the interface rather than the entire interface. This isn't really ideal, this logic should be hoisted into FunctionAAResults as currently it will cause a significant amount of redundant work, but it faithfully models the behavior of the prior infrastructure.
  - All the alias analysis passes are ported to be wrapper passes for the legacy PM and new-style analysis passes for the new PM with a shared result object. In some cases (most notably CFL), this is an extremely naive approach that we should revisit when we can specialize for the new pass manager.
  - BasicAA has been restructured to reflect that it is much more fundamentally a function analysis because it uses dominator trees and loop info that need to be constructed for each function.
  All of the references to getting alias analysis results have been updated to use the new aggregation interface. All the preservation and other pass management code has been updated accordingly.
  The way the FunctionAAResultsWrapperPass works is to detect the available alias analyses when run, and add them to the results object. This means that we should be able to continue to respect when various passes are added to the pipeline, for example adding CFL or adding TBAA passes should just cause their results to be available and to get folded into this. The exception to this rule is BasicAA which really needs to be a function pass due to using dominator trees and loop info. As a consequence, the FunctionAAResultsWrapperPass directly depends on BasicAA and always includes it in the aggregation.
  This has significant implications for preserving analyses. Generally, most passes shouldn't bother preserving FunctionAAResultsWrapperPass because rebuilding the results just updates the set of known AA passes. The exception to this rule are LoopPass instances which need to preserve all the function analyses that the loop pass manager will end up needing. This means preserving both BasicAAWrapperPass and the aggregating FunctionAAResultsWrapperPass.
  Now, when preserving an alias analysis, you do so by directly preserving that analysis. This is only necessary for non-immutable-pass-provided alias analyses though, and there are only three of interest: BasicAA, GlobalsAA (formerly GlobalsModRef), and SCEVAA. Usually BasicAA is preserved when needed because it (like DominatorTree and LoopInfo) is marked as a CFG-only pass. I've expanded GlobalsAA into the preserved set everywhere we previously were preserving all of AliasAnalysis, and I've added SCEVAA in the intersection of that with where we preserve SCEV itself.
  One significant challenge to all of this is that the CGSCC passes were actually using the alias analysis implementations by taking advantage of a pretty amazing set of loopholes in the old pass manager's analysis management code which allowed analysis groups to slide through in many cases. Moving away from analysis groups makes this problem much more obvious. To fix it, I've leveraged the flexibility the design of the new PM components provides to just directly construct the relevant alias analyses for the relevant functions in the IPO passes that need them. This is a bit hacky, but should go away with the new pass manager, and is already in many ways cleaner than the prior state.
  Another significant challenge is that various facilities of the old alias analysis infrastructure just don't fit any more. The most significant of these is the alias analysis 'counter' pass. That pass relied on the ability to snoop on AA queries at different points in the analysis group chain. Instead, I'm planning to build printing functionality directly into the aggregation layer. I've not included that in this patch merely to keep it smaller.
  Note that all of this needs a nearly complete rewrite of the AA documentation. I'm planning to do that, but I'd like to make sure the new design settles, and to flesh out a bit more of what it looks like in the new pass manager first.
  Differential Revision: http://reviews.llvm.org/D12080
  llvm-svn: 247167
* [WinEH] Add cleanupendpad instruction (Joseph Tremoulet, 2015-09-03, 1 file, -0/+6)
  Summary: Add a `cleanupendpad` instruction, used to mark exceptional exits out of cleanups (for languages/targets that can abort a cleanup with another exception). The `cleanupendpad` instruction is similar to the `catchendpad` instruction in that it is an EH pad which is the target of unwind edges in the handler and which itself has an unwind edge to the next EH action.
  The `cleanupendpad` instruction, similar to `cleanupret`, has a `cleanuppad` argument indicating which cleanup it exits. The unwind successors of a `cleanuppad`'s `cleanupendpad`s must agree with each other and with its `cleanupret`s.
  Update WinEHPrepare (and docs/tests) to accommodate `cleanupendpad`.
  Reviewers: rnk, andrew.w.kaylor, majnemer
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D12433
  llvm-svn: 246751
* [WinEH] Require token linkage in EH pad/ret signatures (Joseph Tremoulet, 2015-08-23, 1 file, -2/+1)
  Summary: WinEHPrepare is going to require that cleanuppad and catchpad produce values of token type which are consumed by any cleanupret or catchret exiting the pad. This change updates the signatures of those operators to require/enforce that the type produced by the pads is token type and that the rets have an appropriate argument.
  The catchpad argument of a `CatchReturnInst` must be a `CatchPadInst` (and similarly for `CleanupReturnInst`/`CleanupPadInst`). To accommodate that restriction, this change adds a notion of an operator constraint to both LLParser and BitcodeReader, allowing appropriate sentinels to be constructed for forward references and appropriate error messages to be emitted for illegal inputs.
  Also add a verifier rule (noted in LangRef) that a catchpad with a catchpad predecessor must have no other predecessors; this ensures that WinEHPrepare will see the expected linear relationship between sibling catches on the same try.
  Lastly, remove some superfluous/vestigial casts from instruction operand setters operating on BasicBlocks.
  Reviewers: rnk, majnemer
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D12108
  llvm-svn: 245797
* [IR] Add token types (David Majnemer, 2015-08-14, 1 file, -7/+1)
  This introduces the basic functionality to support "token types". The motivation stems from the need to perform operations on a Value whose provenance cannot be obscured.
  There are several applications for such a type but my immediate motivation stems from WinEH. Our personality routine enforces a single-entry - single-exit regime for cleanups. After several rounds of optimizations, we may be left with a terminator whose "cleanup-entry block" is not entirely clear because control flow has merged two cleanups together. We have experimented with using labels as operands inside of instructions which are not terminators to indicate where we came from but found that LLVM does not expect such exotic uses of BasicBlocks.
  Instead, we can use this new type to clearly associate the "entry point" and "exit point" of our cleanup. This is done by having the cleanuppad yield a Token and consuming it at the cleanupret. The token type makes it impossible to obscure or otherwise hide the Value, making it trivial to track the relationship between the two points.
  What is the burden to the optimizer? Well, it turns out we have already paid down this cost by accepting that there are certain calls that we are not permitted to duplicate; optimizations have to watch out for such instructions anyway. There are additional places in the optimizer that we will probably have to update but early examination has given me the impression that this will not be heroic.
  Differential Revision: http://reviews.llvm.org/D11861
  llvm-svn: 245029
* New EH representation for MSVC compatibility (David Majnemer, 2015-07-31, 1 file, -21/+124)
  This introduces new instructions necessary to implement MSVC-compatible exception handling support. Most of the middle-end has not yet been audited or updated to take them into account, and none of the back-end has.
  Differential Revision: http://reviews.llvm.org/D11097
  llvm-svn: 243766