path: root/llvm/lib/Transforms/Utils/InlineFunction.cpp
Commit message (Author, Date, Files, Lines)
...
* [ProfileSummary] Make getProfileCount a non-static member function. (Easwaran Raman, 2017-05-09, 1 file, -8/+10)
  This change is required because the notion of count is different for sample profiling and getProfileCount will need to determine the underlying profile type.
  Differential revision: https://reviews.llvm.org/D33012
  llvm-svn: 302597
* Make it illegal for two Functions to point to the same DISubprogram (Adrian Prantl, 2017-05-09, 1 file, -38/+5)
  As recently discussed on llvm-dev [1], this patch makes it illegal for two Functions to point to the same DISubprogram and updates FunctionCloner to also clone the debug info of a function to conform to the new requirement. To simplify the implementation it also factors out the creation of inlineAt locations from the Inliner into a general-purpose utility in DILocation.
  [1] http://lists.llvm.org/pipermail/llvm-dev/2017-May/112661.html
  <rdar://problem/31926379>
  Differential Revision: https://reviews.llvm.org/D32975
  This reapplies r302469 with a fix for a bot failure (reparentDebugInfo now checks for the case the orig and new function are identical).
  llvm-svn: 302576
* Revert r302469 "Make it illegal for two Functions to point to the same DISubprogram" (Hans Wennborg, 2017-05-09, 1 file, -5/+38)
  This caused PR32977.
  Original commit message:
  > Make it illegal for two Functions to point to the same DISubprogram
  >
  > As recently discussed on llvm-dev [1], this patch makes it illegal for two Functions to point to the same DISubprogram and updates FunctionCloner to also clone the debug info of a function to conform to the new requirement. To simplify the implementation it also factors out the creation of inlineAt locations from the Inliner into a general-purpose utility in DILocation.
  >
  > [1] http://lists.llvm.org/pipermail/llvm-dev/2017-May/112661.html
  > <rdar://problem/31926379>
  >
  > Differential Revision: https://reviews.llvm.org/D32975
  llvm-svn: 302533
* Make it illegal for two Functions to point to the same DISubprogram (Adrian Prantl, 2017-05-08, 1 file, -38/+5)
  As recently discussed on llvm-dev [1], this patch makes it illegal for two Functions to point to the same DISubprogram and updates FunctionCloner to also clone the debug info of a function to conform to the new requirement. To simplify the implementation it also factors out the creation of inlineAt locations from the Inliner into a general-purpose utility in DILocation.
  [1] http://lists.llvm.org/pipermail/llvm-dev/2017-May/112661.html
  <rdar://problem/31926379>
  Differential Revision: https://reviews.llvm.org/D32975
  llvm-svn: 302469
* Make getParamAlignment use argument numbers (Reid Kleckner, 2017-04-28, 1 file, -1/+1)
  The method is called "get *Param* Alignment", and is only used for return values exactly once, so it should take argument indices, not attribute indices.
  Avoids confusing code like:
    IsSwiftError = CS->paramHasAttr(ArgIdx, Attribute::SwiftError);
    Alignment = CS->getParamAlignment(ArgIdx + 1);
  Add getRetAlignment to handle the one case in Value.cpp that wants the return value alignment.
  This is a potentially breaking change for out-of-tree backends that do their own call lowering.
  llvm-svn: 301682
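For illustration, a minimal sketch of the post-change indexing (assuming the CallSite-era API and a hypothetical helper; this is not code from the patch): both queries now take the argument index directly, where the alignment lookup previously needed the off-by-one attribute index `ArgIdx + 1`.

```cpp
#include "llvm/IR/Attributes.h"
#include "llvm/IR/CallSite.h"

using namespace llvm;

// Hypothetical helper: fetch the swifterror flag and alignment of the ArgIdx-th
// argument at a call site, using plain argument indices for both queries.
static unsigned getArgAlignAndSwiftError(CallSite CS, unsigned ArgIdx,
                                         bool &IsSwiftError) {
  IsSwiftError = CS.paramHasAttr(ArgIdx, Attribute::SwiftError);
  return CS.getParamAlignment(ArgIdx); // no more ArgIdx + 1
}
```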
* Kill off the old SimplifyInstruction API by converting remaining users. (Daniel Berlin, 2017-04-28, 1 file, -1/+1)
  llvm-svn: 301673
* Allow DataLayout to specify addrspace for allocas. (Matt Arsenault, 2017-04-10, 1 file, -7/+7)
  LLVM makes several assumptions about address space 0. However, alloca is presently constrained to always return this address space. There's no real way to avoid using alloca, so without this there is no way to opt out of these assumptions.
  The problematic assumptions include:
  - That the pointer size used for the stack is the same size as the code size pointer, which is also the maximum sized pointer.
  - That 0 is an invalid, non-dereferenceable pointer value.
  These are problems for AMDGPU because alloca is used to implement the private address space, which uses a 32-bit index as the pointer value. Other pointers are 64-bit and behave more like LLVM's notion of generic address space. By changing the address space used for allocas, we can change our generic pointer type to be LLVM's generic pointer type which does have similar properties.
  llvm-svn: 299888
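A hedged sketch of what this enables downstream: instead of hard-coding address space 0, a pass can ask DataLayout which address space the target declares for allocas. The helper is illustrative, and the AMDGPU value mentioned in the comment is just the common example.

```cpp
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/IRBuilder.h"

using namespace llvm;

// Create a stack slot in the target's declared alloca address space
// (e.g. 5 on AMDGPU) rather than assuming address space 0.
static AllocaInst *createStackSlot(IRBuilder<> &B, Type *Ty,
                                   const DataLayout &DL) {
  return B.CreateAlloca(Ty, DL.getAllocaAddrSpace(), /*ArraySize=*/nullptr);
}
```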
* Fix UB found by -Wtautological-undefined-compare (David Blaikie, 2017-03-20, 1 file, -4/+3)
  llvm-svn: 298279
* Updates branch_weights annotation for call instructions during inlining. (Dehao Chen, 2017-03-20, 1 file, -11/+40)
  Summary: Inliner should update the branch_weights annotation to scale it to the proper value.
  Reviewers: davidxl, eraman
  Reviewed By: eraman
  Subscribers: zzheng, llvm-commits
  Differential Revision: https://reviews.llvm.org/D30767
  llvm-svn: 298270
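For context, a rough sketch of what such scaling looks like at the metadata level. The helper, its scaling rule, and the assumption that the !prof node is a plain branch_weights list are all illustrative rather than taken from the patch.

```cpp
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/MDBuilder.h"
#include "llvm/IR/Metadata.h"

using namespace llvm;

// Rescale an instruction's !prof branch_weights by CallSiteCount/CalleeEntryCount
// so the inlined copy reflects how often this particular call site runs.
static void scaleBranchWeights(Instruction &I, uint64_t CallSiteCount,
                               uint64_t CalleeEntryCount) {
  MDNode *Prof = I.getMetadata(LLVMContext::MD_prof);
  if (!Prof || CalleeEntryCount == 0)
    return;
  SmallVector<uint32_t, 4> Scaled;
  // Operand 0 is assumed to be the "branch_weights" string; the rest are weights.
  for (unsigned Op = 1, E = Prof->getNumOperands(); Op != E; ++Op) {
    auto *W = mdconst::dyn_extract<ConstantInt>(Prof->getOperand(Op));
    if (!W)
      return;
    Scaled.push_back(uint32_t(W->getZExtValue() * CallSiteCount / CalleeEntryCount));
  }
  I.setMetadata(LLVMContext::MD_prof,
                MDBuilder(I.getContext()).createBranchWeights(Scaled));
}
```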
* Remove getArgumentList() in favor of arg_begin(), args(), etc (Reid Kleckner, 2017-03-16, 1 file, -1/+1)
  Users often call getArgumentList().size(), which is a linear way to get the number of function arguments. arg_size(), on the other hand, is constant time. In general, the fact that arguments are stored in an iplist is an implementation detail, so I've removed it from the Function interface and moved all other users to the argument container APIs (arg_begin(), arg_end(), args(), arg_size()).
  Reviewed By: chandlerc
  Differential Revision: https://reviews.llvm.org/D31052
  llvm-svn: 298010
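For illustration, a small example of the replacement pattern; the helper is made up, but `args()` and `arg_size()` are the container APIs the commit refers to.

```cpp
#include "llvm/IR/Function.h"

using namespace llvm;

// Count pointer-typed parameters. F.args() iterates the arguments directly and
// F.arg_size() is constant time, unlike the old getArgumentList().size().
static unsigned countPointerArgs(const Function &F) {
  unsigned NumPtrArgs = 0;
  for (const Argument &A : F.args())
    if (A.getType()->isPointerTy())
      ++NumPtrArgs;
  return NumPtrArgs; // out of F.arg_size() total arguments
}
```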
* Revert "Strip debug info when inlining into a nodebug function."Adrian Prantl2017-03-071-30/+12
| | | | | | | | | | This reverts commit r296488. As noted by David Blaikie on llvm-commits, I overlooked the case of a debug function being inlined into a nodebug function being inlined into a debug function. llvm-svn: 297163
* Strip debug info when inlining into a nodebug function. (Adrian Prantl, 2017-02-28, 1 file, -12/+30)
  The LLVM backend cannot produce any debug info for an llvm::Function without a DISubprogram attachment. When inlining a debug-info-carrying function into a nodebug function, there is therefore no reason to keep any debug info intrinsic calls or debug locations on the instructions.
  This fixes a problem discovered in PR32042.
  rdar://problem/30679307
  llvm-svn: 296488
* Revert r296366 "[InlineFunction] add nonnull assumptions based on argument attributes" (Hans Wennborg, 2017-02-27, 1 file, -36/+22)
  It causes miscompiles e.g. during self-host of Clang (PR32082).
  llvm-svn: 296398
* [InlineFunction] add nonnull assumptions based on argument attributes (Sanjay Patel, 2017-02-27, 1 file, -22/+36)
  This was suggested in D27855: have the inliner add assumptions, so we don't lose nonnull info provided by argument attributes.
  This still doesn't solve PR28430 (dyn_cast), but this gets us closer.
  https://reviews.llvm.org/D29999
  llvm-svn: 296366
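A hedged sketch of the general idea (not the patch's actual code, which was later reverted): before the attributes disappear with the call site, turn each nonnull pointer argument into an explicit `llvm.assume`.

```cpp
#include "llvm/IR/CallSite.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Intrinsics.h"
#include "llvm/IR/Module.h"

using namespace llvm;

// For every pointer argument marked nonnull at this call site, emit
//   %notnull = icmp ne <ty> %arg, null
//   call void @llvm.assume(i1 %notnull)
// so the nonnull fact survives after the call is inlined away.
static void materializeNonNullAssumptions(CallSite CS) {
  Instruction *Call = CS.getInstruction();
  IRBuilder<> B(Call);
  Function *AssumeFn =
      Intrinsic::getDeclaration(Call->getModule(), Intrinsic::assume);
  for (unsigned I = 0, E = CS.arg_size(); I != E; ++I) {
    Value *Arg = CS.getArgument(I);
    if (!Arg->getType()->isPointerTy() ||
        !CS.paramHasAttr(I, Attribute::NonNull))
      continue;
    Value *NotNull = B.CreateICmpNE(Arg, Constant::getNullValue(Arg->getType()));
    B.CreateCall(AssumeFn, {NotNull});
  }
}
```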
* [InlineFunction] use getFunction(); NFC (Sanjay Patel, 2017-02-15, 1 file, -3/+3)
  llvm-svn: 295185
* [InlineFunction] use getCaller(); NFCI (Sanjay Patel, 2017-02-15, 1 file, -3/+2)
  llvm-svn: 295181
* [InlineFunction] use range-for loop; NFCI (Sanjay Patel, 2017-02-15, 1 file, -10/+8)
  llvm-svn: 295179
* Fix a bug in caller's BFI update code after inlining. (Easwaran Raman, 2017-02-14, 1 file, -3/+10)
  Multiple blocks in the callee can be mapped to a single cloned block since we prune the callee as we clone it. The existing code iterates over the value map and clones the block frequency (and eventually scales the frequencies of the cloned blocks). Value map's iteration is not deterministic and so the cloned block might get the frequency of any of the original blocks. The fix is to set the max of the original frequencies to the cloned block. The first block in the sequence must have this max frequency and, in the call context, subsequent blocks must have its frequency.
  Differential Revision: https://reviews.llvm.org/D29696
  llvm-svn: 295115
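A simplified sketch of that fix, under the assumption that a map from original callee blocks to their (possibly shared) clones is available; the helper name and the exact BFI update call are illustrative, not the patch's code.

```cpp
#include <algorithm>

#include "llvm/ADT/DenseMap.h"
#include "llvm/Analysis/BlockFrequencyInfo.h"
#include "llvm/IR/BasicBlock.h"

using namespace llvm;

// Give each cloned block the maximum frequency among all original callee
// blocks folded into it, instead of whichever block a non-deterministic
// map iteration happens to visit last.
static void setClonedBlockFrequencies(
    const DenseMap<const BasicBlock *, BasicBlock *> &OrigToClone,
    const BlockFrequencyInfo &CalleeBFI, BlockFrequencyInfo &CallerBFI) {
  DenseMap<BasicBlock *, uint64_t> MaxFreq;
  for (const auto &Entry : OrigToClone) {
    uint64_t OrigFreq = CalleeBFI.getBlockFreq(Entry.first).getFrequency();
    uint64_t &Slot = MaxFreq[Entry.second];
    Slot = std::max(Slot, OrigFreq);
  }
  for (const auto &Entry : MaxFreq)
    CallerBFI.setBlockFreq(Entry.first, Entry.second);
}
```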
* Improve PGO support for the new inliner (Easwaran Raman, 2017-01-20, 1 file, -4/+68)
  This adds the following to the new PM based inliner in PGO mode:
  * Use block frequency analysis to derive callsite's profile count and use that to adjust thresholds of hot and cold callsites.
  * Incrementally update the BFI of the caller after a callee gets inlined into it. This incremental update is only within an invocation of the run method - BFI is not preserved across calls to run. Update the function entry count of the callee after inlining it into a caller.
  * I've tuned the thresholds for the hot and cold callsites using a hacked up version of the old inliner that explicitly computes BFI on a set of internal benchmarks and spec. Once the new PM based pipeline stabilizes (IIRC Chandler mentioned there are known issues) I'll benchmark this again and adjust the thresholds if required.
  Inliner PGO support.
  Differential revision: https://reviews.llvm.org/D28331
  llvm-svn: 292666
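A minimal sketch of the first bullet: deriving a call site's estimated count from the caller's BFI and using it to bias the inlining threshold. The threshold multipliers and the helper itself are made up for illustration; they are not the tuned values the commit mentions.

```cpp
#include "llvm/ADT/Optional.h"
#include "llvm/Analysis/BlockFrequencyInfo.h"
#include "llvm/IR/Instructions.h"

using namespace llvm;

// Bias the inline threshold using the profile count BFI derives for the
// block that contains the call.
static int adjustThresholdForCallSite(const CallInst &CI, int DefaultThreshold,
                                      uint64_t HotCount, uint64_t ColdCount,
                                      const BlockFrequencyInfo &CallerBFI) {
  Optional<uint64_t> Count = CallerBFI.getBlockProfileCount(CI.getParent());
  if (!Count)
    return DefaultThreshold;
  if (*Count >= HotCount)
    return DefaultThreshold * 3; // more willing to inline hot call sites
  if (*Count <= ColdCount)
    return DefaultThreshold / 4; // less willing for cold ones
  return DefaultThreshold;
}
```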
* fix typo; NFC (Sanjay Patel, 2017-01-02, 1 file, -1/+1)
  llvm-svn: 290827
* [Inliner] remove unnecessary null checks from AddAlignmentAssumptions(); NFCI (Sanjay Patel, 2016-12-31, 1 file, -5/+3)
  We bail out on the 1st line if the assumption cache is not set, so there's no need to check it after that.
  llvm-svn: 290787
* [PM] Move the collection of call sites to a more appropriate place inside of `InlineFunction`. (Chandler Carruth, 2016-12-27, 1 file, -9/+15)
  Prior to this, call instructions are specifically being rewritten and replaced within the inlined region, invalidating some of the call sites. Several of these regions are using the same technique to walk the inlined region so this seems clearly safe up to this point.
  I've also added a short circuit to the scan for call sites based on what other code is doing.
  With this, the most common crash I've found in the new inliner code is fixed. I've turned it on for another test case that covers this scenario. I'll make my way through most of the other inliner test cases just to get some easy coverage next.
  llvm-svn: 290562
* [PM] Provide an initial, minimal port of the inliner to the new pass manager. (Chandler Carruth, 2016-12-20, 1 file, -1/+9)
  This doesn't implement *every* feature of the existing inliner, but tries to implement the most important ones for building a functional optimization pipeline and beginning to sort out bugs, regressions, and other problems.
  Notable, but intentional omissions:
  - No alloca merging support. Why? Because it isn't clear we want to do this at all. Active discussion and investigation is going on to remove it, so for simplicity I omitted it.
  - No support for trying to iterate on "internally" devirtualized calls. Why? Because it adds what I suspect is inappropriate coupling for little or no benefit. We will have an outer iteration system that tracks devirtualization including that from function passes and iterates already. We should improve that rather than approximate it here.
  - Optimization remarks. Why? Purely to make the patch smaller, no other reason at all.
  The last one I'll probably work on almost immediately. But I wanted to skip it in the initial patch to try to focus the change as much as possible as there is already a lot of code moving around and both of these *could* be skipped without really disrupting the core logic.
  A summary of the different things happening here:
  1) Adding the usual new PM class and rigging.
  2) Fixing minor underlying assumptions in the inline cost analysis or inline logic that don't generally hold in the new PM world.
  3) Adding the core pass logic which is in essence a loop over the calls in the nodes in the call graph. This is a bit duplicated from the old inliner, but only a handful of lines could realistically be shared. (I tried at first, and it really didn't help anything.) All told, this is only about 100 lines of code, and most of that is the mechanics of wiring up analyses from the new PM world.
  4) Updating the LazyCallGraph (in the new PM) based on the *newly inlined* calls and references. This is very minimal because we cannot form cycles.
  5) When inlining removes the last use of a function, eagerly nuking the body of the function so that any "one use remaining" inline cost heuristics are immediately refined, and queuing these functions to be completely deleted once inlining is complete and the call graph updated to reflect that they have become dead.
  6) After all the inlining for a particular function, updating the LazyCallGraph and the CGSCC pass manager to reflect the function-local simplifications that are done immediately and internally by the inline utilities. These are the exact same fundamental set of CG updates done by arbitrary function passes.
  7) Adding a bunch of test cases to specifically target CGSCC and other subtle aspects in the new PM world.
  Many thanks to the careful review from Easwaran and Sanjoy and others!
  Differential Revision: https://reviews.llvm.org/D24226
  llvm-svn: 290161
* Revert @llvm.assume with operand bundles (r289755-r289757) (Daniel Jasper, 2016-12-19, 1 file, -6/+26)
  This creates non-linear behavior in the inliner (see more details in r289755's commit thread).
  llvm-svn: 290086
* Remove the AssumptionCache (Hal Finkel, 2016-12-15, 1 file, -26/+6)
  After r289755, the AssumptionCache is no longer needed. Variables affected by assumptions are now found by using the new operand-bundle-based scheme. This new scheme is more computationally efficient, and also we need much less code...
  llvm-svn: 289756
* [InlineFunction] Refactor code in function `fixupLineNumbers' as suggested by David in D27462. NFC (Andrea Di Biagio, 2016-12-07, 1 file, -16/+18)
  llvm-svn: 288901
* [InlineFunction] Do not propagate the callsite debug location to instructions inlined from functions with debug info. (Andrea Di Biagio, 2016-12-07, 1 file, -3/+8)
  When a function F is inlined, InlineFunction extends the debug location of every instruction inlined from F by adding an InlinedAt. However, if an instruction has a 'null' debug location, InlineFunction would propagate the callsite debug location to it. This behavior existed since revision 210459.
  Revision 210459 was originally committed specifically to workaround the lack of debug information for instructions inlined from intrinsic functions (which are usually declared with attributes `__always_inline__, __nodebug__`). The problem with revision 210459 is that it doesn't make any sort of distinction between instructions inlined from a 'nodebug' function and instructions which are inlined from a function built with debug info. This issue may lead to incorrect stepping in the debugger.
  This patch works under the assumption that a nodebug function does not have a DISubprogram. When a function F is inlined into another function G, InlineFunction checks if F has debug info associated with it. For nodebug functions, the InlineFunction logic is unchanged (i.e. it would still propagate the callsite debugloc to the inlined instructions). Otherwise, InlineFunction no longer propagates the callsite debug location.
  Differential Revision: https://reviews.llvm.org/D27462
  llvm-svn: 288895
* [tsan] Add support for C++ exceptions into TSan (call __tsan_func_exit during unwinding), LLVM part (Kuba Brecka, 2016-11-14, 1 file, -31/+1)
  This adds support for TSan C++ exception handling, where we need to add extra calls to __tsan_func_exit when a function is exited via exception mechanisms. Otherwise the shadow stack gets corrupted (leaked). This patch moves and enhances the existing implementation of EscapeEnumerator that finds all possible function exit points, and adds extra EH cleanup blocks where needed.
  Differential Revision: https://reviews.llvm.org/D26177
  llvm-svn: 286893
* Inliner: Don't mark swifterror allocas with lifetime markers (Arnold Schwaighofer, 2016-09-09, 1 file, -0/+3)
  This would create a bitcast use which fails the verifier: swifterror values may only be used by loads, stores, and as function arguments.
  rdar://28233244
  llvm-svn: 281114
* Fix inliner funclet unwind memoization (Joseph Tremoulet, 2016-09-04, 1 file, -7/+79)
  Summary: The inliner may need to determine where a given funclet unwinds to, and this determination may depend on other funclets throughout the funclet tree. The code that performs this walk in getUnwindDestToken memoizes results to avoid redundant computations. In the case that a funclet's unwind destination is derived from its ancestor, there's code to walk back down the tree from the ancestor updating the memo map of its descendants to record the unwind destination. This change fixes that code to account for the case that some descendant has a different unwind destination, which can happen if that unwind dest is a descendant of the EHPad being queried and thus didn't determine its unwind destination.
  Also update test inline-funclets.ll, which is supposed to cover such scenarios, to include a case that fails an assertion without this fix but passes with it.
  Fixes PR29151.
  Reviewers: majnemer
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D24117
  llvm-svn: 280610
* [PM] Port the always inliner to the new pass manager in a much more minimal and boring form than the old pass manager's version. (Chandler Carruth, 2016-08-17, 1 file, -1/+1)
  This pass does the very minimal amount of work necessary to inline functions declared as always-inline. It doesn't support a wide array of things that the legacy pass manager did support, but is also ... about 20 lines of code. So it has that going for it.
  Notably things this doesn't support:
  - Array alloca merging
    - To support the above, bottom-up inlining with careful history tracking and call graph updates
  - DCE of the functions that become dead after this inlining.
  - Inlining through call instructions with the always_inline attribute. Instead, it focuses on inlining functions with that attribute.
  The first I've omitted because I'm hoping to just turn it off for the primary pass manager. If that doesn't pan out, I can add it here but it will be reasonably expensive to do so.
  The second should really be handled by running global-dce after the inliner. I don't want to re-implement the non-trivial logic necessary to do comdat-correct DCE of functions. This means the -O0 pipeline will have to be at least 'always-inline,global-dce', but that seems reasonable to me. If others are seriously worried about this I'd like to hear about it and understand why. Again, this is all solvable by factoring that logic into a utility and calling it here, but I'd like to wait to do that until there is a clear reason why the existing pass-based factoring won't work.
  The final point is a serious one. I can fairly easily add support for this, but it seems both costly and a confusing construct for the use case of the always inliner running at -O0. This attribute can of course still impact the normal inliner easily (although I find that a questionable re-use of the same attribute). I've started a discussion to sort out what semantics we want here and based on that can figure out if it makes sense to have this complexity at O0 or not.
  One other advantage of this design is that it should be quite a bit faster due to checking for whether the function is a viable candidate for inlining exactly once per function instead of doing it for each call site.
  Anyways, hopefully a reasonable starting point for this pass.
  Differential Revision: https://reviews.llvm.org/D23299
  llvm-svn: 278896
* Preserve the assumption cache more often (David Majnemer, 2016-08-16, 1 file, -12/+22)
  We were clearing it out in LoopUnswitch and InlineFunction instead of attempting to preserve it.
  llvm-svn: 278860
* [Inliner] Don't treat inalloca allocas as static (Reid Kleckner, 2016-08-12, 1 file, -3/+10)
  They aren't static, and moving them to the entry block across something else will only result in tears. Root cause of http://crbug.com/636558.
  llvm-svn: 278571
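A hedged sketch of the kind of test involved (the helper is illustrative, not the patch's code): the inliner normally hoists the callee's constant-size entry-block allocas into the caller's entry block, but an alloca that is used with an inalloca argument packet must be left where it is.

```cpp
#include "llvm/IR/Instructions.h"

using namespace llvm;

// Only a genuinely static alloca (constant size, in the entry block) that is
// not part of an inalloca argument packet is safe to hoist into the caller's
// entry block during inlining.
static bool isSafeToHoistIntoCallerEntry(const AllocaInst &AI) {
  return AI.isStaticAlloca() && !AI.isUsedWithInAlloca();
}
```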
* Avoid using a raw AssumptionCacheTracker in various inliner functions. (Sean Silva, 2016-07-23, 1 file, -5/+5)
  This unblocks the new PM part of River's patch in https://reviews.llvm.org/D22706
  Conveniently, this same change was needed for D21921 and so these changes are just spun out from there.
  llvm-svn: 276515
* Apply clang-tidy's modernize-loop-convert to most of lib/Transforms. (Benjamin Kramer, 2016-06-26, 1 file, -13/+10)
  Only minor manual fixes. No functionality change intended.
  llvm-svn: 273808
* Revert "[SimplifyCFG] Stop inserting calls to llvm.trap for UB"David Majnemer2016-06-251-1/+1
| | | | | | This reverts commit r273778, it seems to break UBSan :/ llvm-svn: 273779
* [SimplifyCFG] Stop inserting calls to llvm.trap for UB (David Majnemer, 2016-06-25, 1 file, -1/+1)
  SimplifyCFG had logic to insert calls to llvm.trap for two very particular IR patterns: stores and invokes of undef/null.
  While InstCombine canonicalizes certain undefined behavior IR patterns to stores of undef, phase ordering means that this cannot be relied upon in general. There are much better tools than llvm.trap: UBSan and ASan.
  N.B. I could be argued into reverting this change if a clear argument is made as to why it is important that we synthesize llvm.trap for stores; I'd be hard pressed to see why it'd be useful for invokes...
  llvm-svn: 273778
* Switch more loops to be range-based (David Majnemer, 2016-06-24, 1 file, -2/+1)
  This makes the code a little more concise, no functional change is intended.
  llvm-svn: 273644
* Run clang-tidy's performance-unnecessary-copy-initialization over LLVM. (Benjamin Kramer, 2016-06-12, 1 file, -1/+1)
  No functionality change intended.
  llvm-svn: 272516
* Pass DebugLoc and SDLoc by const ref. (Benjamin Kramer, 2016-06-12, 1 file, -1/+2)
  This used to be free; copying and moving DebugLocs became expensive after the metadata rewrite. Passing by reference eliminates a ton of track/untrack operations. No functionality change intended.
  llvm-svn: 272512
* All llvm.deoptimize declarations must use the same calling convention (Sanjoy Das, 2016-05-12, 1 file, -1/+7)
  This new verifier rule lets us unambiguously pick a calling convention when creating a new declaration for `@llvm.experimental.deoptimize.<ty>`. It is also congruent with our lowering strategy -- since all calls to `@llvm.experimental.deoptimize` are lowered to calls to `__llvm_deoptimize`, it is reasonable to enforce a unique calling convention.
  Some of the tests that were breaking this verifier rule have had to be split up into different .ll files.
  The inliner was violating this rule as well, and has been fixed to avoid producing invalid IR.
  llvm-svn: 269261
* [Inliner] Preserve llvm.mem.parallel_loop_access metadata (Hal Finkel, 2016-04-28, 1 file, -0/+31)
  When inlining a call site with llvm.mem.parallel_loop_access metadata, this metadata needs to be propagated to all cloned memory-accessing instructions. Otherwise, inlining parts of the loop body will invalidate the annotation.
  With this functionality, we now vectorize the following as expected:

    void Body(int *res, int *c, int *d, int *p, int i) {
      res[i] = (p[i] == 0) ? res[i] : res[i] + d[i];
    }

    void Test(int *res, int *c, int *d, int *p, int n) {
      int i;
    #pragma clang loop vectorize(assume_safety)
      for (i = 0; i < 1600; i++) {
        Body(res, c, d, p, i);
      }
    }

  llvm-svn: 267949
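A hedged sketch of the propagation step, using the 2016-era metadata kind (this is not the patch's exact code, which also has to merge with any annotation the cloned instruction already carries): copy the call site's `llvm.mem.parallel_loop_access` node onto each cloned instruction that may touch memory.

```cpp
#include "llvm/IR/Instruction.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Metadata.h"

using namespace llvm;

// Attach the call site's parallel-loop-access annotation to a cloned
// memory-accessing instruction so the parallel-loop guarantee survives inlining.
static void propagateParallelLoopAccess(Instruction &Cloned, MDNode *CallSiteMD) {
  if (!CallSiteMD || !Cloned.mayReadOrWriteMemory())
    return;
  // A real implementation would concatenate with existing metadata here;
  // this sketch only sets it when the clone carries none.
  if (!Cloned.getMetadata(LLVMContext::MD_mem_parallel_loop_access))
    Cloned.setMetadata(LLVMContext::MD_mem_parallel_loop_access, CallSiteMD);
}
```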
* Maintain calling convention when inlining calls to llvm.deoptimize (Sanjoy Das, 2016-04-09, 1 file, -1/+3)
  The behavior here was buggy -- we'd forget the calling convention after inlining a callsite calling llvm.deoptimize.
  llvm-svn: 265867
* Don't insert stackrestore on deoptimizing returns (Sanjoy Das, 2016-04-01, 1 file, -2/+4)
  They're not necessary (since the stack pointer is trivially restored on return), and the way LLVM inserts the stackrestore calls breaks the IR (we get a stackrestore between the deoptimize call and the return).
  llvm-svn: 265101
* Don't insert lifetime end markers on deoptimizing returns (Sanjoy Das, 2016-04-01, 1 file, -2/+5)
  They're not necessary (since the lifetime of the alloca is trivially over due to the return), and the way LLVM inserts the lifetime.end markers breaks the IR (we get a lifetime end marker between the deoptimize call and the return).
  llvm-svn: 265100
* Introduce a @llvm.experimental.guard intrinsic (Sanjoy Das, 2016-03-31, 1 file, -5/+7)
  Summary: As discussed on llvm-dev[1]. This change adds the basic boilerplate code around having this intrinsic in LLVM:
  - Changes in Intrinsics.td, and the IR Verifier
  - A lowering pass to lower @llvm.experimental.guard to normal control flow
  - Inliner support
  [1]: http://lists.llvm.org/pipermail/llvm-dev/2016-February/095523.html
  Reviewers: reames, atrick, chandlerc, rnk, JosephTremoulet, echristo
  Subscribers: mcrosier, llvm-commits
  Differential Revision: http://reviews.llvm.org/D18527
  llvm-svn: 264976
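For context, a guard is a call to `@llvm.experimental.guard(i1 %cond)` carrying a "deopt" operand bundle; if the condition is false, execution deoptimizes. A minimal C++ sketch of recognizing one (an illustrative helper, not the inliner's code):

```cpp
#include "llvm/IR/IntrinsicInst.h"
#include "llvm/IR/Intrinsics.h"

using namespace llvm;

// True if I is a call to @llvm.experimental.guard.
static bool isGuardCall(const Instruction &I) {
  if (const auto *II = dyn_cast<IntrinsicInst>(&I))
    return II->getIntrinsicID() == Intrinsic::experimental_guard;
  return false;
}
```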
* Introduce @llvm.experimental.deoptimize (Sanjoy Das, 2016-03-11, 1 file, -1/+64)
  Summary: This intrinsic, together with deoptimization operand bundles, allows frontends to express transfer of control and frame-local state from one (typically more specialized, hence faster) version of a function into another (typically more generic, hence slower) version.
  In languages with a fully integrated managed runtime this intrinsic can be used to implement "uncommon trap" like functionality. In unmanaged languages like C and C++, this intrinsic can be used to represent the slow paths of specialized functions.
  Note: this change does not address how `@llvm.experimental.deoptimize` is lowered. That will be done in a later change.
  Reviewers: chandlerc, rnk, atrick, reames
  Subscribers: llvm-commits, kmod, mjacob, maksfb, mcrosier, JosephTremoulet
  Differential Revision: http://reviews.llvm.org/D17732
  llvm-svn: 263281
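A hedged C++ sketch of how a frontend might emit such a call with a deoptimization operand bundle; the return type, argument, and state values are made up for illustration, and this is not code from the patch.

```cpp
#include <vector>

#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/InstrTypes.h"
#include "llvm/IR/Intrinsics.h"
#include "llvm/IR/Module.h"

using namespace llvm;

// Emit: %r = call i32 (...) @llvm.experimental.deoptimize.i32(i32 %arg)
//              [ "deopt"(<frame-local state>) ]
// The "deopt" bundle carries the abstract state the slower version needs.
static CallInst *emitDeoptimizeCall(IRBuilder<> &B, Module &M, Value *Arg,
                                    ArrayRef<Value *> DeoptState) {
  Function *DeoptFn = Intrinsic::getDeclaration(
      &M, Intrinsic::experimental_deoptimize, {B.getInt32Ty()});
  OperandBundleDef DeoptBundle(
      "deopt", std::vector<Value *>(DeoptState.begin(), DeoptState.end()));
  return B.CreateCall(DeoptFn, {Arg}, {DeoptBundle});
}
```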
* Revert revisions 262636, 262643, 262679, and 262682. (Easwaran Raman, 2016-03-08, 1 file, -6/+3)
  llvm-svn: 262883
* Fix a use-after-free bug introduced in r262636 (Easwaran Raman, 2016-03-04, 1 file, -1/+4)
  llvm-svn: 262679
* Infrastructure for PGO enhancements in inliner (Easwaran Raman, 2016-03-03, 1 file, -2/+2)
  This patch provides the following infrastructure for PGO enhancements in inliner:
  - Enable the use of block level profile information in inliner
  - Incremental update of block frequency information during inlining
  - Update the function entry counts of callees when they get inlined into callers.
  Differential Revision: http://reviews.llvm.org/D16381
  llvm-svn: 262636