path: root/llvm/test/Transforms/Inline
Each entry: commit subject  [author, date; files changed, -lines removed/+lines added], followed by the full commit message.
...
* Strip debug info when inlining into a nodebug function.  [Adrian Prantl, 2017-02-28; 2 files changed, -3/+40]
  The LLVM backend cannot produce any debug info for an llvm::Function without
  a DISubprogram attachment. When inlining a debug-info-carrying function into
  a nodebug function, there is therefore no reason to keep any debug info
  intrinsic calls or debug locations on the instructions. This fixes a problem
  discovered in PR32042.
  rdar://problem/30679307
  llvm-svn: 296488
* Revert r296366 "[InlineFunction] add nonnull assumptions based on argument attributes"  [Hans Wennborg, 2017-02-27; 1 file changed, -5/+1]
  It causes miscompiles e.g. during self-host of Clang (PR32082).
  llvm-svn: 296398
* [InlineFunction] add nonnull assumptions based on argument attributes  [Sanjay Patel, 2017-02-27; 1 file changed, -1/+5]
  This was suggested in D27855: have the inliner add assumptions, so we don't
  lose nonnull info provided by argument attributes.
  This still doesn't solve PR28430 (dyn_cast), but this gets us closer.
  https://reviews.llvm.org/D29999
  llvm-svn: 296366
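A minimal LLVM IR sketch of the idea behind the change above (hypothetical function names, typed-pointer syntax of that era): when a call to a callee whose parameter carries the nonnull attribute gets inlined, the guarantee can be preserved as an llvm.assume in the caller so later passes still see it.

    ; illustrative sketch, not from the actual test files
    declare void @llvm.assume(i1)

    ; callee's parameter carries the nonnull attribute
    define void @callee(i8* nonnull %p) {
      ret void
    }

    ; roughly what @caller looks like after "call void @callee(i8* %q)" has been
    ; inlined: the argument attribute becomes an explicit assumption on %q
    define void @caller(i8* %q) {
      %q.nonnull = icmp ne i8* %q, null
      call void @llvm.assume(i1 %q.nonnull)
      ret void
    }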
* [Inline] add tests to show attribute information loss; NFC  [Sanjay Patel, 2017-02-15; 1 file changed, -0/+50]
  llvm-svn: 295209
* Fix a bug in caller's BFI update code after inlining.  [Easwaran Raman, 2017-02-14; 1 file changed, -0/+93]
  Multiple blocks in the callee can be mapped to a single cloned block, since we
  prune the callee as we clone it. The existing code iterates over the value map
  and clones the block frequency (and eventually scales the frequencies of the
  cloned blocks). The value map's iteration order is not deterministic, so the
  cloned block might end up with the frequency of any of the original blocks.
  The fix is to give the cloned block the maximum of the original frequencies:
  the first block in the sequence must have this max frequency and, in the call
  context, subsequent blocks must have its frequency.
  Differential Revision: https://reviews.llvm.org/D29696
  llvm-svn: 295115
* Do not apply redundant LastCallToStaticBonus  [Taewook Oh, 2017-02-14; 1 file changed, -0/+52]
  Summary: As written in the comments above, LastCallToStaticBonus is already
  applied to the cost if Caller has only one user, so it is redundant to reapply
  the bonus here. If the only user is not a caller, TotalSecondaryCost will not
  be adjusted anyway because callerWillBeRemoved is false. If there's no caller
  at all, we don't need to care about TotalSecondaryCost because
  inliningPreventsSomeOuterInline is false.
  Reviewers: chandlerc, eraman
  Reviewed By: eraman
  Subscribers: haicheng, davidxl, davide, llvm-commits, mehdi_amini
  Differential Revision: https://reviews.llvm.org/D29169
  llvm-svn: 295075
* [Inliner] Fold analysis remarks into missed remarks  [Adam Nemet, 2017-01-30; 2 files changed, -4/+2]
  This significantly reduces the noise level of these messages.
  llvm-svn: 293492
* [PM] Replace uses of AssertingVH from members of analysis results with a lazy-asserting PoisoningVH.  [Chandler Carruth, 2017-01-24; 1 file changed, -5/+4]
  AssertingVH is fundamentally incompatible with cache invalidation of analysis
  results: the invalidation happens after the AssertingVH has already fired.
  Instead, use a PoisoningVH that will assert if the dangling handle is ever
  used rather than merely be assigned or destroyed.

  This patch also removes all of the (numerous) doomed attempts to work around
  this fundamental incompatibility. It is a pretty significant simplification IMO.

  The most interesting change is in the Inliner, where we still do some clearing
  because we don't want to rely on the coarse-grained invalidation strategy of
  the containing pass manager. However, I prefer the approach that contains this
  logic to the cleanup phase of the Inliner, and I think we could enhance the
  CGSCC analysis management layer to make this even better in the future if
  desired.

  The rest is straight cleanup.

  I've also added a test for one of the harder cases to work around: when a
  *module analysis* contains many AssertingVHes pointing at functions.

  Differential Revision: https://reviews.llvm.org/D29006
  llvm-svn: 292928
* [PM] Add a dedicated test case for the issue fixed in r292770.  [Chandler Carruth, 2017-01-23; 1 file changed, -0/+33]
  While this is covered by a Clang test case, we should have something local to
  LLVM that immediately checks that the inliner doesn't leave analyses pointing
  at dangling IR bodies.
  llvm-svn: 292772
* [PM] Fix a really nasty bug introduced when adding PGO support to the new PM's inliner.  [Chandler Carruth, 2017-01-22; 1 file changed, -0/+111]
  The bug happens when we refine an SCC after having computed a proxy for the
  FunctionAnalysisManager, and then proceed to compute fresh analyses for
  functions in the *new* SCC using the manager provided by the old SCC's proxy.
  *And* when we manage to mutate a function in this new SCC in a way that
  invalidates those analyses. This can be... challenging to reproduce.

  I've managed to contrive a set of functions that trigger this and added a test
  case, but it is a bit brittle. I've directly checked that the passes run in
  the expected ways to help avoid the test just becoming silently irrelevant.

  This gets the new PM back to passing the LLVM test suite after the PGO
  improvements landed.
  llvm-svn: 292757
* Improve PGO support for the new inliner  [Easwaran Raman, 2017-01-20; 7 files changed, -2/+272]
  This adds the following to the new PM based inliner in PGO mode:
  - Use block frequency analysis to derive the callsite's profile count and use
    that to adjust the thresholds of hot and cold callsites.
  - Incrementally update the BFI of the caller after a callee gets inlined into
    it. This incremental update is only within an invocation of the run method;
    BFI is not preserved across calls to run. Update the function entry count of
    the callee after inlining it into a caller.
  - I've tuned the thresholds for the hot and cold callsites using a hacked-up
    version of the old inliner that explicitly computes BFI, on a set of
    internal benchmarks and SPEC. Once the new PM based pipeline stabilizes
    (IIRC Chandler mentioned there are known issues) I'll benchmark this again
    and adjust the thresholds if required.
  Inliner PGO support.
  Differential revision: https://reviews.llvm.org/D28331
  llvm-svn: 292666
* Recommit "[InlineCost] Use TTI to check if GEP is free." #3Haicheng Wu2017-01-202-0/+32
| | | | | | | | | | | | This is the third attemp to recommit r292526. The original summary: Currently, a GEP is considered free only if its indices are all constant. TTI::getGEPCost() can give target-specific more accurate analysis. TTI is already used for the cost of many other instructions. llvm-svn: 292633
* Revert "Recommit "[InlineCost] Use TTI to check if GEP is free." #2"Haicheng Wu2017-01-201-30/+0
| | | | | | This reverts commit r292616 because the test case still has problem. llvm-svn: 292618
* Recommit "[InlineCost] Use TTI to check if GEP is free." #2Haicheng Wu2017-01-201-0/+30
| | | | | | | | | | | | This is the second attemp to recommit r292526. The original summary: Currently, a GEP is considered free only if its indices are all constant. TTI::getGEPCost() can give target-specific more accurate analysis. TTI is already used for the cost of many other instructions. llvm-svn: 292616
* Revert "Recommit "[InlineCost] Use TTI to check if GEP is free.""Haicheng Wu2017-01-201-30/+0
| | | | | | This reverts commit r292570. The test still has problem. llvm-svn: 292572
* Recommit "[InlineCost] Use TTI to check if GEP is free."Haicheng Wu2017-01-201-0/+30
| | | | | | | | | | | | This recommits r292526 which is reverted in r292529 after fixing the test case. The original summary: Currently, a GEP is considered free only if its indices are all constant. TTI::getGEPCost() can give target-specific more accurate analysis. TTI is already used for the cost of many other instructions. llvm-svn: 292570
* Revert "[InlineCost] Use TTI to check if GEP is free."Haicheng Wu2017-01-191-29/+0
| | | | | | This reverts commit r292526. The test case has problem. llvm-svn: 292529
* [InlineCost] Use TTI to check if GEP is free.  [Haicheng Wu, 2017-01-19; 1 file changed, -0/+29]
  Currently, a GEP is considered free only if its indices are all constant.
  TTI::getGEPCost() can give a more accurate, target-specific analysis. TTI is
  already used for the cost of many other instructions.
  Differential Revision: https://reviews.llvm.org/D28693
  llvm-svn: 292526
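A minimal IR sketch (hypothetical names) of the case this patch series targets: a GEP with a variable index has no all-constant indices, so the old heuristic always charged it toward the inline cost, while TTI::getGEPCost may report it as free on targets that fold it into the load's addressing mode.

    ; illustrative sketch, not from the actual test files
    define i32 @load_elem(i32* %base, i64 %i) {
      ; variable-index GEP: previously never "free"; with this change the
      ; target's TTI::getGEPCost decides instead
      %addr = getelementptr inbounds i32, i32* %base, i64 %i
      %val = load i32, i32* %addr
      ret i32 %val
    }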
* [Inliner] Fix a test where I typo'ed 'CHECK' as 'CHCEK' when converting to FileCheck.  [Chandler Carruth, 2017-01-04; 1 file changed, -1/+1]
  Fortunately, it passes. =]
  Spotted in review by Bob Wilson!
  llvm-svn: 290953
* [PM] Introduce a devirtualization iteration layer for the new PM.  [Chandler Carruth, 2016-12-28; 1 file changed, -0/+1]
  This is an orthogonal and separated layer instead of being embedded inside the
  pass manager. While it adds a small amount of complexity, it is fairly minimal
  and the composability and control seem worth the cost. The logic for this ends
  up being nicely isolated and targeted. It should be easy to experiment with
  different iteration strategies wrapped around the CGSCC bottom-up walk using
  this kind of facility.

  The mechanism used to track devirtualization is the simplest one I came up
  with. I think it handles most of the cases the existing iteration machinery
  handles, but I haven't done a *very* in-depth analysis. It does however match
  the basic intended semantics, and we can tweak or tune its exact behavior
  incrementally as necessary.

  One thing that we may want to revisit is freshly building the value handle set
  on each iteration. While I don't think this will be a significant cost (it is
  strictly fewer value handles, but more churn of value handles, than the old
  call graph), it is conceivable that we'll want a somewhat more clever tracking
  mechanism. My hope is to layer that on as a follow-up patch with data
  supporting any implementation complexity it adds.

  This code also provides for a basic count heuristic: if the number of indirect
  calls decreases and the number of direct calls increases for a given function
  in the SCC, we assume devirtualization is responsible. This matches the
  heuristics currently used in the legacy pass manager.

  Differential Revision: https://reviews.llvm.org/D23114
  llvm-svn: 290665
* [PM] Teach the CGSCC's CG update utility to more carefully invalidate analyses when we're about to break apart an SCC.  [Chandler Carruth, 2016-12-28; 1 file changed, -0/+104]
  We can't wait until after breaking apart the SCC to invalidate things:
  1) Which SCC do we then invalidate? All of them?
  2) Even if we invalidate all of them, a newly created SCC may not have a
     proxy that will convey the invalidation to functions!

  Previously we only invalidated one of the SCCs, and too late. This led to
  stale analyses remaining in the cache. And because the caching strategy
  actually works, they would get used and chaos would ensue.

  Doing invalidation early is somewhat pessimizing though if we *know* that the
  SCC structure won't change. So it turns out that the design to make the
  mutation API force the caller to know the *kind* of mutation in advance was
  indeed 100% correct and we didn't do enough of it. So this change also splits
  two cases of switching a call edge to a ref edge into two separate APIs so
  that callers can clearly test for this and take the easy path without
  invalidating when appropriate. This is particularly important in this case as
  we expect most inlines to be between functions in separate SCCs and so the
  common case is that we don't have to so aggressively invalidate analyses.

  The LCG API change in turn needed some basic cleanups and better testing in
  its unittest. No interesting functionality changed there other than more
  coverage of the returned sequence of SCCs.

  While this seems like an obvious improvement over the current state, I'd like
  to revisit the core concept of invalidating within the CG-update layer at all.
  I'm wondering if we would be better served forcing the callers to handle the
  invalidation beforehand in the cases that they can handle it. An interesting
  example is when we want to teach the inliner to *update and preserve*
  analyses. But we can cross that bridge when we get there.

  With this patch, the new pass manager can build all of the LLVM test suite at
  -O3 and everything passes. =D I haven't bootstrapped yet and I'm sure there
  are still plenty of bugs, but this gives a nice baseline so I'm going to
  increasingly focus on fleshing out the missing functionality, especially the
  bits that are just turned off right now in order to let us establish this
  baseline.
  llvm-svn: 290664
* [PM] Teach the inliner's call graph update to handle inserting new edges when they are call edges at the leaf but may (transitively) be reached via ref edges.  [Chandler Carruth, 2016-12-28; 1 file changed, -0/+39]
  It turns out there is a simple rule: insert everything as a ref edge, which is
  a safe conservative default. Then we let the existing update logic handle
  promoting some of those to call edges.

  Note that it would be fairly cheap to make these call edges right away if that
  is desirable by testing whether there is some existing call path from the
  source to the target. It just seemed like slightly more complexity in this
  code path that isn't strictly necessary. If anyone feels strongly about
  handling this differently I'm happy to change it.
  llvm-svn: 290649
* [PM] Turn on the new PM's inliner in addition to the current one for most of the inliner test cases.  [Chandler Carruth, 2016-12-27; 60 files changed, -5/+63]
  The inliner involves a bunch of interesting code and tends to be where most of
  the issues I've seen experimenting with the new PM lie. All of these test
  cases pass, but I'd like to keep some more thorough coverage here, so I'm
  doing a fairly blanket enabling.

  There are a handful of interesting tests I've not enabled yet because they're
  focused on the always inliner, or on functionality that doesn't (yet) exist in
  the inliner.
  llvm-svn: 290592
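A representative sketch of the dual RUN lines such tests carry after this change (an assumed form; the exact pipeline spelling varies from test to test):

    ; illustrative sketch, not from the actual test files
    ; Legacy pass manager inliner:
    ; RUN: opt < %s -inline -S | FileCheck %s
    ; New pass manager inliner:
    ; RUN: opt < %s -passes='cgscc(inline)' -S | FileCheck %s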
* [PM] Add one of the features left out of the initial inliner patch: skipping indirectly recursive inline chains.  [Chandler Carruth, 2016-12-27; 1 file changed, -0/+1]
  To do this, we implicitly build an inline stack for each callsite and check
  prior to inlining that doing so would not form a cycle. This uses the exact
  same technique and even shares some code with the legacy PM inliner.

  This solution remains deeply unsatisfying to me because it means we cannot
  actually iterate the inliner externally. Doing so would not be able to easily
  detect and avoid such cycles. Some day I would very much like to have a
  solution that works without this internal state to detect cycles, but this is
  not that day.
  llvm-svn: 290590
* [PM] Wire up another test to the new pass manager.  [Chandler Carruth, 2016-12-27; 1 file changed, -6/+8]
  Nothing really interesting here, but I had to improve the test to use
  variables rather than hard-coding value names, as we happen to end up with
  different value names in the new PM.
  llvm-svn: 290589
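A sketch of what "use variables rather than hard-coding value names" means in a FileCheck test (hypothetical check lines):

    ; illustrative sketch, not from the actual test file
    ; Brittle: hard-coded value name, which differs between pass managers.
    ; CHECK: %call = call i32 @callee()

    ; Robust: a FileCheck variable matches whatever name is produced.
    ; CHECK: %[[R:.*]] = call i32 @callee()
    ; CHECK: ret i32 %[[R]]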
* [PM] Teach the inliner in the new PM to merge attributes after inlining.  [Chandler Carruth, 2016-12-27; 1 file changed, -1/+2]
  Also enable the new PM in the attributes test case which caught this issue.
  llvm-svn: 290572
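One example of what merging attributes after inlining can mean in practice (a sketch with hypothetical names; stack-protector levels are among the function attributes reconciled between caller and callee):

    ; illustrative sketch, not from the actual test file
    define internal void @callee() sspstrong {
      ret void
    }

    define void @caller() ssp {
      ; once @callee is inlined here, the caller's stack-protector attribute is
      ; expected to be raised to the stronger of the two (ssp -> sspstrong) so
      ; the inlined body keeps the protection it was compiled with
      call void @callee()
      ret void
    }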
* [Inliner] Modernize all of the inliner tests that were using grep.  [Chandler Carruth, 2016-12-27; 14 files changed, -213/+320]
  This mostly involved converting from grep to FileCheck and tidying up the IR
  used. In one case (invoke_test-3.ll) the test had become completely pointless
  as we use 'resume' rather than 'unwind' now, and even then it did not occur at
  the end of the line.
  llvm-svn: 290570
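A sketch of the kind of conversion involved (hypothetical test content): a grep-based RUN line becomes a FileCheck check anchored to the right function.

    ; illustrative sketch, not from the actual test files
    ; Before: the grep style only asserts that a call is gone somewhere.
    ; RUN: opt < %s -inline -S | not grep "call i32 @callee"

    ; After: FileCheck ties the check to the caller being examined.
    ; RUN: opt < %s -inline -S | FileCheck %s
    ; CHECK-LABEL: define i32 @caller(
    ; CHECK-NOT: call i32 @callee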
* [PM] Move the collection of call sites to a more appropriate place inside of `InlineFunction`.  [Chandler Carruth, 2016-12-27; 1 file changed, -0/+1]
  Prior to this, call instructions are specifically being rewritten and replaced
  within the inlined region, invalidating some of the call sites. Several of
  these regions are using the same technique to walk the inlined region so this
  seems clearly safe up to this point.

  I've also added a short circuit to the scan for call sites based on what other
  code is doing.

  With this, the most common crash I've found in the new inliner code is fixed.
  I've turned it on for another test case that covers this scenario. I'll make
  my way through most of the other inliner test cases just to get some easy
  coverage next.
  llvm-svn: 290562
* [PM] Teach the always inliner in the new pass manager to support removing fully-dead comdats without removing dead entries in comdats with live members.  [Chandler Carruth, 2016-12-26; 1 file changed, -0/+27]
  This factors the core logic out of the current inliner's internals to a
  reusable utility and leverages that in both places. The factored out code
  should also be (minorly) more efficient in cases where we have very few dead
  functions or dead comdats to consider.

  I've added a test case to cover this behavior of the always inliner. This is
  the last significant bug in the new PM's always inliner I've found (so far).
  llvm-svn: 290557
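A sketch of the situation being handled (hypothetical names): once the always inliner has inlined @helper everywhere and deleted it, the comdat it belonged to may have no live members left and can be dropped, while a comdat that still has a live member must keep its entries.

    ; illustrative sketch, not from the actual test file
    $helper_group = comdat any

    define linkonce_odr void @helper() alwaysinline comdat($helper_group) {
      ret void
    }

    define void @user() {
      ; after always-inlining, this call disappears; if @helper then has no
      ; remaining uses it is deleted, and $helper_group becomes fully dead
      call void @helper()
      ret void
    }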
* [PM] Remove a bunch of junk that snuck in when I failed at manipulating my editor to close and commit the patch.  [Chandler Carruth, 2016-12-23; 1 file changed, -21/+0]
  Sorry for the noise.
  llvm-svn: 290460
* [PM] Teach the always inlining test case to be much more strict about whether functions are removed, and fix the new PM's always inliner to actually pass this test.  [Chandler Carruth, 2016-12-23; 1 file changed, -8/+154]
  Without this, the new PM's always inliner leaves all the functions kicking
  around, which won't work out very well given the semantics of always inline.

  Doing this really highlights how frustrating the current alwaysinline semantic
  contract is though -- why can we put it on *external* functions, etc.?

  Also I've added a number of tricky and interesting test cases for removing
  functions with the always inliner. There is one remaining case not handled --
  fully removing comdats -- and I've left a FIXME about this.
  llvm-svn: 290457
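A sketch of the stricter expectation (hypothetical names): an internal alwaysinline function should not merely be inlined, it should be gone from the module afterwards.

    ; illustrative sketch, not from the actual test file
    define internal i32 @inner() alwaysinline {
      ret i32 7
    }

    define i32 @outer() {
      ; the always inliner must inline this call, and because @inner is internal
      ; with no remaining users it must then delete @inner entirely rather than
      ; leave its body kicking around in the module
      %r = call i32 @inner()
      ret i32 %r
    }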
* [PM] Clean up test case and comments a bit. NFC.  [Chandler Carruth, 2016-12-23; 1 file changed, -2/+5]
  llvm-svn: 290456
* Renumber testcase metadata nodes after r290153.  [Adrian Prantl, 2016-12-22; 1 file changed, -85/+75]
  This patch renumbers the metadata nodes in debug info testcases after
  https://reviews.llvm.org/D26769. This is a separate patch because it causes so
  much churn. This was implemented with a Python script that pipes the testcases
  through "llvm-as - | llvm-dis -" and then goes through the original and new
  output side by side to insert all comments at a close-enough location.
  Differential Revision: https://reviews.llvm.org/D27765
  llvm-svn: 290292
* [PM] Provide an initial, minimal port of the inliner to the new pass manager.  [Chandler Carruth, 2016-12-20; 4 files changed, -0/+416]
  This doesn't implement *every* feature of the existing inliner, but tries to
  implement the most important ones for building a functional optimization
  pipeline and beginning to sort out bugs, regressions, and other problems.

  Notable, but intentional, omissions:
  - No alloca merging support. Why? Because it isn't clear we want to do this at
    all. Active discussion and investigation is going on to remove it, so for
    simplicity I omitted it.
  - No support for trying to iterate on "internally" devirtualized calls. Why?
    Because it adds what I suspect is inappropriate coupling for little or no
    benefit. We will have an outer iteration system that tracks devirtualization
    including that from function passes and iterates already. We should improve
    that rather than approximate it here.
  - Optimization remarks. Why? Purely to make the patch smaller, no other reason
    at all.

  The last one I'll probably work on almost immediately. But I wanted to skip it
  in the initial patch to try to focus the change as much as possible, as there
  is already a lot of code moving around and both of these *could* be skipped
  without really disrupting the core logic.

  A summary of the different things happening here:
  1) Adding the usual new PM class and rigging.
  2) Fixing minor underlying assumptions in the inline cost analysis or inline
     logic that don't generally hold in the new PM world.
  3) Adding the core pass logic, which is in essence a loop over the calls in
     the nodes in the call graph. This is a bit duplicated from the old inliner,
     but only a handful of lines could realistically be shared. (I tried at
     first, and it really didn't help anything.) All told, this is only about
     100 lines of code, and most of that is the mechanics of wiring up analyses
     from the new PM world.
  4) Updating the LazyCallGraph (in the new PM) based on the *newly inlined*
     calls and references. This is very minimal because we cannot form cycles.
  5) When inlining removes the last use of a function, eagerly nuking the body
     of the function so that any "one use remaining" inline cost heuristics are
     immediately refined, and queuing these functions to be completely deleted
     once inlining is complete and the call graph is updated to reflect that
     they have become dead.
  6) After all the inlining for a particular function, updating the
     LazyCallGraph and the CGSCC pass manager to reflect the function-local
     simplifications that are done immediately and internally by the inline
     utilities. These are the exact same fundamental set of CG updates done by
     arbitrary function passes.
  7) Adding a bunch of test cases to specifically target CGSCC and other subtle
     aspects in the new PM world.

  Many thanks to the careful review from Easwaran and Sanjoy and others!
  Differential Revision: https://reviews.llvm.org/D24226
  llvm-svn: 290161
* [IR] Remove the DIExpression field from DIGlobalVariable.  [Adrian Prantl, 2016-12-20; 1 file changed, -2/+2]
  This patch implements PR31013 by introducing a DIGlobalVariableExpression that
  holds a pair of DIGlobalVariable and DIExpression.

  Currently, DIGlobalVariable holds a DIExpression. This is not the best way to
  model this:
  (1) The DIGlobalVariable should describe the source level variable, not how to
      get to its location.
  (2) It makes it unsafe/hard to update the expressions when we call
      replaceExpression on the DIGlobalVariable.
  (3) It makes it impossible to represent a global variable that is in more than
      one location (e.g., a variable with multiple DW_OP_LLVM_fragment-s).

  We also moved away from attaching the DIExpression to DILocalVariable for the
  same reasons.

  This reapplies r289902 with additional testcase upgrades and a change to the
  Bitcode record for DIGlobalVariable that makes upgrading the old format
  unambiguous also for variables without DIExpressions.

  <rdar://problem/29250149>
  https://llvm.org/bugs/show_bug.cgi?id=31013
  Differential Revision: https://reviews.llvm.org/D26769
  llvm-svn: 290153
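For reference, the resulting pairing looks roughly like this in today's textual IR (a sketch only; the referenced scope, file, and type nodes are omitted, and the exact spelling at the time of this commit may have differed):

    ; illustrative sketch; !2, !3, !4 (scope, file, type) are not shown here
    !0 = !DIGlobalVariableExpression(var: !1, expr: !DIExpression())
    !1 = distinct !DIGlobalVariable(name: "g", scope: !2, file: !3, line: 1,
                                    type: !4, isLocal: false, isDefinition: true)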
* Revert "[IR] Remove the DIExpression field from DIGlobalVariable."Adrian Prantl2016-12-161-2/+2
| | | | | | | | | | | | | | | | | This reverts commit 289920 (again). I forgot to implement a Bitcode upgrade for the case where a DIGlobalVariable has not DIExpression. Unfortunately it is not possible to safely upgrade these variables without adding a flag to the bitcode record indicating which version they are. My plan of record is to roll the planned follow-up patch that adds a unit: field to DIGlobalVariable into this patch before recomitting. This way we only need one Bitcode upgrade for both changes (with a version flag in the bitcode record to safely distinguish the record formats). Sorry for the churn! llvm-svn: 289982
* [IR] Remove the DIExpression field from DIGlobalVariable.  [Adrian Prantl, 2016-12-16; 1 file changed, -2/+2]
  This patch implements PR31013 by introducing a DIGlobalVariableExpression that
  holds a pair of DIGlobalVariable and DIExpression.

  Currently, DIGlobalVariable holds a DIExpression. This is not the best way to
  model this:
  (1) The DIGlobalVariable should describe the source level variable, not how to
      get to its location.
  (2) It makes it unsafe/hard to update the expressions when we call
      replaceExpression on the DIGlobalVariable.
  (3) It makes it impossible to represent a global variable that is in more than
      one location (e.g., a variable with multiple DW_OP_LLVM_fragment-s).

  We also moved away from attaching the DIExpression to DILocalVariable for the
  same reasons.

  This reapplies r289902 with additional testcase upgrades.

  <rdar://problem/29250149>
  https://llvm.org/bugs/show_bug.cgi?id=31013
  Differential Revision: https://reviews.llvm.org/D26769
  llvm-svn: 289920
* Revert "[IR] Remove the DIExpression field from DIGlobalVariable."Adrian Prantl2016-12-161-2/+2
| | | | | | This reverts commit 289902 while investigating bot berakage. llvm-svn: 289906
* [IR] Remove the DIExpression field from DIGlobalVariable.  [Adrian Prantl, 2016-12-16; 1 file changed, -2/+2]
  This patch implements PR31013 by introducing a DIGlobalVariableExpression that
  holds a pair of DIGlobalVariable and DIExpression.

  Currently, DIGlobalVariable holds a DIExpression. This is not the best way to
  model this:
  (1) The DIGlobalVariable should describe the source level variable, not how to
      get to its location.
  (2) It makes it unsafe/hard to update the expressions when we call
      replaceExpression on the DIGlobalVariable.
  (3) It makes it impossible to represent a global variable that is in more than
      one location (e.g., a variable with multiple DW_OP_LLVM_fragment-s).

  We also moved away from attaching the DIExpression to DILocalVariable for the
  same reasons.

  <rdar://problem/29250149>
  https://llvm.org/bugs/show_bug.cgi?id=31013
  Differential Revision: https://reviews.llvm.org/D26769
  llvm-svn: 289902
* [DIExpression] Introduce a dedicated DW_OP_LLVM_fragment operation so we can stop using DW_OP_bit_piece with the wrong semantics.  [Adrian Prantl, 2016-12-05; 1 file changed, -2/+2]
  The entire back story can be found here:
  http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20161114/405934.html

  The gist is that in LLVM we've been misinterpreting DW_OP_bit_piece's offset
  field to mean the offset into the source variable rather than the offset into
  the location at the top of the DWARF expression stack. In order to be able to
  fix this in a subsequent patch, this patch introduces a dedicated
  DW_OP_LLVM_fragment operation with the semantics that we used to apply to
  DW_OP_bit_piece, which is what we actually need while inside of LLVM.

  This patch is complete with a bitcode upgrade for expressions using the old
  format. It does not yet fix the DWARF backend to use DW_OP_bit_piece
  correctly.

  Implementation note: We discussed several options for implementing this,
  including reserving a dedicated field in DIExpression for the fragment size
  and offset, but using a custom operator at the end of the expression works
  just fine and is more efficient, because we then only pay for it when we need
  it.

  Differential Revision: https://reviews.llvm.org/D27361
  rdar://problem/29335809
  llvm-svn: 288683
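For reference, in today's textual IR the new operator takes a bit offset into the source variable and a size in bits, roughly as below (a sketch; such expressions appear as the expression operand of debug intrinsics and are shown here out of context):

    ; illustrative sketch, out of context
    !5 = !DIExpression(DW_OP_LLVM_fragment, 0, 32)   ; bits [0, 32) of the variable
    !6 = !DIExpression(DW_OP_LLVM_fragment, 32, 32)  ; bits [32, 64) of the variable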
* [InlineCost] Reduce inline thresholds to compensate for cost changes  [James Molloy, 2016-11-28; 2 files changed, -10/+8]
  In r286814, the algorithm for calculating inline costs changed. This caused
  more inlining to take place, which is especially apparent in optsize and
  minsize modes.

  As the cost calculation removed a skewed behaviour (we were inconsistent about
  the cost of calls) it isn't possible to update the thresholds to get exactly
  the same behaviour as before. However, this threshold change accounts for the
  very common case where an inline candidate has no calls within it. In this
  case, r286814 would inline around 5-6 more (IR) instructions.

  The changes to -Oz have been heavily benchmarked. The "obvious" value for the
  inline threshold at -Oz is zero, but due to inaccuracies in the inline
  heuristics this can actually cause code size increases due to not inlining key
  thunk functions (that then disappear). Experimentally, 5 was the sweet spot
  for code size over the test-suite.

  For -Os, this change removes the outlier results shown up by green dragon
  (http://104.154.54.203/db_default/v4/nts/13248).

  Fixes D26848.
  llvm-svn: 288024
* [InlineCost] Remove skew when calculating call costs  [James Molloy, 2016-11-14; 9 files changed, -11/+38]
  When calculating the cost of a call instruction we were applying a heuristic
  penalty as well as the cost of the instruction itself. However, when
  calculating the benefit from inlining we weren't discounting the equivalent
  penalty for the call instruction that would be removed! This caused skew in
  the calculation and meant we wouldn't inline in the following, trivial case:

    int g() {
      h();
    }
    int f() {
      g();
    }

  llvm-svn: 286814
* [OptDiag] Remove non-printable chars from function name  [Adam Nemet, 2016-11-10; 1 file changed, -1/+1]
  r283656 did this in the remark arguments. We also need to do it in the main
  function attribute, as that is written to YAML as well.
  llvm-svn: 286482
* [OptDiag, opt-viewer] Save callee's location and display as link  [Adam Nemet, 2016-11-07; 2 files changed, -0/+6]
  With this we get a new field in the YAML record if the value being streamed
  out has a debug location. For examples, please see the changes to the tests.

  This is then used in opt-viewer to display a link for the callee function in
  the inlining remarks.
  Differential Revision: https://reviews.llvm.org/D26366
  llvm-svn: 286169
* [llvm] Remove redundant --check-prefix=CHECK from tests  [Mandeep Singh Grang, 2016-10-24; 1 file changed, -1/+1]
  Reviewers: MatzeB, mcrosier, rengolin
  Differential Revision: https://reviews.llvm.org/D25894
  llvm-svn: 285003
* [OptRemarks] Remove non-printable chars from function name  [Adam Nemet, 2016-10-08; 1 file changed, -2/+2]
  Value names may be prefixed with a binary '1' to indicate that the backend
  should not modify the symbols due to any platform naming convention. This
  should not show up in the YAML opt record file because it breaks the YAML
  parser.
  llvm-svn: 283656
* Don't filter diagnostics written as YAML to the output file  [Hal Finkel, 2016-10-04; 1 file changed, -0/+2]
  The purpose of the YAML diagnostic output file is to collect information on
  optimizations performed, or not performed, for later processing by tools that
  help users (and compiler developers) understand how code was optimized. As
  such, the diagnostics that appear in the file should not be coupled to what a
  user might want to see summarized for them as the compiler runs, and in fact,
  because the user likely does not know what optimization diagnostics their
  tools might want to use, the user cannot provide a useful filter regardless.
  As such, we shouldn't filter the diagnostics going to the output file.
  Differential Revision: https://reviews.llvm.org/D25224
  llvm-svn: 283236
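A sketch of how a test can exercise this distinction (assumed flag usage: -pass-remarks filters what is printed as ordinary diagnostics, while -pass-remarks-output names the YAML file, which after this change receives remarks regardless of that filter):

    ; illustrative sketch, not from the actual test file
    ; RUN: opt < %s -inline -pass-remarks=inline -pass-remarks-output=%t -S -o /dev/null
    ; RUN: FileCheck %s -input-file=%t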
* Serialize remark argument as a mapping to get proper quotation for the value.  [Adam Nemet, 2016-10-04; 2 files changed, -11/+11]
  llvm-svn: 283231
* [Inliner] Port all opt remarks to new streaming API  [Adam Nemet, 2016-09-27; 1 file changed, -0/+83]
  llvm-svn: 282559
* Pass -S to opt in this test to avoid printing binary on mismatch  [Adam Nemet, 2016-09-27; 1 file changed, -1/+1]
  The purpose of the test is to verify diagnostics.
  llvm-svn: 282558