path: root/llvm/lib/Analysis/IPA/InlineCost.cpp
Commit history (newest first). Each entry lists the commit subject, then the author, date, and diffstat (files changed, lines removed/added).
* [PM/AA] Remove the last relics of the separate IPA library from LLVM, folding the code into the main Analysis library
  Chandler Carruth, 2015-08-18 (1 file changed, -1451/+0)

  There already wasn't much of a distinction between Analysis and IPA. A number of the passes in Analysis are actually IPA passes, and there doesn't seem to be any advantage to separating them. Moreover, it makes it hard to have interactions between analyses that are both local and interprocedural. In trying to make the Alias Analysis infrastructure work with the new pass manager, it becomes particularly awkward to navigate this split.

  I've tried to find all the places where we referenced this, but I may have missed some. I have also adjusted the C API to continue to be equivalently functional after this change.

  Differential Revision: http://reviews.llvm.org/D12075
  llvm-svn: 245318
* New EH representation for MSVC compatibility
  David Majnemer, 2015-07-31 (1 file changed, -0/+14)

  This introduces new instructions necessary to implement MSVC-compatible exception handling support. Most of the middle-end and all of the back-end have yet to be audited or updated to take them into account.

  Differential Revision: http://reviews.llvm.org/D11097
  llvm-svn: 243766
* Rename hasCompatibleFunctionAttributes -> areInlineCompatible, based on suggestions
  Eric Christopher, 2015-07-29 (1 file changed, -1/+1)

  Currently the function is only used for inlining purposes, and the new name is more descriptive of that use.

  llvm-svn: 243578
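  A minimal sketch of how a caller-side legality check might use the renamed hook; the wrapper function and its name are assumptions for illustration, not code from InlineCost.cpp.

    // C++ sketch (assumed wrapper, not the real call site).
    #include "llvm/Analysis/TargetTransformInfo.h"
    #include "llvm/IR/Function.h"
    using namespace llvm;

    static bool mayInline(const TargetTransformInfo &TTI,
                          const Function &Caller, const Function &Callee) {
      // areInlineCompatible() answers the narrower question the new name
      // implies: may Callee be inlined into Caller on this target?
      return TTI.areInlineCompatible(&Caller, &Callee);
    }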
* Revert the new EH instructions
  David Majnemer, 2015-07-10 (1 file changed, -14/+0)

  This reverts commits r241888-r241891; I didn't mean to commit them.

  llvm-svn: 241893
* New EH representation for MSVC compatibility
  David Majnemer, 2015-07-10 (1 file changed, -0/+14)

  Summary: This introduces new instructions necessary to implement MSVC-compatible exception handling support. Most of the middle-end and all of the back-end have yet to be audited or updated to take them into account.

  Reviewers: rnk, JosephTremoulet, reames, nlewycky, rjmccall
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D11041
  llvm-svn: 241888
* Rename llvm.frameescape and llvm.framerecover to localescape and localrecover
  Reid Kleckner, 2015-07-07 (1 file changed, -3/+3)

  Summary: Initially, these intrinsics seemed like part of a family of "frame"-related intrinsics, but now I think that's more confusing than helpful. Initially, the LangRef specified that this would create a new kind of allocation that would be allocated at a fixed offset from the frame pointer (EBP/RBP). We ended up dropping that design and leaving the stack frame layout alone.

  These intrinsics are really about sharing local stack allocations, not frame pointers. I intend to go further and add an `llvm.localaddress()` intrinsic that returns whatever register (EBP, ESI, ESP, RBX) is being used to address locals, which should not be confused with the frame pointer.

  Naming suggestions at this point are welcome; I'm happy to re-run sed.

  Reviewers: majnemer, nicholas
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D11011
  llvm-svn: 241633
* Add a routine to TargetTransformInfo that will allow targets to look at the attributes on a function to determine whether or not to allow inlining
  Eric Christopher, 2015-07-02 (1 file changed, -4/+5)

  llvm-svn: 241220
* Teach InlineCost to account for a null check which can be folded away
  Philip Reames, 2015-06-26 (1 file changed, -17/+56)

  If we have a caller that knows a particular argument can never be null, we can exploit this fact while simplifying values in the inline cost analysis. This has the effect of reducing the cost for inlining when a null check is present in the callee, but the value is known non-null in the caller. In particular, any dependent control flow can be discounted from the cost estimate.

  Note that we use the parameter attributes at the call site to memoize the analysis within the caller's code. The setting of this attribute is done in InstCombine; the inline cost analysis just consumes it. This is intentional and important, because we want the inline cost analysis results to be easily cacheable themselves. We're not currently doing so, but initial results on LTO indicate this will quickly become important.

  Differential Revision: http://reviews.llvm.org/D9129
  llvm-svn: 240828
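  A hedged, source-level illustration of the situation this commit describes (not code from the patch): the callee guards against null, the caller passes a pointer that is provably non-null, so the guarded control flow can be discounted when estimating the cost of inlining.

    // C++ example; all names are invented for illustration.
    static int sumOrZero(const int *P, int N) {
      if (!P)                      // With a non-null argument this test folds away,
        return 0;                  // and the code it guards is free for inlining.
      int S = 0;
      for (int I = 0; I < N; ++I)
        S += P[I];
      return S;
    }

    int caller() {
      int Buf[4] = {1, 2, 3, 4};
      // &Buf[0] is trivially non-null; InstCombine can add the nonnull
      // attribute at the call site, which the cost analysis then consumes.
      return sumOrZero(Buf, 4);
    }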
* [inliner] Fix the early-exit of the inline cost analysis to correctly model the dense vector instruction bonuses
  Chandler Carruth, 2015-05-27 (1 file changed, -25/+32)

  Previously, this code really didn't effectively compute the density of inlined vector instructions and apply the intended inliner bonus. It would try to compute it repeatedly while analyzing the function and didn't handle the case where future vector instructions would tip the scales back towards the bonus.

  Instead, speculatively apply all possible bonuses to the threshold initially. Once we *know* that a certain bonus can not be applied, subtract it. This should delay early bailout enough to get much more consistent results without actually causing us to analyze huge swaths of code. I expect some (hopefully mild) compile time hit here, and some swings in performance, but this was definitely the intended behavior of these bonuses.

  This also dramatically simplifies the computation of the bonuses to not interact with each other in confusing ways. The previous code didn't do a good job of this, and the values for bonuses may be surprising but are at least now clearly written in the code.

  Finally, fix code to be in line with comments and use zero as the bailout condition.

  Patch by Easwaran Raman, with some comment tweaks by me to try and further clarify what is going on with this code.

  http://reviews.llvm.org/D8267
  llvm-svn: 238276
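  A rough sketch of the bookkeeping described above, with invented names; the real CallAnalyzer is more involved, so treat this purely as an illustration of "apply every bonus up front, subtract it once it provably cannot apply".

    // C++ sketch; not the actual CallAnalyzer data structures.
    struct InlineBudget {
      int Threshold;
      int VectorBonus;
      bool SawVectorInst = false;

      InlineBudget(int BaseThreshold, int Bonus)
          : Threshold(BaseThreshold + Bonus), VectorBonus(Bonus) {}

      void noteVectorInstruction() { SawVectorInst = true; }

      // Called once the callee has been fully analyzed (or at bailout time).
      void finalize() {
        if (!SawVectorInst)
          Threshold -= VectorBonus;  // The bonus provably does not apply.
      }
    };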
* [Inliner] Don't inline functions with frameescape calls
  Reid Kleckner, 2015-04-14 (1 file changed, -8/+19)

  Inlining such intrinsics is very difficult, since you need to simultaneously transform many calls to llvm.framerecover and potentially duplicate the functions containing them. Normally this intrinsic isn't added until EH preparation, which is part of the backend pass pipeline after inlining. However, if it were to get fed through the inliner, this change will ensure that it doesn't break the code.

  llvm-svn: 234937
* [inliner] Don't inline a function if it doesn't have exactly the same target-cpu and target-features attribute strings as the caller
  Akira Hatanaka, 2015-04-13 (1 file changed, -4/+6)

  Differential Revision: http://reviews.llvm.org/D8984
  llvm-svn: 234773
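  A hedged sketch of the compatibility test this commit describes; the helper is invented, but the attribute names and the string comparison follow the commit subject.

    // C++ sketch; illustrative only.
    #include "llvm/IR/Function.h"
    using namespace llvm;

    static bool sameSubtargetAttributes(const Function &Caller,
                                        const Function &Callee) {
      auto Attr = [](const Function &F, StringRef Kind) {
        return F.getFnAttribute(Kind).getValueAsString();
      };
      return Attr(Caller, "target-cpu") == Attr(Callee, "target-cpu") &&
             Attr(Caller, "target-features") == Attr(Callee, "target-features");
    }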
* Correctly estimate SROA savings for store operands in inline cost analysis
  Wei Mi, 2015-03-20 (1 file changed, -2/+2)

  When estimating SROA savings, we want to see if an address is derived off an alloca in the caller. For store instructions, operand 1 is the address operand, but the current code uses operand 0. Use getPointerOperand for loads and stores to fix this.

  Patch by Easwaran Raman.
  http://reviews.llvm.org/D8425
  llvm-svn: 232827
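  A short illustration of the bug class fixed here: for a StoreInst the pointer is operand 1 (operand 0 is the stored value), so indexing operands directly is easy to get wrong, while getPointerOperand() is correct for both loads and stores. The helper below is an assumption for illustration, not the patched code.

    // C++ sketch.
    #include "llvm/IR/Instructions.h"
    using namespace llvm;

    static Value *getAddressOperand(Instruction &I) {
      if (auto *LI = dyn_cast<LoadInst>(&I))
        return LI->getPointerOperand();  // equivalent to getOperand(0)
      if (auto *SI = dyn_cast<StoreInst>(&I))
        return SI->getPointerOperand();  // equivalent to getOperand(1), not 0
      return nullptr;
    }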
* DataLayout is mandatory, update the API to reflect it with references
  Mehdi Amini, 2015-03-10 (1 file changed, -27/+26)

  Summary: Now that the DataLayout is a mandatory part of the module, let's start cleaning the codebase. This patch is a first attempt at doing that.

  This patch is not exactly NFC; for instance, some places were passing a nullptr instead of the DataLayout, possibly just because there was a default value on the DataLayout argument to many functions in the API. Even though it is not purely NFC, there is no change in the validation.

  I turned as many pointers to DataLayout into references as possible; this helped in figuring out all the places where a nullptr could come up.

  I initially had a local version of this patch broken into over 30 independent commits, but some later commits were cleaning the API and touching parts of the code modified in the previous commits, so it seemed cleaner without the intermediate state.

  Test Plan:
  Reviewers: echristo
  Subscribers: llvm-commits
  From: Mehdi Amini <mehdi.amini@apple.com>
  llvm-svn: 231740
* Make DataLayout Non-Optional in the Module
  Mehdi Amini, 2015-03-04 (1 file changed, -5/+5)

  Summary: DataLayout keeps the string used for its creation. As a side effect it is no longer needed in the Module. This is "almost" NFC: the string is no longer canonicalized, so you can't rely on two "equal" DataLayouts having the same string returned by getStringRepresentation().

  Get rid of DataLayoutPass: the DataLayout is in the Module. The DataLayout is "per-module"; let's enforce this by not duplicating it more than necessary. One more step toward non-optionality of the DataLayout in the module.

  Make DataLayout Non-Optional in the Module: Module->getDataLayout() will never return nullptr anymore.

  Reviewers: echristo
  Subscribers: resistor, llvm-commits, jholewinski
  Differential Revision: http://reviews.llvm.org/D7992
  From: Mehdi Amini <mehdi.amini@apple.com>
  llvm-svn: 231270
* Remove getDataLayout() from Instruction/GlobalValue/BasicBlock/Function
  Mehdi Amini, 2015-03-03 (1 file changed, -3/+3)

  Summary: This does not conceptually belong here. Instead, provide a shortcut getModule() that provides access to the DataLayout.

  Reviewers: chandlerc, echristo
  Reviewed By: echristo
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D8027
  From: Mehdi Amini <mehdi.amini@apple.com>
  llvm-svn: 231147
* Analysis: Canonicalize access to function attributes, NFC
  Duncan P. N. Exon Smith, 2015-02-14 (1 file changed, -5/+2)

  Canonicalize access to function attributes to use the simpler API.

    getAttributes().getAttribute(AttributeSet::FunctionIndex, Kind)
      => getFnAttribute(Kind)

    getAttributes().hasAttribute(AttributeSet::FunctionIndex, Kind)
      => hasFnAttribute(Kind)

  llvm-svn: 229192
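  The same before/after, written out as a hedged example rather than a quote of the actual call sites in InlineCost.cpp; the attribute kind is chosen arbitrarily for illustration.

    // C++ sketch.
    #include "llvm/IR/Function.h"
    using namespace llvm;

    static bool callerOptimizesForSize(const Function &Caller) {
      // Old, verbose form:
      //   Caller.getAttributes().hasAttribute(AttributeSet::FunctionIndex,
      //                                       Attribute::OptimizeForSize);
      // Canonical, simpler form:
      return Caller.hasFnAttribute(Attribute::OptimizeForSize);
    }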
* Fix a crash in the assumption cache when inlining indirect function calls
  Bjorn Steinbrink, 2015-02-12 (1 file changed, -6/+6)

  Summary: Instances of the AssumptionCache are per function, so we can't re-use the same AssumptionCache instance when recursing in the CallAnalyzer to analyze a different function. Instead we have to pass the AssumptionCacheTracker to the CallAnalyzer so it can get the right AssumptionCache on demand.

  Reviewers: hfinkel
  Subscribers: llvm-commits, hans
  Differential Revision: http://reviews.llvm.org/D7533
  llvm-svn: 228957
* [InstSimplify] Add SimplifyFPBinOp function
  Michael Zolotukhin, 2015-02-06 (1 file changed, -1/+7)

  It is a variation of SimplifyBinOp, but it takes into account FastMathFlags. It is needed in the inliner and loop-unroller to accurately predict the transformation's outcome (previously we dropped the flags and were too conservative in some cases).

  Example:

    float foo(float *a, float b) {
      float r;
      if (a[1] * b)
        r = /* a lot of expensive computations */;
      else
        r = 1;
      return r;
    }

    float boo(float *a) {
      return foo(a, 0.0);
    }

  Without this patch, we don't inline 'foo' into 'boo'.

  llvm-svn: 228432
* Value soft float calls as more expensive in the inliner
  Cameron Esfahani, 2015-02-05 (1 file changed, -0/+19)

  Summary: When evaluating floating point instructions in the inliner, ask the TTI whether it is an expensive operation. By default, it's not an expensive operation. This keeps the default behavior the same as before. The ARM TTI has been updated to return back TCC_Expensive for targets which don't have hardware floating point.

  Reviewers: chandlerc, echristo
  Reviewed By: echristo
  Subscribers: t.p.northover, aemerson, llvm-commits
  Differential Revision: http://reviews.llvm.org/D6936
  llvm-svn: 228263
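  A hedged sketch of the idea: let the target report whether a floating-point operation is expensive (for example, a soft-float ARM target) and charge the inline cost accordingly. The hook name getFPOpCost and the penalty value are assumptions for illustration, not taken directly from the patch.

    // C++ sketch; treat getFPOpCost and the constant below as assumptions.
    #include "llvm/Analysis/TargetTransformInfo.h"
    #include "llvm/IR/Instruction.h"
    using namespace llvm;

    static unsigned fpInstructionCost(const TargetTransformInfo &TTI,
                                      const Instruction &I) {
      if (TTI.getFPOpCost(I.getType()) == TargetTransformInfo::TCC_Expensive)
        return 40;                             // illustrative penalty only
      return TargetTransformInfo::TCC_Basic;   // cheap by default
    }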
* [multiversion] Thread a function argument through all the callers of the getTTI method used to get an actual TTI object
  Chandler Carruth, 2015-02-01 (1 file changed, -2/+2)

  No functionality changed. This just threads the argument and ensures code like the inliner can correctly look up the callee's TTI rather than using a fixed one. The next change will use this to implement per-function subtarget usage by TTI. The changes after that should eliminate the need for FTTI as that will have become the default.

  llvm-svn: 227730
* [PM] Change the core design of the TTI analysis to use a polymorphic type erased interface and a single analysis pass rather than an extremely complex analysis group
  Chandler Carruth, 2015-01-31 (1 file changed, -3/+3)

  The end result is that the TTI analysis can contain a type erased implementation that supports the polymorphic TTI interface. We can build one from a target-specific implementation or from a dummy one in the IR.

  I've also factored all of the code into "mix-in"-able base classes, including CRTP base classes to facilitate calling back up to the most specialized form when delegating horizontally across the surface. These aren't as clean as I would like and I'm planning to work on cleaning some of this up, but I wanted to start by putting it into the right form.

  There are a number of reasons for this change, and this particular design. The first and foremost reason is that an analysis group is complete overkill, and the chaining delegation strategy was so opaque, confusing, and high overhead that TTI was suffering greatly for it. Several of the TTI functions had failed to be implemented in all places because of the chaining-based delegation making there be no checking of this. A few other functions were implemented with incorrect delegation. The message to me was very clear working on this -- the delegation and analysis group structure was too confusing to be useful here.

  The other reason of course is that this is a *much* more natural fit for the new pass manager. This will lay the ground work for a type-erased per-function info object that can look up the correct subtarget and even cache it. Yet another benefit is that this will significantly simplify the interaction of the pass managers and the TargetMachine. See the future work below.

  The downside of this change is that it is very, very verbose. I'm going to work to improve that, but it is somewhat an implementation necessity in C++ to do type erasure. =/ I discussed this design really extensively with Eric and Hal prior to going down this path, and afterward showed them the result. No one was really thrilled with it, but there doesn't seem to be a substantially better alternative. Using a base class and virtual method dispatch would make the code much shorter, but as discussed in the update to the programmer's manual and elsewhere, a polymorphic interface feels like the more principled approach even if this is perhaps the least compelling example of it. ;]

  Ultimately, there is still a lot more to be done here, but this was the huge chunk that I couldn't really split things out of because this was the interface change to TTI. I've tried to minimize all the other parts of this. The follow up work should include at least:

  1) Improving the TargetMachine interface by having it directly return a TTI object. Because we have a non-pass object with value semantics and an internal type erasure mechanism, we can narrow the interface of the TargetMachine to *just* do what we need: build and return a TTI object that we can then insert into the pass pipeline.
  2) Make the TTI object be fully specialized for a particular function. This will include splitting off a minimal form of it which is sufficient for the inliner and the old pass manager.
  3) Add a new pass manager analysis which produces TTI objects from the target machine for each function. This may actually be done as part of #2 in order to use the new analysis to implement #2.
  4) Work on narrowing the API between TTI and the targets so that it is easier to understand and less verbose to type erase.
  5) Work on narrowing the API between TTI and its clients so that it is easier to understand and less verbose to forward.
  6) Try to improve the CRTP-based delegation. I feel like this code is just a bit messy and exacerbating the complexity of implementing the TTI in each target.

  Many thanks to Eric and Hal for their help here. I ended up blocked on this somewhat more abruptly than I expected, and so I appreciate getting it sorted out very quickly.

  Differential Revision: http://reviews.llvm.org/D7293
  llvm-svn: 227669
* [cleanup] Re-sort all the #include lines in LLVM using utils/sort_includes.py
  Chandler Carruth, 2015-01-14 (1 file changed, -1/+1)

  I clearly haven't done this in a while, so more changed than usual. This even uncovered a missing include from the InstrProf library that I've added. No functionality changed here, just mechanical cleanup of the include order.

  llvm-svn: 225974
* [PM] Split the AssumptionTracker immutable pass into two separate APIs: a cache of assumptions for a single function, and an immutable pass that manages those caches
  Chandler Carruth, 2015-01-04 (1 file changed, -10/+11)

  The motivation for this change is twofold. Immutable analyses are really hacks around the current pass manager design and don't exist in the new design. This is usually OK, but it requires that the core logic of an immutable pass be reasonably partitioned off from the pass logic. This change does precisely that. As a consequence it also paves the way for the *many* utility functions that deal in the assumptions to live in both pass manager worlds by creating a separate non-pass object with its own independent API that they all rely on. Now, the only bits of the system that deal with the actual pass mechanics are those that actually need to deal with the pass mechanics.

  Once this separation is made, several simplifications become pretty obvious in the assumption cache itself. Rather than using a set and callback value handles, it can just be a vector of weak value handles. The callers can easily skip the handles that are null, and eventually we can wrap all of this up behind a filter iterator.

  For now, this adds boilerplate to the various passes, but this kind of boilerplate will end up making it possible to port these passes to the new pass manager, and so it will end up factored away pretty reasonably.

  llvm-svn: 225131
* Update SetVector to rely on the underlying set's insert to return a pair<iterator, bool>
  David Blaikie, 2014-11-19 (1 file changed, -1/+1)

  This is to be consistent with StringSet and ultimately with the standard library's associative container insert function. This led to updating SmallSet::insert to return pair<iterator, bool>, and then to updating SmallPtrSet::insert to return pair<iterator, bool>, and then to updating all the existing users of those functions...

  llvm-svn: 222334
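  A hedged example of the updated insert() contract (the helper is invented): callers can now test the bool half of the returned pair to learn whether the element was newly inserted, matching std::set.

    // C++ sketch.
    #include "llvm/ADT/SmallPtrSet.h"
    using namespace llvm;

    static bool visitOnce(SmallPtrSet<const void *, 16> &Visited, const void *P) {
      auto Result = Visited.insert(P);  // pair<iterator, bool>
      return Result.second;             // true only the first time P is seen
    }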
* Add functions for finding ephemeral values
  Hal Finkel, 2014-09-07 (1 file changed, -7/+26)

  This adds a set of utility functions for collecting 'ephemeral' values. These are LLVM IR values that are used only by @llvm.assume intrinsics (directly or indirectly), and thus will be removed prior to code generation, implying that they should be considered free for certain purposes (like inlining). The inliner's cost analysis, and a few other passes, have been updated to account for ephemeral values using the provided functionality.

  This functionality is important for the usability of @llvm.assume, because it limits the "non-local" side-effects of adding llvm.assume on inlining, loop unrolling, etc. (these are hints, and do not generate code, so they should not directly contribute to estimates of execution cost).

  llvm-svn: 217335
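  A hedged, source-level illustration of an ephemeral value (not from the patch): Cond exists only to feed the assumption, produces no code of its own, and so should not be charged to the callee by the cost analysis.

    // C++ sketch using Clang's __builtin_assume, which lowers to @llvm.assume.
    static int clampHint(int I, int N) {
      bool Cond = (I >= 0) && (I < N);  // used only by the assume below: ephemeral
      __builtin_assume(Cond);           // a hint; generates no code
      return I;
    }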
* Suppress inlining when the block address is taken
  Gerolf Hoflehner, 2014-07-01 (1 file changed, -6/+13)

  Inlining functions with block addresses can cause many problems and requires a rich infrastructure to support, including escape analysis. At this point the safest approach to address these problems is by blocking inlining from happening.

  Background: There have been reports of Ruby segmentation faults triggered by inlining functions with block addresses like

    // Ruby code snippet
    vm_exec_core() {
      finish_insn_seq_0 = &&INSN_LABEL_finish;
      INSN_LABEL_finish:
        ;
    }

  This kind of scenario can also happen when LLVM picks a subset of blocks for inlining, which is the case with the actual code in the Ruby environment. LLVM suppresses inlining for such functions when there is an indirect branch. The attached patch does so even when there is no indirect branch. Note that user code like the above would not make much sense: using the global for jumping across function boundaries would be illegal.

  Why was there a segfault: In the snippet above the block with the label is recognized as dead, so it is eliminated. Instead of a block address the cloner stores a constant (sic!) into the global, resulting in the segfault (when the global is used in a goto).

  Why had it worked in the past then: By luck. In older versions vm_exec_core was also inlined, but the label address used was the block label address in vm_exec_core. So the global jump ended up in the original function rather than in the caller, which accidentally happened to work.

  Test case ./tools/clang/test/CodeGen/indirect-goto.c will fail as a result of this commit.

  rdar://17245966
  llvm-svn: 212077
* Check the alwaysinline attribute on the call as well as on the caller
  Peter Collingbourne, 2014-05-19 (1 file changed, -1/+1)

  Differential Revision: http://reviews.llvm.org/D3815
  llvm-svn: 209150
* [inliner] Significantly improve the compile time in cases like PR19499 by avoiding inlining massive switches merely because they have no instructions in them
  Chandler Carruth, 2014-04-28 (1 file changed, -3/+23)

  These switches still show up where we fail to form lookup tables, and in those cases they are actually going to cause a very significant code size hit anyway, so inlining them is not the right call. The right way to fix any performance regressions stemming from this is to enhance the switch-to-lookup-table logic to fire in more places.

  This makes PR19499 about 5x less bad. It uncovers a second compile time problem in that test case that is unrelated (surprisingly!).

  llvm-svn: 207403
* [C++] Use 'nullptr'
  Craig Topper, 2014-04-24 (1 file changed, -2/+2)

  llvm-svn: 207083
* [Modules] Fix potential ODR violations by sinking the DEBUG_TYPE definition below all the header #include lines, lib/Analysis/... edition
  Chandler Carruth, 2014-04-22 (1 file changed, -1/+2)

  This one has a bit extra as there were *other* #define's before #include lines in addition to DEBUG_TYPE. I've sunk all of them as a block.

  llvm-svn: 206843
* remove some dead code
  Nuno Lopes, 2014-04-17 (1 file changed, -18/+0)

    lib/Analysis/IPA/InlineCost.cpp         | 18 ------------------
    lib/Analysis/RegionPass.cpp             |  1 -
    lib/Analysis/TypeBasedAliasAnalysis.cpp |  1 -
    lib/Transforms/Scalar/LoopUnswitch.cpp  | 21 ---------------------
    lib/Transforms/Utils/LCSSA.cpp          |  2 --
    lib/Transforms/Utils/LoopSimplify.cpp   |  6 ------
    utils/TableGen/AsmWriterEmitter.cpp     | 13 -------------
    utils/TableGen/DFAPacketizerEmitter.cpp |  7 -------
    utils/TableGen/IntrinsicEmitter.cpp     |  2 --
    9 files changed, 71 deletions(-)

  llvm-svn: 206506
* Reverse 206485
  Gerolf Hoflehner, 2014-04-17 (1 file changed, -8/+2)

  After some discussions, the preferred semantics of the always_inline attribute is: inline always when the compiler can determine that it is safe to do so.

  llvm-svn: 206487
* Inline a function when the always_inline attribute is set even when it contains an indirect branch
  Gerolf Hoflehner, 2014-04-17 (1 file changed, -2/+8)

  The attribute overrules correctness concerns like the escape of a local block address. This is for rdar://16501761.

  llvm-svn: 206429
* Handle VLAs during inline cost computation if they'll be turned into a constant-size alloca by inlining
  Eric Christopher, 2014-04-07 (1 file changed, -1/+10)

  Ran over the testsuite; no results out of the noise. Fixes the testcase in PR19115.

  llvm-svn: 205710
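  A hedged illustration in C++ (using the Clang/GCC variable-length-array extension) of the case this commit cares about: once the callee is inlined with a constant argument, the dynamic alloca becomes an ordinary fixed-size stack allocation, so the cost model need not treat it as a deal-breaker. All names are invented.

    // Illustrative only.
    static int sumFirst(int N) {
      int Buf[N];                  // VLA: a dynamically sized alloca in the callee
      for (int I = 0; I < N; ++I)
        Buf[I] = I;
      int S = 0;
      for (int I = 0; I < N; ++I)
        S += Buf[I];
      return S;
    }

    int caller() { return sumFirst(16); }  // constant-sized once inlined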
* Consistent use of the noduplicate attribute
  Eli Bendersky, 2014-03-17 (1 file changed, -1/+1)

  The "noduplicate" attribute of call instructions is sometimes queried directly and sometimes through the cannotDuplicate() predicate. This patch streamlines all queries to use the cannotDuplicate() predicate. It also adds this predicate to InvokeInst, to mirror what CallInst has.

  llvm-svn: 204049
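  A hedged sketch of the streamlined query (the helper is invented): go through the cannotDuplicate() predicate rather than inspecting the attribute directly, and handle invokes the same way as calls.

    // C++ sketch.
    #include "llvm/IR/Instructions.h"
    using namespace llvm;

    static bool blocksDuplication(const Instruction &I) {
      if (auto *CI = dyn_cast<CallInst>(&I))
        return CI->cannotDuplicate();
      if (auto *II = dyn_cast<InvokeInst>(&I))
        return II->cannotDuplicate();
      return false;
    }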
* [C++11] Add range based accessors for the Use-Def chain of a Value
  Chandler Carruth, 2014-03-09 (1 file changed, -3/+2)

  This requires a number of steps.
  1) Move value_use_iterator into the Value class as an implementation detail.
  2) Change it to actually be a *Use* iterator rather than a *User* iterator.
  3) Add an adaptor which is a User iterator that always looks through the Use to the User.
  4) Wrap these in Value::use_iterator and Value::user_iterator typedefs.
  5) Add the range adaptors as Value::uses() and Value::users().
  6) Update *all* of the callers to correctly distinguish between whether they wanted a use_iterator (and to explicitly dig out the User when needed), or a user_iterator which makes the Use itself totally opaque.

  Because #6 requires churning essentially everything that walked the Use-Def chains, I went ahead and added all of the range adaptors and switched them to range-based loops where appropriate. Also because the renaming requires at least churning every line of code, it didn't make any sense to split these up into multiple commits -- all of which would touch all of the same lines of code.

  The result is still not quite optimal. The Value::use_iterator is a nice regular iterator, but Value::user_iterator is an iterator over User*s rather than over the User objects themselves. As a consequence, it fits a bit awkwardly into the range-based world and it has the weird extra-dereferencing 'operator->' that so many of our iterators have. I think this could be fixed by providing something which transforms a range of T&s into a range of T*s, but that *can* be separated into another patch, and it isn't yet 100% clear whether this is the right move.

  However, this change gets us most of the benefit and cleans up a substantial amount of code around Use and User. =]

  llvm-svn: 203364
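  A hedged example of the new accessors (the helpers are invented): users() iterates the User*s of a value, while uses() iterates the Use edges themselves, from which both the User and the operand slot are reachable.

    // C++ sketch.
    #include "llvm/IR/Use.h"
    #include "llvm/IR/User.h"
    #include "llvm/IR/Value.h"
    using namespace llvm;

    static unsigned countUsers(Value &V) {
      unsigned N = 0;
      for (User *U : V.users()) {   // was: walking use_begin()/use_end()
        (void)U;
        ++N;
      }
      return N;
    }

    static bool usedAsFirstOperandAnywhere(Value &V) {
      for (Use &U : V.uses())       // the Use edge itself: user plus operand slot
        if (U.getOperandNo() == 0)
          return true;
      return false;
    }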
* [Layering] Move InstVisitor.h into the IR library as it is pretty obviously coupled to the IR
  Chandler Carruth, 2014-03-06 (1 file changed, -1/+1)

  llvm-svn: 203064
* [Modules] Move CallSite into the IR library where it belongs
  Chandler Carruth, 2014-03-04 (1 file changed, -1/+1)

  It is abstracting between a CallInst and an InvokeInst, both of which are IR concepts.

  llvm-svn: 202816
* [Modules] Move GetElementPtrTypeIterator into the IR library
  Chandler Carruth, 2014-03-04 (1 file changed, -1/+1)

  As its name might indicate, it is an iterator over the types in an instruction in the IR.... You see where this is going. Another step of modularizing the support library.

  llvm-svn: 202815
* [C++11] Replace llvm::tie with std::tie
  Benjamin Kramer, 2014-03-02 (1 file changed, -4/+4)

  The old implementation is no longer needed in C++11.

  llvm-svn: 202644
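  A hedged illustration of the replacement (the pair and variable names are invented): std::tie unpacks a std::pair into existing variables, which is what llvm::tie used to provide before C++11.

    // C++ sketch.
    #include <tuple>
    #include <utility>

    static int thresholdAfterCost(std::pair<int, int> CostAndThreshold) {
      int Cost, Threshold;
      std::tie(Cost, Threshold) = CostAndThreshold;  // was: llvm::tie(...)
      return Threshold - Cost;
    }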
* Remove unnecessary llvm:: qualification
  Eric Christopher, 2014-02-26 (1 file changed, -1/+1)

  llvm-svn: 202316
* Use DataLayout from the module when easily available
  Rafael Espindola, 2014-02-25 (1 file changed, -4/+4)

  Eventually DataLayoutPass should go away, but for now that is the only easy way to get a DataLayout in some APIs. This patch only changes the ones that have easy access to a Module.

  One interesting issue with sometimes using DataLayoutPass and sometimes fetching it from the Module is that we have to make sure they are equivalent. We can get most of the way there by always constructing the pass with a Module. In fact, the pass could be changed to point to an external DataLayout instead of owning one to make this stricter.

  Unfortunately, the C API passes a DataLayout, so it has to be up to the caller to make sure the pass and the module are in sync.

  llvm-svn: 202204
* Make DataLayout a plain object, not a pass
  Rafael Espindola, 2014-02-25 (1 file changed, -1/+2)

  Instead, have a DataLayoutPass that holds one. This will allow parts of LLVM that don't handle passes to also use DataLayout.

  llvm-svn: 202168
* Rename many DataLayout variables from TD to DL
  Rafael Espindola, 2014-02-21 (1 file changed, -20/+20)

  I am really sorry for the noise, but the current state where some parts of the code use TD (from the old name: TargetData) and other parts use DL makes it hard to write a patch that changes where those variables come from and how they are passed along.

  llvm-svn: 201827
* Rename some member variables from TD to DL
  Rafael Espindola, 2014-02-18 (1 file changed, -3/+3)

  TargetData was renamed DataLayout back in r165242.

  llvm-svn: 201581
* [inliner] Skip debug intrinsics even earlier in computing the inline cost so that they don't impact the vector bonus
  Chandler Carruth, 2014-02-01 (1 file changed, -0/+10)

  Fundamentally, counting unsimplified instructions is just *wrong*; it will continue to introduce instability as things which do not generate code bizarrely impact inlining. For example, sufficiently nested inlined functions could turn off the vector bonus with lifetime markers just like the debug intrinsics do. =/

  This is a short-term tactical fix. Long term, I think we need to remove the vector bonus entirely. That's a separate patch and discussion though.

  The patch to fix this was provided by Dario Domizioli. I've added some comments about the planned direction and used a heavily pruned form of debug info intrinsics for the test case. While this debug info doesn't work or "do" anything useful, it lets us easily test all manner of interference, and I suspect this will not be the last time we want to craft a pattern where debug info interferes with the inliner in a problematic way.

  llvm-svn: 200609
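  A hedged sketch of the kind of early filter the commit describes (the helper is invented): debug intrinsics generate no code, so drop them before they can perturb the instruction counts that feed the vector-density bonus.

    // C++ sketch.
    #include "llvm/IR/IntrinsicInst.h"
    using namespace llvm;

    static bool countsTowardInlineCost(const Instruction &I) {
      return !isa<DbgInfoIntrinsic>(&I);  // dbg.declare / dbg.value are free
    }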
* [inliner] Print out extra stats about the cost, threshold, and vector bonus in the inline cost analysis
  Chandler Carruth, 2014-01-31 (1 file changed, -0/+3)

  Split out of a patch by Dario Domizioli to commit separately.

  llvm-svn: 200586
* [inliner] Fix PR18206 by preventing inlining functions that call setjmp through an invoke instruction
  Chandler Carruth, 2013-12-13 (1 file changed, -1/+1)

  The original patch for this was written by Mark Seaborn, but I've reworked his test case into the existing returns_twice test case and implemented the fix by the prior refactoring to actually run the cost analysis over invoke instructions, and then here fixing our detection of the returns_twice attribute to work for both calls and invokes. We never noticed because we never saw an invoke. =[

  llvm-svn: 197216
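  A hedged sketch of the kind of check being fixed (the helper is an assumption, not the patched code): detect the returns_twice property for calls and invokes alike by going through CallSite, so a setjmp reached via an invoke still blocks inlining.

    // C++ sketch.
    #include "llvm/IR/CallSite.h"
    #include "llvm/IR/Function.h"
    using namespace llvm;

    static bool callsReturnsTwice(CallSite CS) {  // works for CallInst and InvokeInst
      if (const Function *F = CS.getCalledFunction())
        return F->hasFnAttribute(Attribute::ReturnsTwice);
      return false;
    }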
* [inliner] Completely change (and fix) how the inline cost analysis handles terminator instructions
  Chandler Carruth, 2013-12-13 (1 file changed, -37/+76)

  The inline cost analysis inherited some pretty rough handling of terminator insts from the original cost analysis, and then made it much, much worse by factoring all of the important analyses into a separate instruction visitor. That instruction visitor never visited the terminator.

  This works fine for things like conditional branches, but for many other things we simply computed The Wrong Value. The first example is unconditional branches, which should be free but were counted as full cost. This is most significant for conditional branches where the condition simplifies and folds during inlining. We paid a 1 instruction tax on every branch in a straight line specialized path. =[

  Oh, we also claimed that the unreachable instruction had cost.

  But it gets worse. Let's consider invoke. We never applied the call penalty. We never accounted for the cost of the arguments. Nope. Worse still, we didn't handle the *correctness* constraints of not inlining recursive invokes, or exception throwing returns_twice functions. Oops. See PR18206.

  Sadly, PR18206 requires yet another fix, but this refactoring is at least a huge step in that direction.

  llvm-svn: 197215
* [cleanup] Remove trailing whitespace before I start changing this file
  Chandler Carruth, 2013-12-12 (1 file changed, -1/+1)

  llvm-svn: 197149