path: root/llvm/lib/Transforms
Commit message (Author, Date; Files changed, Lines -/+)
* PGO branch weight: keep halving the weights until they can fit into uint32. (Manman Ren, 2014-01-27; 1 file, -12/+13)
  When folding branches to a common destination, the updated branch weights can exceed uint32 by more than a factor of 2. We should keep halving the weights until they can fit into uint32.
  llvm-svn: 200262
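  As a quick illustration, branch weights live in !prof metadata as 32-bit integers. A sketch with made-up weights, in the metadata syntax of this era:
    br i1 %cond, label %taken, label %untaken, !prof !0
    !0 = metadata !{metadata !"branch_weights", i32 3000000000, i32 1500000000}
    ; folding another branch into this one can push a combined weight past
    ; 2^32 - 1, so both weights are halved together (preserving their ratio)
    ; until they fit into uint32 again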
* [vectorize] Initial version of respecting PGO in the vectorizer: treat cold loops as if they were being optimized for size. (Chandler Carruth, 2014-01-27; 1 file, -0/+20)
  Nothing fancy here. Simple test case included. The nice thing is that we can now incrementally build on top of this to drive other heuristics. All of the infrastructure work is done to get the profile information into this layer. The remaining work necessary to make this a fully general purpose loop unroller for very hot loops is to make it a fully general purpose loop unroller.
  Things I know of but am not going to have time to benchmark and fix in the immediate future:
  1) Don't disable the entire pass when the target is lacking vector registers. This really doesn't make any sense any more.
  2) Teach the unroller at least, and the vectorizer potentially, to handle non-if-converted loops. This is trivial for the unroller but hard for the vectorizer.
  3) Compute the relative hotness of the loop and thread that down to the various places that make cost tradeoffs (very likely only the unroller makes sense here, and then only when dealing with loops that are small enough for unrolling to not completely blow out the LSD).
  I'm still dubious how useful hotness information will be. So far, my experiments show that if we can get the correct logic for determining when unrolling actually helps performance, the code size impact is completely unimportant and we can unroll in all cases. But at least we'll no longer burn code size on cold code.
  One somewhat unrelated idea that I've had forever but not had time to implement: mark all functions which are only reachable via the global constructors rigging in the module as optsize. This would also decrease the impact of any more aggressive heuristics here on code size.
  llvm-svn: 200219
* ConstantHoisting: We can't insert instructions directly in front of a PHI node. (Benjamin Kramer, 2014-01-27; 1 file, -3/+18)
  Insert before the terminating instruction of the dominating block instead.
  llvm-svn: 200218
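  A small sketch of the constraint, with hypothetical labels and a made-up constant:
    dom:
      ; the hoisted constant is materialized before the terminator of the
      ; dominating block...
      %const = bitcast i64 1311768467294899695 to i64
      br label %merge
    merge:
      ; ...because PHI nodes must be the first instructions of their block,
      ; so nothing can be inserted directly in front of them:
      %x = phi i64 [ %const, %dom ], [ %other, %pred2 ]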
* [vectorizer] Add an override for the target instruction cost and use it to stabilize a test that really is trying to test generic behavior and not a specific target's behavior. (Chandler Carruth, 2014-01-27; 1 file, -0/+11)
  llvm-svn: 200215
* [vectorizer] Simplify code to use existing helpers on the Function object and fewer pointless variables. (Chandler Carruth, 2014-01-27; 1 file, -8/+8)
  Also, add a clarifying comment and a FIXME, because the code which disables *all* vectorization if we can't use implicit floating point instructions just makes no sense at all.
  llvm-svn: 200214
* [vectorizer] Teach the loop vectorizer's unroller to only unroll by powers of two. (Chandler Carruth, 2014-01-27; 1 file, -3/+6)
  This is essentially always the correct thing given the impact on alignment, scaling factors that can be used in addressing modes, etc. Also, fix the management of the unroll vs. small loop cost to more accurately model things in this world.
  Enhance a test case to actually exercise more of the unroll machinery by using synthetic constants rather than a specific target model. Before this change, with the added flags this test would unroll 3 times instead of either 2 or 4 (the two sensible answers).
  While I don't expect this to make a huge difference, if there are lots of loops sitting right on the edge of hitting the 'small unroll' factor, they might change behavior. However, I've benchmarked moving the small loop cost up and down in many various ways and by a huge factor (2x) without seeing more than 0.2% code size growth. Small adjustments such as the series that led up here have led to about 1% improvement on some benchmarks, but it is very close to the noise floor so I mostly checked that nothing regressed. Let me know if you see bad behavior on other targets, but I don't expect this to be a sufficiently dramatic change to trigger anything.
  llvm-svn: 200213
* [vectorizer] Add some flags which are useful for conducting experiments with the unrolling behavior in the loop vectorizer. (Chandler Carruth, 2014-01-27; 1 file, -2/+38)
  No functionality changed at this point. These are a bit hacky, but talking with Hal, there doesn't seem to be a cleaner way to easily experiment with different thresholds here, and he was also interested in them, so I wanted to commit them. Suggestions for improvement are very welcome here.
  llvm-svn: 200212
* [vectorizer] Fix a trivial oversight where we always requested the number of vector registers rather than toggling between vector and scalar register counts based on VF. (Chandler Carruth, 2014-01-27; 1 file, -4/+4)
  I don't have a test case, as I spotted this by inspection, and on X86 it only makes a difference if your target is lacking SSE and thus has *no* vector registers. If someone wants to add a test case for this for ARM or somewhere else where this is more significant, that would be awesome. Also made the variable name a bit more sensible while I'm here.
  llvm-svn: 200211
* [vectorizer] Clean up the handling of unvectorized loop unrolling in the LoopVectorize pass. (Chandler Carruth, 2014-01-27; 1 file, -13/+3)
  The logic here didn't make much sense: we *only* unrolled if the unvectorized loop was a reduction loop with a single basic block *and* a small loop body. The reduction part in particular doesn't make much sense. Instead, if we just fall through to the vectorized unroll logic, it makes more sense to unroll if there is a vectorized reduction that could be hacked on by the SLP vectorizer *or* if the loop is small.
  This is mostly a cleanup, and nothing in the test suite really exercises this, but I did run benchmarks across this change and saw no really significant changes.
  llvm-svn: 200198
* [LPM] Conclude my immediate work by making the LoopVectorizer a FunctionPass. (Chandler Carruth, 2014-01-25; 1 file, -8/+37)
  With this change the loop vectorizer no longer is a loop pass and can readily depend on function analyses. In particular, with this change we no longer have to form a loop pass manager to run the loop vectorizer, which simplifies the entire pass management of LLVM.
  The next step here is to teach the loop vectorizer to leverage profile information through the profile information providing analysis passes.
  llvm-svn: 200074
* [LPM] Make LCSSA a utility with a FunctionPass that applies it to all the loops in a function, and teach LICM to work in the presence of LCSSA. (Chandler Carruth, 2014-01-25; 2 files, -168/+262)
  Previously, LCSSA was a loop pass. That made passes requiring it also be loop passes, unable to depend on function analysis passes easily. It also caused outer loops to have a different "canonical" form from inner loops during analysis. Instead, we go into LCSSA form and preserve it through the loop pass manager run.
  Note that this has the same problem as LoopSimplify that prevents enabling its verification: loop passes which run at the end of the loop pass manager and don't preserve these are valid, but the subsequent loop pass runs of outer loops that do preserve this pass trigger too much verification and fail because the inner loop no longer verifies.
  The other problem this exposed is that LICM was completely unable to handle LCSSA form. It didn't preserve it, and it would actually give up on moving instructions in many cases when they were used by an LCSSA PHI node. I've taught LICM to support detecting LCSSA-form PHI nodes and to hoist and sink around them. This may actually let LICM fire significantly more, because we put everything into LCSSA form to rotate the loop before running LICM. =/ Now LICM should handle that fine and preserve it correctly. The downside is that LICM has to require LCSSA in order to preserve it. This is just a fact of life for LCSSA. It's entirely possible we should completely remove LCSSA from the optimizer.
  The test updates are essentially accommodating LCSSA PHI nodes in the output of LICM, and the fact that we now completely sink every instruction in ashr-crash below the loop bodies prior to unrolling.
  With this change, LCSSA is computed only three times in the pass pipeline. One of them could be removed (and potentially a SCEV run and a separate LoopPassManager entirely!) if we had a LoopPass variant of InstCombine that ran InstCombine on the loop body but refused to combine away LCSSA PHI nodes. Currently, this also prevents loop unrolling from being in the same loop pass manager as rotate, LICM, and unswitch.
  There is one thing that I *really* don't like: preserving LCSSA in LICM is quite expensive. We end up having to re-run LCSSA twice for some loops after LICM runs, because LICM can undo LCSSA both in the current loop and the parent loop. I don't really see good solutions to this other than to completely move away from LCSSA and use tools like SSAUpdater instead.
  llvm-svn: 200067
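  For reference, a minimal sketch of LCSSA form with hypothetical names: every value defined in a loop and used outside it is routed through a single-entry PHI in the exit block, which is the kind of node LICM now knows how to hoist and sink around.
    loop:
      %iv = phi i32 [ 0, %entry ], [ %iv.next, %loop ]
      %iv.next = add i32 %iv, 1
      %cmp = icmp slt i32 %iv.next, %n
      br i1 %cmp, label %loop, label %exit
    exit:
      %iv.lcssa = phi i32 [ %iv.next, %loop ]   ; the LCSSA PHI node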
* Revert "Revert "Add Constant Hoisting Pass" (r200034)"Juergen Ributzka2014-01-254-2/+440
| | | | | | | This reverts commit r200058 and adds the using directive for ARMTargetTransformInfo to silence two g++ overload warnings. llvm-svn: 200062
* Revert "Add Constant Hoisting Pass" (r200034)Hans Wennborg2014-01-254-440/+2
| | | | | | | | | | | | | | | This commit caused -Woverloaded-virtual warnings. The two new TargetTransformInfo::getIntImmCost functions were only added to the superclass, and to the X86 subclass. The other targets were not updated, and the warning highlighted this by pointing out that e.g. ARMTTI::getIntImmCost was hiding the two new getIntImmCost variants. We could pacify the warning by adding "using TargetTransformInfo::getIntImmCost" to the various subclasses, or turning it off, but I suspect that it's wrong to leave the functions unimplemnted in those targets. The default implementations return TCC_Free, which I don't think is right e.g. for ARM. llvm-svn: 200058
* Add Constant Hoisting Pass (Juergen Ributzka, 2014-01-24; 4 files, -2/+440)
  Retry commit r200022 with a fix for the build bot errors. Constant expressions have (unlike instructions) module scope use lists and therefore may have users in different functions. The fix is to simply ignore these out-of-function uses.
  llvm-svn: 200034
* InstCombine: Don't try to use aggregate elements of ConstantExprs. (Benjamin Kramer, 2014-01-24; 1 file, -5/+7)
  PR18600.
  llvm-svn: 200028
* Revert "Add Constant Hoisting Pass"Juergen Ributzka2014-01-244-433/+2
| | | | | | This reverts commit r200022 to unbreak the build bots. llvm-svn: 200024
* Add Constant Hoisting Pass (Juergen Ributzka, 2014-01-24; 4 files, -2/+433)
  This pass identifies expensive constants to hoist and coalesces them to better prepare the code for SelectionDAG-based code generation. This works around the limitations of the basic-block-at-a-time approach.
  First it scans all instructions for integer constants and calculates their cost. If the constant can be folded into the instruction (the cost is TCC_Free) or the cost is just a simple operation (TCC_Basic), then we don't consider it expensive and leave it alone. This is the default behavior, and the default implementation of getIntImmCost will always return TCC_Free.
  If the cost is more than TCC_Basic, then the integer constant can't be folded into the instruction and it might be beneficial to hoist the constant. Similar constants are coalesced to reduce register pressure and materialization code.
  When a constant is hoisted, it is also hidden behind a bitcast to force it to be live-out of the basic block. Otherwise the constant would just be duplicated, and each basic block would have its own copy in the SelectionDAG. The SelectionDAG recognizes such constants as opaque and doesn't perform certain transformations on them, which would create a new expensive constant.
  This optimization is only applied to integer constants in instructions and simple (that is, not nested) constant cast expressions. For example:
    %0 = load i64* inttoptr (i64 big_constant to i64*)
  Reviewed by Eric
  llvm-svn: 200022
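  A rough before/after sketch of the hoisting itself, with a made-up expensive immediate:
    ; before: every use rematerializes the constant
    %a = add i64 %x, 4611686018427387905
    %b = add i64 %y, 4611686018427387905
    ; after: one materialization, hidden behind a bitcast so it stays
    ; live-out of the basic block and opaque to the SelectionDAG
    %const = bitcast i64 4611686018427387905 to i64
    %a = add i64 %x, %const
    %b = add i64 %y, %const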
* Fix known typos (Alp Toker, 2014-01-24; 16 files, -35/+35)
  Sweep the codebase for common typos. Includes some changes to visible function names that were misspelt.
  llvm-svn: 200018
* [LPM] Fix a logic error in LICM spotted by inspection. (Chandler Carruth, 2014-01-24; 1 file, -1/+1)
  We completely skipped promotion in LICM if the loop has a preheader or dedicated exits, but not *both*. We hoist if there is a preheader, and sink if there are dedicated exits, but either hoisting or sinking can move loop invariant code out of the loop!
  I have no idea if this has a practical consequence. If anyone has ideas for a test case, let me know.
  llvm-svn: 199966
* [cleanup] Use the type-based preservation method rather than a string literal that bakes in a pass name and forces parsing it in the pass manager. (Chandler Carruth, 2014-01-24; 1 file, -1/+2)
  llvm-svn: 199963
* Remove the tail marker when changing an argument to an alloca. (Rafael Espindola, 2014-01-23; 1 file, -0/+10)
  Argument promotion can replace an argument of a call with an alloca. This requires clearing the tail marker, as it is very likely that the callee is now using an alloca in the caller.
  This fixes pr14710.
  llvm-svn: 199909
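  The pr14710 situation reduces to roughly this sketch; the function names and the %pair type are illustrative:
    %pair = type { i32, i32 }
    declare void @foo(%pair* byval)
    ; before promotion, forwarding the byval pointer as a tail call is legal:
    define internal void @bar(%pair* byval %data) {
      tail call void @foo(%pair* byval %data)
      ret void
    }
    ; after promotion, %data is rebuilt from an alloca inside @bar, and a
    ; tail call must never take a pointer into its caller's stack frame,
    ; so the tail marker has to be cleared:
    define internal void @bar.promoted(i32 %d0, i32 %d1) {
      %data = alloca %pair
      %f0 = getelementptr %pair* %data, i32 0, i32 0
      store i32 %d0, i32* %f0
      %f1 = getelementptr %pair* %data, i32 0, i32 1
      store i32 %d1, i32* %f1
      call void @foo(%pair* byval %data)
      ret void
    }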
* [LPM] Make LoopSimplify no longer a LoopPass and instead both a utility function and a FunctionPass. (Chandler Carruth, 2014-01-23; 3 files, -386/+453)
  This has many benefits. The motivating use case was to be able to compute function analysis passes *after* running LoopSimplify (to avoid invalidating them) and then to run other passes which require LoopSimplify. Specifically, passes like unrolling and vectorization are critical to wire up to BranchProbabilityInfo and BlockFrequencyInfo so that they can be profile aware. For the LoopVectorize pass the only things in the way are LoopSimplify and LCSSA. This fixes LoopSimplify, and LCSSA is next on my list.
  There are also a bunch of other benefits of doing this:
  - It is now very feasible to make more passes *preserve* LoopSimplify because they can simply run it after changing a loop. Because subsequent passes can assume LoopSimplify is preserved, we can reduce the runs of this pass to the times when we actually mutate a loop structure.
  - The new pass manager should be able to more easily support loop passes factored in this way.
  - We can at long, long last observe that LoopSimplify is preserved across SCEV. This *halves* the number of times we run LoopSimplify!!!
  Now, getting here wasn't trivial. First off, the interfaces used by LoopSimplify are all over the map regarding how analyses are updated. We end up with weird "pass" parameters as a consequence. I'll try to clean at least some of this up later; I'll have to have it all clean for the new pass manager.
  Next up I discovered a really frustrating bug. LoopUnroll *claims* to preserve LoopSimplify. That's actually a lie. But the way the LoopPassManager ends up running the passes, it always ran LoopSimplify on the unrolled-into loop, rectifying this oversight before any verification could kick in and point out that in fact nothing was preserved. So I've added code to the unroller to *actually* simplify the surrounding loop when it succeeds at unrolling.
  The only functional change in the test suite is that we now catch a case that was previously missed because SCEV and other loop transforms see their containing loops as simplified and thus don't miss some opportunities. One test case has been converted to check that we catch this case rather than checking that we miss it but at least don't get the wrong answer.
  Note that I have #if-ed out all of the verification logic in LoopSimplify! This is a temporary workaround while extracting these bits from the LoopPassManager. Currently, there is no way to have a pass in the LoopPassManager which preserves LoopSimplify along with one which does not. The LPM will try to verify on each loop in the nest that LoopSimplify holds, but the now-Function-pass cannot distinguish what loop is being verified and so must try to verify all of them. The innermost loop is clearly no longer simplified, as there is a pass which didn't even *attempt* to preserve it. =/
  Once I get LCSSA out (and maybe LoopVectorize and some other fixes) I'll be able to re-enable this check and catch any places where we are still failing to preserve LoopSimplify. If this causes problems I can back this out and try to commit *all* of this at once, but so far this seems to work and allow much more incremental progress.
  llvm-svn: 199884
* Handle an addrspacecast case in memcpyopt (Matt Arsenault, 2014-01-22; 1 file, -1/+1)
  llvm-svn: 199836
* Loop strength reduce: fix function name. (Tim Northover, 2014-01-22; 1 file, -8/+8)
  llvm-svn: 199801
* [SROA] Fix a bug which could cause the common type finding to return inconsistent results for different orderings of alloca slices. (Chandler Carruth, 2014-01-21; 1 file, -22/+19)
  The fundamental issue is that it is just always a mistake to return early from this function. There is no effective early exit to leverage. This patch stops trying to do so and simplifies the code a bit as a consequence.
  Original diagnosis and patch by James Molloy, with some name tweaks by me, in part reflecting feedback from Duncan Smith on the mailing list.
  llvm-svn: 199771
* Fix all the remaining lost-fast-math-flags bugs I've been able to find. (Owen Anderson, 2014-01-20; 3 files, -10/+49)
  The most important of these are cases in the generic logic for combining BinaryOperators. This logic hadn't been updated to handle FastMathFlags, and it took me a while to detect it because it doesn't show up in a simple search for CreateFAdd.
  llvm-svn: 199629
* InstCombine: Modernize a bunch of cast combines. (Benjamin Kramer, 2014-01-19; 1 file, -44/+23)
  Also make them vector-aware.
  llvm-svn: 199608
* InstCombine: Hoist 3 copies of AddOne/SubOne into a header. (Benjamin Kramer, 2014-01-19; 4 files, -30/+9)
  llvm-svn: 199605
* InstCombine: Replace a hand-rolled version of isKnownToBeAPowerOfTwo with the real thing. (Benjamin Kramer, 2014-01-19; 1 file, -21/+4)
  llvm-svn: 199604
* InstCombine: Teach most integer add/sub/mul/div combines how to deal with vectors. (Benjamin Kramer, 2014-01-19; 2 files, -76/+82)
  llvm-svn: 199602
* InstCombine: Refactor fmul/fdiv combines to handle vectors. (Benjamin Kramer, 2014-01-19; 2 files, -65/+78)
  llvm-svn: 199598
* Fix a really nasty SROA bug with how we handled out-of-bounds memcpy intrinsics. (Chandler Carruth, 2014-01-19; 1 file, -12/+49)
  Reported on the list by Evan with a couple of attempts to fix, but it took a while to dig down to the root cause. There are two overlapping bugs here, both centering around the circumstance of discovering a memcpy operand which is known to be completely outside the bounds of the alloca.
  First, we need to kill the *other* side of the memcpy if it was added to this alloca. Otherwise we'll factor it into our slicing and try to rewrite it even though we know for a fact that it is dead. This is made more tricky because we can visit the sides in either order. So we have to both kill the other side and skip instructions marked as dead. The latter really should be goodness in every case, but here it is a matter of correctness.
  Second, we need to actually remove the *uses* of the alloca by the memcpy when queuing it for later deletion. Otherwise it may still be using the alloca when we go to promote it (if the rewrite re-uses the existing alloca instruction). Do this by factoring out the use-clobbering code used when nixing a Phi argument, and re-using it across the operands of a to-be-deleted instruction.
  llvm-svn: 199590
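  A minimal sketch of the triggering shape, with made-up sizes and offsets:
    define void @f(i8* %out) {
      %a = alloca [4 x i8]
      ; an offset known to start past the end of the 4-byte alloca:
      %oob = getelementptr [4 x i8]* %a, i64 0, i64 8
      ; this memcpy reads entirely outside the bounds of %a, so SROA must
      ; treat it as dead: drop its uses of the alloca (and kill the other
      ; side too if that also pointed into %a) instead of rewriting it
      call void @llvm.memcpy.p0i8.p0i8.i32(i8* %out, i8* %oob, i32 4, i32 1, i1 false)
      ret void
    }
    declare void @llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)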
* LoopVectorizer: A reduction that has multiple uses of the reduction value is not a reduction. (Arnold Schwaighofer, 2014-01-19; 1 file, -2/+11)
  Really. Under certain circumstances (the use list of an instruction has to be set up right, hence the extra pass in the test case) we would not recognize when a value in a potential reduction cycle was used multiple times by the reduction cycle.
  Fixes PR18526.
  radar://15851149
  llvm-svn: 199570
* Don't refuse to transform constexpr(call(arg, ...)) to call(constexpr(arg), ...) just because the function has multiple return values, even if their return types are the same. (Nick Lewycky, 2014-01-18; 1 file, -3/+4)
  Patch by Eduard Burtescu!
  llvm-svn: 199564
* InstCombine: Make the (fmul X, -1.0) -> (fsub -0.0, X) transform handle vectors too. (Benjamin Kramer, 2014-01-18; 1 file, -6/+4)
  PR18532.
  llvm-svn: 199553
* Fix more instances of dropped fast math flags when optimizing FADD instructions. (Owen Anderson, 2014-01-18; 3 files, -7/+33)
  All found by inspection (aka grep).
  llvm-svn: 199528
* [asan] extend asan-coverage (still experimental). (Kostya Serebryany, 2014-01-17; 1 file, -31/+48)
  - Add a mode for collecting per-block coverage (-asan-coverage=2). So far the implementation is naive (all blocks are instrumented); the performance overhead on top of asan could be as high as 30%.
  - Make sure the one-time calls to __sanitizer_cov are moved to the function bottom, which in turn required copying the original debug info into the call insn.
  Here is the performance data on SPEC 2006 (train data, comparing asan with asan-coverage={0,1,2}):
                      asan+cov0  asan+cov1  diff 0-1  asan+cov2  diff 0-2  diff 1-2
    400.perlbench,        65.60,     65.80,     1.00,     76.20,     1.16,     1.16
    401.bzip2,            65.10,     65.50,     1.01,     75.90,     1.17,     1.16
    403.gcc,               1.64,      1.69,     1.03,      2.04,     1.24,     1.21
    429.mcf,              21.90,     22.60,     1.03,     23.20,     1.06,     1.03
    445.gobmk,           166.00,    169.00,     1.02,    205.00,     1.23,     1.21
    456.hmmer,            88.30,     87.90,     1.00,     91.00,     1.03,     1.04
    458.sjeng,           210.00,    222.00,     1.06,    258.00,     1.23,     1.16
    462.libquantum,        1.73,      1.75,     1.01,      2.11,     1.22,     1.21
    464.h264ref,         147.00,    152.00,     1.03,    160.00,     1.09,     1.05
    471.omnetpp,         115.00,    116.00,     1.01,    140.00,     1.22,     1.21
    473.astar,           133.00,    131.00,     0.98,    142.00,     1.07,     1.08
    483.xalancbmk,       118.00,    120.00,     1.02,    154.00,     1.31,     1.28
    433.milc,             19.80,     20.00,     1.01,     20.10,     1.02,     1.01
    444.namd,             16.20,     16.20,     1.00,     17.60,     1.09,     1.09
    447.dealII,           41.80,     42.20,     1.01,     43.50,     1.04,     1.03
    450.soplex,            7.51,      7.82,     1.04,      8.25,     1.10,     1.05
    453.povray,           14.00,     14.40,     1.03,     15.80,     1.13,     1.10
    470.lbm,              33.30,     34.10,     1.02,     34.10,     1.02,     1.00
    482.sphinx3,          12.40,     12.30,     0.99,     13.00,     1.05,     1.06
  llvm-svn: 199488
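  A heavily hedged sketch of what the per-block instrumentation looks like at the IR level; the guard variable and the exact call sequence here are illustrative assumptions, not the precise generated code:
    @guard = private global i8 0            ; hypothetical per-block guard flag
    define void @block_sketch() {
    entry:
      %g = load i8* @guard
      %seen = icmp ne i8 %g, 0
      br i1 %seen, label %cont, label %cover
    cont:
      ret void
    cover:                                  ; kept at the function bottom
      call void @__sanitizer_cov()          ; one-time call carrying debug info
      store i8 1, i8* @guard
      br label %cont
    }
    declare void @__sanitizer_cov()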
* [opt][PassInfo] Allow opt to run passes that need target machine. (Quentin Colombet, 2014-01-16; 1 file, -5/+13)
  When registering a pass, a pass can now specify a second constructor that takes as argument a pointer to TargetMachine. The PassInfo class has been updated to reflect that possibility. If such a constructor exists, opt will use it instead of the default constructor when instantiating the pass.
  Since such IR passes are supposed to be rare, no specific support has been added to this commit to allow an easy registration of such a pass. In other words, for such a pass, the initialization function has to be hand-written (see CodeGenPrepare for instance).
  Now, codegenprepare can be tested using opt:
    opt -codegenprepare -mtriple=mytriple input.ll
  llvm-svn: 199430
* Fix two cases where we could lose fast math flags when optimizing FADD expressions. (Owen Anderson, 2014-01-16; 1 file, -4/+10)
  llvm-svn: 199427
* Fix an instance where we would drop fast math flags when performing an fdiv to reciprocal multiply transformation. (Owen Anderson, 2014-01-16; 1 file, -1/+3)
  llvm-svn: 199425
* Fix a bug in InstCombine where we failed to preserve fast math flags when optimizing an FMUL expression. (Owen Anderson, 2014-01-16; 1 file, -2/+5)
  llvm-svn: 199424
* Teach InstCombine that (fmul X, -1.0) can be simplified to (fneg X), which LLVM expresses as (fsub -0.0, X). (Owen Anderson, 2014-01-16; 1 file, -0/+10)
  llvm-svn: 199420
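  In IR terms (using float as the example type), the combine rewrites:
    %neg = fmul float %x, -1.000000e+00
    ; into the canonical negation form:
    %neg = fsub float -0.000000e+00, %x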
* [asan] Remove -fsanitize-address-zero-base-shadow command line flag from clang, and disable zero-base shadow support on all platforms where it is not the default behavior. (Evgeniy Stepanov, 2014-01-16; 1 file, -22/+14)
  - It is completely unused, as far as we know.
  - It is ABI-incompatible with non-zero-base shadow, which means all objects in a process must be built with the same setting. Failing to do so results in a segmentation fault at runtime.
  - It introduces a backward dependency of compiler-rt on user code, which is uncommon and complicates testing.
  This is the LLVM part of a larger change.
  llvm-svn: 199371
* Switch-to-lookup tables: set threshold to 3 cases (Hans Wennborg, 2014-01-15; 1 file, -5/+3)
  There has been an old FIXME to find the right cut-off for when it's worth analyzing and potentially transforming a switch to a lookup table. The switches always have two or more cases. I could not measure any speed-up by transforming a switch with two cases. A switch with three cases gets a nice speed-up, and I couldn't measure any compile-time regression, so I think this is the right threshold.
  In a Clang self-host, this causes 480 new switches to be transformed, and reduces the final binary size by 8 KB.
  llvm-svn: 199294
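  A hedged sketch of the transformation on a minimal three-case switch; the names, result values, and exact shape of the emitted table are illustrative:
    ; before: a dense three-case switch producing a value
    switch i32 %i, label %default [
      i32 0, label %a
      i32 1, label %b
      i32 2, label %c
    ]
    ; after simplifycfg: a bounds check plus a load from a constant table
    @switch.table = private unnamed_addr constant [3 x i32] [i32 10, i32 20, i32 30]
    %inrange = icmp ult i32 %i, 3
    %gep = getelementptr inbounds [3 x i32]* @switch.table, i32 0, i32 %i
    %val = load i32* %gep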
* LoopVectorize: Only strip casts from integer types when replacing symbolic strides (Arnold Schwaighofer, 2014-01-15; 1 file, -4/+5)
  Fixes PR18480.
  llvm-svn: 199291
* Do pointer cast simplifications on addrspacecast (Matt Arsenault, 2014-01-14; 1 file, -1/+1)
  llvm-svn: 199254
* Remove a check for an illegal condition. (Matt Arsenault, 2014-01-14; 1 file, -5/+0)
  Bitcasts can't be between address spaces anymore.
  llvm-svn: 199253
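  For context, crossing address spaces now requires the dedicated instruction, so the removed check was guarding against IR that can no longer exist:
    %q = addrspacecast i32 addrspace(1)* %p to i32*
    ; a bitcast between address spaces is no longer valid IR:
    ; %q = bitcast i32 addrspace(1)* %p to i32*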
* Make nocapture analysis work with addrspacecast (Matt Arsenault, 2014-01-14; 1 file, -0/+2)
  llvm-svn: 199246
* Reapply "LTO: add API to set strategy for -internalize"Duncan P. N. Exon Smith2014-01-141-16/+25
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Reapply r199191, reverted in r199197 because it carelessly broke Other/link-opts.ll. The problem was that calling createInternalizePass("main") would select createInternalizePass(bool("main")) instead of createInternalizePass(ArrayRef<const char *>("main")). This commit fixes the bug. The original commit message follows. Add API to LTOCodeGenerator to specify a strategy for the -internalize pass. This is a new attempt at Bill's change in r185882, which he reverted in r188029 due to problems with the gold linker. This puts the onus on the linker to decide whether (and what) to internalize. In particular, running internalize before outputting an object file may change a 'weak' symbol into an internal one, even though that symbol could be needed by an external object file --- e.g., with arclite. This patch enables three strategies: - LTO_INTERNALIZE_FULL: the default (and the old behaviour). - LTO_INTERNALIZE_NONE: skip -internalize. - LTO_INTERNALIZE_HIDDEN: only -internalize symbols with hidden visibility. LTO_INTERNALIZE_FULL should be used when linking an executable. Outputting an object file (e.g., via ld -r) is more complicated, and depends on whether hidden symbols should be internalized. E.g., for ld -r, LTO_INTERNALIZE_NONE can be used when -keep_private_externs, and LTO_INTERNALIZE_HIDDEN can be used otherwise. However, LTO_INTERNALIZE_FULL is inappropriate, since the output object file will eventually need to link with others. lto_codegen_set_internalize_strategy() sets the strategy for subsequent calls to lto_codegen_write_merged_modules() and lto_codegen_compile*(). <rdar://problem/14334895> llvm-svn: 199244
* Decouple dllexport/dllimport from linkage (Nico Rieck, 2014-01-14; 1 file, -1/+1)
  Representing dllexport/dllimport as distinct linkage types prevents using these attributes on templates and inline functions.
  Instead of introducing further mixed linkage types to include linkonce and weak ODR, the old import/export linkage types are replaced with a new separate visibility-like specifier:
    define available_externally dllimport void @f() {}
    @Var = dllexport global i32 1, align 4
  Linkage for dllexported globals and functions is now equal to their linkage without dllexport. Imported globals and functions must be either declarations with external linkage, or definitions with AvailableExternallyLinkage.
  llvm-svn: 199218