path: root/llvm/lib/Transforms/Scalar
Commits (subject, author, age, files changed, lines changed):
* Fix ndebug-build unused variable in loop rerolling (Hal Finkel, 2013-11-17, 1 file, -1/+1)
  llvm-svn: 194941
* Add a loop rerolling pass (Hal Finkel, 2013-11-16, 3 files, -0/+1190)

  This adds a loop rerolling pass: the opposite of (partial) loop unrolling.
  The transformation aims to take loops like this:

    for (int i = 0; i < 3200; i += 5) {
      a[i]     += alpha * b[i];
      a[i + 1] += alpha * b[i + 1];
      a[i + 2] += alpha * b[i + 2];
      a[i + 3] += alpha * b[i + 3];
      a[i + 4] += alpha * b[i + 4];
    }

  and turn them into this:

    for (int i = 0; i < 3200; ++i) {
      a[i] += alpha * b[i];
    }

  and loops like this:

    for (int i = 0; i < 500; ++i) {
      x[3*i]   = foo(0);
      x[3*i+1] = foo(0);
      x[3*i+2] = foo(0);
    }

  and turn them into this:

    for (int i = 0; i < 1500; ++i) {
      x[i] = foo(0);
    }

  There are two motivations for this transformation:

  1. Code-size reduction (especially relevant, obviously, when compiling for
     code size).
  2. Providing greater choice to the loop vectorizer (and generic unroller) to
     choose the unrolling factor (and a better ability to vectorize). The loop
     vectorizer can take vector lengths and register pressure into account
     when choosing an unrolling factor, for example, and a pre-unrolled loop
     limits that choice. This is especially problematic if the manual
     unrolling was optimized for a machine different from the current target.

  The current implementation is limited to single basic-block loops only. The
  rerolling recognition should work regardless of how the loop iterations are
  intermixed within the loop body (subject to dependency and side-effect
  constraints), but the significant restriction is that the order of the
  instructions in each iteration must be identical. This seems sufficient to
  capture all current use cases.

  This pass is not currently enabled by default at any optimization level.

  llvm-svn: 194939
* Fix -Wdelete-non-virtual-dtor warnings by making SampleProfile methods non-virtual (Alexey Samsonov, 2013-11-13, 1 file, -4/+4)
  llvm-svn: 194568
* SampleProfileLoader pass. Initial setup (Diego Novillo, 2013-11-13, 3 files, -0/+481)

  This adds a new scalar pass that reads a file with samples generated by
  'perf' during runtime. The samples read from the profile are incorporated
  and emitted as IR metadata reflecting that profile.

  The profile file is assumed to have been generated by an external profile
  source. The profile information is converted into IR metadata, which is
  later used by the analysis routines to estimate block frequencies, edge
  weights and other related data.

  External profile information files have no fixed format; each profiler is
  free to define its own. This includes both the on-disk representation of the
  profile and the kind of profile information stored in the file. A common
  kind of profile is based on sampling (e.g., perf), which essentially counts
  how many times each line of the program has been executed during the run.

  The SampleProfileLoader pass is organized as a scalar transformation. On
  startup, it reads the file given in -sample-profile-file to determine what
  kind of profile it contains. This file is assumed to contain profile
  information for the whole application. The profile data in the file is read
  and incorporated into the internal state of the corresponding profiler.

  To facilitate testing, I've organized the profilers to support two file
  formats: text and native. The native format is whatever on-disk
  representation the profiler wants to support; I think this will mostly be
  bitcode files, but it could be anything the profiler wants to support. To do
  this, every profiler must implement the SampleProfile::loadNative()
  function.

  The text format is mostly meant for debugging. Records are separated by
  newlines, but each profiler is free to interpret records as it sees fit.
  Profilers must implement the SampleProfile::loadText() function.

  Finally, the pass will call SampleProfile::emitAnnotations() for each
  function in the current translation unit. This function needs to translate
  the loaded profile into IR metadata, which the analyzer will later be able
  to use.

  This patch implements the first steps towards the above design. I've
  implemented a sample-based flat profiler. The format of the profile is
  fairly simplistic. Each sampled function contains a list of relative line
  locations (from the start of the function) together with a count
  representing how many samples were collected at that line during execution.
  I generate this profile using perf and a separate converter tool. Currently,
  I have only implemented a text format for these profiles. I am interested in
  initial feedback to the whole approach before I send the other parts of the
  implementation for review.

  This patch implements:

  - The SampleProfileLoader pass.
  - The base ExternalProfile class with the core interface.
  - A SampleProfile sub-class using the above interface. The profiler
    generates branch weight metadata on every branch instruction that matches
    the profiles.
  - A text loader class to assist the implementation of
    SampleProfile::loadText().
  - Basic unit tests for the pass.

  Additionally, the patch uses profile information to compute branch weights
  based on instruction samples. The conversion is fairly simplistic: given a
  multi-way branch instruction, it calculates the weight of each branch based
  on the maximum sample count gathered from each target basic block.

  Note that this assignment of branch weights is somewhat lossy and can be
  misleading. If a basic block has more than one incoming branch, all the
  incoming branches will get the same weight. In reality, it may be that only
  one of them is the most heavily taken branch. I will adjust this assignment
  in subsequent patches.

  llvm-svn: 194566
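  Illustrative sketch (not part of the original commit): a flat text profile
  of the kind described above might look like the following, where each record
  names a function and then lists line offsets from the start of the function
  together with the sample count collected at that offset. The header fields
  and separators shown here are assumptions for illustration only, not the
  committed format.

    main:25000:0
     1: 100
     4: 12000
     9: 12900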
* Correct a glitch in r194424 which may invalidate an iterator (Shuxin Yang, 2013-11-12, 1 file, -1/+3)
  llvm-svn: 194457
* Fix PR17952 (Shuxin Yang, 2013-11-11, 1 file, -6/+175)

  The symptom is that an assertion is triggered. The assertion was added by me
  to detect the situation when a value is propagated from dead blocks. (We
  could certainly get rid of the assertion; it is safe to do so, because
  propagating a value from a dead block to a live join node is certainly ok.)

  The root cause of this bug is that edge splitting is conducted on the fly;
  the edge being split could be a dead edge, therefore the block that splits
  the critical edge needs to be flagged "dead" as well.

  There are three ways to fix this bug:
  1) Get rid of the assertion, as mentioned earlier.
  2) When a dead edge is split, flag the inserted block "dead".
  3) Proactively split the critical edges connecting dead and live blocks when
     new dead blocks are revealed.

  This fix goes with 3), with an additional 2 LOC.

  A test case was added by Rafael the other day.

  llvm-svn: 194424
* Revert "Resurrect r191017 " GVN proceeds in the presence of dead code" plus ↵Bill Wendling2013-11-101-168/+6
| | | | | | | | | | | | | a fix to PR17307 & 17308." This causes PR17852. This reverts commit d93e8a06b2ca09ab18f390cd514b7443e2e571f7. Conflicts: test/Transforms/GVN/cond_br2.ll llvm-svn: 194348
* Remove dead code from LoopUnswitch (Hal Finkel, 2013-11-08, 1 file, -127/+0)

  LoopUnswitch's code simplification routine has logic to convert conditional
  branches into unconditional branches, after unswitching makes the condition
  constant, and then remove any blocks that this renders dead. Unfortunately,
  this code is dead, currently broken, and furthermore, has never been alive
  (at least as far back as 2006).

  No functionality change intended.

  llvm-svn: 194277
* Add a runtime unrolling parameter to the LoopUnroll pass constructor (Hal Finkel, 2013-11-05, 1 file, -6/+10)

  As with the other loop unrolling parameters (the unrolling threshold,
  partial unrolling, etc.), runtime unrolling can now also be controlled via
  the constructor. This will be necessary for moving non-trivial unrolling
  late in the pass manager (after loop vectorization).

  No functionality change intended.

  llvm-svn: 194027
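  Illustrative sketch (not part of the original commit; the parameter list and
  defaults are recalled from memory and may differ): a client building its own
  pass pipeline could request runtime unrolling through this constructor path
  roughly like so, where -1 means "use the default for this parameter".

    #include "llvm/PassManager.h"
    #include "llvm/Transforms/Scalar.h"
    using namespace llvm;

    static void addLateUnrolling(PassManagerBase &PM) {
      // Leave threshold, count and partial unrolling at their defaults, but
      // explicitly enable runtime unrolling for unknown trip counts.
      PM.add(createLoopUnrollPass(/*Threshold=*/-1, /*Count=*/-1,
                                  /*AllowPartial=*/-1, /*Runtime=*/1));
    }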
* Teach scalarrepl about address spaces (Matt Arsenault, 2013-10-30, 1 file, -1/+1)
  llvm-svn: 193720
* Fix GVN creating bitcast between address spaces (Matt Arsenault, 2013-10-30, 1 file, -5/+7)
  llvm-svn: 193710
* Fix SCEVExpander: don't try to expand quadratic recurrences outside a loop (Andrew Trick, 2013-10-25, 2 files, -3/+21)

  Partial fix for PR17459: wrong code at -O3 on x86_64-linux-gnu (affecting
  trunk and 3.3).

  When SCEV expands a recurrence outside of a loop it attempts to scale by the
  stride of the recurrence. Chained recurrences don't work that way. We could
  compute binomial coefficients, but would have to guarantee that the chained
  AddRecs are in a perfectly reduced form.

  llvm-svn: 193438
* Fix a bug in LinearFunctionTestReplace that created invalid loop exit checks (Juergen Ributzka, 2013-10-24, 1 file, -1/+7)

  Reviewed by Andy

  llvm-svn: 193303
* Clarify comments in genLoopLimit (Andrew Trick, 2013-10-24, 1 file, -3/+4)
  llvm-svn: 193292
* Use more type helper functions (Matt Arsenault, 2013-10-21, 1 file, -1/+1)
  llvm-svn: 193109
* Don't eliminate a partially redundant load if it's in a landing pad (Bill Wendling, 2013-10-21, 1 file, -1/+6)

  A landing pad can be jumped to only by the unwind edge of an invoke
  instruction. If we eliminate a partially redundant load in a landing pad, it
  will create a basic block that violates this constraint. It then leads to
  other problems down the line if it tries to merge that basic block with the
  landing pad. Avoid this by not eliminating the load in a landing pad.

  PR17621

  llvm-svn: 193064
* StructurizeCFG: Add dependency on LowerSwitch pass (Tom Stellard, 2013-10-02, 1 file, -1/+3)

  Switch instructions were crashing the StructurizeCFG pass, and it's probably
  easier anyway if we don't need to handle them in this pass.

  Reviewed-by: Christian König <christian.koenig@amd.com>

  llvm-svn: 191841
* Remove the very substantial, largely unmaintained legacy PGO infrastructure (Chandler Carruth, 2013-10-02, 1 file, -8/+0)

  This was essentially work toward PGO based on a design that had several
  flaws, partially dating from a time when LLVM had a different architecture,
  and with an effort to modernize it abandoned without being completed. Since
  then, it has bitrotted for several years further. The result is nearly
  unusable, and isn't helping any of the modern PGO efforts. Instead, it is
  getting in the way, adding confusion about PGO in LLVM and distracting
  everyone with maintenance on essentially dead code. Removing it paves the
  way for modern efforts around PGO.

  Among other effects, this removes the last of the runtime libraries from
  LLVM. Those are being developed in the separate 'compiler-rt' project now,
  with somewhat different licensing that is specifically more appropriate for
  runtimes.

  llvm-svn: 191835
* Even more spelling fixes for "instruction" (Robert Wilhelm, 2013-09-28, 1 file, -1/+1)
  llvm-svn: 191611
* Push analysis passes to InstSimplify when they're around anyways (Benjamin Kramer, 2013-09-24, 1 file, -1/+2)
  llvm-svn: 191309
* Drop spurious handle in comment (Benjamin Kramer, 2013-09-22, 1 file, -1/+1)
  llvm-svn: 191172
* SROA: Handle casts involving vectors of pointers and integer scalars (Benjamin Kramer, 2013-09-21, 1 file, -11/+47)

  SROA wants to convert any types of equivalent widths, but it's not possible
  to convert vectors of pointers to an integer scalar with a single cast. As a
  workaround we add a bitcast to the corresponding int ptr type first. This
  type of cast used to be an edge case but has become common with SLP
  vectorization.

  Fixes PR17271.

  llvm-svn: 191143
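  Illustrative sketch (not part of the original commit; it assumes 64-bit
  pointers and the 2013-era IRBuilder/VectorType API): converting an i128
  scalar to <2 x i8*> cannot be done with a single cast, so the value is first
  bitcast to the pointer-sized integer vector and only then converted to the
  pointer vector.

    #include "llvm/IR/IRBuilder.h"
    using namespace llvm;

    static Value *intScalarToPtrVector(IRBuilder<> &IRB, Value *V) {
      // i128 --> <2 x i64>: same bit width, so this is a plain bitcast.
      Value *AsIntVec =
          IRB.CreateBitCast(V, VectorType::get(IRB.getInt64Ty(), 2));
      // <2 x i64> --> <2 x i8*>: element-wise inttoptr.
      return IRB.CreateIntToPtr(AsIntVec,
                                VectorType::get(IRB.getInt8PtrTy(), 2));
    }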
* Resurrect r191017 "GVN proceeds in the presence of dead code" plus a fix to PR17307 & 17308 (Shuxin Yang, 2013-09-20, 1 file, -6/+168)

  The problem with r191017 is that when GVN fabricates a val-number for a dead
  instruction (in order to make the following expr-PRE happy), it forgets to
  fabricate a leader-table entry for it as well.

  llvm-svn: 191118
* Revert r191017, it results in segmentation faults in Qt (Joerg Sonnenberger, 2013-09-20, 1 file, -164/+6)
  llvm-svn: 191104
* GVN proceeds in the presence of dead code (Shuxin Yang, 2013-09-19, 1 file, -6/+164)

  This is how it ignores the dead code:

  1) When a dead branch target, say block B, is identified, all the blocks
     dominated by B are dead as well.

  2) The PHIs of those blocks in dominance-frontier(B) are updated such that
     the operands corresponding to dead predecessors are replaced by
     "UndefVal". In lattice jargon, the "UndefVal" is essentially the "Top". A
     PHI node like "phi(v1 bb1, undef xx)" will be optimized into "v1" if v1
     is a constant, or if v1 is an instruction which dominates this PHI node.

  3) When analyzing the availability of a load L, all dead mem-ops which L
     depends on are disguised as loads which evaluate exactly the same value
     as L.

  4) The dead mem-ops will be materialized as "UndefVal" during code motion.

  llvm-svn: 191017
* MemCpyOptimizer: Use max legal int size instead of pointer size (Matt Arsenault, 2013-09-16, 1 file, -5/+8)

  If there are no legal integers, assume 1 byte. This makes more sense than
  using the pointer size as a guess for the maximum GPR width. It is
  conceivable to want to use some 64-bit pointers on a target where 64-bit
  integers aren't legal.

  llvm-svn: 190817
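  Illustrative sketch (not part of the original commit; the DataLayout helper
  is given its current spelling, and the 2013 name may have differed): the
  idea is to size the lowering unit from the widest legal integer type and
  fall back to a single byte when the target declares no legal integers.

    #include "llvm/IR/DataLayout.h"
    using namespace llvm;

    static uint64_t maxStoreUnitBytes(const DataLayout &DL) {
      unsigned Bits = DL.getLargestLegalIntTypeSizeInBits(); // 0 if none
      return Bits ? Bits / 8 : 1;                            // assume 1 byte
    }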
* Remove the long, long defunct IR block placement pass (Chandler Carruth, 2013-09-14, 3 files, -154/+0)

  This pass was based on the previous (essentially unused) profiling
  infrastructure and the assumption that by ordering the basic blocks at the
  IR level in a particular way, the correct layout would happen in the end.
  This sometimes worked, and mostly didn't. It also was a really naive
  implementation of the classical paper that dates from when branch predictors
  were primarily directional and when loop structure wasn't commonly
  available. It also didn't factor into the equation non-fallthrough branches
  and other machine level details.

  Anyways, for all of these reasons and more, I wrote MachineBlockPlacement,
  which completely supersedes this pass. It both uses modern profile
  information infrastructure, and actually works. =]

  llvm-svn: 190748
* Add getUnrollingPreferences to TTI (Hal Finkel, 2013-09-11, 1 file, -7/+25)

  Allow targets to customize the default behavior of the generic loop
  unrolling transformation. This will be used by the PowerPC backend when
  targeting the A2 core (which is in-order with a deep pipeline), and using
  more aggressive defaults is important.

  llvm-svn: 190542
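  Illustrative sketch (not part of the original commit; the hook and field
  names are recalled from memory and the target class is hypothetical): a
  backend can now override the unroller's defaults from its
  TargetTransformInfo implementation, for example to enable partial and
  runtime unrolling on a deeply pipelined in-order core.

    void MyTargetTTI::getUnrollingPreferences(Loop *L,
                                              UnrollingPreferences &UP) const {
      // A deep in-order pipeline benefits from larger unrolled bodies, so
      // allow both partial and runtime unrolling by default.
      UP.Partial = true;
      UP.Runtime = true;
    }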
* Teach loop-idiom about address space pointer sizes (Matt Arsenault, 2013-09-11, 1 file, -12/+21)
  llvm-svn: 190491
* Add braces (Matt Arsenault, 2013-09-11, 1 file, -6/+9)
  llvm-svn: 190490
* Get rid of unused isPodLike definitions (Eli Friedman, 2013-09-11, 1 file, -10/+0)
  llvm-svn: 190461
* Fix mistake in r190442 (Eli Friedman, 2013-09-10, 1 file, -0/+7)
  llvm-svn: 190446
* Remove unused functions (Eli Friedman, 2013-09-10, 1 file, -5/+0)
  llvm-svn: 190442
* Teach ScalarEvolution about pointer address spaces (Matt Arsenault, 2013-09-10, 1 file, -1/+1)
  llvm-svn: 190425
* Use type helper functions (Matt Arsenault, 2013-09-06, 1 file, -2/+1)
  llvm-svn: 190113
* Teach CodeGenPrepare about address spaces (Matt Arsenault, 2013-09-06, 1 file, -4/+2)
  llvm-svn: 190112
* Revert: r189565 - Add getUnrollingPreferences to TTI (Hal Finkel, 2013-08-29, 1 file, -17/+5)

  Revert unintentional commit (of an unreviewed change).

  Original commit message:

    Add getUnrollingPreferences to TTI

    Allow targets to customize the default behavior of the generic loop
    unrolling transformation. This will be used by the PowerPC backend when
    targeting the A2 core (which is in-order with a deep pipeline), and using
    more aggressive defaults is important.

  llvm-svn: 189566
* Add getUnrollingPreferences to TTI (Hal Finkel, 2013-08-29, 1 file, -5/+17)

  Allow targets to customize the default behavior of the generic loop
  unrolling transformation. This will be used by the PowerPC backend when
  targeting the A2 core (which is in-order with a deep pipeline), and using
  more aggressive defaults is important.

  llvm-svn: 189565
* Turn MipsOptimizeMathLibCalls into a target-independent scalar transform (Richard Sandiford, 2013-08-23, 3 files, -0/+162)

  ...so that it can be used for z too. Most of the code is the same. The only
  real change is to use TargetTransformInfo to test when a sqrt instruction is
  available.

  The pass is opt-in because at the moment it only handles sqrt.

  llvm-svn: 189097
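  Illustrative sketch (not part of the original commit; it assumes the TTI
  query is haveFastSqrt and that CI and TTI are in scope inside the pass): the
  target-independence comes down to gating the transform on a
  TargetTransformInfo query rather than on Mips-specific knowledge.

    // Only partially inline the sqrt libcall when the target can lower the
    // llvm.sqrt intrinsic to a real square-root instruction.
    if (!TTI->haveFastSqrt(CI->getType()))
      return false; // leave the plain libcall in place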
* Revert r187191, which broke opt -mem2reg on the testcases included in PR16867 (Nick Lewycky, 2013-08-13, 2 files, -83/+14)

  However, opt -O2 doesn't run mem2reg directly, so nobody noticed until
  r188146 when SROA started sending more things directly down the
  PromoteMemToReg path.

  In order to revert r187191, I also revert dependent revisions r187296,
  r187322 and r188146. Fixes PR16867. Does not add the testcases from that PR,
  but both of them should get added for both mem2reg and sroa when this revert
  gets unreverted.

  llvm-svn: 188327
* Reapply r188119 now that the bug it exposed is fixed (Peter Collingbourne, 2013-08-12, 1 file, -160/+5)
  llvm-svn: 188217
* Re-instate r187323 which fast-tracks promotable allocas as soon as the SROA-based analysis has enough information (Chandler Carruth, 2013-08-11, 1 file, -12/+81)

  This should work now that both mem2reg *and* the SSAUpdater-based
  AllocaPromoter have been updated to be able to promote the types of allocas
  that the SROA analysis detects.

  I've included tests for the AllocaPromoter that were only possible to write
  once we fast-tracked promotable allocas without rewriting them. This
  includes a test both for r187347 and r188145.

  Original commit log for r187323:
  """
  Now that mem2reg understands how to cope with a slightly wider set of uses
  of an alloca, we can pre-compute promotability while analyzing an alloca for
  splitting in SROA. That lets us short-circuit the common case of a bunch of
  trivially promotable allocas.

  This cuts 20% to 30% off the run time of SROA for typical frontend-generated
  IR sequences I'm seeing. It gets the new SROA to within 20% of ScalarRepl
  for such code. My current benchmark for these numbers is PR15412, but it
  fits the general pattern of IR emitted by Clang so it should be widely
  applicable.
  """

  llvm-svn: 188146
* Finish fixing the SSAUpdater-based AllocaPromoter strategy in SROA to cope with the more general set of patterns that are now handled by mem2reg and that we can detect quickly while doing SROA's initial analysis (Chandler Carruth, 2013-08-11, 1 file, -2/+23)

  Notably, this allows it to promote through no-op bitcast and GEP sequences.

  A core part of the SSAUpdater approach is the ability to test whether a
  particular instruction is part of the set being promoted. Testing this
  becomes significantly more complex in the world where the operand to every
  load and store isn't the alloca itself. I ended up using the approach of
  walking up the def-chain until we find the alloca. I benchmarked this
  against keeping a set of pointer operands and keeping a set of the loads and
  stores we care about, and this one seemed faster although the difference was
  very small.

  No test case yet because currently the rewriting always "fixes" the inputs
  to not require this. The next patch which re-enables early promotion of easy
  cases in SROA will include a test case that specifically exercises this
  aspect of the alloca promoter.

  llvm-svn: 188145
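  Illustrative sketch (not part of the original commit; a hypothetical helper,
  not the code in the patch): the membership test described above walks up the
  def-chain, stripping the bitcasts and GEPs that may now sit between a load
  or store and the alloca being promoted.

    #include "llvm/IR/Instructions.h"
    using namespace llvm;

    static AllocaInst *findUnderlyingAlloca(Value *Ptr) {
      for (;;) {
        if (AllocaInst *AI = dyn_cast<AllocaInst>(Ptr))
          return AI;                       // reached the promoted alloca
        if (BitCastInst *BC = dyn_cast<BitCastInst>(Ptr)) {
          Ptr = BC->getOperand(0);         // look through the bitcast
          continue;
        }
        if (GetElementPtrInst *GEP = dyn_cast<GetElementPtrInst>(Ptr)) {
          Ptr = GEP->getPointerOperand();  // look through the GEP
          continue;
        }
        return 0;                          // not rooted at an alloca
      }
    }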
* Reformat some bits of AllocaPromoter and simplify the name and type of our visiting data structures in the AllocaPromoter/SSAUpdater path of SROA (Chandler Carruth, 2013-08-11, 1 file, -21/+20)

  Also shift the order of clears around to be more consistent.

  No functionality changed here, this is just a cleanup.

  llvm-svn: 188144
* Revert r188119 "Kill some duplicated code for removing unreachable BBs." (Arnold Schwaighofer, 2013-08-10, 1 file, -5/+160)

  It is breaking buildbots with libgmalloc enabled on Mac OS X.

    $ cd llvm ; mkdir release ; cd release
    $ ../configure --enable-optimized --prefix=$PWD/install
    $ make
    $ make check
    $ Release+Asserts/bin/llvm-lit -v --param use_gmalloc=1 --param \
        gmalloc_path=/usr/lib/libgmalloc.dylib \
        ../test/Instrumentation/DataFlowSanitizer/args-unreachable-bb.ll

  llvm-svn: 188142
* Kill some duplicated code for removing unreachable BBs (Peter Collingbourne, 2013-08-09, 1 file, -160/+5)

  This moves removeUnreachableBlocksFromFn from SimplifyCFGPass.cpp to
  Utils/Local.cpp and uses it to replace the implementation of
  llvm::removeUnreachableBlocks, which appears to do a strict subset of what
  removeUnreachableBlocksFromFn does.

  Differential Revision: http://llvm-reviews.chandlerc.com/D1334

  llvm-svn: 188119
* JumpThreading: Turn a select instruction into branching if it allows threading one half of the select (Benjamin Kramer, 2013-08-07, 1 file, -0/+83)

  This is a common pattern coming out of simplifycfg generating gross code.

    a:                                  ; preds = %entry
      %sel = select i1 %cmp1, double %add, double 0.000000e+00
      br label %b

    b:
      %cond5 = phi double [ %sel, %a ], [ %sub, %entry ]
      %cmp6 = fcmp oeq double %cond5, 0.000000e+00
      br i1 %cmp6, label %if.then, label %if.end

  becomes

    a:
      br i1 %cmp1, label %b, label %if.then

    b:
      %cond5 = phi double [ %sub, %entry ], [ %add, %a ]
      %cmp6 = fcmp oeq double %cond5, 0.000000e+00
      br i1 %cmp6, label %if.then, label %if.end

  Skipping block b completely if possible.

  llvm-svn: 187880
* Adjust file to the coding standard (Jakub Staszak, 2013-08-06, 1 file, -53/+49)
  llvm-svn: 187808
* Factor FlattenCFG out from SimplifyCFG (Tom Stellard, 2013-08-06, 4 files, -53/+94)

  Patch by: Mei Ye

  llvm-svn: 187764
* Teach the AllocaPromoter which is wrapped around the SSAUpdater infrastructure to do promotion without a domtree the same smarts about looking through GEPs, bitcasts, etc., that I just taught mem2reg about (Chandler Carruth, 2013-07-29, 1 file, -15/+51)

  This way, if SROA chooses to promote an alloca which still has some noisy
  instructions this code can cope with them. I've not used as principled of an
  approach here for two reasons:

  1) This code doesn't really need it as we were already set up to zip through
     the instructions used by the alloca.
  2) I view the code here as more of a hack, and hopefully a temporary one.

  The SSAUpdater path in SROA is a real sore point for me. It doesn't make a
  lot of architectural sense for many reasons:

  - We're likely to end up needing the domtree anyways in a subsequent pass,
    so why not compute it earlier and use it.
  - In the future we'll likely end up needing the domtree for parts of the
    inliner itself.
  - If we need to we could teach the inliner to preserve the domtree. Part of
    the re-work of the pass manager will allow this to be very powerful even
    in large SCCs with many functions.
  - Ultimately, computing a domtree has gotten significantly faster since the
    original SSAUpdater-using code went into ScalarRepl. We no longer use
    domfrontiers, and much of domtree is lazily done based on queries rather
    than eagerly.
  - At this point keeping the SSAUpdater-based promotion saves a total of 0.7%
    on a build of the 'opt' tool for me. That's not a lot of performance given
    the complexity!

  So I'm leaving this a bit ugly in the hope that eventually we just remove
  all of this nonsense.

  I can't even readily test this because this code isn't reachable except
  through SROA. When I re-instate the patch that fast-tracks allocas already
  suitable for promotion, I'll add a testcase there that failed before this
  change. Before that, SROA will fix any test case I give it.

  llvm-svn: 187347