path: root/llvm/lib/Transforms
Commit message · Author · Age · Files · Lines
* Add a loop rerolling flag to the PassManagerBuilder (Hal Finkel, 2013-11-17, 1 file, -1/+2)
  This adds a boolean member variable to the PassManagerBuilder to control loop rerolling (just like we have for unrolling and the various vectorization options). This is necessary for control by the frontend. Loop rerolling remains disabled by default at all optimization levels. llvm-svn: 194966
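  A minimal sketch of how a frontend might toggle the new flag; the member name RerollLoops is assumed here, not quoted from the commit:

    #include "llvm/Transforms/IPO/PassManagerBuilder.h"

    // Hypothetical frontend setup: enable loop rerolling alongside the
    // existing unrolling/vectorization knobs (member name assumed).
    void configurePasses(llvm::PassManagerBuilder &PMB, bool Reroll) {
      PMB.OptLevel = 2;
      PMB.RerollLoops = Reroll;  // stays off by default at every -O level
    }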
* Add the cold attribute to error-reporting call sites (Hal Finkel, 2013-11-17, 1 file, -0/+72)
  Generally speaking, control flow paths with error reporting calls are cold. So far, error reporting calls are calls to perror and calls to fprintf, fwrite, etc. with stderr as the stream. This can be extended in the future. The primary motivation is to improve block placement (the cold attribute affects the static branch prediction heuristics). llvm-svn: 194943
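  For illustration, a typical source pattern whose call site would now receive the attribute (a sketch, not taken from the patch):

    #include <stdio.h>

    int read_config(FILE *f, char *buf, size_t n) {
      if (fread(buf, 1, n, f) != n) {
        // fprintf with stderr as the stream: this call site is marked
        // 'cold', so the whole error path is predicted not-taken.
        fprintf(stderr, "read_config: short read\n");
        return -1;
      }
      return 0;
    }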
* Fix ndebug-build unused variable in loop rerolling (Hal Finkel, 2013-11-17, 1 file, -1/+1)
  llvm-svn: 194941
* Add a loop rerolling pass (Hal Finkel, 2013-11-16, 4 files, -0/+1196)
  This adds a loop rerolling pass: the opposite of (partial) loop unrolling. The transformation aims to take loops like this:

    for (int i = 0; i < 3200; i += 5) {
      a[i]     += alpha * b[i];
      a[i + 1] += alpha * b[i + 1];
      a[i + 2] += alpha * b[i + 2];
      a[i + 3] += alpha * b[i + 3];
      a[i + 4] += alpha * b[i + 4];
    }

  and turn them into this:

    for (int i = 0; i < 3200; ++i) {
      a[i] += alpha * b[i];
    }

  and loops like this:

    for (int i = 0; i < 500; ++i) {
      x[3*i]   = foo(0);
      x[3*i+1] = foo(0);
      x[3*i+2] = foo(0);
    }

  and turn them into this:

    for (int i = 0; i < 1500; ++i) {
      x[i] = foo(0);
    }

  There are two motivations for this transformation:
  1. Code-size reduction (especially relevant, obviously, when compiling for code size).
  2. Providing greater choice to the loop vectorizer (and generic unroller) to choose the unrolling factor (and a better ability to vectorize). The loop vectorizer can take vector lengths and register pressure into account when choosing an unrolling factor, for example, and a pre-unrolled loop limits that choice. This is especially problematic if the manual unrolling was optimized for a machine different from the current target.

  The current implementation is limited to single basic-block loops only. The rerolling recognition should work regardless of how the loop iterations are intermixed within the loop body (subject to dependency and side-effect constraints), but the significant restriction is that the order of the instructions in each iteration must be identical. This seems sufficient to capture all current use cases.

  This pass is not currently enabled by default at any optimization level. llvm-svn: 194939
* Apply the InstCombine fptrunc sqrt optimization to llvm.sqrt (Hal Finkel, 2013-11-16, 1 file, -6/+11)
  InstCombine, in visitFPTrunc, applies the following optimization to sqrt calls:

    (fptrunc (sqrt (fpext x))) -> (sqrtf x)

  but does not apply the same optimization to llvm.sqrt. This is a problem because, to enable vectorization, Clang generates llvm.sqrt instead of sqrt in fast-math mode, and because this optimization is being applied to sqrt and not applied to llvm.sqrt, sometimes the fast-math code is slower. This change makes InstCombine apply this optimization to llvm.sqrt as well. This fixes the specific problem in PR17758, although the same underlying issue (optimizations applied to libcalls are not applied to intrinsics) exists for other optimizations in SimplifyLibCalls. llvm-svn: 194935
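  The source-level shape of the pattern, sketched in C (under fast-math, the libm call below is emitted as llvm.sqrt):

    #include <math.h>

    float root(float x) {
      // Compiles to (fptrunc (sqrt (fpext x))). After the fold, the
      // double round-trip disappears, as if the source had been sqrtf(x).
      return (float)sqrt((double)x);
    }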
* InstCombine: fold (A >> C) == (B >> C) --> (A^B) < (1 << C) for constant Cs. (Benjamin Kramer, 2013-11-16, 1 file, -0/+18)
  This is common in bitfield code. llvm-svn: 194925
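  The fold holds because the shifted values are equal exactly when A and B agree on every bit at position C and above, i.e. when A^B has no bit set there, i.e. when (A^B) is unsigned-less-than 1 << C. A sketch:

    // Bitfield-style check: do a and b agree on all bits above bit 4?
    bool same_upper(unsigned a, unsigned b) {
      return (a >> 5) == (b >> 5);
      // after the fold: return (a ^ b) < (1u << 5);  (unsigned compare)
    }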
* LoopVectorizer: Use ABI alignment for accesses with no alignment (Arnold Schwaighofer, 2013-11-15, 1 file, -0/+4)
  When we vectorize a scalar access with no alignment specified, we have to set the target's ABI alignment of the scalar access on the vectorized access. Reusing the alignment of zero would be wrong because most targets have a bigger ABI alignment for vector types. This probably fixes PR17878. llvm-svn: 194876
* ArgumentPromotion: correctly transfer TBAA tags and alignments. (Manman Ren, 2013-11-15, 1 file, -3/+5)
  We used to use std::map<IndicesVector, LoadInst*> for OriginalLoads, so when we tried to promote two arguments, both would write to OriginalLoads, causing the loads created for the two arguments to share the same original load, and hence the same TBAA tag and alignment. The fix is to use std::map<std::pair<Argument*, IndicesVector>, LoadInst*> for OriginalLoads, so each Argument writes to a different part of the map. PR17906 llvm-svn: 194846
* [asan] use GlobalValue::PrivateLinkage for coverage guard to save quite a bit of code size (Kostya Serebryany, 2013-11-15, 1 file, -1/+1)
  llvm-svn: 194800
* Reapply "[asan] Poor man's coverage that works with ASan" (Bob Wilson, 2013-11-15, 1 file, -0/+52)
  I was able to successfully run a bootstrapped LTO build of clang with r194701, so this change does not seem to be the cause of our failing buildbots. llvm-svn: 194789
* Add instcombine visitor for addrspacecast (Matt Arsenault, 2013-11-15, 2 files, -0/+5)
  llvm-svn: 194786
* Revert "[asan] Poor man's coverage that works with ASan"Bob Wilson2013-11-151-52/+0
| | | | | | | | | This reverts commit 194701. Apple's bootstrapped LTO builds have been failing, and this change (along with compiler-rt 194702-194704) is the only thing on the blamelist. I will either reappy these changes or help debug the problem, depending on whether this fixes the buildbots. llvm-svn: 194780
* [asan] Poor man's coverage that works with ASan (Kostya Serebryany, 2013-11-14, 1 file, -0/+52)
  llvm-svn: 194701
* [msan] Fast path optimization for wrap-indirect-calls feature of MemorySanitizer. (Evgeniy Stepanov, 2013-11-14, 1 file, -12/+65)
  Indirect call wrapping helps MSanDR (the dynamic-instrumentation companion tool for MSan) catch all cases where execution leaves a compiler-instrumented module, by allowing the tool to rewrite the targets of indirect calls. This change is an optimization that skips wrapping for calls whose target is inside the current module. It relies on the linker providing symbols at the beginning and end of the module code (or code + data, it does not really matter). The gold linker provides such symbols by default. The GNU (BFD) linker needs a link flag: -Wl,--defsym=__executable_start=0. More info: https://code.google.com/p/memory-sanitizer/wiki/MSanDR#Native_exec llvm-svn: 194697
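  A rough model of the emitted fast path, assuming the linker-provided bounds symbols named above; the wrapper name here is illustrative, as the real one is supplied by the feature's configuration:

    // Module bounds provided by the linker (gold emits them by default;
    // BFD ld needs -Wl,--defsym=__executable_start=0 as noted above).
    extern "C" char __executable_start, _end;

    typedef void (*Fn)(void);
    Fn wrap_indirect_call(Fn target);  // illustrative wrapper name

    Fn resolve_target(Fn target) {
      char *p = (char *)target;
      if (p >= &__executable_start && p < &_end)
        return target;                 // inside this module: skip wrapping
      return wrap_indirect_call(target);  // leaving instrumented code
    }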
* Use StringRef instead of std::string (Jakub Staszak, 2013-11-13, 1 file, -1/+1)
  llvm-svn: 194601
* Fix -Wdelete-non-virtual-dtor warnings by making SampleProfile methods non-virtual (Alexey Samsonov, 2013-11-13, 1 file, -4/+4)
  llvm-svn: 194568
* SampleProfileLoader pass. Initial setup. (Diego Novillo, 2013-11-13, 3 files, -0/+481)
  This adds a new scalar pass that reads a file with samples generated by 'perf' during runtime. The samples read from the profile are incorporated and emitted as IR metadata reflecting that profile.

  The profile file is assumed to have been generated by an external profile source. The profile information is converted into IR metadata, which is later used by the analysis routines to estimate block frequencies, edge weights and other related data. External profile information files have no fixed format; each profiler is free to define its own. This includes both the on-disk representation of the profile and the kind of profile information stored in the file. A common kind of profile is based on sampling (e.g., perf), which essentially counts how many times each line of the program has been executed during the run.

  The SampleProfileLoader pass is organized as a scalar transformation. On startup, it reads the file given in -sample-profile-file to determine what kind of profile it contains. This file is assumed to contain profile information for the whole application. The profile data in the file is read and incorporated into the internal state of the corresponding profiler.

  To facilitate testing, I've organized the profilers to support two file formats: text and native. The native format is whatever on-disk representation the profiler wants to support; I think this will mostly be bitcode files, but it could be anything the profiler wants to support. To do this, every profiler must implement the SampleProfile::loadNative() function. The text format is mostly meant for debugging. Records are separated by newlines, but each profiler is free to interpret records as it sees fit. Profilers must implement the SampleProfile::loadText() function.

  Finally, the pass will call SampleProfile::emitAnnotations() for each function in the current translation unit. This function needs to translate the loaded profile into IR metadata, which the analyzer will later be able to use.

  This patch implements the first steps towards the above design. I've implemented a sample-based flat profiler. The format of the profile is fairly simplistic. Each sampled function contains a list of relative line locations (from the start of the function) together with a count representing how many samples were collected at that line during execution. I generate this profile using perf and a separate converter tool. Currently, I have only implemented a text format for these profiles. I am interested in initial feedback on the whole approach before I send the other parts of the implementation for review.

  This patch implements:
  - The SampleProfileLoader pass.
  - The base ExternalProfile class with the core interface.
  - A SampleProfile subclass using the above interface. The profiler generates branch weight metadata on every branch instruction that matches the profiles.
  - A text loader class to assist the implementation of SampleProfile::loadText().
  - Basic unit tests for the pass.

  Additionally, the patch uses profile information to compute branch weights based on instruction samples. It does a fairly simplistic conversion: given a multi-way branch instruction, it calculates the weight of each branch based on the maximum sample count gathered from each target basic block.

  Note that this assignment of branch weights is somewhat lossy and can be misleading. If a basic block has more than one incoming branch, all the incoming branches will get the same weight. In reality, it may be that only one of them is the most heavily taken branch. I will adjust this assignment in subsequent patches. llvm-svn: 194566
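  A toy version of the weight computation described above, under the stated scheme (each branch target's weight is the largest per-line sample count observed in that block):

    #include <algorithm>
    #include <vector>

    // Samples collected for each line of one basic block.
    using BlockSamples = std::vector<unsigned>;

    // Weight of one branch target: the maximum sample count in the block.
    unsigned branchWeight(const BlockSamples &Target) {
      return Target.empty()
                 ? 0
                 : *std::max_element(Target.begin(), Target.end());
    }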
* Update the docs to match the function name. (Nadav Rotem, 2013-11-13, 1 file, -1/+1)
  llvm-svn: 194537
* Fold (iszero(A&K1) | iszero(A&K2)) -> (A&(K1|K2)) != (K1|K2) if we know that K1 and K2 are 'one-hot' (only one bit is on). (Nadav Rotem, 2013-11-12, 1 file, -3/+50)
  llvm-svn: 194525
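  A worked instance with K1 = 0x4 and K2 = 0x10, both one-hot: the OR of the two zero-tests is false only when both bits are set, which is exactly when A masked by K1|K2 equals K1|K2.

    bool either_bit_clear(unsigned a) {
      return (a & 0x4) == 0 || (a & 0x10) == 0;
      // after the fold: return (a & 0x14) != 0x14;
    }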
* FoldBranchToCommonDest merges branches into a single branch with or/and of the condition. (Nadav Rotem, 2013-11-12, 1 file, -2/+7)
  It has a heuristic for estimating when some of the dependencies are processed by out-of-order processors. This patch adds another rule to the heuristic: if the "bonus instruction" that we speculatively execute is used by the condition of the second branch, then it is okay to hoist it. This change exposes more opportunities for other passes to transform the code. It does not matter much whether we if-convert the code, because the SelectionDAG builder splits or/and branches into multiple branches when profitable. llvm-svn: 194524
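  A sketch of the shape this enables; the multiply is the bonus instruction, feeding the second condition, and may now be hoisted and executed speculatively so the two branches merge:

    int f(int a, int b, int c) {
      if (a == 0)
        return 1;
      int t = b * 7;   // bonus instruction, used by the next condition
      if (t == c)
        return 1;
      return 0;
      // merged form: if (a == 0 | (b * 7) == c) return 1; -- b * 7 runs
      // speculatively; SelectionDAG may re-split the branch if profitable.
    }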
* Correctly merge constants with explicit and implicit alignments. (Rafael Espindola, 2013-11-12, 1 file, -4/+7)
  Constant merge can merge a constant with implicit alignment with one that has explicit alignment. Before this change it was assuming that the explicit alignment was higher than the implicit one, causing the result to be under-aligned in some cases. Fixes PR17815. Patch by Chris Smowton! llvm-svn: 194506
* SimplifyCFG: Use existing constant folding logic when forming switch tables. (Benjamin Kramer, 2013-11-12, 1 file, -31/+20)
  Both simpler and more powerful than the hand-rolled folding logic. llvm-svn: 194475
* Correct a glitch in r194424 which may invalidate an iterator. (Shuxin Yang, 2013-11-12, 1 file, -1/+3)
  llvm-svn: 194457
* llvm-cov: Added call to update run/program counts. (Yuchen Wu, 2013-11-12, 1 file, -0/+8)
  Also updated test files that were generated from this change. llvm-svn: 194453
* Fix PR17952. (Shuxin Yang, 2013-11-11, 1 file, -6/+175)
  The symptom is that an assertion is triggered. The assertion was added by me to detect the situation when a value is propagated from dead blocks. (We can certainly get rid of the assertion; it is safe to do so, because propagating a value from a dead block to a live join node is certainly ok.)

  The root cause of this bug is that edge-splitting is conducted on the fly; the edge being split could be a dead edge, so the block that splits the critical edge needs to be flagged "dead" as well. There are three ways to fix this bug:
  1) Get rid of the assertion, as I mentioned earlier.
  2) When a dead edge is split, flag the inserted block "dead".
  3) Proactively split the critical edges connecting dead and live blocks when new dead blocks are revealed.

  This fix goes with 3), at a cost of 2 additional LOC. A test case was added by Rafael the other day. llvm-svn: 194424
* Move debug message in vectorizer (Renato Golin, 2013-11-11, 1 file, -4/+1)
  No functional change, just better reporting. llvm-svn: 194388
* [msan] Propagate origin for insertvalue, extractvalue. (Evgeniy Stepanov, 2013-11-11, 1 file, -2/+2)
  llvm-svn: 194374
* Revert "Resurrect r191017 " GVN proceeds in the presence of dead code" plus ↵Bill Wendling2013-11-101-168/+6
| | | | | | | | | | | | | a fix to PR17307 & 17308." This causes PR17852. This reverts commit d93e8a06b2ca09ab18f390cd514b7443e2e571f7. Conflicts: test/Transforms/GVN/cond_br2.ll llvm-svn: 194348
* Use type form of getIntPtrType. (Matt Arsenault, 2013-11-10, 1 file, -1/+1)
  This should be inconsequential and is work towards removing the default address space arguments. llvm-svn: 194347
* SimplifyCFG has a heuristic for out-of-order processors that decides when it is worthwhile to merge branches. (Nadav Rotem, 2013-11-10, 1 file, -1/+1)
  It tries to estimate whether the operands of the instruction that we want to hoist are ready. This commit marks function arguments as 'ready' because they require no calculation. This boosts libquantum and a few other workloads from the testsuite. llvm-svn: 194346
* Teach MergeFunctions about address spaces (Matt Arsenault, 2013-11-10, 1 file, -11/+19)
  llvm-svn: 194342
* Remove dead code from LoopUnswitch (Hal Finkel, 2013-11-08, 1 file, -127/+0)
  LoopUnswitch's code-simplification routine has logic to convert conditional branches into unconditional branches, after unswitching makes the condition constant, and then remove any blocks rendered dead. Unfortunately, this code is dead, currently broken, and furthermore, has never been alive (at least as far back as 2006). No functionality change intended. llvm-svn: 194277
* [objc-arc] Convert the one-directional retain/release relation assert to a conditional check + fail. (Michael Gottesman, 2013-11-05, 1 file, -3/+18)
  Due to the previously added overflow checks, we can have a retain/release relation that is one-directional. This occurs specifically when we run into an additive overflow causing us to drop state in only one direction. If that occurs, we should bail and not optimize that retain/release instead of asserting. Apologies for the size of the testcase; it is necessary to cause the additive CFG overflow to trigger. rdar://15377890 llvm-svn: 194083
* Add a runtime unrolling parameter to the LoopUnroll pass constructor (Hal Finkel, 2013-11-05, 1 file, -6/+10)
  As with the other loop unrolling parameters (the unrolling threshold, partial unrolling, etc.), runtime unrolling can now also be controlled via the constructor. This will be necessary for moving non-trivial unrolling late in the pass manager (after loop vectorization). No functionality change intended. llvm-svn: 194027
* Remove dead code (Shuxin Yang, 2013-11-04, 1 file, -6/+0)
  llvm-svn: 194017
* SLPVectorizer: Use properlyDominates to satisfy the irreflexivity of a strict weak ordering. (Benjamin Kramer, 2013-11-04, 1 file, -1/+1)
  STL debug mode checks this. llvm-svn: 194015
* Scalarize select vector arguments when extracted. (Matt Arsenault, 2013-11-04, 1 file, -0/+32)
  When the elements are extracted from a select on vectors or a vector select, do the select on the extracted scalars from the input if there is only one use. llvm-svn: 194013
* SLPVectorizer: Add a missing pair of parens. No functionality change. (Benjamin Kramer, 2013-11-03, 1 file, -1/+1)
  llvm-svn: 193958
* SLPVectorizer: When CSEing generated gathers only scan blocks containing them. (Benjamin Kramer, 2013-11-03, 1 file, -20/+37)
  Instead of doing an RPO traversal of the whole function, remember the blocks containing gathers (typically <= 2) and scan them in dominator-first order. The actual CSE is still quadratic, but I'm not confident that adding a scoped hash table here is worth it, as we're only looking at the generated instructions and not arbitrary code. llvm-svn: 193956
* Revert "Inliner: Handle readonly attribute per argument when adding memcpy"David Majnemer2013-11-031-13/+10
| | | | | | | | This reverts commit r193356, it caused PR17781. A reduced test case covering this regression has been added to the test suite. llvm-svn: 193955
* Spell "Actual" correctlyDavid Majnemer2013-11-031-1/+1
| | | | llvm-svn: 193954
* Convert calls to __sinpi and __cospi into __sincospi_stret (Bob Wilson, 2013-11-03, 1 file, -0/+156)
  This adds a SimplifyLibCalls case which converts the special __sinpi and __cospi (float & double variants) into a __sincospi_stret where appropriate, to remove duplicated work. Patch by Tim Northover llvm-svn: 193943
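  The duplicated work being removed, sketched at the source level; __sinpi(x) computes sin(pi*x), and the function names are per the commit (these are libm extensions, assumed here to be Apple's):

    extern "C" double __sinpi(double);
    extern "C" double __cospi(double);

    void polar_to_cartesian(double r, double turns, double *x, double *y) {
      // Two calls sharing one argument repeat the same argument reduction;
      // the pass combines them into a single __sincospi_stret call that
      // returns both results at once.
      *x = r * __cospi(turns);
      *y = r * __sinpi(turns);
    }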
* SLPVectorizer: Remove duplicated function. (Benjamin Kramer, 2013-11-02, 1 file, -10/+2)
  llvm-svn: 193927
* LoopVectorize: Remove quadratic behavior from the local CSE. (Benjamin Kramer, 2013-11-02, 1 file, -26/+40)
  Doing this with a hash map doesn't change behavior and avoids calling isIdenticalTo O(n^2) times. This should probably eventually move into a utility class shared with EarlyCSE and the limited CSE in the SLPVectorizer. llvm-svn: 193926
* LoopVectorizer: Move CSE code into its own function (Arnold Schwaighofer, 2013-11-01, 1 file, -32/+37)
  llvm-svn: 193895
* LoopVectorizer: Perform redundancy elimination on induction variables (Arnold Schwaighofer, 2013-11-01, 1 file, -1/+34)
  When the loop vectorizer was part of the SCC inliner pass manager, GVN would run after the loop vectorizer, followed by instcombine. This way redundancy (multiple uses) was removed and instcombine could perform scalarization on the induction variables. Having moved the loop vectorizer later, we no longer run any form of redundancy elimination before we perform instcombine. This caused vectorized induction variables to survive that did not before. On a recent iMac this helps linpack back from 6000 MFlops to 7000 MFlops. This should also help lpbench and paq8p.

  I ran a Release (without Asserts) build over the test-suite and did not see any negative impact on compile time. radar://15339680 llvm-svn: 193891
* LoopVectorize: Look for consecutive accesses in GEPs with trailing zero indices (Benjamin Kramer, 2013-11-01, 1 file, -11/+38)
  If we have a pointer to a single-element struct we can still build wide loads and stores to it (if there is no padding). llvm-svn: 193860
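  An example of the pattern now recognized, as a sketch: the GEPs for s[i].x carry a trailing zero index, but successive values of i still touch consecutive, padding-free memory, so the loop can use wide loads and stores.

    struct S { double x; };  // single element, no padding

    void scale(struct S *s, double k, int n) {
      for (int i = 0; i < n; ++i)
        s[i].x *= k;  // GEP indices (i, 0): consecutive despite the struct
    }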
* LoopVectorizer: If dependency checks fail, try runtime checks (Arnold Schwaighofer, 2013-11-01, 1 file, -5/+47)
  When a dependence check fails we can still try to vectorize loops with runtime array-bounds checks. This helps linpack vectorize a loop in dgefa, and we are back to 2x the scalar performance on a corei7-avx. radar://15339680 llvm-svn: 193853
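  The kind of loop this helps, sketched as a daxpy-style update like the one in dgefa: static dependence analysis cannot prove the two arrays are disjoint, so the vectorizer now emits an overlap check at runtime and keeps a scalar fallback.

    // a and b may alias as far as the compiler can tell, so a runtime
    // bounds check guards the vectorized body.
    void daxpy(int n, double alpha, const double *a, double *b) {
      for (int i = 0; i < n; ++i)
        b[i] += alpha * a[i];
      // emitted guard (conceptually): if (b + n <= a || a + n <= b) run
      // the vector body; otherwise fall back to the scalar loop.
    }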
* LoopVectorizer: Clear all member data structures in RuntimeCheck.reset() (Arnold Schwaighofer, 2013-11-01, 1 file, -0/+2)
  Clear all data structures when resetting the RuntimeCheck data structure. No test case; this was exposed by an upcoming change. llvm-svn: 193852
* Do not convert "call asm" to "invoke asm" in Inliner.Manman Ren2013-10-311-1/+2
| | | | | | | | | | | | | | | | Given that backend does not handle "invoke asm" correctly ("invoke asm" will be handled by SelectionDAGBuilder::visitInlineAsm, which does not have the right setup for LPadToCallSiteMap) and we already made the assumption that inline asm does not throw in InstCombiner::visitCallSite, we are going to make the same assumption in Inliner to make sure we don't convert "call asm" to "invoke asm". If it becomes necessary to add support for "invoke asm" later on, we will need to modify the backend as well as remove the assumptions that inline asm does not throw. Fix rdar://15317907 llvm-svn: 193808