path: root/llvm/test/Transforms/LoopVectorize
* [LV] Introduce statistics for LoopVectorize: the number of loops analyzed and the number of loops vectorized. (Alexander Musman, 2014-04-23; 1 file, -0/+66)
    Use -stats to see how many loops were analyzed for possible
    vectorization and how many of them were actually vectorized.
    Patch by Zinovy Nis
    Differential Revision: http://reviews.llvm.org/D3438
    llvm-svn: 206956
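    As an illustration, a test exercising these counters might look like
    the following sketch. The statistic name in the CHECK line is an
    assumption; only -loop-vectorize and -stats are taken from the
    commit itself:

      ; REQUIRES: asserts
      ; RUN: opt < %s -loop-vectorize -force-vector-width=4 -stats -disable-output 2>&1 | FileCheck %s
      ; CHECK: loop-vectorize

      define void @add1(i32* nocapture %a) {
      entry:
        br label %loop
      loop:
        %i = phi i64 [ 0, %entry ], [ %i.next, %loop ]
        %p = getelementptr inbounds i32* %a, i64 %i
        %v = load i32* %p
        %v1 = add nsw i32 %v, 1
        store i32 %v1, i32* %p
        %i.next = add i64 %i, 1
        %cmp = icmp slt i64 %i.next, 1024
        br i1 %cmp, label %loop, label %exit
      exit:
        ret void
      }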
* Add missing config file for the test case added in r206563. (Jiangning Liu, 2014-04-18; 1 file, -0/+6)
    llvm-svn: 206567
* Allow vectorized loops to be unrolled by a factor of 2 for AArch64. (Jiangning Liu, 2014-04-18; 2 files, -0/+84)
    A new test case is also added for ARM64.
    Patch by Z. Zheng
    llvm-svn: 206563
* vect.omp.persistence.ll REQUIRES asserts due to -debug-only. (NAKAMURA Takumi, 2014-04-15; 1 file, -0/+1)
    llvm-svn: 206271
* D3348 - [BUG] "Rotate Loop" pass kills "llvm.vectorizer.enable" metadata (Alexey Bataev, 2014-04-15; 1 file, -0/+87)
    llvm-svn: 206266
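    For reference, a minimal sketch of the metadata involved, in the IR
    syntax of the time (the exact node layout is an assumption): the
    hint lives on the loop's !llvm.loop attachment, which loop rotation
    must preserve when it rewrites the latch branch:

      loop:
        ...
        br i1 %exitcond, label %exit, label %loop, !llvm.loop !0

      !0 = metadata !{metadata !0, metadata !1}
      !1 = metadata !{metadata !"llvm.vectorizer.enable", i1 1}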
* [LoopVectorizer] Count dependencies of consecutive pointers as uniforms (Hal Finkel, 2014-04-02; 2 files, -0/+55)
    For the purpose of calculating the cost of the loop at various
    vectorization factors, we need to count dependencies of consecutive
    pointers as uniforms (which means that the VF = 1 cost is used for
    all overall VF values). For example, the TSVC benchmark function
    s173 has:

      ...
      %3 = add nsw i64 %indvars.iv, 16000
      %arrayidx8 = getelementptr inbounds %struct.GlobalData* @global_data, i64 0, i32 0, i64 %3
      ...

    and we must realize that the add will be a scalar in order to
    correctly deduce it to be profitable to vectorize this on PowerPC
    with VSX enabled.

    In fact, all dependencies of a consecutive pointer must be scalars
    (uniforms), and so we simply need to add all consecutive pointers
    to the worklist that currently collects the uniforms.

    Fixes PR19296.
    llvm-svn: 205387
* Implement X86TTI::getUnrollingPreferences (Hal Finkel, 2014-04-01; 1 file, -10/+10)
    This provides an initial implementation of getUnrollingPreferences
    for x86. getUnrollingPreferences is used by the generic
    (concatenation) unroller, which is distinct from the unrolling done
    by the loop vectorizer. Many modern x86 cores have some kind of uop
    cache and loop-stream detector (LSD) used to efficiently dispatch
    small loops, and taking full advantage of this requires unrolling
    small loops (small here means 10s of uops).

    These caches also have limits on the number of taken branches in
    the loop, and so we also cap the loop unrolling factor based on the
    maximum "depth" of the loop. This is currently calculated with a
    partial DFS traversal (partial because it will stop early if the
    path length grows too much). This is still an approximation, and
    one that is both conservative (because it does not account for
    branches eliminated via block placement) and optimistic (because it
    is only recording the maximum depth over minimum paths).
    Nevertheless, because the loops that fit in these uop caches are so
    small, it is not clear how much the details matter.

    The original set of patches posted for review produced the
    following test-suite performance results (from the TSVC benchmark)
    at that time:

      ControlLoops-dbl - 13% speedup
      ControlLoops-flt - 15% speedup
      Reductions-dbl   - 7.5% speedup

    llvm-svn: 205348
* Move partial/runtime unrolling late in the pipeline (Hal Finkel, 2014-03-31; 1 file, -1/+1)
    The generic (concatenation) loop unroller is currently placed early
    in the standard optimization pipeline. This is a good place to
    perform full unrolling, but not the right place to perform
    partial/runtime unrolling. However, most targets don't enable
    partial/runtime unrolling, so this never mattered.

    However, even some x86 cores benefit from partial/runtime unrolling
    of very small loops, and follow-up commits will enable this. First,
    we need to move partial/runtime unrolling late in the optimization
    pipeline (importantly, this is after SLP and loop vectorization, as
    vectorization can drastically change the size of a loop), while
    keeping the full unrolling where it is now. This change does just
    that.

    llvm-svn: 205264
* [X86] Adjust cost of FP_TO_UINT v4f64->v4i32 as well (Adam Nemet, 2014-03-31; 1 file, -0/+40)
    Pretty obvious follow-on to r205159 to also handle conversion from
    double besides float.
    Fixes <rdar://problem/16373208>
    llvm-svn: 205253
* [X86] Adjust cost of FP_TO_UINT v8f32->v8i32 (Adam Nemet, 2014-03-30; 1 file, -0/+39)
    There is no direct AVX instruction to convert to unsigned. I have
    some ideas how we may be able to do this with three vector
    instructions but the current backend just bails on this to get it
    scalarized. See the comment why we need to adjust the cost returned
    by BasicTTI.

    The test is a bit roundabout (and checks assembly rather than bit
    code) because I'd like it to work even if at some point we could
    vectorize this conversion.

    Fixes <rdar://problem/16371920>
    llvm-svn: 205159
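    For reference, the conversion being costed is simply:

      %res = fptoui <8 x float> %src to <8 x i32>

    AVX's cvttps2dq only produces signed results, so absent a clever
    multi-instruction expansion the operation gets scalarized, and the
    cost model should reflect that.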
* ARM64: initial backend import (Tim Northover, 2014-03-29; 2 files, -0/+91)
    This adds a second implementation of the AArch64 architecture to
    LLVM, accessible in parallel via the "arm64" triple. The plan over
    the coming weeks & months is to merge the two into a single
    backend, during which time thorough code review should naturally
    occur.

    Everything will be easier with the target in-tree though, hence
    this commit.

    llvm-svn: 205090
* [X86][Vectorizer Cost Model] Correct vectorization cost model for v2i64->v2f64 and v4i64->v4f64 (Quentin Colombet, 2014-03-27; 1 file, -0/+26)
    The new costs match what we did for SSE2 and reflect the reality of
    our codegen.
    <rdar://problem/16381225>
    llvm-svn: 204884
* Add 'requires asserts' to a test that needs it (Jim Grosbach, 2014-03-27; 1 file, -0/+1)
    llvm-svn: 204882
* X86: Correct vectorization cost model for v8f32->v8i8. (Jim Grosbach, 2014-03-27; 1 file, -0/+24)
    Fix the cost model to reflect the reality of our codegen.
    rdar://16370633
    llvm-svn: 204880
* LoopVectorizer: Preserve fast-math flags (Arnold Schwaighofer, 2014-03-05; 2 files, -1/+27)
    Fixes PR19045.
    llvm-svn: 203008
* LoopVectorizer: Keep track of conditional store basic blocks (Arnold Schwaighofer, 2014-02-08; 1 file, -0/+40)
    Before conditional store vectorization/unrolling we had only one
    vectorized/unrolled basic block. After adding support for
    conditional store vectorization this will not only be one block but
    multiple basic blocks. The last block would have the back-edge.

    I updated the code to use a vector of basic blocks instead of a
    single basic block and fixed the users to use the last entry in
    this vector. But, I forgot to add the basic blocks to this vector!

    Fixes PR18724.
    llvm-svn: 201028
* LoopVectorizer: Enable unrolling of conditional stores and the load/store unrolling heuristic by default (Arnold Schwaighofer, 2014-02-02; 1 file, -0/+3)
    Benchmarking on x86_64 (thanks Chandler!) and ARM has shown those
    options speed up some benchmarks while not causing any interesting
    regressions.
    llvm-svn: 200621
* ARMTTI: We don't have 16 allocatable scalar registers (Arnold Schwaighofer, 2014-02-01; 1 file, -0/+36)
    This caused a regression on libquantum after enabling the new loop
    vectorizer unroll heuristics.
    llvm-svn: 200616
* [vectorizer] Tweak the way we do small loop runtime unrolling in the loop vectorizer (Chandler Carruth, 2014-01-31; 1 file, -15/+42)
    Don't runtime unroll when runtime pointer checks are needed, and
    share code with the new (not yet enabled) load/store saturation
    runtime unrolling. Also ensure that we only consider the runtime
    checks when the loop hasn't already been vectorized. If it has, the
    runtime check cost has already been paid.

    I've fleshed out a test case to cover the scalar unrolling as well
    as the vector unrolling, and comment clearly why we are or aren't
    following the pattern.

    llvm-svn: 200530
* LoopVectorizer: Add a test case for unrolling of small loops that need a runtime check (Arnold Schwaighofer, 2014-01-29; 1 file, -0/+23)
    llvm-svn: 200408
* [vectorizer] Completely disable the block frequency guidance of the loop vectorizer, placing it behind an off-by-default flag (Chandler Carruth, 2014-01-28; 1 file, -1/+1)
    It turns out that block frequency isn't what we want at all, here
    or elsewhere. This has, I think, been a nagging feeling for several
    of us working with it, but Arnold has given some really nice simple
    examples where the results are so comprehensively wrong that they
    aren't useful.

    I'm planning to email the dev list with a summary of why it's not
    really useful and a couple of ideas about how to better structure
    these types of heuristics.

    llvm-svn: 200294
* LoopVectorize: Support conditional stores by scalarizing (Arnold Schwaighofer, 2014-01-28; 1 file, -0/+86)
    The vectorizer takes a loop like this and widens all instructions
    except for the store. The stores are scalarized/unrolled and hidden
    behind an "if" block.

      for (i = 0; i < 128; ++i) {
        if (a[i] < 10)
          a[i] += val;
      }

      for (i = 0; i < 128; i += 2) {
        v = a[i:i+1];
        v0 = (extract v, 0) + 10;
        v1 = (extract v, 1) + 10;
        if (v0 < 10)
          a[i] = v0;
        if (v1 < 10)
          a[i+1] = v1;
      }

    The vectorizer relies on subsequent optimizations to sink
    instructions into the conditional block where they are anticipated.

    The flag "vectorize-num-stores-pred" controls whether and how many
    stores to handle this way. Vectorization of conditional stores is
    disabled per default for now.

    This patch also adds a change to the heuristic when the flag
    "enable-loadstore-runtime-unroll" is enabled (off by default). It
    unrolls small loops until load/store ports are saturated. This
    heuristic uses TTI's getMaxUnrollFactor as a measure for load/store
    ports.

    I also added a second flag, -enable-cond-stores-vec. It will enable
    vectorization of conditional stores. But there is no cost model for
    vectorization of conditional stores in place yet, so this will not
    do much good at the moment.

    rdar://15892953

    Results for x86-64 -O3 -mavx +/- -mllvm
    -enable-loadstore-runtime-unroll -vectorize-num-stores-pred=1
    (before the BFI change):

      Performance Regressions:
        Benchmarks/Ptrdist/yacr2/yacr2  7.35% (maze3() is identical but 10% slower)
        Applications/siod/siod         2.18%

      Performance improvements:
        mesa        -4.42%
        libquantum  -4.15%

    With a patch that slightly changes the register heuristics (by
    subtracting the induction variable on both sides of the register
    pressure equation, as the induction variable is probably not really
    unrolled):

      Performance Regressions:
        Benchmarks/Ptrdist/yacr2/yacr2  7.73%
        Applications/siod/siod         1.97%

      Performance Improvements:
        libquantum  -13.05% (we now also unroll quantum_toffoli)
        mesa         -4.27%

    llvm-svn: 200270
* [vectorize] Initial version of respecting PGO in the vectorizer: treat cold loops as-if they were being optimized for size (Chandler Carruth, 2014-01-27; 1 file, -0/+25)
    Nothing fancy here. Simple test case included. The nice thing is
    that we can now incrementally build on top of this to drive other
    heuristics. All of the infrastructure work is done to get the
    profile information into this layer.

    The remaining work necessary to make this a fully general purpose
    loop unroller for very hot loops is to make it a fully general
    purpose loop unroller. Things I know of but am not going to have
    time to benchmark and fix in the immediate future:

    1) Don't disable the entire pass when the target is lacking vector
       registers. This really doesn't make any sense any more.
    2) Teach the unroller at least, and the vectorizer potentially, to
       handle non-if-converted loops. This is trivial for the unroller
       but hard for the vectorizer.
    3) Compute the relative hotness of the loop and thread that down to
       the various places that make cost tradeoffs (very likely only
       the unroller makes sense here, and then only when dealing with
       loops that are small enough for unrolling to not completely blow
       out the LSD).

    I'm still dubious how useful hotness information will be. So far,
    my experiments show that if we can get the correct logic for
    determining when unrolling actually helps performance, the code
    size impact is completely unimportant and we can unroll in all
    cases. But at least we'll no longer burn code size on cold code.

    One somewhat unrelated idea that I've had forever but not had time
    to implement: mark all functions which are only reachable via the
    global constructors rigging in the module as optsize. This would
    also decrease the impact of any more aggressive heuristics here on
    code size.

    llvm-svn: 200219
* [vectorizer] Add an override for the target instruction cost and use it to stabilize a test (Chandler Carruth, 2014-01-27; 1 file, -1/+1)
    The test really is trying to test generic behavior and not a
    specific target's behavior.
    llvm-svn: 200215
* [vectorizer] Teach the loop vectorizer's unroller to only unroll by powers of two (Chandler Carruth, 2014-01-27; 1 file, -1/+11)
    This is essentially always the correct thing given the impact on
    alignment, scaling factors that can be used in addressing modes,
    etc. Also, fix the management of the unroll vs. small loop cost to
    more accurately model things with this world.

    Enhance a test case to actually exercise more of the unroll
    machinery if using synthetic constants rather than a specific
    target model. Before this change, with the added flags this test
    will unroll 3 times instead of either 2 or 4 (the two sensible
    answers).

    While I don't expect this to make a huge difference, if there are
    lots of loops sitting right on the edge of hitting the 'small
    unroll' factor, they might change behavior. However, I've
    benchmarked moving the small loop cost up and down in many various
    ways and by a huge factor (2x) without seeing more than 0.2% code
    size growth. Small adjustments such as the series that led up here
    have led to about 1% improvement on some benchmarks, but it is very
    close to the noise floor so I mostly checked that nothing
    regressed. Let me know if you see bad behavior on other targets but
    I don't expect this to be a sufficiently dramatic change to trigger
    anything.

    llvm-svn: 200213
* Fix known typos (Alp Toker, 2014-01-24; 1 file, -1/+1)
    Sweep the codebase for common typos. Includes some changes to
    visible function names that were misspelt.
    llvm-svn: 200018
* InstCombine: Teach most integer add/sub/mul/div combines how to deal with vectors (Benjamin Kramer, 2014-01-19; 1 file, -4/+4)
    llvm-svn: 199602
* LoopVectorizer: A reduction that has multiple uses of the reduction value is not a reduction (Arnold Schwaighofer, 2014-01-19; 1 file, -0/+42)
    Really. Under certain circumstances (the use list of an instruction
    has to be set up right; hence the extra pass in the test case) we
    would not recognize when a value in a potential reduction cycle was
    used multiple times by the reduction cycle.

    Fixes PR18526.
    radar://15851149
    llvm-svn: 199570
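    A sketch of the shape that must be rejected (names are
    illustrative): the reduction value %sum feeds the cycle twice, so
    the cycle is not a simple reduction:

      loop:
        %sum = phi i32 [ 0, %entry ], [ %sum.next, %loop ]
        %t = add i32 %sum, %val
        %sum.next = add i32 %t, %sum   ; %sum used a second time in the cycle
        ...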
* LoopVectorize: Only strip casts from integer types when replacing symbolic strides (Arnold Schwaighofer, 2014-01-15; 1 file, -0/+37)
    Fixes PR18480.
    llvm-svn: 199291
* Fix broken CHECK lines. (Benjamin Kramer, 2014-01-11; 1 file, -2/+2)
    llvm-svn: 199016
* LoopVectorizer: Handle strided memory accesses by versioning (Arnold Schwaighofer, 2014-01-10; 2 files, -5/+57)
      for (i = 0; i < N; ++i)
        A[i * Stride1] += B[i * Stride2];

    We take loops like this and check that the symbolic strides
    'Stride1/2' are one, and drop to the scalar loop if they are not.

    This is currently disabled by default and hidden behind the flag
    'enable-mem-access-versioning'.

    radar://13075509
    llvm-svn: 198950
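    Conceptually, the versioned loop is guarded by a runtime test on
    the symbolic stride; a sketch (block names are illustrative):

      %stride.check = icmp eq i64 %Stride1, 1
      br i1 %stride.check, label %vector.ph, label %scalar.ph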
* LoopVectorizer: Don't if-convert constant expressions that can trap (Arnold Schwaighofer, 2013-12-17; 1 file, -0/+63)
    A phi node operand or an instruction operand could be a constant
    expression that can trap (division). Check that we don't vectorize
    such cases.

    PR16729
    radar://15653590
    llvm-svn: 197449
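    An example of the kind of operand that must block if-conversion: a
    constant expression whose evaluation may trap because the divisor
    is not known to be non-zero (sketch; names are illustrative):

      @g = common global i32 0
      ...
      %phi = phi i32 [ sdiv (i32 5, i32 ptrtoint (i32* @g to i32)), %then ], [ 0, %entry ]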
* Force vector width via CPU selection in the vectorizer metadata-enable test (Renato Golin, 2013-12-07; 1 file, -11/+11)
    llvm-svn: 196669
* Move test to X86 dir (Renato Golin, 2013-12-05; 1 file, -0/+0)
    Test is platform independent, but I don't want to force
    vector-width, or that could spoil the pragma test.
    llvm-svn: 196539
* Add #pragma vectorize enable/disable to LLVM (Renato Golin, 2013-12-05; 1 file, -0/+175)
    The intended behaviour is to force vectorization on the presence of
    the flag (either turn on or off), and to continue the behaviour as
    expected in its absence. Tests were added to make sure all cases
    are covered in opt. No tests were added in other tools with the
    assumption that they should use the PassManagerBuilder in the same
    way.

    This patch also removes the outdated -late-vectorize flag, which
    was on by default and not helping much.

    The pragma metadata is being attached to the same place as other
    loop metadata, but nothing forbids one from attaching it to a
    function (to enable #pragma optimize) or basic blocks (to hint the
    basic-block vectorizers), etc. The logic should be the same all
    around.

    Patches to Clang to produce the metadata will be produced after the
    initial implementation is agreed upon and committed. Patches to
    other vectorizers (such as SLP and BB) will be added once we're
    happy with the pass manager changes.

    llvm-svn: 196537
* Correct word hyphenations (Alp Toker, 2013-12-05; 1 file, -1/+1)
    This patch tries to avoid unrelated changes other than fixing a few
    hyphen-related ambiguities and contractions in nearby lines.
    llvm-svn: 196471
* opt: Mirror vectorization presets of clang (Arnold Schwaighofer, 2013-12-03; 1 file, -0/+28)
    clang enables vectorization at optimization levels > 1 and size
    level < 2. opt should behave similarly. Loop vectorization and SLP
    vectorization can be disabled with the flags
    -disable-loop-vectorization and -disable-slp-vectorization.
    llvm-svn: 196294
* LoopVectorizer: Truncate i64 trip counts of i32 phis if necessary (Arnold Schwaighofer, 2013-11-26; 1 file, -0/+39)
    In signed arithmetic we could end up with an i64 trip count for an
    i32 phi. Because it is signed arithmetic, we know that this is only
    defined if the i32 does not wrap. It is therefore safe to truncate
    the i64 trip count to an i32 value.

    Fixes PR18049.
    llvm-svn: 195787
* Debug Info: update testing cases to specify the debug info version number. (Manman Ren, 2013-11-22; 2 files, -1/+4)
    We are going to drop debug info without a version number or with a
    different version number, to make sure we don't crash when we see
    bitcode files with a different debug info metadata format.
    llvm-svn: 195504
* SLPVectorizer: Fix stale Value pointer array (Arnold Schwaighofer, 2013-11-19; 1 file, -0/+33)
    We slice an array of Value pointers and process those slices in a
    loop. The problem is that we might invalidate a later slice by
    vectorizing a former slice. Use a WeakVH to track the pointer. If
    the pointer is deleted or RAUW'ed, we can tell.

    The test case will only fail when running with libgmalloc.

    radar://15498655
    llvm-svn: 195162
* LoopVectorizer: Extend the induction variable to a larger type (Arnold Schwaighofer, 2013-11-18; 1 file, -0/+42)
    In some cases the loop exit count computation can overflow. Extend
    the type to prevent most of those cases.

    The problem is loops like:

      int main() {
        int a = 1;
        char b = 0;
      lbl:
        a &= 4;
        b--;
        if (b)
          goto lbl;
        return a;
      }

    The backedge count is 255. The induction variable type is i8. If we
    add one to 255 to get the exit count we overflow to zero.

    To work around this issue we extend the type of the induction
    variable to i32 in the case of i8 and i16.

    PR17532
    llvm-svn: 195008
* LoopVectorizer: Use ABI alignment for accesses with no alignment (Arnold Schwaighofer, 2013-11-15; 1 file, -0/+33)
    When we vectorize a scalar access with no alignment specified, we
    have to set the target's ABI alignment of the scalar access on the
    vectorized access. Using the same alignment of zero would be wrong
    because most targets will have a bigger ABI alignment for vector
    types.

    This probably fixes PR17878.
    llvm-svn: 194876
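    A sketch of the rule (alignment values assume a typical target
    where i32 has ABI alignment 4):

      ; before vectorization: no alignment given, so the ABI alignment
      ; of i32 (4) is implied
      %v = load i32* %p

      ; after vectorization: keep the scalar type's ABI alignment, not
      ; align 0 and not the (larger) ABI alignment of <4 x i32>
      %vv = load <4 x i32>* %q, align 4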
* Scalarize select vector arguments when extracted. (Matt Arsenault, 2013-11-04; 1 file, -29/+29)
    When the elements are extracted from a select on vectors or a
    vector select, do the select on the extracted scalars from the
    input if there is only one use.
    llvm-svn: 194013
* LoopVectorizer: Perform redundancy elimination on induction variables (Arnold Schwaighofer, 2013-11-01; 3 files, -5/+42)
    When the loop vectorizer was part of the SCC inliner pass manager,
    gvn would run after the loop vectorizer, followed by instcombine.
    This way redundancy (multiple uses) was removed and instcombine
    could perform scalarization on the induction variables. Having
    moved the loop vectorizer later, we no longer run any form of
    redundancy elimination before we perform instcombine. This caused
    vectorized induction variables to survive that would not have
    before.

    On a recent iMac this helps linpack back from 6000 Mflops to
    7000 Mflops. This should also help lpbench and paq8p.

    I ran a Release (without Asserts) build over the test-suite and did
    not see any negative impact on compile time.

    radar://15339680
    llvm-svn: 193891
* LoopVectorize: Look for consecutive accesses in GEPs with trailing zero indices (Benjamin Kramer, 2013-11-01; 1 file, -0/+42)
    If we have a pointer to a single-element struct we can still build
    wide loads and stores to it (if there is no padding).
    llvm-svn: 193860
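    For example (sketch; names are illustrative), the trailing zero
    index selects the struct's only field and does not change the
    stride, so the accesses are still consecutive in %i:

      %struct.S = type { float }
      ...
      %gep = getelementptr inbounds %struct.S* %base, i64 %i, i32 0
      %v = load float* %gep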
* LoopVectorizer: If dependency checks fail, try runtime checks (Arnold Schwaighofer, 2013-11-01; 1 file, -0/+28)
    When a dependence check fails we can still try to vectorize loops
    with runtime array bounds checks.

    This helps linpack to vectorize a loop in dgefa. And we are back to
    2x of the scalar performance on a corei7-avx.

    radar://15339680
    llvm-svn: 193853
* ARM cost model: Unaligned vectorized double stores are expensive (Arnold Schwaighofer, 2013-10-29; 1 file, -9/+9)
    Updated a test case that assumed that <2 x double> would vectorize
    to use <4 x float>.
    radar://15338229
    llvm-svn: 193574
* LoopVectorizer: Don't attempt to vectorize extractelement instructions (Hal Finkel, 2013-10-25; 1 file, -0/+35)
    The loop vectorizer does not currently understand how to vectorize
    extractelement instructions. The existing check, which excluded all
    vector-valued instructions, did not catch extractelement
    instructions because it checked only the return value. As a result,
    vectorization would proceed, producing illegal instructions like
    this:

      %58 = extractelement <2 x i32> %15, i32 0
      %59 = extractelement i32 %58, i32 0

    where the second extractelement is illegal because its first
    operand is not a vector.

    llvm-svn: 193434
* Remove the original copy of the test moved to the X86 directory in r193351 (Renato Golin, 2013-10-24; 1 file, -46/+0)
    llvm-svn: 193355
* Fix broken builds by moving test to X86 dir (Renato Golin, 2013-10-24; 1 file, -0/+46)
    llvm-svn: 193351