path: root/llvm/lib/Transforms/Vectorize
Commit message (Author, Date, Files, Lines changed)
* SLPVectorizer: Make it a function pass and add code for hoisting the vector-gather sequence out of loops. (Nadav Rotem, 2013-04-15, 3 files, -159/+254)
  llvm-svn: 179562
* SLPVectorizer: Add support for vectorizing trees that start at compare instructions. (Nadav Rotem, 2013-04-15, 1 file, -21/+40)
  llvm-svn: 179504
* Miscellaneous cleanups for VecUtils.h (Benjamin Kramer, 2013-04-14, 1 file, -9/+6)
  llvm-svn: 179483
* SLP: Document the scalarization cost method. (Nadav Rotem, 2013-04-14, 1 file, -3/+10)
  llvm-svn: 179479
* SLPVectorizer: Add support for trees that don't start at binary operators, and add the cost of extracting values from the roots of the tree. (Nadav Rotem, 2013-04-14, 3 files, -7/+25)
  llvm-svn: 179475
* SLPVectorizer: add initial support for reduction variable vectorization. (Nadav Rotem, 2013-04-14, 3 files, -7/+95)
  llvm-svn: 179470
* SLPVectorizer: add support for vectorization of diamond shaped trees. (Nadav Rotem, 2013-04-12, 2 files, -46/+254)
  We now perform a preliminary traversal of the graph to collect values with multiple users and check where the users came from.
  llvm-svn: 179414
* Add debug prints. (Nadav Rotem, 2013-04-12, 1 file, -1/+5)
  llvm-svn: 179412
* LoopVectorizer: integer division is not a reduction operation (Arnold Schwaighofer, 2013-04-12, 1 file, -2/+0)
  Don't classify idiv/udiv as a reduction operation. Integer division is lossy. For example: (1 / 2) * 4 != 4 / 2.

  Example:
    int a[] = { 2, 5, 2, 2 };
    int x = 80;
    for (...)
      x /= a[i];

  Scalar:
    x /= 2  // = 40
    x /= 5  // = 8
    x /= 2  // = 4
    x /= 2  // = 2

  Vectorized:
    <80, 1> / <2, 5>  // = <40, 0>
    <40, 0> / <2, 2>  // = <20, 0>
    20 * 0 = 0

  radar://13640654
  llvm-svn: 179381
* Rename the C function to create a SLPVectorizerPass to something sane and expose it in the header file. (Benjamin Kramer, 2013-04-11, 1 file, -2/+2)
  llvm-svn: 179272
* Make the SLP store-merger less paranoid about function calls. (Nadav Rotem, 2013-04-10, 1 file, -4/+0)
  We check for function calls when we check if it is safe to sink instructions.
  llvm-svn: 179207
* We require DataLayout for analyzing the size of stores. (Nadav Rotem, 2013-04-10, 2 files, -1/+6)
  llvm-svn: 179206
* Add support for bottom-up SLP vectorization infrastructure. (Nadav Rotem, 2013-04-09, 5 files, -0/+707)
  This commit adds the infrastructure for performing bottom-up SLP vectorization (and other optimizations) on parallel computations. The infrastructure has three potential users:

    1. The loop vectorizer needs to be able to vectorize AOS data structures such as (sum += A[i] + A[i+1]).
    2. The BB-vectorizer needs this infrastructure for bottom-up SLP vectorization, because bottom-up vectorization is faster to compute.
    3. A loop-roller needs to be able to analyze consecutive chains and roll them into a loop, in order to reduce code size. A loop roller does not need to create vector instructions, and this infrastructure separates the chain analysis from the vectorization.

  This patch also includes a simple (100 LOC) bottom-up SLP vectorizer that uses the infrastructure, and can vectorize this code:

    void SAXPY(int *x, int *y, int a, int i) {
      x[i]   = a * x[i]   + y[i];
      x[i+1] = a * x[i+1] + y[i+1];
      x[i+2] = a * x[i+2] + y[i+2];
      x[i+3] = a * x[i+3] + y[i+3];
    }

  llvm-svn: 179117
* LoopVectorizer: Pass OperandValueKind information to the cost model (Arnold Schwaighofer, 2013-04-04, 1 file, -2/+13)
  Pass down the fact that an operand is going to be a vector of constants.

  This should bring the performance of MultiSource/Benchmarks/PAQ8p/paq8p on x86 back. It had degraded to scalar performance due to my previous shift cost change that made all shifts expensive on x86.

  radar://13576547
  llvm-svn: 178809
* LoopVectorize: Invert case when we use a vector cmp value to query select cost (Arnold Schwaighofer, 2013-03-14, 1 file, -1/+1)
  We generate a select with a vectorized condition argument when the condition is NOT loop invariant. Not the other way around.
  llvm-svn: 177098
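  For illustration (not from the commit), a minimal C sketch of the two cases the select cost query distinguishes. The function and variable names are made up.

    /* Illustrative only. */
    void select_examples(int *out, const int *a, const int *b, int t, int n) {
      /* The condition varies per iteration, so the vectorized loop needs a
         vector compare feeding the select; the select cost should be queried
         with a vector condition type. */
      for (int i = 0; i < n; ++i)
        out[i] = (a[i] > b[i]) ? a[i] : b[i];

      /* The condition is loop invariant, so the compare stays scalar even
         though the select operands are widened. */
      for (int i = 0; i < n; ++i)
        out[i] = (t > 0) ? a[i] : b[i];
    }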
* BBVectorize: Fixup debugging statements (Hal Finkel, 2013-03-10, 1 file, -2/+2)
  After the recent data-structure improvements, a couple of debugging statements were broken (printing pointer values).
  llvm-svn: 176791
* Remove a source of nondeterminism from the LoopVectorizer. (Benjamin Kramer, 2013-03-09, 1 file, -1/+1)
  This made us emit runtime checks in a random order. Hopefully bootstrap miscompares will go away now.
  llvm-svn: 176775
* LoopVectorizer: Ignore all dbg intrinsics (Arnold Schwaighofer, 2013-03-09, 1 file, -6/+6)
  Ignore all DbgInfoIntrinsic instructions instead of just DbgValueInst.
  llvm-svn: 176769
* LoopVectorizer: Ignore dbg.value instructions (Arnold Schwaighofer, 2013-03-09, 1 file, -2/+11)
  We want vectorization to happen at -g. Ignore calls to the dbg.value intrinsic and don't transfer them to the vectorized code.
  radar://13378964
  llvm-svn: 176768
* Insert the reduction start value into the first bypass block to preserve domination. (Benjamin Kramer, 2013-03-08, 1 file, -1/+1)
  Fixes PR15344.
  llvm-svn: 176701
* PR14448 - prevent the loop vectorizer from vectorizing the same loop twice. (Nadav Rotem, 2013-03-02, 1 file, -0/+18)
  The LoopVectorizer often runs multiple times on the same function due to inlining. When this happens, it often vectorizes the same loops multiple times, increasing code size and adding unneeded branches. With this patch, the vectorizer puts metadata on scalar loops during vectorization and marks them as 'already vectorized' so that it knows to ignore them when it sees them a second time. PR14448.
  llvm-svn: 176399
* LoopVectorize: Don't hang forever if a PHI only has skipped PHI uses. (Benjamin Kramer, 2013-03-01, 1 file, -1/+8)
  Fixes PR15384.
  llvm-svn: 176366
* LoopVectorize: Vectorize math builtin calls. (Benjamin Kramer, 2013-02-27, 1 file, -50/+137)
  This properly asks TargetLibraryInfo whether a call is available and, if it is, translates it into the corresponding LLVM builtin. We don't vectorize sqrt() yet because I'm not sure about the semantics for negative numbers. The other intrinsics should be exact equivalents to the libm functions.

  Differential Revision: http://llvm-reviews.chandlerc.com/D465
  llvm-svn: 176188
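  For illustration (not from the commit), a minimal C loop of the kind this change enables. The libm call is assumed to be one of the handled functions; when TargetLibraryInfo reports it as available it can be mapped to the matching LLVM intrinsic and widened along with the rest of the loop body.

    #include <math.h>

    /* Illustrative only. fabsf() is used on the assumption that it is in the
       handled set; it can be lowered to the llvm.fabs intrinsic and vectorized
       instead of blocking vectorization as an opaque call. */
    void abs_all(float *out, const float *in, int n) {
      for (int i = 0; i < n; ++i)
        out[i] = fabsf(in[i]);
    }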
* Allow GlobalValues to vectorize with AliasAnalysis (Renato Golin, 2013-02-21, 1 file, -35/+154)
  Store the load/store instructions together with their values and inspect them using Alias Analysis to make sure they don't alias, since the GEP pointer operand doesn't take the offset into account.

  We try hard not to add any extra cost to loads and stores that don't overlap on global values; AA is *only* consulted if all of the previous attempts failed.

  We use the biggest vector register size as the stride for the vectorization access, since we're being conservative and the cost model (which calculates the real vectorization factor) is only run after the legalization phase. We might re-think this relationship in the future, but for now, I'd rather be safe than sorry.
  llvm-svn: 175818
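  A minimal C sketch (not from the commit) of the pattern this targets: loads and stores of distinct global arrays, where alias analysis can prove the accesses never overlap even though the raw GEP pointer operands alone don't show it. The array names are made up.

    /* Illustrative only. */
    int g_src[1024];
    int g_dst[1024];

    void copy_scaled(int n) {
      /* g_src and g_dst are distinct globals, so AA can prove the load and
         the store never touch the same memory and the loop is safe to
         vectorize. */
      for (int i = 0; i < n; ++i)
        g_dst[i] = 2 * g_src[i];
    }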
* BBVectorize: Fix an invalid reference bug (Hal Finkel, 2013-02-17, 1 file, -4/+7)
  This fixes PR15289. This bug was introduced (recently) in r175215; collecting all std::vector references for candidate pairs to delete at once is invalid because subsequent lookups in the owning DenseMap could invalidate the references. bugpoint was able to reduce a useful test case.

  Unfortunately, because whether or not this asserts depends on memory layout, this test case will sometimes appear to produce valid output. Nevertheless, running under valgrind will reveal the error.
  llvm-svn: 175397
* BBVectorize: Call a DAG a DAG instead of a tree (Hal Finkel, 2013-02-15, 1 file, -84/+84)
  Several functions and variable names used the term 'tree' to refer to what is actually a DAG. Correcting this mistake will, hopefully, prevent confusion in the future.

  No functionality change intended.
  llvm-svn: 175278
* BBVectorize: Cap the number of candidate pairs in each instruction group (Hal Finkel, 2013-02-15, 1 file, -1/+9)
  For some basic blocks, it is possible to generate many candidate pairs for relatively few pairable instructions. When many (tens of thousands) of these pairs are generated for a single instruction group, the time taken to generate and rank the different vectorization plans can become quite large. As a result, we now cap the number of candidate pairs within each instruction group. This is done by closing out the group once the threshold is reached (set now at 3000 pairs).

  Although this will limit the overall compile-time impact, this may not be the best way to achieve this result. It might be better, for example, to prune excessive candidate pairs after the fact to prevent the generation of short, but highly-connected groups. We can experiment with this in the future.

  This change reduces the overall compile-time slowdown of the csa.ll test case in PR15222 to ~5x. If 5x is still considered too large, a lower limit can be used as the default.

  This represents a functionality change, but only for very large inputs (thus, there is no regression test).
  llvm-svn: 175251
* BBVectorize: Remove the remaining instances of std::multimap (Hal Finkel, 2013-02-14, 1 file, -231/+256)
  All instances of std::multimap have now been replaced by DenseMap<K, std::vector<V> >, and this yields a speedup of 5% on the csa.ll test case from PR15222.

  No functionality change intended.
  llvm-svn: 175216
* BBVectorize: Don't store candidate pairs in a std::multimap (Hal Finkel, 2013-02-14, 1 file, -60/+92)
  This is another commit on the road to removing std::multimap from BBVectorize. This gives an ~1% speedup on the csa.ll test case in PR15222.

  No functionality change intended.
  llvm-svn: 175215
* LoopVectorize: Simplify code for clarity. (Benjamin Kramer, 2013-02-13, 1 file, -10/+8)
  No functionality change.
  llvm-svn: 175076
* Metadata for annotating loops as parallel. (Pekka Jaaskelainen, 2013-02-13, 1 file, -0/+8)
  The first consumer for this metadata is the loop vectorizer. See the documentation update for more info.
  llvm-svn: 175060
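  An illustrative C loop (not from the commit) showing the property the parallel-loop annotation asserts: no iteration touches memory written by another iteration, so the vectorizer may treat the iterations as independent. The annotation itself is produced by a front end or an earlier pass; the loop below is simply one that could legally carry it.

    /* Illustrative only. */
    void saxpy(float *restrict y, const float *restrict x, float a, int n) {
      /* Each iteration reads and writes disjoint elements, so a front end that
         knows the loop is parallel could attach the parallel-loop metadata and
         the vectorizer would not need its own dependence checks. */
      for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
    }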
* BBVectorize: Don't over-search when building the dependency map (Hal Finkel, 2013-02-11, 1 file, -2/+10)
  When building the pairable-instruction dependency map, don't search past the last pairable instruction. For large blocks that have been divided into multiple instruction groups, searching past the last instruction in each group is very wasteful. This gives a 32% speedup on the csa.ll test case from PR15222 (when using 50 instructions in each group).

  No functionality change intended.
  llvm-svn: 174915
* BBVectorize: Omit unnecessary entries in PairableInstUsers (Hal Finkel, 2013-02-11, 1 file, -1/+3)
  This map is queried only for instructions in pairs of pairable instructions; so make sure that only pairs of pairable instructions are added to the map. This gives a 3.5% speedup on the csa.ll test case from PR15222.

  No functionality change intended.
  llvm-svn: 174914
* BBVectorize: Eliminate one more restricted linear search (Hal Finkel, 2013-02-11, 1 file, -27/+31)
  This eliminates one more linear search over a range of std::multimap entries. This gives a 22% speedup on the csa.ll test case from PR15222.

  No functionality change intended.
  llvm-svn: 174893
* BBVectorize: Remove the linear searches from pair connection searching (Hal Finkel, 2013-02-11, 1 file, -24/+11)
  This removes the last of the linear searches over ranges of std::multimap iterators, giving a 7% speedup on the doduc.bc input from PR15222.

  No functionality change intended.
  llvm-svn: 174859
* BBVectorize: Avoid linear searches within the load-move set (Hal Finkel, 2013-02-11, 1 file, -20/+30)
  This is another cleanup aimed at eliminating linear searches in ranges of std::multimap.

  No functionality change intended.
  llvm-svn: 174858
* BBVectorize: isa/cast cleanup in getInstructionTypes (Hal Finkel, 2013-02-11, 1 file, -4/+4)
  Profiling suggests that getInstructionTypes is performance-sensitive, this cleans up some double-casting in that function in favor of using dyn_cast.

  No functionality change intended.
  llvm-svn: 174857
* BBVectorize: Make the bookkeeping to support full cycle checking less expensive (Hal Finkel, 2013-02-11, 1 file, -14/+25)
  By itself, this does not have much of an effect, but only because in the default configuration the full cycle checks are used only for small problem sizes. This is part of a general cleanup of uses of iteration over std::multimap ranges only for the purpose of checking membership.

  No functionality change intended.
  llvm-svn: 174856
* BBVectorize: Use TTI->getAddressComputationCost (Hal Finkel, 2013-02-08, 1 file, -0/+5)
  This is a follow-up to the cost-model change in r174713 which splits the cost of a memory operation between the address computation and the actual memory access. In r174713, this cost is always added to the memory operation cost, and so BBVectorize will do the same.

  Currently, this new cost function is used only by ARM, and I don't have any ARM test cases for BBVectorize. Assistance in generating some good ARM test cases for BBVectorize would be greatly appreciated!
  llvm-svn: 174743
* Typos. (Jakob Stoklund Olesen, 2013-02-08, 1 file, -4/+4)
  llvm-svn: 174723
* ARM cost model: Address computation in vector mem ops not free (Arnold Schwaighofer, 2013-02-08, 1 file, -8/+14)
  Adds a function to target transform info to query for the cost of address computation. The cost model analysis pass now also queries this interface. The code in LoopVectorize adds the cost of address computation as part of the memory instruction cost calculation. Only there do we know whether the instruction will be scalarized or not.

  Increase the penalty for inserting into D registers on Swift. This becomes necessary because we now always assume that address computation has a cost, and three is a value closer to the real architecture.

  radar://13097204
  llvm-svn: 174713
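  A toy C sketch (not the actual LoopVectorize code or TTI interface) of the costing idea described above: a memory instruction's cost is modeled as the address computation cost plus the access cost, and a scalarized access pays the address cost once per lane. Names and structure are invented for illustration.

    /* Illustrative only. */
    unsigned memory_op_cost(unsigned addr_cost, unsigned access_cost,
                            unsigned vf, int scalarized) {
      if (scalarized)                  /* each of the vf lanes computes its own
                                          address and performs its own access */
        return vf * (addr_cost + access_cost);
      return addr_cost + access_cost;  /* one wide access pays the cost once */
    }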
* Test Commit (Michael Kuperstein, 2013-02-08, 1 file, -1/+1)
  llvm-svn: 174709
* fix 80-col violation and fix the docs. (Nadav Rotem, 2013-02-07, 1 file, -3/+7)
  llvm-svn: 174671
* Loop Vectorizer: Refactor Memory Cost Computation (Arnold Schwaighofer, 2013-02-07, 1 file, -180/+52)
  We don't want too many classes in a pass and the classes obscure the details. I was going a little overboard with object modeling here. Replace classes by generic code that handles both loads and stores.

  No functionality change intended.
  llvm-svn: 174646
* Loop Vectorizer: Refactor code to compute vectorized memory instruction cost (Arnold Schwaighofer, 2013-02-05, 1 file, -79/+178)
  Introduce a helper class that computes the cost of memory access instructions. No functionality change intended.
  llvm-svn: 174422
* Loop Vectorizer: Handle pointer stores/loads in getWidestType() (Arnold Schwaighofer, 2013-02-05, 1 file, -9/+31)
  In the loop vectorizer cost model, we used to ignore stores/loads of a pointer type when computing the widest type within a loop. This meant that if we had only stores/loads of pointers in a loop we would return a widest type of 8 bits (instead of 32 or 64 bits) and therefore a vector factor that was too big.

  Now, if we see a consecutive store/load of pointers we use the size of a pointer (from data layout).

  This problem occurred in SingleSource/Benchmarks/Shootout-C++/hash.cpp (the reduced test case is the first test in vector_ptr_load_store.ll).

  radar://13139343
  llvm-svn: 174377
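  A minimal C sketch (not the hash.cpp source) of the pattern described above: a loop whose only memory traffic is loads and stores of pointer values, so the widest type seen in the loop should be pointer-sized rather than defaulting to 8 bits. Names are made up.

    /* Illustrative only. */
    void copy_ptrs(int **dst, int **src, int n) {
      /* The loads and stores move pointer values, so the cost model should use
         the pointer size from DataLayout (typically 32 or 64 bits) as the
         element width, not 8 bits. */
      for (int i = 0; i < n; ++i)
        dst[i] = src[i];
    }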
* LoopVectorize: convert TinyTripCountVectorThreshold constant to a command line switch. (Pekka Jaaskelainen, 2013-01-29, 1 file, -1/+3)
  llvm-svn: 173837
* LoopVectorize: Clean up ValueMap a bit and avoid double lookups. (Benjamin Kramer, 2013-01-29, 1 file, -10/+12)
  No intended functionality change.
  llvm-svn: 173809
* Vectorization Factor clarification (Renato Golin, 2013-01-28, 1 file, -17/+24)
  llvm-svn: 173691
* BBVectorize: Better use of TTI->getShuffleCost (Hal Finkel, 2013-01-27, 1 file, -4/+23)
  When flipping the pair of subvectors that form a vector, if the vector length is 2, we can use the SK_Reverse shuffle kind to get more-accurate cost information. Also we can use the SK_ExtractSubvector shuffle kind to get accurate subvector extraction costs.

  The current cost model implementations don't yet seem complex enough for this to make a difference (thus, there are no test cases with this commit), but it should help in future.

  Depending on how the various targets optimize and combine shuffles in practice, we might be able to get more-accurate costs by combining the costs of multiple shuffle kinds. For example, the cost of flipping the subvector pairs could be modeled as two extractions and two subvector insertions. These changes, however, should probably be motivated by specific test cases.
  llvm-svn: 173621