path: root/llvm/lib/Target/X86/X86TargetTransformInfo.cpp
Commit message (Author, Date; Files changed, -Lines/+Lines)
* Masked Load/Store - Changed the order of parameters in intrinsics. (Elena Demikhovsky, 2014-12-25; 1 file, -3/+3)
  No functional changes. The documentation is coming.
  llvm-svn: 224829
* Loop Vectorizer minor changes in the code - some comments, function names, indentation. (Elena Demikhovsky, 2014-12-14; 1 file, -5/+5)
  Reviewed here: http://reviews.llvm.org/D6527
  llvm-svn: 224218
* Masked Load / Store Intrinsics - the CodeGen part. (Elena Demikhovsky, 2014-12-04; 1 file, -0/+18)
  I'm recommitting the codegen part of the patch. The vectorizer part will be sent to review again.

  Masked Vector Load and Store Intrinsics.
  Introduced new target-independent intrinsics in order to support masked vector loads and stores. The loop vectorizer optimizes loops containing conditional memory accesses by generating these intrinsics for existing targets AVX2 and AVX-512. The vectorizer asks the target about availability of masked vector loads and stores. Added SDNodes for masked operations and lowering patterns for the X86 code generator.

  Examples:
    declare <16 x i32> @llvm.masked.load.v16i32(i8* %addr, <16 x i32> %passthru, i32 4 /* align */, <16 x i1> %mask)
    declare void @llvm.masked.store.v8f64(i8* %addr, <8 x double> %value, i32 4, <8 x i1> %mask)

  Scalarizer for other targets (not AVX2/AVX-512) will be done in a separate patch.

  http://reviews.llvm.org/D6191
  llvm-svn: 223348
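  As an illustration (not from the commit; all names are hypothetical), this is the kind of conditional-memory-access loop the vectorizer can now emit these intrinsics for, assuming AVX2 or AVX-512 is available:

    // Lanes where trigger[i] > 0 form the mask: the load of b[i] becomes
    // an llvm.masked.load and the store to a[i] an llvm.masked.store,
    // instead of a branch around the memory accesses.
    void foo(int *a, const int *b, const int *trigger, int n) {
      for (int i = 0; i < n; ++i)
        if (trigger[i] > 0)
          a[i] = b[i] + 1;
    }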
* [X86] Clean up whitespace as well as minor coding style (Michael Liao, 2014-12-04; 1 file, -16/+16)
  llvm-svn: 223339
* Revert "Masked Vector Load and Store Intrinsics."Duncan P. N. Exon Smith2014-11-281-18/+0
  This reverts commit r222632 (and follow-up r222636), which caused a host of LNT failures on an internal bot. I'll respond to the commit on the list with a reproduction of one of the failures.

  Conflicts:
    lib/Target/X86/X86TargetTransformInfo.cpp
  llvm-svn: 222936
* Add missing override keywords. (Craig Topper, 2014-11-23; 1 file, -2/+2)
  llvm-svn: 222634
* Masked Vector Load and Store Intrinsics. (Elena Demikhovsky, 2014-11-23; 1 file, -0/+18)
  Introduced new target-independent intrinsics in order to support masked vector loads and stores. The loop vectorizer optimizes loops containing conditional memory accesses by generating these intrinsics for existing targets AVX2 and AVX-512. The vectorizer asks the target about availability of masked vector loads and stores. Added SDNodes for masked operations and lowering patterns for the X86 code generator.

  Examples:
    declare <16 x i32> @llvm.masked.load.v16i32(i8* %addr, <16 x i32> %passthru, i32 4 /* align */, <16 x i1> %mask)
    declare void @llvm.masked.store.v8f64(i8* %addr, <8 x double> %value, i32 4, <8 x i1> %mask)

  Scalarizer for other targets (not AVX2/AVX-512) will be done in a separate patch.

  http://reviews.llvm.org/D6191
  llvm-svn: 222632
* AVX-512: SINT_TO_FP cost model and some bugfixes (Elena Demikhovsky, 2014-11-13; 1 file, -0/+7)
  Checked some corner cases, for example translation of <8 x i1> to <8 x double>.
  llvm-svn: 221883
* [X86] Custom lower UINT_TO_FP from v4f32 to v4i32, and for v8f32 to v8i32 if AVX2 is available. (Quentin Colombet, 2014-11-11; 1 file, -1/+3)
  According to IACA, the new lowering has a throughput of 8 cycles instead of 13 with the previous one.

  Although this lowering kicks in on some SPEC benchmarks, the performance improvement was within the noise.

  Correctness testing has been done for the whole range of uint32_t with the following program:

    uint4 v = (uint4) {0,1,2,3};
    uint32_t i;

    // Check correctness over entire range for uint4 -> float4 conversion
    for (i = 0; i < 1U << (32-2); i++) {
      float4 t = test(v);
      float4 c = correct(v);
      if (0xf != _mm_movemask_ps(t == c)) {
        printf("Error @ %vx: %vf vs. %vf\n", v, c, t);
        return -1;
      }
      v += 4;
    }

  Where "correct" is the old lowering and "test" the new one.

  The patch adds a test case for the two custom lowering instructions. It also modifies the vector cost model, which is why cast.ll and uitofp.ll are modified. 2009-02-26-MachineLICMBug.ll is also modified because we now hoist 7 instructions instead of 4 (3 more constant loads).

  rdar://problem/18153096
  llvm-svn: 221657
* AVX-512: added cost for some AVX-512 instructions (Elena Demikhovsky, 2014-09-16; 1 file, -0/+62)
  llvm-svn: 217863
* Rename getMaximumUnrollFactor -> getMaxInterleaveFactor; also rename option names controlling this variable. (Sanjay Patel, 2014-09-10; 1 file, -2/+2)
  "Unroll" is not the appropriate name for this variable. Clang already uses the term "interleave" in pragmas and metadata for this.

  Differential Revision: http://reviews.llvm.org/D5066
  llvm-svn: 217528
* Allow vectorization of division by uniform power of 2. (Karthik Bhat, 2014-08-25; 1 file, -4/+27)
  This patch adds support to recognize division by a uniform power of 2 and modifies the cost table to vectorize division by a uniform power of 2 whenever possible. It updates the cost model for the Loop and SLP Vectorizers. The cost table is currently only updated for the X86 backend.
  Thanks to Hal, Andrea, Sanjay for the review. (http://reviews.llvm.org/D4971)
  llvm-svn: 216371
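  A hedged illustration (not from the commit): a loop like the following, where every lane divides by the same power-of-2 constant, is what the updated cost table now prices as vectorizable, since the division can lower to shifts and fix-ups rather than scalarized divides:

    // Uniform power-of-2 divisor: now cheap enough to vectorize on X86.
    void div_by_pow2(int *a, int n) {
      for (int i = 0; i < n; ++i)
        a[i] = a[i] / 16;
    }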
* Remove the TargetMachine forwards for TargetSubtargetInfo based information and update all callers. (Eric Christopher, 2014-08-04; 1 file, -2/+2)
  No functional change.
  llvm-svn: 214781
* [X86] AVX512: Enable it in the Loop Vectorizer (Adam Nemet, 2014-07-09; 1 file, -1/+5)
  This lets us experiment with 512-bit vectorization without passing force-vector-width manually. The code generated for a simple integer memset loop is properly vectorized. Disassembly is still broken for it though :(.
  llvm-svn: 212634
* [CostModel][x86] Improved cost model for alternate shuffles. (Andrea Di Biagio, 2014-07-03; 1 file, -17/+87)
  This patch:
  1) Improves the cost model for x86 alternate shuffles (originally added at revision 211339);
  2) Teaches the Cost Model Analysis pass how to analyze alternate shuffles.

  Alternate shuffles are a special kind of blend; on x86, we can often easily lower alternate shuffles into a single blend instruction (depending on the subtarget features).

  The existing cost model didn't take into account subtarget features. Also, it had a couple of "dead" entries for vector types that are never legal (example: on x86, types v2i32 and v2f32 are not legal; those are always either promoted or widened to 128-bit vector types).

  The new x86 cost model takes into account what target features we have before returning the shuffle cost (i.e. the number of instructions after the blend is lowered/expanded).

  This patch also teaches the Cost Model Analysis how to identify and analyze alternate shuffles (i.e. 'SK_Alternate' shufflevector instructions):
  - added function 'isAlternateVectorMask';
  - added some logic to check if an instruction is an alternate shuffle and, in that case, call the target-specific TTI to get the corresponding shuffle cost;
  - added a test to verify the cost model analysis on alternate shuffles.
  llvm-svn: 212296
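  A minimal sketch (an assumed shape, not the actual 'isAlternateVectorMask' implementation) of what makes a shuffle mask "alternate": lanes alternate between the two source vectors, e.g. <0, 5, 2, 7> blends two <4 x i32> inputs.

    // Returns true if even lanes come from one source and odd lanes from
    // the other (in either order); Mask[i] in [0, N) selects from the
    // first vector, Mask[i] in [N, 2N) from the second.
    static bool isAlternateMaskSketch(const int *Mask, unsigned N) {
      bool FirstThenSecond = true, SecondThenFirst = true;
      for (unsigned i = 0; i != N; ++i) {
        int FromA = (int)i, FromB = (int)(i + N);
        if (Mask[i] != (i % 2 == 0 ? FromA : FromB))
          FirstThenSecond = false;
        if (Mask[i] != (i % 2 == 0 ? FromB : FromA))
          SecondThenFirst = false;
      }
      return FirstThenSecond || SecondThenFirst;
    }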
* Add support to recognize and vectorize non-SIMD instructions in SLPVectorizer. (Karthik Bhat, 2014-06-20; 1 file, -8/+38)
  This patch adds support to recognize patterns such as fadd,fsub,fadd,fsub... / add,sub,add,sub... and vectorizes them as vector shuffles if they are profitable. These shuffle patterns can later be converted to instructions such as addsubpd on X86.
  Thanks to Arnold and Hal for the reviews. http://reviews.llvm.org/D4015
  llvm-svn: 211339
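  As a hedged example (not from the commit), the alternating add/sub pattern in straight-line code that SLP vectorization can now turn into a vector shuffle, which X86 may then lower to addsubpd:

    // Even lane adds, odd lane subtracts: the fadd/fsub alternation above.
    void addsub(double r[2], const double a[2], const double b[2]) {
      r[0] = a[0] + b[0];
      r[1] = a[1] - b[1];
    }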
* X86: stifle GCC warning (Saleem Abdulrasool, 2014-06-12; 1 file, -1/+3)
  lib/Target/X86/X86TargetTransformInfo.cpp: In member function ‘virtual unsigned int {anonymous}::X86TTI::getIntImmCost(unsigned int, unsigned int, const llvm::APInt&, llvm::Type*) const’:
  lib/Target/X86/X86TargetTransformInfo.cpp:920:60: warning: enumeral and non-enumeral type in conditional expression [enabled by default]

  This seems like an unhelpful warning, but there doesn't seem to be a controlling flag, so add an explicit cast to silence the warning.
  llvm-svn: 210806
* [ConstantHoisting][X86] Improve the cost model for small constants with large types (i64 and above). (Juergen Ributzka, 2014-06-10; 1 file, -8/+35)
  This improves the X86 cost model for small constants with large types. Before this commit we would even hoist trivial constants such as i96 2.

  This is related to <rdar://problem/17070936>
  llvm-svn: 210504
* Fix typo. (Eric Christopher, 2014-05-22; 1 file, -1/+1)
  llvm-svn: 209377
* [ConstantHoisting][X86] Change the cost model to never hoist constants for types larger than i128. (Juergen Ributzka, 2014-05-19; 1 file, -2/+13)
  Currently the X86 backend doesn't support types larger than i128 very well. For example an i192 multiply will assert in codegen when the 2nd argument is a constant and the constant got hoisted. This fix changes the cost model to never hoist constants for types larger than i128. Once the codegen issues have been resolved, the cost model can be updated to allow also larger types.

  This is related to <rdar://problem/16954938>
  llvm-svn: 209162
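  A minimal sketch of the described policy (an assumed shape with LLVM's TCC_* constants stubbed out, not the actual patch):

    enum { TCC_Free = 0, TCC_Basic = 1 };

    // Constants reported as free are never hoisted, so widths above i128
    // stay folded into their users and codegen never sees a hoisted i192.
    int intImmCostSketch(unsigned BitWidth) {
      if (BitWidth > 128)
        return TCC_Free;
      return TCC_Basic; // placeholder for the real per-constant cost
    }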
* Move late partial-unrolling thresholds into the processor definitions (Hal Finkel, 2014-05-08; 1 file, -76/+0)
  The old method used by X86TTI to determine partial-unrolling thresholds was messy (because it worked by testing target features), and also would not correctly identify the target CPU if certain target features were disabled. After some discussions on IRC with Chandler et al., it was decided that the processor scheduling models were the right containers for this information (because it is often tied to special uop dispatch-buffer sizes).

  This does represent a small functionality change:
  - For generic x86-64 (which uses the SB model and, thus, will get some unrolling).
  - For AMD cores (because they still currently use the SB scheduling model).
  - For Haswell (based on benchmarking by Louis Gerbarg, it was decided to bump the default threshold to 50; we're working on a test case for this).
  Otherwise, nothing has changed for any other targets. The logic, however, has been moved into BasicTTI, so other targets may now also opt in to this functionality simply by setting LoopMicroOpBufferSize in their processor model definitions.
  llvm-svn: 208289
* [X86TTI] Remove the unrolling branch limits (Hal Finkel, 2014-05-07; 1 file, -40/+13)
  The loop stream detector (LSD) on modern Intel cores, which optimizes the execution of small loops, has limits on the number of taken branches in addition to uop-count limits (modern AMD cores have similar limits). Unfortunately, at the IR level, estimating the number of branches that will be taken is difficult. For one thing, it strongly depends on later passes (block placement, etc.). The original implementation took a conservative approach and limited the maximal BB DFS depth of the loop. However, fairly extensive benchmarking by several of us has revealed that this is the wrong approach. In fact, there are zero known cases where the branch limit prevents a detrimental unrolling (but plenty of cases where it does prevent beneficial unrolling).

  While we could improve the current branch counting logic by incorporating branch probabilities, this further complication seems unjustified without a motivating regression. Instead, unless and until a regression appears, the branch counting will be removed.
  llvm-svn: 208255
* [X86] Never hoist the shift value of a shift instruction. (Michael Zolotukhin, 2014-04-30; 1 file, -3/+7)
  There is no need to check if we want to hoist the immediate value of a shift instruction. Simply return TCC_Free right away.

  This change is like r206101, but for X86.

  rdar://problem/16190769
  llvm-svn: 207692
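  A hedged sketch of that rule (stub enums, not the actual patch): the shift-amount operand (index 1) is reported free immediately, so ConstantHoisting never pulls it out of the instruction:

    enum Op { Shl, LShr, AShr, OtherOp };
    enum { TCC_Free = 0, TCC_Basic = 1 };

    int immCostSketch(Op Opcode, unsigned OperandIdx) {
      if ((Opcode == Shl || Opcode == LShr || Opcode == AShr) &&
          OperandIdx == 1)
        return TCC_Free; // shift amounts always encode in the instruction
      return TCC_Basic;  // placeholder for the normal cost computation
    }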
* X86TTI: Adjust sdiv cost now that we can lower it on plain SSE2. (Benjamin Kramer, 2014-04-27; 1 file, -0/+5)
  Includes a fix for a horrible typo that caused all SDIV costs to be slightly off :)
  llvm-svn: 207371
* X86TTI: i16/i32 vector divs with a constant (splat) divisor are reasonably cheap now. Turn vectorization back on. (Benjamin Kramer, 2014-04-26; 1 file, -0/+19)
  llvm-svn: 207320
* [C++] Use 'nullptr'. Target edition. (Craig Topper, 2014-04-25; 1 file, -1/+1)
  llvm-svn: 207197
* [Modules] Fix potential ODR violations by sinking the DEBUG_TYPE definition below all of the header #include lines, lib/Target/... edition. (Chandler Carruth, 2014-04-22; 1 file, -1/+2)
  llvm-svn: 206842
* Add comments and test case for [X86TTI] Make constant base pointers for GetElementPtr opaque (r204739). (Juergen Ributzka, 2014-04-02; 1 file, -0/+3)
  llvm-svn: 205468
* Implement X86TTI::getUnrollingPreferences (Hal Finkel, 2014-04-01; 1 file, -0/+103)
  This provides an initial implementation of getUnrollingPreferences for x86. getUnrollingPreferences is used by the generic (concatenation) unroller, which is distinct from the unrolling done by the loop vectorizer. Many modern x86 cores have some kind of uop cache and loop-stream detector (LSD) used to efficiently dispatch small loops, and taking full advantage of this requires unrolling small loops (small here means 10s of uops).

  These caches also have limits on the number of taken branches in the loop, and so we also cap the loop unrolling factor based on the maximum "depth" of the loop. This is currently calculated with a partial DFS traversal (partial because it will stop early if the path length grows too much). This is still an approximation, and one that is both conservative (because it does not account for branches eliminated via block placement) and optimistic (because it is only recording the maximum depth over minimum paths). Nevertheless, because the loops that fit in these uop caches are so small, it is not clear how much the details matter.

  The original set of patches posted for review produced the following test-suite performance results (from the TSVC benchmark) at that time:
    ControlLoops-dbl - 13% speedup
    ControlLoops-flt - 15% speedup
    Reductions-dbl   - 7.5% speedup
  llvm-svn: 205348
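  A hedged sketch of the hook's shape (the real UnrollingPreferences struct lives in TargetTransformInfo; the fields and the budget value here are illustrative assumptions):

    struct UnrollingPreferences {
      bool Partial;              // allow unrolling by less than the trip count
      unsigned PartialThreshold; // size budget for the unrolled body, in IR ops
    };

    // Enable partial unrolling with a budget sized so the unrolled loop
    // still fits the uop cache / LSD described above.
    void getUnrollingPreferencesSketch(UnrollingPreferences &UP) {
      UP.Partial = true;
      UP.PartialThreshold = 28; // assumed uop-cache-derived budget
    }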
* [X86] Adjust cost of FP_TO_UINT v4f64->v4i32 as well (Adam Nemet, 2014-03-31; 1 file, -0/+1)
  Pretty obvious follow-on to r205159 to also handle conversion from double besides float.

  Fixes <rdar://problem/16373208>
  llvm-svn: 205253
* [X86] Adjust cost of FP_TO_UINT v8f32->v8i32 (Adam Nemet, 2014-03-30; 1 file, -0/+6)
  There is no direct AVX instruction to convert to unsigned. I have some ideas how we may be able to do this with three vector instructions but the current backend just bails on this to get it scalarized. See the comment why we need to adjust the cost returned by BasicTTI.

  The test is a bit roundabout (and checks assembly rather than bit code) because I'd like it to work even if at some point we could vectorize this conversion.

  Fixes <rdar://problem/16371920>
  llvm-svn: 205159
* [X86][Vector Cost Model] Add a comment to explain the workaround in my previous commit (r204884). (Quentin Colombet, 2014-03-27; 1 file, -0/+5)
  <rdar://problem/16381225>
  llvm-svn: 204972
* [X86][Vectorizer Cost Model] Correct vectorization cost model for v2i64->v2f64 and v4i64->v4f64. (Quentin Colombet, 2014-03-27; 1 file, -0/+2)
  The new costs match what we did for SSE2 and reflect the reality of our codegen.

  <rdar://problem/16381225>
  llvm-svn: 204884
* X86: Correct vectorization cost model for v8f32->v8i8. (Jim Grosbach, 2014-03-27; 1 file, -1/+1)
  Fix the cost model to reflect the reality of our codegen.

  rdar://16370633
  llvm-svn: 204880
* [X86TTI] Make constant base pointers for getElementPtr opaque. (Juergen Ributzka, 2014-03-25; 1 file, -2/+3)
  If getElementPtr uses a constant as base pointer, then make the constant opaque. This prevents constant folding it with the offset. The offset can usually be encoded in the load/store instruction itself and the base address doesn't have to be rematerialized several times.
  llvm-svn: 204739
* [Stackmaps][X86TTI] Fix think-o in getIntImmCost calculation. (Juergen Ributzka, 2014-03-25; 1 file, -7/+6)
  The cost for the first four stackmap operands was always TCC_Free. This is only true for the first two operands. All other operands are TCC_Free only if they fit within 64 bits.
  llvm-svn: 204738
* [Constant Hoisting] Make the constant materialization cost operand dependent (Juergen Ributzka, 2014-03-21; 1 file, -15/+32)
  Extend the target hook to also take the operand index into account when calculating the cost of the constant materialization.

  Related to <rdar://problem/16381500>
  llvm-svn: 204435
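  A hedged sketch of why the operand index matters (stub types; the real hook is the four-argument getIntImmCost quoted in the GCC warning further up):

    #include <cstdint>

    enum Op { AddOp, ShlOp, OtherOp };
    enum { TCC_Free = 0, TCC_Basic = 1 };

    // Whether an immediate folds for free depends on which operand slot
    // it occupies, not just on its value.
    int intImmCostSketch(Op Opcode, unsigned Idx, long long Imm) {
      if (Opcode == ShlOp && Idx == 1)
        return TCC_Free; // shift amounts encode directly in the instruction
      if (Opcode == AddOp && Imm >= INT32_MIN && Imm <= INT32_MAX)
        return TCC_Free; // fits the imm32 field of an x86 add
      return TCC_Basic;  // otherwise a real materialization cost
    }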
* Revert "[Constant Hoisting] Extend coverage of the constant hoisting pass."Juergen Ributzka2014-03-201-32/+15
  I will break this up into smaller pieces for review and recommit.
  llvm-svn: 204393
* [Constant Hoisting] Extend coverage of the constant hoisting pass. (Juergen Ributzka, 2014-03-20; 1 file, -15/+32)
  This commit extends the coverage of the constant hoisting pass, adds additional debug output and updates the function names according to the style guide.

  Related to <rdar://problem/16381500>
  llvm-svn: 204389
* [C++11] Remove 'virtual' keyword from methods marked with 'override' keyword. (Craig Topper, 2014-03-10; 1 file, -34/+32)
  llvm-svn: 203444
* [TTI] There is actually no realistic way to pop TTI implementations off the stack of the analysis group because they are all immutable passes. (Chandler Carruth, 2014-03-10; 1 file, -4/+0)
  This is made clear by Craig's recent work to use override systematically -- we weren't overriding anything for 'finalizePass' because there is no such thing.

  This is kind of a lame restriction on the API -- we can no longer push and pop things, we just set up the stack and run. However, I'm not invested in building some better solution on top of the existing (terrifying) immutable pass and legacy pass manager.
  llvm-svn: 203437
* Switch all uses of LLVM_OVERRIDE to just use 'override' directly. (Craig Topper, 2014-03-02; 1 file, -21/+19)
  llvm-svn: 202621
* Switch all uses of LLVM_FINAL to just use 'final', and remove the macro. (Craig Topper, 2014-03-02; 1 file, -1/+1)
  llvm-svn: 202618
* [Vectorizer] Add a new 'OperandValueKind' in TargetTransformInfo called 'OK_NonUniformConstValue' to identify operands which are constants but not constant splats. (Andrea Di Biagio, 2014-02-12; 1 file, -2/+33)
  The cost model now allows returning 'OK_NonUniformConstValue' for non-splat operands that are instances of ConstantVector or ConstantDataVector.

  With this change, targets are now able to compute different costs for instructions with non-uniform constant operands. For example, on X86 the cost of a vector shift may vary depending on whether the second operand is a uniform or non-uniform constant.

  This patch applies the following changes:
  - The cost model computation now takes into account non-uniform constants;
  - The cost of vector shift instructions has been improved in the X86TargetTransformInfo analysis pass;
  - BBVectorize, SLPVectorizer and LoopVectorize now know how to distinguish between non-uniform and uniform constant operands.

  Added a new test to verify that the output of opt '-cost-model -analyze' is valid in the following configurations: SSE2, SSE4.1, AVX, AVX2.
  llvm-svn: 201272
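  A minimal, self-contained sketch of the splat-vs-non-splat distinction (assumed logic; the real code inspects ConstantVector/ConstantDataVector operands):

    #include <cstddef>
    #include <vector>

    enum OperandValueKind { OK_AnyValue, OK_UniformConstValue,
                            OK_NonUniformConstValue };

    // Classify a constant vector operand: a splat (all lanes equal) is
    // uniform; any other constant vector is the new non-uniform kind.
    OperandValueKind classifyConstVec(const std::vector<int> &Lanes) {
      for (std::size_t i = 1; i < Lanes.size(); ++i)
        if (Lanes[i] != Lanes[0])
          return OK_NonUniformConstValue;
      return OK_UniformConstValue;
    }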
* X86: add costs for 64-bit vector ext/trunc & rebalance (Tim Northover, 2014-02-06; 1 file, -15/+58)
  The most important part of this is probably adding any cost at all for operations like zext <8 x i8> to <8 x i32>. Before, they were being recorded as extremely costly (24, I believe), which made LLVM fall back on a 4-wide vectorisation of a loop.

  It also rebalances the values for sext, zext and trunc. Lacking any other sane metric that might work across CPU microarchitectures, I went for instructions. This seems to be in reasonable accord with the rest of the table (sitofp, ...) though no doubt at least one value is sub-optimal for some bizarre reason.

  Finally, separate AVX and AVX2 values are provided where appropriate. The CodeGen is quite different in many cases.

  rdar://problem/15981990
  llvm-svn: 200928
* Revert "Revert "Add Constant Hoisting Pass" (r200034)"Juergen Ributzka2014-01-251-0/+95
  This reverts commit r200058 and adds the using directive for ARMTargetTransformInfo to silence two g++ overload warnings.
  llvm-svn: 200062
* Revert "Add Constant Hoisting Pass" (r200034)Hans Wennborg2014-01-251-95/+0
  This commit caused -Woverloaded-virtual warnings. The two new TargetTransformInfo::getIntImmCost functions were only added to the superclass, and to the X86 subclass. The other targets were not updated, and the warning highlighted this by pointing out that e.g. ARMTTI::getIntImmCost was hiding the two new getIntImmCost variants.

  We could pacify the warning by adding "using TargetTransformInfo::getIntImmCost" to the various subclasses, or turning it off, but I suspect that it's wrong to leave the functions unimplemented in those targets. The default implementations return TCC_Free, which I don't think is right e.g. for ARM.
  llvm-svn: 200058
* Add Constant Hoisting Pass (Juergen Ributzka, 2014-01-24; 1 file, -0/+95)
  Retry commit r200022 with a fix for the build bot errors. Constant expressions have (unlike instructions) module-scope use lists and therefore may have users in different functions. The fix is to simply ignore these out-of-function uses.
  llvm-svn: 200034
* Revert "Add Constant Hoisting Pass"Juergen Ributzka2014-01-241-95/+0
  This reverts commit r200022 to unbreak the build bots.
  llvm-svn: 200024
* Add Constant Hoisting Pass (Juergen Ributzka, 2014-01-24; 1 file, -0/+95)
  This pass identifies expensive constants to hoist and coalesces them to better prepare it for SelectionDAG-based code generation. This works around the limitations of the basic-block-at-a-time approach.

  First it scans all instructions for integer constants and calculates each one's cost. If the constant can be folded into the instruction (the cost is TCC_Free) or the cost is just a simple operation (TCC_Basic), then we don't consider it expensive and leave it alone. This is the default behavior and the default implementation of getIntImmCost will always return TCC_Free.

  If the cost is more than TCC_Basic, then the integer constant can't be folded into the instruction and it might be beneficial to hoist the constant. Similar constants are coalesced to reduce register pressure and materialization code.

  When a constant is hoisted, it is also hidden behind a bitcast to force it to be live-out of the basic block. Otherwise the constant would be just duplicated and each basic block would have its own copy in the SelectionDAG. The SelectionDAG recognizes such constants as opaque and doesn't perform certain transformations on them, which would create a new expensive constant.

  This optimization is only applied to integer constants in instructions and simple (this means not nested) constant cast expressions. For example:
    %0 = load i64* inttoptr (i64 big_constant to i64*)

  Reviewed by Eric
  llvm-svn: 200022
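  A hedged sketch of the hoisting decision described above (stub TCC_* values; not the actual pass code):

    enum { TCC_Free = 0, TCC_Basic = 1 };

    // Constants that fold into their user (TCC_Free) or cost one simple
    // operation (TCC_Basic) are left alone; anything more expensive is
    // collected as a hoisting candidate and coalesced with similar ones.
    bool isHoistingCandidate(int IntImmCost) {
      return IntImmCost > TCC_Basic;
    }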