path: root/llvm/lib/Target/X86/X86TargetTransformInfo.cpp
Each entry: commit message (author, date, files changed, lines -removed/+added)
...
* [X86][AVX512] Add 512-bit vector ctlz costs + tests (Simon Pilgrim, 2017-05-17, 1 file, -0/+24)
  llvm-svn: 303300
* [X86][AVX512] Add 512-bit vector cttz costs + tests (Simon Pilgrim, 2017-05-17, 1 file, -0/+6)
  llvm-svn: 303293
* [X86][AVX512] Add 512-bit vector bitreverse costs + tests (Simon Pilgrim, 2017-05-17, 1 file, -0/+18)
  llvm-svn: 303283
* [X86][AVX1] Account for cost of extract/insert of 256-bit shifts (Simon Pilgrim, 2017-05-14, 1 file, -49/+49)
  llvm-svn: 303023
* [X86][AVX2] Fix costs for v4i64 ashr by splat (Simon Pilgrim, 2017-05-14, 1 file, -0/+5)
  llvm-svn: 303022
* [X86][AVX1] Account for cost of extract/insert of 256-bit shifts by splat (Simon Pilgrim, 2017-05-14, 1 file, -12/+12)
  llvm-svn: 303021
* [X86][AVX1] Account for cost of extract/insert of 256-bit SDIV/UDIV by mul sequences (Simon Pilgrim, 2017-05-14, 1 file, -17/+17)
  llvm-svn: 303017
* [X86][XOP] XOP's general v16i8 shifts will be used instead of v8i16 shift + mask (Simon Pilgrim, 2017-05-14, 1 file, -3/+6)
  Tweak cost model to match what lowering actually does.
  llvm-svn: 303013
* [X86][SSE] Account for cost of extract/insert of v32i8 vector shifts (Simon Pilgrim, 2017-05-14, 1 file, -3/+3)
  llvm-svn: 303012
* [X86][XOP] Account for cost of extract/insert of 256-bit vector shifts (Simon Pilgrim, 2017-05-14, 1 file, -12/+12)
  llvm-svn: 303010
* [X86][AVX1] Improve 256-bit vector costs for integer unary intrinsics. (Simon Pilgrim, 2017-05-07, 1 file, -16/+16)
  Account for subvector extraction/insertion; this helps prevent the vectorizers from selecting 256-bit vectors that will have to be split anyhow on AVX1 targets.
  llvm-svn: 302378
* [SystemZ] TargetTransformInfo cost functions implemented. (Jonas Paulsson, 2017-04-12, 1 file, -4/+6)
  getArithmeticInstrCost(), getShuffleCost(), getCastInstrCost(), getCmpSelInstrCost(), getVectorInstrCost(), getMemoryOpCost(), getInterleavedMemoryOpCost() implemented. Interleaved access vectorization enabled. BasicTTIImpl::getCastInstrCost() improved to check for legal extending loads, in which case the cost of the z/sext instruction becomes 0.
  Review: Ulrich Weigand, Renato Golin. https://reviews.llvm.org/D29631
  llvm-svn: 300052
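  The extending-load improvement described above lends itself to a short illustration. This is only a hedged sketch of the idea, not the actual BasicTTIImpl code; it assumes the hook can see the IR instruction (I), the target lowering info (TLI) and the DataLayout (DL):

  ```cpp
  // Sketch: inside a getCastInstrCost()-style hook. If the source of a
  // sext/zext is a load that the target can fold into a legal extending load,
  // model the cast itself as free.
  if ((Opcode == Instruction::ZExt || Opcode == Instruction::SExt) && I &&
      isa<LoadInst>(I->getOperand(0))) {
    EVT DstVT = TLI->getValueType(DL, Dst);
    EVT SrcVT = TLI->getValueType(DL, Src);
    unsigned ExtKind =
        Opcode == Instruction::ZExt ? ISD::ZEXTLOAD : ISD::SEXTLOAD;
    if (TLI->isLoadExtLegal(ExtKind, DstVT, SrcVT))
      return 0; // the z/sext folds into the load
  }
  ```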
* [X86 TTI] Implement LSV hook (Keno Fischer, 2017-04-05, 1 file, -1/+5)
  Summary: LSV wants to know the maximum size that can be loaded into a vector register. On X86, this always matches the maximum register width. Implement this accordingly and add a test to make sure that LSV can vectorize up to the maximum permissible width on X86.
  Reviewers: delena, arsenm
  Reviewed By: arsenm
  Subscribers: wdng, llvm-commits
  Differential Revision: https://reviews.llvm.org/D31504
  llvm-svn: 299589
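  For reference, the LSV (LoadStoreVectorizer) hook in question is TTI's query for the widest load/store it may form. A minimal sketch of an X86-style override, assuming it simply forwards to the vector register width, could look like this:

  ```cpp
  // Sketch: report the widest vectorizable load/store as the target's vector
  // register width (128/256/512 bits depending on SSE/AVX/AVX-512 support).
  // The exact body is an assumption, not the committed implementation.
  unsigned X86TTIImpl::getLoadStoreVecRegBitWidth(unsigned /*AddrSpace*/) const {
    return getRegisterBitWidth(/*Vector=*/true);
  }
  ```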
* [X86] Add missing BITREVERSE costs for SSE2 vectors and i8/i16/i32/i64 scalars (Simon Pilgrim, 2017-03-15, 1 file, -0/+19)
  Prep work for PR31810.
  llvm-svn: 297876
* Align cost model columns. NFCI. (Simon Pilgrim, 2017-03-15, 1 file, -4/+4)
  llvm-svn: 297824
* [TargetTransformInfo] getIntrinsicInstrCost() scalarization estimation improved (Jonas Paulsson, 2017-03-14, 1 file, -4/+5)
  getIntrinsicInstrCost() used to compute the scalarization cost based only on types. This patch improves this so that the actual arguments are checked when they are available, in order to handle only unique non-constant operands.
  Test updates: Analysis/CostModel/X86/arith-fp.ll, Transforms/LoopVectorize/AArch64/interleaved_cost.ll, Transforms/LoopVectorize/ARM/interleaved_cost.ll. The improvement in getOperandsScalarizationOverhead() to differentiate on constants made it necessary to update the interleaved_cost.ll tests even though they do not relate to intrinsics.
  Review: Hal Finkel. https://reviews.llvm.org/D29540
  llvm-svn: 297705
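  The "unique non-constant operands" check can be sketched as follows. This is an illustrative helper, not the exact LLVM code; it assumes BasicTTIImpl-style access to a getScalarizationOverhead() that prices element extracts for a vector type:

  ```cpp
  // Sketch: charge extraction costs only once per unique, non-constant argument
  // that would have to be scalarized, rather than a blanket per-type estimate.
  unsigned getOperandsScalarizationOverheadSketch(ArrayRef<const Value *> Args,
                                                  unsigned VF) {
    unsigned Cost = 0;
    SmallPtrSet<const Value *, 4> UniqueOperands;
    for (const Value *A : Args) {
      // Constants are rematerialized for free; repeated operands are only
      // extracted once.
      if (isa<Constant>(A) || !UniqueOperands.insert(A).second)
        continue;
      Type *VecTy = VectorType::get(A->getType(), VF);
      Cost += getScalarizationOverhead(VecTy, /*Insert=*/false, /*Extract=*/true);
    }
    return Cost;
  }
  ```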
* [X86] Add costs for non-AVX512 single-source permutation integer shuffles (Michael Kuperstein, 2017-02-02, 1 file, -3/+16)
  Differential Revision: https://reviews.llvm.org/D29416
  llvm-svn: 293932
* [TargetTransformInfo] Refactor and improve getScalarizationOverhead() (Jonas Paulsson, 2017-01-26, 1 file, -14/+0)
  Refactoring to remove duplications of this method. The new method getOperandsScalarizationOverhead() looks at the present unique operands and adds extract costs for them. The old behaviour was to always add extract costs for one operand of the type, which still happens in getArithmeticInstrCost() if no operands are provided by the caller. This is a good start on improving this, but there are more places that can be improved by using getOperandsScalarizationOverhead().
  Review: Hal Finkel. https://reviews.llvm.org/D29017
  llvm-svn: 293155
* [X86] Enable memory interleaving for the X86/SLM arch. (Mohammed Agabaria, 2017-01-25, 1 file, -1/+1)
  Differential Revision: https://reviews.llvm.org/D28547
  llvm-svn: 293040
* Remove trailing whitespace. NFCI. (Simon Pilgrim, 2017-01-20, 1 file, -1/+1)
  llvm-svn: 292613
* [CostModel][X86] Removed unused cost. NFCI. (Simon Pilgrim, 2017-01-20, 1 file, -1/+0)
  SHL v8i32 is already handled in the SSE41 cost table.
  llvm-svn: 292612
* [CostModel][X86] Fix AVX512BW vector shift costs for vXi16 types (Simon Pilgrim, 2017-01-15, 1 file, -0/+8)
  We already have patterns in place to support 128/256-bit shifts without AVX512VL.
  llvm-svn: 292077
* [CostModel][X86] Updated vXi64 ASHR costs on AVX512 targets now that D28604 has landed (Simon Pilgrim, 2017-01-14, 1 file, -0/+8)
  llvm-svn: 292023
* [X86][AVX512BW] Vectorize v64i8 vector shifts (Simon Pilgrim, 2017-01-11, 1 file, -0/+4)
  Differential Revision: https://reviews.llvm.org/D28447
  llvm-svn: 291665
* [X86] Update TTI costs for arithmetic instructions on the X86/SLM arch. (Mohammed Agabaria, 2017-01-11, 1 file, -3/+50)
  Updated instructions: pmulld, pmullw, pmulhw, mulsd, mulps, mulpd, divss, divps, divsd, divpd, addpd and subpd. Adds a special optimization case which replaces pmulld with a pmullw/pmulhw/pshuf sequence when the real operand bitwidth is <= 16.
  Differential Revision: https://reviews.llvm.org/D28104
  llvm-svn: 291657
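  Subtarget-specific tuning like this usually takes the form of a cost table consulted before the generic X86 tables. A hedged sketch of the shape of such a change (the entries and cost values below are placeholders, not the committed numbers):

  ```cpp
  // Sketch: an SLM-only cost table checked first in getArithmeticInstrCost()
  // once the type has been legalized to (LT.first copies of) LT.second.
  static const CostTblEntry SLMCostTable[] = {
    { ISD::MUL,  MVT::v4i32, 11 }, // pmulld is comparatively slow on Silvermont
    { ISD::FDIV, MVT::f64,   34 }, // divsd
    { ISD::FADD, MVT::v2f64,  4 }, // addpd
  };

  if (ST->isSLM())
    if (const auto *Entry = CostTableLookup(SLMCostTable, ISD, LT.second))
      return LT.first * Entry->Cost;
  ```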
* [CostModel][X86] Fixed vXi8 uniform shift costs. (Simon Pilgrim, 2017-01-08, 1 file, -6/+16)
  The 'fast' costs should only apply to shifts by uniform constants (uniform non-constant shifts are lowered using the slow default implementation). Logical shifts were not taking into account that we must mask the psrlw result, so those costs needed to be doubled. Added missing AVX2/AVX512BW costs as well.
  llvm-svn: 291391
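  The psrlw-plus-mask point is easy to see with intrinsics: x86 has no 8-bit shift instruction, so a uniform-constant vXi8 logical shift is done as a 16-bit shift followed by a mask that clears the bits that bled in from the neighbouring byte, i.e. two instructions rather than one. A small self-contained illustration (shift amount fixed at 4 for clarity):

  ```cpp
  #include <immintrin.h>

  // Logical right shift of each byte by 4 using SSE2: psrlw then pand.
  // The mask removes the bits the 16-bit shift pulled across byte lanes.
  static __m128i srl_v16i8_by4(__m128i V) {
    __m128i Shifted = _mm_srli_epi16(V, 4);             // psrlw $4
    return _mm_and_si128(Shifted, _mm_set1_epi8(0x0F)); // pand with 0x0F
  }
  ```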
* [CostModel][X86] Moved legal uniform shift costs earlier. (Simon Pilgrim, 2017-01-08, 1 file, -24/+39)
  XOP was prematurely matching, doubling the cost of ashr/lshr uniform shifts.
  llvm-svn: 291390
* [CostModel][X86] Update SSE41/AVX1 vXi32 SHL costs (Simon Pilgrim, 2017-01-07, 1 file, -0/+2)
  SSE41 provides pmulld, which allows a simpler pslld/paddd/cvttps2dq/pmulld pattern than SSE2's use of pmuludq.
  llvm-svn: 291372
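  The pattern named here is the classic float-exponent trick for a variable vXi32 shift-left: build 2^amt by sliding the shift amounts into the exponent field, then multiply. With SSE4.1's pmulld the final multiply is a single instruction. A small illustration (assuming shift amounts in the 0-31 range):

  ```cpp
  #include <immintrin.h>

  // Variable per-lane v4i32 shift left: V << Amt computed as V * (1 << Amt).
  static __m128i shl_v4i32_var(__m128i V, __m128i Amt) {
    __m128i Exp   = _mm_slli_epi32(Amt, 23);                         // pslld: amount -> exponent field
    __m128i Pow2f = _mm_add_epi32(Exp, _mm_set1_epi32(0x3f800000));  // paddd: add the bias of 1.0f
    __m128i Pow2  = _mm_cvttps_epi32(_mm_castsi128_ps(Pow2f));       // cvttps2dq: 2^Amt as integers
    return _mm_mullo_epi32(V, Pow2);                                 // pmulld (SSE4.1)
  }
  ```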
* [CostModel][X86] Fix AVX2 v16i16 shift 'splat' costs. (Simon Pilgrim, 2017-01-07, 1 file, -2/+15)
  llvm-svn: 291366
* [CostModel][X86] Match 256-bit vector shift 'splat' costs for AVX2 and above (Simon Pilgrim, 2017-01-07, 1 file, -45/+44)
  We were matching against general vector shift costs before the uniform splat costs.
  llvm-svn: 291365
* [CostModel][X86] Generalized cost calculation of SHL by constant -> MUL conversion. (Simon Pilgrim, 2017-01-07, 1 file, -21/+10)
  llvm-svn: 291364
* [CostModel][X86] Merge separate AVX1 cost LUTs. NFCI. (Simon Pilgrim, 2017-01-07, 1 file, -38/+30)
  llvm-svn: 291355
* [CostModel][AVX512BW] Add v32i16 vector shift costs for avx512bw targets. (Simon Pilgrim, 2017-01-07, 1 file, -0/+4)
  llvm-svn: 291354
* [CostModel][X86] Added missing AVX2 arithmetic costs. (Simon Pilgrim, 2017-01-07, 1 file, -23/+33)
  Allows us to correctly fall through to the lower AVX1 costs if the lookup failed.
  llvm-svn: 291353
* [CostModel][X86] Reordered AVX1 arithmetic cost LUT into descending target order. NFCI. (Simon Pilgrim, 2017-01-07, 1 file, -27/+27)
  llvm-svn: 291352
* [X86][AVX512] Use lowerShuffleAsRepeatedMaskAndLanePermute for non-VBMI v64i8 shuffles (PR31470) (Simon Pilgrim, 2017-01-07, 1 file, -2/+1)
  llvm-svn: 291347
* [CostModel][X86] Fix 512-bit SDIV/UDIV 'big' costs. (Simon Pilgrim, 2017-01-06, 1 file, -16/+18)
  Set the costs on the lowest target that supports the type.
  llvm-svn: 291229
* [CostModel][X86] Tidy up arithmetic costs code. NFCI. (Simon Pilgrim, 2017-01-05, 1 file, -28/+15)
  Remove unnecessary braces, remove single-use variables and keep the LUTs to a similar naming convention.
  llvm-svn: 291187
* [CostModel][X86] Move vXi32 MUL costs into existing tables. NFCI. (Simon Pilgrim, 2017-01-05, 1 file, -6/+5)
  llvm-svn: 291165
* Remove trailing whitespace. NFCI. (Simon Pilgrim, 2017-01-05, 1 file, -3/+3)
  llvm-svn: 291163
* [CostModel][X86] Reordered SSE42 arithmetic cost LUT into descending order. NFCI. (Simon Pilgrim, 2017-01-05, 1 file, -13/+11)
  llvm-svn: 291162
* [CostModel][X86] Move vXi64 MUL costs into existing tables. NFCI. (Simon Pilgrim, 2017-01-05, 1 file, -11/+3)
  Removes the need for yet another LUT.
  llvm-svn: 291158
* [CostModel][X86] Strip unused 256-bit vector shift costs. NFCI. (Simon Pilgrim, 2017-01-05, 1 file, -8/+0)
  Remove SSE2 256-bit entries - AVX targets will have used the SSE42 costs instead.
  llvm-svn: 291152
* [CostModel][X86] Include the cost of 256-bit upper subvector extract/insertion in AVX1 v4i64 MUL (Simon Pilgrim, 2017-01-05, 1 file, -2/+2)
  Matches the other MUL/ADD/SUB 256-bit cases on AVX1.
  llvm-svn: 291149
* [CostModel][X86] Merged SK_PermuteSingleSrc/SK_PermuteTwoSrc into common shuffle cost LUTs. NFCI. (Simon Pilgrim, 2017-01-05, 1 file, -272/+227)
  llvm-svn: 291146
* [CostModel][X86] Add support for broadcast shuffle costs (Simon Pilgrim, 2017-01-05, 1 file, -9/+48)
  Currently only for broadcasts with input and output of the same width.
  Differential Revision: https://reviews.llvm.org/D27811
  llvm-svn: 291122
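  Broadcast costs follow the same lookup pattern as the other shuffle kinds: a per-ISA table keyed on the shuffle kind and the legalized type. A hedged sketch, with illustrative entries rather than the committed values:

  ```cpp
  // Sketch: TTI::SK_Broadcast entries in an AVX2-level shuffle cost table.
  static const CostTblEntry AVX2BroadcastCostTbl[] = {
    { TTI::SK_Broadcast, MVT::v32i8, 1 }, // vpbroadcastb
    { TTI::SK_Broadcast, MVT::v8i32, 1 }, // vpbroadcastd
    { TTI::SK_Broadcast, MVT::v4i64, 1 }, // vpbroadcastq
  };

  // Inside X86TTIImpl::getShuffleCost(), once the type has been legalized:
  if (Kind == TTI::SK_Broadcast && ST->hasAVX2())
    if (const auto *Entry = CostTableLookup(AVX2BroadcastCostTbl, Kind, LT.second))
      return LT.first * Entry->Cost;
  ```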
* [CostModel][X86] Pulled out common type legalization code (Simon Pilgrim, 2017-01-05, 1 file, -7/+4)
  llvm-svn: 291109
* Currently isLikelyComplexAddressComputation tries to figure out if the given stride seems to be 'complex' and needs some extra cost for address computation handling (Mohammed Agabaria, 2017-01-05, 1 file, -4/+16)
  This code seems to be target dependent, which may not be the same for all targets. Passed the decision on whether the given stride is complex or not to the target by sending stride information via SCEV to getAddressComputationCost instead of 'IsComplex'. Specifically, on X86 targets we don't see any significant address computation cost for strided accesses in general.
  Differential Revision: https://reviews.llvm.org/D27518
  llvm-svn: 291106
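  The reworked interface hands the pointer SCEV to the target so it can judge "complexity" itself. An illustrative sketch of an X86-style override under that interface (the helper and the penalty constant are assumptions, not the committed code):

  ```cpp
  #include "llvm/Analysis/ScalarEvolutionExpressions.h"

  // Hypothetical helper: an affine add-recurrence models a plain strided
  // access, which X86 can address cheaply.
  static bool isStridedAccessSketch(const SCEV *Ptr) {
    const auto *AddRec = dyn_cast_or_null<SCEVAddRecExpr>(Ptr);
    return AddRec && AddRec->isAffine();
  }

  int X86TTIImpl::getAddressComputationCost(Type *Ty, ScalarEvolution *SE,
                                            const SCEV *Ptr) {
    // Only genuinely complex (non-strided) vector address computations keep an
    // extra penalty; the value 10 is a placeholder, not a tuned number.
    if (Ty->isVectorTy() && SE && !isStridedAccessSketch(Ptr))
      return 10;
    return BaseT::getAddressComputationCost(Ty, SE, Ptr);
  }
  ```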
* [Test Commit] Fix some format issues in X86TTI to match clang-format output. (Mohammed Agabaria, 2017-01-05, 1 file, -3/+6)
  llvm-svn: 291095
* [CostModel][X86] Updated vXi8 and vXi16 Reverse/Alternate shuffle costs (Simon Pilgrim, 2017-01-04, 1 file, -11/+9)
  Actual codegen is much better than the extract+insert patterns that were assumed.
  llvm-svn: 290962