author     Elena Demikhovsky <elena.demikhovsky@intel.com>   2014-11-23 08:07:43 +0000
committer  Elena Demikhovsky <elena.demikhovsky@intel.com>   2014-11-23 08:07:43 +0000
commit     9e5089a9384c146c0d277b4962c2345e92ff4a88 (patch)
tree       3a01ebfa31ad635c396e816fedc8dce9f24d38e0 /llvm/lib/Target/X86/X86TargetTransformInfo.cpp
parent     2a495975ed265fd249044afb83e4416df68410b9 (diff)
Masked Vector Load and Store Intrinsics.
Introduced new target-independent intrinsics to support masked vector loads and stores. The loop vectorizer now optimizes loops containing conditional memory accesses by generating these intrinsics for the targets that already support them, AVX2 and AVX-512; it first asks the target whether masked vector loads and stores are available.
Added SDNodes for the masked operations and lowering patterns for the X86 code generator.
Examples:
declare <16 x i32> @llvm.masked.load.v16i32(i8* %addr, <16 x i32> %passthru, i32 4 /* align */, <16 x i1> %mask)
declare void @llvm.masked.store.v8f64(i8* %addr, <8 x double> %value, i32 4, <8 x i1> %mask)
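
To make the lane-wise semantics of these declarations concrete, here is a minimal scalar emulation in C++. This is an illustrative sketch, not code from the patch, and the names masked_load and masked_store (and the fixed i32 element type) are hypothetical: a masked load fills every masked-off lane from the passthru operand, and a masked store leaves memory untouched in masked-off lanes.

#include <array>
#include <cstddef>
#include <cstdint>

// Scalar model of @llvm.masked.load: active lanes read memory,
// masked-off lanes keep the corresponding passthru element.
template <std::size_t N>
std::array<int32_t, N> masked_load(const int32_t *addr,
                                   const std::array<int32_t, N> &passthru,
                                   const std::array<bool, N> &mask) {
  std::array<int32_t, N> result = passthru;
  for (std::size_t i = 0; i < N; ++i)
    if (mask[i])
      result[i] = addr[i];
  return result;
}

// Scalar model of @llvm.masked.store: active lanes write their element,
// masked-off lanes leave the destination memory untouched.
template <std::size_t N>
void masked_store(int32_t *addr, const std::array<int32_t, N> &value,
                  const std::array<bool, N> &mask) {
  for (std::size_t i = 0; i < N; ++i)
    if (mask[i])
      addr[i] = value[i];
}

The point of the intrinsics is that on AVX2/AVX-512 this whole per-lane loop collapses into a single masked vector instruction, with the guarantee that masked-off addresses are never accessed, so a conditional access whose condition is false cannot fault.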
A scalarizer for the remaining targets (those without AVX2/AVX-512) will be added in a separate patch.
http://reviews.llvm.org/D6191
llvm-svn: 222632
Diffstat (limited to 'llvm/lib/Target/X86/X86TargetTransformInfo.cpp')
-rw-r--r-- | llvm/lib/Target/X86/X86TargetTransformInfo.cpp | 18 |
1 file changed, 18 insertions, 0 deletions
diff --git a/llvm/lib/Target/X86/X86TargetTransformInfo.cpp b/llvm/lib/Target/X86/X86TargetTransformInfo.cpp
index 2b70fd0ecf8..1811a205284 100644
--- a/llvm/lib/Target/X86/X86TargetTransformInfo.cpp
+++ b/llvm/lib/Target/X86/X86TargetTransformInfo.cpp
@@ -111,6 +111,8 @@ public:
                          Type *Ty) const override;
   unsigned getIntImmCost(Intrinsic::ID IID, unsigned Idx, const APInt &Imm,
                          Type *Ty) const override;
+  bool isLegalPredicatedLoad (Type *DataType, int Consecutive) const;
+  bool isLegalPredicatedStore(Type *DataType, int Consecutive) const;
 
   /// @}
 };
@@ -1156,3 +1158,19 @@ unsigned X86TTI::getIntImmCost(Intrinsic::ID IID, unsigned Idx,
   }
   return X86TTI::getIntImmCost(Imm, Ty);
 }
+
+bool X86TTI::isLegalPredicatedLoad(Type *DataType, int Consecutive) const {
+  int ScalarWidth = DataType->getScalarSizeInBits();
+
+  // Todo: AVX512 allows gather/scatter, works with strided and random as well
+  if ((ScalarWidth < 32) || (Consecutive == 0))
+    return false;
+  if (ST->hasAVX512() || ST->hasAVX2())
+    return true;
+  return false;
+}
+
+bool X86TTI::isLegalPredicatedStore(Type *DataType, int Consecutive) const {
+  return isLegalPredicatedLoad(DataType, Consecutive);
+}
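
For illustration, here is a standalone C++ model of the legality rule the hunk above introduces. It is a sketch under stated assumptions, not LLVM code: SubtargetFlags stands in for the X86Subtarget feature queries, and the Consecutive convention (nonzero for consecutive access; in the loop vectorizer 1 typically means forward and -1 reversed) is inferred, not spelled out in the patch.

#include <iostream>

// Stand-in for the X86Subtarget feature bits consulted by the real hook.
struct SubtargetFlags {
  bool HasAVX2 = false;
  bool HasAVX512 = false;
};

// Mirrors the logic of X86TTI::isLegalPredicatedLoad above: masked loads
// are legal only for consecutive accesses on elements of at least 32 bits,
// and only when the subtarget has AVX2 or AVX-512.
bool isLegalPredicatedLoad(int ScalarWidthInBits, int Consecutive,
                           const SubtargetFlags &ST) {
  if (ScalarWidthInBits < 32 || Consecutive == 0)
    return false;
  return ST.HasAVX512 || ST.HasAVX2;
}

int main() {
  SubtargetFlags AVX2{/*HasAVX2=*/true, /*HasAVX512=*/false};
  std::cout << isLegalPredicatedLoad(32, 1, AVX2) << '\n'; // 1: 32-bit consecutive access is legal
  std::cout << isLegalPredicatedLoad(16, 1, AVX2) << '\n'; // 0: sub-32-bit elements rejected
  std::cout << isLegalPredicatedLoad(32, 0, AVX2) << '\n'; // 0: non-consecutive access rejected
}

Stores share the same rule because isLegalPredicatedStore simply forwards to the load check; the TODO in the patch notes that AVX-512 gather/scatter could eventually make the non-consecutive case legal as well.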