author     Sanjay Patel <spatel@rotateright.com>  2016-02-29 23:16:48 +0000
committer  Sanjay Patel <spatel@rotateright.com>  2016-02-29 23:16:48 +0000
commit     98a71505f5c2ea56c6cc4c9ccc67f25355cf1d93
tree       15e655943b829cfac82ef8098c832b0c2771d136
parent     fe2f7f367a09cc5b0330b87c0f0b0078a0700a8f
[x86, InstCombine] transform x86 AVX masked loads to LLVM intrinsics
The intended effect of this patch in conjunction with:
http://reviews.llvm.org/rL259392
http://reviews.llvm.org/rL260145
is that customers using the AVX intrinsics in C will benefit from combines when
the load mask is constant:
#include <immintrin.h>

/* all-zero mask: no sign bits set */
__m128 mload_zeros(float *f) {
  return _mm_maskload_ps(f, _mm_set1_epi32(0));
}

/* mask of 1 in each lane: sign bits still clear */
__m128 mload_fakeones(float *f) {
  return _mm_maskload_ps(f, _mm_set1_epi32(1));
}

/* sign bit set in every lane */
__m128 mload_ones(float *f) {
  return _mm_maskload_ps(f, _mm_set1_epi32(0x80000000));
}

/* sign bit set in only one lane */
__m128 mload_oneset(float *f) {
  return _mm_maskload_ps(f, _mm_set_epi32(0x80000000, 0, 0, 0));
}
...so none of the above will actually generate a masked load for optimized code.
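For background (not part of the patch text): _mm_maskload_ps / VMASKMOVPS only consults the sign bit (bit 31) of each 32-bit mask element, which is why a constant mask of 0 or 1 selects nothing while 0x80000000 in every lane behaves like a plain load. A rough C model of that per-lane behavior, using a hypothetical helper name maskload_ps_model, could look like this:

#include <immintrin.h>
#include <string.h>

/* Illustrative model only (not the intrinsic's implementation): lane i is
   loaded iff bit 31 of mask element i is set; unselected lanes become 0. */
__m128 maskload_ps_model(const float *f, __m128i mask) {
  int m[4];
  float out[4];
  memcpy(m, &mask, sizeof m);
  for (int i = 0; i < 4; ++i)
    out[i] = (m[i] < 0) ? f[i] : 0.0f;  /* m[i] < 0 <=> sign bit set */
  __m128 r;
  memcpy(&r, out, sizeof r);
  return r;
}

Under that model, the first two examples above always produce zero and the third is just an ordinary 16-byte load, which is why the constant-mask calls can be folded without emitting a masked load.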
This is the masked load counterpart to:
http://reviews.llvm.org/rL262064
llvm-svn: 262269