| author | Nadav Rotem <nadav.rotem@intel.com> | 2012-01-15 19:27:55 +0000 |
|---|---|---|
| committer | Nadav Rotem <nadav.rotem@intel.com> | 2012-01-15 19:27:55 +0000 |
| commit | 57935243bdf2bdf5ac5dc2ce24040349659129d8 (patch) | |
| tree | 243b65bdfb7a741ec4ceb804bce1040af094b148 /llvm/test/CodeGen | |
| parent | 3a5ae564b8bb06f79c215526357eea5f213b78a6 (diff) | |
| download | bcm5719-llvm-57935243bdf2bdf5ac5dc2ce24040349659129d8.tar.gz bcm5719-llvm-57935243bdf2bdf5ac5dc2ce24040349659129d8.zip | |
[AVX] Optimize x86 VSELECT instructions using SimplifyDemandedBits.
We know that the blend instructions only use the MSB of the mask, so if the
mask is sign-extended then we can replace the sign-extension with a single
SHL instruction. This is a common pattern because the type legalizer
sign-extends the i1 type that LLVM IR uses for the select condition.
Added a new optimization in SimplifyDemandedBits for SIGN_EXTEND_INREG -> SHL.
llvm-svn: 148225
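The transform rests on a simple bit-level fact: when only the most significant bit of each lane is demanded (as it is by the blend instructions), sign_extend_inreg from N bits produces the same MSB as a plain shift-left by (BitWidth - N), so the arithmetic shift-right half of the usual shl+sra expansion can be dropped. Below is a minimal standalone sketch of that equivalence; it is not the code added to SimplifyDemandedBits, and the 32-bit lane width and helper name are illustrative assumptions.

```cpp
// Conceptual sketch only (not the LLVM DAG-combine code): check that when
// just the MSB of a lane is demanded, sign_extend_inreg from NBits is
// interchangeable with a shift-left by (32 - NBits).
#include <cassert>
#include <cstdint>

// Sign-extend the low NBits of x to the full 32 bits (illustrative helper).
static uint32_t SignExtendInReg(uint32_t x, unsigned NBits) {
  unsigned Shift = 32 - NBits;
  return static_cast<uint32_t>(static_cast<int32_t>(x << Shift) >> Shift);
}

int main() {
  const unsigned NBits = 1; // the i1 select condition after legalization
  for (uint32_t x = 0; x < 256; ++x) {
    uint32_t SExt = SignExtendInReg(x, NBits); // shl + sra expansion
    uint32_t Shl = x << (32 - NBits);          // shl only
    // Only the MSB is read by the blend, and it is identical in both forms.
    assert((SExt & 0x80000000u) == (Shl & 0x80000000u));
  }
  return 0;
}
```

This is also why the vsel_float and vsel_4xi8 tests below expect a pslld feeding blendvps, with no arithmetic shift-right in between.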
Diffstat (limited to 'llvm/test/CodeGen')
| -rw-r--r-- | llvm/test/CodeGen/X86/blend-msb.ll | 37 |
1 files changed, 37 insertions, 0 deletions
diff --git a/llvm/test/CodeGen/X86/blend-msb.ll b/llvm/test/CodeGen/X86/blend-msb.ll
new file mode 100644
index 00000000000..3a10c70ada8
--- /dev/null
+++ b/llvm/test/CodeGen/X86/blend-msb.ll
@@ -0,0 +1,37 @@
+; RUN: llc < %s -mtriple=x86_64-apple-darwin -mcpu=corei7 -promote-elements -mattr=+sse41 | FileCheck %s
+
+
+; In this test we check that sign-extend of the mask bit is performed by
+; shifting the needed bit to the MSB, and not using shl+sra.
+
+;CHECK: vsel_float
+;CHECK: pslld
+;CHECK-NEXT: blendvps
+;CHECK: ret
+define <4 x float> @vsel_float(<4 x float> %v1, <4 x float> %v2) {
+  %vsel = select <4 x i1> <i1 true, i1 false, i1 false, i1 false>, <4 x float> %v1, <4 x float> %v2
+  ret <4 x float> %vsel
+}
+
+;CHECK: vsel_4xi8
+;CHECK: pslld
+;CHECK-NEXT: blendvps
+;CHECK: ret
+define <4 x i8> @vsel_4xi8(<4 x i8> %v1, <4 x i8> %v2) {
+  %vsel = select <4 x i1> <i1 true, i1 false, i1 false, i1 false>, <4 x i8> %v1, <4 x i8> %v2
+  ret <4 x i8> %vsel
+}
+
+
+; We do not have native support for v8i16 blends and we have to use the
+; blendvb instruction or a sequence of NAND/OR/AND. Make sure that we do not
+; reduce the mask in this case.
+;CHECK: vsel_8xi16
+;CHECK: psllw
+;CHECK: psraw
+;CHECK: pblendvb
+;CHECK: ret
+define <8 x i16> @vsel_8xi16(<8 x i16> %v1, <8 x i16> %v2) {
+  %vsel = select <8 x i1> <i1 true, i1 false, i1 false, i1 false, i1 true, i1 false, i1 false, i1 false>, <8 x i16> %v1, <8 x i16> %v2
+  ret <8 x i16> %vsel
+}

