| author | Sanjay Patel <spatel@rotateright.com> | 2019-01-22 14:24:13 +0000 |
|---|---|---|
| committer | Sanjay Patel <spatel@rotateright.com> | 2019-01-22 14:24:13 +0000 |
| commit | effee52c59a0716980d545a97ab61d0c9839660a (patch) | |
| tree | 4c44e74681a39c4502d74e00970abc0518cb5a24 /llvm/test/CodeGen/X86/vector-partial-undef.ll | |
| parent | 121fcd7ec6ad051750cb56756d7895f12783802e (diff) | |
[DAGCombiner] narrow vector binop with 2 insert subvector operands
vecbo (insertsubv undef, X, Z), (insertsubv undef, Y, Z) --> insertsubv VecC, (vecbo X, Y), Z
This is another step in generic vector narrowing. It's also a step towards more
horizontal op formation, specifically for x86 (although we still fail to match
those in the affected tests).
The scalarization cases are also not optimal (ideally we would scalarize those
rather than narrow them), but it's still an improvement to use a narrower vector
op when we know part of the result must be constant, because both inputs are
undef in the same vector lanes.
I think a similar match that checks for a constant operand (instead of undef)
might help some of the cases in D51553.
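As a hedged sketch of that variant (my illustration, not code from D51553): the outer lanes would come from constants rather than undef, so the narrowed form would insert the narrow binop result into the constant-folded lanes.

```llvm
; Hypothetical constant-operand variant: the high lanes are constants,
; so after narrowing they would fold to <i64 2, i64 6> (1 xor 3 and
; 2 xor 4), while the xor itself could still be done at <2 x i64>.
define <4 x i64> @wide_constant_lanes(<2 x i64> %x, <2 x i64> %y) {
  %xw = shufflevector <2 x i64> %x, <2 x i64> <i64 1, i64 2>, <4 x i32> <i32 0, i32 1, i32 2, i32 3>
  %yw = shufflevector <2 x i64> %y, <2 x i64> <i64 3, i64 4>, <4 x i32> <i32 0, i32 1, i32 2, i32 3>
  %r = xor <4 x i64> %xw, %yw
  ret <4 x i64> %r
}
```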
Differential Revision: https://reviews.llvm.org/D56875
llvm-svn: 351825
Diffstat (limited to 'llvm/test/CodeGen/X86/vector-partial-undef.ll')
-rw-r--r-- | llvm/test/CodeGen/X86/vector-partial-undef.ll | 10 |
1 file changed, 4 insertions(+), 6 deletions(-)
```diff
diff --git a/llvm/test/CodeGen/X86/vector-partial-undef.ll b/llvm/test/CodeGen/X86/vector-partial-undef.ll
index 1cd3415d082..2b4ab11fea5 100644
--- a/llvm/test/CodeGen/X86/vector-partial-undef.ll
+++ b/llvm/test/CodeGen/X86/vector-partial-undef.ll
@@ -13,9 +13,7 @@ define <4 x i64> @xor_insert_insert(<2 x i64> %x, <2 x i64> %y) {
 ;
 ; AVX-LABEL: xor_insert_insert:
 ; AVX: # %bb.0:
-; AVX-NEXT: # kill: def $xmm1 killed $xmm1 def $ymm1
-; AVX-NEXT: # kill: def $xmm0 killed $xmm0 def $ymm0
-; AVX-NEXT: vxorps %ymm1, %ymm0, %ymm0
+; AVX-NEXT: vxorps %xmm1, %xmm0, %xmm0
 ; AVX-NEXT: retq
 %xw = shufflevector <2 x i64> %x, <2 x i64> undef, <4 x i32> <i32 0, i32 1, i32 undef, i32 undef>
 %yw = shufflevector <2 x i64> %y, <2 x i64> undef, <4 x i32> <i32 0, i32 1, i32 undef, i32 undef>
@@ -32,9 +30,9 @@ define <4 x i64> @xor_insert_insert_high_half(<2 x i64> %x, <2 x i64> %y) {
 ;
 ; AVX-LABEL: xor_insert_insert_high_half:
 ; AVX: # %bb.0:
-; AVX-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; AVX-NEXT: vinsertf128 $1, %xmm1, %ymm0, %ymm1
-; AVX-NEXT: vxorps %ymm1, %ymm0, %ymm0
+; AVX-NEXT: vxorps %xmm1, %xmm0, %xmm0
+; AVX-NEXT: vxorps %xmm1, %xmm1, %xmm1
+; AVX-NEXT: vinsertf128 $1, %xmm0, %ymm1, %ymm0
 ; AVX-NEXT: retq
 %xw = shufflevector <2 x i64> %x, <2 x i64> undef, <4 x i32> <i32 undef, i32 undef, i32 0, i32 1>
 %yw = shufflevector <2 x i64> %y, <2 x i64> undef, <4 x i32> <i32 undef, i32 undef, i32 0, i32 1>
```
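For reference, the high-half test above can be exercised as a standalone reproducer along these lines; the RUN line is an assumption on my part (the committed test's RUN lines and check prefixes may differ).

```llvm
; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=avx | FileCheck %s
; Assumed RUN line; the committed test may use different options/prefixes.
; The low lanes of both operands are undef, so their xor folds to a
; constant: the new codegen zeroes xmm1 and inserts the narrow xor
; result into the high 128 bits above it.
define <4 x i64> @xor_insert_insert_high_half(<2 x i64> %x, <2 x i64> %y) {
; CHECK-LABEL: xor_insert_insert_high_half:
; CHECK:         vxorps %xmm1, %xmm0, %xmm0
; CHECK-NEXT:    vxorps %xmm1, %xmm1, %xmm1
; CHECK-NEXT:    vinsertf128 $1, %xmm0, %ymm1, %ymm0
; CHECK-NEXT:    retq
  %xw = shufflevector <2 x i64> %x, <2 x i64> undef, <4 x i32> <i32 undef, i32 undef, i32 0, i32 1>
  %yw = shufflevector <2 x i64> %y, <2 x i64> undef, <4 x i32> <i32 undef, i32 undef, i32 0, i32 1>
  %r = xor <4 x i64> %xw, %yw
  ret <4 x i64> %r
}
```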