| author | Sanjay Patel <spatel@rotateright.com> | 2019-06-07 13:17:46 +0000 |
|---|---|---|
| committer | Sanjay Patel <spatel@rotateright.com> | 2019-06-07 13:17:46 +0000 |
| commit | 6880bceda2df17f68e319c86a78642125086e0b8 (patch) | |
| tree | 936eca4e86abcdbd7788df22c7cbd579b61b21c0 /llvm/test/CodeGen/X86/psubus.ll | |
| parent | 0723c659f5838a5f67cd6ef5133f7d0e9464b122 (diff) | |
[x86] narrow extract subvector of vector select
This is a potentially large perf win for AVX1 targets because of the way we
auto-vectorize to 256-bit but then expect the backend to legalize/optimize
for the half-implemented AVX1 ISA (256-bit floating-point ops exist, but most
256-bit integer ops do not).
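For illustration, here is a sketch in LLVM IR of the kind of pattern the combine targets (the function and its name are hypothetical, not a test from this patch). Conceptually, the backend rewrite is extract_subvector (vselect C, X, Y) --> vselect (extract C), (extract X), (extract Y):

```llvm
; Illustrative only: a 256-bit select whose result is consumed through a
; 128-bit extract. Narrowing the select lets AVX1 emit a 128-bit vblendvpd
; directly instead of materializing the full 256-bit blend and splitting it.
define <2 x i64> @extract_of_select(<4 x i64> %a, <4 x i64> %b, <4 x i1> %c) {
  %sel = select <4 x i1> %c, <4 x i64> %a, <4 x i64> %b
  ; shufflevector with indices 2,3 is the IR form of extracting the
  ; high 128-bit subvector
  %hi = shufflevector <4 x i64> %sel, <4 x i64> undef, <2 x i32> <i32 2, i32 3>
  ret <2 x i64> %hi
}
```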
On the motivating example from PR37428 (even though this patch doesn't solve
the vector shift issue):
https://bugs.llvm.org/show_bug.cgi?id=37428
...there's a 16% speedup when compiling with "-mavx" (perf tested on Haswell)
because we eliminate the remaining 256-bit vblendv ops.
I added comments on a couple of tests that require further work. If we have
256-bit logic ops separating the vselect and extract, we should probably narrow
everything to 128-bit, but that requires a larger pattern match.
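As a sketch of that blocked case (again hypothetical IR, not a test from this patch), an intervening 256-bit logic op keeps the extract from consuming the vselect directly:

```llvm
; Illustrative only: the 256-bit 'and' sits between the select and the
; extract, so a combine that matches only extract-of-select doesn't fire;
; narrowing the select, the 'and', and the extract together would need the
; larger pattern match described above.
define <2 x i64> @extract_of_logic_of_select(<4 x i64> %a, <4 x i64> %b,
                                             <4 x i64> %m, <4 x i1> %c) {
  %sel = select <4 x i1> %c, <4 x i64> %a, <4 x i64> %b
  %and = and <4 x i64> %sel, %m
  %hi = shufflevector <4 x i64> %and, <4 x i64> undef, <2 x i32> <i32 2, i32 3>
  ret <2 x i64> %hi
}
```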
Differential Revision: https://reviews.llvm.org/D62969
llvm-svn: 362797
Diffstat (limited to 'llvm/test/CodeGen/X86/psubus.ll')
| -rw-r--r-- | llvm/test/CodeGen/X86/psubus.ll | 40 |
1 file changed, 19 insertions, 21 deletions
```diff
diff --git a/llvm/test/CodeGen/X86/psubus.ll b/llvm/test/CodeGen/X86/psubus.ll
index c2e05027f5b..35e78e6e560 100644
--- a/llvm/test/CodeGen/X86/psubus.ll
+++ b/llvm/test/CodeGen/X86/psubus.ll
@@ -1724,27 +1724,25 @@ define <8 x i16> @psubus_8i64_max(<8 x i16> %x, <8 x i64> %y) nounwind {
 ;
 ; AVX1-LABEL: psubus_8i64_max:
 ; AVX1:       # %bb.0: # %vector.ph
-; AVX1-NEXT:    vextractf128 $1, %ymm2, %xmm3
-; AVX1-NEXT:    vmovdqa {{.*#+}} xmm4 = [9223372036854775808,9223372036854775808]
-; AVX1-NEXT:    vpxor %xmm4, %xmm3, %xmm3
-; AVX1-NEXT:    vmovdqa {{.*#+}} xmm5 = [9223372036854841343,9223372036854841343]
-; AVX1-NEXT:    vpcmpgtq %xmm3, %xmm5, %xmm3
-; AVX1-NEXT:    vpxor %xmm4, %xmm2, %xmm6
-; AVX1-NEXT:    vpcmpgtq %xmm6, %xmm5, %xmm6
-; AVX1-NEXT:    vinsertf128 $1, %xmm3, %ymm6, %ymm3
-; AVX1-NEXT:    vmovapd {{.*#+}} ymm6 = [65535,65535,65535,65535]
-; AVX1-NEXT:    vblendvpd %ymm3, %ymm2, %ymm6, %ymm2
-; AVX1-NEXT:    vextractf128 $1, %ymm2, %xmm3
-; AVX1-NEXT:    vpackusdw %xmm3, %xmm2, %xmm2
-; AVX1-NEXT:    vextractf128 $1, %ymm1, %xmm3
-; AVX1-NEXT:    vpxor %xmm4, %xmm3, %xmm3
-; AVX1-NEXT:    vpcmpgtq %xmm3, %xmm5, %xmm3
-; AVX1-NEXT:    vpxor %xmm4, %xmm1, %xmm4
-; AVX1-NEXT:    vpcmpgtq %xmm4, %xmm5, %xmm4
-; AVX1-NEXT:    vinsertf128 $1, %xmm3, %ymm4, %ymm3
-; AVX1-NEXT:    vblendvpd %ymm3, %ymm1, %ymm6, %ymm1
-; AVX1-NEXT:    vextractf128 $1, %ymm1, %xmm3
-; AVX1-NEXT:    vpackusdw %xmm3, %xmm1, %xmm1
+; AVX1-NEXT:    vmovapd {{.*#+}} xmm3 = [65535,65535]
+; AVX1-NEXT:    vextractf128 $1, %ymm2, %xmm4
+; AVX1-NEXT:    vmovdqa {{.*#+}} xmm5 = [9223372036854775808,9223372036854775808]
+; AVX1-NEXT:    vpxor %xmm5, %xmm4, %xmm6
+; AVX1-NEXT:    vmovdqa {{.*#+}} xmm7 = [9223372036854841343,9223372036854841343]
+; AVX1-NEXT:    vpcmpgtq %xmm6, %xmm7, %xmm6
+; AVX1-NEXT:    vblendvpd %xmm6, %xmm4, %xmm3, %xmm4
+; AVX1-NEXT:    vpxor %xmm5, %xmm2, %xmm6
+; AVX1-NEXT:    vpcmpgtq %xmm6, %xmm7, %xmm6
+; AVX1-NEXT:    vblendvpd %xmm6, %xmm2, %xmm3, %xmm2
+; AVX1-NEXT:    vpackusdw %xmm4, %xmm2, %xmm2
+; AVX1-NEXT:    vextractf128 $1, %ymm1, %xmm4
+; AVX1-NEXT:    vpxor %xmm5, %xmm4, %xmm6
+; AVX1-NEXT:    vpcmpgtq %xmm6, %xmm7, %xmm6
+; AVX1-NEXT:    vblendvpd %xmm6, %xmm4, %xmm3, %xmm4
+; AVX1-NEXT:    vpxor %xmm5, %xmm1, %xmm5
+; AVX1-NEXT:    vpcmpgtq %xmm5, %xmm7, %xmm5
+; AVX1-NEXT:    vblendvpd %xmm5, %xmm1, %xmm3, %xmm1
+; AVX1-NEXT:    vpackusdw %xmm4, %xmm1, %xmm1
 ; AVX1-NEXT:    vpackusdw %xmm2, %xmm1, %xmm1
 ; AVX1-NEXT:    vpsubusw %xmm1, %xmm0, %xmm0
 ; AVX1-NEXT:    vzeroupper
```
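Net effect visible in the hunk: the vinsertf128 ops that rebuilt 256-bit compare masks and the extra vextractf128 ops that split the 256-bit blend results are gone; each 128-bit half is now clamped with a 128-bit vblendvpd against the narrowed [65535,65535] constant, for a sequence two instructions shorter overall.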

