| author | Craig Topper <craig.topper@intel.com> | 2018-11-09 18:04:34 +0000 |
|---|---|---|
| committer | Craig Topper <craig.topper@intel.com> | 2018-11-09 18:04:34 +0000 |
| commit | 9a7e19b8f28eb817d9183d8f921abf5aae31f210 (patch) | |
| tree | 67536c761bb62051322dacd9138e25edebf89e42 /llvm/test/CodeGen/X86/known-signbits-vector.ll | |
| parent | dcf1f8e7169285486d00409484a2a9b0ee14eb6d (diff) | |
[DAGCombiner][X86][Mips] Enable combineShuffleOfScalars to run between vector op legalization and DAG legalization. Fix a bad one-use check in combineShuffleOfScalars
It's possible for vector op legalization to generate a shuffle. If that happens, we should give DAG combine a chance to fold it with a build_vector input.
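As a minimal sketch of what that fold amounts to (a toy model, not the LLVM DAGCombiner source; `Scalar`, `BuildVector`, and `foldShuffleOfBuildVectors` are invented names, and undef is modeled as `std::nullopt`), a shuffle whose operands are build_vectors can be replaced by a single build_vector that picks each lane's scalar directly:

```cpp
#include <cstdint>
#include <iostream>
#include <optional>
#include <string>
#include <vector>

// Toy model: a "build_vector" is just the list of scalars it was built from;
// std::nullopt stands in for an undef scalar.
using Scalar = std::optional<int64_t>;
using BuildVector = std::vector<Scalar>;

// Fold shuffle(BV0, BV1, Mask) into one build_vector by selecting each lane's
// scalar straight from the source build_vectors. A mask entry of -1 means the
// output lane is undef; entries [0, N) index BV0 and [N, 2N) index BV1.
BuildVector foldShuffleOfBuildVectors(const BuildVector &BV0,
                                      const BuildVector &BV1,
                                      const std::vector<int> &Mask) {
  const int N = static_cast<int>(BV0.size());
  BuildVector Result;
  Result.reserve(Mask.size());
  for (int M : Mask) {
    if (M < 0)
      Result.push_back(std::nullopt);  // undef lane
    else if (M < N)
      Result.push_back(BV0[M]);        // lane from the first input
    else
      Result.push_back(BV1[M - N]);    // lane from the second input
  }
  return Result;
}

int main() {
  // shuffle(<a, u, b, u>, <c, u, d, u>, <0, 2, 4, 6>) -> <a, b, c, d>
  BuildVector BV0 = {10, std::nullopt, 11, std::nullopt};
  BuildVector BV1 = {12, std::nullopt, 13, std::nullopt};
  BuildVector Out = foldShuffleOfBuildVectors(BV0, BV1, {0, 2, 4, 6});
  for (const Scalar &S : Out)
    std::cout << (S ? std::to_string(*S) : "undef") << ' ';
  std::cout << '\n';  // prints: 10 11 12 13
}
```

The X86 test change below shows the same effect in generated code: two partially built vectors plus a vshufps collapse into a single chain of vpinsrd inserts.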
I also fixed a bug in combineShuffleOfScalars that considered the number of uses on an undef input to the shuffle. We don't care how many times undef is used.
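The intent of that guard is roughly the following (again a hedged toy model rather than the real combineShuffleOfScalars code; `Node` and `shouldFoldShuffleOfScalars` are made-up names): only real operands have their use count checked, and undef operands are skipped entirely.

```cpp
#include <iostream>

// Toy stand-in for a value feeding the shuffle.
struct Node {
  bool IsUndef = false;  // undef costs nothing to duplicate or drop
  int NumUses = 0;       // number of nodes that read this value
};

// One-use guard for folding a shuffle of build_vectors: each *real* operand
// must be used only by the shuffle. An undef operand is skipped; counting its
// uses could wrongly block the fold just because the same undef node happens
// to be shared elsewhere in the DAG.
bool shouldFoldShuffleOfScalars(const Node &Op0, const Node &Op1) {
  auto usableOnce = [](const Node &Op) {
    return Op.IsUndef || Op.NumUses == 1;
  };
  return usableOnce(Op0) && usableOnce(Op1);
}

int main() {
  Node BuildVec{false, 1};  // real build_vector, only the shuffle uses it
  Node Undef{true, 42};     // widely shared undef
  std::cout << std::boolalpha
            << shouldFoldShuffleOfScalars(BuildVec, Undef) << '\n';  // true
}
```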
Differential Revision: https://reviews.llvm.org/D54283
llvm-svn: 346530
Diffstat (limited to 'llvm/test/CodeGen/X86/known-signbits-vector.ll')
| -rw-r--r-- | llvm/test/CodeGen/X86/known-signbits-vector.ll | 7 |
1 file changed, 3 insertions, 4 deletions

    diff --git a/llvm/test/CodeGen/X86/known-signbits-vector.ll b/llvm/test/CodeGen/X86/known-signbits-vector.ll
    index 64bca733068..17af450d4b6 100644
    --- a/llvm/test/CodeGen/X86/known-signbits-vector.ll
    +++ b/llvm/test/CodeGen/X86/known-signbits-vector.ll
    @@ -28,10 +28,9 @@ define <4 x float> @signbits_sext_v4i64_sitofp_v4f32(i8 signext %a0, i16 signext
     ; X32-NEXT: movswl {{[0-9]+}}(%esp), %eax
     ; X32-NEXT: movsbl {{[0-9]+}}(%esp), %ecx
     ; X32-NEXT: vmovd %ecx, %xmm0
    -; X32-NEXT: vpinsrd $2, %eax, %xmm0, %xmm0
    -; X32-NEXT: vmovd {{.*#+}} xmm1 = mem[0],zero,zero,zero
    -; X32-NEXT: vpinsrd $2, {{[0-9]+}}(%esp), %xmm1, %xmm1
    -; X32-NEXT: vshufps {{.*#+}} xmm0 = xmm0[0,2],xmm1[0,2]
    +; X32-NEXT: vpinsrd $1, %eax, %xmm0, %xmm0
    +; X32-NEXT: vpinsrd $2, {{[0-9]+}}(%esp), %xmm0, %xmm0
    +; X32-NEXT: vpinsrd $3, {{[0-9]+}}(%esp), %xmm0, %xmm0
     ; X32-NEXT: vcvtdq2ps %xmm0, %xmm0
     ; X32-NEXT: retl
     ;

