| author | Sanjay Patel <spatel@rotateright.com> | 2017-01-20 22:18:47 +0000 |
|---|---|---|
| committer | Sanjay Patel <spatel@rotateright.com> | 2017-01-20 22:18:47 +0000 |
| commit | 0c1c70aef4a1d97188fde20e894bafeccd64224c (patch) | |
| tree | 93df8c022e8f943244b21ba8e5bf5784b2f5da7a /llvm/test/CodeGen | |
| parent | 867be0d14ce393108d848bcd9d080a92ca6b0006 (diff) | |
[ValueTracking] recognize variations of 'clamp' to improve codegen (PR31693)
By enhancing value tracking, we allow an existing min/max canonicalization to
kick in and improve codegen for several targets that have min/max instructions.
Unfortunately, recognizing min/max in value tracking may cause us to hit
a hack in InstCombiner::visitICmpInst() more often:
http://lists.llvm.org/pipermail/llvm-dev/2017-January/109340.html
...but I'm hoping we can remove that soon.
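As a scalar illustration (not part of the commit), the pattern being recognized is the two-compare/two-select 'clamp', which is equivalent to nested min/max and can therefore lower to back-to-back min/max instructions (e.g. vpminsd/vpmaxsd on x86). A minimal Python sketch, assuming lo < hi:

```python
def clamp_selects(x, lo, hi):
    # Mirrors the IR shape: %min = select (x slt hi), x, hi
    #                       %r   = select (x slt lo), lo, %min
    m = x if x < hi else hi
    return lo if x < lo else m

def clamp_minmax(x, lo, hi):
    # Canonical nested min/max form: max(min(x, hi), lo)
    return max(min(x, hi), lo)
```

For lo < hi the two forms agree for every x, which is what lets the canonicalization kick in once value tracking sees through the selects.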
Correctness proofs based on Alive:
Name: smaxmin
Pre: C1 < C2
%cmp2 = icmp slt i8 %x, C2
%min = select i1 %cmp2, i8 %x, i8 C2
%cmp3 = icmp slt i8 %x, C1
%r = select i1 %cmp3, i8 C1, i8 %min
=>
%cmp2 = icmp slt i8 %x, C2
%min = select i1 %cmp2, i8 %x, i8 C2
%cmp1 = icmp sgt i8 %min, C1
%r = select i1 %cmp1, i8 %min, i8 C1
Name: sminmax
Pre: C1 > C2
%cmp2 = icmp sgt i8 %x, C2
%max = select i1 %cmp2, i8 %x, i8 C2
%cmp3 = icmp sgt i8 %x, C1
%r = select i1 %cmp3, i8 C1, i8 %max
=>
%cmp2 = icmp sgt i8 %x, C2
%max = select i1 %cmp2, i8 %x, i8 C2
%cmp1 = icmp slt i8 %max, C1
%r = select i1 %cmp1, i8 %max, i8 C1
----------------------------------------
Optimization: smaxmin
Done: 1
Optimization is correct!
----------------------------------------
Optimization: sminmax
Done: 1
Optimization is correct!
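Beyond the Alive proofs above, the signed rewrite can also be sanity-checked exhaustively over all i8 values in Python (the constant pairs below are arbitrary examples satisfying C1 < C2, not taken from the commit):

```python
def smaxmin_src(x, c1, c2):
    # %min = select (x slt C2), x, C2 ; %r = select (x slt C1), C1, %min
    mn = x if x < c2 else c2
    return c1 if x < c1 else mn

def smaxmin_tgt(x, c1, c2):
    # %min = select (x slt C2), x, C2 ; %r = select (%min sgt C1), %min, C1
    mn = x if x < c2 else c2
    return mn if mn > c1 else c1

# Check every signed i8 input for a few C1 < C2 pairs.
for c1, c2 in [(-128, 127), (-1, 1), (10, 100)]:
    assert all(smaxmin_src(x, c1, c2) == smaxmin_tgt(x, c1, c2)
               for x in range(-128, 128))
```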
Name: umaxmin
Pre: C1 u< C2
%cmp2 = icmp ult i8 %x, C2
%min = select i1 %cmp2, i8 %x, i8 C2
%cmp3 = icmp ult i8 %x, C1
%r = select i1 %cmp3, i8 C1, i8 %min
=>
%cmp2 = icmp ult i8 %x, C2
%min = select i1 %cmp2, i8 %x, i8 C2
%cmp1 = icmp ugt i8 %min, C1
%r = select i1 %cmp1, i8 %min, i8 C1
Name: uminmax
Pre: C1 u> C2
%cmp2 = icmp ugt i8 %x, C2
%max = select i1 %cmp2, i8 %x, i8 C2
%cmp3 = icmp ugt i8 %x, C1
%r = select i1 %cmp3, i8 C1, i8 %max
=>
%cmp2 = icmp ugt i8 %x, C2
%max = select i1 %cmp2, i8 %x, i8 C2
%cmp1 = icmp ult i8 %max, C1
%r = select i1 %cmp1, i8 %max, i8 C1
----------------------------------------
Optimization: umaxmin
Done: 1
Optimization is correct!
----------------------------------------
Optimization: uminmax
Done: 1
Optimization is correct!
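The unsigned rewrite admits the same kind of exhaustive check, treating i8 as the range 0..255 so that Python's ordinary comparisons model ult/ugt (constant pairs are arbitrary examples with C1 u< C2):

```python
def umaxmin_src(x, c1, c2):
    # %min = select (x ult C2), x, C2 ; %r = select (x ult C1), C1, %min
    mn = x if x < c2 else c2
    return c1 if x < c1 else mn

def umaxmin_tgt(x, c1, c2):
    # %min = select (x ult C2), x, C2 ; %r = select (%min ugt C1), %min, C1
    mn = x if x < c2 else c2
    return mn if mn > c1 else c1

# Check every unsigned i8 input for a few C1 u< C2 pairs.
for c1, c2 in [(0, 255), (15, 255), (3, 7)]:
    assert all(umaxmin_src(x, c1, c2) == umaxmin_tgt(x, c1, c2)
               for x in range(256))
```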
llvm-svn: 292660
Diffstat (limited to 'llvm/test/CodeGen')
| -rw-r--r-- | llvm/test/CodeGen/X86/vec_minmax_match.ll | 26 |
1 file changed, 8 insertions(+), 18 deletions(-)
diff --git a/llvm/test/CodeGen/X86/vec_minmax_match.ll b/llvm/test/CodeGen/X86/vec_minmax_match.ll
index 6644d5dc84b..98f77912779 100644
--- a/llvm/test/CodeGen/X86/vec_minmax_match.ll
+++ b/llvm/test/CodeGen/X86/vec_minmax_match.ll
@@ -1,5 +1,4 @@
 ; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
-; NOTE: Assertions have been autogenerated by utils/update_test_checks.py
 ; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=+avx | FileCheck %s
 
 ; These are actually tests of ValueTracking, and so may have test coverage in InstCombine or other
@@ -165,10 +164,8 @@ define <4 x i32> @umin_vec2(<4 x i32> %x) {
 define <4 x i32> @clamp_signed1(<4 x i32> %x) {
 ; CHECK-LABEL: clamp_signed1:
 ; CHECK: # BB#0:
-; CHECK-NEXT: vpminsd {{.*}}(%rip), %xmm0, %xmm1
-; CHECK-NEXT: vmovdqa {{.*#+}} xmm2 = [15,15,15,15]
-; CHECK-NEXT: vpcmpgtd %xmm0, %xmm2, %xmm0
-; CHECK-NEXT: vblendvps %xmm0, %xmm2, %xmm1, %xmm0
+; CHECK-NEXT: vpminsd {{.*}}(%rip), %xmm0, %xmm0
+; CHECK-NEXT: vpmaxsd {{.*}}(%rip), %xmm0, %xmm0
 ; CHECK-NEXT: retq
   %cmp2 = icmp slt <4 x i32> %x, <i32 255, i32 255, i32 255, i32 255>
   %min = select <4 x i1> %cmp2, <4 x i32> %x, <4 x i32><i32 255, i32 255, i32 255, i32 255>
@@ -182,10 +179,8 @@ define <4 x i32> @clamp_signed1(<4 x i32> %x) {
 define <4 x i32> @clamp_signed2(<4 x i32> %x) {
 ; CHECK-LABEL: clamp_signed2:
 ; CHECK: # BB#0:
-; CHECK-NEXT: vpmaxsd {{.*}}(%rip), %xmm0, %xmm1
-; CHECK-NEXT: vmovdqa {{.*#+}} xmm2 = [255,255,255,255]
-; CHECK-NEXT: vpcmpgtd %xmm2, %xmm0, %xmm0
-; CHECK-NEXT: vblendvps %xmm0, %xmm2, %xmm1, %xmm0
+; CHECK-NEXT: vpmaxsd {{.*}}(%rip), %xmm0, %xmm0
+; CHECK-NEXT: vpminsd {{.*}}(%rip), %xmm0, %xmm0
 ; CHECK-NEXT: retq
   %cmp2 = icmp sgt <4 x i32> %x, <i32 15, i32 15, i32 15, i32 15>
   %max = select <4 x i1> %cmp2, <4 x i32> %x, <4 x i32><i32 15, i32 15, i32 15, i32 15>
@@ -199,11 +194,8 @@ define <4 x i32> @clamp_signed2(<4 x i32> %x) {
 define <4 x i32> @clamp_unsigned1(<4 x i32> %x) {
 ; CHECK-LABEL: clamp_unsigned1:
 ; CHECK: # BB#0:
-; CHECK-NEXT: vpminud {{.*}}(%rip), %xmm0, %xmm1
-; CHECK-NEXT: vpxor {{.*}}(%rip), %xmm0, %xmm0
-; CHECK-NEXT: vmovdqa {{.*#+}} xmm2 = [2147483663,2147483663,2147483663,2147483663]
-; CHECK-NEXT: vpcmpgtd %xmm0, %xmm2, %xmm0
-; CHECK-NEXT: vblendvps %xmm0, {{.*}}(%rip), %xmm1, %xmm0
+; CHECK-NEXT: vpminud {{.*}}(%rip), %xmm0, %xmm0
+; CHECK-NEXT: vpmaxud {{.*}}(%rip), %xmm0, %xmm0
 ; CHECK-NEXT: retq
   %cmp2 = icmp ult <4 x i32> %x, <i32 255, i32 255, i32 255, i32 255>
   %min = select <4 x i1> %cmp2, <4 x i32> %x, <4 x i32><i32 255, i32 255, i32 255, i32 255>
@@ -217,10 +209,8 @@ define <4 x i32> @clamp_unsigned1(<4 x i32> %x) {
 define <4 x i32> @clamp_unsigned2(<4 x i32> %x) {
 ; CHECK-LABEL: clamp_unsigned2:
 ; CHECK: # BB#0:
-; CHECK-NEXT: vpmaxud {{.*}}(%rip), %xmm0, %xmm1
-; CHECK-NEXT: vpxor {{.*}}(%rip), %xmm0, %xmm0
-; CHECK-NEXT: vpcmpgtd {{.*}}(%rip), %xmm0, %xmm0
-; CHECK-NEXT: vblendvps %xmm0, {{.*}}(%rip), %xmm1, %xmm0
+; CHECK-NEXT: vpmaxud {{.*}}(%rip), %xmm0, %xmm0
+; CHECK-NEXT: vpminud {{.*}}(%rip), %xmm0, %xmm0
 ; CHECK-NEXT: retq
   %cmp2 = icmp ugt <4 x i32> %x, <i32 15, i32 15, i32 15, i32 15>
   %max = select <4 x i1> %cmp2, <4 x i32> %x, <4 x i32><i32 15, i32 15, i32 15, i32 15>

