author    Sanjay Patel <spatel@rotateright.com>  2017-08-06 16:27:07 +0000
committer Sanjay Patel <spatel@rotateright.com>  2017-08-06 16:27:07 +0000
commit    a923c2ee95a4b3b6d43a850789ba56c6aa249b3c (patch)
tree      be69b92b4063e1ba19c60cf82a2611f084c21a2f /llvm/test/CodeGen/X86/2010-08-04-MaskedSignedCompare.ll
parent    a9b5bbac789a69322ec62011bcb8e8462a097e59 (diff)
[x86] use more shift or LEA for select-of-constants
We can convert any select-of-constants to math ops:
http://rise4fun.com/Alive/d7d

For this patch, I'm enhancing an existing x86 transform that uses fake multiplies (they always become shl/lea) to avoid cmov or branching. The current code misses cases where we have a negative constant and a positive constant, so this is just trying to plug that hole.

The DAGCombiner diff prevents us from hitting a terrible inefficiency: we can start with a select in IR, create a select DAG node, convert it into a sext, convert it back into a select, and then lower it to sext machine code.

Some notes about the test diffs:

1. 2010-08-04-MaskedSignedCompare.ll - We were creating control flow that didn't exist in the IR.
2. memcmp.ll - Choosing -1 or 1 is the case that got me looking at this again. I think we could avoid the push/pop in some cases if we used 'movzbl %al' instead of an xor on a different reg; that's a post-DAG problem, though.
3. mul-constant-result.ll - The trade-off between sbb+not vs. setne+neg could be addressed if that's a regression, but I think those would always be nearly equivalent.
4. pr22338.ll and sext-i1.ll - These tests have undef operands, so I don't think we actually care about these diffs.
5. sbb.ll - This shows a win for what I think is a common case: choosing -1 or 0.
6. select.ll - There's another borderline case here: cmp+sbb+or vs. test+set+lea? Also, sbb+not vs. setae+neg shows up again.
7. select_const.ll - These are motivating cases for the enhancement: replace cmov with cheaper ops.

Assembly differences between movzbl and xor to avoid a partial-register stall are caused later by the X86 Fixup SetCC pass.

Differential Revision: https://reviews.llvm.org/D35340

llvm-svn: 310208
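To make the transform concrete, here is a minimal LLVM IR sketch of the idea (an editorial illustration, not code from the patch; the function and value names are invented). A select between two constants can be rewritten as math on the i1 condition, and x86 can then lower the math with shl/lea (the fake multiply) or an sbb-style sign extension rather than cmov or a branch:

define i32 @sel_9_or_0(i1 %c) {
  %r = select i1 %c, i32 9, i32 0     ; select-of-constants
  ret i32 %r
}

; ...is equivalent to a "fake multiply" on the zero-extended condition,
; which can lower to lea instead of cmov:
define i32 @sel_9_or_0_math(i1 %c) {
  %z = zext i1 %c to i32              ; 0 or 1
  %r = mul i32 %z, 9                  ; 0 or 9; a shl/lea-friendly multiply
  ret i32 %r
}

; ...and choosing between -1 and 0 is just a sign extension of the i1:
define i32 @sel_m1_or_0(i1 %c) {
  %r = sext i1 %c to i32              ; 0 or -1
  ret i32 %r
}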
Diffstat (limited to 'llvm/test/CodeGen/X86/2010-08-04-MaskedSignedCompare.ll')
-rw-r--r-- llvm/test/CodeGen/X86/2010-08-04-MaskedSignedCompare.ll | 18
1 file changed, 8 insertions(+), 10 deletions(-)
diff --git a/llvm/test/CodeGen/X86/2010-08-04-MaskedSignedCompare.ll b/llvm/test/CodeGen/X86/2010-08-04-MaskedSignedCompare.ll
index 66d3f3108ec..cffefc2bee6 100644
--- a/llvm/test/CodeGen/X86/2010-08-04-MaskedSignedCompare.ll
+++ b/llvm/test/CodeGen/X86/2010-08-04-MaskedSignedCompare.ll
@@ -9,21 +9,19 @@
define i32 @main() nounwind {
; CHECK-LABEL: main:
; CHECK: # BB#0: # %entry
-; CHECK-NEXT: cmpq $0, {{.*}}(%rip)
-; CHECK-NEXT: movb $-106, %al
-; CHECK-NEXT: jne .LBB0_2
-; CHECK-NEXT: # BB#1: # %entry
; CHECK-NEXT: xorl %eax, %eax
-; CHECK-NEXT: .LBB0_2: # %entry
+; CHECK-NEXT: cmpq {{.*}}(%rip), %rax
+; CHECK-NEXT: sbbl %eax, %eax
+; CHECK-NEXT: andl $150, %eax
; CHECK-NEXT: testb %al, %al
-; CHECK-NEXT: jle .LBB0_3
-; CHECK-NEXT: # BB#4: # %if.then
+; CHECK-NEXT: jle .LBB0_1
+; CHECK-NEXT: # BB#2: # %if.then
; CHECK-NEXT: movl $1, {{.*}}(%rip)
; CHECK-NEXT: movl $1, %esi
-; CHECK-NEXT: jmp .LBB0_5
-; CHECK-NEXT: .LBB0_3: # %entry.if.end_crit_edge
+; CHECK-NEXT: jmp .LBB0_3
+; CHECK-NEXT: .LBB0_1: # %entry.if.end_crit_edge
; CHECK-NEXT: movl {{.*}}(%rip), %esi
-; CHECK-NEXT: .LBB0_5: # %if.end
+; CHECK-NEXT: .LBB0_3: # %if.end
; CHECK-NEXT: pushq %rax
; CHECK-NEXT: movl $.L.str, %edi
; CHECK-NEXT: xorl %eax, %eax
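An editorial note on the new CHECK lines above (my annotation, not part of the commit): the select is now computed branchlessly. With %rax zeroed by the xorl, the cmpq sets the carry flag exactly when the compared global is nonzero; sbbl %eax, %eax broadcasts that borrow into 0 or -1, and andl $150 masks it to 0 or 150 (0x96), whose low byte is the same value as the -106 the old code materialized with movb. The following testb/jle therefore sees identical values, minus one branch. In IR terms the rewrite corresponds roughly to this fragment (a sketch; the value names are mine):

  %nonzero = icmp ne i64 %loaded, 0
  %mask    = sext i1 %nonzero to i32   ; 0 or -1  -> sbbl %eax, %eax
  %result  = and i32 %mask, 150        ; 0 or 150 -> andl $150, %eax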