| author | Sanjay Patel <spatel@rotateright.com> | 2017-08-11 15:44:14 +0000 |
|---|---|---|
| committer | Sanjay Patel <spatel@rotateright.com> | 2017-08-11 15:44:14 +0000 |
| commit | 169dae70a680cdfa1779148eb9cb643bb76c8b0e (patch) | |
| tree | 83e08148cec571ed6f42847d9ccc7658a73a0f96 /llvm/test/CodeGen/X86/2010-08-04-MaskedSignedCompare.ll | |
| parent | 1fb1ce0c87b1b2c78068488be3f624d3c0cbb19a (diff) | |
[x86] use more shift or LEA for select-of-constants (2nd try)
The previous rev (r310208) failed to account for overflow when subtracting the
constants to see if they're suitable for shift/LEA. This version adds a check
for that, and more tests were added in r310490.
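The added check itself isn't quoted here, so the following is only a sketch of what such a guard can look like. APInt::ssub_ov, APInt::abs, and APInt::isPowerOf2 are real LLVM APIs, but the helper name and the exact multiplier set are illustrative, not the actual patch:

```cpp
#include "llvm/ADT/APInt.h"

using namespace llvm;

// Hypothetical helper: decide whether a pair of select constants is
// shift/LEA-friendly by looking at their difference. The key point is
// using signed-subtract-with-overflow rather than a plain subtraction,
// so a wrapped difference can't masquerade as a small power of 2.
static bool isShiftOrLEAFriendly(const APInt &TrueC, const APInt &FalseC) {
  bool Overflow;
  APInt Diff = TrueC.ssub_ov(FalseC, Overflow);
  if (Overflow)
    return false; // bail out rather than reason about a wrapped value
  APInt AbsDiff = Diff.abs();
  // Powers of 2 become shl; 3, 5, and 9 map onto LEA scale-plus-base forms.
  return AbsDiff.isPowerOf2() || AbsDiff == 3 || AbsDiff == 5 || AbsDiff == 9;
}
```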
We can convert any select-of-constants to math ops:
http://rise4fun.com/Alive/d7d
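Spelled out in C++ terms (illustrative constants; the function name is made up for this example), the identity behind that proof is:

```cpp
#include <cassert>

// select(c, C1, C2) == zext(c) * (C1 - C2) + C2.
// When C1 - C2 is a power of 2 or an LEA-friendly multiplier, the
// multiply lowers to shl or LEA, so no cmov or branch is needed.
// (The C1 - C2 subtraction is exactly what the overflow check above
// has to worry about.)
int selectAsMath(bool c, int c1, int c2) {
  return static_cast<int>(c) * (c1 - c2) + c2;
}

int main() {
  assert(selectAsMath(true, 5, 2) == 5);  // diff = 3 -> LEA-sized multiply
  assert(selectAsMath(false, 5, 2) == 2);
}
```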
For this patch, I'm enhancing an existing x86 transform that uses fake multiplies
(they always become shl/lea) to avoid cmov or branching. The current code misses
cases where we have a negative constant and a positive constant, so this is just
trying to plug that hole.
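For instance (a sketch with illustrative constants), a mixed-sign pair like -1/1, which previously ended up as a cmov or branch, becomes a doubling plus an add:

```cpp
// c ? -1 : 1 has a difference of -2, so it lowers to -2 * zext(c) + 1,
// a shift (or LEA) followed by an add; no cmov, no branch.
int selectNegOneOrOne(bool c) {
  return -2 * static_cast<int>(c) + 1; // == c ? -1 : 1
}
```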
The DAGCombiner diff prevents us from hitting a terrible inefficiency: we can start
with a select in IR, create a select DAG node, convert it into a sext, convert it
back into a select, and then lower it to sext machine code.
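The round trip is possible because select(c, -1, 0) and a sign-extended i1 are the same value; a one-liner makes the equivalence concrete (function name is illustrative):

```cpp
// sext(i1 c) == (c ? -1 : 0) == -(int)c. Because the forms are
// interchangeable, the combiner can ping-pong between them unless
// one direction is restricted.
int selectAsSext(bool c) {
  return -static_cast<int>(c); // all-ones when c is true, zero otherwise
}
```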
Some notes about the test diffs:
1. 2010-08-04-MaskedSignedCompare.ll - We were creating control flow that didn't exist in the IR.
2. memcmp.ll - Choosing -1 or 1 is the case that got me looking at this again. We could avoid the
push/pop in some cases if we used 'movzbl %al' instead of an xor on a different reg? That's a
post-DAG problem though.
3. mul-constant-result.ll - The trade-off between sbb+not vs. setne+neg could be addressed if
that's a regression, but those would always be nearly equivalent.
4. pr22338.ll and sext-i1.ll - These tests have undef operands, so we don't actually care about these diffs.
5. sbb.ll - This shows a win for what is likely a common case: choosing -1 or 0.
6. select.ll - There's another borderline case here: cmp+sbb+or vs. test+set+lea? Also, sbb+not vs. setae+neg shows up again.
7. select_const.ll - These are motivating cases for the enhancement; replace cmov with cheaper ops.
Assembly differences between movzbl and xor to avoid a partial reg stall are caused later by the X86 Fixup SetCC pass.
Differential Revision: https://reviews.llvm.org/D35340
llvm-svn: 310717
Diffstat (limited to 'llvm/test/CodeGen/X86/2010-08-04-MaskedSignedCompare.ll')
| -rw-r--r-- | llvm/test/CodeGen/X86/2010-08-04-MaskedSignedCompare.ll | 18 |

1 file changed, 8 insertions, 10 deletions
```diff
diff --git a/llvm/test/CodeGen/X86/2010-08-04-MaskedSignedCompare.ll b/llvm/test/CodeGen/X86/2010-08-04-MaskedSignedCompare.ll
index 66d3f3108ec..cffefc2bee6 100644
--- a/llvm/test/CodeGen/X86/2010-08-04-MaskedSignedCompare.ll
+++ b/llvm/test/CodeGen/X86/2010-08-04-MaskedSignedCompare.ll
@@ -9,21 +9,19 @@ define i32 @main() nounwind {
 ; CHECK-LABEL: main:
 ; CHECK: # BB#0: # %entry
-; CHECK-NEXT: cmpq $0, {{.*}}(%rip)
-; CHECK-NEXT: movb $-106, %al
-; CHECK-NEXT: jne .LBB0_2
-; CHECK-NEXT: # BB#1: # %entry
 ; CHECK-NEXT: xorl %eax, %eax
-; CHECK-NEXT: .LBB0_2: # %entry
+; CHECK-NEXT: cmpq {{.*}}(%rip), %rax
+; CHECK-NEXT: sbbl %eax, %eax
+; CHECK-NEXT: andl $150, %eax
 ; CHECK-NEXT: testb %al, %al
-; CHECK-NEXT: jle .LBB0_3
-; CHECK-NEXT: # BB#4: # %if.then
+; CHECK-NEXT: jle .LBB0_1
+; CHECK-NEXT: # BB#2: # %if.then
 ; CHECK-NEXT: movl $1, {{.*}}(%rip)
 ; CHECK-NEXT: movl $1, %esi
-; CHECK-NEXT: jmp .LBB0_5
-; CHECK-NEXT: .LBB0_3: # %entry.if.end_crit_edge
+; CHECK-NEXT: jmp .LBB0_3
+; CHECK-NEXT: .LBB0_1: # %entry.if.end_crit_edge
 ; CHECK-NEXT: movl {{.*}}(%rip), %esi
-; CHECK-NEXT: .LBB0_5: # %if.end
+; CHECK-NEXT: .LBB0_3: # %if.end
 ; CHECK-NEXT: pushq %rax
 ; CHECK-NEXT: movl $.L.str, %edi
 ; CHECK-NEXT: xorl %eax, %eax
```