| author | Evan Cheng <evan.cheng@apple.com> | 2012-07-16 19:35:43 +0000 |
|---|---|---|
| committer | Evan Cheng <evan.cheng@apple.com> | 2012-07-16 19:35:43 +0000 |
| commit | 75315b877c7a23784e46f8fe3477849243284a5c (patch) | |
| tree | 89cd4133469c3ea416a4af8c2ab98a8cc98e2a52 /llvm/lib/Transforms/Utils/InlineFunction.cpp | |
| parent | 60f7904db70ed7f174026a9cf17edeba936055be (diff) | |
| download | bcm5719-llvm-75315b877c7a23784e46f8fe3477849243284a5c.tar.gz bcm5719-llvm-75315b877c7a23784e46f8fe3477849243284a5c.zip | |
For something like
uint32_t hi(uint64_t res)
{
    uint32_t hi = res >> 32;
    return !hi;
}
the LLVM IR looks like this:
define i32 @hi(i64 %res) nounwind uwtable ssp {
entry:
%lnot = icmp ult i64 %res, 4294967296
%lnot.ext = zext i1 %lnot to i32
ret i32 %lnot.ext
}
The optimizer has optimized away the right shift and the truncate, but the resulting
constant is too large to fit in a 32-bit immediate field, so the generated x86
code is worse:
movabsq $4294967296, %rax ## imm = 0x100000000
cmpq %rax, %rdi
sbbl %eax, %eax
andl $1, %eax
This patch teaches the x86 lowering code to handle an ult comparison against a large
immediate with trailing zeros. It issues a right shift and a truncate, followed by
a comparison against the shifted immediate:
shrq $32, %rdi
testl %edi, %edi
sete %al
movzbl %al, %eax
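As a sanity check, here is a small self-contained C sketch (not part of the patch;
the function names are made up) of the equivalence the new lowering relies on:
comparing X ult C, where C has 32 trailing zero bits, gives the same result as
comparing the shifted value X >> 32 against the shifted constant C >> 32.

#include <assert.h>
#include <stdint.h>

/* Original form of the compare: the 64-bit constant 0x100000000 does not
   fit in a 32-bit immediate, so x86 needs a movabsq to materialize it. */
static int ult_wide_imm(uint64_t x) {
    return x < 0x100000000ULL;
}

/* Shifted form the patch lowers to: shift the value right by the number of
   trailing zeros in the constant and compare against the shifted constant
   (here 1, i.e. test whether the high 32 bits are zero). */
static int ult_shifted_imm(uint64_t x) {
    return (x >> 32) < 1ULL;
}

int main(void) {
    uint64_t probes[] = { 0, 1, 0xffffffffULL, 0x100000000ULL,
                          0x100000001ULL, UINT64_MAX };
    for (unsigned i = 0; i < sizeof probes / sizeof probes[0]; ++i)
        assert(ult_wide_imm(probes[i]) == ult_shifted_imm(probes[i]));
    return 0;
}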
It also handles a ugt comparison against a large immediate with its trailing bits
set, e.g. X > 0x0ffffffff -> (X >> 32) >= 1.
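The ugt case can be sketched the same way (again not code from the patch, just an
illustration of the identity): X > 0xffffffff holds exactly when the high 32 bits
of X are non-zero.

#include <assert.h>
#include <stdint.h>

/* X > 0x0ffffffff (all low 32 bits set) is equivalent to (X >> 32) >= 1. */
static int ugt_wide_imm(uint64_t x)    { return x > 0xffffffffULL; }
static int ugt_shifted_imm(uint64_t x) { return (x >> 32) >= 1ULL; }

int main(void) {
    uint64_t probes[] = { 0, 0xffffffffULL, 0x100000000ULL, UINT64_MAX };
    for (unsigned i = 0; i < sizeof probes / sizeof probes[0]; ++i)
        assert(ugt_wide_imm(probes[i]) == ugt_shifted_imm(probes[i]));
    return 0;
}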
rdar://11866926
llvm-svn: 160312