Commit message log
|
add.with.overflow(X, X)"
It's better to do this in codegen; mul.with.overflow(X, 2) is more canonical because it has only one use of "X".
llvm-svn: 131798
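As an illustration only (the function names below are hypothetical and the signed variants are used for concreteness), the two equivalent forms compare like this in LLVM IR: the multiply form references %x once, while the add form uses it as both operands.

declare { i32, i1 } @llvm.smul.with.overflow.i32(i32, i32)
declare { i32, i1 } @llvm.sadd.with.overflow.i32(i32, i32)

; Canonical IR form: a single use of %x.
define { i32, i1 } @double_via_mul(i32 %x) {
  %r = call { i32, i1 } @llvm.smul.with.overflow.i32(i32 %x, i32 2)
  ret { i32, i1 } %r
}

; Equivalent form codegen can pick when an add is cheaper: %x appears twice.
define { i32, i1 } @double_via_add(i32 %x) {
  %r = call { i32, i1 } @llvm.sadd.with.overflow.i32(i32 %x, i32 %x)
  ret { i32, i1 } %r
}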
|
add.with.overflow(X, X)
llvm-svn: 131789
|
llvm-svn: 131559
|
cannot overflow.
This happens a lot in clang-compiled C++ code because clang adds overflow checks to the array-size computation for operator new[]:
unsigned *foo(unsigned n) { return new unsigned[n]; }
We can optimize away the overflow check on 64-bit targets because (uint64_t)n*4 cannot overflow.
llvm-svn: 127418
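A simplified sketch of the pattern (not clang's exact output; the function name and the -1-on-overflow fallback are illustrative) shows why the check folds away: the count is zero-extended from i32, so multiplying it by 4 as an i64 can never wrap, and the optimizer can prove the overflow bit is false.

declare { i64, i1 } @llvm.umul.with.overflow.i64(i64, i64)

define i64 @array_alloc_size(i32 %n) {
  %n.wide = zext i32 %n to i64
  %mul = call { i64, i1 } @llvm.umul.with.overflow.i64(i64 %n.wide, i64 4)
  %size = extractvalue { i64, i1 } %mul, 0
  %ovf = extractvalue { i64, i1 } %mul, 1
  ; %ovf is provably false: (2^32 - 1) * 4 still fits in an i64,
  ; so the select collapses to %size and the check disappears.
  %ret = select i1 %ovf, i64 -1, i64 %size
  ret i64 %ret
}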
|
generate them.
Now we compile:
define zeroext i8 @X(i8 signext %a, i8 signext %b) nounwind ssp {
entry:
%0 = tail call %0 @llvm.sadd.with.overflow.i8(i8 %a, i8 %b)
%cmp = extractvalue %0 %0, 1
br i1 %cmp, label %if.then, label %if.end
into:
_X: ## @X
## BB#0: ## %entry
subl $12, %esp
movb 16(%esp), %al
addb 20(%esp), %al
jo LBB0_2
Before we were generating:
_X: ## @X
## BB#0: ## %entry
pushl %ebp
movl %esp, %ebp
subl $8, %esp
movb 12(%ebp), %al
testb %al, %al
setge %cl
movb 8(%ebp), %dl
testb %dl, %dl
setge %ah
cmpb %cl, %ah
sete %cl
addb %al, %dl
testb %dl, %dl
setge %al
cmpb %al, %ah
setne %al
andb %cl, %al
testb %al, %al
jne LBB0_2
llvm-svn: 122186
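For reference, a self-contained version of the quoted IR might look like the sketch below; the spelled-out { i8, i1 } struct type, the intrinsic declaration, and both branch destinations are illustrative fill-ins, since the excerpt above stops at the branch.

declare { i8, i1 } @llvm.sadd.with.overflow.i8(i8, i8)

define zeroext i8 @X(i8 signext %a, i8 signext %b) nounwind {
entry:
  %res = tail call { i8, i1 } @llvm.sadd.with.overflow.i8(i8 %a, i8 %b)
  %cmp = extractvalue { i8, i1 } %res, 1
  br i1 %cmp, label %if.then, label %if.end

if.then:                                  ; overflow path (illustrative)
  ret i8 0

if.end:                                   ; no overflow: return the sum
  %sum = extractvalue { i8, i1 } %res, 0
  ret i8 %sum
}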
|
result is dead. This is required so that my next patch does not regress the test suite.
llvm-svn: 122181
|
it doesn't regress again.
llvm-svn: 110597
|
readme forever.
llvm-svn: 94318
|
llvm-svn: 92745
|
leading/trailing bits. Patch by Alastair Lynn!
llvm-svn: 92706
|
llvm-svn: 92383
|
fix bugs exposed by the tests. Testcases from Alastair Lynn!
llvm-svn: 90056
|
it to a normal binop. Patch by Alastair Lynn, testcase by me.
llvm-svn: 86524