isTriviallyReMaterializable -> hasNoSideEffects
isReallyTriviallyReMaterializable -> isTriviallyReMaterializable
llvm-svn: 44702
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20071203/056043.html
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20071203/056048.html
llvm-svn: 44696
_foo:
li r2, 0
LBB1_1: ; bb
li r5, 0
stw r5, 0(r3)
addi r2, r2, 1
addi r3, r3, 4
cmplw cr0, r2, r4
bne cr0, LBB1_1 ; bb
LBB1_2: ; return
blr
to:
_foo:
li r2, 0
li r5, 0
LBB1_1: ; bb
stw r5, 0(r3)
addi r2, r2, 1
addi r3, r3, 4
cmplw cr0, r2, r4
bne cr0, LBB1_1 ; bb
LBB1_2: ; return
blr
ZOMG!! :-)
Moar to come...
llvm-svn: 44687
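A sketch of the kind of source the PPC assembly above plausibly comes from (the name `foo` and the exact types are assumptions read off the asm): zero-filling a word array. The `li r5, 0` that materializes the constant for the store is loop-invariant, which is why it can be hoisted above `LBB1_1` in the "after" version.

```c
/* Hypothetical source matching the asm above: p tracks r3 (bumped by
   4 each iteration via "addi r3, r3, 4"), i tracks r2, n is r4.
   The constant 0 stored by "stw r5, 0(r3)" is loop-invariant. */
void foo(int *p, unsigned n) {
    for (unsigned i = 0; i != n; ++i)
        p[i] = 0;
}
```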
llvm-svn: 44671
Simpler and safer.
llvm-svn: 44663
llcbeta.
llvm-svn: 44660
only disable it if we don't know it will be obviously profitable.
Still fixme, but less so. :)
llvm-svn: 44658
the X86 backend are needed before this should be enabled by default.
llvm-svn: 44657
_foo:
movl $12, %eax
andl 4(%esp), %eax
movl _array(%eax), %eax
ret
instead of:
_foo:
movl 4(%esp), %eax
shrl $2, %eax
andl $3, %eax
movl _array(,%eax,4), %eax
ret
As it turns out, this triggers all the time, in a wide variety of
situations, for example, I see diffs like this in various programs:
- movl 8(%eax), %eax
- shll $2, %eax
- andl $1020, %eax
- movl (%esi,%eax), %eax
+ movzbl 8(%eax), %eax
+ movl (%esi,%eax,4), %eax
- shll $2, %edx
- andl $1020, %edx
- movl (%edi,%edx), %edx
+ andl $255, %edx
+ movl (%edi,%edx,4), %edx
Unfortunately, I also see stuff like this, which can be fixed in the
X86 backend:
- andl $85, %ebx
- addl _bit_count(,%ebx,4), %ebp
+ shll $2, %ebx
+ andl $340, %ebx
+ addl _bit_count(%ebx), %ebp
llvm-svn: 44656
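The source-level pattern behind the asm above is a masked, shifted index into a small table (the table contents here are invented for illustration). Since `((x >> 2) & 3) * 4 == x & 12` for every `x`, the combine can use `x & 12` directly as the byte offset into the table, dropping the separate shift:

```c
/* Indexing a 4-entry word table with a shifted, masked index.  The
   scaled form computes (x >> 2) & 3 and multiplies by 4 in the
   addressing mode; the folded form computes x & 12 and uses it as a
   plain byte offset, as in the first asm sequence above. */
static const int array[4] = {10, 20, 30, 40};

int foo(unsigned x) {
    return array[(x >> 2) & 3];
}
```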
SelectionDAGLegalize::ScalarizeVectorOp
llvm-svn: 44654
llvm-svn: 44649
a preferred spill candidate.
llvm-svn: 44644
llvm-svn: 44612
last use.
llvm-svn: 44611
llvm-svn: 44610
llvm-svn: 44609
llvm-svn: 44608
llvm-svn: 44607
This allows an important optimization to be re-enabled.
- If all uses / defs of a split interval can be folded, give the interval a
low spill weight so it would not be picked in case spilling is needed (avoid
pushing other intervals in the same BB to be spilled).
llvm-svn: 44601
the stored register is killed.
llvm-svn: 44600
llvm-svn: 44587
unless it can be modified.
llvm-svn: 44575
to codegen this:
define float @test_extract_elt(<1 x float> * %P) {
%p = load <1 x float>* %P
%R = extractelement <1 x float> %p, i32 0
ret float %R
}
llvm-svn: 44570
llvm-svn: 44569
llvm-svn: 44565
llvm-svn: 44549
throw exceptions", just mark intrinsics with the nounwind
attribute. Likewise, mark intrinsics as readnone/readonly
and get rid of special aliasing logic (which didn't use
anything more than this anyway).
llvm-svn: 44544
llvm-svn: 44532
llvm-svn: 44531
llvm-svn: 44517
llvm-svn: 44482
-> cmpl [mem], 0.
llvm-svn: 44479
extra load.
llvm-svn: 44467
llvm-svn: 44446
llvm-svn: 44443
in the middle of a split basic block, create a new live interval starting at
the def. This avoids artificially extending the live interval over a number of
cycles where it is dead. e.g.
bb1:
= vr1204 (use / kill) <= new interval starts and ends here.
...
...
vr1204 = (new def) <= start a new interval here.
= vr1204 (use)
llvm-svn: 44436
llvm-svn: 44434
llvm-svn: 44428
llvm-svn: 44427
use them.
llvm-svn: 44403
llvm-svn: 44399
llvm-svn: 44386
llvm-svn: 44384
llvm-svn: 44371
the function type, instead they belong to functions
and function calls. This is an updated and slightly
corrected version of Reid Spencer's original patch.
The only known problem is that auto-upgrading of
bitcode files doesn't seem to work properly (see
test/Bitcode/AutoUpgradeIntrinsics.ll). Hopefully
a bitcode guru (who might that be? :) ) will fix it.
llvm-svn: 44359
llvm-svn: 44352
llvm-svn: 44351
optimized. This avoids creating illegal divisions when the combiner is
running after legalize; this fixes PR1815. Also, it produces better
code in the included testcase by avoiding the subtract and multiply
when the division isn't optimized.
llvm-svn: 44341
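The "optimized" form this combine checks for is the standard divide-by-constant strength reduction: replace a hardware divide with a multiply-high and shift. A minimal sketch of the unsigned case for dividing by 10 (this is the well-known reciprocal constant, shown only to illustrate the sequence, not the combiner's own code); the "subtract and multiply" the message mentions is the matching remainder expansion `x % c == x - (x / c) * c`:

```c
#include <stdint.h>

/* x / 10 as a 32x32->64 multiply by ceil(2^35 / 10) = 0xCCCCCCCD,
   then a right shift by 35; exact for all 32-bit x. */
uint32_t div10(uint32_t x) {
    return (uint32_t)(((uint64_t)x * 0xCCCCCCCDu) >> 35);
}

/* Remainder via the subtract-and-multiply identity the commit
   message refers to: only worth emitting if div10 itself is cheap. */
uint32_t rem10(uint32_t x) {
    return x - div10(x) * 10u;
}
```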
llvm-svn: 44304
Improve a comment.
Unbreak Duncan's carefully written path compression where I didn't realize
what was happening!
llvm-svn: 44301