Commit message | Author | Age | Files | Lines
llvm-svn: 44987
be added back later if it causes code quality issues.
llvm-svn: 44986
re-materializable and they should not be spilled.
llvm-svn: 44960
SelectionDAG::getConstant, in the same way as vector floating-point
constants. This allows the legalize expansion code for @llvm.ctpop and
friends to be usable with vector types.
llvm-svn: 44954
interfered with other registers. Seems like that might be a good thing to do. :-)
llvm-svn: 44902
register R and reload is targeting R), make sure to invalidate the kill information of the last kill.
llvm-svn: 44894
llvm-svn: 44892
llvm-svn: 44881
llvm-svn: 44877
llvm-svn: 44874
- Fix subtle bug when initially creating this map.
llvm-svn: 44873
because those with side effects will be caught by other checks in here.
Also, simplify the check for a BB in a sub loop.
llvm-svn: 44871
llvm-svn: 44838
llvm-svn: 44837
per-function collector model. Collector is now the factory for
CollectorMetadata, so the latter may be subclassed.
llvm-svn: 44827
sense of how the CodeGen machinery works.
llvm-svn: 44786
impact the value of fall-through choices.
llvm-svn: 44785
knows the vector is not pow2
llvm-svn: 44740
llvm-svn: 44728
llvm-svn: 44727
that LegalizeDAG does.
llvm-svn: 44726
%f8 = type <8 x float>
define void @test_f8(%f8* %P, %f8* %Q, %f8* %S) {
%p = load %f8* %P ; <%f8> [#uses=1]
%q = load %f8* %Q ; <%f8> [#uses=1]
%R = add %f8 %p, %q ; <%f8> [#uses=1]
store %f8 %R, %f8* %S
ret void
}
into:
_test_f8:
movaps 16(%rdi), %xmm0
addps 16(%rsi), %xmm0
movaps (%rdi), %xmm1
addps (%rsi), %xmm1
movaps %xmm0, 16(%rdx)
movaps %xmm1, (%rdx)
ret
llvm-svn: 44725
llvm-svn: 44724
llvm-svn: 44723
llvm-svn: 44722
llvm-svn: 44719
llvm-svn: 44718
llvm-svn: 44717
llvm-svn: 44716
llvm-svn: 44715
Leave it visibility hidden, but not in an anon namespace.
llvm-svn: 44714
isTriviallyReMaterializable -> hasNoSideEffects
isReallyTriviallyReMaterializable -> isTriviallyReMaterializable
llvm-svn: 44702
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20071203/056043.html
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20071203/056048.html
llvm-svn: 44696
_foo:
li r2, 0
LBB1_1: ; bb
li r5, 0
stw r5, 0(r3)
addi r2, r2, 1
addi r3, r3, 4
cmplw cr0, r2, r4
bne cr0, LBB1_1 ; bb
LBB1_2: ; return
blr
to:
_foo:
li r2, 0
li r5, 0
LBB1_1: ; bb
stw r5, 0(r3)
addi r2, r2, 1
addi r3, r3, 4
cmplw cr0, r2, r4
bne cr0, LBB1_1 ; bb
LBB1_2: ; return
blr
ZOMG!! :-)
Moar to come...
llvm-svn: 44687
llvm-svn: 44671
Simpler and safer.
llvm-svn: 44663
llcbeta.
llvm-svn: 44660
only disable it if we don't know it will be obviously profitable.
Still fixme, but less so. :)
llvm-svn: 44658
the X86 backend are needed before this should be enabled by default.
llvm-svn: 44657
_foo:
movl $12, %eax
andl 4(%esp), %eax
movl _array(%eax), %eax
ret
instead of:
_foo:
movl 4(%esp), %eax
shrl $2, %eax
andl $3, %eax
movl _array(,%eax,4), %eax
ret
As it turns out, this triggers all the time, in a wide variety of
situations, for example, I see diffs like this in various programs:
- movl 8(%eax), %eax
- shll $2, %eax
- andl $1020, %eax
- movl (%esi,%eax), %eax
+ movzbl 8(%eax), %eax
+ movl (%esi,%eax,4), %eax
- shll $2, %edx
- andl $1020, %edx
- movl (%edi,%edx), %edx
+ andl $255, %edx
+ movl (%edi,%edx,4), %edx
Unfortunately, I also see stuff like this, which can be fixed in the
X86 backend:
- andl $85, %ebx
- addl _bit_count(,%ebx,4), %ebp
+ shll $2, %ebx
+ andl $340, %ebx
+ addl _bit_count(%ebx), %ebp
llvm-svn: 44656
SelectionDAGLegalize::ScalarizeVectorOp
llvm-svn: 44654
llvm-svn: 44649
a preferred spill candidate.
llvm-svn: 44644
llvm-svn: 44612
last use.
llvm-svn: 44611
llvm-svn: 44610
llvm-svn: 44609
llvm-svn: 44608
llvm-svn: 44607
This allows an important optimization to be re-enabled.
- If all uses / defs of a split interval can be folded, give the interval a
low spill weight so it would not be picked in case spilling is needed (avoid
pushing other intervals in the same BB to be spilled).
llvm-svn: 44601