Commit message | Author | Age | Files | Lines

llvm-svn: 117959
at more than those which define CPSR. You can have this situation:
(1) subs ...
(2) sub r6, r5, r4
(3) movge ...
(4) cmp r6, 0
(5) movge ...
We cannot convert (2) to "subs" because (3) is using the CPSR set by
(1). There's an analogous situation here:
(1) sub r1, r2, r3
(2) sub r4, r5, r6
(3) cmp r4, ...
(5) movge ...
(6) cmp r1, ...
(7) movge ...
We cannot convert (1) to "subs" because of the intervening use of CPSR.
llvm-svn: 117950
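The constraint described above can be sketched as a scan for intervening CPSR readers. This is a minimal model, not the actual LLVM peephole code: the `Inst` struct and `canConvertToSubs` are invented for illustration, with CPSR use/def reduced to two booleans.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Simplified instruction: an opcode plus whether it reads or writes CPSR.
struct Inst {
  std::string Op;
  bool ReadsCPSR;
  bool WritesCPSR;
};

// A "sub" at DefIdx may be converted to "subs" (folding away the "cmp" at
// CmpIdx) only if no instruction in between reads CPSR: such a reader depends
// on flags set earlier, and the new "subs" would clobber them.
bool canConvertToSubs(const std::vector<Inst> &MBB, size_t DefIdx,
                      size_t CmpIdx) {
  for (size_t i = DefIdx + 1; i < CmpIdx; ++i)
    if (MBB[i].ReadsCPSR)
      return false;
  return true;
}
```

On the first example above, the `movge` at (3) sits between the `sub` at (2) and the `cmp` at (4) and reads CPSR, so the conversion is rejected.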
give them individual stack slots once they are actually spilled.
llvm-svn: 117945
When an instruction refers to a spill slot with a LiveStacks entry, check that
the spill slot is live at the instruction.
llvm-svn: 117944
llvm-svn: 117904
llvm-svn: 117879
looks like is happening:
Without the peephole optimizer:
(1) sub r6, r6, #32
orr r12, r12, lr, lsl r9
orr r2, r2, r3, lsl r10
(x) cmp r6, #0
ldr r9, LCPI2_10
ldr r10, LCPI2_11
(2) sub r8, r8, #32
(a) movge r12, lr, lsr r6
(y) cmp r8, #0
LPC2_10:
ldr lr, [pc, r10]
(b) movge r2, r3, lsr r8
With the peephole optimizer:
ldr r9, LCPI2_10
ldr r10, LCPI2_11
(1*) subs r6, r6, #32
(2*) subs r8, r8, #32
(a*) movge r12, lr, lsr r6
(b*) movge r2, r3, lsr r8
(1) is used by (x) for the conditional move at (a). (2) is used by (y) for the
conditional move at (b). After the peephole optimizer runs, the flags resulting
from (1*) are ignored and only the flags from (2*) are considered for both
conditional moves.
llvm-svn: 117876
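The miscompile above can be reproduced in a tiny model: a single shared GE flag plays the role of CPSR, and the input values (0 and 64) are invented for illustration. In the peephole-optimized schedule, the second `subs` clobbers the flags before the first `movge` reads them.

```cpp
#include <cassert>

// One shared "ge" condition flag models CPSR.
struct CPU {
  bool GE = false;
  int subs(int a, int b) { int r = a - b; GE = (r >= 0); return r; }
  int movge(int dst, int src) const { return GE ? src : dst; }
};

// Schedule after the peephole optimizer: (1*) (2*) (a*).
int brokenFirstMovge(int r6, int r8) {
  CPU C;
  (void)C.subs(r6, 32);  // (1*) sets GE from r6 - 32
  (void)C.subs(r8, 32);  // (2*) clobbers GE before (a*) reads it
  return C.movge(0, 1);  // (a*) should see r6 - 32's sign, but sees r8 - 32's
}

// Original schedule: the cmp (x) immediately precedes the movge (a).
int correctFirstMovge(int r6, int r8) {
  CPU C;
  (void)r8;
  (void)C.subs(r6, 32);  // (1) + (x): flags reflect r6 - 32
  return C.movge(0, 1);  // (a) sees the correct flags
}
```

With r6 = 0 and r8 = 64, r6 - 32 is negative but r8 - 32 is not, so the two schedules disagree on the first conditional move.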
llvm-svn: 117867
llvm-svn: 117765
a basic block.
llvm-svn: 117764
elsewhere.
llvm-svn: 117763
llvm-svn: 117762
llvm-svn: 117761
defs. rdar://8610857.
llvm-svn: 117745
llvm-svn: 117720
llvm-svn: 117677
operand and one of them has a single use that is a live out copy, favor the
one that is live out. Otherwise it will be difficult to eliminate the copy
if the instruction is a loop induction variable update. e.g.
BB:
sub r1, r3, #1
str r0, [r2, r3]
mov r3, r1
cmp
bne BB
=>
BB:
str r0, [r2, r3]
sub r3, r3, #1
cmp
bne BB
This fixed the recent 256.bzip2 regression.
llvm-svn: 117675
llvm-svn: 117673
llvm-svn: 117671
We don't want unused values forming their own equivalence classes, so we lump
them all together in one class, and then merge them with the class of the last
used value.
llvm-svn: 117670
llvm-svn: 117669
basic logic, added initial platform support.
llvm-svn: 117667
llvm-svn: 117643
EquivalenceClasses.h, except it looks like overkill when the elements are
contiguous integers.
llvm-svn: 117631
multiplicity.
llvm-svn: 117630
llvm-svn: 117629
llvm-svn: 117615
llvm-svn: 117602
in SSAUpdaterImpl.h
Verifying live intervals revealed that the old method was completely wrong, and
we need an iterative approach to calculating PHI placement. Fortunately, we have
MachineDominators available, so we don't have to compute that over and over
like SSAUpdaterImpl.h must.
Live-out values are cached between calls to mapValue() and computed in a greedy
way, so most calls will be working with very small block sets.
Thanks to Bob for explaining how this should work.
llvm-svn: 117599
proper SSA updating.
This doesn't cause MachineDominators to be recomputed since we are already
requiring MachineLoopInfo which uses dominators as well.
llvm-svn: 117598
split.
llvm-svn: 117597
record file info.
llvm-svn: 117588
llvm-svn: 117563
Also do some minor refactoring to reduce indentation.
llvm-svn: 117558
llvm-svn: 117531
to fail. Ugh.
llvm-svn: 117520
by the number of defs first for it to match the instruction itinerary.
llvm-svn: 117518
fallthroughs use all registers, just gather the union of all successor live-ins.
llvm-svn: 117506
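The conservative fallback described above can be sketched in a few lines. This is an illustration, not the LLVM implementation: register numbers are invented, and `RegSet` stands in for whatever set representation the pass uses.

```cpp
#include <set>
#include <vector>

using RegSet = std::set<unsigned>;

// Rather than modeling exactly which registers a fallthrough uses, take the
// union of every successor block's live-in set: anything live into any
// successor must be treated as live across the edge.
RegSet unionOfSuccLiveIns(const std::vector<RegSet> &SuccLiveIns) {
  RegSet Union;
  for (const RegSet &LI : SuccLiveIns)
    Union.insert(LI.begin(), LI.end());
  return Union;
}
```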
There are currently 100 references to COFF::IMAGE_SCN in 6 files
and 11 different functions. Section to attribute mapping really
needs to happen in one place to avoid problems like this.
llvm-svn: 117473
llvm-svn: 117472
llvm-svn: 117453
live out.
This doesn't prevent us from inserting a loop preheader later on, if that is
better.
llvm-svn: 117424
Critical edges going into a loop are not as bad as critical exits. We can handle
them by splitting the critical edge, or by having both inside and outside
registers live out of the predecessor.
llvm-svn: 117423
Only virtuals should be required to be connected.
llvm-svn: 117422
memory, so a MachineMemOperand is useful (not propagated
into the MachineInstr yet). No functional change except
for dump output.
llvm-svn: 117413
them, but hopefully we won't. And this is not the right data structure
to do it anyway.
llvm-svn: 117412
the remainder register.
Example:
bb0:
x = 1
bb1:
use(x)
...
x = 2
jump bb1
When x is isolated in bb1, the inner part breaks into two components, x1 and x2:
bb0:
x0 = 1
bb1:
x1 = x0
use(x1)
...
x2 = 2
x0 = x2
jump bb1
llvm-svn: 117408
components, each should get its own virtual register.
llvm-svn: 117407
necessary to get correct hasPHIKill flags.
llvm-svn: 117406
llvm-svn: 117405