|
- Be smarter about coalescing copies from implicit_def.
llvm-svn: 49168
|
llvm-svn: 48837
|
use of the same val# is a copy instruction that has already been coalesced.
llvm-svn: 48833
|
llvm-svn: 48759
|
previously coalesced copy into a non-identity copy.
llvm-svn: 48752
|
llvm-svn: 48653
|
llvm-svn: 48526
|
coalesced. This removes some ugly spaghetti code and fixes a number of subtle bugs.
llvm-svn: 48490
|
llvm-svn: 48319
|
the source is defined; BLR is the live range which is defined by the copy.
If ALR and BLR overlap and the end of BLR extends beyond the end of ALR, e.g.
A = or A, B
...
B = A
...
C = A<kill>
...
= B
then do not add kills of A to the newly created B interval.
- Also fix a kill info update bug.
llvm-svn: 48141
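The overlap condition in the entry above can be sketched with a toy live-range model. This is illustrative only; the class and function names and the flat `[start, end)` representation are assumptions, not LLVM's `LiveInterval` API.

```python
# Toy model (not LLVM's data structures): a live range as a half-open
# [start, end) slot range. ALR covers the copy's source value; BLR is
# the range defined by the copy. When they overlap and BLR extends past
# ALR's end, kills of A must not be transferred to the new B interval,
# because B stays live beyond the point where A is killed.

class LiveRange:
    def __init__(self, start, end):
        self.start, self.end = start, end

    def overlaps(self, other):
        return self.start < other.end and other.start < self.end

def should_transfer_kills(alr, blr):
    # Transfer kills of A to B only when B does not outlive A's range.
    if alr.overlaps(blr) and blr.end > alr.end:
        return False
    return True
```

In the log's example, A is killed at `C = A<kill>` while B is still used afterwards, so BLR outlives ALR and the kill is not transferred.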
|
llvm-svn: 47966
|
findRegisterUseOperandIdx, findRegisterDefOperandIndx. Fix some naming inconsistencies.
llvm-svn: 47927
|
instructions will be deleted. Doh.
llvm-svn: 47749
|
llvm-svn: 47629
|
would have been a Godsend here!
llvm-svn: 47625
|
llvm-svn: 47623
|
vr1 = extract_subreg vr2, 3
...
vr3 = extract_subreg vr1, 2
The end result is that vr3 is equal to vr2 with subidx 2.
llvm-svn: 47592
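The chained-extract folding above can be sketched with a toy register/def map. The names and the trivial composition rule here are assumptions; real targets define their own sub-register index composition (in modern LLVM, via `TargetRegisterInfo::composeSubRegIndices`).

```python
# Hypothetical sketch of folding chained extract_subreg operations.
# `compose` stands in for the target's sub-register index composition
# rule; none of these names are LLVM's API.

def compose(outer, inner):
    # Toy rule: the innermost index alone selects the final lane,
    # matching the log's example, where extracting subidx 2 of
    # subidx 3 of vr2 yields vr2 with subidx 2.
    return inner

def fold_extract_chain(defs, reg):
    """Follow extract_subreg defs of `reg` back to the root register,
    composing sub-register indices along the way.
    defs: {dst_reg: (src_reg, subidx)}"""
    idx = None
    while reg in defs:
        src, subidx = defs[reg]
        idx = subidx if idx is None else compose(subidx, idx)
        reg = src
    return reg, idx
```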
|
llvm-svn: 47468
|
llvm-svn: 47448
|
- For now, conservatively ignore a copy MI whose source is a physical register. Commuting its def MI can cause a physical register live interval to be live through a loop (since we know it's live coming into the def MI).
llvm-svn: 47281
|
That simply trades one live interval for another and, because only the non-two-address operands can be folded into loads, may end up pessimizing the code.
llvm-svn: 47262
|
instruction.
llvm-svn: 47208
|
llvm-svn: 47179
|
register defs and uses after each successful coalescing.
- Also removed a number of hacks and fixed some subtle kill information bugs.
llvm-svn: 47167
|
llvm-svn: 47060
|
its uses.
* Ignore copy instructions which have already been coalesced.
llvm-svn: 47056
|
PR1877.
A3 = op A2 B0<kill>
...
B1 = A3 <- this copy
...
= op A3 <- more uses
==>
B2 = op B0 A2<kill>
...
B1 = B2 <- now an identity copy
...
= op B2 <- more uses
This speeds up FreeBench/neural by 29%, Olden/bh by 12%, and oopack_v1p8 by 53%.
llvm-svn: 47046
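The commute transformation above can be modeled roughly as follows. This is a toy sketch: instructions are tuples, and for simplicity the def is renamed straight to the copy's destination rather than to a fresh B2 as in the log; none of this is LLVM's actual representation.

```python
# Toy model of the commute heuristic: if the copy's source is defined by
# a commutable two-address op whose second operand's live range ends
# there (B0<kill>), commuting lets the def produce its result in B's
# register, turning the later copy into an identity copy and shortening
# A's live interval.

def try_commute_to_coalesce(def_instr, copy_dst):
    """def_instr: (dst, op, src_a, src_b, b_killed). Returns the
    rewritten def when commuting would make the later copy an identity
    copy, else None."""
    dst, op, src_a, src_b, b_killed = def_instr
    if not b_killed:
        return None  # src_b lives past the def; commuting would clobber it
    # After commuting, the result is defined directly into the copy's
    # destination (a simplification of creating a new B2 in B's class).
    return (copy_dst, op, src_b, src_a)
```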
|
llvm-svn: 46930
|
llvm-svn: 46903
|
preserved.
llvm-svn: 45596
|
llvm-svn: 45574
|
llvm-svn: 45468
|
that "machine" classes are used to represent the current state of
the code being compiled. Given this expanded name, we can start
moving other stuff into it. For now, move the UsedPhysRegs and
LiveIn/LiveOut vectors from MachineFunction into it.
Update all the clients to match.
This also reduces some needless #includes, such as MachineModuleInfo
from MachineFunction.
llvm-svn: 45467
|
- Eliminate the static "print" method for operands, moving it
into MachineOperand::print.
- Change various set* methods for register flags to take a bool
for the value to set it to. Remove unset* methods.
- Group methods more logically by operand flavor in MachineOperand.h
llvm-svn: 45461
|
llvm-svn: 45418
|
strict.
llvm-svn: 45253
|
llvm-svn: 44838
|
llvm-svn: 44671
|
a preferred spill candidate.
llvm-svn: 44644
|
llvm-svn: 44434
|
When a live interval is being spilled, rather than creating short, non-spillable
intervals for every def / use, split the interval at BB boundaries. That is, for
every BB where the live interval is defined or used, create a new interval that
covers all the defs and uses in the BB.
This is designed to eliminate one common problem: multiple reloads of the same
value in a single basic block. Note that it does *not* decrease the number of
spills, since no copies are inserted: the split intervals remain *connected*
through spills and reloads (or rematerialization). A newly created interval can
be spilled again; in that case, since it does not span multiple basic blocks, it
is spilled in the usual manner. However, it can reuse the same stack slot as the
previously split interval.
This is currently controlled by -split-intervals-at-bb.
llvm-svn: 44198
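As a rough model of the splitting described above: def/use sites are represented here as (basic block, index) pairs, an assumed encoding rather than the LLVM implementation.

```python
# Sketch of splitting a spilled interval at basic-block boundaries:
# instead of one tiny interval per def/use, build one interval per
# basic block covering all of that block's defs and uses, so the value
# is reloaded at most once per block.

from collections import defaultdict

def split_at_bb_boundaries(points):
    """points: list of (bb, index) def/use sites of the spilled
    interval. Returns one covering (start, end) range per basic
    block."""
    by_bb = defaultdict(list)
    for bb, idx in points:
        by_bb[bb].append(idx)
    return {bb: (min(ixs), max(ixs)) for bb, ixs in by_bb.items()}
```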
|
MachineOperand auxInfo. The previous clunky implementation used an external map
to track sub-register uses. That worked because the register allocator used a
new virtual register for each spilled use. With interval splitting (coming
soon), we may have multiple uses of the same register, some of which use
different sub-registers than others. It's too fragile to constantly update
that information.
llvm-svn: 44104
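A minimal sketch of the contrast above, assuming a toy Operand type: storing the sub-register index on each operand stays correct even when two uses of one register want different sub-registers, which a single external reg-to-subidx map cannot express.

```python
# Toy operand type (not LLVM's MachineOperand): the sub-register index
# lives on the operand itself rather than in an external {reg: subidx}
# map, so two uses of the same register can carry different indices.

from dataclasses import dataclass

@dataclass
class Operand:
    reg: str
    subreg: int = 0  # 0 = full register, mirroring the usual convention

# Two uses of the same register with different sub-registers: fine per
# operand, but ambiguous as a single map entry keyed by "v5".
uses = [Operand("v5", 1), Operand("v5", 2)]
```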
|
llvm-svn: 44010
|
llvm-svn: 43764
|
register coalescer interface: RegisterCoalescing.
llvm-svn: 43714
|
llvm-svn: 43700
|
- Some code clean up.
llvm-svn: 43606
|
traversing the inverse register coalescing map.
llvm-svn: 43118
|
llvm-svn: 43065
|
llvm-svn: 43035