Commit message
a preferred spill candidate.
llvm-svn: 44644
llvm-svn: 44612
last use.
llvm-svn: 44611
llvm-svn: 44610
llvm-svn: 44609
This allows an important optimization to be re-enabled.
- If all uses / defs of a split interval can be folded, give the interval a
low spill weight so it is not picked if further spilling is needed (this
avoids forcing other intervals in the same BB to be spilled).
llvm-svn: 44601
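As a rough sketch of the weighting heuristic described above (UseDef, SplitInterval, and adjustSplitIntervalWeight are simplified stand-ins invented for illustration, not LLVM's actual API):

  #include <algorithm>
  #include <vector>

  // One use or def of the split interval, and whether it can be folded
  // into the surrounding instruction (e.g. as a memory operand).
  struct UseDef { bool foldable; };

  struct SplitInterval {
    std::vector<UseDef> useDefs;
    float spillWeight;
  };

  // If every use/def of the split interval can be folded, spilling it again
  // costs no extra instructions, so give it a very low weight and keep the
  // other intervals in the block live.
  void adjustSplitIntervalWeight(SplitInterval &LI) {
    bool allFoldable = std::all_of(LI.useDefs.begin(), LI.useDefs.end(),
                                   [](const UseDef &UD) { return UD.foldable; });
    if (allFoldable)
      LI.spillWeight = 0.0f;   // becomes a preferred spill candidate
  }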
llvm-svn: 44565
llvm-svn: 44532
llvm-svn: 44531
llvm-svn: 44517
llvm-svn: 44482
-> cmpl [mem], 0.
llvm-svn: 44479
extra load.
llvm-svn: 44467
llvm-svn: 44443
in the middle of a split basic block, create a new live interval starting at
the def. This avoids artificially extending the live interval over a number of
cycles where it is dead. e.g.
bb1:
= vr1204 (use / kill) <= new interval starts and ends here.
...
...
vr1204 = (new def) <= start a new interval here.
= vr1204 (use)
llvm-svn: 44436
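A minimal sketch of that renumbering, assuming a toy Slot/Range representation rather than LLVM's real live-interval data structures; the point is that a def opens a fresh range instead of stretching the previous one across the dead gap:

  #include <vector>

  // Instruction indices inside one basic block where the register is
  // defined or used (toy representation).
  struct Slot { unsigned index; bool isDef; };
  struct Range { unsigned start, end; };

  std::vector<Range> buildRangesInBB(const std::vector<Slot> &slots) {
    std::vector<Range> ranges;
    for (const Slot &S : slots) {
      if (S.isDef || ranges.empty())
        ranges.push_back({S.index, S.index});   // a new value starts here
      else
        ranges.back().end = S.index;            // extend the current value to this use
    }
    return ranges;
  }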
llvm-svn: 44434
llvm-svn: 44428
llvm-svn: 44386
When a live interval is being spilled, rather than creating short, non-spillable
intervals for every def / use, split the interval at BB boundaries. That is, for
every BB where the live interval is defined or used, create a new interval that
covers all the defs and uses in the BB.
This is designed to eliminate one common problem: multiple reloads of the same
value in a single basic block. Note that it does *not* decrease the number of
spills, since no copies are inserted; the split intervals are *connected* through
spills and reloads (or rematerialization). A newly created interval can be
spilled again; in that case, since it does not span multiple basic blocks, it is
spilled in the usual manner. However, it can reuse the same stack slot as the
previously split interval.
This is currently controlled by -split-intervals-at-bb.
llvm-svn: 44198
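Roughly, the per-BB grouping could look like this sketch (UsePoint, NewInterval, and splitAtBBBoundaries are made-up names; the real splitter works on LiveInterval data, but the grouping and the shared stack slot are the idea being illustrated):

  #include <algorithm>
  #include <map>
  #include <vector>

  struct UsePoint    { unsigned bbNumber; unsigned instrIndex; };
  struct NewInterval { unsigned bbNumber; unsigned start, end; int stackSlot; };

  // Group all defs/uses of the spilled interval by basic block and create one
  // new interval per block. Every piece keeps the original stack slot, so the
  // pieces stay connected through spills and reloads (or rematerialization).
  std::vector<NewInterval> splitAtBBBoundaries(const std::vector<UsePoint> &points,
                                               int origStackSlot) {
    std::map<unsigned, NewInterval> perBB;
    for (const UsePoint &P : points) {
      auto it = perBB.find(P.bbNumber);
      if (it == perBB.end())
        perBB[P.bbNumber] = {P.bbNumber, P.instrIndex, P.instrIndex, origStackSlot};
      else {
        it->second.start = std::min(it->second.start, P.instrIndex);
        it->second.end   = std::max(it->second.end,   P.instrIndex);
      }
    }
    std::vector<NewInterval> result;
    for (const auto &kv : perBB)
      result.push_back(kv.second);
    return result;
  }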
llvm-svn: 44166
MachineOperand auxInfo. The previous, clunky implementation uses an external map
to track sub-register uses. That works because the register allocator uses
a new virtual register for each spilled use. With interval splitting (coming
soon), we may have multiple uses of the same register, some of which use
different sub-registers than others. It's too fragile to constantly
update the information.
llvm-svn: 44104
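The gist of the change, with a cut-down Operand struct standing in for MachineOperand (the field names are illustrative, not the real layout):

  // Before (fragile): the sub-register index lived in an external map keyed by
  // virtual register, e.g. std::map<unsigned, unsigned> SubRegIdxOf, which
  // breaks once the same register has several uses with different sub-registers.
  //
  // After: each operand carries its own sub-register index, so two uses of the
  // same virtual register can name different sub-registers without any
  // external bookkeeping.
  struct Operand {
    unsigned reg;        // virtual register number
    unsigned subRegIdx;  // 0 = whole register, otherwise a sub-register index
    bool isDef;
  };

  inline bool usesSubReg(const Operand &op) { return op.subRegIdx != 0; }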
llvm-svn: 44010
llvm-svn: 43819
replaces other operands of the same register. Watch out for situations where
only some of the operands are sub-register uses.
llvm-svn: 43776
llvm-svn: 43763
llvm-svn: 43692
can be eliminated by the allocator if the destination and source target the
same register. The most common case is when the source and destination registers
are in different classes. For example, on x86, mov32to32_ targets GR32_, which
contains a subset of the registers in GR32.
The allocator can do 2 things:
1. Set the preferred allocation for the destination of a copy to that of its source.
2. After allocation is done, change the allocation of a copy destination (if
legal) so the copy can be eliminated.
This eliminates 443 extra moves from 403.gcc.
llvm-svn: 43662
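A sketch of those two steps under simplified assumptions (Copy, recordCopyHint, and tryEliminateCopy are invented names, and legality of the re-assignment is decided by the caller rather than computed here):

  #include <map>

  // A register copy and the allocator's vreg -> physical register map.
  struct Copy { unsigned dstVReg, srcVReg; };
  using Assignment = std::map<unsigned, unsigned>;

  // Step 1 (before allocation): hint that the copy destination should be
  // allocated to whatever register its source ends up in.
  void recordCopyHint(const Copy &C, std::map<unsigned, unsigned> &copyHint) {
    copyHint[C.dstVReg] = C.srcVReg;
  }

  // Step 2 (after allocation): if the destination landed elsewhere but moving
  // it to the source's register is legal (right register class, no
  // interference), change the assignment so the copy becomes an identity move
  // that the caller can delete.
  bool tryEliminateCopy(const Copy &C, Assignment &assign, bool reassignLegal) {
    if (assign[C.dstVReg] == assign[C.srcVReg])
      return true;                       // already a no-op copy
    if (!reassignLegal)
      return false;
    assign[C.dstVReg] = assign[C.srcVReg];
    return true;
  }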
llvm-svn: 43069
llvm-svn: 43060
llvm-svn: 42916
(almost) a register copy. However, it always coalesced to the register of the
RHS (the super-register). All uses of the result of an EXTRACT_SUBREG are
sub-register uses, which adds subtle complications to load folding, spiller
rewrite, etc.
llvm-svn: 42899
llvm-svn: 42742
of comparing begin() and end().
llvm-svn: 42585
isRegister, isImmediate, and isMachineBasicBlock, which are equivalent,
and more popular.
llvm-svn: 41958
llvm-svn: 41739
This reduces coalescing time on siod on Mac OS X/PPC by 35%. Also remove the back pointer from VNInfo to LiveInterval, along with other tweaks.
llvm-svn: 41729
llvm-svn: 41598
Changes related modules so VNInfos are not copied. This decreases
copy coalescing time by 45% and overall compilation time by 10% on siod.
llvm-svn: 41579
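A condensed view of the change, using an illustrative VNInfo-like struct rather than the exact layout of the time:

  #include <vector>

  // Per-value information for a live interval (illustrative subset).
  struct VNInfoSketch {
    unsigned id;
    unsigned defIndex;
    std::vector<unsigned> kills;
  };

  // Before: live ranges held the value info by value, so joining intervals
  // during coalescing copied (and re-copied) the kill lists.
  struct LiveRangeByValue { unsigned start, end; VNInfoSketch valno; };

  // After: live ranges point at a single VNInfoSketch owned by the interval,
  // so coalescing only rewrites pointers and never copies the payload.
  struct LiveRangeByPtr   { unsigned start, end; VNInfoSketch *valno; };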
to turn off remat for debugging.
llvm-svn: 41118
intervals that are coalesced to be rematerialized.
llvm-svn: 41060
llvm-svn: 41016
llvm-svn: 40925
- Fix some minor bugs related to special markers on val# def. ~0U means
undefined, ~1U means dead val#.
llvm-svn: 40916
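A minimal sketch of how such sentinel markers behave, with illustrative constants and a simplified ValNo struct (not the real class):

  // Sentinels stored in a value number's "defining instruction" slot.
  static const unsigned UNDEFINED_VN = ~0U;   // value number is undefined
  static const unsigned DEAD_VN      = ~1U;   // value number is dead

  struct ValNo {
    unsigned def;   // instruction index, or one of the sentinels above
  };

  inline bool isUndefined(const ValNo &v) { return v.def == UNDEFINED_VN; }
  inline bool isDead(const ValNo &v)      { return v.def == DEAD_VN; }
  inline bool hasRealDef(const ValNo &v)  { return !isUndefined(v) && !isDead(v); }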
kill instruction #, and source register number (iff the value# is defined by a
copy).
- Now the def instruction # is set for every value#, not just for copy-defined ones.
- Update some outdated code related to inactive live ranges.
- Kill info not yet set. That's next patch.
llvm-svn: 40913
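Summarizing the tracked fields in one place (ValueNumberInfo is an illustrative stand-in, not the actual structure):

  #include <vector>

  // Bookkeeping per value number, as described above: where the value is
  // defined, where it is killed, and which register it was copied from.
  struct ValueNumberInfo {
    unsigned defInstr = 0;              // now set for every value number
    std::vector<unsigned> killInstrs;   // kill info not filled in yet ("next patch")
    bool definedByCopy = false;
    unsigned copySrcReg = 0;            // only meaningful when definedByCopy is true
  };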
llvm-svn: 37764
llvm-svn: 37743
with a general target hook to identify rematerializable instructions. Some
instructions are only rematerializable with specific operands, such as loads
from constant pools, while others are always rematerializable. This hook
allows both to be identified as being rematerializable with the same
mechanism.
llvm-svn: 37644
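Such a hook might look roughly like the following; TargetInstrHooks, isTriviallyReMaterializable, and the Instr fields are invented for illustration and differ from the real target interfaces:

  // Generic code asks the target whether an instruction, given its actual
  // operands, can simply be re-emitted at the point of use instead of being
  // spilled and reloaded.
  struct Instr {
    bool alwaysRematerializable;   // opcode-level property, e.g. "load immediate"
    bool loadsFromConstantPool;    // operand-specific property
    bool hasSideEffects;
  };

  class TargetInstrHooks {
  public:
    virtual ~TargetInstrHooks() = default;
    // Default: only opcodes that are rematerializable regardless of operands.
    virtual bool isTriviallyReMaterializable(const Instr &I) const {
      return I.alwaysRematerializable && !I.hasSideEffects;
    }
  };

  class X86LikeHooks : public TargetInstrHooks {
  public:
    // Additionally accept loads whose source is a constant pool entry, an
    // operand-specific case that only the target can recognize.
    bool isTriviallyReMaterializable(const Instr &I) const override {
      if (TargetInstrHooks::isTriviallyReMaterializable(I))
        return true;
      return I.loadsFromConstantPool && !I.hasSideEffects;
    }
  };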
and an implementation for x86.
llvm-svn: 37576
simultaneously. Move that pass to SimpleRegisterCoalescing.
This makes it easier to implement alternative register allocation and
coalescing strategies while maintaining reuse of the existing live
interval analysis.
llvm-svn: 37520
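The separation can be pictured with this sketch; the classes below are illustrative only and do not mirror the real pass interfaces:

  // Analysis: computes and owns the live intervals; it no longer performs any
  // coalescing itself.
  class LiveIntervalAnalysisSketch {
  public:
    void computeIntervals() { /* build intervals for the current function */ }
    unsigned numIntervals() const { return 0; }   // placeholder query
  };

  // Transformation: a separate pass that consumes the analysis and joins
  // copy-related intervals. Alternative coalescing (or allocation) strategies
  // can be swapped in without touching the analysis.
  class SimpleRegisterCoalescingSketch {
    LiveIntervalAnalysisSketch &LIS;
  public:
    explicit SimpleRegisterCoalescingSketch(LiveIntervalAnalysisSketch &L) : LIS(L) {}
    void run() {
      for (unsigned i = 0, e = LIS.numIntervals(); i != e; ++i) {
        // inspect copy instructions and join their intervals using LIS queries
      }
    }
  };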
the interval.
llvm-svn: 37052