| Commit message | Author | Age | Files | Lines |
No functional change, just moved header files.
Targets can inject custom passes between register allocation and
rewriting. This makes it possible to tweak the register allocation
before rewriting, using the full global interference checking available
from LiveRegMatrix.
llvm-svn: 168806
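A minimal sketch of how a target might use such an injection point, assuming a
TargetPassConfig hook named addPreRewrite() and a hypothetical
createMyRegTweakPass() factory; illustrative only, not code from this commit:

    #include "llvm/CodeGen/TargetPassConfig.h"
    using namespace llvm;

    Pass *createMyRegTweakPass(); // hypothetical target-specific pass factory

    class MyTargetPassConfig : public TargetPassConfig {
    public:
      // Runs after the allocator has filled in VirtRegMap, but before virtual
      // registers are rewritten, so LiveRegMatrix is still available.
      bool addPreRewrite() override {
        addPass(createMyRegTweakPass());
        return true; // a pass was added
      }
    };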
llvm-svn: 163974
OK, not really. We don't want to reintroduce the old rewriter hacks.
This patch extracts virtual register rewriting as a separate pass that
runs after the register allocator. This is possible now that
CodeGen/Passes.cpp can configure the full optimizing register allocator
pipeline.
The rewriter pass uses register assignments in VirtRegMap to rewrite
virtual registers to physical registers, and it inserts kill flags based
on live intervals.
These finalization steps are the same for the optimizing register
allocators: RABasic, RAGreedy, and PBQP.
llvm-svn: 158244
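A minimal sketch of the core rewriting step, assuming a recent tree's headers
and names; the real rewriter pass does more (kill flags, bookkeeping), but the
heart of it is substituting VirtRegMap's assignments:

    #include "llvm/CodeGen/MachineFunction.h"
    #include "llvm/CodeGen/VirtRegMap.h"
    using namespace llvm;

    static void rewriteVirtRegs(MachineFunction &MF, VirtRegMap &VRM) {
      for (MachineBasicBlock &MBB : MF)
        for (MachineInstr &MI : MBB)
          for (MachineOperand &MO : MI.operands()) {
            if (!MO.isReg() || !MO.getReg().isVirtual())
              continue;
            // Replace the virtual register with the physical register the
            // allocator recorded in VirtRegMap.
            MO.setReg(VRM.getPhys(MO.getReg()));
          }
      // The real pass also inserts kill flags here, based on live intervals.
    }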
This thing is looking a lot like a virtual register map now.
llvm-svn: 144486
Nobody cared; StackSlotColoring scans the instructions to find used stack
slots.
llvm-svn: 144485
Most of this stuff was supporting the old deferred spill code insertion
mechanism. Modern spillers just edit machine code in place.
llvm-svn: 144484
The information was only used by the register allocator in
StackSlotColoring.
llvm-svn: 144482
RAGreedy::tryAssign will now evict interference from the preferred
register even when another register is free.
To support this, add the EvictionCost struct that counts how many hints
are broken by an eviction. We don't want to break one hint just to
satisfy another.
Rename canEvict to shouldEvict, and add the first bit of eviction policy
that doesn't depend on spill weights: Always make room in the preferred
register as long as the evictees can be split and aren't already
assigned to their preferred register.
Also make the CSR avoidance more accurate. When looking for a cheaper
register it is OK to use a new volatile register. Only CSR aliases that
have never been used before should be avoided.
llvm-svn: 134735
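The gist of the comparison can be sketched like this (field names are
illustrative; the real EvictionCost in RegAllocGreedy differs in detail):
breaking hints dominates, spill weight only breaks ties.

    struct EvictionCost {
      unsigned BrokenHints = 0; // hinted live ranges this eviction would displace
      float MaxWeight = 0.0f;   // largest spill weight among the evictees

      bool operator<(const EvictionCost &Other) const {
        if (BrokenHints != Other.BrokenHints)
          return BrokenHints < Other.BrokenHints; // fewer broken hints wins
        return MaxWeight < Other.MaxWeight;       // then cheaper evictees win
      }
    };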
llvm-svn: 126002
before splitting.
All new virtual registers created for spilling or splitting point back to their original.
llvm-svn: 125980
The rewriter works almost identically to -rewriter=trivial, except it also
eliminates any identity copies.
This makes the new register allocators independent of VirtRegRewriter.cpp,
which will be going away at the same time as RegAllocLinearScan.
llvm-svn: 125967
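After operands have been rewritten to physical registers, the extra step this
rewriter adds over the trivial one amounts to roughly the following sketch
(make_early_inc_range is from a newer tree's STLExtras; erasing while
iterating needs that kind of care):

    #include "llvm/ADT/STLExtras.h"
    #include "llvm/CodeGen/MachineBasicBlock.h"
    using namespace llvm;

    static void eraseIdentityCopies(MachineBasicBlock &MBB) {
      for (MachineInstr &MI : make_early_inc_range(MBB)) {
        // A COPY whose source and destination ended up as the same physical
        // register is a no-op after rewriting and can be deleted.
        if (MI.isCopy() &&
            MI.getOperand(0).getReg() == MI.getOperand(1).getReg())
          MI.eraseFromParent();
      }
    }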
llvm-svn: 123123
depending on TRI::FirstVirtualRegister.
Also use TRI::printReg instead of printing virtual registers directly.
llvm-svn: 123101
registers for a given virtual register.
Reserved registers are filtered from the allocation order, and any valid hint is
returned as the first suggestion.
For target dependent hints, a number of arcane target hooks are invoked.
llvm-svn: 121497
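A simplified sketch of the concept; the names and signature are illustrative
(the real AllocationOrder class caches this per register class and consults
target hooks for the hints):

    #include "llvm/ADT/ArrayRef.h"
    #include "llvm/ADT/BitVector.h"
    #include "llvm/ADT/SmallVector.h"
    using namespace llvm;

    static SmallVector<unsigned, 16>
    buildAllocationOrder(ArrayRef<unsigned> RawOrder, unsigned Hint,
                         const BitVector &Reserved) {
      SmallVector<unsigned, 16> Order;
      if (Hint && !Reserved.test(Hint))
        Order.push_back(Hint); // any valid hint is suggested first
      for (unsigned PhysReg : RawOrder)
        if (!Reserved.test(PhysReg) && PhysReg != Hint)
          Order.push_back(PhysReg); // reserved registers are filtered out
      return Order;
    }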
Use amazing new function call technology instead of writing identical code in
multiple places.
This fixes PR8604.
llvm-svn: 119306
llvm-svn: 110460
llvm-svn: 110410
address of the static
ID member as the sole unique type identifier. Clean up APIs related to this change.
llvm-svn: 110396
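The pattern being referred to looks roughly like this for any pass (the class
name here is made up): the pass is identified by the address of its static ID
member, so no two pass types can collide.

    #include "llvm/CodeGen/MachineFunctionPass.h"
    using namespace llvm;

    class ExampleRewritePass : public MachineFunctionPass {
    public:
      static char ID; // the *address* of ID is the unique type identifier
      ExampleRewritePass() : MachineFunctionPass(ID) {}
      bool runOnMachineFunction(MachineFunction &MF) override { return false; }
    };

    char ExampleRewritePass::ID = 0; // storage only; the value never matters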
rewrite instructions for live range splitting.
Still work in progress.
llvm-svn: 109469
This introduces a new pass, SlotIndexes, which is responsible for numbering
instructions for register allocation (and other clients). SlotIndexes numbering
is designed to match the existing scheme, so this patch should not cause any
changes in the generated code.
For consistency, and to avoid naming confusion, LiveIndex has been renamed
SlotIndex.
The processImplicitDefs method of the LiveIntervals analysis has been moved
into its own pass so that it can be run prior to SlotIndexes. This was
necessary to match the existing numbering scheme.
llvm-svn: 85979
llvm-svn: 83254
a new class, MachineInstrIndex, which hides arithmetic details from
most clients. This is a step towards allowing the register allocator
to update/insert code during allocation.
llvm-svn: 81040
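A minimal sketch of what such a wrapper buys (this is not the actual
MachineInstrIndex definition): clients compare and step indices through a
small interface instead of doing raw unsigned arithmetic themselves.

    class InstrIndex {
      unsigned Index = 0;

    public:
      // Gap left between consecutive instruction numbers so newly inserted
      // code can be numbered without renumbering everything; the real
      // constant and slot scheme differ.
      static constexpr unsigned InstrDist = 4;

      InstrIndex() = default;
      explicit InstrIndex(unsigned I) : Index(I) {}

      bool operator<(InstrIndex Other) const { return Index < Other.Index; }
      bool operator==(InstrIndex Other) const { return Index == Other.Index; }
      InstrIndex nextInstr() const { return InstrIndex(Index + InstrDist); }
    };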
llvm-svn: 79842
LiveInterval, etc to raw_ostream.
llvm-svn: 76965
MachineRegisterInfo. This allows more passes to set them.
llvm-svn: 73346
class.
llvm-svn: 70821
not utilizing registers at all. The fundamental problem is that linearscan's
backtracking can end up freeing more than one allocated register. However,
reloads and restores might be folded into uses / defs, and freed registers
might not be used at all.
VirtRegMap keeps track of allocations, so it knows what's not used. As a
horrible hack, the stack coloring can color spill slots with *free* registers.
That is, it replaces reloads and spills with copies from and to the free
register. It unfolds instructions that load and store the spill slot and
replaces them with register-using variants.
Not yet enabled. This is part 1. More coming.
llvm-svn: 70787
llvm-svn: 68099
llvm-svn: 68092
llvm-svn: 66870
No (intended) functionality change.
llvm-svn: 66720
llvm-svn: 61715
llvm-svn: 51932
the uses when the live interval is being spilled.
llvm-svn: 49542
llvm-svn: 48297
llvm-svn: 48246
around the defs and uses of the interval being allocated to make it possible
for the interval to target a register, spill it right away, and restore a
register for the uses. This likely generates terrible code but is better than
aborting.
llvm-svn: 48218
llvm-svn: 47657
llvm-svn: 46930
llvm-svn: 45418
llvm-svn: 44612
last use.
llvm-svn: 44611
llvm-svn: 44609
llvm-svn: 44517
llvm-svn: 44428
llvm-svn: 44386
When a live interval is being spilled, rather than creating short, non-spillable
intervals for every def / use, split the interval at BB boundaries. That is, for
every BB where the live interval is defined or used, create a new interval that
covers all the defs and uses in the BB.
This is designed to eliminate one common problem: multiple reloads of the same
value in a single basic block. Note that it does *not* decrease the number of
spills, since no copies are inserted; the split intervals are *connected*
through spills and reloads (or rematerialization). A newly created interval
can be spilled again; in that case, since it does not span multiple basic
blocks, it is spilled in the usual manner. However, it can reuse the same
stack slot as the previously split interval.
This is currently controlled by -split-intervals-at-bb.
llvm-svn: 44198
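A conceptual sketch of the splitting step, with hypothetical helper functions
standing in for the real spiller machinery: each block that touches the
spilled interval gets its own small interval, and all of the pieces share one
stack slot.

    #include "llvm/CodeGen/LiveInterval.h"
    #include "llvm/CodeGen/MachineBasicBlock.h"
    #include <vector>
    using namespace llvm;

    // Hypothetical helpers; the real work happens in the spilling code.
    std::vector<MachineBasicBlock *> blocksTouching(const LiveInterval &LI);
    unsigned makeIntervalCoveringBlock(const LiveInterval &LI,
                                       MachineBasicBlock *MBB);
    void assignStackSlot(unsigned VirtReg, int Slot);

    void splitAtBlockBoundaries(const LiveInterval &OrigLI, int OrigSlot) {
      for (MachineBasicBlock *MBB : blocksTouching(OrigLI)) {
        // One new interval per block, covering only that block's defs/uses.
        unsigned NewVReg = makeIntervalCoveringBlock(OrigLI, MBB);
        // No copies are inserted: the pieces stay connected through the spill
        // slot (or rematerialization), so the original slot is reused.
        assignStackSlot(NewVReg, OrigSlot);
      }
    }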
Turn this:
        movswl %ax, %eax
        movl %eax, -36(%ebp)
        xorl %edi, -36(%ebp)
into
        movswl %ax, %eax
        xorl %edi, %eax
        movl %eax, -36(%ebp)
by unfolding the load / store xorl into an xorl and a store when we know the
value in the spill slot is available in a register. This doesn't change the
number of instructions but reduces the number of times memory is accessed.
Also unfold some load-folding instructions and reuse the value when a similar
situation presents itself.
llvm-svn: 42947
intervals that are coalesced to be rematerialized.
llvm-svn: 41060
llvm-svn: 40896