Commit log (newest first):
llvm-svn: 126002
before splitting.
All new virtual registers created for spilling or splitting point back to their original.
llvm-svn: 125980
The rewriter works almost identically to -rewriter=trivial, except it also
eliminates any identity copies.
This makes the new register allocators independent of VirtRegRewriter.cpp, which
will be going away at the same time as RegAllocLinearScan.
llvm-svn: 125967
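As a rough standalone sketch of what such a rewriter does, under the assumption of hypothetical simplified types rather than the real MachineInstr/VirtRegMap classes: rewrite every virtual register to its assigned physical register, then drop any copy whose destination and source end up identical.

  #include <map>
  #include <string>
  #include <vector>

  // Hypothetical, simplified instruction (not MachineInstr): an opcode name
  // plus register operands. Negative numbers stand for virtual registers,
  // positive numbers for physical registers.
  struct Inst {
    std::string Op;
    std::vector<int> Regs;
  };

  // Apply the virtual-to-physical assignment, then delete identity copies.
  void rewriteAndStripIdentityCopies(std::vector<Inst> &Code,
                                     const std::map<int, int> &Assignment) {
    std::vector<Inst> Out;
    for (Inst I : Code) {
      for (int &R : I.Regs)
        if (R < 0)                      // virtual register: look up its phys reg
          R = Assignment.at(R);
      bool IdentityCopy = I.Op == "COPY" && I.Regs.size() == 2 &&
                          I.Regs[0] == I.Regs[1];
      if (!IdentityCopy)                // identity copies are simply dropped
        Out.push_back(I);
    }
    Code = Out;
  }

For example, once a virtual register has been assigned EAX, a COPY of that register into EAX becomes a copy of EAX into EAX and is deleted.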
llvm-svn: 123123
depending on TRI::FirstVirtualRegister.
Also use TRI::printReg instead of printing virtual registers directly.
llvm-svn: 123101
registers for a given virtual register.
Reserved registers are filtered from the allocation order, and any valid hint is
returned as the first suggestion.
For target-dependent hints, a number of arcane target hooks are invoked.
llvm-svn: 121497
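A rough sketch of that ordering logic, again with hypothetical simplified types (the target hooks used for target-dependent hints are not modeled): start from the register class's allocation order, drop reserved registers, and move a usable hint to the front.

  #include <algorithm>
  #include <set>
  #include <vector>

  // Hypothetical register numbering and ordering helper (not LLVM's actual
  // allocation-order machinery).
  using Reg = unsigned;

  // Build the order in which physical registers should be tried for one
  // virtual register: reserved registers are filtered out, and a usable
  // hint (if any) becomes the first candidate.
  std::vector<Reg> buildAllocationOrder(const std::vector<Reg> &ClassOrder,
                                        const std::set<Reg> &Reserved,
                                        Reg Hint /* 0 = no hint */) {
    std::vector<Reg> Order;
    for (Reg R : ClassOrder)
      if (!Reserved.count(R))
        Order.push_back(R);
    auto HintPos = std::find(Order.begin(), Order.end(), Hint);
    if (Hint && HintPos != Order.end())   // a valid hint is suggested first
      std::rotate(Order.begin(), HintPos, HintPos + 1);
    return Order;
  }

With a class order of {EAX, ECX, EDX, ESP}, ESP reserved, and a hint of ECX, the registers would be tried in the order ECX, EAX, EDX.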
Use amazing new function call technology instead of writing identical code in
multiple places.
This fixes PR8604.
llvm-svn: 119306
llvm-svn: 110460
llvm-svn: 110410
address of the static ID member as the sole unique type identifier. Clean up APIs related to this change.
llvm-svn: 110396
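The pattern being referred to is the one where every pass declares a static ID member and the address of that member is the pass's identity; a stripped-down standalone illustration with hypothetical pass classes (not LLVM's real Pass hierarchy or registration code):

  #include <cassert>

  // Hypothetical stand-ins for a pass hierarchy. The only point is that each
  // pass class owns a static ID object: the *address* of that object is unique
  // per class, so it works as the type identifier with no strings or enums.
  struct Pass {
    const void *PassID;
    explicit Pass(const void *ID) : PassID(ID) {}
    virtual ~Pass() = default;
  };

  struct LiveIntervalsLike : Pass {
    static char ID;                     // &ID identifies this pass type
    LiveIntervalsLike() : Pass(&ID) {}
  };
  char LiveIntervalsLike::ID = 0;

  struct SlotIndexesLike : Pass {
    static char ID;
    SlotIndexesLike() : Pass(&ID) {}
  };
  char SlotIndexesLike::ID = 0;

  int main() {
    LiveIntervalsLike LI;
    SlotIndexesLike SI;
    // Identity checks reduce to pointer comparisons against &SomePass::ID.
    assert(LI.PassID == &LiveIntervalsLike::ID);
    assert(LI.PassID != SI.PassID);
    return 0;
  }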
rewrite instructions for live range splitting.
Still work in progress.
llvm-svn: 109469
This introduces a new pass, SlotIndexes, which is responsible for numbering
instructions for register allocation (and other clients). SlotIndexes numbering
is designed to match the existing scheme, so this patch should not cause any
changes in the generated code.
For consistency, and to avoid naming confusion, LiveIndex has been renamed
SlotIndex.
The processImplicitDefs method of the LiveIntervals analysis has been moved
into its own pass so that it can be run prior to SlotIndexes. This was
necessary to match the existing numbering scheme.
llvm-svn: 85979
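A toy standalone model of such a numbering, with hypothetical types (the real SlotIndexes pass keeps several slots per instruction and supports renumbering): each instruction gets an index, indices are spaced out so that instructions inserted later can be numbered without renumbering the whole function, and a live range becomes a simple index interval.

  #include <cstdio>
  #include <map>
  #include <string>
  #include <vector>

  // Hypothetical instruction and index types (not MachineInstr/SlotIndex).
  struct MI { std::string Text; };
  using SlotIdx = unsigned;

  constexpr SlotIdx Gap = 4;    // room for later insertions between neighbours

  std::map<const MI *, SlotIdx> numberInstructions(const std::vector<MI> &Block) {
    std::map<const MI *, SlotIdx> Index;
    SlotIdx Next = Gap;
    for (const MI &I : Block) {
      Index[&I] = Next;           // every instruction gets a distinct index
      Next += Gap;
    }
    return Index;
  }

  int main() {
    std::vector<MI> Block = {{"def x"}, {"use x"}, {"ret"}};
    auto Idx = numberInstructions(Block);
    // A live range can now be described as a half-open index interval.
    std::printf("x live in [%u, %u)\n", Idx[&Block[0]], Idx[&Block[1]]);
    return 0;
  }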
llvm-svn: 83254
a new class, MachineInstrIndex, which hides arithmetic details from
most clients. This is a step towards allowing the register allocator
to update/insert code during allocation.
llvm-svn: 81040
llvm-svn: 79842
LiveInterval, etc to raw_ostream.
llvm-svn: 76965
MachineRegisterInfo. This allows more passes to set them.
llvm-svn: 73346
class.
llvm-svn: 70821
not utilizing registers at all. The fundamental problem is that linear scan's backtracking can end up freeing more than one allocated register. However, reloads and restores might be folded into uses / defs, and the freed registers might not be used at all.
VirtRegMap keeps track of allocations, so it knows what's not used. As a horrible hack, the stack coloring pass can color spill slots with *free* registers. That is, it replaces reloads and spills with copies from and to the free register. It unfolds instructions that load and store the spill slot and replaces them with register-using variants.
Not yet enabled. This is part 1. More coming.
llvm-svn: 70787
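A standalone sketch of just the reload/spill-to-copy half of that hack, with hypothetical simplified types (the unfolding of load/store instructions into register-using variants is not shown): once a spill slot has been colored with a physical register the allocator never used, stores to the slot become copies into that register and reloads become copies out of it.

  #include <vector>

  // Hypothetical, simplified spill-code operation (not MachineInstr).
  struct Op {
    enum Kind { StoreToSlot, LoadFromSlot, Copy, Other } K;
    int Slot;     // spill slot index for StoreToSlot/LoadFromSlot, else -1
    int Reg;      // register written (LoadFromSlot/Copy) or read (StoreToSlot)
    int Src;      // source register for Copy
  };

  // Replace accesses to ColoredSlot with copies through FreeReg, a physical
  // register that the register allocator left completely unused.
  void colorSlotWithFreeReg(std::vector<Op> &Code, int ColoredSlot, int FreeReg) {
    for (Op &O : Code) {
      if (O.Slot != ColoredSlot)
        continue;
      if (O.K == Op::StoreToSlot) {         // store reg -> slot becomes copy reg -> FreeReg
        O.K = Op::Copy; O.Src = O.Reg; O.Reg = FreeReg; O.Slot = -1;
      } else if (O.K == Op::LoadFromSlot) { // reload slot -> reg becomes copy FreeReg -> reg
        O.K = Op::Copy; O.Src = FreeReg; O.Slot = -1;
      }
    }
  }

A store/reload pair for that slot thus turns into two register copies, and the stack traffic for it disappears.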
llvm-svn: 68099
llvm-svn: 68092
llvm-svn: 66870
No (intended) functionality change.
llvm-svn: 66720
llvm-svn: 61715
llvm-svn: 51932
the uses when the live interval is being spilled.
llvm-svn: 49542
llvm-svn: 48297
llvm-svn: 48246
around the defs and uses of the interval being allocated, to make it possible for the interval to target a register, spill it right away, and restore the register for uses. This likely generates terrible code, but it is better than aborting.
llvm-svn: 48218
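A standalone sketch of that fallback scheme with hypothetical types: the value owns a register only across each individual def or use, so a store is inserted right after every def and a reload right before every use.

  #include <string>
  #include <vector>

  // Hypothetical textual "instructions" (not MachineInstr), tagged with
  // whether they define or use the value %v that could not be given a
  // register for its whole live range.
  struct Inst {
    std::string Text;
    bool DefinesV;
    bool UsesV;
  };

  // Give %v a register only around each def/use: store to its stack slot
  // immediately after every def, reload immediately before every use.
  std::vector<Inst> spillAroundDefsAndUses(const std::vector<Inst> &In) {
    std::vector<Inst> Out;
    for (const Inst &I : In) {
      if (I.UsesV)
        Out.push_back({"reload %v from its stack slot", false, false});
      Out.push_back(I);
      if (I.DefinesV)
        Out.push_back({"store %v to its stack slot", false, false});
    }
    return Out;
  }

This is the "terrible but better than aborting" shape: every use pays a reload, but the interval only needs a register for one instruction at a time.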
llvm-svn: 47657
llvm-svn: 46930
llvm-svn: 45418
llvm-svn: 44612
last use.
llvm-svn: 44611
llvm-svn: 44609
llvm-svn: 44517
llvm-svn: 44428
llvm-svn: 44386
When a live interval is being spilled, rather than creating short, non-spillable
intervals for every def / use, split the interval at BB boundaries. That is, for
every BB where the live interval is defined or used, create a new interval that
covers all the defs and uses in the BB.
This is designed to eliminate one common problem: multiple reloads of the same
value in a single basic block. Note that it does *not* decrease the number of spills,
since no copies are inserted, so the split intervals are *connected* through
spills and reloads (or rematerialization). A newly created interval can be
spilled again; in that case, since it does not span multiple basic blocks, it is
spilled in the usual manner. However, it can reuse the same stack slot as the
previously split interval.
This is currently controlled by -split-intervals-at-bb.
llvm-svn: 44198
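A standalone sketch of the per-block split, with hypothetical data structures (the real code operates on LiveIntervals): the spilled interval's defs and uses are grouped by basic block, each block gets its own small interval, and all of the pieces share the original stack slot so they stay connected through spills and reloads.

  #include <algorithm>
  #include <map>
  #include <utility>
  #include <vector>

  // Hypothetical description of one def or use of the interval being spilled
  // (not LLVM's LiveInterval machinery).
  struct UsePoint {
    int Block;      // basic block number
    int Index;      // instruction index within the function
  };

  struct SplitInterval {
    int Block;
    int Begin, End; // covers all defs and uses of the value inside Block
    int StackSlot;  // shared by every piece of the original interval
  };

  // Split one spilled interval into one interval per basic block it touches.
  std::vector<SplitInterval>
  splitAtBlockBoundaries(const std::vector<UsePoint> &Points, int OriginalStackSlot) {
    std::map<int, std::pair<int, int>> Range;  // block -> [min index, max index]
    for (const UsePoint &P : Points) {
      auto It = Range.find(P.Block);
      if (It == Range.end()) {
        Range.emplace(P.Block, std::make_pair(P.Index, P.Index));
      } else {
        It->second.first = std::min(It->second.first, P.Index);
        It->second.second = std::max(It->second.second, P.Index);
      }
    }
    std::vector<SplitInterval> Result;
    for (const auto &R : Range)
      Result.push_back({R.first, R.second.first, R.second.second, OriginalStackSlot});
    return Result;
  }

Reusing OriginalStackSlot for every piece is what keeps the split intervals connected through spills and reloads rather than through inserted copies.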
Turn this:
movswl %ax, %eax
movl %eax, -36(%ebp)
xorl %edi, -36(%ebp)
into
movswl %ax, %eax
xorl %edi, %eax
movl %eax, -36(%ebp)
by unfolding the load / store xorl into an xorl and a store when we know the
value in the spill slot is available in a register. This doesn't change the
number of instructions, but it reduces the number of times memory is accessed.
Also unfold some load-folding instructions and reuse the value when a similar
situation presents itself.
llvm-svn: 42947
intervals that are coalesced to be rematerialized.
llvm-svn: 41060
llvm-svn: 40896
llvm-svn: 40757
llvm-svn: 35660
llvm-svn: 35208
llvm-svn: 33749
rework the hacks that had us passing OStream in. We pass in std::ostream*
instead, check for null, and then dispatch to the correct print() method.
llvm-svn: 32636
now cerr, cout, and NullStream resp.
llvm-svn: 32298
llvm-svn: 31806
actually *removes* one of the operands, instead of just assigning both operands
the same register. This makes reasoning about instructions unnecessarily complex,
because you need to know whether you are before or after register allocation to match
up operand numbers with the target description file.
Changing this also gets rid of a bunch of hacky code in various places.
This patch also includes changes to fold loads into cmp/test instructions in
the X86 backend, along with a significant simplification to the X86 spill
folding code.
llvm-svn: 30108
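A standalone illustration of the distinction, with hypothetical structures rather than the real MachineInstr: when both operands are kept and merely forced into the same register, operand numbers keep lining up with the target description file both before and after register allocation.

  #include <cassert>
  #include <vector>

  // Hypothetical machine instruction (not MachineInstr): a list of register
  // operands whose positions are meant to match the target description file.
  struct MachInst {
    std::vector<int> Operands;  // e.g. an ADD: {dst, src1 (tied to dst), src2}
  };

  // Two-address handling that keeps every operand and simply assigns the tied
  // pair the same register, instead of deleting one operand (which would shift
  // operand numbers after register allocation).
  void tieOperands(MachInst &MI, unsigned DstIdx, unsigned TiedSrcIdx) {
    assert(DstIdx < MI.Operands.size() && TiedSrcIdx < MI.Operands.size());
    MI.Operands[TiedSrcIdx] = MI.Operands[DstIdx];  // same register, both kept
  }

  int main() {
    MachInst Add{{/*dst*/ 1, /*src1*/ 2, /*src2*/ 3}};
    tieOperands(Add, 0, 1);
    // Operand 2 is still src2, exactly where the target description expects it.
    assert(Add.Operands.size() == 3 && Add.Operands[2] == 3);
    return 0;
  }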