Commit message | Author | Age | Files | Lines
llvm-svn: 110410
address of the static
ID member as the sole unique type identifier. Clean up APIs related to this change.
llvm-svn: 110396
used by DBG_VALUE machine instructions or not. If a spilled register is used by a DBG_VALUE machine instruction, then insert a new DBG_VALUE machine instruction to encode the variable's new location on the stack.
llvm-svn: 110235
multiple defs, like t2LDRSB_POST.
The first def could accidentally steal the physreg that the second, tied def was
required to be allocated to.
Now, the tied use-def is treated more like an early clobber, and the physreg is
reserved before allocating the other defs.
This would never be a problem when the tied def was the only def, which is the
usual case.
This fixes MallocBench/gs for thumb2 -O0.
llvm-svn: 109715
Do not visit operands of old instruction. Visit all operands of new instruction.
llvm-svn: 108767
TII::isMoveInstr is going to be completely removed.
llvm-svn: 108507
llvm-svn: 108023
This code is transitional, it will soon be possible to eliminate
isExtractSubreg, isInsertSubreg, and isMoveInstr in most places.
llvm-svn: 107547
A partial redefine needs to be treated like a tied operand, and the register
must be reloaded while processing use operands.
This fixes a bug where partially redefined registers were processed as normal
defs with a reload added. The reload could clobber another use operand if it was
a kill that allowed register reuse.
llvm-svn: 107193
When an instruction has tied operands and physreg defines, we must take extra
care that the tied operands conflict with neither physreg defs nor uses.
Special treatment is given to inline asm and to instructions with tied operands
/ early clobbers and physreg defines.
This fixes PR7509.
llvm-svn: 107043
Early clobbers defining a virtual register were first allocated to a physreg and
then processed as a physreg EC, spilling the virtreg.
This fixes PR7382.
llvm-svn: 105998
register allocation.
Process all of the clobber lists at the end of the function, marking the
registers as used in MachineRegisterInfo.
This is necessary in case the calls clobber callee-saved registers (sic).
llvm-svn: 105473
A partial redef now triggers a reload if required. Also don't add
<imp-def,dead> operands for physical superregisters.
Kill flags are still treated as full register kills, and <imp-use,kill> operands
are added for physical superregisters as before.
llvm-svn: 104167
instruction.
This can happen on ARM:

    >> %reg1035:5<def>, %reg1035:6<def> = VLD1q16 %reg1028, 0, pred:14, pred:%reg0
    Regs: Q0=%reg1032* R0=%reg1028* R1=%reg1029* R2 R3=%reg1031*
    Killing last use: %reg1028
    Allocating %reg1035 from QPR
    Assigning %reg1035 to Q1
    << %D2<def>, %D3<def> = VLD1q16 %R0<kill>, 0, pred:14, pred:%reg0, %Q1<imp-def>
llvm-svn: 104056
This fixes the miscompilations of MultiSource/Applications/JM/l{en,de}cod.
Clang now successfully self-hosts in a debug build with the fast register allocator.
llvm-svn: 103975
llvm-svn: 103961
While that approach works wonders for register pressure, it tends to break
everything.
This should unbreak the arm-linux builder and fix a number of miscompilations.
llvm-svn: 103946
llvm-svn: 103940
out aliases when allocating. Clean up allocVirtReg().
Use calcSpillCost() to allow more aggressive hinting. Now the hint is always
taken unless blocked by a reserved register. This leads to more coalescing,
lower register pressure, and less spilling.
llvm-svn: 103939
This makes allocation independent of the ordering of use-def chains.
llvm-svn: 103935
llvm-svn: 103934
This is safe to do because the physreg has been marked UsedInInstr, and the kill flag will be set on the last operand using the virtreg if there are more than one.
llvm-svn: 103933
subregister indices.
llvm-svn: 103931
through the very long list of call-clobbered registers. We just assume all
registers are clobbered.
llvm-svn: 103930
llvm-svn: 103929
Debug code doesn't use callee-saved registers anyway, and the code is simpler this way. Now spillVirtReg always kills, and the isKill parameter is not needed.
llvm-svn: 103927
llvm-svn: 103926
llvm-svn: 103925
a condition's grouping. Every other use of Allocatable.test(Hint) groups it the
same way as it is indented, so move the parentheses to agree with that
grouping.
llvm-svn: 103869
When working top-down in a basic block, substituting physregs for virtregs, the use-def chains are kept up to date. That means we can recognize a virtreg kill by the use-def chain becoming empty.
This makes the fast allocator independent of incoming kill flags.
llvm-svn: 103866
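The idea above can be sketched outside LLVM. This is a hedged illustration with invented names (`RemainingUses`, `recordUseAndCheckKill` are not LLVM API): while rewriting a block top-down, track the remaining use count of each virtual register, and treat the use that drops the count to zero as a kill, regardless of any kill flags computed earlier.

```cpp
#include <cassert>
#include <map>

// Illustrative sketch, not LLVM's implementation: a virtreg's use-def chain
// is modelled as a remaining-use counter. When a use empties it, that use
// is a kill, independent of incoming kill flags.
bool recordUseAndCheckKill(std::map<int, unsigned> &RemainingUses, int VirtReg) {
  unsigned &Count = RemainingUses[VirtReg];
  assert(Count > 0 && "use of a virtreg with no remaining uses");
  --Count;
  return Count == 0; // last use: the use-def chain is now empty
}
```

With two recorded uses of a virtreg, only the second call reports a kill, which is exactly what makes the allocator robust against missing kill flags.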
llvm-svn: 103831
hint.
llvm-svn: 103828
llvm-svn: 103823
llvm-svn: 103821
llvm-svn: 103820
- Kill is implicit when use and def registers are identical.
- Only virtual registers can differ.
Add a -verify-fast-regalloc option to run the verifier before the fast allocator.
llvm-svn: 103797
This adds extra security against using clobbered physregs, and it adds kill
markers to physreg uses.
llvm-svn: 103784
llvm-svn: 103764
llvm-svn: 103748
This loop is quadratic in the capacity for a DenseMap:

    while (!map.empty())
      map.erase(map.begin());

Instead we now do a normal begin() - end() iteration followed by map.clear().
That also has the nice side effect of shrinking the map capacity on demand.
llvm-svn: 103747
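The fixed pattern can be sketched with std::unordered_map standing in for LLVM's DenseMap (a hypothetical stand-in; the capacity-shrinking behaviour of clear() mentioned above is specific to DenseMap):

```cpp
#include <cassert>
#include <unordered_map>

// Erasing begin() in a loop rescans the table from the front on every
// iteration, which is quadratic in the map's capacity. A single pass over
// the elements followed by clear() is linear.
int processAndClear(std::unordered_map<int, int> &LiveVirtRegs) {
  int Processed = 0;
  for (const auto &Entry : LiveVirtRegs) {
    (void)Entry; // per-element work (e.g. spilling) would go here
    ++Processed;
  }
  LiveVirtRegs.clear(); // empty the map in one shot
  return Processed;
}
```

The per-element work must not mutate the map during iteration, which is why the clear() is deferred to the end.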
llvm-svn: 103739
This causes way more identity copies to be generated, ripe for coalescing.
llvm-svn: 103686
llvm-svn: 103685
The X86 floating point stack pass and others depend on good kill flags.
llvm-svn: 103635
llvm-svn: 103530
llvm-svn: 103528
llvm-svn: 103522
This allows us to add accurate kill markers, something the scavenger likes.
Add some more tests from ARM that needed this.
llvm-svn: 103521
closure after allocating all blocks.
Add a few more test cases for -regalloc=fast.
llvm-svn: 103500
Sorry for the big change. The path leading up to this patch had some TableGen
changes that I didn't want to commit before I knew they were useful. They
weren't, and this version does not need them.
The fast register allocator now performs no liveness calculations. Instead, it relies
on kill flags provided by isel. (Currently those kill flags are also ignored due
to isel bugs.) The allocation algorithm is designed to work with any subset of
valid kill flags. More kill flags simply means fewer spills inserted.
Registers are allocated from a working set that contains no aliases. That means
most allocations can be done directly without expensive alias checks. When the
working set runs out of registers we do the full alias check to find new free
registers.
llvm-svn: 103488
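The two-tier scheme described above can be sketched as follows. This is a hedged illustration with invented names (`TinyAllocator`, `WorkingSet`, `isFree` are not LLVM's API): the fast path pops from an alias-free working set with no checks at all, and only when that runs dry does the allocator pay for a scan over the full register file.

```cpp
#include <algorithm>
#include <cassert>
#include <optional>
#include <vector>

// Illustrative model, not LLVM's implementation. The "expensive alias
// check" is modelled here as a simple in-use scan.
struct TinyAllocator {
  std::vector<int> WorkingSet; // known free and alias-free, cheap to pop
  std::vector<int> AllRegs;    // full register file, needs the slow check
  std::vector<int> InUse;

  bool isFree(int Reg) const { // stand-in for the full alias check
    return std::find(InUse.begin(), InUse.end(), Reg) == InUse.end();
  }

  std::optional<int> allocate() {
    if (!WorkingSet.empty()) { // fast path: no alias checks needed
      int Reg = WorkingSet.back();
      WorkingSet.pop_back();
      InUse.push_back(Reg);
      return Reg;
    }
    for (int Reg : AllRegs)    // slow path: scan for a new free register
      if (isFree(Reg)) {
        InUse.push_back(Reg);
        return Reg;
      }
    return std::nullopt;       // out of registers: caller must spill
  }
};
```

The design choice is that the common case (working set non-empty) does no searching at all; the slow scan only refills allocations once the cheap supply is exhausted.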