Commit message

get FixedStack PseudoSourceValues.
llvm-svn: 84326
llvm-svn: 83608

llvm-svn: 83589

llvm-svn: 83255

llvm-svn: 83254

require a LiveIntervals instance in future.
llvm-svn: 81374
a new class, MachineInstrIndex, which hides arithmetic details from
most clients. This is a step towards allowing the register allocator
to update/insert code during allocation.
llvm-svn: 81040
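By way of illustration only, the idea can be sketched as an opaque index type whose arithmetic sits behind named helpers; the class below (name, slot layout, and methods all invented for this sketch, not the actual LLVM API) shows the shape of such an interface:

    #include <cassert>

    // Sketch only: an opaque instruction-index wrapper in the spirit of the
    // MachineInstrIndex described above. The 4-slots-per-instruction layout
    // and the method names are assumptions made for this example.
    class InstrIndex {
      unsigned Index;                          // raw encoded value
      static const unsigned SlotsPerInstr = 4; // room for several slots per instruction

    public:
      explicit InstrIndex(unsigned I = 0) : Index(I) {}

      // Clients ask for neighbouring instructions by name instead of writing
      // raw arithmetic such as (Index & ~3u) + 4 themselves.
      InstrIndex nextInstr() const {
        return InstrIndex((Index / SlotsPerInstr + 1) * SlotsPerInstr);
      }
      InstrIndex prevInstr() const {
        assert(Index >= SlotsPerInstr && "no previous instruction");
        return InstrIndex((Index / SlotsPerInstr - 1) * SlotsPerInstr);
      }

      bool operator<(InstrIndex RHS) const { return Index < RHS.Index; }
      bool operator==(InstrIndex RHS) const { return Index == RHS.Index; }
      unsigned raw() const { return Index; } // escape hatch for the few clients that need it
    };

Keeping the raw arithmetic behind such an interface is what leaves room to renumber or insert instructions later without touching every client of the index.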
update all code that this affects.
llvm-svn: 79830

register interval, or the defining register for a stack interval. Access is via getCopy/setCopy and getReg/setReg.
llvm-svn: 78620
llvm-svn: 77754

rematerialized instructions.
Avoid remat'ing instructions whose defs have sub-register indices for now. It's just really really hard to get all the cases right.
llvm-svn: 75900
llvm-svn: 75423

and abort()/exit() -> llvm_report_error().
llvm-svn: 75363

as an (index,bool) pair. The bool flag records whether the kill is a
PHI kill or not. This code will be used to enable splitting of live
intervals containing PHI-kills.
A slight change to live interval weights introduced an extra spill
into lsr-code-insertion (outside the critical sections). The test
condition has been updated to reflect this.
llvm-svn: 75097
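As a minimal sketch of that representation (all names invented for illustration, not LLVM's actual data structure):

    #include <vector>

    // A kill recorded as an (index, bool) pair: the instruction index at which
    // the value dies, plus a flag marking PHI kills so that live-interval
    // splitting can treat them specially.
    struct KillPoint {
      unsigned Index;  // instruction index of the kill
      bool IsPHIKill;  // true if the kill comes from a PHI join
    };

    // Each value number would carry a list of such kill points.
    using KillList = std::vector<KillPoint>;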
llvm-svn: 73634

not utilizing registers at all. The fundamental problem is that linear scan's backtracking can end up freeing more than one allocated register. However, reloads and restores might be folded into uses / defs, and freed registers might not be used at all.
VirtRegMap keeps track of allocations, so it knows what's not used. As a horrible hack, the stack coloring pass can color spill slots with *free* registers. That is, it replaces reloads and spills with copies from and to the free register. It unfolds instructions that load and store the spill slot and replaces them with register-using variants.
Not yet enabled. This is part 1. More coming.
llvm-svn: 70787
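A toy model of the rewrite described above, with invented types and register numbers, showing how accesses to a spill slot would become copies through a register the allocator left free (the real pass works on MachineInstrs and VirtRegMap; nothing here is the actual implementation):

    #include <cstdio>
    #include <vector>

    // "Color" one spill slot with a completely free register, turning stores
    // and loads of that slot into plain register copies.
    enum OpKind { Spill /* reg -> slot */, Reload /* slot -> reg */ };

    struct SlotAccess {
      OpKind Kind;
      int Slot; // stack slot being written or read
      int Reg;  // register holding (or receiving) the value
    };

    void colorSlotWithFreeReg(const std::vector<SlotAccess> &Ops, int Slot, int FreeReg) {
      for (const SlotAccess &Op : Ops) {
        if (Op.Slot != Slot)
          continue;
        if (Op.Kind == Spill)
          std::printf("COPY r%d <- r%d   ; was: store r%d to slot%d\n",
                      FreeReg, Op.Reg, Op.Reg, Slot);
        else
          std::printf("COPY r%d <- r%d   ; was: load r%d from slot%d\n",
                      Op.Reg, FreeReg, Op.Reg, Slot);
      }
    }

    int main() {
      std::vector<SlotAccess> Ops = {{Spill, /*Slot=*/0, /*Reg=*/5},
                                     {Reload, /*Slot=*/0, /*Reg=*/6}};
      colorSlotWithFreeReg(Ops, /*Slot=*/0, /*FreeReg=*/3);
    }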
register destinations that are tied to source operands. The
TargetInstrDescr::findTiedToSrcOperand method silently fails for inline
assembly. The existing MachineInstr::isRegReDefinedByTwoAddr was very
close to doing what is needed, so this revision makes a few changes to
that method and also renames it to isRegTiedToUseOperand (for consistency
with the very similar isRegTiedToDefOperand and because it handles both
two-address instructions and inline assembly with tied registers).
llvm-svn: 68714

were subtly wrong in obscure cases. Patch the testcase
to account for this change.
llvm-svn: 68093
useful with it at the moment, but it will in the future.
llvm-svn: 67012

llvm-svn: 66158

This fixes some subtle miscompilations.
llvm-svn: 66147

Update a testcase to check this.
llvm-svn: 66029

llvm-svn: 65121

safe to move an instruction which defines a value in the register class. Replace the pre-splitting-specific IgnoreRegisterClassBarriers with this new hook.
llvm-svn: 63936

between call frame setup/restore points. Unfortunately, this regresses
code size a bit, but at least it's correct now!
llvm-svn: 63837

direction.
Live interval reconstruction needs to account for this, and scour its maps to
prevent dangling references.
llvm-svn: 63558
llvm-svn: 63536

llvm-svn: 63492

different set iteration order for the reg_iterator.
llvm-svn: 63490

don't try to insert loads/stores between call frame setup and the actual call.
This fixes the last known failure for the pre-alloc-splitter.
llvm-svn: 63339
and an iterator invalidation issue.
FreeBench/pifft no longer miscompiles with these fixes!
llvm-svn: 63293

llvm-svn: 63276

vast majority of code size regressions introduced by pre-alloc-splitting.
llvm-svn: 63274

llvm-svn: 63091

llvm-svn: 63049

llvm-svn: 63041

llvm-svn: 63040

llvm-svn: 63026

markers, and ended up foiling the interval reconstruction.
This allows us to turn on reconstruction in the pre alloc splitter, which
fixes a number of miscompilations.
llvm-svn: 63025
llvm-svn: 63021

llvm-svn: 62917

restores that are just going to be re-spilled again.
This also helps performance. Pre-alloc-splitting now seems to be an overall win on SPEC.
llvm-svn: 62834

llvm-svn: 62821

in the pre alloc splitter.
llvm-svn: 62678

llvm-svn: 62639

sub-register indices as well.
llvm-svn: 62600
llvm-svn: 62073

complicated by two-address instructions. We need to keep track of things
we've processed AS USES independently of whether we've processed them as defs.
This fixes all known miscompilations when reconstruction is turned on.
llvm-svn: 61802
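The bookkeeping described above can be pictured as two independent "visited" sets, one per role; the sketch below uses invented names and is not the actual reconstruction code:

    #include <set>

    // With two-address instructions the same register can be both a use and a
    // def of one instruction, so "already handled" must be tracked per role.
    struct ReconstructionState {
      std::set<unsigned> ProcessedAsUse;
      std::set<unsigned> ProcessedAsDef;

      // Each returns true the first time a register is seen in that role.
      bool markUse(unsigned Reg) { return ProcessedAsUse.insert(Reg).second; }
      bool markDef(unsigned Reg) { return ProcessedAsDef.insert(Reg).second; }
    };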
problem, rather than fixing it. The problem has now been fixed the right way.
llvm-svn: 61723

llvm-svn: 61514