...
ownership of the intervals.
llvm-svn: 15155
Interval. This generalizes the isDefinedOnce mechanism that we used before
to help us coalesce ranges that overlap. As part of this, every logical
range with a different value is assigned a different number in the interval.
For example, for code that looks like this:
    0    X = ...
    4    X += ...
         ...
    N    = X
We now generate a live interval that contains two ranges: [2,6:0),[6,?:1),
reflecting the fact that there are two different values in the range at
different positions in the code.
Currently we are not using this information at all, so this just slows down
liveintervals. In the future, this will change.
Note that this change also substantially refactors the joinIntervalsInMachineBB
method to merge the cases for virt-virt and phys-virt joining into a single
case, adds comments, and makes the code a bit easier to follow.
llvm-svn: 15154
make overlapsAliases take pointers instead of references
fix indentation
llvm-svn: 15153
* Fix comment typo
* add dump() methods
* add a few new methods like getLiveRangeContaining, removeRange & joinable
(which is currently the same as overlaps)
* Remove the unused operator==
Bigger change:
* In LiveInterval, instead of using a boolean isDefinedOnce to keep track of
if there are > 1 definitions in a particular interval, keep a counter,
NumValues to keep track of exactly how many there are.
* In LiveRange, add a new ValId element to indicate which of the numbered
values each LiveRange belongs to. We now no longer merge LiveRanges if
they are of differing value IDs, even if they are neighbors.
llvm-svn: 15152
* Inline some functions
* Eliminate some comparisons from the release build
This is good for another .3 seconds on gcc.
llvm-svn: 15144
want to insert a new range into the middle of the vector, then delete ranges
one at a time next to the inserted one as they are merged.
Instead, if the inserted interval overlaps, just start merging. The only time
we insert into the middle of the vector is when we don't overlap at all. Also
delete blocks of live ranges if we overlap with many of them.
This patch speeds up joining by .7 seconds on a large testcase, but more
importantly gets all of the range adding code into addRangeFrom.
llvm-svn: 15141
comparisons, reducing linscan by another .1 seconds :)
llvm-svn: 15139
llvm-svn: 15138
llvm-svn: 15137
gives a very modest speedup of .3 seconds compiling 176.gcc (out of 20s).
llvm-svn: 15136
llvm-svn: 15135
will soon be renamed) into their own file. The new file should not emit
DEBUG output or have other side effects. The LiveInterval class also now
doesn't know whether its working on registers or some other thing.
In the future we will want to use the LiveInterval class and friends to do
stack packing. In addition to a code simplification, this will allow us to
do it more easily.
llvm-svn: 15134
Use an explicit LiveRange class to represent ranges instead of an std::pair.
This is a minor cleanup, but is really intended to make a future patch simpler
and less invasive.
Alkis, could you please take a look at LiveInterval::liveAt? I suspect that
you can add an operator<(unsigned) to LiveRange, allowing us to speed up the
upper_bound call by quite a bit (this would also apply to other callers of
upper/lower_bound). I would do it myself, but I still don't understand that
crazy liveAt function, despite the comment. :)
Basically I would like to see this:
    LiveRange dummy(index, index+1);
    Ranges::const_iterator r = std::upper_bound(ranges.begin(),
                                                ranges.end(),
                                                dummy);
Turn into:
    Ranges::const_iterator r = std::upper_bound(ranges.begin(),
                                                ranges.end(),
                                                index);
llvm-svn: 15130
the live intervals for some registers.
llvm-svn: 15125
interfere. Because these intervals have a single definition, and one of them
is a copy instruction, they are always safe to merge even if their lifetimes
interfere. This slightly reduces the amount of spill code, for example on
252.eon, from:
12837 spiller - Number of loads added
7604 spiller - Number of stores added
5842 spiller - Number of register spills
18155 liveintervals - Number of identity moves eliminated after coalescing
to:
12754 spiller - Number of loads added
7585 spiller - Number of stores added
5803 spiller - Number of register spills
18262 liveintervals - Number of identity moves eliminated after coalescing
The much much bigger win would be to merge intervals with multiple definitions
(aka phi nodes) but this is not that day.
llvm-svn: 15124
llvm-svn: 15118
llvm-svn: 15115
llvm-svn: 15114
llvm-svn: 15111
llvm-svn: 15108
llvm-svn: 15107
intervals need not be sorted anymore. Removing this redundant step
improves LiveIntervals running time by 5% on 176.gcc.
llvm-svn: 15106
llvm-svn: 15105
compilation of gcc:
* Use vectors instead of lists for the intervals sets
* Use a heap for the unhandled set to keep intervals always sorted and
makes insertions back to the heap very fast (compared to scanning a
list)
llvm-svn: 15103
llvm-svn: 15098
the end will reduce erase() runtimes.
llvm-svn: 15093
fortunately, they are easy to handle if we know about them. This patch fixes
some serious pessimization of code produced by the linscan register allocator.
llvm-svn: 15092
llvm-svn: 15091
"Support/Debug.h".
llvm-svn: 15089
llvm-svn: 15078
llvm-svn: 15073
compile time for 176.gcc from 5.6 secs to 4.7 secs.
llvm-svn: 15072
llvm-svn: 15069
llvm-svn: 15068
llvm-svn: 15067
stack slots. This is in preparation for the iterative linear scan.
llvm-svn: 15032
llvm-svn: 15031
llvm-svn: 15011
llvm-svn: 15005
is a simple change, but seems to improve code a little. For example, on
256.bzip2, we went from 75.0s -> 73.33s (2% speedup).
llvm-svn: 15004
llvm-svn: 15003
ask instructions for their parent.
llvm-svn: 14998
llvm-svn: 14997
llvm-svn: 14996
* vreg <-> vreg joining now works, enable it unconditionally when joining
is enabled (which is the default).
* Fix a serious pessimization of spill code where we were saying that a
spilled DEF operand was live into the subsequent instruction. This allows
for substantially better code when spilling starts to happen.
llvm-svn: 14993
order, causing the inactive list in the linearscan list to get unsorted, which
basically fuxored everything up severely.
This seems to fix the joiner, so with more testing I will enable it by default.
llvm-svn: 14992
llvm-svn: 14991
is sorted. This is not the case currently, which is causing no end of
problems.
llvm-svn: 14990
Heavily refactor handleVirtualRegisterDef, adding comments and making it more
efficient. It is also much easier to follow and convince oneself that it is
correct :)
Add -debug output to the joiner, showing the result of joining the intervals.
llvm-svn: 14989
remove map that is not needed
llvm-svn: 14988