Commit message | Author | Age | Files | Lines

llvm-svn: 11001
llvm-svn: 11000
moving it to the start of removeDeadNodes. This speeds up DSA on perlbmk
by 2s, from 41s.
llvm-svn: 10999
scalar
map. This saves 5s in the TD pass, from 22->17s on perlbmk
llvm-svn: 10998
iterate over
the globals directly. This doesn't save any substantial time, however, because the
globals graph only contains globals!
llvm-svn: 10997
function to find the globals, iterate over all of the globals directly. This
speeds the function up from 14s to 6.3s on perlbmk, reducing DSA time from
53->46s.
llvm-svn: 10996
timers by default
llvm-svn: 10993
This reduces the number of nodes allocated, then immediately merged and DNE'd
from 2193852 to 1298049. Unfortunately, this only speeds DSA up by ~1.5s (of
53s), because it's spending most of its time waddling through the scalar map :(
llvm-svn: 10992
Also, use RC::merge when possible, reducing the number of nodes allocated, then immediately merged away from 2985444 to 2193852 on perlbmk.
llvm-svn: 10991
You gotta love spurious
llvm-svn: 10990
llvm-svn: 10989
it to be off. If it looks like it's completely unnecessary after testing, I
will remove it completely (which is the hope).
* Callers of the DSNode "copy ctor" can now choose to not copy links.
* Make node collapsing not create a garbage node in some cases, avoiding a
memory allocation, and a subsequent DNE.
* When merging types, allow two functions of different types to be merged
without collapsing.
* Use DSNodeHandle::isNull more often instead of DSNodeHandle::getNode() == 0,
as it is much more efficient.
*** Implement the new, more efficient reachability cloner class
In addition to only cloning nodes that are reachable from interesting
roots, this also fixes the huge inefficiency we had where we cloned lots
of nodes, only to merge them away immediately after they were cloned.
Now we only actually allocate a node if there isn't one to merge it into.
* Eliminate the now-obsolete cloneReachable* and clonePartiallyInto methods
* Rewrite updateFromGlobalsGraph to use the reachability cloner
* Rewrite mergeInGraph to use the reachability cloner
* Disable the scalar map scanning code in removeTriviallyDeadNodes. In large
SCCs, this is extremely expensive. We need a better data structure for the
scalar map, because we really want to scan the unique node handles, not ALL
of the scalars.
* Remove the incorrect SANER_CODE_FOR_CHECKING_IF_ALL_REFERRERS_ARE_FROM_SCALARMAP code.
* Move the code for eliminating integer nodes from the trivially dead
eliminator to the dead node eliminator.
* removeDeadNodes no longer uses removeTriviallyDeadNodes, as it contains a
superset of the node removal power.
* Only futz around with the globals graph in removeDeadNodes if it is modified.
llvm-svn: 10987
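The reachability cloner described in this commit can be sketched as a recursive walk that allocates a destination node only on the first visit to a source node; a revisit merges into the node made earlier instead of allocating one that would immediately be merged away. This is a minimal sketch with hypothetical `Node`/`Graph` types; the real implementation works over `DSNode` and `DSNodeHandle`.

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <vector>

// Minimal stand-ins for DSA graph nodes: each node just has outgoing links.
struct Node {
  std::vector<Node *> Links;
};

struct Graph {
  std::vector<Node *> Nodes;
  Node *newNode() {
    Nodes.push_back(new Node());
    return Nodes.back();
  }
  ~Graph() {
    for (Node *N : Nodes)
      delete N;
  }
};

// Sketch of the reachability-cloner idea: starting from an interesting root,
// visit only reachable nodes, and allocate a destination node only on the
// first visit to each source node.
struct ReachabilityCloner {
  Graph &Dest;
  std::map<const Node *, Node *> NodeMap; // source node -> cloned node

  Node *clone(const Node *Src) {
    Node *&Slot = NodeMap[Src]; // std::map references stay valid
    if (Slot)
      return Slot;              // already cloned: merge, don't allocate
    Slot = Dest.newNode();      // allocate only on the first visit
    for (Node *L : Src->Links)  // then clone the reachable successors
      Slot->Links.push_back(clone(L));
    return Slot;
  }
};

// Tiny demo: Root -> {A, B}, plus one node unreachable from Root.
std::size_t demoClonedCount() {
  Graph Src, Dest;
  Node *Root = Src.newNode(), *A = Src.newNode(), *B = Src.newNode();
  Src.newNode(); // unreachable: never visited, never cloned
  Root->Links = {A, B};
  ReachabilityCloner RC{Dest, {}};
  RC.clone(Root);
  return Dest.Nodes.size(); // only Root, A and B were cloned
}
```

The key point is that `NodeMap` doubles as the visited set, so unreachable source nodes are never touched and no node is allocated just to be merged away.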
efficient in the case where a function calls into the same graph multiple times
(ie, it either contains multiple calls to the same function, or multiple calls
to functions in the same SCC graph)
llvm-svn: 10986
llvm-svn: 10985
llvm-svn: 10984
every file.
llvm-svn: 10976
when joining we need to check if we overlap with the second interval
or any of its aliases.
Also make joining intervals the default.
llvm-svn: 10973
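The alias check described here might look roughly like the following sketch, with hypothetical interval and alias-table types; real register aliases come from the target description (for example AX/AH/AL aliasing EAX on x86).

```cpp
#include <cassert>
#include <map>
#include <utility>
#include <vector>

// A live interval: a set of half-open [start, end) ranges for one register.
using Interval = std::vector<std::pair<int, int>>;

bool overlaps(const Interval &A, const Interval &B) {
  for (const auto &RA : A)
    for (const auto &RB : B)
      if (RA.first < RB.second && RB.first < RA.second)
        return true;
  return false;
}

// Joining interval A into register Phys is only safe when A overlaps neither
// Phys's own interval nor the interval of any register that aliases Phys.
bool safeToJoin(const Interval &A, int Phys,
                const std::map<int, Interval> &LiveIntervals,
                const std::map<int, std::vector<int>> &Aliases) {
  auto DisjointFrom = [&](int Reg) {
    auto It = LiveIntervals.find(Reg);
    return It == LiveIntervals.end() || !overlaps(A, It->second);
  };
  if (!DisjointFrom(Phys))
    return false;
  auto AI = Aliases.find(Phys);
  if (AI != Aliases.end())
    for (int R : AI->second) // e.g. AX, AH, AL when Phys is EAX
      if (!DisjointFrom(R))
        return false;
  return true;
}
```

Skipping the alias loop is exactly the bug this commit fixes: an interval could look disjoint from EAX while still colliding with a live use of AX.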
llvm-svn: 10972
PowerPCTargetMachine::addPassesToJITCompile() method, in favor of the
TargetJITInfo interface.
llvm-svn: 10971
cloneReachableSubgraph, though this support is currently disabled.
llvm-svn: 10970
out that the problem was actually the writer writing out a 'null' value
because it didn't normalize it. This fixes:
test/Regression/Assembler/2004-01-22-FloatNormalization.ll
llvm-svn: 10967
is a move between two registers, at least one of the registers is
virtual and the two live intervals do not overlap.
This results in about 40% reduction in intervals, 30% decrease in the
register allocators running time and a 20% increase in peephole
optimizations (mainly move eliminations).
The option can be enabled by passing -join-liveintervals where
appropriate.
llvm-svn: 10965
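The joining criteria this commit lists (a move between two registers, at least one of them virtual, and non-overlapping live intervals) can be sketched as below. The types are hypothetical, and `FirstVirtualReg` is an assumed numbering convention, not the actual LLVM constant.

```cpp
#include <cassert>
#include <utility>
#include <vector>

// A live interval: a set of half-open [start, end) ranges.
using Interval = std::vector<std::pair<int, int>>;

bool overlaps(const Interval &A, const Interval &B) {
  for (const auto &RA : A)
    for (const auto &RB : B)
      if (RA.first < RB.second && RB.first < RA.second)
        return true;
  return false;
}

// Assumed numbering convention: registers at or above this are virtual.
const int FirstVirtualReg = 1024;
bool isVirtual(int Reg) { return Reg >= FirstVirtualReg; }

// A register-to-register move instruction.
struct MoveInst {
  int Dst, Src;
};

// Join the two intervals of a move when (1) it is a reg-reg move, which the
// MoveInst type guarantees here, (2) at least one register is virtual, and
// (3) the two live intervals do not overlap.
bool canJoin(const MoveInst &MI, const Interval &DstI, const Interval &SrcI) {
  if (!isVirtual(MI.Dst) && !isVirtual(MI.Src))
    return false;               // two physical registers: leave them alone
  return !overlaps(DstI, SrcI); // an overlap would clobber a live value
}
```

When `canJoin` holds, the two registers can share one interval and the move disappears, which is where the reported 40% interval reduction comes from.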
with the current one.
llvm-svn: 10959
virtReg lives on the stack. Now a virtual register has an entry in the
virtual->physical map or the virtual->stack slot map but never in
both.
llvm-svn: 10958
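The invariant described here, that a virtual register has an entry in the virtual-to-physical map or the virtual-to-stack-slot map but never both, can be sketched like so (a hypothetical `VirtRegMap` shape, not the actual LLVM class):

```cpp
#include <cassert>
#include <map>

// Sketch of the invariant: a virtual register is mapped either to a physical
// register or to a stack slot, never to both.
struct VirtRegMap {
  std::map<int, int> Virt2Phys; // vreg -> physical register
  std::map<int, int> Virt2Slot; // vreg -> stack slot

  void assignPhys(int VReg, int PReg) {
    assert(!Virt2Slot.count(VReg) && "vreg already lives on the stack");
    Virt2Phys[VReg] = PReg;
  }
  void assignSlot(int VReg, int Slot) {
    Virt2Phys.erase(VReg); // spilling evicts any physical assignment
    Virt2Slot[VReg] = Slot;
  }
  bool hasPhys(int VReg) const { return Virt2Phys.count(VReg) != 0; }
  bool hasSlot(int VReg) const { return Virt2Slot.count(VReg) != 0; }
};
```

Keeping the maps mutually exclusive means "where does this vreg live?" has exactly one answer, so no code path can see a stale physical assignment for a spilled register.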
llvm-svn: 10957
llvm-svn: 10956
map was only used to implement a marginal GlobalsGraph optimization, and it
actually slows the analysis down (due to the overhead of keeping it), so just
eliminate it entirely.
llvm-svn: 10955
llvm-svn: 10954
llvm-svn: 10953
in terms of it.
Though clonePartiallyInto is not cloning partial graphs yet, this change
dramatically speeds up inlining of graphs with many scalars. For example,
this change speeds up the BU pass on 253.perlbmk from 69s to 36s, because
it avoids iteration over the scalar map, which can get pretty large.
llvm-svn: 10951
llvm-svn: 10948
that are still left in the lazy reader map.
llvm-svn: 10944
their implementation of book-keeping for which functions need to be materialized
and which don't.
llvm-svn: 10943
llvm-svn: 10938
llvm-svn: 10937
llvm-svn: 10931
Fix testcase test/Regression/Assembler/2004-01-20-MaxLongLong.llx
llvm-svn: 10928
llvm-svn: 10926
llvm-svn: 10925
llvm-svn: 10924
fact "profitable" to do so. This makes compactification "free" for small
programs (ie, it is completely disabled) and even helps large programs by
not having to encode pointless compactification planes.
On 176.gcc, this saves 50K from the bytecode file, which is, alas, only
a couple percent.
This concludes my head bashing against the bytecode format, at least for
now.
llvm-svn: 10922
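A "profitable" test of the kind this commit describes could, for illustration, compare the bytes saved on value references against the plane's own encoding overhead. This is a hypothetical cost model; the real bytecode format's accounting differs.

```cpp
#include <cassert>
#include <cstdint>

// Bytes needed to emit Val as a VBR-style quantity, 7 payload bits per byte.
unsigned vbrBytes(std::uint64_t Val) {
  unsigned Bytes = 1;
  while (Val >>= 7)
    ++Bytes;
  return Bytes;
}

// A compaction plane renumbers NumValues values so that the NumReferences
// uses of them can be written with small local indices instead of large
// global ones.  Emitting the plane is only profitable when the bytes saved
// on the references beat the plane's own encoding overhead.
bool planeIsProfitable(unsigned NumValues, unsigned NumReferences,
                       std::uint64_t AvgGlobalIndex) {
  int SavedPerRef =
      (int)vbrBytes(AvgGlobalIndex) - (int)vbrBytes(NumValues);
  if (SavedPerRef <= 0)
    return false; // local indices are no smaller: skip the plane
  std::uint64_t PlaneCost =
      2 + (std::uint64_t)NumValues * vbrBytes(AvgGlobalIndex);
  return (std::uint64_t)NumReferences * SavedPerRef > PlaneCost;
}
```

Under this model a small program fails the test and gets no plane at all, which matches the commit's observation that compactification becomes "free" for small programs.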
llvm-svn: 10920
intelligently.
llvm-svn: 10918
llvm-svn: 10917
This shrinks the bytecode file for 176.gcc by about 200K (10%), and 254.gap by
about 167K, a 25% reduction. There is still a lot of room for improvement in
the encoding of the compaction table.
llvm-svn: 10915
This shrinks the bytecode file for 176.gcc by about 200K (10%), and 254.gap by
about 167K, a 25% reduction. There is still a lot of room for improvement in
the encoding of the compaction table.
llvm-svn: 10914
the bytecode file for 176.gcc by about 200K (10%), and 254.gap by about 167K,
a 25% reduction. There is still a lot of room for improvement in the encoding
of the compaction table.
llvm-svn: 10913
Fix some problem cases where I was building the slot calculator in bytecode
writer mode instead of asmwriter mode.
llvm-svn: 10911
type planes. This saves about 5k on 176.gcc, and is needed for a subsequent
patch of mine I'm working on.
llvm-svn: 10908
llvm-svn: 10905