llvm-svn: 127807
llvm-svn: 127779
The register allocator needs to adjust its live interval unions when that happens.
llvm-svn: 127774
llvm-svn: 127773
register number.
The live range of a virtual register may change, which invalidates the cached
interference information.
llvm-svn: 127772
llvm-svn: 127771
rather than an int. Thankfully, this only causes LLVM to miss optimizations, not
generate incorrect code.
This just fixes the zext at the return. We still insert an i32 ZextAssert when
reading a function's arguments, but it is followed by a truncate and another i8
ZextAssert, so it is not optimized.
llvm-svn: 127766
llvm-svn: 127764
plus the test where it used to break.", which broke Clang self-host of a
Debug+Asserts compiler on OS X.
llvm-svn: 127763
where it used to break.
llvm-svn: 127757
defs.
After live range splitting, an original value may be available in multiple
registers. Tracing back through the registers containing the same value, find
the best place to insert a spill, determine if the value has already been
spilled, or discover a reaching def that may be rematerialized.
This is only the analysis part. The information is not used for anything yet.
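A minimal sketch of the traceback with illustrative types (not LLVM's real
data structures): follow the chain of copies from a register back toward the
original def, stopping at the first location where the value was already
spilled or where the def could be rematerialized.

    // Illustrative stand-ins; the names here are hypothetical.
    struct ValueLoc {
      ValueLoc *CopiedFrom = nullptr; // Earlier register holding the same value.
      bool SpilledHere = false;       // The value is already on the stack here.
      bool Rematerializable = false;  // The def is simple enough to recompute.
    };

    enum class SpillAction { ReuseExistingSpill, Rematerialize, InsertSpill };

    // Trace back through registers containing the same value and decide
    // how to make the value available in memory.
    SpillAction analyzeSpill(ValueLoc *Loc) {
      for (ValueLoc *V = Loc; V; V = V->CopiedFrom) {
        if (V->SpilledHere)
          return SpillAction::ReuseExistingSpill; // No second spill needed.
        if (V->Rematerializable)
          return SpillAction::Rematerialize; // Recompute instead of reloading.
      }
      return SpillAction::InsertSpill; // Fall back to spilling at Loc.
    }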
llvm-svn: 127698
llvm-svn: 127697
  v2 = bitcast v1
  ...
  v3 = bitcast v2
  ...
     = v3
=>
  v2 = bitcast v1
  ...
     = v1
if v1 and v3 are in the same register class.
Bitcasts between i32 and fp (and others) are often not no-ops since they
are in different register classes. These bitcast instructions are often
left behind because they are in different basic blocks and cannot be
eliminated by DAG combine.
rdar://9104514
llvm-svn: 127668
zext(undef) = 0, because the top bits of the result are guaranteed to be zero,
so the result cannot be a completely arbitrary value.
llvm-svn: 127649
and then go kablooie. The problem was that the PHI nodes were tracked anew on
each recursive call into this function, when they didn't need to be. Because
the recursion didn't know that a PHINode had been visited before, it would go
ahead and call itself.
There is a testcase, but unfortunately it's too big to add. This problem will go
away with the EH rewrite.
<rdar://problem/8856298>
llvm-svn: 127640
Use the opportunity to get rid of the trailing underscore variable names.
llvm-svn: 127618
Remove the unused reserved_ bit vector; no functional change intended.
This doesn't break 'svn blame'; this file really is all my fault.
llvm-svn: 127607
llvm-svn: 127600
llvm-svn: 127598
may be reused.
Use the virtual register number as a cache tag instead. They are not reused.
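A minimal sketch of the tagging scheme with illustrative types (not LLVM's
real cache): since virtual register numbers are handed out once and never
recycled, a mismatched tag reliably identifies a stale entry.

    struct CachedQuery {
      unsigned TagVirtReg = 0; // 0 is treated as "no entry" in this sketch.
      bool Interference = false;

      // Valid only if the tag matches; a key that could be reused for a
      // different object would not give this guarantee.
      bool lookup(unsigned VirtReg, bool &Result) const {
        if (TagVirtReg != VirtReg)
          return false; // Entry belongs to a different register: recompute.
        Result = Interference;
        return true;
      }

      void update(unsigned VirtReg, bool Result) {
        TagVirtReg = VirtReg;
        Interference = Result;
      }
    };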
llvm-svn: 127561
This allows the allocator to free any resources used by the virtual register,
including physical register assignments.
llvm-svn: 127560
llvm-gcc-i386-linux-selfhost and llvm-x86_64-linux-checks buildbots.
The original log entry:
Remove optimization emitting a reference insted of label difference, since
it can create more relocations. Removed isBaseAddressKnownZero method,
because it is no longer used.
llvm-svn: 127540
llvm-svn: 127530
Live range splitting can create a number of small live ranges containing only a
single real use. Spill these small live ranges along with the large range they
are connected to by copies. This enables memory operand folding and maximizes
the spill-to-fill distance.
Work in progress with known bugs.
llvm-svn: 127529
There are too many compatibility problems with using mixed types in
std::upper_bound, and I don't want to spend 110 lines of boilerplate setting up
a call to a 10-line function. Binary search is not /that/ hard to implement
correctly.
I tried terminating the binary search with a linear search, but that actually
made the algorithm slower, contrary to my expectation. Most live intervals have
fewer than 4 segments. The early test against endIndex() does pay, and this
version is 25% faster than plain std::upper_bound().
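A self-contained sketch of such a hand-rolled search, with illustrative types
(not LLVM's); the early test against the last segment's end plays the role of
the endIndex() check:

    #include <cstddef>
    #include <vector>

    struct SlotIndex {
      unsigned V;
      bool operator<(SlotIndex O) const { return V < O.V; }
    };
    struct Segment { SlotIndex Start, End; };

    // Find the first segment whose End is greater than Pos, or null if
    // Pos is at or beyond the last segment's end.
    const Segment *findSegment(const std::vector<Segment> &Segs, SlotIndex Pos) {
      if (Segs.empty() || !(Pos < Segs.back().End))
        return nullptr; // Common case: the query is past the last segment.
      std::size_t Lo = 0, Hi = Segs.size() - 1;
      while (Lo < Hi) {
        std::size_t Mid = Lo + (Hi - Lo) / 2;
        if (Pos < Segs[Mid].End)
          Hi = Mid; // Segs[Mid] is a candidate; look for an earlier one.
        else
          Lo = Mid + 1; // Segs[Mid].End <= Pos; the answer is to the right.
      }
      return &Segs[Lo];
    }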
llvm-svn: 127522
protector insertion not working correctly with unreachable code. Since that
revision was rolled out, this test doesn't actually fail before this fix.
llvm-svn: 127497
llvm-svn: 127496
it can create more relocations. Removed isBaseAddressKnownZero method, because it is no longer used.
llvm-svn: 127478
without being touched, so no longer needs to pollute the hidden-help text.
llvm-svn: 127468
The existing CompEnd predicate does not define a strict weak order as required
by the C++03 standard; therefore, its use as a predicate to std::upper_bound
is invalid. For a discussion of this issue, see
http://www.open-std.org/jtc1/sc22/wg21/docs/lwg-defects.html#270
This patch replaces the asymmetrical comparison with an iterator adaptor that
achieves the same effect while being strictly standard-conforming by ensuring
an apples-to-apples comparison.
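A sketch of the adaptor idea with illustrative types (not the actual LLVM
code): the iterator walks the segment array but dereferences to each
segment's end, so std::upper_bound compares a SlotIndex with a SlotIndex
under a single strict weak order.

    #include <algorithm>
    #include <cstddef>
    #include <iterator>
    #include <vector>

    struct SlotIndex {
      unsigned V;
      bool operator<(SlotIndex O) const { return V < O.V; }
    };
    struct Segment { SlotIndex Start, End; };

    // Presents a segment's End, giving upper_bound an apples-to-apples
    // SlotIndex comparison.
    struct SegEndIterator {
      using iterator_category = std::random_access_iterator_tag;
      using value_type = SlotIndex;
      using difference_type = std::ptrdiff_t;
      using pointer = const SlotIndex *;
      using reference = const SlotIndex &;

      const Segment *P = nullptr;

      reference operator*() const { return P->End; }
      SegEndIterator &operator++() { ++P; return *this; }
      SegEndIterator &operator+=(difference_type N) { P += N; return *this; }
      difference_type operator-(SegEndIterator O) const { return P - O.P; }
      bool operator==(SegEndIterator O) const { return P == O.P; }
      bool operator!=(SegEndIterator O) const { return P != O.P; }
    };

    // First segment whose End is greater than Pos; may point one past the
    // end of the array when no such segment exists.
    const Segment *upperBoundByEnd(const std::vector<Segment> &Segs,
                                   SlotIndex Pos) {
      SegEndIterator B{Segs.data()}, E{Segs.data() + Segs.size()};
      return std::upper_bound(B, E, Pos).P;
    }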
llvm-svn: 127462
the load is indexed. rdar://9117613.
llvm-svn: 127440
llvm-svn: 127398
This makes it possible to register delegates and get callbacks when the spiller
edits live ranges.
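A minimal observer-pattern sketch of such a delegate mechanism; the names
(LiveRangeEditDelegate, LRE_DidEditRange) are hypothetical stand-ins, not the
real interface.

    #include <vector>

    // Hypothetical delegate interface; the real callback names differ.
    struct LiveRangeEditDelegate {
      virtual ~LiveRangeEditDelegate() = default;
      // Invoked whenever a virtual register's live range is edited.
      virtual void LRE_DidEditRange(unsigned VirtReg) = 0;
    };

    struct LiveRangeEditor {
      std::vector<LiveRangeEditDelegate *> Delegates;

      void registerDelegate(LiveRangeEditDelegate *D) {
        Delegates.push_back(D);
      }

      void editRange(unsigned VirtReg) {
        // ... edit the live range, then notify every registered delegate.
        for (LiveRangeEditDelegate *D : Delegates)
          D->LRE_DidEditRange(VirtReg);
      }
    };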
llvm-svn: 127389
SmallVectors.
llvm-svn: 127388
llvm-svn: 127380
llvm-svn: 127376
flexible.
If it returns a register class that's different from the input, then that's the
register class used for cross-register class copies.
If it returns a register class that's the same as the input, then no cross-
register class copies are needed (normal copies would do).
If it returns null, then it's not at all possible to copy registers of the
specified register class.
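A caller-side sketch of that three-way contract, with illustrative types;
getCrossCopyRegClass here is a stand-in for the hook being described, not its
real signature.

    #include <cstdio>

    struct RegClass { const char *Name; };

    // Placeholder implementation: returning the input class means
    // ordinary copies suffice.
    const RegClass *getCrossCopyRegClass(const RegClass *RC) { return RC; }

    void planCopy(const RegClass *RC) {
      const RegClass *Cross = getCrossCopyRegClass(RC);
      if (!Cross)
        std::printf("%s: cannot be copied at all\n", RC->Name);
      else if (Cross != RC)
        std::printf("%s: copy through cross-class %s\n", RC->Name, Cross->Name);
      else
        std::printf("%s: ordinary same-class copies suffice\n", RC->Name);
    }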
llvm-svn: 127368
register.
The damage done by physreg coalescing only depends on the number of instructions
the extended physreg live range covers. This fixes PR9438.
The heuristic is still luck-based, and physreg coalescing really should be
disabled completely. We need a register allocator with better hinting support
before that is possible.
Convert a test to FileCheck and force spilling by inserting an extra call. The
previous spilling behavior was dependent on misguided physreg coalescing
decisions.
llvm-svn: 127351
This helps cases like 2008-07-19-movups-spills.ll, but doesn't have an obvious
impact on benchmarks.
llvm-svn: 127347
llvm-svn: 127335
llvm-svn: 127331
llvm-svn: 127311
This will be used for keeping register allocator data structures up to date
while LiveRangeEdit is trimming live intervals.
llvm-svn: 127300
llvm-svn: 127295
LiveRangeEdit::eliminateDeadDefs() will eventually be used by coalescing,
splitting, and spilling for dead code elimination. It can delete chains of dead
instructions as long as there are no dependency loops.
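A self-contained worklist sketch of this style of chained dead code
elimination, with illustrative types (not LLVM's). A dependency loop keeps
every instruction in the cycle "used", which is why such chains cannot be
deleted this way.

    #include <vector>

    struct Instr {
      std::vector<Instr *> Operands; // Defs this instruction reads.
      unsigned NumUses = 0;          // Instructions reading this def.
      bool HasSideEffects = false;
      bool Erased = false;
    };

    static bool isDead(const Instr *I) {
      return !I->Erased && I->NumUses == 0 && !I->HasSideEffects;
    }

    // Deleting one dead def may make the defs feeding it dead as well,
    // so keep chasing until the worklist drains.
    void eliminateDeadDefs(std::vector<Instr *> Dead) {
      while (!Dead.empty()) {
        Instr *I = Dead.back();
        Dead.pop_back();
        if (!isDead(I))
          continue;
        I->Erased = true;
        for (Instr *Op : I->Operands)
          if (--Op->NumUses == 0 && isDead(Op))
            Dead.push_back(Op);
      }
    }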
llvm-svn: 127287
the sorted array.
Patch by Olaf Krzikalla!
llvm-svn: 127264
with this before, since none of the register tracking or nightly tests
had unschedulable nodes.
This should probably be refixed with a special default Node that just
returns some "don't touch me" values.
Fixes PR9427
llvm-svn: 127263
MSVC 9."
The "fix" was meaningless.
This reverts commit r127245.
llvm-svn: 127260
llvm-svn: 127254
llvm-svn: 127245