Region splitting includes loop splitting as a subset, and it is more generic.
The splitting heuristics for variables that are live in more than one block are
now:
1. Try to create a region that covers multiple basic blocks.
2. Try to create a new live range for each block with multiple uses.
3. Spill.
Steps 2 and 3 are similar to what the standard spiller is doing.
llvm-svn: 123853
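To make the cascade concrete, below is a hypothetical, self-contained C++ sketch of the three-step fallback; every name in it (the LiveInterval stand-in, tryRegionSplit, tryBlockSplit, spill) is illustrative, not LLVM's actual interface of the time.

    // Hypothetical sketch only: stand-in types and stubbed-out steps.
    struct LiveInterval { /* a virtual register's live range */ };

    unsigned tryRegionSplit(LiveInterval &) { return 0; } // step 1 (stub)
    unsigned tryBlockSplit(LiveInterval &) { return 0; }  // step 2 (stub)
    void spill(LiveInterval &) {}                         // step 3 (stub)

    unsigned trySplitOrSpill(LiveInterval &VirtReg) {
      // 1. Try a region covering multiple basic blocks.
      if (unsigned PhysReg = tryRegionSplit(VirtReg))
        return PhysReg;
      // 2. Try a new live range for each block with multiple uses.
      if (unsigned PhysReg = tryBlockSplit(VirtReg))
        return PhysReg;
      // 3. Spill, as the standard spiller would.
      spill(VirtReg);
      return 0;
    }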
Analyze the live range's behavior entering and leaving basic blocks. Compute an
interference pattern for each allocation candidate, and use SpillPlacement to
find an optimal region where that register can be live.
This code is still not enabled.
llvm-svn: 123774
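As a rough illustration of the per-block analysis, this sketch records how a candidate live range behaves at each block boundary before spill placement runs; the enum and struct merely mirror the idea and are not the exact LLVM declarations.

    // Illustrative only: one record per basic block the live range touches.
    enum BorderConstraint {
      DontCare,  // value not live across this border
      PrefReg,   // prefers to be in a register here
      PrefSpill, // interference makes a register costly here
      MustSpill  // a register is impossible here
    };

    struct BlockConstraint {
      unsigned Number;        // basic block number
      BorderConstraint Entry; // constraint on the live-in value
      BorderConstraint Exit;  // constraint on the live-out value
    };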
llvm-svn: 123707
ranges, add legalizer support for nested calls. Necessary for ARM
byval support. Radar 7662569.
llvm-svn: 123704
llvm-svn: 123664
This shaves off 4 popcounts from the hacked 186.crafty source.
This is enabled even when a native popcount instruction is available. The
combined code is one operation longer but it should be faster nevertheless.
llvm-svn: 123621
multi-instruction sequences like calls. Many thanks to Jakob for
finding a testcase.
llvm-svn: 123559
llvm-svn: 123549
http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetParallel
In a silly microbenchmark on a 65 nm Core 2, this is 1.5x faster than the old
code in 32-bit mode and about 2x faster in 64-bit mode. It's also a lot shorter,
especially when counting a 64-bit population on a 32-bit target.
I hope this is fast enough to replace Kernighan-style counting loops even when
the input is rather sparse.
llvm-svn: 123547
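For reference, a minimal, self-contained sketch of the parallel (SWAR) bit count from the cited bithacks page; this shows the general technique, not the exact code the commit added.

    #include <cstdint>
    #include <iostream>

    // SWAR population count: sum bit pairs, then nibbles, then bytes, and
    // finally collect the byte sums with one multiply.
    static unsigned countPopulation32(uint32_t V) {
      V = V - ((V >> 1) & 0x55555555);                // 2-bit sums
      V = (V & 0x33333333) + ((V >> 2) & 0x33333333); // 4-bit sums
      V = (V + (V >> 4)) & 0x0F0F0F0F;                // 8-bit sums
      return (V * 0x01010101) >> 24;                  // total in top byte
    }

    // On a 32-bit target, a 64-bit count is two 32-bit counts; this is the
    // case where the commit notes the largest size win.
    static unsigned countPopulation64(uint64_t V) {
      return countPopulation32(uint32_t(V)) +
             countPopulation32(uint32_t(V >> 32));
    }

    int main() {
      std::cout << countPopulation64(0x0000FFFF0000FFFFULL) << '\n'; // 32
    }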
llvm-svn: 123491
| |
comments.
llvm-svn: 123479
| |
description emission. Currently all the backends use table-based stuff.
llvm-svn: 123476
llvm-svn: 123474
llvm-svn: 123473
disabled in this checkin. Sorry for the large diffs due to
refactoring. New functionality is all guarded by EnableSchedCycles.
Scheduling the isel DAG is inherently imprecise, but we give it a best
effort:
- Added MayReduceRegPressure to allow stalled nodes in the queue only
if there is a register-pressure need.
- Added BUHasStall to allow checking for either dependence stalls due to
latency or resource stalls due to pipeline hazards.
- Added BUCompareLatency to encapsulate and standardize the heuristics
for minimizing stall cycles (vs. reducing register pressure).
- Modified the bottom-up heuristic (now in BUCompareLatency) to
prioritize nodes by their depth rather than height. As long as it
doesn't stall, height is irrelevant. Depth represents the critical
path to the DAG root.
- Added hybrid_ls_rr_sort::isReady to filter stalled nodes before
adding them to the available queue.
Related Cleanup: most of the register reduction routines do not need
to be templates.
llvm-svn: 123468
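A hypothetical sketch of the stall-then-depth ordering described in the list above; SUnit and its fields here are stand-ins, not the real llvm::SUnit.

    // Stand-in node: Depth is the longest path to the DAG root, Height the
    // longest path to a leaf.
    struct SUnit {
      unsigned Depth;
      unsigned Height;
    };

    // True if A should schedule before B. Stalling nodes (dependence
    // latency or pipeline hazard) are filtered first; among non-stalling
    // nodes, depth, the critical path to the root, decides, and height is
    // irrelevant.
    bool scheduleBefore(const SUnit &A, bool AStalls,
                        const SUnit &B, bool BStalls) {
      if (AStalls != BStalls)
        return !AStalls;
      return A.Depth > B.Depth;
    }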
This time let's rephrase to trick gcc-4.3 into not miscompiling.
llvm-svn: 123432
llvm-svn: 123423
they should go *before* the new instruction, not after it.
llvm-svn: 123420
Fix some callers to better deal with debug values.
llvm-svn: 123419
This approach also works when the terminator doesn't have a slot index. (Which
can happen??)
llvm-svn: 123413
llvm-svn: 123400
llvm-svn: 123399
happy.
llvm-svn: 123389
It will still return an iterator that points to the first terminator or end(),
but there may be DBG_VALUE instructions following the first terminator.
llvm-svn: 123384
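A self-contained sketch of that contract, with stand-in types rather than the real MachineBasicBlock API:

    #include <vector>

    // Stand-in instruction; DBG_VALUEs are not terminators but may now
    // appear after the first terminator in a block.
    struct MachineInstr {
      bool IsTerminator;
      bool IsDebugValue;
    };

    using Iter = std::vector<MachineInstr>::iterator;

    // Returns the first terminator, or end() if there is none. Callers must
    // tolerate DBG_VALUE instructions after the returned instruction.
    Iter getFirstTerminator(std::vector<MachineInstr> &Block) {
      for (Iter I = Block.begin(), E = Block.end(); I != E; ++I)
        if (I->IsTerminator)
          return I;
      return Block.end();
    }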
llvm-svn: 123352
llvm-svn: 123351
further on the associated testcase before aborting.
llvm-svn: 123346
llvm-svn: 123342
after all.
llvm-svn: 123339
llvm-svn: 123338
Make sure we don't crash in that case, but simply turn them into %noreg instead.
llvm-svn: 123335
It was leaving dangling pointers in the slot index maps.
llvm-svn: 123334
llvm-svn: 123333
The slot indexes must be monotonically increasing through the function.
llvm-svn: 123324
llvm-svn: 123322
llvm-svn: 123290
llvm-svn: 123282
For one, MachineBasicBlock::getFirstTerminator() doesn't understand what is
happening, and it also makes sense to have all control flow run through the
DBG_VALUE.
llvm-svn: 123277
This is not yet completely enabled.
llvm-svn: 123274
llvm-svn: 123202
There's an inherent tension in DAGCombine between assuming
that things will be put in canonical form, and the Depth
mechanism that disables transformations when recursion gets
too deep. It would not surprise me if there are a lot of little
bugs like this one waiting to be discovered. The mechanism
seems fragile and I'd suggest looking at it from a design viewpoint.
llvm-svn: 123191
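A minimal sketch of the Depth mechanism being criticized; the node type, limit, and names are illustrative, not DAGCombine's actual code.

    // Recursion threads a Depth counter and bails out past a fixed limit,
    // so callers cannot assume the result ever reached canonical form.
    struct Node { Node *Ops[2]; };

    constexpr unsigned MaxDepth = 6; // illustrative limit

    Node *simplify(Node *N, unsigned Depth) {
      if (Depth >= MaxDepth)
        return nullptr; // give up; N may be left non-canonical
      // ... attempt transformations, recursing with Depth + 1 on N->Ops ...
      return N;
    }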
and fixes here and there.
llvm-svn: 123170
rolled std::find.
llvm-svn: 123164
These functions no longer assert when passed 0, but simply return false instead.
No functional change intended.
llvm-svn: 123155
when no virtual registers have been allocated.
It was only used to resize IndexedMaps, so provide an IndexedMap::resize()
method such that
Map.grow(MRI.getLastVirtReg());
can be replaced with the simpler
Map.resize(MRI.getNumVirtRegs());
This works correctly when no virtuals are allocated, and it bypasses the to/from
index conversions.
llvm-svn: 123130
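A short usage sketch of the new pattern, assuming the IndexedMap and the accessors quoted above (LLVM headers and names of this vintage):

    #include "llvm/ADT/IndexedMap.h"
    #include "llvm/CodeGen/MachineRegisterInfo.h"
    using namespace llvm;

    // Size the map by the 0-based virtual register count instead of growing
    // to the last register number; correct even when no virtuals exist yet.
    void sizeVirtRegMap(MachineRegisterInfo &MRI) {
      IndexedMap<int, VirtReg2IndexFunctor> Map;
      Map.resize(MRI.getNumVirtRegs());
    }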
llvm-svn: 123129
physical register numbers.
This makes the hack used in LiveInterval official, and lets LiveInterval be
oblivious of stack slots.
The isPhysicalRegister() and isVirtualRegister() predicates don't know about
this, so when a variable may contain a stack slot, isStackSlot() should always
be tested first.
llvm-svn: 123128
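A sketch of the required test order, assuming the static TargetRegisterInfo predicates of this era; the surrounding function is illustrative.

    #include "llvm/Target/TargetRegisterInfo.h"
    using namespace llvm;

    // Stack slots share the number space with registers, and the register
    // predicates don't know about them, so test isStackSlot() first.
    void classify(unsigned Reg) {
      if (TargetRegisterInfo::isStackSlot(Reg)) {
        // stack slot
      } else if (TargetRegisterInfo::isVirtualRegister(Reg)) {
        // virtual register
      } else if (TargetRegisterInfo::isPhysicalRegister(Reg)) {
        // physical register
      }
    }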
llvm-svn: 123123
llvm-svn: 123115
llvm-svn: 123114