Commit message | Author | Age | Files | Lines

llvm-svn: 44233

llvm-svn: 44204

When a live interval is being spilled, rather than creating short, non-spillable
intervals for every def / use, split the interval at BB boundaries. That is, for
every BB where the live interval is defined or used, create a new interval that
covers all the defs and uses in that BB.
This is designed to eliminate one common problem: multiple reloads of the same
value in a single basic block. Note that it does *not* decrease the number of
spills, since no copies are inserted; the split intervals remain *connected*
through spills and reloads (or rematerialization). A newly created interval can
be spilled again; in that case, since it does not span multiple basic blocks,
it is spilled in the usual manner, but it can reuse the same stack slot as the
previously split interval.
This is currently controlled by -split-intervals-at-bb.
llvm-svn: 44198
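
To make the shape of the transformation concrete, here is a minimal C++ sketch.
The UsePoint/SubInterval types and the splitAtBBBoundaries helper are invented
for illustration; they are not the actual LiveIntervals or spiller interfaces.

// Sketch only -- not the LLVM spiller. For every basic block that defines or
// uses the spilled value, build one interval covering its defs/uses in that
// block; all pieces share the same stack slot.
#include <algorithm>
#include <map>
#include <utility>
#include <vector>

struct UsePoint {
  int BBNumber; // basic block containing the def or use
  int Index;    // instruction index within the function
};

struct SubInterval {
  int BBNumber;   // the split interval lives entirely inside this block
  int Start, End; // covers every def and use of the value in the block
  int StackSlot;  // shared slot connecting the pieces via spills/reloads
};

std::vector<SubInterval> splitAtBBBoundaries(const std::vector<UsePoint> &Uses,
                                             int StackSlot) {
  std::map<int, std::pair<int, int>> Range; // BB number -> (first, last)
  for (const UsePoint &U : Uses) {
    auto It = Range.find(U.BBNumber);
    if (It == Range.end())
      Range.emplace(U.BBNumber, std::make_pair(U.Index, U.Index));
    else {
      It->second.first = std::min(It->second.first, U.Index);
      It->second.second = std::max(It->second.second, U.Index);
    }
  }

  std::vector<SubInterval> Pieces;
  for (const auto &R : Range)
    Pieces.push_back({R.first, R.second.first, R.second.second, StackSlot});
  return Pieces; // each piece may be spilled again, back to the same slot
}

Each piece is local to one block, so a later spill of a piece falls back to the
usual per-use spilling while still reusing the shared slot.
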
Codegen bits and llvm-gcc support will follow.
llvm-svn: 44182

llvm-svn: 44181

llvm-svn: 44167

llvm-svn: 44166

llvm-svn: 44154

llvm-svn: 44153

applied
to all targets uses GOT-relative offsets for PIC (Alpha?)
llvm-svn: 44108
MachineOperand auxInfo. The previous clunky implementation used an external map
to track sub-register uses. That worked because the register allocator used
a new virtual register for each spilled use. With interval splitting (coming
soon), we may have multiple uses of the same register, some of which use
different sub-registers than others. It's too fragile to constantly update
the information.
llvm-svn: 44104
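
For illustration, a sketch of the design difference (OperandSketch and
InstrSketch are made-up stand-ins, not the real MachineOperand): the
sub-register index lives on the operand itself, so several operands of the
same register can each carry their own index with no side table to keep in
sync.

#include <vector>

struct OperandSketch {
  unsigned Reg;    // virtual or physical register
  unsigned SubReg; // 0 = full register, otherwise a sub-register index
  bool IsDef;      // def or use
};

struct InstrSketch {
  std::vector<OperandSketch> Operands;
};

// Two uses of the same Reg in one instruction can simply carry different
// SubReg values; nothing external has to be updated when uses are rewritten.
unsigned subRegOf(const InstrSketch &MI, unsigned OpNo) {
  return MI.Operands[OpNo].SubReg;
}
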
interference.
llvm-svn: 44064

llvm-svn: 44063

to use different mappings for EH and debug info;
no functional change yet.
Fix warning in X86CodeEmitter.
llvm-svn: 44056
adjustment fields, and an optional flag. If there is a "dynamic_stackalloc" in
the code, make sure that it's bracketed by CALLSEQ_START and CALLSEQ_END. If
not, there is the potential for the stack to be changed while it is being used
by another instruction (like a call).
This can only result in tears...
llvm-svn: 44037
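
A rough way to picture the property described above, as a sketch over a flat
opcode list rather than the actual SelectionDAG nodes (the enum and the
function name are invented for illustration):

#include <vector>

enum class Op { CALLSEQ_START, CALLSEQ_END, DYNAMIC_STACKALLOC, OTHER };

// Flag any dynamic stack allocation that is not inside an open
// CALLSEQ_START / CALLSEQ_END region.
bool dynamicAllocasBracketed(const std::vector<Op> &Seq) {
  int Depth = 0; // number of CALLSEQ_START markers currently open
  for (Op O : Seq) {
    if (O == Op::CALLSEQ_START)
      ++Depth;
    else if (O == Op::CALLSEQ_END)
      --Depth;
    else if (O == Op::DYNAMIC_STACKALLOC && Depth == 0)
      return false; // unbracketed: the stack could be adjusted underneath it
  }
  return true;
}
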

llvm-svn: 44019

llvm-svn: 44010

to be a pass of its own. Instead, move it out into a helper method.
llvm-svn: 44002

llvm-svn: 43960

llvm-svn: 43944

apints on big-endian machines if the bitwidth is
not a multiple of 8. Introduce a new helper,
MVT::getStoreSizeInBits, and use it.
llvm-svn: 43934
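
Presumably the helper rounds the value size up to a whole number of bytes; a
standalone sketch of that arithmetic (this is not the actual MVT code):

// An integer value is stored in a whole number of bytes, so an i36 apint
// occupies 40 bits of memory, not 36.
unsigned storeSizeInBits(unsigned ValueSizeInBits) {
  unsigned StoreBytes = (ValueSizeInBits + 7) / 8; // round up to whole bytes
  return StoreBytes * 8;
}
// storeSizeInBits(1)  == 8
// storeSizeInBits(36) == 40
// storeSizeInBits(64) == 64
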

llvm-svn: 43933

Then:
call "L1$pb"
"L1$pb":
popl %eax
...
LBB1_1: # entry
imull $4, %ecx, %ecx
leal LJTI1_0-"L1$pb"(%eax), %edx
addl LJTI1_0-"L1$pb"(%ecx,%eax), %edx
jmpl *%edx
.align 2
.set L1_0_set_3,LBB1_3-LJTI1_0
.set L1_0_set_2,LBB1_2-LJTI1_0
.set L1_0_set_5,LBB1_5-LJTI1_0
.set L1_0_set_4,LBB1_4-LJTI1_0
LJTI1_0:
.long L1_0_set_3
.long L1_0_set_2
Now:
call "L1$pb"
"L1$pb":
popl %eax
...
LBB1_1: # entry
addl LJTI1_0-"L1$pb"(%eax,%ecx,4), %eax
jmpl *%eax
.align 2
.set L1_0_set_3,LBB1_3-"L1$pb"
.set L1_0_set_2,LBB1_2-"L1$pb"
.set L1_0_set_5,LBB1_5-"L1$pb"
.set L1_0_set_4,LBB1_4-"L1$pb"
LJTI1_0:
.long L1_0_set_3
.long L1_0_set_2
llvm-svn: 43924

llvm-svn: 43923

llvm-svn: 43922

llvm-svn: 43911

llvm-svn: 43910

is used, try to simplify it.
llvm-svn: 43888
was written by Fernando; cleanup and updating to TOT by me.
This still needs a bit of work, particularly to handle jump tables properly.
llvm-svn: 43885

llvm-svn: 43869

llvm-svn: 43866

llvm-svn: 43819

llvm-svn: 43805

llvm-svn: 43781

replaces other operands of the same register. Watch out for situations where
only some of the operands are sub-register uses.
llvm-svn: 43776

llvm-svn: 43764

llvm-svn: 43763

other uses. There was an overly restricted check that prevented some obvious
cases.
llvm-svn: 43762

llvm-svn: 43755

llvm-svn: 43754

llvm-svn: 43751

llvm-svn: 43744

Thanks for the suggestions, Bill :-)
llvm-svn: 43742
parameters. Rename ValueRefList to ParamList
in AsmParser, since its only use is for parameters.
llvm-svn: 43734
size for the field we get ABI padding automatically, so
no need to put it in again when we emit the field.
llvm-svn: 43720
register coalescer interface: RegisterCoalescing.
llvm-svn: 43714

llvm-svn: 43700

defined on the same instruction. This fixes PR1767.
llvm-svn: 43699

llvm-svn: 43692

should only affect x86 when using long double. Now
12/16 bytes are output for long double globals (the
exact amount depends on the alignment). This brings
globals in line with the rest of LLVM: the space
reserved for an object is now always the ABI size.
One tricky point is that only 10 bytes should be
output for long double if it is a field in a packed
struct, which is the reason for the additional
argument to EmitGlobalConstant.
llvm-svn: 43688
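
A back-of-the-envelope sketch of the size computation (the helper name is
invented; the constants assume the x86 80-bit long double):

unsigned emittedLongDoubleBytes(unsigned ABIAlignment, bool InPackedStruct) {
  const unsigned DataBytes = 10; // the x87 80-bit value itself
  if (InPackedStruct)
    return DataBytes;            // packed fields get no tail padding
  // Otherwise reserve the full ABI size: data rounded up to the alignment.
  return (DataBytes + ABIAlignment - 1) / ABIAlignment * ABIAlignment;
}
// emittedLongDoubleBytes(4, false)  == 12  (4-byte-aligned long double)
// emittedLongDoubleBytes(16, false) == 16  (16-byte-aligned long double)
// emittedLongDoubleBytes(4, true)   == 10  (field of a packed struct)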