Commit message | Author | Age | Files | Lines

llvm-svn: 42966

so, and clean up the checks by putting them in an inline function.
llvm-svn: 42965

memory for the significand once up-front. Also ignore insignificant
trailing zeroes; this saves unnecessary multiplications later.
llvm-svn: 42964

llvm-svn: 42963

llvm-svn: 42962

the source register will be coalesced to the super-register of the LHS. Properly
merge the live ranges of the coalesced interval that came from the original
source interval into the live interval of the super-register.
llvm-svn: 42961
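The merge described above can be sketched as plain interval arithmetic. This is an illustrative model only; the tuples and function below are invented for the sketch and are not LLVM's LiveInterval API.

```python
def merge_live_ranges(super_ranges, src_ranges):
    """Fold the source interval's live ranges into the super-register's
    interval, coalescing any ranges that touch or overlap.
    Ranges are illustrative (start, end) index pairs, not LLVM types."""
    merged = sorted(super_ranges + src_ranges)
    out = []
    for start, end in merged:
        if out and start <= out[-1][1]:
            # Overlaps (or abuts) the previous range: extend it.
            out[-1] = (out[-1][0], max(out[-1][1], end))
        else:
            out.append((start, end))
    return out
```

For example, merging a source range that bridges two of the super-register's ranges yields one combined range covering all three.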

llvm-svn: 42960

a problem when asserts are on). From vecLib.
llvm-svn: 42959

long double.
llvm-svn: 42958

llvm-svn: 42956

trampolines, rather than with nested functions themselves.
llvm-svn: 42955

values and propagate demanded bits through them in simple cases.
This allows this code:

    void foo(char *P) {
      strcpy(P, "abc");
    }

to compile to:

    _foo:
        ldrb r3, [r1]
        ldrb r2, [r1, #+1]
        ldrb r12, [r1, #+2]!
        ldrb r1, [r1, #+1]
        strb r1, [r0, #+3]
        strb r2, [r0, #+1]
        strb r12, [r0, #+2]
        strb r3, [r0]
        bx lr

instead of:

    _foo:
        ldrb r3, [r1, #+3]
        ldrb r2, [r1, #+2]
        orr r3, r2, r3, lsl #8
        ldrb r2, [r1, #+1]
        ldrb r1, [r1]
        orr r2, r1, r2, lsl #8
        orr r3, r2, r3, lsl #16
        strb r3, [r0]
        mov r2, r3, lsr #24
        strb r2, [r0, #+3]
        mov r2, r3, lsr #16
        strb r2, [r0, #+2]
        mov r3, r3, lsr #8
        strb r3, [r0, #+1]
        bx lr

testcase here: test/CodeGen/ARM/truncstore-dag-combine.ll
This also helps occasionally for X86 and other cases not involving
unaligned load/stores.
llvm-svn: 42954

llvm-svn: 42953

truncate and truncstore instructions, based on the
knowledge that they don't demand the top bits.
llvm-svn: 42952
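The demanded-bits idea can be illustrated with a small model (this is a sketch of the concept, not LLVM's SelectionDAG API): a truncating store to N bits demands only the low N bits of its operand, so masking the operand there cannot change the stored value, and any computation that only feeds the upper bits is dead.

```python
def truncstore(x: int, bits: int) -> int:
    # Model of an integer truncating store: only the low `bits` bits
    # of the value actually land in memory.
    return x & ((1 << bits) - 1)

def demanded_mask(bits: int) -> int:
    # Bits demanded from the operand of a truncate/truncstore to `bits` bits.
    return (1 << bits) - 1

# Masking the operand to the demanded bits leaves the stored byte unchanged,
# which is why upper-bit assembly work (shifts, orrs) can be eliminated.
x = 0x12345678
assert truncstore(x, 8) == truncstore(x & demanded_mask(8), 8)
```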

one half-ulps. This prevents an infinite loop in rare cases.
llvm-svn: 42950

llvm-svn: 42949

llvm-svn: 42948

Turn this:

    movswl %ax, %eax
    movl %eax, -36(%ebp)
    xorl %edi, -36(%ebp)

into:

    movswl %ax, %eax
    xorl %edi, %eax
    movl %eax, -36(%ebp)

by unfolding the load / store xorl into an xorl and a store when we know the
value in the spill slot is available in a register. This doesn't change the
number of instructions, but it reduces the number of times memory is accessed.
Also unfold some load folding instructions and reuse the value when a similar
situation presents itself.
llvm-svn: 42947
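A toy model of the unfolding step (the instruction tuples and names here are invented for illustration; this is not the X86 folding-table machinery): when an availability map says a spill slot's value already lives in a register, a read-modify-write memory instruction is rewritten into a register op plus a store.

```python
def unfold_mem_ops(insts, avail):
    """Rewrite read-modify-write memory instructions into a register op
    followed by a store whenever `avail` says the slot's value is already
    held in a register.

    insts: list of (opcode, destination_slot, source) tuples.
    avail: maps a spill slot to the register currently holding its value.
    """
    out = []
    for inst in insts:
        op, dst, src = inst
        if op == "xor-mem" and dst in avail:
            reg = avail[dst]
            out.append(("xor-reg", reg, src))  # operate on the register copy
            out.append(("store", dst, reg))    # write the result back to the slot
        else:
            out.append(inst)
    return out
```

Applied to the example above, `xorl %edi, -36(%ebp)` with `-36(%ebp)` available in `%eax` becomes `xorl %edi, %eax` plus `movl %eax, -36(%ebp)`: the same instruction count, one fewer memory access.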

register used by the unfolded instructions. The user can also specify whether to
unfold the load, the store, or both.
llvm-svn: 42946

llvm-svn: 42945

llvm-svn: 42935

for fastcc from X86CallingConv.td. This means that nested functions
are not supported for calling convention 'fastcc'.
llvm-svn: 42934

that includes the string "st". This probably fixes the regression on
Darwin.
llvm-svn: 42932

Do not filter memmove.
llvm-svn: 42930

Thanks to Török Edvin for helping to track this down.
llvm-svn: 42927

longer be created for fastcc functions.
llvm-svn: 42925

llvm-svn: 42924

llvm-svn: 42922

llvm-svn: 42921

llvm-svn: 42920

llvm-svn: 42919

pointer correctly.
llvm-svn: 42918

llvm-svn: 42916

llvm-svn: 42913

from user input strings.
Such conversions are more intricate and subtle than they may appear;
it is unlikely I have got it completely right the first time. I would
appreciate being informed of any bugs and incorrect roundings you
might discover.
llvm-svn: 42912
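One way to see why such conversions are subtle (an illustrative Python sketch, unrelated to APFloat's actual algorithm): accumulating decimal digits with floating-point arithmetic rounds at every step, while a correct conversion must round exactly once.

```python
def naive_parse(s: str) -> float:
    """Digit-by-digit decimal->binary conversion that rounds after
    every intermediate operation; can be off by an ulp or more."""
    int_part, _, frac_part = s.partition(".")
    value = 0.0
    for d in int_part:
        value = value * 10.0 + int(d)
    scale = 0.1
    for d in frac_part:
        value += int(d) * scale  # each step can introduce rounding error
        scale *= 0.1
    return value

# Python's float() is correctly rounded, which is the behavior a careful
# decimal->binary conversion aims for.
assert naive_parse("0.3") != float("0.3")  # naive result is one ulp high
assert naive_parse("0.5") == float("0.5")  # exactly representable cases agree
```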

llvm-svn: 42911

it. Needed for dec->bin conversions.
llvm-svn: 42910

llvm-svn: 42909

function symbol name instead of a codegen-assigned function
number.
Thanks Evan! :-)
llvm-svn: 42908

llvm-svn: 42907

is a scalar integer.
llvm-svn: 42906

llvm-svn: 42905

llvm-svn: 42904

llvm-svn: 42903

llvm-svn: 42901

llvm-svn: 42900

(almost) a register copy. However, it always coalesced to the register of the
RHS (the super-register). All uses of the result of an EXTRACT_SUBREG are
sub-register uses, which adds subtle complications to load folding, spiller
rewrite, etc.
llvm-svn: 42899

llvm-svn: 42898

llvm-svn: 42897

llvm-svn: 42896