Commit log (collapsed view): commit message excerpts and llvm-svn revisions.

where both pointers have non-zero offsets.
llvm-svn: 41491
llvm-svn: 41409
over uses in DAGCombiner. Fix interfaces to work
with APFloats.
llvm-svn: 41407
llvm-svn: 41163
offsets. The SrcValueOffset values are the real offsets from the
SrcValue base pointers.
llvm-svn: 40534
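
A minimal C++ sketch of the overlap test such real offsets make possible (helper name and signature are illustrative, not LLVM's):

    #include <cassert>
    #include <cstdint>

    // Two accesses off the same SrcValue base, described by their real byte
    // offsets and sizes, can only alias if the ranges [off, off + size) overlap.
    static bool rangesMayOverlap(int64_t off1, uint64_t size1,
                                 int64_t off2, uint64_t size2) {
      return !(off1 + (int64_t)size1 <= off2 ||
               off2 + (int64_t)size2 <= off1);
    }

    int main() {
      assert(!rangesMayOverlap(0, 4, 4, 4));  // adjacent 4-byte fields: no alias
      assert(rangesMayOverlap(0, 8, 4, 4));   // an 8-byte access covers the 4 bytes at +4
      return 0;
    }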
feedback. This theoretically makes the common (scalar) case more efficient.
llvm-svn: 39823
Thanks to Lauro for spotting this!
llvm-svn: 38491
undef in either the left or right operand.
llvm-svn: 38489
simplifying loads and stores.
llvm-svn: 38473
DAGCombiner.cpp: In member function 'llvm::SDOperand<unnamed>::DAGCombiner::visitOR(llvm::SDNode*)':
DAGCombiner.cpp:1608: warning: passing negative value '-0x00000000000000001' for argument 1 to 'llvm::SDOperand llvm::SelectionDAG::getConstant(uint64_t, llvm::MVT::ValueType, bool)'
oiy.
llvm-svn: 38458
follow the rules for undef used in instcombine.
llvm-svn: 37851
visitFSUB to fold 0-B to -B in UnsafeFPMath mode. Also change visitFNEG
to use isNegatibleForFree/GetNegatedExpression instead of doing a subset
of the same thing manually.
This fixes test/CodeGen/X86/negative-sin.ll.
llvm-svn: 37842
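
A small C++ illustration of why the 0-B fold is gated on UnsafeFPMath (a sketch, not the compiler code): under strict IEEE 754 rules, 0-B and -B disagree on the sign of zero when B is +0.0.

    #include <cmath>
    #include <cstdio>

    int main() {
      double b = +0.0;
      double sub = 0.0 - b;  // IEEE 754: +0.0
      double neg = -b;       // IEEE 754: -0.0, so the fold changes the sign of zero
      std::printf("signbit(0.0 - b) = %d, signbit(-b) = %d\n",
                  (int)std::signbit(sub), (int)std::signbit(neg));
      return 0;
    }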
extended vector types. Remove the special SDNode opcodes used for pre-legalize
vector operations, and the special MVT::Vector type used with them. Adjust
lowering and legalize to work with the normal SDNode kinds instead, and to
use the normal MVT functions to work with vector types instead of using the
two special operands that the pre-legalize nodes held.
This allows pre-legalize and post-legalize DAGs, and the code that operates
on them, to be more consistent. Pre-legalize vector operators can be handled
more consistently with scalar operators. And, -view-dag-combine1-dags and
-view-legalize-dags now look prettier for vector code.
llvm-svn: 37719
TargetLowering to SelectionDAG so that they have more convenient
access to the current DAG, in preparation for the ValueType routines
being changed from standalone functions to members of SelectionDAG for
the pre-legalize vector type changes.
llvm-svn: 37704
(add (select cc, 0, c), x) -> (select cc, x, (add x, c))
(sub x, (select cc, 0, c)) -> (select cc, x, (sub x, c))
llvm-svn: 37685
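
A scalar C++ check of the identities these two folds rely on (illustrative only): when cc selects 0 the add/sub is a no-op, otherwise it applies c.

    #include <cassert>

    static int sel(bool cc, int a, int b) { return cc ? a : b; }

    int main() {
      for (int cc = 0; cc <= 1; ++cc)
        for (int x = -2; x <= 2; ++x)
          for (int c = -2; c <= 2; ++c) {
            // (add (select cc, 0, c), x) -> (select cc, x, (add x, c))
            assert(sel(cc, 0, c) + x == sel(cc, x, x + c));
            // (sub x, (select cc, 0, c)) -> (select cc, x, (sub x, c))
            assert(x - sel(cc, 0, c) == sel(cc, x, x - c));
          }
      return 0;
    }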
for needing the DAG node to print pre-legalize extended value types, and
to get better debug messages with target-specific nodes.
llvm-svn: 37656
llvm-svn: 37579
llvm-svn: 37330
value store is the same as the base pointer.
llvm-svn: 37318
llvm-svn: 37310
llvm-svn: 37233
llvm-svn: 37130
This fixes PR1423
llvm-svn: 37102
llvm-svn: 37094
llvm-svn: 37086
CodeGen/PowerPC/fneg.ll into:
_t4:
fmul f0, f3, f4
fmadd f1, f1, f2, f0
blr
instead of:
_t4:
fneg f0, f3
fmul f0, f0, f4
fmsub f1, f1, f2, f0
blr
llvm-svn: 37054
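
Why the two sequences compute the same value (a C++ sketch of the underlying identity, not the combiner code): floating-point negation is exact, so subtracting a negated product is the same as adding the product, which is what lets the fneg disappear and the fmsub become an fmadd.

    #include <cassert>

    int main() {
      // f1*f2 - ((-f3)*f4) == f1*f2 + f3*f4 for ordinary (non-NaN) inputs.
      double f1 = 1.5, f2 = -2.25, f3 = 3.0, f4 = 0.5;
      assert(f1 * f2 - ((-f3) * f4) == f1 * f2 + f3 * f4);
      return 0;
    }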
llvm-svn: 36962
llvm-svn: 36910
- (store (bitconvert v)) -> (store v) if resultant store does not require
higher alignment
- (bitconvert (load v)) -> (load (bitconvert*)v) if resultant load does not
require higher alignment
llvm-svn: 36908
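
A sketch of the legality condition both folds share (hypothetical helper, not LLVM's API): the bitconvert may be folded into the memory operation only if the new value type demands no more alignment than the original access already guarantees.

    #include <cassert>

    // Hypothetical helper, not LLVM's API: folding is safe only when the new
    // type's required alignment does not exceed the original access's alignment.
    static bool canFoldBitconvert(unsigned origAlign, unsigned newTypeAlign) {
      return newTypeAlign <= origAlign;
    }

    int main() {
      assert(canFoldBitconvert(16, 8));   // reusing a 16-byte-aligned access for an 8-byte type: ok
      assert(!canFoldBitconvert(4, 16));  // would demand stricter alignment: not folded
      return 0;
    }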
llvm-svn: 36716
llvm-svn: 36622
produce two results.)
* Do not touch volatile loads.
llvm-svn: 36604
llvm-svn: 36356
llvm-svn: 36309
llvm-svn: 36301
llvm-svn: 36245
single-use nodes, they will be dead soon. Make sure to remove them before
processing other nodes. This implements CodeGen/X86/shl_elim.ll
llvm-svn: 36244
a chance to hack on it. This compiles:
int baz(long long a) { return (short)(((int)(a >>24)) >> 9); }
into:
_baz:
slwi r2, r3, 8
srwi r2, r2, 9
extsh r3, r2
blr
instead of:
_baz:
srwi r2, r4, 24
rlwimi r2, r3, 8, 0, 23
srwi r2, r2, 9
extsh r3, r2
blr
This implements CodeGen/PowerPC/sign_ext_inreg1.ll
llvm-svn: 36212
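
The shifts can be merged because only a narrow bit field of 'a' ever reaches the result. A C++ sketch of that equivalence (the simplified right-hand side is an illustration, assuming the usual two's-complement narrowing and arithmetic right shift of negative values, as these targets provide):

    #include <cassert>

    // The expression from the testcase.
    static int baz(long long a) { return (short)(((int)(a >> 24)) >> 9); }

    int main() {
      // Only bits [33, 48] of 'a' can reach the result, so the chain of shifts
      // collapses to one narrower shift plus a 16-bit sign extension.
      const long long vals[] = {0LL, -1LL, 0x0123456789abcdefLL,
                                (long long)0xfedcba9876543210ULL};
      for (long long a : vals)
        assert(baz(a) == (int)(short)(a >> 33));
      return 0;
    }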
llvm-svn: 35910
llvm-svn: 35888
llvm-svn: 35887
allows other simplifications. For example, this compiles:
int isnegative(unsigned int X) {
return !(X < 2147483648U);
}
Into this code:
x86:
movl 4(%esp), %eax
shrl $31, %eax
ret
arm:
mov r0, r0, lsr #31
bx lr
thumb:
lsr r0, r0, #31
bx lr
instead of:
x86:
cmpl $0, 4(%esp)
sets %al
movzbl %al, %eax
ret
arm:
mov r3, #0
cmp r0, #0
movlt r3, #1
mov r0, r3
bx lr
thumb:
mov r2, #1
mov r1, #0
cmp r0, #0
blt LBB1_2 @entry
LBB1_1: @entry
cpy r2, r1
LBB1_2: @entry
cpy r0, r2
bx lr
Testcase here: test/CodeGen/Generic/ispositive.ll
llvm-svn: 35883
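
The underlying identity, as a C++ sketch (illustrative only): for an unsigned 32-bit X, !(X < 2147483648U) is exactly the top bit, i.e. X >> 31.

    #include <cassert>
    #include <cstdint>

    int main() {
      // !(X < 2^31) is 1 exactly when the sign bit of X is set, i.e. X >> 31.
      const uint32_t vals[] = {0u, 1u, 0x7fffffffu, 0x80000000u, 0xffffffffu};
      for (uint32_t X : vals)
        assert((uint32_t)!(X < 2147483648U) == (X >> 31));
      return 0;
    }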
improves codegen on many architectures. Tests committed as CodeGen/*/iabs.ll
X86 Old: X86 New:
_test: _test:
movl 4(%esp), %ecx movl 4(%esp), %eax
movl %ecx, %eax movl %eax, %ecx
negl %eax sarl $31, %ecx
testl %ecx, %ecx addl %ecx, %eax
cmovns %ecx, %eax xorl %ecx, %eax
ret ret
PPC Old: PPC New:
_test: _test:
cmpwi cr0, r3, -1 srawi r2, r3, 31
neg r2, r3 add r3, r3, r2
bgt cr0, LBB1_2 ; xor r3, r3, r2
LBB1_1: ; blr
mr r3, r2
LBB1_2: ;
blr
ARM Old: ARM New:
_test: _test:
rsb r3, r0, #0 add r3, r0, r0, asr #31
cmp r0, #0 eor r0, r3, r0, asr #31
movge r3, r0 bx lr
mov r0, r3
bx lr
Thumb Old: Thumb New:
_test: _test:
neg r2, r0 asr r2, r0, #31
cmp r0, #0 add r0, r0, r2
bge LBB1_2 eor r0, r2
LBB1_1: @ bx lr
cpy r0, r2
LBB1_2: @
bx lr
Sparc Old: Sparc New:
test: test:
save -96, %o6, %o6 save -96, %o6, %o6
sethi 0, %l0 sra %i0, 31, %l0
sub %l0, %i0, %l0 add %i0, %l0, %l1
subcc %i0, -1, %l1 xor %l1, %l0, %i0
bg .BB1_2 restore %g0, %g0, %g0
nop retl
.BB1_1: nop
or %g0, %l0, %i0
.BB1_2:
restore %g0, %g0, %g0
retl
nop
It also helps alpha/ia64 :)
llvm-svn: 35881
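
All of the "New" sequences are the standard branch-free absolute value: with s = x >> 31 (arithmetic shift, so 0 or -1), abs(x) = (x + s) ^ s. A C++ sketch (illustrative; assumes arithmetic right shift of negative values and ignores INT_MIN overflow):

    #include <cassert>
    #include <cstdint>

    // Branch-free abs: s is 0 for non-negative x and -1 (all one bits) for
    // negative x, so (x + s) ^ s leaves x alone or computes ~(x - 1) == -x.
    static int32_t iabs(int32_t x) {
      int32_t s = x >> 31;   // arithmetic shift: 0 or -1
      return (x + s) ^ s;
    }

    int main() {
      const int32_t vals[] = {0, 1, -1, 123, -123, 2147483647};
      for (int32_t x : vals)
        assert(iabs(x) == (x < 0 ? -x : x));
      return 0;
    }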
2. Help DAGCombiner recognize zero/sign/any-extended versions of ROTR and ROTL
patterns. This was motivated by the X86/rotate.ll testcase, which should now
generate code for other platforms (and soon-to-come platforms.) Rewrote code
slightly to make it easier to read.
llvm-svn: 35605
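
The shift pair being recognized is the usual rotate idiom; a C++ sketch of the 32-bit left-rotate pattern (illustrative only):

    #include <cassert>
    #include <cstdint>

    // (x << n) | (x >> (32 - n)) is the shift pair matched as a 32-bit left
    // rotate (n restricted to 1..31 here so neither shift is by 32).
    static uint32_t rotl32(uint32_t x, unsigned n) {
      return (x << n) | (x >> (32 - n));
    }

    int main() {
      assert(rotl32(0x80000001u, 1) == 0x00000003u);
      assert(rotl32(0x12345678u, 8) == 0x34567812u);
      return 0;
    }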
combination.
llvm-svn: 35517
big endian targets until the llvm-gcc build issue has been resolved.
llvm-svn: 35449
llvm-svn: 35350
llvm-svn: 35293
llvm-svn: 35289
llvm-svn: 35286