llvm-svn: 47702

uses the same encoding everywhere. Linux FIXME'ed.
llvm-svn: 47701

provide TAI hook for selection of EH data emission format. Currently unused.
llvm-svn: 47699
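
A minimal sketch of what such a TargetAsmInfo (TAI) hook could look like;
the method, enum, and signature below are illustrative guesses, not the
interface that was actually committed.

  // Illustrative sketch only: hypothetical names, not the committed API.
  // Idea: the target's asm-info object is asked which format to use when
  // emitting exception-handling (EH) data.
  namespace llvm {

  enum EHDataFormat {        // hypothetical set of encoding choices
    EHD_AbsolutePtr,         // plain absolute pointers
    EHD_PCRelative,          // pc-relative references
    EHD_Indirect             // indirect through a data pointer
  };

  class TargetAsmInfo {
  public:
    virtual ~TargetAsmInfo() {}
    // The hook a target overrides to select its EH data emission format.
    virtual EHDataFormat getEHDataFormat() const {
      return EHD_AbsolutePtr; // conservative default
    }
  };

  } // namespace llvm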

and was causing aborts with the new APInt changes. This may also fix
an obscure ppc64 bug.
llvm-svn: 47692

llvm-svn: 47688

llvm-svn: 47663

stack slot and store if the SINT_TO_FP is actually legal. This allows
us to compile:

  double a(double b) { return (unsigned)b; }

to:

  _a:
        cvttsd2siq %xmm0, %rax
        movl %eax, %eax
        cvtsi2sdq %rax, %xmm0
        ret

instead of:

  _a:
        subq $8, %rsp
        cvttsd2siq %xmm0, %rax
        movl %eax, %eax
        cvtsi2sdq %rax, %xmm0
        addq $8, %rsp
        ret

crazy.
llvm-svn: 47660

llvm-svn: 47659

  _test:
        movl %edi, %eax
        ret

instead of:

  _test:
        movl $4294967295, %ecx
        movq %rdi, %rax
        andq %rcx, %rax
        ret

It would be great to write this as a Pat pattern that used subregs
instead of a 'pseudo' instruction, but I don't know how to do that
in td files.
llvm-svn: 47658
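
For context, the source pattern behind this is a plain 32-to-64-bit zero
extension; on x86-64 a 32-bit mov already clears bits 63:32 of its
destination register, which is why the one-instruction form above is
sufficient. A small example of my own:

  // Zero extension of a 32-bit value to 64 bits. On x86-64, writing the
  // 32-bit register (movl %edi, %eax) implicitly zeroes the upper 32
  // bits of %rax, so no explicit AND with 0xffffffff is required.
  unsigned long long test(unsigned int x) {
    return x; // implicit zero extension
  }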

llvm-svn: 47657

ComputeMaskedBits to use the APInt form, and remove the
non-APInt form.
llvm-svn: 47654
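
For illustration, a call into the APInt form might look like the sketch
below. This assumes the SelectionDAG API of that era (SDOperand, with
mask and known-bits out-parameters); the signatures are approximate, not
quoted from the tree.

  // Hedged sketch: ask which bits of Op are known to be zero via the
  // APInt-based ComputeMaskedBits.
  #include "llvm/ADT/APInt.h"
  #include "llvm/CodeGen/SelectionDAG.h"
  using namespace llvm;

  // Returns true if the low NumLowBits of Op are known to be zero.
  static bool lowBitsKnownZero(SelectionDAG &DAG, SDOperand Op,
                               unsigned BitWidth, unsigned NumLowBits) {
    APInt Mask = APInt::getLowBitsSet(BitWidth, NumLowBits);
    APInt KnownZero(BitWidth, 0), KnownOne(BitWidth, 0);
    DAG.ComputeMaskedBits(Op, Mask, KnownZero, KnownOne);
    return (KnownZero & Mask) == Mask;
  }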

llvm-svn: 47652

llvm-svn: 47635

llvm-svn: 47629

would have been a Godsend here!
llvm-svn: 47625

llvm-svn: 47606

llvm-svn: 47600

GOT-style position independent code. Before, only tail calls to
protected/hidden functions within the same module were optimized.
Now all function calls are tail call optimized.
llvm-svn: 47594
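
As a concrete illustration (my example, not the commit's): a
tail-position call to an externally visible function, which under
GOT-style PIC previously stayed a full call because the callee was not
known to be protected or hidden. In the LLVM of this era, caller and
callee also had to use a tail-call-capable calling convention (fastcc,
with tail call optimization enabled).

  // extern_callee has default visibility, so under GOT-style PIC the
  // call goes through the GOT; with this change it can still become a
  // jump (tail call) instead of call+ret.
  extern int extern_callee(int x);

  int caller(int x) {
    return extern_callee(x + 1); // call in tail position
  }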

calls. Before, arguments that could overwrite each other were
explicitly lowered to a stack slot, not giving the register allocator
a chance to optimize. Now a sequence of copyto/copyfrom virtual
registers ensures that arguments are loaded into (virtual) registers
before they are lowered to the stack slot (where they might overwrite
each other). Also, parameter stack slots are marked mutable for
(potentially) tail-calling functions.
llvm-svn: 47593
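
A typical hazard case (my example): in a tail call, the outgoing
argument area overlays the caller's own incoming arguments, so with
swapped arguments each value must be read before the slot it lives in
is stored to; the copyto/copyfrom sequence above guarantees exactly
that.

  // f's outgoing argument slots overlay g's incoming ones in a tail
  // call. Swapping the arguments means each outgoing value must be
  // loaded (into a virtual register) before its source slot is
  // overwritten; the register allocator then schedules the rest.
  extern int f(int a, int b);

  int g(int a, int b) {
    return f(b, a); // tail call with swapped arguments
  }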

pointed out that this isn't correct at -O0.
llvm-svn: 47575

llvm-svn: 47573

{S,U}MUL_LOHI with an unused high value.
llvm-svn: 47569
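
For context (my example): a plain 32-bit multiply can be selected as a
widening multiply that produces both halves of the product (EDX:EAX on
x86, for instance); when only the low half is consumed, the high result
of the {S,U}MUL_LOHI node is dead.

  // Only the low 32 bits of the 32x32 product are used here, so a
  // MUL_LOHI's high result would be dead and a plain MUL suffices.
  unsigned mul_low(unsigned a, unsigned b) {
    return a * b;
  }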

result into a MUL late in the X86 codegen process. ISD::MUL is
once again Legal on X86, so this is no longer needed. And the
hack was suboptimal; see PR1874 for details.
llvm-svn: 47567

a SignBitIsZero function to simplify a common use case.
llvm-svn: 47561
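
The common use case being wrapped is a MaskedValueIsZero query whose
mask is just the sign bit. A hedged reconstruction of the helper's
shape (era API, approximate signatures, not the committed code):

  #include "llvm/ADT/APInt.h"
  #include "llvm/CodeGen/SelectionDAG.h"
  using namespace llvm;

  // The sign bit is known zero iff MaskedValueIsZero reports it so for
  // a mask containing only the top bit of the value's type.
  static bool signBitIsZero(SelectionDAG &DAG, SDOperand Op) {
    unsigned BitWidth = MVT::getSizeInBits(Op.getValueType());
    return DAG.MaskedValueIsZero(Op, APInt::getSignBit(BitWidth));
  }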

of TokenFactor underneath chain (seems to be enough)
llvm-svn: 47554

%r3 on PPC) in their ASM files. However, it's hard for humans to read
during debugging. Adding a new field to the register data that lets you
specify a different name to be printed than the one that goes into the
ASM file -- %x3 instead of %r3, for instance.
llvm-svn: 47534
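
A sketch of the idea with hypothetical struct and field names: each
register descriptor carries both the name the assembler must see and a
friendlier name for debug output.

  // Illustrative only; not the actual TableGen-generated descriptors.
  struct RegisterDesc {
    const char *AsmName;   // emitted into the .s file, e.g. "r3"
    const char *DebugName; // printed in human-facing dumps, e.g. "x3"
  };

  static const RegisterDesc PPCRegs[] = {
    { "r3", "x3" },
    { "r4", "x4" },
  };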

for CellSPU modifications:
- SPUInstrInfo.td refactoring: "multiclass" really is _your_ friend.
- Other improvements based on the refactoring effort in SPUISelLowering.cpp,
  esp. in SPUISelLowering::PerformDAGCombine(), where zero-amount shifts and
  rotates are now eliminated; other scalar-to-vector-to-scalar silliness
  is also eliminated.
- 64-bit operations are being implemented; _muldi3.c from the gcc runtime
  now compiles and generates the right code. More work still needs to be
  done.
llvm-svn: 47532

llvm-svn: 47524

LiveIntervalAnalysis already handles it as a special case.
llvm-svn: 47522

stuff into ParamAttrsList.h. Per feedback from ParamAttrs changes.
llvm-svn: 47504

llvm-svn: 47483

llvm-svn: 47476

instead of with mmx registers. This horribleness is apparently
done by gcc to avoid having to insert emms in places that really
should have it. This is the second half of rdar://5741668.
llvm-svn: 47474
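
To make the scenario concrete (my example): an MMX value crossing a
function boundary. If it travels in an integer register rather than an
MMX register, MMX state is never touched and no emms is owed around the
call.

  // Source pattern affected by the convention described above: an MMX
  // value passed through a call boundary.
  #include <mmintrin.h>

  __m64 passthrough(__m64 v) {
    return v;
  }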

GCC apparently does this, and code depends on not having to do
emms when this happens. This is x86-64 only so far; the second half
should handle x86-32.
rdar://5741668
llvm-svn: 47470

new things.
llvm-svn: 47458

llvm-svn: 47431

failing on archs that haven't implemented them yet
llvm-svn: 47430

llvm-svn: 47400

llvm-svn: 47385

llvm-svn: 47375

llvm-svn: 47370

llvm-svn: 47369

annoying warnings.
llvm-svn: 47367

llvm-svn: 47354

folding change.
llvm-svn: 47351

llvm-svn: 47337

This compiles test-nofold.ll into:

  _test:
        movl $15, %ecx
        andl 4(%esp), %ecx
        testl %ecx, %ecx
        movl $42, %eax
        cmove %ecx, %eax
        ret

instead of:

  _test:
        movl 4(%esp), %eax
        movl %eax, %ecx
        andl $15, %ecx
        testl $15, %eax
        movl $42, %eax
        cmove %ecx, %eax
        ret
llvm-svn: 47330

llvm-svn: 47300

check if it's essentially a SCALAR_TO_VECTOR. Avoid turning
(v8i16) <10, u, u, u> into <10, 0, u, u, u, u, u, u>. Instead,
simply convert it to a SCALAR_TO_VECTOR of the proper type.
- X86 now normalizes SCALAR_TO_VECTOR to
  (BIT_CONVERT (v4i32 SCALAR_TO_VECTOR)). Get rid of X86ISD::S2VEC.
llvm-svn: 47290
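
A hedged sketch of the normalization in SelectionDAG terms, using the
approximate API of that era (MVT::ValueType-style getNode calls); treat
it as one reading of the description above, not the committed code.

  #include "llvm/CodeGen/SelectionDAG.h"
  using namespace llvm;

  // Build the canonical (v4i32 SCALAR_TO_VECTOR s), then bit-convert it
  // to the requested vector type, e.g. v8i16.
  static SDOperand normalizeS2V(SelectionDAG &DAG, SDOperand Scalar,
                                MVT::ValueType ResultVT) {
    SDOperand V = DAG.getNode(ISD::SCALAR_TO_VECTOR, MVT::v4i32, Scalar);
    return DAG.getNode(ISD::BIT_CONVERT, ResultVT, V);
  }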

on x86-32 since i64 itself is not a Legal type. And update some
comments.
llvm-svn: 47282