Commit log
llvm-svn: 101434

... wanted the alignment of the pointee.
llvm-svn: 101432

llvm-svn: 101429

... place.
llvm-svn: 101427

... ConvertToScalarInfo.
llvm-svn: 101425

llvm-svn: 101423

... CanConvertToScalar/MergeInType. Eliminate a pointless LLVMContext argument to MergeInType.
llvm-svn: 101422

MachineLoopInfo is already available when MachineSinking runs, so the check is free.
There is no test case because it would require a critical edge into a loop, and CodeGenPrepare splits those. This check is just to be extra careful.
llvm-svn: 101420

... am2offset. Modified the instruction table entry and added a new test case.
llvm-svn: 101415

... is doing the right thing and codegen looks correct for both Thumb and Thumb2.
llvm-svn: 101410

llvm-svn: 101405

... with a fix:
rotate CallInst operands, i.e. move the callee to the back of the operand array.
The motivations for this patch are laid out in my mail to llvm-commits: more efficient access to operands and callee, faster callgraph construction, smaller compiler binary.
llvm-svn: 101397

llvm-svn: 101392

llvm-svn: 101388

llvm-svn: 101387

llvm-svn: 101385

... directly. In cases where there are two dyn_allocas in the same BB it would have caused the old SP value to be reused, and badness ensues. rdar://7493908
LLVM is generating poor code for dynamic alloca; I'll fix that later.
llvm-svn: 101383

llvm-svn: 101382

llvm-svn: 101379

... can't be static.
llvm-svn: 101377

llvm-svn: 101376

llvm-svn: 101375

... in addition to the predecessor.
llvm-svn: 101374

llvm-svn: 101371

llvm-svn: 101368

... of the operand array.
The motivations for this patch are laid out in my mail to llvm-commits: more efficient access to operands and callee, faster callgraph construction, smaller compiler binary.
llvm-svn: 101364

... tokenfactor in between the load/store. This allows us to optimize test7 into:

    _test7:                  ## @test7
    ## BB#0:                 ## %entry
        movl (%rdx), %eax
        ## kill: SIL<def> ESI<kill>
        movb %sil, 5(%rdi)
        ret

instead of:

    _test7:                  ## @test7
    ## BB#0:                 ## %entry
        movl 4(%esp), %ecx
        movl $-65281, %eax   ## imm = 0xFFFFFFFFFFFF00FF
        andl 4(%ecx), %eax
        movzbl 8(%esp), %edx
        shll $8, %edx
        addl %eax, %edx
        movl 12(%esp), %eax
        movl (%eax), %eax
        movl %edx, 4(%ecx)
        ret

llvm-svn: 101355

This doesn't occur much at all; it only seems to be formed when the trunc optimization kicks in due to phase ordering. In that case it saves a few bytes on x86-32.
llvm-svn: 101350

... and. This happens with the store->load narrowing stuff.
llvm-svn: 101348

... a load/or/and/store sequence into a narrower store when it is safe. Daniel tells me that clang will start producing this sort of thing with bitfields, and this does trigger a few dozen times on 176.gcc produced by llvm-gcc even now.

This compiles code like CodeGen/X86/2009-05-28-DAGCombineCrash.ll into:

        movl %eax, 36(%rdi)

instead of:

        movl $4294967295, %eax   ## imm = 0xFFFFFFFF
        andq 32(%rdi), %rax
        shlq $32, %rcx
        addq %rax, %rcx
        movq %rcx, 32(%rdi)

and each of the testcases into a single store. Each of them used to compile into craziness like this:

    _test4:
        movl $65535, %eax        ## imm = 0xFFFF
        andl (%rdi), %eax
        shll $16, %esi
        addl %eax, %esi
        movl %esi, (%rdi)
        ret

llvm-svn: 101343

llvm-svn: 101342

llvm-svn: 101335

llvm-svn: 101334

... patterns to handle the lowering.
llvm-svn: 101331

llvm-svn: 101330

llvm-svn: 101329

llvm-svn: 101325

llvm-svn: 101317

... unit.
llvm-svn: 101315

llvm-svn: 101314

The commit "Adding IPSCCP and Internalize passes to the C-bindings" introduced new dependencies for IPO. Add these to the CMake build, as otherwise the BUILD_SHARED_LIBS=1 build fails.
llvm-svn: 101313

... function checks whether we have a valid submode for VLDM/VSTM (must be either "ia" or "db") before calling ARM_AM::getAM5Opc(AMSubMode, unsigned char).
llvm-svn: 101306

... kernel linker happier when dealing with kexts.
Radar 7805069
llvm-svn: 101303

llvm-svn: 101298

llvm-svn: 101294

Change the error msg to read "Encoding error: msb < lsb".
llvm-svn: 101293

... be used in ImmutableCallSite too.
llvm-svn: 101292

... was asserting because the (RegClass, RegNum) combination doesn't make sense from an encoding point of view.
Since getRegisterEnum() is used all over the place, changing the code to check for an encoding error after each call would not only bloat the code but also make it less readable. An Err flag is added to ARMBasicMCBuilder, where a client can set a non-zero value to indicate some kind of error condition while building up the MCInst. ARMBasicMCBuilder::BuildIt() checks this flag and returns false if a non-zero value is detected.
llvm-svn: 101290

- TryToOptimizeStoreOfMallocToGlobal should check if TargetData is available and bail out if it is not. The transformations being done require TD.
llvm-svn: 101285

... does not have a legal type. The legalizer does not know how to handle those nodes. Radar 7854640.
llvm-svn: 101282