Commit message

is doing the right thing and codegen looks correct for both Thumb and Thumb2.
llvm-svn: 101410

llvm-svn: 101405

system.
llvm-svn: 101404

where the tag was declared. WIP.
llvm-svn: 101403

deposit the file in the original source directory.
llvm-svn: 101402

structs, typedefs. Those still need work to disambiguate them across translation units.
llvm-svn: 101401

native linking export files, including running sed to prepend underscores on darwin, and make use of it in libLTO and libEnhancedDisassembly. Remove the leading underscores from library export files so that they work with the new EXPORTED_SYMBOL_FILE support.
llvm-svn: 101399

llvm-svn: 101398

with a fix: rotate CallInst operands, i.e. move the callee to the back of the operand array. The motivation for this patch is laid out in my mail to llvm-commits: more efficient access to operands and callee, faster callgraph construction, and a smaller compiler binary.
llvm-svn: 101397
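
For illustration, a minimal sketch of the layout described above, using hypothetical names rather than LLVM's actual classes: with the callee stored as the last operand, argument i is simply operand i and the callee is always the final slot, so neither access needs an index adjustment.

#include <vector>

struct Value {};

// Hypothetical container mirroring the "callee at the back" operand order:
//   [arg0, arg1, ..., argN-1, callee]
struct CallOperands {
  std::vector<Value *> Ops;

  Value *getCallee() const { return Ops.back(); }            // always the last slot
  Value *getArgOperand(unsigned i) const { return Ops[i]; }  // argument i is operand i
  unsigned getNumArgOperands() const { return Ops.size() - 1; }
};
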
llvm-svn: 101396

llvm-svn: 101395

llvm-svn: 101392

llvm-svn: 101388

llvm-svn: 101387

llvm-svn: 101385

llvm-svn: 101384

directly. In cases where there are two dynamic allocas in the same BB, it would have caused the old SP value to be reused, and badness ensues. rdar://7493908. LLVM is generating poor code for dynamic alloca; I'll fix that later.
llvm-svn: 101383
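
A hypothetical C-level sketch of the triggering pattern (the function name and fill values are made up for illustration): two variable-sized allocas in one basic block, where the second stack adjustment must not reuse the SP value computed before the first.

#include <alloca.h>
#include <cstring>

// Two dynamic allocas in the same basic block.
void two_dynamic_allocas(unsigned n) {
  char *a = static_cast<char *>(alloca(n));
  char *b = static_cast<char *>(alloca(n));
  std::memset(a, 0, n);
  std::memset(b, 1, n);
}
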
llvm-svn: 101382

llvm-svn: 101381

llvm-svn: 101379

the default case in GRExprEngine::Visit (in r101129). Instead, enumerate all Stmt cases and have no 'default' case in the switch statement. When we encounter a Stmt we don't handle, we should explicitly add it to the switch statement.
llvm-svn: 101378
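
A generic C++ sketch of that pattern (the enum and visitor below are invented for illustration, not the analyzer's real types): with no 'default' label, the compiler's switch-coverage warning (e.g. -Wswitch) fires whenever a new kind is added but not handled, so unsupported statements have to be listed deliberately.

enum class StmtKind { If, While, Return };  // stand-in for the real Stmt kinds

void visit(StmtKind K) {
  switch (K) {
  case StmtKind::If:     /* handle if     */ break;
  case StmtKind::While:  /* handle while  */ break;
  case StmtKind::Return: /* handle return */ break;
  // No 'default': a new StmtKind without a case here is flagged by the
  // compiler instead of silently falling through to a catch-all.
  }
}
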
can't be static.
llvm-svn: 101377

llvm-svn: 101376

llvm-svn: 101375

in addition to the predecessor.
llvm-svn: 101374

fixes a bug where we would lay out virtual bases in the wrong order.
llvm-svn: 101373

ASTContext::getTypeSize() rather than ASTContext::getIntWidth() for the width of an integral type. The former includes padding for bools (to the target's size) while the latter does not, so we would end up zero-extending bools to the target width when we shouldn't. Fixes a crash-on-valid in the included test.
llvm-svn: 101372

llvm-svn: 101371

llvm-svn: 101370

llvm-svn: 101369

llvm-svn: 101368

llvm-svn: 101366

llvm-svn: 101365

of the operand array. The motivation for this patch is laid out in my mail to llvm-commits: more efficient access to operands and callee, faster callgraph construction, and a smaller compiler binary.
llvm-svn: 101364

-fixit-at specified a particular fixit to fix, or the -o flag was used.
llvm-svn: 101359

llvm-svn: 101358

llvm-svn: 101357

- Used to determine whether the alignment of the type in a bit-field is respected when laying out structures. The default is true; targets can override this as needed.
- This is designed to correspond to the PCC_BITFIELD_TYPE_MATTERS macro in gcc. The AST/Sema implementation only affects one line, unless I have forgotten something. I'd appreciate further review.
- IRgen still needs to be updated to fully support this (which is effectively PR5591).
llvm-svn: 101356
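
As a rough illustration of what this controls (the struct below is hypothetical, and the exact sizes depend on the target): when the declared type of a bit-field is respected, an int-typed bit-field gives the whole struct int alignment even though only one bit is used; when it is ignored, the field can be packed much more tightly.

// With the bit-field's 'int' type respected (PCC_BITFIELD_TYPE_MATTERS-style
// layout), this struct typically gets 4-byte alignment and sizeof == 4.
// If the declared type is ignored, it can be laid out in as little as 2 bytes.
struct Flagged {
  char c;
  int flag : 1;
};
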
tokenfactor in between the load/store. This allows us to optimize test7 into:
_test7: ## @test7
## BB#0: ## %entry
movl (%rdx), %eax
## kill: SIL<def> ESI<kill>
movb %sil, 5(%rdi)
ret
instead of:
_test7: ## @test7
## BB#0: ## %entry
movl 4(%esp), %ecx
movl $-65281, %eax ## imm = 0xFFFFFFFFFFFF00FF
andl 4(%ecx), %eax
movzbl 8(%esp), %edx
shll $8, %edx
addl %eax, %edx
movl 12(%esp), %eax
movl (%eax), %eax
movl %edx, 4(%ecx)
ret
llvm-svn: 101355
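
A hypothetical C-level equivalent of the kind of code involved (the signature and constants are reconstructed from the assembly above, not taken from the actual test): the extra load from p2 is what gives rise to the token factor between the load/store pair, and it no longer blocks narrowing the byte update into a single byte store.

int test7_like(unsigned *a0, unsigned char a1, int *p2) {
  int other = *p2;  // independent load between the load of a0[1] and the store
  a0[1] = (a0[1] & 0xFFFF00FFu) | (static_cast<unsigned>(a1) << 8);
  return other;     // with the combine, the update above becomes one byte store
}
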
option parser.
- Note that this is a behavior change: previously, -mllvm at the driver level forwarded to clang -cc1. The driver does a little magic to make sure that '-mllvm -disable-llvm-optzns' works correctly, but other users will need to be updated to use -Xclang.
llvm-svn: 101354

This doesn't occur much at all; it only seems to be formed when the trunc optimization kicks in due to phase ordering. In that case it saves a few bytes on x86-32.
llvm-svn: 101350

and. This happens with the store->load narrowing stuff.
llvm-svn: 101348

arguments, it is now an immutable object. Also, add some checking of various invariants that should hold on the CGBitFieldInfo access.
llvm-svn: 101345

llvm-svn: 101344

a load/or/and/store sequence into a narrower store when it is safe. Daniel tells me that clang will start producing this sort of thing with bitfields, and this does trigger a few dozen times on 176.gcc produced by llvm-gcc even now.
This compiles code like CodeGen/X86/2009-05-28-DAGCombineCrash.ll into:
movl %eax, 36(%rdi)
instead of:
movl $4294967295, %eax ## imm = 0xFFFFFFFF
andq 32(%rdi), %rax
shlq $32, %rcx
addq %rax, %rcx
movq %rcx, 32(%rdi)
and each of the testcases into a single store. Each of them used to compile into craziness like this:
_test4:
movl $65535, %eax ## imm = 0xFFFF
andl (%rdi), %eax
shll $16, %esi
addl %eax, %esi
movl %esi, (%rdi)
ret
llvm-svn: 101343
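
For context, hypothetical source patterns of the kind described above (the function names are made up): each one is a read-modify-write of one half of a word, which the combine can turn into a single store of just the changed half instead of the wide load/and/shift/or/store sequence.

#include <cstdint>

// Storing into the high 16 bits of a 32-bit word; can become one 16-bit store.
void set_high16(std::uint32_t *p, std::uint32_t x) {
  *p = (*p & 0x0000FFFFu) | (x << 16);
}

// Storing into the high 32 bits of a 64-bit word; can become one 32-bit store.
void set_high32(std::uint64_t *p, std::uint32_t v) {
  *p = (*p & 0x00000000FFFFFFFFull) | (static_cast<std::uint64_t>(v) << 32);
}
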
llvm-svn: 101342

llvm-svn: 101341

llvm-svn: 101340

use new CGBitfieldInfo::AccessInfo decomposition, instead of computing the access policy itself.
- Sadly, this doesn't seem to give any .ll size win so far. It is possible to make this routine significantly smarter & avoid various shifting, masking, and zext/sext, but I'm not really convinced it is worth it. It is tricky, and this is really instcombine's job.
- No intended functionality change; the test case is just to increase coverage & serves as a demo file; it worked before this commit.
The new fixes from r101222 are:
1. The shift to the target position needs to occur after the value is extended to the correct size. This broke Clang bootstrap, among other things no doubt.
2. Swap the order of arguments to OR, to get a tad more constant folding.
llvm-svn: 101339
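
A tiny standalone illustration of fix 1 (the values are arbitrary): if the shift into the target position happens while the value is still in its narrow type, the shifted-out bits are lost, so the widening has to come first.

#include <cstdint>

std::uint32_t place_nibble_example() {
  std::uint8_t v = 0xAB;
  std::uint8_t shifted_narrow = v << 4;                       // truncated to 0xB0
  std::uint32_t wrong = shifted_narrow;                       // 0x000000B0, top bits gone
  std::uint32_t right = static_cast<std::uint32_t>(v) << 4;   // 0x00000AB0, extend then shift
  (void)wrong;
  return right;
}
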
llvm-svn: 101338