Remove previous DwarfCFI hack.
llvm-svn: 130187

Add support for switch and indirectbr edges. This works by densely numbering
all blocks which have such terminators, and then separately numbering the
possible successors. The predecessors write down a number, the successor knows
its own number (as a ConstantInt) and sends that and the pointer to the number
the predecessor wrote down to the runtime, which looks up the counter in a
per-function table.
Coverage data should now be functional, but I haven't tested it on anything
other than my 2-file synthetic test program for coverage.
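A rough sketch of the bookkeeping this scheme implies, using hypothetical names rather than the actual profiling-runtime interface:

    // Hypothetical illustration only: each function owns a dense table of
    // counters indexed by (predecessor number, successor number).
    struct FuncEdgeTable {
      unsigned NumPreds;            // blocks ending in switch/indirectbr
      unsigned NumSuccs;            // their possible successors
      unsigned long long *Counters; // NumPreds * NumSuccs entries
    };

    // The predecessor stores its own number into *PredSlot before branching;
    // the successor passes that slot and its own constant number to the
    // runtime, which bumps the matching counter.
    static void edgeTaken(FuncEdgeTable *T, unsigned *PredSlot, unsigned SuccNum) {
      T->Counters[*PredSlot * T->NumSuccs + SuccNum] += 1;
    }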
llvm-svn: 130186

llvm-svn: 130181

return it as a clobber. This allows GVN to do smart things.
Enhance GVN to be smart about the case when a small load is clobbered
by a larger overlapping load. In this case, forward the value. This
allows us to compile stuff like this:
int test(void *P) {
  int tmp = *(unsigned int*)P;
  return tmp+*((unsigned char*)P+1);
}
into:
_test:                                  ## @test
        movl    (%rdi), %ecx
        movzbl  %ch, %eax
        addl    %ecx, %eax
        ret
which has one load. We already handled the case where the smaller
load was from a must-aliased base pointer.
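Conceptually, on a little-endian target the forwarded value is just a shift-and-mask of the wider load; a C rendering (illustrative only, the actual transform happens on LLVM IR) of what the optimized code computes:

    int test(void *P) {
      unsigned tmp = *(unsigned int *)P;
      // The byte at P+1 lives in bits 8..15 of the wider load, so the second,
      // smaller load can be satisfied by a shift-and-mask of the first.
      return tmp + ((tmp >> 8) & 0xff);
    }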
llvm-svn: 130180

space, if requested, will be used for complex addresses of the Blocks' variables.
llvm-svn: 130178

llvm-svn: 130171

s/addVariableAddress/addFrameVariableAddress/g
llvm-svn: 130170

Observed this while reading code, so I do not have a test case handy here.
llvm-svn: 130167

llvm-svn: 130166

llvm-svn: 130160

patch by Johannes Schaub!
llvm-svn: 130151

llvm-svn: 130137

llvm-svn: 130131

incoming argument. However, it is appropriate to emit a DBG_VALUE referring to this incoming argument in the entry block of the MachineFunction.
llvm-svn: 130129

these was just one line of a file. Explicitly set the eol-style property on the
files to try and ensure this fix stays.
llvm-svn: 130125

llvm-svn: 130120

llvm-svn: 130116

Fixes PR9787.
llvm-svn: 130115

llvm-svn: 130097

llvm-svn: 130096

llvm-svn: 130095

llvm-svn: 130093

llvm-svn: 130086

llvm-svn: 130068

llvm-svn: 130054

llvm-svn: 130053

Fixes Thumb2 ADCS and SBCS lowering: <rdar://problem/9275821>.
t2ADCS/t2SBCS are now pseudo instructions, consistent with ARM, so the
assembly printer correctly prints the 's' suffix.
Fixes Thumb2 adde -> SBC matching to check for live/dead carry flags.
Fixes the internal ARM machine opcode mnemonic for ADCS/SBCS.
Fixes ARM SBC lowering to check for live carry (potential bug).
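For illustration only (not from the original commit), a hypothetical source pattern that exercises these carry-flag chains on a 32-bit ARM/Thumb2 target, where 64-bit arithmetic is split into a flag-setting low half and an ADC/SBC high half that must see the carry live:

    unsigned long long add64(unsigned long long a, unsigned long long b) {
      return a + b;   // lowered to ADDS (low word) + ADC (high word)
    }
    unsigned long long sub64(unsigned long long a, unsigned long long b) {
      return a - b;   // lowered to SUBS (low word) + SBC (high word)
    }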
llvm-svn: 130048

llvm-svn: 130046

llvm-svn: 130033

llvm-svn: 130028

splitting.
Sometimes it is better to split per block, and we missed those cases.
llvm-svn: 130025

fix bugs exposed by the gcc dejagnu testsuite:
1. The load may actually be used by a dead instruction, which
would cause an assert.
2. The load may not be used by the current chain of instructions,
and we could move it past a side-effecting instruction. Change
how we process uses to define the problem away.
llvm-svn: 130018

should print out ldr, not ldr.n.
rdar://problem/9267772
llvm-svn: 130008

On x86 this allows a load to be folded into the cmp, greatly reducing register pressure.
movzbl (%rdi), %eax
cmpl $47, %eax
->
cmpb $47, (%rdi)
This shaves 8k off gcc.o on i386. I'll leave applying the patch in README.txt to Chris :)
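A hypothetical source pattern that produces the sequence above (47 is ASCII '/'), shown only to illustrate the fold:

    int starts_with_slash(const unsigned char *p) {
      return *p == 47;   // byte load + compare; now folded into a single cmpb
    }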
llvm-svn: 130005

llvm-svn: 130004

llvm-svn: 129995

(C2 >> C1)) & C1. (Part of PR5039)
This tends to happen a lot with bitfield code generated by clang. A simple example for x86_64 is
uint64_t foo(uint64_t x) { return (x&1) << 42; }
which used to compile into bloated code:
        shlq    $42, %rdi               ## encoding: [0x48,0xc1,0xe7,0x2a]
        movabsq $4398046511104, %rax    ## encoding: [0x48,0xb8,0x00,0x00,0x00,0x00,0x00,0x04,0x00,0x00]
        andq    %rdi, %rax              ## encoding: [0x48,0x21,0xf8]
        ret                             ## encoding: [0xc3]
with this patch we can fold the immediate into the and:
        andq    $1, %rdi                ## encoding: [0x48,0x83,0xe7,0x01]
        movq    %rdi, %rax              ## encoding: [0x48,0x89,0xf8]
        shlq    $42, %rax               ## encoding: [0x48,0xc1,0xe0,0x2a]
        ret                             ## encoding: [0xc3]
It's possible to save another byte by using 'andl' instead of 'andq' but I currently see no way of doing
that without making this code even more complicated. See the TODOs in the code.
llvm-svn: 129990

llvm-svn: 129984

llvm-svn: 129980

llvm-svn: 129978

llvm-svn: 129976

llvm-svn: 129975

Patch by Patrick Walton!
llvm-svn: 129974

llvm-svn: 129973

add <rd>, sp, #<imm8>
ldr <rd>, [sp, #<imm8>]
When the offset from sp is a multiple of 4 and in the range 0-1020.
This saves code size by utilizing 16-bit instructions.
rdar://9321541
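Purely as an illustration (not from the commit), a function whose stack slots are addressed relative to sp with small, word-aligned offsets, so its loads can use the 16-bit encodings above:

    void consume(int *);         // assumed external helper, for illustration only
    int sum_two(void) {
      int buf[32];               // sp-relative slots, offsets well under 1020
      consume(buf);
      return buf[4] + buf[12];   // ldr <rd>, [sp, #<imm8>] style loads
    }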
llvm-svn: 129971

llvm-svn: 129970

the first time through.
llvm-svn: 129969

Silences GCC warning.
I wonder why Clang doesn't warn on this...
llvm-svn: 129968

An exception is thrown via a call to __cxa_throw, which we don't expect to
return. Therefore, the "true" part of the invoke goes to a BB that has
'unreachable' as its only instruction. This is lowered into an empty MachineBB.
The landing pad for this invoke, however, is directly after the "true" MBB.
When the empty MBB is removed, the landing pad is directly below the BB with the
invoke call. The unconditional branch is removed and then the two blocks are
merged together.
The testcase is too big for a regression test.
<rdar://problem/9305728>
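A hypothetical C++ pattern that sets up the situation described above: the throw becomes an invoke of __cxa_throw whose normal successor contains only 'unreachable', and the landing pad is laid out immediately after that (empty) block:

    void f() {
      try {
        throw 42;    // invoke __cxa_throw; normal edge -> 'unreachable' block
      } catch (int) {
        // landing pad
      }
    }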
llvm-svn: 129965

X8664_ELFTargetObjectFile::getFDEEncoding to match reality.
llvm-svn: 129959