Remove previous DwarfCFI hack.
llvm-svn: 130187
Add support for switch and indirectbr edges. This works by densely numbering
all blocks which have such terminators, and then separately numbering the
possible successors. Each predecessor writes down its number; the successor
knows its own number (as a ConstantInt) and passes it, together with a pointer
to the slot the predecessor wrote, to the runtime, which looks up the counter
in a per-function table.
Coverage data should now be functional, but I haven't tested it on anything
other than my 2-file synthetic test program for coverage.
llvm-svn: 130186
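To make the numbering scheme concrete, here is a rough C sketch of what the
instrumented code conceptually does. This is an illustration only, not the
IR the pass actually emits; the names (cov_record_edge, pred_slot,
edge_counters) and the table sizes are invented for this sketch.

  #include <stdint.h>

  #define NUM_PRED_BLOCKS 4   /* blocks ending in switch/indirectbr, densely numbered */
  #define NUM_SUCCESSORS  8   /* separately numbered possible successors              */

  /* Per-function counter table indexed by (predecessor number, successor number). */
  static uint64_t edge_counters[NUM_PRED_BLOCKS][NUM_SUCCESSORS];

  /* Slot the predecessor writes its own dense number into before branching. */
  static int32_t pred_slot;

  /* Stand-in for the runtime hook: the successor passes its own number (known
   * as a constant) plus a pointer to the predecessor's slot, and the runtime
   * looks up and bumps the matching counter.                                  */
  static void cov_record_edge(const int32_t *pred, int32_t succ_num) {
    edge_counters[*pred][succ_num]++;
  }

  int dispatch(int x) {
    pred_slot = 0;                       /* this block's dense number is 0 */
    switch (x) {
    case 1:  cov_record_edge(&pred_slot, 0); return 10;   /* successor #0 */
    case 2:  cov_record_edge(&pred_slot, 1); return 20;   /* successor #1 */
    default: cov_record_edge(&pred_slot, 2); return 0;    /* successor #2 */
    }
  }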
llvm-svn: 130181
return it as a clobber. This allows GVN to do smart things.
Enhance GVN to be smart about the case when a small load is clobbered
by a larger overlapping load. In this case, forward the value. This
allows us to compile stuff like this:
  int test(void *P) {
    int tmp = *(unsigned int*)P;
    return tmp+*((unsigned char*)P+1);
  }
into:
  _test:                  ## @test
    movl   (%rdi), %ecx
    movzbl %ch, %eax
    addl   %ecx, %eax
    ret
which has one load. We already handled the case where the smaller
load was from a must-aliased base pointer.
llvm-svn: 130180
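For intuition, the forwarded value is just a slice of the wide load: on a
little-endian target the byte at P+1 occupies bits 8-15 of the 32-bit value
loaded from P, so the narrow load can be rewritten as a shift and truncate of
the wide one. A hand-written C sketch of the transformed function
(illustration only, not compiler output):

  #include <stdint.h>

  int test_transformed(void *P) {
    uint32_t wide  = *(uint32_t *)P;          /* the single remaining load           */
    uint8_t  byte1 = (uint8_t)(wide >> 8);    /* *((unsigned char*)P+1), little-endian */
    return (int)wide + byte1;                 /* tmp + zero-extended second byte     */
  }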
space, if requested, will be used for complex addresses of the Blocks' variables.
llvm-svn: 130178
llvm-svn: 130171
s/addVariableAddress/addFrameVariableAddress/g
llvm-svn: 130170
Observed this while reading code, so I do not have a test case handy here.
llvm-svn: 130167
llvm-svn: 130166
llvm-svn: 130165
llvm-svn: 130160
llvm-svn: 130153
patch by Johannes Schaub!
llvm-svn: 130151
llvm-svn: 130137
llvm-svn: 130131
incoming argument. However, it is appropriate to emit a DBG_VALUE referring to this incoming argument in the entry block of the MachineFunction.
llvm-svn: 130129
lit needs a linter ...
llvm-svn: 130126
these were just one line of a file. Explicitly set the eol-style property on the
files to try and ensure this fix stays.
llvm-svn: 130125
llvm-svn: 130120
llvm-svn: 130116
Fixes PR9787.
llvm-svn: 130115
llvm-svn: 130097
llvm-svn: 130096
llvm-svn: 130095
llvm-svn: 130094
llvm-svn: 130093
llvm-svn: 130092
llvm-svn: 130091
llvm-svn: 130090
llvm-svn: 130086
llvm-svn: 130068
llvm-svn: 130054
llvm-svn: 130053
llvm-svn: 130050
Fixes Thumb2 ADCS and SBCS lowering: <rdar://problem/9275821>.
t2ADCS/t2SBCS are now pseudo instructions, consistent with ARM, so the
assembly printer correctly prints the 's' suffix.
Fixes Thumb2 adde -> SBC matching to check for live/dead carry flags.
Fixes the internal ARM machine opcode mnemonic for ADCS/SBCS.
Fixes ARM SBC lowering to check for live carry (potential bug).
llvm-svn: 130048
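For context, an illustrative example (not taken from the commit): 64-bit
integer arithmetic on a 32-bit ARM/Thumb2 target lowers to a flag-setting
add/sub of the low words followed by an add/sub-with-carry of the high words,
which is where the adde patterns, the 's' suffix, and the carry-flag liveness
checks above come into play.

  #include <stdint.h>

  /* Typically becomes ADDS (low halves, sets carry) + ADC (high halves). */
  uint64_t add64(uint64_t a, uint64_t b) { return a + b; }

  /* Typically becomes SUBS + SBC. */
  uint64_t sub64(uint64_t a, uint64_t b) { return a - b; }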
llvm-svn: 130047
llvm-svn: 130046
  <h2>Section Example</h2>
  <div> <!-- h2+div is applied -->
    <p>Section preamble.</p>
    <h3>Subsection Example</h3>
    <p> <!-- h3+p is applied -->
      Subsection body
    </p>
    <!-- End of section body -->
  </div>
FIXME: Handle H5 better.
llvm-svn: 130040
llvm-svn: 130039
llvm-svn: 130033
llvm-svn: 130028
llvm-svn: 130027
splitting.
Sometimes it is better to split per block, and we missed those cases.
llvm-svn: 130025
llvm-svn: 130021
fix bugs exposed by the gcc dejagnu testsuite:
1. The load may actually be used by a dead instruction, which
would cause an assert.
2. The load may not be used by the current chain of instructions,
and we could move it past a side-effecting instruction. Change
how we process uses to define the problem away.
llvm-svn: 130018
should print out ldr, not ldr.n.
rdar://problem/9267772
llvm-svn: 130008
On x86 this allows folding a load into the cmp, greatly reducing register pressure.
  movzbl (%rdi), %eax
  cmpl   $47, %eax
->
  cmpb   $47, (%rdi)
This shaves 8k off gcc.o on i386. I'll leave applying the patch in README.txt to Chris :)
llvm-svn: 130005
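A hypothetical source-level snippet of the kind of code that produces this
pattern (not taken from the commit; 47 is ASCII '/'):

  /* The zero-extended byte load feeding the compare can now be folded
   * into an 8-bit cmpb against memory on x86.                          */
  int is_slash(const char *p) {
    return *(const unsigned char *)p == 47;   /* 47 == '/' */
  }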
llvm-svn: 130004
llvm-svn: 129995
(C2 >> C1)) & C1. (Part of PR5039)
This tends to happen a lot with bitfield code generated by clang. A simple example for x86_64 is
  uint64_t foo(uint64_t x) { return (x&1) << 42; }
which used to compile into bloated code:
  shlq    $42, %rdi                ## encoding: [0x48,0xc1,0xe7,0x2a]
  movabsq $4398046511104, %rax     ## encoding: [0x48,0xb8,0x00,0x00,0x00,0x00,0x00,0x04,0x00,0x00]
  andq    %rdi, %rax               ## encoding: [0x48,0x21,0xf8]
  ret                              ## encoding: [0xc3]
with this patch we can fold the immediate into the and:
  andq    $1, %rdi                 ## encoding: [0x48,0x83,0xe7,0x01]
  movq    %rdi, %rax               ## encoding: [0x48,0x89,0xf8]
  shlq    $42, %rax                ## encoding: [0x48,0xc1,0xe0,0x2a]
  ret                              ## encoding: [0xc3]
It's possible to save another byte by using 'andl' instead of 'andq' but I currently see no way of doing
that without making this code even more complicated. See the TODOs in the code.
llvm-svn: 129990