into and/shift would cause nodes to move around, leaving a dangling
pointer. The code tried to avoid this with a HandleSDNode, but got the
details wrong.
llvm-svn: 123578
compare.
llvm-svn: 123560
multi-instruction sequences like calls. Many thanks to Jakob for
finding a testcase.
llvm-svn: 123559
declaration and its assignments.
Found by the clang static analyzer.
llvm-svn: 123486
description emission. Currently all the backends use table-based stuff.
llvm-svn: 123476
llvm-svn: 123475
llvm-gcc-i386-linux-selfhost buildbot heartburn...
llvm-svn: 123431
llvm-svn: 123427
llvm-svn: 123422
after sexts generated for addressing that got folded. Previously we compiled
test5 into:
_test5: ## @test5
## BB#0:
movq -8(%rsp), %rax ## 8-byte Reload
movq (%rdi,%rax), %rdi
addq %rdx, %rdi
movslq %esi, %rax
movq %rax, -8(%rsp) ## 8-byte Spill
movq %rdi, %rax
ret
which is insane and wrong. Now we produce:
_test5: ## @test5
## BB#0:
movslq %esi, %rax
movq (%rdi,%rax), %rax
addq %rdx, %rax
ret
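Reconstructed from the assembly, the testcase is roughly equivalent to this
C++ (a hedged approximation; the real test5 is written in LLVM IR):

  long test5(char *a, int i, long j) {
    return *(long *)(a + i) + j;   // the sext of i folds into the address
  }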
llvm-svn: 123414
llvm-svn: 123408
llvm-svn: 123399
16 bytes for PR8969. Update all testcases accordingly.
llvm-svn: 123367
llvm-svn: 123242
llvm-svn: 123171
and fixes here and there.
llvm-svn: 123170
These functions no longer assert when passed 0, but simply return false instead.
No functional change intended.
llvm-svn: 123155
llvm-svn: 123102
llvm-svn: 123048
Instead encode the LLVM IR level property "HasSideEffects" in an operand
(shared with IsAlignStack). Added MachineInstr::hasUnmodeledSideEffects() to
check the operand when the instruction is an INLINEASM.
This allows memory instructions to be moved around INLINEASM instructions.
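At the source level the distinction corresponds to GNU inline asm with and
without volatile (a minimal sketch; the asm bodies are placeholders):

  int probe(int *p) {
    int t;
    asm("movl $1, %0" : "=r"(t));    // no unmodeled side effects: the load
                                     // below may now move across this asm
    asm volatile("" ::: "memory");   // HasSideEffects set: still a barrier
    return t + *p;
  }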
llvm-svn: 123044
regression (even on Nehalem) in edge cases. I also didn't see any real performance benefit.
llvm-svn: 123015
Patch by Richard Smith.
llvm-svn: 122962
llvm-svn: 122957
The theory is it's still faster than a pair of movq / a quad of movl. This
will probably hurt older chips like P4 but should run faster on current
and future Intel processors. rdar://8817010
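The typical case this affects is a 16-byte aggregate copy (a hedged
illustration; the type is made up):

  struct Pair { long a, b; };   // 16 bytes, only 8-byte aligned
  void copy(Pair *d, const Pair *s) {
    *d = *s;   // now one movups load + one movups store,
  }            // instead of a pair of movq (or a quad of movl on 32-bit)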
llvm-svn: 122955
etc. takes an OptSize option. If OptSize is true, it returns the inline
limit for functions with the OptSize attribute.
llvm-svn: 122952
works only on MinGW32. On 64-bit, the function to call is "__chkstk".
Patch by KS Sreeram!
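A hedged illustration of the kind of function that needs the probe (the
helper is made up):

  extern void use(char *);
  void grow_stack() {
    char buf[8192];   // more than a page of locals: the prologue must
    use(buf);         // probe the stack, calling __chkstk on 64-bit Windows
  }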
llvm-svn: 122934
beginning of the "main" function. The assembler complains about the invalid
suffix for the 'call' instruction. The right instruction is "callq __main".
Patch by KS Sreeram!
llvm-svn: 122933
llvm-svn: 122921
llvm-svn: 122920
bundles in the pass.
llvm-svn: 122833
The analysis will be needed by both the greedy register allocator and the
X86FloatingPoint pass. It only needs to be computed once when the CFG doesn't
change.
This pass is very fast, usually showing up as 0.0% wall time.
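Consumers pull the analysis in through the usual pass machinery, so it is
shared rather than recomputed (a minimal sketch, assuming this is the
EdgeBundles analysis; the consumer pass is made up and the header location
varies across LLVM versions):

  #include "llvm/CodeGen/EdgeBundles.h"
  #include "llvm/CodeGen/MachineFunctionPass.h"
  using namespace llvm;

  struct MyMachinePass : MachineFunctionPass {
    static char ID;
    MyMachinePass() : MachineFunctionPass(ID) {}
    void getAnalysisUsage(AnalysisUsage &AU) const override {
      AU.addRequired<EdgeBundles>();   // computed once, reused across passes
      AU.setPreservesAll();
      MachineFunctionPass::getAnalysisUsage(AU);
    }
    bool runOnMachineFunction(MachineFunction &) override { return false; }
  };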
llvm-svn: 122832
warning is overzealous but gcc is what it is.)
llvm-svn: 122829
prologue and epilogue if the adjustment is 8. Similarly, use pushl / popl if
the adjustment is 4 in 32-bit mode.
In the epilogue, it takes care to pop into a caller-saved register that's not
live at the exit (either a return or a tail-call instruction).
rdar://8771137
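A function that needs only an 8-byte adjustment shows the effect (a hedged
example; which register the pop picks is arbitrary):

  extern void g(int *);
  void f() {
    int x;    // frame needs 8 bytes of padding to stay 16-byte aligned
    g(&x);    // prologue: pushq %rax   (was: subq $8, %rsp)
  }           // epilogue: popq %rcx    (was: addq $8, %rsp)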
llvm-svn: 122783
This allows us to compile:
void test(char *s, int a) {
__builtin_memset(s, a, 15);
}
into 1 mul + 3 stores instead of 3 muls + 3 stores.
llvm-svn: 122710
llvm-svn: 122706
llvm-svn: 122700
llvm-svn: 122667
is necessary for executing the custom command that runs the
assembler. Fixes PR8877.
llvm-svn: 122649
Fixes PR8861.
llvm-svn: 122641
files in Target/ARM and Target/X86.
llvm-svn: 122623
supports.
llvm-svn: 122577
llvm-svn: 122560
llvm-svn: 122559
llvm-svn: 122528
llvm-svn: 122513
llvm-svn: 122495
int test(unsigned long a, unsigned long b) { return -(a < b); }
compiles to
_test: ## @test
cmpq %rsi, %rdi ## encoding: [0x48,0x39,0xf7]
sbbl %eax, %eax ## encoding: [0x19,0xc0]
ret ## encoding: [0xc3]
instead of
_test: ## @test
xorl %ecx, %ecx ## encoding: [0x31,0xc9]
cmpq %rsi, %rdi ## encoding: [0x48,0x39,0xf7]
movl $-1, %eax ## encoding: [0xb8,0xff,0xff,0xff,0xff]
cmovael %ecx, %eax ## encoding: [0x0f,0x43,0xc1]
ret ## encoding: [0xc3]
llvm-svn: 122451
(add Y, (sete X, 0)) -> cmp X, 1; adc 0, Y
(add Y, (setne X, 0)) -> cmp X, 1; sbb -1, Y
(sub (sete X, 0), Y) -> cmp X, 1; sbb 0, Y
(sub (setne X, 0), Y) -> cmp X, 1; adc -1, Y
for
unsigned foo(unsigned a, unsigned b) {
if (a == 0) b++;
return b;
}
we now get:
foo:
cmpl $1, %edi
movl %esi, %eax
adcl $0, %eax
ret
instead of:
foo:
testl %edi, %edi
sete %al
movzbl %al, %eax
addl %esi, %eax
ret
llvm-svn: 122364
something that just glues two nodes together, even if it is
sometimes used for flags.
llvm-svn: 122310
addition to being an intrinsic, and convert
lowering to use it. Hopefully the pattern fragment is doing the right thing
with XMM0; it looks correct in testing.
llvm-svn: 122277