Commit message | Author | Age | Files | Lines

Bill Wendling!!
llvm-svn: 20649
using Function::arg_{iterator|begin|end}. Likewise Module::g* -> Module::global_*.
This patch is contributed by Gabor Greif, thanks!
llvm-svn: 20597
llvm-svn: 20555
because we were checking the wrong thing. Thanks to Andrew for pointing
this out!
llvm-svn: 20554
numbering values in live ranges for physical registers.
The alpha backend currently generates code that looks like this:

        vreg = preg
        ...
        preg = vreg
        use preg
        ...
        preg = vreg
        use preg

etc. Because vreg contains the value of preg coming in, each of the
copies back into preg contains that initial value as well.
In the case of the Alpha, this allows this testcase:

void "foo"(int %blah) {
        store int 5, int* %MyVar
        store int 12, int* %MyVar2
        ret void
}

to compile to:

foo:
        ldgp $29, 0($27)
        ldiq $0,5
        stl $0,MyVar
        ldiq $0,12
        stl $0,MyVar2
        ret $31,($26),1

instead of:

foo:
        ldgp $29, 0($27)
        bis $29,$29,$0
        ldiq $1,5
        bis $0,$0,$29
        stl $1,MyVar
        ldiq $1,12
        bis $0,$0,$29
        stl $1,MyVar2
        ret $31,($26),1

This does not seem to have any noticeable effect on X86 code.
This fixes PR535.
llvm-svn: 20536
This allows the alpha backend to compile:

bool %test(uint %P) {
        %c = seteq uint %P, 0
        ret bool %c
}

into:

test:
        ldgp $29, 0($27)
        ZAP $16,240,$0
        CMPEQ $0,0,$0
        AND $0,1,$0
        ret $31,($26),1

instead of:

test:
        ldgp $29, 0($27)
        ZAP $16,240,$0
        ldiq $1,0
        ZAP $1,240,$1
        CMPEQ $0,$1,$0
        AND $0,1,$0
        ret $31,($26),1

... and fixes PR534.
llvm-svn: 20534
llvm-svn: 20382
llvm-svn: 20375
Changing 'op' here caused us to not enter the store into a map, causing
reemission of the code!!  In practice, a simple loop like this:

no_exit:                ; preds = %no_exit, %entry
        %indvar = phi uint [ %indvar.next, %no_exit ], [ 0, %entry ]    ; <uint> [#uses=3]
        %tmp.4 = getelementptr "complex long double"* %P, uint %indvar, uint 0  ; <double*> [#uses=1]
        store double 0.000000e+00, double* %tmp.4
        %indvar.next = add uint %indvar, 1      ; <uint> [#uses=2]
        %exitcond = seteq uint %indvar.next, %N ; <bool> [#uses=1]
        br bool %exitcond, label %return, label %no_exit

was being code gen'd to:

.LBBtest_1:     # no_exit
        movl %edx, %esi
        shll $4, %esi
        movl $0, 4(%eax,%esi)
        movl $0, (%eax,%esi)
        incl %edx
        movl $0, (%eax,%esi)
        movl $0, 4(%eax,%esi)
        cmpl %ecx, %edx
        jne .LBBtest_1  # no_exit

Note that we are doing 4 32-bit stores instead of 2.  Now we generate:

.LBBtest_1:     # no_exit
        movl %edx, %esi
        incl %esi
        shll $4, %edx
        movl $0, (%eax,%edx)
        movl $0, 4(%eax,%edx)
        cmpl %ecx, %esi
        movl %esi, %edx
        jne .LBBtest_1  # no_exit

This is much happier, though it would be even better if the increment of ESI
was scheduled after the compare :-/
llvm-svn: 20265
llvm-svn: 20231
for 0.0 and -0.0.
llvm-svn: 20230
folding of argument loads with instructions that are not in the entry block.
llvm-svn: 20228
prints:
getelementptr (int* %A, int -1)
as: "(A) - 4" instead of "(A) + 18446744073709551612", which makes the
assembler much happier.
This fixes test/Regression/CodeGen/X86/2005-02-14-IllegalAssembler.ll,
and Benchmarks/Prolangs-C/cdecl with LLC on X86.
llvm-svn: 20183
targets.
llvm-svn: 20030
llvm-svn: 20026

llvm-svn: 19986

llvm-svn: 19969

llvm-svn: 19930

llvm-svn: 19924

llvm-svn: 19880

truncated, e.g. (truncate:i8 something:i16) on a 32 or 64-bit RISC.
llvm-svn: 19879
llvm-svn: 19878
legalized, and actually return the correct result when we legalize the chain first.
llvm-svn: 19866
llvm-svn: 19797
registers. This information is computed directly by the register allocator
now.
llvm-svn: 19795
llvm-svn: 19793

llvm-svn: 19792

llvm-svn: 19791

llvm-svn: 19789

llvm-svn: 19787
The first half of correct chain insertion for libcalls. This is not enough
to fix Fhourstones yet though.
llvm-svn: 19781
the new TLI that is available.

Implement support for handling out of range shifts.  This allows us to
compile this code (a 64-bit rotate):

unsigned long long f3(unsigned long long x) {
        return (x << 32) | (x >> (64-32));
}

into this:

f3:
        mov %EDX, DWORD PTR [%ESP + 4]
        mov %EAX, DWORD PTR [%ESP + 8]
        ret

GCC produces this:

$ gcc t.c -masm=intel -O3 -S -o - -fomit-frame-pointer
..
f3:
        push %ebx
        mov %ebx, DWORD PTR [%esp+12]
        mov %ecx, DWORD PTR [%esp+8]
        mov %eax, %ebx
        mov %edx, %ecx
        pop %ebx
        ret

The Simple ISEL produces (eww gross):

f3:
        sub %ESP, 4
        mov DWORD PTR [%ESP], %ESI
        mov %EDX, DWORD PTR [%ESP + 8]
        mov %ECX, DWORD PTR [%ESP + 12]
        mov %EAX, 0
        mov %ESI, 0
        or %EAX, %ECX
        or %EDX, %ESI
        mov %ESI, DWORD PTR [%ESP]
        add %ESP, 4
        ret

llvm-svn: 19780
llvm-svn: 19779

llvm-svn: 19763
This fixes the return-address-not-being-saved problem in the Alpha backend.
llvm-svn: 19741
llvm-svn: 19739

llvm-svn: 19738

llvm-svn: 19737

llvm-svn: 19736

llvm-svn: 19735

llvm-svn: 19727
operations for 64-bit integers.
llvm-svn: 19724
llvm-svn: 19721

llvm-svn: 19715

llvm-svn: 19714

llvm-svn: 19712

llvm-svn: 19707

llvm-svn: 19704

llvm-svn: 19703

llvm-svn: 19701