Before we'd compile the example into something like:
%coerce.dive2 = getelementptr %struct.v4f32wrapper* %retval, i32 0, i32 0 ; <<4 x float>*> [#uses=1]
%1 = bitcast <4 x float>* %coerce.dive2 to <2 x double>* ; <<2 x double>*> [#uses=1]
%2 = load <2 x double>* %1, align 1 ; <<2 x double>> [#uses=1]
ret <2 x double> %2
Now we produce:
%coerce.dive2 = getelementptr %struct.v4f32wrapper* %retval, i32 0, i32 0 ; <<4 x float>*> [#uses=1]
%0 = load <4 x float>* %coerce.dive2, align 1 ; <<4 x float>> [#uses=1]
ret <4 x float> %0
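
The source for this example isn't shown in the log; a plausible reconstruction (function name mine) is:
typedef float v4f32 __attribute__((__vector_size__(16)));
struct v4f32wrapper { v4f32 v; };
// Hypothetical reconstruction: previously the <4 x float> member was
// bitcast to <2 x double> on return; now it is loaded and returned directly.
struct v4f32wrapper wrap(v4f32 x) {
  struct v4f32wrapper r;
  r.v = x;
  return r;
}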
llvm-svn: 109732
with return values, improving stuff that returns __m128 etc.
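
A minimal instance of the improved case (my example, assuming the usual xmmintrin.h definition of __m128):
#include <xmmintrin.h>
// With preferred types applied to return values, this is returned directly
// as <4 x float> instead of being coerced through <2 x double>.
__m128 passthrough(__m128 x) { return x; }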
llvm-svn: 109731
llvm-svn: 109730
for return values too. Instead of compiling something like:
struct foo {
int *X;
float *Y;
};
struct foo test(struct foo *P) { return *P; }
to:
%1 = type { i64, i64 }
define %1 @test(%struct.foo* %P) nounwind {
entry:
%retval = alloca %struct.foo, align 8 ; <%struct.foo*> [#uses=2]
%P.addr = alloca %struct.foo*, align 8 ; <%struct.foo**> [#uses=2]
store %struct.foo* %P, %struct.foo** %P.addr
%tmp = load %struct.foo** %P.addr ; <%struct.foo*> [#uses=1]
%tmp1 = bitcast %struct.foo* %retval to i8* ; <i8*> [#uses=1]
%tmp2 = bitcast %struct.foo* %tmp to i8* ; <i8*> [#uses=1]
call void @llvm.memcpy.p0i8.p0i8.i64(i8* %tmp1, i8* %tmp2, i64 16, i32 8, i1 false)
%0 = bitcast %struct.foo* %retval to %1* ; <%1*> [#uses=1]
%1 = load %1* %0, align 1 ; <%1> [#uses=1]
ret %1 %1
}
We now get a more type-safe result:
define %struct.foo @test(%struct.foo* %P) nounwind {
entry:
%retval = alloca %struct.foo, align 8 ; <%struct.foo*> [#uses=2]
%P.addr = alloca %struct.foo*, align 8 ; <%struct.foo**> [#uses=2]
store %struct.foo* %P, %struct.foo** %P.addr
%tmp = load %struct.foo** %P.addr ; <%struct.foo*> [#uses=1]
%tmp1 = bitcast %struct.foo* %retval to i8* ; <i8*> [#uses=1]
%tmp2 = bitcast %struct.foo* %tmp to i8* ; <i8*> [#uses=1]
call void @llvm.memcpy.p0i8.p0i8.i64(i8* %tmp1, i8* %tmp2, i64 16, i32 8, i1 false)
%0 = load %struct.foo* %retval ; <%struct.foo> [#uses=1]
ret %struct.foo %0
}
That memcpy is completely terrible, but I don't know how to fix it.
llvm-svn: 109729
improve codegen for vaarg or something, because its codepath is
getting preferred types now.
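
A hedged sketch of code that sits on that code path (names mine):
#include <stdarg.h>
struct pair { int *p; float *f; };
// The struct-typed va_arg goes through the ABI's vaarg lowering, whose
// 8-byte chunks are now classified against preferred types.
struct pair first_pair(int n, ...) {
  va_list ap;
  va_start(ap, n);
  struct pair r = va_arg(ap, struct pair);
  va_end(ap);
  return r;
}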
llvm-svn: 109728
compute its own preferred types instead of having CGT compute
them and then pass them (circuitously) down into ABIInfo.
llvm-svn: 109726
llvm-svn: 109724
things as TargetData, ASTContext, LLVMContext etc. Stop passing
them through so many APIs.
llvm-svn: 109723
This will simplify a bunch of code, coming up next.
llvm-svn: 109722
possible. This improves the example to pass <4 x float> instead of
<2 x double> but we still get awful code, and still don't get the
return value right.
llvm-svn: 109700
remove some now-dead code.
llvm-svn: 109690
don't get errors similar to PR7714 on the return path.
llvm-svn: 109689
and making Get8ByteTypeAtOffset always succeed and be documented.
llvm-svn: 109685
x86-64 ABI. This also improves codegen. Some refactoring of this
code is needed.
llvm-svn: 109681
llvm-svn: 108423
from PR7583
llvm-svn: 107788
r107173, "fix PR7519: after thrashing around and remembering how all this stuff"
r107216, "fix PR7523, which was caused by the ABI code calling ConvertType instead"
This includes a fix to make ConvertTypeForMem handle the "recursive" case, and call
it as such when lowering function types which have an indirect result.
llvm-svn: 107310
this stuff", it broke bootstrap.
llvm-svn: 107232
works, the fix is quite simple: just make sure to call ConvertTypeRecursive
when the function type being lowered is in the midst of ConvertType.
llvm-svn: 107173
avoid passing ASTContext down through all the methods it has.
When classifying an argument, or argument piece, as INTEGER, check
to see if we have a pointer at exactly the same offset in the
preferred type. If so, use that pointer type instead of i64. This
allows us to compile a function taking a StringRef into something
like this:
define i8* @foo(i64 %D.coerce0, i8* %D.coerce1) nounwind ssp {
entry:
%D = alloca %struct.DeclGroup, align 8 ; <%struct.DeclGroup*> [#uses=4]
%0 = getelementptr %struct.DeclGroup* %D, i32 0, i32 0 ; <i64*> [#uses=1]
store i64 %D.coerce0, i64* %0
%1 = getelementptr %struct.DeclGroup* %D, i32 0, i32 1 ; <i8**> [#uses=1]
store i8* %D.coerce1, i8** %1
%tmp = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i64*> [#uses=1]
%tmp1 = load i64* %tmp ; <i64> [#uses=1]
%tmp2 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 1 ; <i8**> [#uses=1]
%tmp3 = load i8** %tmp2 ; <i8*> [#uses=1]
%add.ptr = getelementptr inbounds i8* %tmp3, i64 %tmp1 ; <i8*> [#uses=1]
ret i8* %add.ptr
}
instead of this:
define i8* @foo(i64 %D.coerce0, i64 %D.coerce1) nounwind ssp {
entry:
%D = alloca %struct.DeclGroup, align 8 ; <%struct.DeclGroup*> [#uses=3]
%0 = insertvalue %0 undef, i64 %D.coerce0, 0 ; <%0> [#uses=1]
%1 = insertvalue %0 %0, i64 %D.coerce1, 1 ; <%0> [#uses=1]
%2 = bitcast %struct.DeclGroup* %D to %0* ; <%0*> [#uses=1]
store %0 %1, %0* %2, align 1
%tmp = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0 ; <i64*> [#uses=1]
%tmp1 = load i64* %tmp ; <i64> [#uses=1]
%tmp2 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 1 ; <i8**> [#uses=1]
%tmp3 = load i8** %tmp2 ; <i8*> [#uses=1]
%add.ptr = getelementptr inbounds i8* %tmp3, i64 %tmp1 ; <i8*> [#uses=1]
ret i8* %add.ptr
}
This implements rdar://7375902 - [codegen quality] clang x86-64 ABI lowering code punishing StringRef
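
The C source behind this IR isn't included above; it is consistent with a two-word struct shaped like StringRef (a reconstruction, names guessed):
struct DeclGroup {
  long NumDecls;  // arrives as %D.coerce0, still an i64
  char *Data;     // arrives as %D.coerce1, now an i8* instead of an i64
};
char *foo(struct DeclGroup D) {
  return D.Data + D.NumDecls;  // the load/load/add.ptr sequence in the IR
}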
llvm-svn: 107123
no functionality change.
llvm-svn: 107115
It is somewhat annoying to do this at this level, but it avoids
having ABIInfo depend on CodeGenTypes for a hint.
Nothing is using this yet, so no functionality change.
llvm-svn: 107111
awful through-memory coercion, just like we do for i32 now.
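For instance (a sketch of the case in question, names mine), a struct whose single eightbyte is exactly an i64:
struct wrapper { long w; };
// Classified INTEGER and exactly 8 bytes wide: now passed and returned as
// a bare i64 rather than stored to a temporary and reloaded.
long get(struct wrapper s) { return s.w; }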
llvm-svn: 107078
llvm-svn: 107076
llvm-svn: 107050
pass/return structs of float/int as float/i32 instead of double/i64
to make the code generated for the ABI cleaner. Passing in the low part
of a double is the same as passing in a float.
For example, we now compile:
struct DeclGroup { float NumDecls; };
float foo(DeclGroup D);
void bar(DeclGroup *D) {
foo(*D);
}
into:
%struct.DeclGroup = type { float }
define void @_Z3barP9DeclGroup(%struct.DeclGroup* %D) nounwind {
entry:
%D.addr = alloca %struct.DeclGroup*, align 8 ; <%struct.DeclGroup**> [#uses=2]
%agg.tmp = alloca %struct.DeclGroup, align 4 ; <%struct.DeclGroup*> [#uses=2]
store %struct.DeclGroup* %D, %struct.DeclGroup** %D.addr
%tmp = load %struct.DeclGroup** %D.addr ; <%struct.DeclGroup*> [#uses=1]
%tmp1 = bitcast %struct.DeclGroup* %agg.tmp to i8* ; <i8*> [#uses=1]
%tmp2 = bitcast %struct.DeclGroup* %tmp to i8* ; <i8*> [#uses=1]
call void @llvm.memcpy.p0i8.p0i8.i64(i8* %tmp1, i8* %tmp2, i64 4, i32 4, i1 false)
%coerce.dive = getelementptr %struct.DeclGroup* %agg.tmp, i32 0, i32 0 ; <float*> [#uses=1]
%0 = load float* %coerce.dive, align 1 ; <float> [#uses=1]
%call = call float @_Z3foo9DeclGroup(float %0) ; <float> [#uses=0]
ret void
}
instead of:
%struct.DeclGroup = type { float }
define void @_Z3barP9DeclGroup(%struct.DeclGroup* %D) nounwind {
entry:
%D.addr = alloca %struct.DeclGroup*, align 8 ; <%struct.DeclGroup**> [#uses=2]
%agg.tmp = alloca %struct.DeclGroup, align 4 ; <%struct.DeclGroup*> [#uses=2]
%tmp3 = alloca double ; <double*> [#uses=2]
store %struct.DeclGroup* %D, %struct.DeclGroup** %D.addr
%tmp = load %struct.DeclGroup** %D.addr ; <%struct.DeclGroup*> [#uses=1]
%tmp1 = bitcast %struct.DeclGroup* %agg.tmp to i8* ; <i8*> [#uses=1]
%tmp2 = bitcast %struct.DeclGroup* %tmp to i8* ; <i8*> [#uses=1]
call void @llvm.memcpy.p0i8.p0i8.i64(i8* %tmp1, i8* %tmp2, i64 4, i32 4, i1 false)
%coerce.dive = getelementptr %struct.DeclGroup* %agg.tmp, i32 0, i32 0 ; <float*> [#uses=1]
%0 = bitcast double* %tmp3 to float* ; <float*> [#uses=1]
%1 = load float* %coerce.dive ; <float> [#uses=1]
store float %1, float* %0, align 1
%2 = load double* %tmp3 ; <double> [#uses=1]
%call = call float @_Z3foo9DeclGroup(double %2) ; <float> [#uses=0]
ret void
}
which is this machine code (at -O0):
__Z3barP9DeclGroup:
subq $24, %rsp
movq %rdi, 16(%rsp)
movq 16(%rsp), %rdi
leaq 8(%rsp), %rax
movl (%rdi), %ecx
movl %ecx, (%rax)
movss 8(%rsp), %xmm0
callq __Z3foo9DeclGroup
addq $24, %rsp
ret
vs this:
__Z3barP9DeclGroup:
subq $24, %rsp
movq %rdi, 16(%rsp)
movq 16(%rsp), %rdi
leaq 8(%rsp), %rax
movl (%rdi), %ecx
movl %ecx, (%rax)
movss 8(%rsp), %xmm0
movss %xmm0, (%rsp)
movsd (%rsp), %xmm0
callq __Z3foo9DeclGroup
addq $24, %rsp
ret
At -O3, it is the difference between this now:
__Z3barP9DeclGroup:
movss (%rdi), %xmm0
jmp __Z3foo9DeclGroup # TAILCALL
vs this before:
__Z3barP9DeclGroup:
movl (%rdi), %eax
movd %rax, %xmm0
jmp __Z3foo9DeclGroup # TAILCALL
llvm-svn: 107048
have CGF create and make accessible standard int32, int64 and
intptr types. This fixes a ton of 80-column violations
introduced by LLVMContextification and cleans up stuff a lot.
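
A hedged sketch of the effect inside lib/CodeGen (member names assumed from the message above; CodeGenFunction.h is a Clang-internal header):
static llvm::Constant *Sixteen(clang::CodeGen::CodeGenFunction &CGF) {
  // was: llvm::ConstantInt::get(llvm::Type::getInt64Ty(CGF.getLLVMContext()), 16)
  return llvm::ConstantInt::get(CGF.Int64Ty, 16);
}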
llvm-svn: 106977
llvm-svn: 106958
(the last argument of the triple).
llvm-svn: 106131
llvm-svn: 106106
provides C "integer type" semantics in C and C++ "integral type"
semantics in C++.
Note that I still need to update isIntegerType (and possibly other
predicates) using the same approach I've taken for
isIntegralType(). The two should have the same meaning, but currently
don't (!).
llvm-svn: 106074
in C++ that involve both integral and enumeration types. Convert all
of the callers of Type::isIntegralType() that are meant to work with
both integral and enumeration types over to
Type::isIntegralOrEnumerationType(), to prepare to eliminate
enumeration types as integral types.
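
A typical conversion, sketched (caller name invented):
#include "clang/AST/Type.h"
// Accept enumerations explicitly instead of relying on them counting as integral.
static bool isIntegerLike(clang::QualType T) {
  return T->isIntegralOrEnumerationType();  // was: T->isIntegralType()
}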
llvm-svn: 106071
ARM.
Fixes PR7310.
llvm-svn: 105592
for 32-bit MIPS processors. Hat-tip to rdivacky for providing gcc dumps
on this.
llvm-svn: 104816
llvm-svn: 103945
- Check bases as part of isEmptyRecord().
- C++ record fields are never empty in the Itanium ABI.
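
Both points in one example (mine, not from the commit):
struct Empty {};
struct DerivedEmpty : Empty {};      // empty only now that bases are checked
struct HasEmptyField { Empty e; };   // not empty: a C++ field of empty class
                                     // type still occupies storage in Itanium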
llvm-svn: 103944
llvm-svn: 103843
llvm-svn: 103842
llvm-svn: 103761
- Fixes PR7098.
llvm-svn: 103514
exceeds the minimum ABI alignment.
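
For example (a hedged sketch; the commit's own test isn't shown), the byval slot here must carry the type's 32-byte alignment rather than the smaller ABI minimum:
struct Overaligned { int v[8]; } __attribute__((aligned(32)));
void consume(struct Overaligned s);  // byval argument: align 32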
llvm-svn: 102019
llvm-svn: 102015
llvm-svn: 100534
llvm-svn: 98264
llvm-svn: 98206
and ARM. Implement __builtin_init_dwarf_reg_size_table for i386 (both) and
x86-64 (all).
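
The usage shape (a sketch; the table length is illustrative, real unwinder code sizes it from the target's DWARF register count):
static unsigned char reg_sizes[64];  // 64 is illustrative only
void init_reg_sizes(void) {
  // Fills reg_sizes[i] with the byte width of DWARF register number i.
  __builtin_init_dwarf_reg_size_table(reg_sizes);
}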
llvm-svn: 97859
a common source of oddities and, in theory, removes some redundant ABI
computations. Also fixes a miscompile I introduced yesterday by refactoring
some code and causing a slightly different code path to be taken that
didn't perform *parameter* type canonicalization, just normal type
canonicalization; this in turn caused a bit of ABI code to misfire because
it was looking for 'double' or 'float' but received 'const float'.
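
Reduced, the miscompile looks like this (my reconstruction of the failure mode described above):
// The canonical parameter type is 'float'; without *parameter* type
// canonicalization the ABI code received 'const float', matched neither
// 'float' nor 'double', and mislowered the call.
float halve(const float x) { return x * 0.5f; }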
llvm-svn: 97030
llvm-svn: 96446
isInteger, we now have isFloatTy and isIntegerTy. Requested by Chris!
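
So a check that used to call isFloat()/isInteger() now reads (a sketch against the llvm::Type API of that era):
#include "llvm/Type.h"  // llvm/IR/Type.h in modern trees
static bool isF32OrI32(const llvm::Type *Ty) {
  return Ty->isFloatTy() || Ty->isIntegerTy(32);  // was isFloat()/isInteger(32)
}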
llvm-svn: 96224
marked 'force_align_arg_pointer'. Almost there; now all I need to do is finish
up the backend.
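
In source form the attribute looks like this (illustrative function, name mine):
// Realigns the stack in the prologue so 16-byte SSE spills are safe even
// when the caller only guarantees 4-byte stack alignment, as old i386 ABIs do.
void __attribute__((force_align_arg_pointer)) callback(void) {
  float v[4] = {0, 0, 0, 0};  // may be spilled/reloaded with aligned SSE ops
  (void)v;
}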
llvm-svn: 96100