sections on", this change uncovered a possible linker bug which resulted in the
wrong messages getting dispatched. Backing this out while we investigate...
llvm-svn: 109817
llvm-svn: 109814
that needs it and remove getCoerceResult.
llvm-svn: 109807
llvm-svn: 109805
float, the special case hack in getCoerceResult can go away.
llvm-svn: 109804
functionality change.
llvm-svn: 109797
<2 x float> instead of double. This works but can't be turned
on until I teach codegen to pass <2 x float> as one XMM register
instead of two.
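A minimal sketch of the kind of argument this affects (my own illustration, not the commit's testcase) is an 8-byte vector type built with the GCC/Clang vector extension:
typedef float v2f32 __attribute__((__vector_size__(8))); /* 8 bytes: two floats */
v2f32 scale(v2f32 v) {
  /* with this change the ABI describes v as <2 x float>; previously it was coerced to double */
  return v + v;
}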
llvm-svn: 109790
llvm-svn: 109786
CodeGenModule::MayDeferGeneration into a new function,
DeclIsRequiredFunctionOrFileScopedVar.
This is essentially a CodeGen predicate that is also needed by the PCH mechanism to determine whether a decl
needs to be deserialized during PCH loading for codegen purposes.
Since this logic is shared by CodeGen and the PCH mechanism, move it to the ASTContext; thus CodeGenModule's GetLinkageForFunction/GetLinkageForVariable and the GVALinkage enum are moved out of CodeGen.
This fixes current (and avoids future) codegen-from-PCH bugs.
llvm-svn: 109784
end of a struct. This improves the case when the struct being passed
contains 3 floats, whether as three separate fields or as an array of three floats. Before
we'd generate this IR for the testcase:
define float @bar(double %X.coerce0, double %X.coerce1) nounwind {
entry:
%X = alloca %struct.foof, align 8 ; <%struct.foof*> [#uses=2]
%0 = bitcast %struct.foof* %X to %1* ; <%1*> [#uses=2]
%1 = getelementptr %1* %0, i32 0, i32 0 ; <double*> [#uses=1]
store double %X.coerce0, double* %1
%2 = getelementptr %1* %0, i32 0, i32 1 ; <double*> [#uses=1]
store double %X.coerce1, double* %2
%tmp = getelementptr inbounds %struct.foof* %X, i32 0, i32 2 ; <float*> [#uses=1]
%tmp1 = load float* %tmp ; <float> [#uses=1]
ret float %tmp1
}
which compiled (with optimization) to:
_bar: ## @bar
## BB#0: ## %entry
movd %xmm1, %rax
movd %eax, %xmm0
ret
Now we produce:
define float @bar(double %X.coerce0, float %X.coerce1) nounwind {
entry:
%X = alloca %struct.foof, align 8 ; <%struct.foof*> [#uses=2]
%0 = bitcast %struct.foof* %X to %0* ; <%0*> [#uses=2]
%1 = getelementptr %0* %0, i32 0, i32 0 ; <double*> [#uses=1]
store double %X.coerce0, double* %1
%2 = getelementptr %0* %0, i32 0, i32 1 ; <float*> [#uses=1]
store float %X.coerce1, float* %2
%tmp = getelementptr inbounds %struct.foof* %X, i32 0, i32 2 ; <float*> [#uses=1]
%tmp1 = load float* %tmp ; <float> [#uses=1]
ret float %tmp1
}
and:
_bar: ## @bar
## BB#0: ## %entry
movaps %xmm1, %xmm0
ret
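For reference, a C source of the shape being described (an assumption reconstructed from %struct.foof above; the original testcase isn't shown in the log) would be:
struct foof { float x, y, z; }; /* three floats: the first eightbyte holds x and y, the second only z */
float bar(struct foof X) {
  return X.z; /* the field at index 2 that the IR above loads and returns */
}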
llvm-svn: 109776
as <2 x float> instead of as double. The backend isn't ready
yet, but the frontend infrastructure can go in now.
llvm-svn: 109768
make it clear that this function should only return a type
that the codegen will classify the same as an INTEGER type.
llvm-svn: 109763
that Eli pointed out, rdar://8249586
llvm-svn: 109762
return where the struct has a base but no fields. This
was because the x86-64 abi logic was checking the wrong
predicate in one place.
This was introduced in r91874, which was a fix for PR5831,
which lacked a CHECK line, so I verified and added it.
llvm-svn: 109759
struct a {
struct c {
double x;
int y;
} x[1];
};
void foo(struct a A) {
}
into:
define void @foo(double %A.coerce0, i32 %A.coerce1) nounwind {
entry:
%A = alloca %struct.a, align 8 ; <%struct.a*> [#uses=1]
%0 = bitcast %struct.a* %A to %struct.c* ; <%struct.c*> [#uses=2]
%1 = getelementptr %struct.c* %0, i32 0, i32 0 ; <double*> [#uses=1]
store double %A.coerce0, double* %1
%2 = getelementptr %struct.c* %0, i32 0, i32 1 ; <i32*> [#uses=1]
store i32 %A.coerce1, i32* %2
instead of:
define void @foo(double %A.coerce0, i64 %A.coerce1) nounwind {
entry:
%A = alloca %struct.a, align 8 ; <%struct.a*> [#uses=1]
%0 = bitcast %struct.a* %A to %0* ; <%0*> [#uses=2]
%1 = getelementptr %0* %0, i32 0, i32 0 ; <double*> [#uses=1]
store double %A.coerce0, double* %1
%2 = getelementptr %0* %0, i32 0, i32 1 ; <i64*> [#uses=1]
store i64 %A.coerce1, i64* %2
I only do this now because I never want to look at this code again :)
llvm-svn: 109738
small integer + padding as that small integer. On code
like:
struct c { double x; int y; };
void bar(struct c C) { }
This means that we compile to:
define void @bar(double %C.coerce0, i32 %C.coerce1) nounwind {
entry:
%C = alloca %struct.c, align 8 ; <%struct.c*> [#uses=2]
%0 = getelementptr %struct.c* %C, i32 0, i32 0 ; <double*> [#uses=1]
store double %C.coerce0, double* %0
%1 = getelementptr %struct.c* %C, i32 0, i32 1 ; <i32*> [#uses=1]
store i32 %C.coerce1, i32* %1
instead of:
define void @bar(double %C.coerce0, i64 %C.coerce1) nounwind {
entry:
%C = alloca %struct.c, align 8 ; <%struct.c*> [#uses=3]
%0 = bitcast %struct.c* %C to %0* ; <%0*> [#uses=2]
%1 = getelementptr %0* %0, i32 0, i32 0 ; <double*> [#uses=1]
store double %C.coerce0, double* %1
%2 = getelementptr %0* %0, i32 0, i32 1 ; <i64*> [#uses=1]
store i64 %C.coerce1, i64* %2
which gives SRoA heartburn.
This implements rdar://5711709, a nice low number :)
llvm-svn: 109737
llvm-svn: 109735
have a "coerce to" type which often matches the default lowering of the Clang
type to an LLVM IR type, but the coerce case can be handled by letting the two differ.
This simplifies things and fixes issues where X86-64 abi lowering would
return coerce after making preferred types exactly match up. This caused
us to compile:
typedef float v4f32 __attribute__((__vector_size__(16)));
v4f32 foo(v4f32 X) {
return X+X;
}
into this code at -O0:
define <4 x float> @foo(<4 x float> %X.coerce) nounwind {
entry:
%retval = alloca <4 x float>, align 16 ; <<4 x float>*> [#uses=2]
%coerce = alloca <4 x float>, align 16 ; <<4 x float>*> [#uses=2]
%X.addr = alloca <4 x float>, align 16 ; <<4 x float>*> [#uses=3]
store <4 x float> %X.coerce, <4 x float>* %coerce
%X = load <4 x float>* %coerce ; <<4 x float>> [#uses=1]
store <4 x float> %X, <4 x float>* %X.addr
%tmp = load <4 x float>* %X.addr ; <<4 x float>> [#uses=1]
%tmp1 = load <4 x float>* %X.addr ; <<4 x float>> [#uses=1]
%add = fadd <4 x float> %tmp, %tmp1 ; <<4 x float>> [#uses=1]
store <4 x float> %add, <4 x float>* %retval
%0 = load <4 x float>* %retval ; <<4 x float>> [#uses=1]
ret <4 x float> %0
}
Now we get:
define <4 x float> @foo(<4 x float> %X) nounwind {
entry:
%X.addr = alloca <4 x float>, align 16 ; <<4 x float>*> [#uses=3]
store <4 x float> %X, <4 x float>* %X.addr
%tmp = load <4 x float>* %X.addr ; <<4 x float>> [#uses=1]
%tmp1 = load <4 x float>* %X.addr ; <<4 x float>> [#uses=1]
%add = fadd <4 x float> %tmp, %tmp1 ; <<4 x float>> [#uses=1]
ret <4 x float> %add
}
This implements rdar://8248065
llvm-svn: 109733
Before we'd compile the example into something like:
%coerce.dive2 = getelementptr %struct.v4f32wrapper* %retval, i32 0, i32 0 ; <<4 x float>*> [#uses=1]
%1 = bitcast <4 x float>* %coerce.dive2 to <2 x double>* ; <<2 x double>*> [#uses=1]
%2 = load <2 x double>* %1, align 1 ; <<2 x double>> [#uses=1]
ret <2 x double> %2
Now we produce:
%coerce.dive2 = getelementptr %struct.v4f32wrapper* %retval, i32 0, i32 0 ; <<4 x float>*> [#uses=1]
%0 = load <4 x float>* %coerce.dive2, align 1 ; <<4 x float>> [#uses=1]
ret <4 x float> %0
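The example referred to (my reconstruction from %struct.v4f32wrapper; hypothetical, not the commit's exact testcase) is a single-element struct wrapping a 16-byte vector:
typedef float v4f32 __attribute__((__vector_size__(16)));
struct v4f32wrapper { v4f32 v; }; /* single-element struct: the ABI "dives" into the <4 x float> member */
struct v4f32wrapper wrap(v4f32 x) {
  struct v4f32wrapper w = { x };
  return w; /* now returned directly as <4 x float> instead of being bitcast to <2 x double> */
}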
llvm-svn: 109732
with return values, improving stuff that returns __m128 etc.
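For instance (a hedged sketch of the kind of code that benefits, not the commit's own test), a function returning __m128:
#include <xmmintrin.h>
__m128 twice(__m128 x) {
  /* the <4 x float> return value can now use the preferred IR type directly */
  return _mm_add_ps(x, x);
}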
llvm-svn: 109731
llvm-svn: 109730
for return values too. Instead of compiling something like:
struct foo {
int *X;
float *Y;
};
struct foo test(struct foo *P) { return *P; }
to:
%1 = type { i64, i64 }
define %1 @test(%struct.foo* %P) nounwind {
entry:
%retval = alloca %struct.foo, align 8 ; <%struct.foo*> [#uses=2]
%P.addr = alloca %struct.foo*, align 8 ; <%struct.foo**> [#uses=2]
store %struct.foo* %P, %struct.foo** %P.addr
%tmp = load %struct.foo** %P.addr ; <%struct.foo*> [#uses=1]
%tmp1 = bitcast %struct.foo* %retval to i8* ; <i8*> [#uses=1]
%tmp2 = bitcast %struct.foo* %tmp to i8* ; <i8*> [#uses=1]
call void @llvm.memcpy.p0i8.p0i8.i64(i8* %tmp1, i8* %tmp2, i64 16, i32 8, i1 false)
%0 = bitcast %struct.foo* %retval to %1* ; <%1*> [#uses=1]
%1 = load %1* %0, align 1 ; <%1> [#uses=1]
ret %1 %1
}
We now get the result more type safe, with:
define %struct.foo @test(%struct.foo* %P) nounwind {
entry:
%retval = alloca %struct.foo, align 8 ; <%struct.foo*> [#uses=2]
%P.addr = alloca %struct.foo*, align 8 ; <%struct.foo**> [#uses=2]
store %struct.foo* %P, %struct.foo** %P.addr
%tmp = load %struct.foo** %P.addr ; <%struct.foo*> [#uses=1]
%tmp1 = bitcast %struct.foo* %retval to i8* ; <i8*> [#uses=1]
%tmp2 = bitcast %struct.foo* %tmp to i8* ; <i8*> [#uses=1]
call void @llvm.memcpy.p0i8.p0i8.i64(i8* %tmp1, i8* %tmp2, i64 16, i32 8, i1 false)
%0 = load %struct.foo* %retval ; <%struct.foo> [#uses=1]
ret %struct.foo %0
}
That memcpy is completely terrible, but I don't know how to fix it.
llvm-svn: 109729
improve codegen for vaarg or something, because its codepath is
getting preferred types now.
llvm-svn: 109728
compute its own preferred types instead of having CGT compute
them and then pass them (circuitously) down into ABIInfo.
llvm-svn: 109726
llvm-svn: 109724
things as TargetData, ASTContext, LLVMContext etc. Stop passing
them through so many APIs.
llvm-svn: 109723
This will simplify a bunch of code, coming up next.
llvm-svn: 109722
possible. This improves the example to pass <4 x float> instead of
<2 x double> but we still get awful code, and still don't get the
return value right.
llvm-svn: 109700
llvm-svn: 109699
names used by gcc in debug info. This makes the gdb testsuite happy.
llvm-svn: 109694
remove some now-dead code.
llvm-svn: 109690
don't get errors similar to PR7714 on the return path.
llvm-svn: 109689
and making Get8ByteTypeAtOffset always succeed and documenting it.
llvm-svn: 109685
x86-64 abi. This also improves codegen. Some refactoring of this code is needed.
llvm-svn: 109681
block returns structs. Fixes radar 8241648.
Executable test added to the llvm test suite.
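A minimal sketch of the shape involved (my own illustration using the blocks extension with clang -fblocks; the commit's actual test is the executable one mentioned above):
struct big { long a, b, c; }; /* large enough to be returned indirectly (sret) on x86-64 */
void demo(void) {
  struct big (^makeBig)(void) = ^{ /* a block whose return type is a struct */
    struct big r = { 1, 2, 3 };
    return r;
  };
  struct big b = makeBig(); /* the sret handling of this call is what the fix addresses */
  (void)b;
}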
llvm-svn: 109620
llvm-svn: 109607
a fixme mentioning the simplification when CallSite can clone itself
llvm-svn: 109575
Tested by mi1-var-obj.exp in the gdb testsuite.
llvm-svn: 109571
enclosing normal cleanup, not the top of the EH stack. I'm *really*
surprised this hasn't been causing more problems.
Fixes rdar://problem/8231514.
llvm-svn: 109569
llvm-svn: 109550
CodeGenModule::MayDeferGeneration into a new function,
DeclIsRequiredFunctionOrFileScopedVar.
This function is part of the public CodeGen interface since it's essentially a CodeGen predicate that is also
needed by the PCH mechanism to determine whether a decl needs to be deserialized during PCH loading for codegen purposes.
This fixes current (and avoids future) codegen-from-PCH bugs.
llvm-svn: 109546
llvm-svn: 109535
llvm-svn: 109507
if it has side effects, to match gcc's behaviour.
Addresses radar 8172109.
llvm-svn: 109467
I knew this code duplication would bite me.
llvm-svn: 109463
llvm-svn: 109426
since we aren't going to be calling them ever.
llvm-svn: 109377
llvm-svn: 109315
Avoid use of Path.makeAbsolute().
DW_TAG_compile_unit uses two attributes DW_AT_name and DW_AT_comp_dir. Their expected values are:
$ clang foo.c -g
DW_AT_name - foo.c
DW_AT_comp_dir - `pwd`
$ clang one/two/foo.c -g
DW_AT_name - one/two/foo.c
DW_AT_comp_dir - `pwd`
$ clang /tmp/one/foo.c -g
DW_AT_name - /tmp/one/foo.c
DW_AT_comp_dir - empty
llvm-svn: 109303
Diagnose attempts to do this under the GNU or fragile NeXT runtimes.
llvm-svn: 109298