<rdar://problem/8278732>
llvm-svn: 110420

llvm-svn: 110418

these, but it's convenient to mangle them when deferring them (in the 99.99%
case where it's not an anonymous union, of course).
llvm-svn: 110381
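
A minimal sketch of the anonymous-union corner case mentioned above (illustrative, not from the commit): a file-scope anonymous union has no declared name to hand to the mangler, which is why deferral can't simply key on a mangled name there.

// A file-scope anonymous union: its members i and f are usable as
// ordinary names, but the union itself has no name to mangle.
static union {
  int i;
  float f;
};

int use() { return i; }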

(objc gc and blocks in NeXT runtime).
llvm-svn: 110377

do the right thing with mixed-visibility symbols, so disable the visibility
optimization where that's possible, i.e. with template classes (since it's
possible that an arbitrary template might be subject to an explicit
instantiation elsewhere). 447.dealII actually does this.
I've put the code under an option that's currently not hooked up to anything.
llvm-svn: 110374
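
A minimal sketch of the hazard described above (illustrative names; assumes a build applying hidden visibility): an implicit instantiation in one translation unit can collide with an explicit instantiation of the same template emitted elsewhere with default visibility.

// tu1.cpp: implicitly instantiates S<int>; under the optimization the
// compiler might have emitted its symbols with hidden visibility.
template <typename T> struct S {
  T get() const { return t; }
  T t;
};
int f(const S<int> &s) { return s.get(); }

// tu2.cpp (conceptually a different translation unit): an explicit
// instantiation emits the same symbols with default visibility,
// creating the mixed-visibility situation the commit guards against.
// template struct S<int>;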

llvm-svn: 110354

llvm-svn: 110347

(objc gc specific).
llvm-svn: 110340

llvm-svn: 110339

llvm-svn: 110326

llvm-svn: 110315

llvm-svn: 110290

llvm-svn: 110287

functions with in-line definitions, since such thunks will be emitted at any
use of the function.
Completes the feature work for rdar://problem/7523229.
llvm-svn: 110285
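
A minimal sketch of the shape involved (illustrative names): with multiple inheritance, a call through a B* reaches C::g via a this-adjusting thunk, and because C::g is defined in-line, that thunk can be emitted in whichever translation unit uses the function.

struct A { virtual void f() {} };
struct B { virtual void g() {} };

// C::g has an in-line definition, so the thunk adjusting 'this' from
// the B-in-C subobject to C can be emitted at any use of C::g.
struct C : A, B {
  virtual void g() {}
};

void call(B *b) { b->g(); }  // virtual dispatch may land on the thunk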

llvm-svn: 110239

for Objective-C/C++ blocks (NeXT runtime).
llvm-svn: 110213

Apply hidden visibility to most RTTI; libstdc++ does not rely on exact
pointer equality for the type info (just the type info names). Apply
the same optimization to RTTI that we do to vtables.
Fixes PR5962.
llvm-svn: 110192
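
A minimal sketch of why this is safe with libstdc++ (illustrative; conceptually what a name-based comparison does): two type_info objects denote the same type exactly when their mangled names match, even if the objects live at different addresses in different shared objects, so the RTTI objects themselves need not be unique.

#include <cstring>
#include <typeinfo>

// Equality by mangled name rather than by type_info address: this is
// what makes hidden-visibility RTTI objects workable here.
bool same_type(const std::type_info &a, const std::type_info &b) {
  return std::strcmp(a.name(), b.name()) == 0;
}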

haven't been explicitly instantiated.
llvm-svn: 110189

ObjC exceptions:
- don't enter a try for the catch blocks unless there's a finally
- put the setjmp buffer in the locals set for liveness reasons
- dump the sync object into an alloca in the locals set for liveness reasons
Some of this can go away if the backend starts to properly calculate liveness
in the presence of setjmp (which would also be a *much* stabler solution).
llvm-svn: 110188

mark it nounwind based on whether it contains any non-nounwind calls.
<rdar://problem/8087431>
llvm-svn: 110163
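
A minimal sketch (hypothetical example, not from the commit): a function whose body contains no call that might unwind can itself be given the nounwind attribute when its IR definition is emitted.

// No calls at all, so nothing in the body can throw or unwind; the
// emitted definition is eligible to be marked nounwind.
int add(int a, int b) { return a + b; }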

llvm-svn: 110153

llvm-svn: 110107

initializations now.
llvm-svn: 110063

the magic of inline assembly. Essentially we use read and write hazards
on the set of local variables to force flushing locals to memory
immediately before any protected calls and to inhibit optimizing locals
across the setjmp->catch edge. Fixes rdar://problem/8160285
llvm-svn: 109960
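
A rough source-level sketch of the hazard idea (illustrative only; the commit applies it to the IR the frontend emits): an empty asm with a read/write memory constraint on a local forces it to its stack slot and prevents caching it in a register across the hazard point.

extern void protected_call();

void demo() {
  int local = 0;
  // "+m" declares both a read and a write of local's memory, so the
  // optimizer must flush local before the hazard and must not cache
  // it across it; that is the effect wanted around protected calls.
  __asm__ volatile("" : "+m"(local));
  protected_call();
  __asm__ volatile("" : "+m"(local));
}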

an initializer requiring temporary object disposal.
Fixes rdar://problem/8246444.
llvm-svn: 109849
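
A minimal sketch of the construct involved (illustrative names): the global's initializer creates a temporary whose destructor must run at the end of the full-expression.

struct Temp {
  Temp() {}
  ~Temp() {}  // must run once g() returns, at the end of the initializer
};
int g(const Temp &) { return 0; }

int x = g(Temp());  // an initializer requiring temporary object disposal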

The X86-64 ABI code didn't handle the case when a struct
would get classified and turn up as "NoClass INTEGER" for
example. This is perfectly possible when the first slot
is all padding (e.g. due to empty base classes). In this
situation, the first 8-byte doesn't take a register at all,
only the second 8-byte does.
This patch fixes it by enhancing the x86-64 ABI code to allow
and handle this case, reverts the broken fix for PR5831,
and enhances the target-independent code to be able to
handle an argument value in registers being accessed at an
offset from the memory value.
This is the last x86-64 calling convention related miscompile
that I'm aware of.
llvm-svn: 109848
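
A minimal sketch of a type that classifies this way (illustrative; the original case involved empty base classes, but an empty member shows the same effect): bytes 0-7 contribute no data, so the first eight-byte is NoClass and only the second takes a register.

struct Empty {};

// Bytes 0-7 hold only the empty member plus padding, so the first
// eight-byte classifies as NoClass; the long at offset 8 classifies
// as INTEGER and is the only part passed in a register.
struct S {
  Empty e;
  long x;
};

long take(S s) { return s.x; }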

sections on", this change uncovered a possible linker bug which resulted in the
wrong messages getting dispatched. Backing this out while we investigate...
llvm-svn: 109817

llvm-svn: 109814

that needs it and remove getCoerceResult.
llvm-svn: 109807

llvm-svn: 109805

float, the special case hack in getCoerceResult can go away.
llvm-svn: 109804

functionality change.
llvm-svn: 109797

<2 x float> instead of double. This works but can't be turned
on until I teach codegen to pass <2 x float> as one XMM register
instead of two.
llvm-svn: 109790
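
A minimal sketch of the kind of argument affected (illustrative): two floats occupy a single SSE eight-byte, which this change coerces to <2 x float> instead of bit-casting through double.

struct Pair {
  float a;
  float b;
};

// Previously Pair was coerced to a double for passing; with this
// change it is coerced to <2 x float>, still one XMM register's worth.
float sum(Pair p) { return p.a + p.b; }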

llvm-svn: 109786

CodeGenModule::MayDeferGeneration into a new function,
DeclIsRequiredFunctionOrFileScopedVar.
This is essentially a CodeGen predicate that is also needed by the PCH mechanism to determine whether a decl
needs to be deserialized during PCH loading for codegen purposes.
Since this logic is shared by CodeGen and the PCH mechanism, move it to the ASTContext;
as a result, CodeGenModule's GetLinkageForFunction/GetLinkageForVariable and the GVALinkage enum move out of CodeGen.
This fixes current (and avoids future) codegen-from-PCH bugs.
llvm-svn: 109784

end of a struct. This improves the case when the struct being passed
contains 3 floats, whether as three separate fields or as an array of
three. Before this change we'd generate this IR for the testcase:
define float @bar(double %X.coerce0, double %X.coerce1) nounwind {
entry:
%X = alloca %struct.foof, align 8 ; <%struct.foof*> [#uses=2]
%0 = bitcast %struct.foof* %X to %1* ; <%1*> [#uses=2]
%1 = getelementptr %1* %0, i32 0, i32 0 ; <double*> [#uses=1]
store double %X.coerce0, double* %1
%2 = getelementptr %1* %0, i32 0, i32 1 ; <double*> [#uses=1]
store double %X.coerce1, double* %2
%tmp = getelementptr inbounds %struct.foof* %X, i32 0, i32 2 ; <float*> [#uses=1]
%tmp1 = load float* %tmp ; <float> [#uses=1]
ret float %tmp1
}
which compiled (with optimization) to:
_bar: ## @bar
## BB#0: ## %entry
movd %xmm1, %rax
movd %eax, %xmm0
ret
Now we produce:
define float @bar(double %X.coerce0, float %X.coerce1) nounwind {
entry:
%X = alloca %struct.foof, align 8 ; <%struct.foof*> [#uses=2]
%0 = bitcast %struct.foof* %X to %0* ; <%0*> [#uses=2]
%1 = getelementptr %0* %0, i32 0, i32 0 ; <double*> [#uses=1]
store double %X.coerce0, double* %1
%2 = getelementptr %0* %0, i32 0, i32 1 ; <float*> [#uses=1]
store float %X.coerce1, float* %2
%tmp = getelementptr inbounds %struct.foof* %X, i32 0, i32 2 ; <float*> [#uses=1]
%tmp1 = load float* %tmp ; <float> [#uses=1]
ret float %tmp1
}
and:
_bar: ## @bar
## BB#0: ## %entry
movaps %xmm1, %xmm0
ret
llvm-svn: 109776

as <2 x float> instead of as double. The backend isn't ready
yet, but the infrastructure in the frontend can be brought up now.
llvm-svn: 109768

make it clear that this function should only return a type
that the codegen will classify the same as an INTEGER type.
llvm-svn: 109763

that Eli pointed out, rdar://8249586
llvm-svn: 109762

return where the struct has a base but no fields. This
was because the x86-64 abi logic was checking the wrong
predicate in one place.
This was introduced in r91874, which was a fix for PR5831,
which lacked a CHECK line, so I verified and added it.
llvm-svn: 109759
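
A minimal sketch of the shape involved (illustrative names): the returned class carries all of its data in a base and has no fields of its own.

struct Base {
  double d;  // all of the data lives in the base
};

// Derived has a base but no fields of its own; returning it by value
// exercised the wrong predicate in the x86-64 classification logic.
struct Derived : Base {};

Derived make() {
  Derived r;
  r.d = 1.0;
  return r;
}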

struct a {
struct c {
double x;
int y;
} x[1];
};
void foo(struct a A) {
}
into:
define void @foo(double %A.coerce0, i32 %A.coerce1) nounwind {
entry:
%A = alloca %struct.a, align 8 ; <%struct.a*> [#uses=1]
%0 = bitcast %struct.a* %A to %struct.c* ; <%struct.c*> [#uses=2]
%1 = getelementptr %struct.c* %0, i32 0, i32 0 ; <double*> [#uses=1]
store double %A.coerce0, double* %1
%2 = getelementptr %struct.c* %0, i32 0, i32 1 ; <i32*> [#uses=1]
store i32 %A.coerce1, i32* %2
instead of:
define void @foo(double %A.coerce0, i64 %A.coerce1) nounwind {
entry:
%A = alloca %struct.a, align 8 ; <%struct.a*> [#uses=1]
%0 = bitcast %struct.a* %A to %0* ; <%0*> [#uses=2]
%1 = getelementptr %0* %0, i32 0, i32 0 ; <double*> [#uses=1]
store double %A.coerce0, double* %1
%2 = getelementptr %0* %0, i32 0, i32 1 ; <i64*> [#uses=1]
store i64 %A.coerce1, i64* %2
I only do this now because I never want to look at this code again :)
llvm-svn: 109738

small integer + padding as that small integer. On code
like:
struct c { double x; int y; };
void bar(struct c C) { }
This means that we compile to:
define void @bar(double %C.coerce0, i32 %C.coerce1) nounwind {
entry:
%C = alloca %struct.c, align 8 ; <%struct.c*> [#uses=2]
%0 = getelementptr %struct.c* %C, i32 0, i32 0 ; <double*> [#uses=1]
store double %C.coerce0, double* %0
%1 = getelementptr %struct.c* %C, i32 0, i32 1 ; <i32*> [#uses=1]
store i32 %C.coerce1, i32* %1
instead of:
define void @bar(double %C.coerce0, i64 %C.coerce1) nounwind {
entry:
%C = alloca %struct.c, align 8 ; <%struct.c*> [#uses=3]
%0 = bitcast %struct.c* %C to %0* ; <%0*> [#uses=2]
%1 = getelementptr %0* %0, i32 0, i32 0 ; <double*> [#uses=1]
store double %C.coerce0, double* %1
%2 = getelementptr %0* %0, i32 0, i32 1 ; <i64*> [#uses=1]
store i64 %C.coerce1, i64* %2
which gives SRoA heartburn.
This implements rdar://5711709, a nice low number :)
llvm-svn: 109737

llvm-svn: 109735

have a "coerce to" type which often matches the default lowering of Clang
type to LLVM IR type, but the coerce case can be handled by making them
not be the same.
This simplifies things and fixes issues where X86-64 abi lowering would
return coerce after making preferred types exactly match up. This caused
us to compile:
typedef float v4f32 __attribute__((__vector_size__(16)));
v4f32 foo(v4f32 X) {
return X+X;
}
into this code at -O0:
define <4 x float> @foo(<4 x float> %X.coerce) nounwind {
entry:
%retval = alloca <4 x float>, align 16 ; <<4 x float>*> [#uses=2]
%coerce = alloca <4 x float>, align 16 ; <<4 x float>*> [#uses=2]
%X.addr = alloca <4 x float>, align 16 ; <<4 x float>*> [#uses=3]
store <4 x float> %X.coerce, <4 x float>* %coerce
%X = load <4 x float>* %coerce ; <<4 x float>> [#uses=1]
store <4 x float> %X, <4 x float>* %X.addr
%tmp = load <4 x float>* %X.addr ; <<4 x float>> [#uses=1]
%tmp1 = load <4 x float>* %X.addr ; <<4 x float>> [#uses=1]
%add = fadd <4 x float> %tmp, %tmp1 ; <<4 x float>> [#uses=1]
store <4 x float> %add, <4 x float>* %retval
%0 = load <4 x float>* %retval ; <<4 x float>> [#uses=1]
ret <4 x float> %0
}
Now we get:
define <4 x float> @foo(<4 x float> %X) nounwind {
entry:
%X.addr = alloca <4 x float>, align 16 ; <<4 x float>*> [#uses=3]
store <4 x float> %X, <4 x float>* %X.addr
%tmp = load <4 x float>* %X.addr ; <<4 x float>> [#uses=1]
%tmp1 = load <4 x float>* %X.addr ; <<4 x float>> [#uses=1]
%add = fadd <4 x float> %tmp, %tmp1 ; <<4 x float>> [#uses=1]
ret <4 x float> %add
}
This implements rdar://8248065
llvm-svn: 109733

Before we'd compile the example into something like:
%coerce.dive2 = getelementptr %struct.v4f32wrapper* %retval, i32 0, i32 0 ; <<4 x float>*> [#uses=1]
%1 = bitcast <4 x float>* %coerce.dive2 to <2 x double>* ; <<2 x double>*> [#uses=1]
%2 = load <2 x double>* %1, align 1 ; <<2 x double>> [#uses=1]
ret <2 x double> %2
Now we produce:
%coerce.dive2 = getelementptr %struct.v4f32wrapper* %retval, i32 0, i32 0 ; <<4 x float>*> [#uses=1]
%0 = load <4 x float>* %coerce.dive2, align 1 ; <<4 x float>> [#uses=1]
ret <4 x float> %0
llvm-svn: 109732

with return values, improving stuff that returns __m128 etc.
llvm-svn: 109731

llvm-svn: 109730

for return values too. Instead of compiling something like:
struct foo {
int *X;
float *Y;
};
struct foo test(struct foo *P) { return *P; }
to:
%1 = type { i64, i64 }
define %1 @test(%struct.foo* %P) nounwind {
entry:
%retval = alloca %struct.foo, align 8 ; <%struct.foo*> [#uses=2]
%P.addr = alloca %struct.foo*, align 8 ; <%struct.foo**> [#uses=2]
store %struct.foo* %P, %struct.foo** %P.addr
%tmp = load %struct.foo** %P.addr ; <%struct.foo*> [#uses=1]
%tmp1 = bitcast %struct.foo* %retval to i8* ; <i8*> [#uses=1]
%tmp2 = bitcast %struct.foo* %tmp to i8* ; <i8*> [#uses=1]
call void @llvm.memcpy.p0i8.p0i8.i64(i8* %tmp1, i8* %tmp2, i64 16, i32 8, i1 false)
%0 = bitcast %struct.foo* %retval to %1* ; <%1*> [#uses=1]
%1 = load %1* %0, align 1 ; <%1> [#uses=1]
ret %1 %1
}
We now get a more type-safe result:
define %struct.foo @test(%struct.foo* %P) nounwind {
entry:
%retval = alloca %struct.foo, align 8 ; <%struct.foo*> [#uses=2]
%P.addr = alloca %struct.foo*, align 8 ; <%struct.foo**> [#uses=2]
store %struct.foo* %P, %struct.foo** %P.addr
%tmp = load %struct.foo** %P.addr ; <%struct.foo*> [#uses=1]
%tmp1 = bitcast %struct.foo* %retval to i8* ; <i8*> [#uses=1]
%tmp2 = bitcast %struct.foo* %tmp to i8* ; <i8*> [#uses=1]
call void @llvm.memcpy.p0i8.p0i8.i64(i8* %tmp1, i8* %tmp2, i64 16, i32 8, i1 false)
%0 = load %struct.foo* %retval ; <%struct.foo> [#uses=1]
ret %struct.foo %0
}
That memcpy is completely terrible, but I don't know how to fix it.
llvm-svn: 109729

improve codegen for vaarg or something, because its codepath is
getting preferred types now.
llvm-svn: 109728

compute its own preferred types instead of having CGT compute
them and then pass them (circuitously) down into ABIInfo.
llvm-svn: 109726