path: root/clang/lib/CodeGen/CGCall.cpp

* Simplify mem{cpy, move, set} creation with IRBuilder.
  Benjamin Kramer, 2010-12-30 (1 file, -8/+6)
  llvm-svn: 122634

* Fix Whitespace.
  Michael J. Spencer, 2010-10-19 (1 file, -60/+60)
  llvm-svn: 116798

* IRgen/ABI: Add support for realigning structures which are passed by
  indirect reference.
  Daniel Dunbar, 2010-09-16 (1 file, -2/+22)
  llvm-svn: 114114

* Add semantic support for the Pascal calling convention via
  "__attribute((pascal))" or "__pascal" (and "_pascal" under
  -fborland-extensions). Support still needs to be added to LLVM.
  Dawn Perchik, 2010-09-03 (1 file, -0/+4)
  llvm-svn: 112939

* Re-commit r112916 with an additional fix for the self-host failures.
  John McCall, 2010-09-03 (1 file, -0/+3)
  I've audited the remaining getFunctionInfo call sites.
  llvm-svn: 112936

* Revert r112916, it's breaking selfhost pretty badly.
  John McCall, 2010-09-03 (1 file, -3/+0)
  llvm-svn: 112925

* It's not safe to use the generic CXXMethodDecl overload of
  CGT::getFunctionInfo to set up a destructor call, because ABIs can tweak
  these conventions. Fixes rdar://problem/8386802.
  John McCall, 2010-09-03 (1 file, -0/+3)
  llvm-svn: 112916

* Teach IR generation to return 'this' from constructors and destructors
  under the ARM ABI.
  John McCall, 2010-08-31 (1 file, -18/+28)
  llvm-svn: 112588

* IRgen: Switch more MakeAddr() users to MakeAddrLValue; this time for calls
  which were previously not computing the qualifier list. In most cases, I
  don't think it matters, but I believe this is conservatively more
  correct / consistent.
  Daniel Dunbar, 2010-08-21 (1 file, -2/+1)
  llvm-svn: 111717

* IRgen: Change Emit{Load,Store}OfScalar to take a required Alignment
  argument and update callers as best I can.
  - This is a work in progress, our alignment handling is very horrible /
    sketchy -- I am just aiming for monotonic improvement.
  - Serious review appreciated.
  Daniel Dunbar, 2010-08-21 (1 file, -11/+23)
  llvm-svn: 111707

* fix PR5179 and correctly fix PR5831 to not miscompile.
  Chris Lattner, 2010-07-30 (1 file, -9/+47)

  The X86-64 ABI code didn't handle the case when a struct would get
  classified and turn up as "NoClass INTEGER" for example. This is
  perfectly possible when the first slot is all padding (e.g. due to empty
  base classes). In this situation, the first 8-byte doesn't take a
  register at all, only the second 8-byte does.

  This fixes this by enhancing the x86-64 abi stuff to allow and handle
  this case, reverts the broken fix for PR5831, and enhances the target
  independent stuff to be able to handle an argument value in registers
  being accessed at an offset from the memory value.

  This is the last x86-64 calling convention related miscompile that I'm
  aware of.

  llvm-svn: 109848

* fix a builder, why didn't clang++ catch this?
  Chris Lattner, 2010-07-29 (1 file, -1/+2)
  llvm-svn: 109735

* Kill off the 'coerce' ABI passing form. Now 'direct' and 'extend' always
  have a "coerce to" type which often matches the default lowering of Clang
  type to LLVM IR type, but the coerce case can be handled by making them
  not be the same.
  Chris Lattner, 2010-07-29 (1 file, -151/+138)

  This simplifies things and fixes issues where X86-64 abi lowering would
  return coerce after making preferred types exactly match up. This caused
  us to compile:

    typedef float v4f32 __attribute__((__vector_size__(16)));
    v4f32 foo(v4f32 X) {
      return X+X;
    }

  into this code at -O0:

    define <4 x float> @foo(<4 x float> %X.coerce) nounwind {
    entry:
      %retval = alloca <4 x float>, align 16  ; <<4 x float>*> [#uses=2]
      %coerce = alloca <4 x float>, align 16  ; <<4 x float>*> [#uses=2]
      %X.addr = alloca <4 x float>, align 16  ; <<4 x float>*> [#uses=3]
      store <4 x float> %X.coerce, <4 x float>* %coerce
      %X = load <4 x float>* %coerce  ; <<4 x float>> [#uses=1]
      store <4 x float> %X, <4 x float>* %X.addr
      %tmp = load <4 x float>* %X.addr  ; <<4 x float>> [#uses=1]
      %tmp1 = load <4 x float>* %X.addr  ; <<4 x float>> [#uses=1]
      %add = fadd <4 x float> %tmp, %tmp1  ; <<4 x float>> [#uses=1]
      store <4 x float> %add, <4 x float>* %retval
      %0 = load <4 x float>* %retval  ; <<4 x float>> [#uses=1]
      ret <4 x float> %0
    }

  Now we get:

    define <4 x float> @foo(<4 x float> %X) nounwind {
    entry:
      %X.addr = alloca <4 x float>, align 16  ; <<4 x float>*> [#uses=3]
      store <4 x float> %X, <4 x float>* %X.addr
      %tmp = load <4 x float>* %X.addr  ; <<4 x float>> [#uses=1]
      %tmp1 = load <4 x float>* %X.addr  ; <<4 x float>> [#uses=1]
      %add = fadd <4 x float> %tmp, %tmp1  ; <<4 x float>> [#uses=1]
      ret <4 x float> %add
    }

  This implements rdar://8248065

  llvm-svn: 109733

* dissolve some more complexity: make the x86-64 abi lowering code compute
  its own preferred types instead of having CGT compute them then pass them
  (circuitously) down into ABIInfo.
  Chris Lattner, 2010-07-29 (1 file, -22/+2)
  llvm-svn: 109726

* now that ABIInfo depends on CGT, it has trivial access to such things as
  TargetData, ASTContext, LLVMContext etc. Stop passing them through so
  many APIs.
  Chris Lattner, 2010-07-29 (1 file, -2/+2)
  llvm-svn: 109723

* tidy up
  Chris Lattner, 2010-07-28 (1 file, -8/+6)
  llvm-svn: 109699

* some cleanups and get alignments correct for various coerce cases.
  Chris Lattner, 2010-07-28 (1 file, -9/+16)
  llvm-svn: 109607

* Vectors are not integer types, so the type system should not classify
  them as such. Type::is(Signed|Unsigned|)IntegerType() now return false
  for vector types, and new functions
  has(Signed|Unsigned|)IntegerRepresentation() cover integer types and
  vector-of-integer types. This fixes a bunch of latent bugs. Patch from
  Anton Yartsev!
  Douglas Gregor, 2010-07-23 (1 file, -2/+2)
  llvm-svn: 109229

* Fix regression caused by r108911.
  Devang Patel, 2010-07-21 (1 file, -1/+2)
  Do not override known debug loc with unknown debug loc. This is tested by
  sections.exp in gdb testsuite.
  llvm-svn: 109022

* Use getDebugLoc and setDebugLoc instead of getDbgMetadata and
  setDbgMetadata, avoiding MDNode overhead.
  Dan Gohman, 2010-07-20 (1 file, -4/+3)
  llvm-svn: 108911

* CodeGen/ObjC/NeXT: Fix Obj-C message send to match llvm-gcc when choosing
  whether to use objc_msgSend_fpret; the choice is target dependent, not
  Obj-C ABI dependent.
  - <rdar://problem/8139758> arm objc _objc_msgSend_fpret bug
  Daniel Dunbar, 2010-07-14 (1 file, -3/+21)
  llvm-svn: 108379

* Mark calls to 'throw()' functions as nounwind, and mark the functions
  nounwind as well.
  John McCall, 2010-07-08 (1 file, -0/+6)
  llvm-svn: 107858

* Validated by nightly-test runs on x86 and x86-64 darwin, including after
  self-host. Hopefully these results hold up on different platforms. I
  tried to keep the GNU ObjC runtime happy, but it's hard for me to test.
  John McCall, 2010-07-06 (1 file, -2/+23)

  Reimplement how clang generates IR for exceptions. Instead of creating
  new invoke destinations which sequentially chain to the previous
  destination, push a more semantic representation of *why* we need the
  cleanup/catch/filter behavior, then collect that information into a
  single landing pad upon request.

  Also reorganizes how normal cleanups (i.e. cleanups triggered by
  non-exceptional control flow) are generated, since it's actually fairly
  closely tied in with the former. Remove the need to track which cleanup
  scope a block is associated with.

  Document a lot of previously poorly-understood (by me, at least)
  behavior.

  The new framework implements the Horrible Hack (tm), which requires every
  landing pad to have a catch-all so that inlining will work. Clang no
  longer requires the Horrible Hack just to make exceptions flow correctly
  within a function, however. The HH is an unfortunate requirement of
  LLVM's EH IR.

  llvm-svn: 107631

* Generate fewer first class aggregate values for other coerce cases
  (e.g. {double,int}) which avoids fastisel bailing out at -O0.
  Chris Lattner, 2010-07-05 (1 file, -35/+13)
  llvm-svn: 107628
* in the "coerce" case, the ABI handling code ends up making theChris Lattner2010-07-051-2/+4
| | | | | | | | | alloca for an argument. Make sure the argument gets the proper decl alignment, which may be different than the type alignment. This fixes PR7567 llvm-svn: 107627

* fix rdar://8147692 - yet another crash due to my abi work.
  Chris Lattner, 2010-07-01 (1 file, -7/+16)
  llvm-svn: 107387

* IRgen: Fix debug info regression in r106970; when we eliminate the return
  value store, make sure to move the debug metadata from the store (which
  is the actual 'return' statement location) to the return instruction
  (which otherwise would have the function end location as its debug info).
  - Tested by gdb test suite.
  Daniel Dunbar, 2010-06-30 (1 file, -5/+6)
  llvm-svn: 107322

* Reapply:
  Chris Lattner, 2010-06-30 (1 file, -21/+36)

  r107173, "fix PR7519: after thrashing around and remembering how all this stuff"
  r107216, "fix PR7523, which was caused by the ABI code calling ConvertType instead"

  This includes a fix to make ConvertTypeForMem handle the "recursive"
  case, and call it as such when lowering function types which have an
  indirect result.

  llvm-svn: 107310

* Revert r107173, "fix PR7519: after thrashing around and remembering how
  all this stuff", it broke bootstrap.
  Daniel Dunbar, 2010-06-30 (1 file, -22/+9)
  llvm-svn: 107232

* Revert r107216, "fix PR7523, which was caused by the ABI code calling
  ConvertType instead", it is part of a bootstrap breaking sequence.
  Daniel Dunbar, 2010-06-30 (1 file, -13/+11)
  llvm-svn: 107231

* fix PR7523, which was caused by the ABI code calling ConvertType instead
  of ConvertTypeRecursive when it needed to in a few cases, causing pointer
  types to get resolved at the wrong time.
  Chris Lattner, 2010-06-29 (1 file, -11/+13)
  llvm-svn: 107216

* relax the CGFunctionInfo::CGFunctionInfo ctor to allow any sequence of
  CanQualTypes to be passed in.
  Chris Lattner, 2010-06-29 (1 file, -7/+7)
  llvm-svn: 107176

* fix PR7519: after thrashing around and remembering how all this stuff
  works, the fix is quite simple: just make sure to call
  ConvertTypeRecursive when the function type being lowered is in the midst
  of ConvertType.
  Chris Lattner, 2010-06-29 (1 file, -9/+22)
  llvm-svn: 107173

* minor cleanups.
  Chris Lattner, 2010-06-29 (1 file, -9/+3)
  llvm-svn: 107150

* Pass the LLVM IR version of argument types down into computeInfo.
  Chris Lattner, 2010-06-29 (1 file, -1/+9)
  It is somewhat annoying to do this at this level, but it avoids having
  ABIInfo depend on CodeGenTypes for a hint. Nothing is using this yet, so
  no functionality change.
  llvm-svn: 107111

* add IR names to coerced arguments.
  Chris Lattner, 2010-06-29 (1 file, -0/+3)
  llvm-svn: 107105

* make the argument passing stuff in the FCA case smarter still, by
  avoiding making the FCA at all when the types exactly line up.
  Chris Lattner, 2010-06-29 (1 file, -21/+46)

  For example, before we made:

    %struct.DeclGroup = type { i64, i64 }

    define i64 @_Z3foo9DeclGroup(i64, i64) nounwind {
    entry:
      %D = alloca %struct.DeclGroup, align 8  ; <%struct.DeclGroup*> [#uses=3]
      %2 = insertvalue %struct.DeclGroup undef, i64 %0, 0  ; <%struct.DeclGroup> [#uses=1]
      %3 = insertvalue %struct.DeclGroup %2, i64 %1, 1  ; <%struct.DeclGroup> [#uses=1]
      store %struct.DeclGroup %3, %struct.DeclGroup* %D
      %tmp = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0  ; <i64*> [#uses=1]
      %tmp1 = load i64* %tmp  ; <i64> [#uses=1]
      %tmp2 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 1  ; <i64*> [#uses=1]
      %tmp3 = load i64* %tmp2  ; <i64> [#uses=1]
      %add = add nsw i64 %tmp1, %tmp3  ; <i64> [#uses=1]
      ret i64 %add
    }

  ... which has the pointless insertvalue, which fastisel hates, now we
  make:

    %struct.DeclGroup = type { i64, i64 }

    define i64 @_Z3foo9DeclGroup(i64, i64) nounwind {
    entry:
      %D = alloca %struct.DeclGroup, align 8  ; <%struct.DeclGroup*> [#uses=4]
      %2 = getelementptr %struct.DeclGroup* %D, i32 0, i32 0  ; <i64*> [#uses=1]
      store i64 %0, i64* %2
      %3 = getelementptr %struct.DeclGroup* %D, i32 0, i32 1  ; <i64*> [#uses=1]
      store i64 %1, i64* %3
      %tmp = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0  ; <i64*> [#uses=1]
      %tmp1 = load i64* %tmp  ; <i64> [#uses=1]
      %tmp2 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 1  ; <i64*> [#uses=1]
      %tmp3 = load i64* %tmp2  ; <i64> [#uses=1]
      %add = add nsw i64 %tmp1, %tmp3  ; <i64> [#uses=1]
      ret i64 %add
    }

  This only kicks in when x86-64 abi lowering decides it likes us.

  llvm-svn: 107104

* Change CGCall to handle the "coerce" case where the coerce-to type is a
  FCA to pass each of the elements as individual scalars. This produces
  code fast isel is less likely to reject and is easier on the optimizers.
  Chris Lattner, 2010-06-28 (1 file, -11/+60)

  For example, before we would compile:

    struct DeclGroup { long NumDecls; char * Y; };
    char * foo(DeclGroup D) {
      return D.NumDecls+D.Y;
    }

  to:

    %struct.DeclGroup = type { i64, i64 }

    define i64 @_Z3foo9DeclGroup(%struct.DeclGroup) nounwind {
    entry:
      %D = alloca %struct.DeclGroup, align 8  ; <%struct.DeclGroup*> [#uses=3]
      store %struct.DeclGroup %0, %struct.DeclGroup* %D, align 1
      %tmp = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0  ; <i64*> [#uses=1]
      %tmp1 = load i64* %tmp  ; <i64> [#uses=1]
      %tmp2 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 1  ; <i64*> [#uses=1]
      %tmp3 = load i64* %tmp2  ; <i64> [#uses=1]
      %add = add nsw i64 %tmp1, %tmp3  ; <i64> [#uses=1]
      ret i64 %add
    }

  Now we get:

    %0 = type { i64, i64 }
    %struct.DeclGroup = type { i64, i8* }

    define i8* @_Z3foo9DeclGroup(i64, i64) nounwind {
    entry:
      %D = alloca %struct.DeclGroup, align 8  ; <%struct.DeclGroup*> [#uses=3]
      %2 = insertvalue %0 undef, i64 %0, 0  ; <%0> [#uses=1]
      %3 = insertvalue %0 %2, i64 %1, 1  ; <%0> [#uses=1]
      %4 = bitcast %struct.DeclGroup* %D to %0*  ; <%0*> [#uses=1]
      store %0 %3, %0* %4, align 1
      %tmp = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0  ; <i64*> [#uses=1]
      %tmp1 = load i64* %tmp  ; <i64> [#uses=1]
      %tmp2 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 1  ; <i8**> [#uses=1]
      %tmp3 = load i8** %tmp2  ; <i8*> [#uses=1]
      %add.ptr = getelementptr inbounds i8* %tmp3, i64 %tmp1  ; <i8*> [#uses=1]
      ret i8* %add.ptr
    }

  Elimination of the FCA inside the function is still-to-come.

  llvm-svn: 107099

* make the trivial forms of CreateCoerced{Load|Store} trivial.
  Chris Lattner, 2010-06-28 (1 file, -3/+12)
  llvm-svn: 107091

* finally get around to doing a significant cleanup to irgen: have CGF
  create and make accessible standard int32, int64 and intptr types. This
  fixes a ton of 80 column violations introduced by LLVMContextification
  and cleans up stuff a lot.
  Chris Lattner, 2010-06-27 (1 file, -4/+2)
  llvm-svn: 106977

* If coercing something from int or pointer type to int or pointer type
  (potentially after unwrapping it from a struct) do it without going
  through memory.
  Chris Lattner, 2010-06-27 (1 file, -0/+49)

  We now compile:

    struct DeclGroup { unsigned NumDecls; };
    int foo(DeclGroup D) {
      return D.NumDecls;
    }

  into:

    %struct.DeclGroup = type { i32 }

    define i32 @_Z3foo9DeclGroup(i64) nounwind ssp noredzone {
    entry:
      %D = alloca %struct.DeclGroup, align 4  ; <%struct.DeclGroup*> [#uses=2]
      %coerce.dive = getelementptr %struct.DeclGroup* %D, i32 0, i32 0  ; <i32*> [#uses=1]
      %coerce.val.ii = trunc i64 %0 to i32  ; <i32> [#uses=1]
      store i32 %coerce.val.ii, i32* %coerce.dive
      %tmp = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0  ; <i32*> [#uses=1]
      %tmp1 = load i32* %tmp  ; <i32> [#uses=1]
      ret i32 %tmp1
    }

  instead of:

    %struct.DeclGroup = type { i32 }

    define i32 @_Z3foo9DeclGroup(i64) nounwind ssp noredzone {
    entry:
      %D = alloca %struct.DeclGroup, align 4  ; <%struct.DeclGroup*> [#uses=2]
      %tmp = alloca i64  ; <i64*> [#uses=2]
      %coerce.dive = getelementptr %struct.DeclGroup* %D, i32 0, i32 0  ; <i32*> [#uses=1]
      store i64 %0, i64* %tmp
      %1 = bitcast i64* %tmp to i32*  ; <i32*> [#uses=1]
      %2 = load i32* %1, align 1  ; <i32> [#uses=1]
      store i32 %2, i32* %coerce.dive
      %tmp1 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0  ; <i32*> [#uses=1]
      %tmp2 = load i32* %tmp1  ; <i32> [#uses=1]
      ret i32 %tmp2
    }

  ... which is quite a bit less terrifying.

  llvm-svn: 106975

* Same patch as the previous on the store side.
  Chris Lattner, 2010-06-27 (1 file, -7/+13)

  Before we compiled this:

    struct DeclGroup { unsigned NumDecls; };
    int foo(DeclGroup D) {
      return D.NumDecls;
    }

  to:

    %struct.DeclGroup = type { i32 }

    define i32 @_Z3foo9DeclGroup(i64) nounwind ssp noredzone {
    entry:
      %D = alloca %struct.DeclGroup, align 4  ; <%struct.DeclGroup*> [#uses=2]
      %tmp = alloca i64  ; <i64*> [#uses=2]
      store i64 %0, i64* %tmp
      %1 = bitcast i64* %tmp to %struct.DeclGroup*  ; <%struct.DeclGroup*> [#uses=1]
      %2 = load %struct.DeclGroup* %1, align 1  ; <%struct.DeclGroup> [#uses=1]
      store %struct.DeclGroup %2, %struct.DeclGroup* %D
      %tmp1 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0  ; <i32*> [#uses=1]
      %tmp2 = load i32* %tmp1  ; <i32> [#uses=1]
      ret i32 %tmp2
    }

  which caused fast isel bailouts due to the FCA load/store of %2. Now we
  generate this just blissful code:

    %struct.DeclGroup = type { i32 }

    define i32 @_Z3foo9DeclGroup(i64) nounwind ssp noredzone {
    entry:
      %D = alloca %struct.DeclGroup, align 4  ; <%struct.DeclGroup*> [#uses=2]
      %tmp = alloca i64  ; <i64*> [#uses=2]
      %coerce.dive = getelementptr %struct.DeclGroup* %D, i32 0, i32 0  ; <i32*> [#uses=1]
      store i64 %0, i64* %tmp
      %1 = bitcast i64* %tmp to i32*  ; <i32*> [#uses=1]
      %2 = load i32* %1, align 1  ; <i32> [#uses=1]
      store i32 %2, i32* %coerce.dive
      %tmp1 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0  ; <i32*> [#uses=1]
      %tmp2 = load i32* %tmp1  ; <i32> [#uses=1]
      ret i32 %tmp2
    }

  This avoids fastisel bailing out and is groundwork for future patch.
  This reduces bailouts on CGStmt.ll to 911 from 935.

  llvm-svn: 106974

* improve CreateCoercedLoad a bit to generate slightly less awful IR when
  handling X86-64 by-value struct stuff.
  Chris Lattner, 2010-06-27 (1 file, -1/+42)

  For example, we used to compile this:

    struct DeclGroup { unsigned NumDecls; };
    int foo(DeclGroup D);
    void bar(DeclGroup *D) {
      foo(*D);
    }

  into:

    define void @_Z3barP9DeclGroup(%struct.DeclGroup* %D) ssp nounwind {
    entry:
      %D.addr = alloca %struct.DeclGroup*, align 8  ; <%struct.DeclGroup**> [#uses=2]
      %agg.tmp = alloca %struct.DeclGroup, align 4  ; <%struct.DeclGroup*> [#uses=2]
      %tmp3 = alloca i64  ; <i64*> [#uses=2]
      store %struct.DeclGroup* %D, %struct.DeclGroup** %D.addr
      %tmp = load %struct.DeclGroup** %D.addr  ; <%struct.DeclGroup*> [#uses=1]
      %tmp1 = bitcast %struct.DeclGroup* %agg.tmp to i8*  ; <i8*> [#uses=1]
      %tmp2 = bitcast %struct.DeclGroup* %tmp to i8*  ; <i8*> [#uses=1]
      call void @llvm.memcpy.p0i8.p0i8.i64(i8* %tmp1, i8* %tmp2, i64 4, i32 4, i1 false)
      %0 = bitcast i64* %tmp3 to %struct.DeclGroup*  ; <%struct.DeclGroup*> [#uses=1]
      %1 = load %struct.DeclGroup* %agg.tmp  ; <%struct.DeclGroup> [#uses=1]
      store %struct.DeclGroup %1, %struct.DeclGroup* %0, align 1
      %2 = load i64* %tmp3  ; <i64> [#uses=1]
      call void @_Z3foo9DeclGroup(i64 %2)
      ret void
    }

  which would cause fastisel to bail out due to the first class aggregate
  load %1. With this patch we now compile it into the (still awful):

    define void @_Z3barP9DeclGroup(%struct.DeclGroup* %D) nounwind ssp noredzone {
    entry:
      %D.addr = alloca %struct.DeclGroup*, align 8  ; <%struct.DeclGroup**> [#uses=2]
      %agg.tmp = alloca %struct.DeclGroup, align 4  ; <%struct.DeclGroup*> [#uses=2]
      %tmp3 = alloca i64  ; <i64*> [#uses=2]
      store %struct.DeclGroup* %D, %struct.DeclGroup** %D.addr
      %tmp = load %struct.DeclGroup** %D.addr  ; <%struct.DeclGroup*> [#uses=1]
      %tmp1 = bitcast %struct.DeclGroup* %agg.tmp to i8*  ; <i8*> [#uses=1]
      %tmp2 = bitcast %struct.DeclGroup* %tmp to i8*  ; <i8*> [#uses=1]
      call void @llvm.memcpy.p0i8.p0i8.i64(i8* %tmp1, i8* %tmp2, i64 4, i32 4, i1 false)
      %coerce.dive = getelementptr %struct.DeclGroup* %agg.tmp, i32 0, i32 0  ; <i32*> [#uses=1]
      %0 = bitcast i64* %tmp3 to i32*  ; <i32*> [#uses=1]
      %1 = load i32* %coerce.dive  ; <i32> [#uses=1]
      store i32 %1, i32* %0, align 1
      %2 = load i64* %tmp3  ; <i64> [#uses=1]
      %call = call i32 @_Z3foo9DeclGroup(i64 %2) noredzone  ; <i32> [#uses=0]
      ret void
    }

  which doesn't bail out. On CGStmt.ll, this reduces fastisel bail outs
  from 958 to 935, and is the precursor of better things to come.

  llvm-svn: 106973

* Change IR generation for return (in the simple case) to avoid doing silly
  load/store nonsense in the epilog.
  Chris Lattner, 2010-06-27 (1 file, -18/+37)

  For example, for:

    int foo(int X) {
      int A[100];
      return A[X];
    }

  we used to generate:

      %arrayidx = getelementptr inbounds [100 x i32]* %A, i32 0, i64 %idxprom  ; <i32*> [#uses=1]
      %tmp1 = load i32* %arrayidx  ; <i32> [#uses=1]
      store i32 %tmp1, i32* %retval
      %0 = load i32* %retval  ; <i32> [#uses=1]
      ret i32 %0
    }

  which codegen'd to this code:

    _foo:                            ## @foo
    ## BB#0:                         ## %entry
        subq  $408, %rsp             ## imm = 0x198
        movl  %edi, 400(%rsp)
        movl  400(%rsp), %edi
        movslq  %edi, %rax
        movl  (%rsp,%rax,4), %edi
        movl  %edi, 404(%rsp)
        movl  404(%rsp), %eax
        addq  $408, %rsp             ## imm = 0x198
        ret

  Now we generate:

      %arrayidx = getelementptr inbounds [100 x i32]* %A, i32 0, i64 %idxprom  ; <i32*> [#uses=1]
      %tmp1 = load i32* %arrayidx  ; <i32> [#uses=1]
      ret i32 %tmp1
    }

  and:

    _foo:                            ## @foo
    ## BB#0:                         ## %entry
        subq  $408, %rsp             ## imm = 0x198
        movl  %edi, 404(%rsp)
        movl  404(%rsp), %edi
        movslq  %edi, %rax
        movl  (%rsp,%rax,4), %eax
        addq  $408, %rsp             ## imm = 0x198
        ret

  This actually does matter, cutting out 2000 lines of IR from CGStmt.ll
  for example. Another interesting effect is that altivec.h functions which
  are dead now get dce'd by the inliner. Hence all the changes to
  builtins-ppc-altivec.c to ensure the calls aren't dead.

  llvm-svn: 106970

* reduce indentation
  Chris Lattner, 2010-06-26 (1 file, -34/+35)
  llvm-svn: 106967

* Change EmitReferenceBindingToExpr to take a decl instead of a boolean.
  Anders Carlsson, 2010-06-26 (1 file, -1/+1)
  llvm-svn: 106949

* Move CodeGenOptions.h *back* into Frontend. This should have been done
  when the dependency edge was reversed such that CodeGen depends on
  Frontend.
  Chandler Carruth, 2010-06-15 (1 file, -1/+1)
  llvm-svn: 106065

* Fix for PR7040: Don't try to compute the LLVM type for a function where
  it isn't possible to compute.
  Eli Friedman, 2010-05-30 (1 file, -17/+1)
  This patch is mostly refactoring; the key change is the addition of the
  code starting with the comment, "Check whether the function has a
  computable LLVM signature." The solution here is essentially the same as
  the way the vtable code handles such functions.
  llvm-svn: 105151

* Correctly pass aggregates by reference when emitting thunks.
  John McCall, 2010-05-26 (1 file, -0/+30)
  llvm-svn: 104778

* Add support for Microsoft's __thiscall, from Steven Watanabe!
  Douglas Gregor, 2010-05-18 (1 file, -0/+4)
  llvm-svn: 104026