path: root/clang
Commit message | Author | Age | Files | Lines
* fix PR7519: after thrashing around and remembering how all this stuff | Chris Lattner | 2010-06-29 | 5 | -26/+68
    works, the fix is quite simple: just make sure to call ConvertTypeRecursive
    when the function type being lowered is in the midst of ConvertType.
    llvm-svn: 107173
* Allow a using directive to refer to the implicitly-defined namespace | Douglas Gregor | 2010-06-29 | 6 | -12/+70
    "std", with a warning, to improve GCC compatibility. Fixes PR7517. As a
    drive-by, add typo correction for using directives.
    llvm-svn: 107172
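    A minimal sketch of the construct this accepts (hypothetical translation
    unit, not from the commit): nothing has declared namespace std yet, so the
    directive can only name the implicitly-defined namespace, and Clang now
    warns instead of rejecting it.

      // No header included, so "std" is only the implicitly-defined namespace.
      using namespace std;   // accepted with a warning, for GCC compatibility

      int main() { return 0; }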
* With packed enums, an enumerator's value may be stored in more bits | Douglas Gregor | 2010-06-29 | 2 | -0/+18
    than the enumeration type itself takes. Fixes PR7477.
    llvm-svn: 107163
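    One plausible illustration of the mismatch (hypothetical example, not the
    PR7477 test case): with the GNU packed attribute the enumeration type can
    shrink below the width in which its enumerators are evaluated.

      // sizeof(enum Flags) can be smaller than the bit-width needed to
      // evaluate the enumerators while the type is being built.
      enum __attribute__((packed)) Flags { None = 0, All = 0xFF };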
* tests: Use %clangxx when using driver for C++, in case C++ support is disabled. | Daniel Dunbar | 2010-06-29 | 13 | -18/+22
    llvm-svn: 107153
* tests: Spell %clang_cc1 correctly. | Daniel Dunbar | 2010-06-29 | 3 | -4/+4
    llvm-svn: 107152
* minor cleanups. | Chris Lattner | 2010-06-29 | 2 | -10/+4
    llvm-svn: 107150
* Driver/Darwin: Only run dsymutil when we are also compiling/assembling as part | Daniel Dunbar | 2010-06-29 | 2 | -5/+26
    of the compilation.
     - <rdar://problem/8141387> clang is always invoking dsymutil
    llvm-svn: 107149
* Delete assert in ComputeKeyFunction. The function runs fine without it, since | Jeffrey Yasskin | 2010-06-29 | 2 | -4/+3
    there's an explicit guard on isPolymorphic, and virtual bases don't affect
    the key function calculation. This allows people to call
    ASTContext::getKeyFunction on arbitrary classes.
    llvm-svn: 107143
* Change X86_64ABIInfo to have ASTContext and TargetData ivars to | Chris Lattner | 2010-06-29 | 3 | -48/+96
    avoid passing ASTContext down through all the methods it has. When
    classifying an argument, or argument piece, as INTEGER, check to see if we
    have a pointer at exactly the same offset in the preferred type. If so,
    use that pointer type instead of i64. This allows us to compile a function
    taking a stringref into something like this:

      define i8* @foo(i64 %D.coerce0, i8* %D.coerce1) nounwind ssp {
      entry:
        %D = alloca %struct.DeclGroup, align 8  ; <%struct.DeclGroup*> [#uses=4]
        %0 = getelementptr %struct.DeclGroup* %D, i32 0, i32 0  ; <i64*> [#uses=1]
        store i64 %D.coerce0, i64* %0
        %1 = getelementptr %struct.DeclGroup* %D, i32 0, i32 1  ; <i8**> [#uses=1]
        store i8* %D.coerce1, i8** %1
        %tmp = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0  ; <i64*> [#uses=1]
        %tmp1 = load i64* %tmp  ; <i64> [#uses=1]
        %tmp2 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 1  ; <i8**> [#uses=1]
        %tmp3 = load i8** %tmp2  ; <i8*> [#uses=1]
        %add.ptr = getelementptr inbounds i8* %tmp3, i64 %tmp1  ; <i8*> [#uses=1]
        ret i8* %add.ptr
      }

    instead of this:

      define i8* @foo(i64 %D.coerce0, i64 %D.coerce1) nounwind ssp {
      entry:
        %D = alloca %struct.DeclGroup, align 8  ; <%struct.DeclGroup*> [#uses=3]
        %0 = insertvalue %0 undef, i64 %D.coerce0, 0  ; <%0> [#uses=1]
        %1 = insertvalue %0 %0, i64 %D.coerce1, 1  ; <%0> [#uses=1]
        %2 = bitcast %struct.DeclGroup* %D to %0*  ; <%0*> [#uses=1]
        store %0 %1, %0* %2, align 1
        %tmp = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0  ; <i64*> [#uses=1]
        %tmp1 = load i64* %tmp  ; <i64> [#uses=1]
        %tmp2 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 1  ; <i8**> [#uses=1]
        %tmp3 = load i8** %tmp2  ; <i8*> [#uses=1]
        %add.ptr = getelementptr inbounds i8* %tmp3, i64 %tmp1  ; <i8*> [#uses=1]
        ret i8* %add.ptr
      }

    This implements rdar://7375902 - [codegen quality] clang x86-64 ABI
    lowering code punishing StringRef
    llvm-svn: 107123
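    The C++ source for the IR above is not quoted in this entry; a minimal
    sketch that produces a function of this shape, assuming the same DeclGroup
    example used in the CGCall entry further down this log, would be:

      // Assumed source: a StringRef-like aggregate of { length, data } passed by value.
      struct DeclGroup {
        long NumDecls;   // lowered to the i64 %D.coerce0 piece
        char *Y;         // now lowered to i8* instead of a second i64
      };

      char *foo(DeclGroup D) { return D.NumDecls + D.Y; }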
* Minix doesn't support dylibs, PR7294 | Chris Lattner | 2010-06-29 | 1 | -1/+1
    llvm-svn: 107120
* plumb preferred types down into X86_64ABIInfo::classifyArgumentType, | Chris Lattner | 2010-06-29 | 1 | -4/+14
    no functionality change.
    llvm-svn: 107115
* Pass the LLVM IR version of argument types down into computeInfo. | Chris Lattner | 2010-06-29 | 3 | -10/+38
    This is somewhat annoying to do at this level, but it avoids having
    ABIInfo depend on CodeGenTypes for a hint. Nothing is using this yet, so
    no functionality change.
    llvm-svn: 107111
* Prefer llvm_unreachable(...) to assert(false && ...). This is important as | Chandler Carruth | 2010-06-29 | 1 | -5/+6
    without it we might exit a non-void function without returning.
    llvm-svn: 107106
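    A generic sketch of the pattern (hypothetical function, not the lines this
    commit touched): in builds where assertions are compiled out, assert(false)
    vanishes and control can fall off the end of a value-returning function,
    whereas llvm_unreachable keeps the path marked unreachable.

      #include "llvm/Support/ErrorHandling.h"

      enum Kind { K_Int, K_Float };

      static unsigned widthFor(Kind K) {
        switch (K) {
        case K_Int:   return 32;
        case K_Float: return 32;
        }
        llvm_unreachable("unknown Kind");  // not: assert(false && "unknown Kind");
      }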
* add IR names to coerced arguments. | Chris Lattner | 2010-06-29 | 3 | -9/+12
    llvm-svn: 107105
* make the argument passing stuff in the FCA case smarter still, by | Chris Lattner | 2010-06-29 | 1 | -21/+46
    avoiding making the FCA at all when the types exactly line up. For
    example, before we made:

      %struct.DeclGroup = type { i64, i64 }

      define i64 @_Z3foo9DeclGroup(i64, i64) nounwind {
      entry:
        %D = alloca %struct.DeclGroup, align 8  ; <%struct.DeclGroup*> [#uses=3]
        %2 = insertvalue %struct.DeclGroup undef, i64 %0, 0  ; <%struct.DeclGroup> [#uses=1]
        %3 = insertvalue %struct.DeclGroup %2, i64 %1, 1  ; <%struct.DeclGroup> [#uses=1]
        store %struct.DeclGroup %3, %struct.DeclGroup* %D
        %tmp = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0  ; <i64*> [#uses=1]
        %tmp1 = load i64* %tmp  ; <i64> [#uses=1]
        %tmp2 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 1  ; <i64*> [#uses=1]
        %tmp3 = load i64* %tmp2  ; <i64> [#uses=1]
        %add = add nsw i64 %tmp1, %tmp3  ; <i64> [#uses=1]
        ret i64 %add
      }

    ... which has the pointless insertvalue, which fastisel hates, now we make:

      %struct.DeclGroup = type { i64, i64 }

      define i64 @_Z3foo9DeclGroup(i64, i64) nounwind {
      entry:
        %D = alloca %struct.DeclGroup, align 8  ; <%struct.DeclGroup*> [#uses=4]
        %2 = getelementptr %struct.DeclGroup* %D, i32 0, i32 0  ; <i64*> [#uses=1]
        store i64 %0, i64* %2
        %3 = getelementptr %struct.DeclGroup* %D, i32 0, i32 1  ; <i64*> [#uses=1]
        store i64 %1, i64* %3
        %tmp = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0  ; <i64*> [#uses=1]
        %tmp1 = load i64* %tmp  ; <i64> [#uses=1]
        %tmp2 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 1  ; <i64*> [#uses=1]
        %tmp3 = load i64* %tmp2  ; <i64> [#uses=1]
        %add = add nsw i64 %tmp1, %tmp3  ; <i64> [#uses=1]
        ret i64 %add
      }

    This only kicks in when x86-64 abi lowering decides it likes us.
    llvm-svn: 107104
* A few prettifications. Also renamed TraverseInitializer to | Craig Silverstein | 2010-06-29 | 1 | -8/+8
    TraverseConstructorInitializer, to be a bit clearer.
    llvm-svn: 107102
* Per Doug's suggestion, move check for invalid SourceLocation into | Ted Kremenek | 2010-06-28 | 2 | -3/+4
    cxloc::translateSourceLocation() (thus causing all clients of this
    function to have the same behavior).
    llvm-svn: 107101
* Change CGCall to handle the "coerce" case where the coerce-to type | Chris Lattner | 2010-06-28 | 3 | -16/+65
    is a FCA to pass each of the elements as individual scalars. This
    produces code fast isel is less likely to reject and is easier on the
    optimizers. For example, before we would compile:

      struct DeclGroup { long NumDecls; char * Y; };
      char * foo(DeclGroup D) {
        return D.NumDecls+D.Y;
      }

    to:

      %struct.DeclGroup = type { i64, i64 }

      define i64 @_Z3foo9DeclGroup(%struct.DeclGroup) nounwind {
      entry:
        %D = alloca %struct.DeclGroup, align 8  ; <%struct.DeclGroup*> [#uses=3]
        store %struct.DeclGroup %0, %struct.DeclGroup* %D, align 1
        %tmp = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0  ; <i64*> [#uses=1]
        %tmp1 = load i64* %tmp  ; <i64> [#uses=1]
        %tmp2 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 1  ; <i64*> [#uses=1]
        %tmp3 = load i64* %tmp2  ; <i64> [#uses=1]
        %add = add nsw i64 %tmp1, %tmp3  ; <i64> [#uses=1]
        ret i64 %add
      }

    Now we get:

      %0 = type { i64, i64 }
      %struct.DeclGroup = type { i64, i8* }

      define i8* @_Z3foo9DeclGroup(i64, i64) nounwind {
      entry:
        %D = alloca %struct.DeclGroup, align 8  ; <%struct.DeclGroup*> [#uses=3]
        %2 = insertvalue %0 undef, i64 %0, 0  ; <%0> [#uses=1]
        %3 = insertvalue %0 %2, i64 %1, 1  ; <%0> [#uses=1]
        %4 = bitcast %struct.DeclGroup* %D to %0*  ; <%0*> [#uses=1]
        store %0 %3, %0* %4, align 1
        %tmp = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 0  ; <i64*> [#uses=1]
        %tmp1 = load i64* %tmp  ; <i64> [#uses=1]
        %tmp2 = getelementptr inbounds %struct.DeclGroup* %D, i32 0, i32 1  ; <i8**> [#uses=1]
        %tmp3 = load i8** %tmp2  ; <i8*> [#uses=1]
        %add.ptr = getelementptr inbounds i8* %tmp3, i64 %tmp1  ; <i8*> [#uses=1]
        ret i8* %add.ptr
      }

    Elimination of the FCA inside the function is still-to-come.
    llvm-svn: 107099
* Fix up ClassTemplateSpecializationDecl: For implicit instantiations | Craig Silverstein | 2010-06-28 | 1 | -10/+13
    ("set<int> x;"), we don't want to recurse at all, since the instantiated
    class isn't written in the source code anywhere. (Note the instantiated
    *type* -- set<int> -- is written, and will still get a callback of
    TemplateSpecializationType). For explicit instantiations
    ("template set<int>;"), we do need a callback, since this is the only
    callback that's made for this instantiation. We use getTypeAsWritten()
    to distinguish.

    We will still need to figure out how to handle template specializations,
    which probably are still not quite correct.

    Reviewed by chandlerc
    llvm-svn: 107098
* make the trivial forms of CreateCoerced{Load|Store} trivial. | Chris Lattner | 2010-06-28 | 1 | -3/+12
    llvm-svn: 107091
* Modify the way sub-statements are stored and retrieved from PCH. | Argyrios Kyrtzidis | 2010-06-28 | 9 | -726/+529
    Before this commit, sub-stmts were stored as encountered and when they
    were placed in the Stmts stack we had to know what index each stmt
    operand has. This complicated supporting variable sub-stmts and sub-stmts
    that were contained in TypeSourceInfos, e.g. "x = sizeof(int[1]);" would
    crash PCH.

    Now, sub-stmts are stored in reverse order, from last to first, so that
    when reading them, in order to get the next sub-stmt we just need to pop
    the last stmt from the stack. This greatly simplified the way stmts are
    written and read (just use PCHWriter::AddStmt and PCHReader::ReadStmt
    accordingly) and allowed variable stmt operands and TypeSourceInfo exprs.
    llvm-svn: 107087
* pass/return structs of char and short as i8/i16 to avoid | Chris Lattner | 2010-06-28 | 3 | -7/+11
    awful through-memory coercion, just like we do for i32 now.
    llvm-svn: 107078
* more tidying up. | Chris Lattner | 2010-06-28 | 1 | -32/+45
    llvm-svn: 107076
* Remove state assertion. | Ted Kremenek | 2010-06-28 | 1 | -1/+0
    llvm-svn: 107064
* Don't crash in InitializePreprocessor() when there is no valid PTHManager. | Ted Kremenek | 2010-06-28 | 1 | -1/+2
    Fixes <rdar://problem/8098441>.
    llvm-svn: 107061
* random acts of tidying. | Chris Lattner | 2010-06-28 | 1 | -28/+47
    llvm-svn: 107050
* X86-64: | Chris Lattner | 2010-06-28 | 4 | -5/+18
    pass/return structs of float/int as float/i32 instead of double/i64 to
    make the code generated for ABI cleaner. Passing in the low part of a
    double is the same as passing in a float. For example, we now compile:

      struct DeclGroup { float NumDecls; };
      float foo(DeclGroup D);
      void bar(DeclGroup *D) {
        foo(*D);
      }

    into:

      %struct.DeclGroup = type { float }

      define void @_Z3barP9DeclGroup(%struct.DeclGroup* %D) nounwind {
      entry:
        %D.addr = alloca %struct.DeclGroup*, align 8  ; <%struct.DeclGroup**> [#uses=2]
        %agg.tmp = alloca %struct.DeclGroup, align 4  ; <%struct.DeclGroup*> [#uses=2]
        store %struct.DeclGroup* %D, %struct.DeclGroup** %D.addr
        %tmp = load %struct.DeclGroup** %D.addr  ; <%struct.DeclGroup*> [#uses=1]
        %tmp1 = bitcast %struct.DeclGroup* %agg.tmp to i8*  ; <i8*> [#uses=1]
        %tmp2 = bitcast %struct.DeclGroup* %tmp to i8*  ; <i8*> [#uses=1]
        call void @llvm.memcpy.p0i8.p0i8.i64(i8* %tmp1, i8* %tmp2, i64 4, i32 4, i1 false)
        %coerce.dive = getelementptr %struct.DeclGroup* %agg.tmp, i32 0, i32 0  ; <float*> [#uses=1]
        %0 = load float* %coerce.dive, align 1  ; <float> [#uses=1]
        %call = call float @_Z3foo9DeclGroup(float %0)  ; <float> [#uses=0]
        ret void
      }

    instead of:

      %struct.DeclGroup = type { float }

      define void @_Z3barP9DeclGroup(%struct.DeclGroup* %D) nounwind {
      entry:
        %D.addr = alloca %struct.DeclGroup*, align 8  ; <%struct.DeclGroup**> [#uses=2]
        %agg.tmp = alloca %struct.DeclGroup, align 4  ; <%struct.DeclGroup*> [#uses=2]
        %tmp3 = alloca double  ; <double*> [#uses=2]
        store %struct.DeclGroup* %D, %struct.DeclGroup** %D.addr
        %tmp = load %struct.DeclGroup** %D.addr  ; <%struct.DeclGroup*> [#uses=1]
        %tmp1 = bitcast %struct.DeclGroup* %agg.tmp to i8*  ; <i8*> [#uses=1]
        %tmp2 = bitcast %struct.DeclGroup* %tmp to i8*  ; <i8*> [#uses=1]
        call void @llvm.memcpy.p0i8.p0i8.i64(i8* %tmp1, i8* %tmp2, i64 4, i32 4, i1 false)
        %coerce.dive = getelementptr %struct.DeclGroup* %agg.tmp, i32 0, i32 0  ; <float*> [#uses=1]
        %0 = bitcast double* %tmp3 to float*  ; <float*> [#uses=1]
        %1 = load float* %coerce.dive  ; <float> [#uses=1]
        store float %1, float* %0, align 1
        %2 = load double* %tmp3  ; <double> [#uses=1]
        %call = call float @_Z3foo9DeclGroup(double %2)  ; <float> [#uses=0]
        ret void
      }

    which is this machine code (at -O0):

      __Z3barP9DeclGroup:
        subq  $24, %rsp
        movq  %rdi, 16(%rsp)
        movq  16(%rsp), %rdi
        leaq  8(%rsp), %rax
        movl  (%rdi), %ecx
        movl  %ecx, (%rax)
        movss 8(%rsp), %xmm0
        callq __Z3foo9DeclGroup
        addq  $24, %rsp
        ret

    vs this:

      __Z3barP9DeclGroup:
        subq  $24, %rsp
        movq  %rdi, 16(%rsp)
        movq  16(%rsp), %rdi
        leaq  8(%rsp), %rax
        movl  (%rdi), %ecx
        movl  %ecx, (%rax)
        movss 8(%rsp), %xmm0
        movss %xmm0, (%rsp)
        movsd (%rsp), %xmm0
        callq __Z3foo9DeclGroup
        addq  $24, %rsp
        ret

    At -O3, it is the difference between this now:

      __Z3barP9DeclGroup:
        movss (%rdi), %xmm0
        jmp   __Z3foo9DeclGroup  # TAILCALL

    vs this before:

      __Z3barP9DeclGroup:
        movl  (%rdi), %eax
        movd  %rax, %xmm0
        jmp   __Z3foo9DeclGroup  # TAILCALL

    llvm-svn: 107048
* Minor refactoring of my last patch (radar 7860965 related). | Fariborz Jahanian | 2010-06-28 | 1 | -1/+1
    llvm-svn: 107047
* Have __func__ and siblings point to block's implementation function | Fariborz Jahanian | 2010-06-28 | 2 | -1/+30
    name. Fixes radar 7860965.
    llvm-svn: 107044
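    A hedged sketch of what this changes for users of the blocks extension
    (hypothetical snippet, built with -fblocks; the exact helper-function name
    that gets printed is implementation-defined): __func__ inside a block now
    names the block's implementation function rather than the enclosing
    function.

      #include <stdio.h>

      void caller(void) {
        void (^b)(void) = ^{
          printf("%s\n", __func__);  // the block helper's name, not "caller"
        };
        b();
      }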
* tweak test to pass on windows | Chris Lattner | 2010-06-28 | 1 | -1/+1
    llvm-svn: 107040
* tests: Rewrite test to check intent instead of implementation. | Daniel Dunbar | 2010-06-28 | 1 | -13/+12
    llvm-svn: 107024
* Set the default arch based on the triple. | Rafael Espindola | 2010-06-28 | 2 | -46/+58
    llvm-svn: 107021
* Fix UnitTests/2004-02-02-NegativeZero.c, which regressed when | Chris Lattner | 2010-06-28 | 2 | -2/+13
    I broke negate of FP values.
    llvm-svn: 107019
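    The wrinkle behind floating-point negation, as a hedged illustration (not
    the regressed test itself): a negate has to flip the sign of zero, so it
    cannot be lowered as a subtraction from zero.

      double negate(double x) { return -x; }
      // negate(0.0) must yield -0.0; the "0.0 - x" form would give +0.0 instead.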
* fix a silly fixme. | Chris Lattner | 2010-06-28 | 1 | -3/+1
    llvm-svn: 107018
* llvm::errs() is non-buffered, so it doesn't need to be flushed. | Dan Gohman | 2010-06-28 | 1 | -2/+2
    llvm-svn: 107012
* Add support for traversing initializer lists (in constructors), which | Craig Silverstein | 2010-06-28 | 1 | -2/+21
    we were ignoring before. To give access to the names on the initializer,
    which aren't a type or an expr or a decl, I've introduced a new
    TraverseInitializer. By default, it just traverses on the expr that the
    name is being initialized to.

    Reviewed by chandlerc. Tested via clang's 'make test'.
    llvm-svn: 107008
* Introduce Expr::Classify and Expr::ClassifyModifiable, which determine the | Sebastian Redl | 2010-06-28 | 4 | -375/+560
    classification of an expression under the C++0x taxonomy (value category).
    Reimplement isLvalue and isModifiableLvalue using these functions. No
    regressions in the test suite from this, and my rough performance check
    doesn't show any regressions either.
    llvm-svn: 107007
* Support CXXPseudoDestructorExpr for PCH. | Argyrios Kyrtzidis | 2010-06-28 | 4 | -0/+71
    llvm-svn: 106999
* Support DependentScopeDeclRefExpr for PCH. | Argyrios Kyrtzidis | 2010-06-28 | 6 | -0/+77
    llvm-svn: 106998
* Refactor PCH reading/writing of template arguments passed to expressions. | Argyrios Kyrtzidis | 2010-06-28 | 3 | -101/+113
    llvm-svn: 106997
* Fix PCH emitting/reading for template arguments that contain expressions. | Argyrios Kyrtzidis | 2010-06-28 | 7 | -29/+177
    llvm-svn: 106996
* Fix various bugs in recent commits for C++ PCH. | Argyrios Kyrtzidis | 2010-06-28 | 5 | -3/+14
    llvm-svn: 106995
* Partial fix for PR7267 based on comments by John McCall on an earlier patch. | Chandler Carruth | 2010-06-28 | 6 | -2/+73
    This is more targeted, as it simply provides toggle actions for the parser
    to turn access checking on and off. We then use these to suppress access
    checking only while we parse the template-id (including the scope
    specifier) of an explicit instantiation and explicit specialization of a
    class template. The specialization behavior is an extension, as it seems
    likely a defect that the standard did not exempt them as it does explicit
    instantiations.

    This allows the very common practice of specializing trait classes to
    work for private, internal types. This doesn't address instantiating or
    specializing function templates, although those apparently already
    partially work.

    The naming and style for the Action layer isn't my favorite; comments and
    suggestions would be appreciated there.
    llvm-svn: 106993
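    A hedged sketch of the "trait specialization for a private type" pattern
    this unblocks (illustrative names, not taken from the patch): the explicit
    specialization below names Outer::Hidden, which is private, and previously
    hit an access diagnostic while the template-id was being parsed.

      template <typename T> struct IsHidden { static const bool value = false; };

      class Outer {
        struct Hidden {};   // private, internal type
      };

      // Explicit specialization naming the private member type; accepted as an extension.
      template <> struct IsHidden<Outer::Hidden> { static const bool value = true; };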
* Pointer comparisons (and pointer-pointer subtraction). Basically filling in | Jordy Rose | 2010-06-28 | 6 | -60/+556
    SimpleSValuator::EvalBinOpLL().
    llvm-svn: 106992
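    For context, the kind of source construct this lets the static analyzer
    evaluate (hypothetical example, not a test from the commit):

      void f(int *p, int *q) {
        if (p == q) return;          // loc-loc equality
        if (p < q && q - p > 1) {    // loc-loc ordering and pointer-pointer subtraction
          /* ... */
        }
      }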
* Suppress diagnosing access violations while looking up deallocation functions | Chandler Carruth | 2010-06-28 | 2 | -0/+28
    much as we already do for allocation function lookup. Explicitly check
    access for the function we actually select in one case that was
    previously missing, but being caught behind the blanket diagnostics for
    all overload candidates. This fixes PR7436.
    llvm-svn: 106986
* Use softfp for linux gnueabi, keep the warning for everything else. | Rafael Espindola | 2010-06-27 | 1 | -2/+9
    llvm-svn: 106984
* Correctly destroy reference temporaries with global storage. Remove | Anders Carlsson | 2010-06-27 | 3 | -22/+62
    ErrorUnsupported call when binding a global reference to a non-lvalue.
    Fixes PR7326.
    llvm-svn: 106983
* Add a CreateReferenceTemporary that will do the right thing for variables | Anders Carlsson | 2010-06-27 | 1 | -6/+34
    with global storage.
    llvm-svn: 106982
* Simplify CodeGenFunction::EmitReferenceBindingToExpr as a first step towards | Anders Carlsson | 2010-06-27 | 1 | -95/+90
    fixing PR7326.
    llvm-svn: 106981
* Reduce indentation. | Anders Carlsson | 2010-06-27 | 1 | -14/+11
    llvm-svn: 106980