path: root/clang/lib/CodeGen/CGExpr.cpp
Commit log (newest first). Each entry shows the commit subject, author, date, and diffstat (files changed, lines added/removed), followed by the commit message.
* Add front-end infrastructure now that address space casts are in LLVM IR.
  David Tweed, 2013-12-11 (1 file, +1/-0)
  With the introduction of explicit address space casts into LLVM, the front-end needs a new cast kind it can create for C/OpenCL/CUDA, plus code to produce address space casts from that kind when appropriate.
  Patch by Michele Scandale!
  llvm-svn: 197036
* Fix a crash in EmitStoreThroughExtVectorComponentLValue for vectors of odd sizes.
  Joey Gouly, 2013-11-21 (1 file, +6/-0)
  In OpenCL a vector of 3 elements acts like a vector of four elements, so for a vector of size 3 the '.hi' and '.odd' accessors would access the elements {2, 3} and {1, 3} respectively. However, in EmitStoreThroughExtVectorComponentLValue we are still operating on a vector of size 3, so we should only access {2} and {1}. We do this by checking the last element to be accessed and ignoring it if it is out of bounds.
  EmitLoadOfExtVectorElementLValue doesn't have a similar problem, because it does a direct shufflevector with undef, so an out-of-bounds access just gives an undef value.
  Patch by Anastasia Stulova!
  llvm-svn: 195367
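  For illustration only (not taken from the commit), a minimal sketch of the store path this fixes, written with Clang's ext_vector_type extension, which exposes the same '.hi'/'.odd' accessors as OpenCL vectors; the types and names are invented:

      typedef float float2 __attribute__((ext_vector_type(2)));
      typedef float float3 __attribute__((ext_vector_type(3)));

      void store_hi(float3 &v, float2 val) {
        // On a 3-element vector, '.hi' nominally names elements {2, 3};
        // element 3 does not exist, so the store must only touch element 2.
        v.hi = val;
      }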
* ubsan: Only emit constants for filenames and type descriptors once.
  Will Dietz, 2013-11-08 (1 file, +9/-4)
  Produces neater IR in significantly less time. (~18% faster -O0 compile time for sqlite3 with -fsanitize=undefined)
  llvm-svn: 194231
* [-fms-extensions] Add support for __FUNCDNAME__
  David Majnemer, 2013-11-06 (1 file, +4/-0)
  Summary: Similar to __FUNCTION__, MSVC exposes the mangled name of the enclosing function via __FUNCDNAME__. This implementation is very naive and unoptimized; __FUNCDNAME__ is expected to be used rarely in practice.
  Reviewers: rnk, rsmith, thakis
  CC: cfe-commits, silvas
  Differential Revision: http://llvm-reviews.chandlerc.com/D2109
  llvm-svn: 194181
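  For illustration only (not taken from the commit), a small sketch of how the macro might be used; the printed string is an assumption and depends on the target's mangling scheme:

      // build with: clang++ -fms-extensions example.cpp
      #include <cstdio>

      void foo(int) {
        // Prints the mangled name of the enclosing function,
        // e.g. something like "?foo@@YAXH@Z" under MSVC mangling.
        std::printf("%s\n", __FUNCDNAME__);
      }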
* C++1y sized deallocation: if we have a use, but not a definition, of a sized deallocation function (and the corresponding unsized deallocation function has been declared), emit a weak discardable definition of the function that forwards to the corresponding unsized deallocation.
  Richard Smith, 2013-11-05 (1 file, +2/-2)
  This allows a C++ standard library implementation to provide both a sized and an unsized deallocation function, where the unsized one does not just call the sized one, for instance by putting both in the same object file within an archive.
  llvm-svn: 194055
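  For illustration only (not taken from the commit), a minimal sketch of the situation described above; the type and names are invented:

      // build with: clang++ -std=c++1y example.cpp
      #include <cstddef>

      struct S { int x; };

      // Declared here but never defined in the program; per this change the
      // compiler can emit a weak definition that simply forwards to the
      // ordinary (unsized) operator delete.
      void operator delete(void *p, std::size_t size) noexcept;

      int main() {
        S *s = new S;
        delete s;   // under C++1y sized deallocation this may use the sized form
      }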
* Split -fsanitize=bounds into -fsanitize=array-bounds (for the frontend-inserted check using the ubsan runtime) and -fsanitize=local-bounds (for the middle-end check which inserts traps).
  Richard Smith, 2013-10-22 (1 file, +4/-3)
  Remove -fsanitize=local-bounds from -fsanitize=undefined. It does not produce useful diagnostics and has false positives (PR17635), and is not a good compromise position between UBSan's checks and ASan's checks.
  Map -fbounds-checking to -fsanitize=local-bounds to restore Clang's historical behavior for that flag.
  llvm-svn: 193205
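  For illustration only (not taken from the commit), the kind of access the frontend-inserted check diagnoses; the function and variable names are invented:

      // build with: clang++ -fsanitize=array-bounds example.cpp
      int read_elem(int i) {
        int a[4] = {0, 1, 2, 3};
        return a[i];   // i >= 4 (or i < 0) is reported at run time by the ubsan runtime
      }

  Building the same code with -fsanitize=local-bounds instead exercises the middle-end check, which inserts traps rather than calling the runtime.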
* Implement function type checker for the undefined behavior sanitizer.
  Peter Collingbourne, 2013-10-20 (1 file, +48/-2)
  This uses function prefix data to store function type information at the function pointer.
  Differential Revision: http://llvm-reviews.chandlerc.com/D1338
  llvm-svn: 193058
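  For illustration only (not taken from the commit), the kind of mismatch this check catches at the call site; the names are invented:

      // build with: clang++ -fsanitize=function example.cpp
      #include <cstdio>

      static void takes_int(int x) { std::printf("%d\n", x); }

      int main() {
        // Call through a function pointer whose type does not match the callee.
        void (*fp)(const char *) =
            reinterpret_cast<void (*)(const char *)>(&takes_int);
        fp("oops");   // diagnosed at the indirect call
      }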
* TBAA: use the same format for scalar TBAA and struct-path aware TBAA.
  Manman Ren, 2013-10-08 (1 file, +6/-3)
  An updated version of r191586 with a bug fix. Struct-path aware TBAA generates tags that specify the access path, while scalar TBAA only generates tags for scalar types.
  We should not generate a TBAA tag with null being the first field. When a TBAA type node is null, the tag should be null too. Make sure we don't decorate an instruction with a null TBAA tag.
  Added a test case for the bug reported by Richard with -relaxed-aliasing and -fsanitize=thread.
  llvm-svn: 192145
* Fix objectsize tests after r192117
  Matt Arsenault, 2013-10-07 (1 file, +3/-1)
  llvm-svn: 192120
* Thread a SourceLocation into the EmitCheck for "load_invalid_value". This occurs when scalars are loaded / undergo lvalue-to-rvalue conversion.
  Nick Lewycky, 2013-10-02 (1 file, +26/-19)
  llvm-svn: 191808
* No functionality change. Reflow lines that could fit on one line. Break lines that had 80-column violations. Remove spurious emacs mode markers on .cpp files.
  Nick Lewycky, 2013-10-01 (1 file, +6/-9)
  llvm-svn: 191797
* Fix 2 cases of uninitialized reads of an invalid PresumedLoc.
  Evgeniy Stepanov, 2013-09-11 (1 file, +2/-2)
  The code in CGExpr was added back in 2012 (r165536) but not exercised in tests until recently. Detected on the MemorySanitizer bootstrap bot.
  llvm-svn: 190521
* Revert r189649 because it was breaking sanitizer bots.
  Yunzhong Gao, 2013-08-30 (1 file, +9/-3)
  llvm-svn: 189660
* Fixing a bug where debug info for a local variable gets emitted at file scope.
  Yunzhong Gao, 2013-08-30 (1 file, +3/-9)
  The patch was discussed in Phabricator. See: http://llvm-reviews.chandlerc.com/D1281
  llvm-svn: 189649
* Revert "PR14569: Omit debug info for thunks"David Blaikie2013-08-271-1/+2
| | | | | | | | | | | | | | | | | | | | | | | | This reverts commit r189320. Alexey Samsonov and Dmitry Vyukov presented some arguments for keeping these around - though it still seems like those tasks could be solved by a tool just using the symbol table. In a very small number of cases, thunks may be inlined & debug info might be able to save profilers & similar tools from misclassifying those cases as part of the caller. The extra changes here plumb through the VarDecl for various cases to CodeGenFunction - this provides better fidelity through a few APIs but generally just causes the CGF::StartFunction to fallback to using the name of the IR function as the name in the debug info. The changes to debug-info-global-ctor-dtor.cpp seem like goodness. The two names that go missing (in favor of only emitting those names as linkage names) are names that can be demangled - emitting them only as the linkage name should encourage tools to do just that. Again, thanks to Dinesh Dwivedi for investigation/work on this issue. llvm-svn: 189421
* Handle predefined expression for a captured statement
  Wei Pan, 2013-08-26 (1 file, +4/-0)
  - __func__ or __FUNCTION__ returns the captured statement's parent function name, not the compiler-generated one.
  Differential Revision: http://llvm-reviews.chandlerc.com/D1491
  Reviewed by bkramer
  llvm-svn: 189219
* Sema: Use the right type for PredefinedExpr when it's in a lambda.
  Benjamin Kramer, 2013-08-21 (1 file, +15/-8)
  1. We now print the return type of lambdas and return type deduced functions as "auto". Trailing return types with decltype print the underlying type.
  2. Use the lambda or block scope for the PredefinedExpr type instead of the parent function. This fixes PR16946, a strange mismatch between type of the expression and the actual result.
  3. Verify the type in CodeGen.
  4. The type for blocks is still wrong. They are numbered and the name is not known until CodeGen.
  llvm-svn: 188900
* CodeGen: __uuidof should work even with an incomplete _GUID type
  David Majnemer, 2013-08-15 (1 file, +2/-1)
  Summary: We would crash in CodeGen::CodeGenModule::EmitUuidofInitializer because our attempt to enter CodeGen::CodeGenModule::EmitConstantValue will be foiled: the type of the constant value is incomplete. Instead, create an unnamed type with the proper layout on all platforms. Punt the problem of wrongly defined struct _GUID types to the user. (It's impossible because the TU may never get to see the type and thus we can't verify that it is suitable.)
  This fixes PR16856.
  Reviewers: rsmith, rnk, thakis
  Reviewed By: rnk
  CC: cfe-commits
  Differential Revision: http://llvm-reviews.chandlerc.com/D1375
  llvm-svn: 188481
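  For illustration only (not taken from the commit), a sketch of the case being fixed; the class name and UUID string are invented:

      // build with: clang++ -fms-extensions example.cpp
      struct _GUID;   // only forward-declared; the layout is never visible here

      struct __declspec(uuid("12345678-1234-1234-1234-1234567890ab")) Widget {};

      const _GUID *widget_id() {
        return &__uuidof(Widget);   // must not crash even though _GUID is incomplete
      }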
* UBSan: Fix alignment checks emitted in downcasts.
  Filipe Cabecinhas, 2013-08-08 (1 file, +6/-6)
  Summary: UBSan was checking for alignment of the derived class on the pointer to the base class, before converting. With some class hierarchies, this could generate false positives.
  Added test-case.
  llvm-svn: 187948
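  For illustration only (not taken from the commit), a sketch of a hierarchy where checking the derived class's alignment on the unadjusted base pointer fires spuriously; the offsets assume a typical Itanium-style layout:

      // build with: clang++ -fsanitize=alignment example.cpp
      struct A { char a; };
      struct B { char b; };
      struct D : A, B { double d; };   // D needs 8-byte alignment; the B
                                       // subobject sits at offset 1 inside D

      D *down(B *b) {
        // The incoming B* typically points at an odd address, so checking
        // alignof(D) on it before the base-to-derived adjustment would be a
        // false positive; the check belongs on the converted pointer.
        return static_cast<D *>(b);
      }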
* Debug Info / EmitCallArgs: arguments may modify the debug location.
  Adrian Prantl, 2013-07-26 (1 file, +8/-1)
  Restore it after each argument is emitted. This fixes the scope info for inlined subroutines inside of function argument expressions. (E.g., anything STL).
  rdar://problem/12592135
  llvm-svn: 187240
* Remove trailing whitespace.
  Craig Topper, 2013-07-26 (1 file, +76/-76)
  llvm-svn: 187189
* Make IgnoreParens() look through ChooseExprs.
  Eli Friedman, 2013-07-20 (1 file, +1/-1)
  This is the same way GenericSelectionExpr works, and it's generally a more consistent approach. A large part of this patch is devoted to caching the value of the condition of a ChooseExpr; it's needed to avoid threading an ASTContext into IgnoreParens().
  Fixes <rdar://problem/14438917>.
  llvm-svn: 186738
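  For illustration only (not taken from the commit), the construct a ChooseExpr models; this assumes Clang accepts the __builtin_choose_expr extension in the mode you compile in:

      #include <cstdio>

      int main() {
        // With a constant true condition, the whole expression behaves
        // exactly like its second operand (42); IgnoreParens() now looks
        // straight through the ChooseExpr to that chosen operand.
        int x = __builtin_choose_expr(1, 42, 1.5);
        std::printf("%d\n", x);
      }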
* Simplify atomic load/store IRGen.
  Eli Friedman, 2013-07-11 (1 file, +2/-3)
  Also fixes a couple minor bugs along the way; see testcases.
  llvm-svn: 186049
* Use SmallVectorImpl& for function arguments instead of SmallVector.
  Craig Topper, 2013-07-05 (1 file, +1/-1)
  llvm-svn: 185715
* Delete dead code.
  Eli Friedman, 2013-06-28 (1 file, +16/-39)
  llvm-svn: 185119
* Emit initializers for static-storage-duration temporaries as constants where possible.
  Richard Smith, 2013-06-14 (1 file, +17/-1)
  llvm-svn: 183967
* Simplify: we don't need any special-case lifetime extension when initializing declarations of reference type; they're handled by the general case handling of MaterializeTemporaryExpr.
  Richard Smith, 2013-06-12 (1 file, +29/-52)
  llvm-svn: 183875
* PR12086, PR15117
  Richard Smith, 2013-06-12 (1 file, +0/-9)
  Introduce CXXStdInitializerListExpr node, representing the implicit construction of a std::initializer_list<T> object from its underlying array. The AST representation of such an expression goes from an InitListExpr with a flag set, to a CXXStdInitializerListExpr containing a MaterializeTemporaryExpr containing an InitListExpr (possibly wrapped in a CXXBindTemporaryExpr).
  This more detailed representation has several advantages, the most important of which is that the new MaterializeTemporaryExpr allows us to directly model lifetime extension of the underlying temporary array. Using that, this patch *drastically* simplifies the IR generation of this construct, provides IR generation support for nested global initializer_list objects, fixes several bugs where the destructors for the underlying array would accidentally not get invoked, and provides constant expression evaluation support for std::initializer_list objects.
  llvm-svn: 183872
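  For illustration only (not taken from the commit), the source construct being modeled; the element type and names are invented:

      #include <initializer_list>

      void use() {
        // The braced list materializes a temporary array of three ints;
        // its lifetime is extended to match that of 'il', which is the
        // extension the new MaterializeTemporaryExpr makes explicit.
        std::initializer_list<int> il = {1, 2, 3};
        for (int v : il) { (void)v; }
      }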
* Reapply r183721, reverted in r183776, with a fix for a bug in the former (we were lacking ExprWithCleanups nodes in some cases where the new approach to lifetime extension needed them).
  Richard Smith, 2013-06-12 (1 file, +199/-266)
  Original commit message:
  Rework IR emission for lifetime-extended temporaries. Instead of trying to walk into the expression and dig out a single lifetime-extended entity and manually pull its cleanup outside the expression, instead keep a list of the cleanups which we'll need to emit when we get to the end of the full-expression. Also emit those cleanups early, as EH-only cleanups, to cover the case that the full-expression does not terminate normally. This allows IR generation to properly model temporary lifetime when multiple temporaries are extended by the same declaration.
  We have a pre-existing bug where an exception thrown from a temporary's destructor does not clean up lifetime-extended temporaries created in the same expression and extended to automatic storage duration; that is not fixed by this patch.
  llvm-svn: 183859
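  For illustration only (not taken from the commit), a sketch of one declaration extending several temporaries at once, which is the case this rework models; the types are invented and the lifetime-extension behavior assumes C++11 reference binding through aggregate initialization:

      struct Noisy {
        ~Noisy() {}   // user-provided (non-trivial), so a cleanup must be emitted
      };

      struct Refs {
        const Noisy &a;
        const Noisy &b;
      };

      void f() {
        // The Refs temporary and both Noisy temporaries are all extended to
        // the lifetime of 'r'; their destructors run when 'r' goes out of
        // scope (or as EH-only cleanups if the full-expression throws).
        const Refs &r = Refs{Noisy{}, Noisy{}};
      }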
* Revert r183721. It caused cleanups to be delayed too long in some cases.
  Richard Smith, 2013-06-11 (1 file, +266/-199)
  Testcase to follow.
  llvm-svn: 183776
* Silence GCC warning.
  Benjamin Kramer, 2013-06-11 (1 file, +1/-0)
  llvm-svn: 183742
* Rework IR emission for lifetime-extended temporaries. Instead of trying to walk into the expression and dig out a single lifetime-extended entity and manually pull its cleanup outside the expression, instead keep a list of the cleanups which we'll need to emit when we get to the end of the full-expression.
  Richard Smith, 2013-06-11 (1 file, +198/-266)
  Also emit those cleanups early, as EH-only cleanups, to cover the case that the full-expression does not terminate normally. This allows IR generation to properly model temporary lifetime when multiple temporaries are extended by the same declaration.
  We have a pre-existing bug where an exception thrown from a temporary's destructor does not clean up lifetime-extended temporaries created in the same expression and extended to automatic storage duration; that is not fixed by this patch.
  llvm-svn: 183721
* Remove some unreachable (and wrong) code and replace it with an assertion.
  Richard Smith, 2013-06-04 (1 file, +3/-13)
  llvm-svn: 183206
* Fix handling of pointers-to-members and comma expressions when lifetime-extending temporaries in reference bindings.
  Richard Smith, 2013-06-03 (1 file, +10/-1)
  llvm-svn: 183089
* CodeGen for CapturedStmts
  Ben Langmuir, 2013-05-09 (1 file, +12/-4)
  EmitCapturedStmt creates a captured struct containing all of the captured variables, and then emits a call to the outlined function. This is similar in principle to EmitBlockLiteral.
  GenerateCapturedFunction actually produces the outlined function. It is based on GenerateBlockFunction, but is much simpler. The function type is determined by the parameters that are in the CapturedDecl.
  Some changes have been added to this patch that were reviewed as part of the serialization patch and moving the parameters to the captured decl.
  Differential Revision: http://llvm-reviews.chandlerc.com/D640
  llvm-svn: 181536
* Correctly emit certain implicit references to 'self' even within a lambda.
  John McCall, 2013-05-03 (1 file, +11/-0)
  Bug #1 is that CGF's CurFuncDecl was "stuck" at lambda invocation functions. Fix that by generally improving getNonClosureContext to look through lambdas and captured statements but only report code contexts, which is generally what's wanted. Audit uses of CurFuncDecl and getNonClosureAncestor for correctness.
  Bug #2 is that lambdas weren't specially mapping 'self' when inside an ObjC method. Fix that by removing the requirement for that and using the normal EmitDeclRefLValue path in LoadObjCSelf.
  rdar://13800041
  llvm-svn: 181000
* Struct-path aware TBAA: fix handling of may_alias attribute.
  Manman Ren, 2013-04-27 (1 file, +2/-2)
  llvm-svn: 180656
* C++1y: Allow aggregates to have default initializers.
  Richard Smith, 2013-04-20 (1 file, +4/-0)
  Add a CXXDefaultInitExpr, analogous to CXXDefaultArgExpr, and use it both in CXXCtorInitializers and in InitListExprs to represent a default initializer.
  There's an additional complication here: because the default initializer can refer to the initialized object via its 'this' pointer, we need to make sure that 'this' points to the right thing within the evaluation.
  llvm-svn: 179958
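  For illustration only (not taken from the commit), the C++1y construct this enables; the struct and values are invented:

      // build with: clang++ -std=c++1y example.cpp
      struct Point {
        int x = 1;   // default member initializer, modeled by CXXDefaultInitExpr
        int y = 2;
      };

      Point a = {};     // x == 1, y == 2
      Point b = {10};   // x == 10, y == 2 (the default initializer still applies)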
* Implement CodeGen for C++11 thread_local, following the Itanium ABI specification as discussed on cxx-abi-dev.
  Richard Smith, 2013-04-19 (1 file, +5/-1)
  llvm-svn: 179858
* CodeGen support for function-local static thread_local variables with non-constant constructors or non-trivial destructors.
  Richard Smith, 2013-04-14 (1 file, +10/-3)
  Plus bugfixes for thread_local references bound to temporaries (the temporaries themselves are lifetime-extended to become thread_local), and the corresponding case for std::initializer_list.
  llvm-svn: 179496
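  For illustration only (not taken from the commit), the kind of variable this covers; the function name and initializer are invented:

      #include <string>

      const std::string &thread_name() {
        // Non-constant constructor and non-trivial destructor: initialized
        // on first use in each thread, destroyed at that thread's exit.
        static thread_local std::string name = "worker";
        return name;
      }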
* Annotate flavor of TLS variable (statically or dynamically initialized) onto the AST.
  Richard Smith, 2013-04-13 (1 file, +1/-1)
  llvm-svn: 179447
* Struct-path aware TBAA: uniformize scalar tag and path tag.
  Manman Ren, 2013-04-11 (1 file, +2/-2)
  For struct-path aware TBAA, we used to use the scalar type node as the scalar tag, which has an incompatible format with the struct-path tag. We now use the same format: base type, access type and offset.
  We also uniformize the scalar type node and the struct type node: a name plus a list of pairs (offset + pointer to MDNode). For a scalar type, we have a single pair. These changes make implementation of the aliasing rules easier.
  llvm-svn: 179335
* Force a load when creating a reference to a temporary copied from a bitfield.
  Jordan Rose, 2013-04-11 (1 file, +124/-129)
  For this source:
    const int &ref = someStruct.bitfield;
  We used to generate this AST:
    DeclStmt [...]
    `-VarDecl [...] ref 'const int &'
      `-MaterializeTemporaryExpr [...] 'const int' lvalue
        `-ImplicitCastExpr [...] 'const int' lvalue <NoOp>
          `-MemberExpr [...] 'int' lvalue bitfield .bitfield [...]
            `-DeclRefExpr [...] 'struct X' lvalue ParmVar [...] 'someStruct' 'struct X'
  Notice the lvalue inside the MaterializeTemporaryExpr, which is very confusing (and caused an assertion to fire in the analyzer - PR15694). We now generate this:
    DeclStmt [...]
    `-VarDecl [...] ref 'const int &'
      `-MaterializeTemporaryExpr [...] 'const int' lvalue
        `-ImplicitCastExpr [...] 'int' <LValueToRValue>
          `-MemberExpr [...] 'int' lvalue bitfield .bitfield [...]
            `-DeclRefExpr [...] 'struct X' lvalue ParmVar [...] 'someStruct' 'struct X'
  Which makes a lot more sense. This allows us to remove code in both CodeGen and AST that hacked around this special case.
  The commit also makes Clang accept this (legal) C++11 code:
    int &&ref = std::move(someStruct).bitfield
  PR15694 / <rdar://problem/13600396>
  llvm-svn: 179250
* <rdar://problem/13325066> Destroy std::initializer_list temporaries whose lifetime has been extended by reference binding.
  Douglas Gregor, 2013-04-06 (1 file, +20/-4)
  llvm-svn: 178939
* Initial support for struct-path aware TBAA.
  Manman Ren, 2013-04-04 (1 file, +34/-8)
  Added TBAABaseType and TBAAOffset in LValue. These two fields are initialized to the actual type and 0, and are updated in EmitLValueForField. Path-aware TBAA tags are enabled for EmitLoadOfScalar and EmitStoreOfScalar.
  Added command line option -struct-path-tbaa.
  llvm-svn: 178797
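  For illustration only, a case adapted from the commit's test (test/CodeGen/tbaa.cpp) where access-path tags can disambiguate two stores that plain scalar TBAA cannot; whether a given fold actually happens is an assumption about the LLVM passes consuming the tags:

      // cc1 option -struct-path-tbaa enables the new tags
      typedef struct { unsigned short f16; unsigned int f32; } StructA;
      typedef struct { unsigned short f16; unsigned int f32; } StructS;

      unsigned int g(StructA *A, StructS *S) {
        A->f32 = 1;
        S->f32 = 4;
        // Both stores are to 'unsigned int', so scalar TBAA sees them as
        // possibly aliasing; with path tags the base types StructA and
        // StructS differ, so the reload below may be folded to 1.
        return A->f32;
      }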
* Revert r178784 since it does not have a commit message.
  Manman Ren, 2013-04-04 (1 file, +8/-34)
  llvm-svn: 178796
* Index: include/clang/Driver/CC1Options.td
  Manman Ren, 2013-04-04 (1 file, +34/-8)
  (This commit landed with the raw patch as its message and no description; it was reverted in r178796 for that reason and recommitted with a proper message as r178797, "Initial support for struct-path aware TBAA", above.) The patch adds the -struct-path-tbaa cc1 option and the -fstruct-path-tbaa driver flag, threads TBAABaseType/TBAAOffset through LValue and the EmitLoadOfScalar/EmitStoreOfScalar paths, and adds getTBAAStructTypeInfo/getTBAAStructTagInfo (with a TBAAPathTag cache key) to CodeGenModule and CodeGenTBAA, touching:
    include/clang/Driver/CC1Options.td
    include/clang/Driver/Options.td
    include/clang/Frontend/CodeGenOptions.def
    lib/CodeGen/CGExpr.cpp
    lib/CodeGen/CGValue.h
    lib/CodeGen/CodeGenFunction.h
    lib/CodeGen/CodeGenModule.cpp
    lib/CodeGen/CodeGenModule.h
    lib/CodeGen/CodeGenTBAA.cpp
    lib/CodeGen/CodeGenTBAA.h
    lib/Driver/Tools.cpp
    lib/Frontend/CompilerInvocation.cpp
    test/CodeGen/tbaa.cpp (new)
  llvm-svn: 178784
* ubsan: Pass floating-point arguments to the runtime by value if they fit the value argument.
  Richard Smith, 2013-03-22 (1 file, +10/-1)
  If not, be sure we don't accidentally use a dynamic alloca.
  llvm-svn: 177690
* Force column info only for direct inlined functions. This should strike the balance between expected behavior and compatibility with the gdb testsuite.
  Adrian Prantl, 2013-03-15 (1 file, +5/-3)
  (GDB gets confused if we break an expression into multiple debug stmts so we enable this behavior only for inlined functions. For the full experience people can still use -gcolumn-info.)
  llvm-svn: 177164
* Tighten up the rules for precise lifetime and document the requirements on the ARC optimizer.
  John McCall, 2013-03-13 (1 file, +11/-5)
  rdar://13407451
  llvm-svn: 176924