path: root/clang/lib/CodeGen/CGExprScalar.cpp
Commit log. Each entry shows the commit subject, then (author, date, files changed, lines removed/added), then the full commit message.
* [OpenCL] Fix __builtin_astype for vec3 types. (Yaxun Liu, 2016-06-08, 1 file, -38/+36)
  __builtin_astype does not generate correct LLVM IR for vec3 types. This patch inserts bitcasts to/from vec4 when necessary in addition to generating vector shuffle. Sema and codegen tests are added. Differential Revision: http://reviews.llvm.org/D20133 llvm-svn: 272153
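  A minimal IRBuilder-level sketch of the widening pattern this commit describes (illustrative only, not the actual CGExprScalar code; written against a recent LLVM C++ API, whose shuffle-mask signatures have changed over the years):

      #include "llvm/IR/Constants.h"
      #include "llvm/IR/DerivedTypes.h"
      #include "llvm/IR/IRBuilder.h"

      // Reinterpret a <3 x float> SSA value as <3 x i32>: widen to four lanes so
      // the bitcast is between same-width types, bitcast, then narrow back.
      llvm::Value *emitVec3AsType(llvm::IRBuilder<> &B, llvm::Value *Src) {
        llvm::LLVMContext &C = B.getContext();
        auto *V4I = llvm::FixedVectorType::get(llvm::Type::getInt32Ty(C), 4);

        llvm::Value *Wide = B.CreateShuffleVector(
            Src, llvm::PoisonValue::get(Src->getType()),
            llvm::ArrayRef<int>{0, 1, 2, -1});          // <3 x float> -> <4 x float>
        llvm::Value *Cast = B.CreateBitCast(Wide, V4I); // <4 x float> -> <4 x i32>
        return B.CreateShuffleVector(
            Cast, llvm::PoisonValue::get(V4I),
            llvm::ArrayRef<int>{0, 1, 2});              // <4 x i32> -> <3 x i32>
      }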
* [Sema,CodeGen] Remove comment from SemaChecking about a builtin_shufflevector form that it doesn't support. (Craig Topper, 2016-05-18, 1 file, -18/+2)
  Remove CodeGen support for the same form since it could never have been used due to the missing support in Sema. I couldn't find any documentation that this form existed either. Nor is there documentation for one of the remaining two forms, but there is a testcase that uses it. llvm-svn: 269879
* Enable support for __float128 in Clang and enable it on pertinent platforms (Nemanja Ivanovic, 2016-05-09, 1 file, -5/+9)
  This patch corresponds to reviews: http://reviews.llvm.org/D15120 http://reviews.llvm.org/D19125
  It adds support for the __float128 keyword, literals and a target feature to enable it. Based on the latter of the two aforementioned reviews, this feature is enabled on Linux on i386/X86 as well as SystemZ.
  This is also the second attempt at committing this feature. The first attempt did not enable it on required platforms, which caused failures when compiling type_traits with -std=gnu++11. If you see failures with compiling this header on your platform after this commit, it is likely that your platform needs to have this feature enabled. llvm-svn: 268898
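  A hedged example of the user-visible feature (compiles only on targets where clang enables __float128, such as x86_64 Linux; the 'Q' literal suffix is a GNU extension):

      static_assert(sizeof(__float128) == 16, "quad precision is 128 bits wide");

      __float128 scale(__float128 x) {
        return x * 2.0Q; // 'Q' suffix yields a __float128 literal
      }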
* Revert 266186 as it breaks anything that includes type_traits on some platforms (Nemanja Ivanovic, 2016-04-15, 1 file, -9/+5)
  Since this patch provided support for the __float128 type but disabled it on all platforms by default, some platforms can't compile type_traits with -std=gnu++11 since there is a specialization with __float128. This reverts the patch until D19125 is approved (i.e. we know which platforms need this support enabled). llvm-svn: 266460
* Enable support for __float128 in Clang (Nemanja Ivanovic, 2016-04-13, 1 file, -5/+9)
  This patch corresponds to review: http://reviews.llvm.org/D15120
  It adds support for the __float128 keyword, literals and a target feature to enable it. This support is disabled by default on all targets and any target that has support for this type is free to add it. Based on feedback that I've received from target maintainers, this appears to be the right thing for most targets. I have not heard from the maintainers of X86 which I believe supports this type. I will subsequently investigate the impact of enabling this on X86. llvm-svn: 266186
* [OpenCL] Handle AddressSpaceConversion when target address space does not change. (Yaxun Liu, 2016-04-12, 1 file, -1/+4)
  In codegen different address spaces may be mapped to the same address space for a target, e.g. in x86/x86-64 all address spaces are mapped to 0. Therefore AddressSpaceConversion should be translated by CreatePointerBitCastOrAddrSpaceCast instead of CreateAddrSpaceCast. Differential Revision: http://reviews.llvm.org/D18713 llvm-svn: 266107
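  A rough sketch of the distinction the commit relies on (illustrative only, not the clang code itself):

      #include "llvm/IR/IRBuilder.h"

      // CreateAddrSpaceCast always emits an addrspacecast, which is invalid when
      // the source and destination address spaces turn out to be equal (as on
      // x86, where all language address spaces map to 0).
      // CreatePointerBitCastOrAddrSpaceCast falls back to a plain bitcast then.
      llvm::Value *castPointerForTarget(llvm::IRBuilder<> &B, llvm::Value *Ptr,
                                        llvm::Type *DstPtrTy) {
        return B.CreatePointerBitCastOrAddrSpaceCast(Ptr, DstPtrTy);
      }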
* revert SVN r265702, r265640 (Saleem Abdulrasool, 2016-04-08, 1 file, -1/+1)
  Revert the two changes to thread CodeGenOptions into the TargetInfo allocation and to fix the layering violation by moving CodeGenOptions into Basic. Code Generation is arguably not particularly "basic". This addresses Richard's post-commit review comments. This change purely does the mechanical revert and will be followed up with an alternate approach to thread the desired information into TargetInfo. llvm-svn: 265806
* Basic: move CodeGenOptions from Frontend (Saleem Abdulrasool, 2016-04-07, 1 file, -1/+1)
  This is a mechanical move of CodeGenOptions from libFrontend to libBasic. This fixes the layering violation introduced earlier by threading CodeGenOptions into TargetInfo. It should also fix the modules based self-hosting builds. NFC. llvm-svn: 265702
* NFC: make AtomicOrdering an enum class (JF Bastien, 2016-04-06, 1 file, -5/+6)
  Summary: See LLVM change D18775 for details, this change depends on it. Reviewers: jyknight, reames Subscribers: cfe-commits Differential Revision: http://reviews.llvm.org/D18776 llvm-svn: 265569
* Default vaarg lowering should support indirect struct types. (James Y Knight, 2016-02-24, 1 file, -3/+5)
  Fixes PR11517 for SPARC.
  On most targets, clang lowers va_arg itself, eschewing the use of the llvm vaarg instruction. This is necessary (at least for now) as the type argument to the vaarg instruction cannot represent all the ABI information that is needed to support complex calling conventions. However, on targets with a simpler varargs ABI, the LLVM instruction can work just fine, and clang can simply lower to it. Unfortunately, even on such targets, vaarg with a struct argument would fail, because the default lowering to vaarg was naive: it didn't take into account the ABI attribute computed by classifyArgumentType. In particular, for the DefaultABIInfo, structs are supposed to be passed indirectly and so llvm's vaarg instruction should be emitted with a pointer argument.
  Now, vaarg instruction emission is able to use the computed ABIArgInfo for the provided argument type, which allows the default ABI support to work for structs too.
  I haven't touched the EmitVAArg implementation for PPC32_SVR4 or XCore, although I believe both are now redundant, and could be switched over to use the default implementation as well. Differential Revision: http://reviews.llvm.org/D16154 llvm-svn: 261717
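  The user-level construct whose default lowering this fixes, as a small hedged example (the struct and function names are illustrative):

      #include <cstdarg>

      struct Pair { long a, b; }; // passed indirectly under DefaultABIInfo

      long sumPairs(int n, ...) {
        va_list ap;
        va_start(ap, n);
        long total = 0;
        for (int i = 0; i < n; ++i) {
          Pair p = va_arg(ap, Pair); // previously mis-lowered on targets that
          total += p.a + p.b;        // fall back to LLVM's native vaarg instruction
        }
        va_end(ap);
        return total;
      }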
* Emit calls to objc_unsafeClaimAutoreleasedReturnValue when reclaiming a call result in order to ignore it or assign it to an __unsafe_unretained variable. (John McCall, 2016-01-27, 1 file, -9/+9)
  This avoids adding an unwanted retain/release pair when the return value is not actually returned autoreleased (e.g. when it is returned from a nonatomic getter or a typical collection accessor). This runtime function is only available on the latest Apple OS releases; the backwards-compatibility story is that you don't get the optimization unless your deployment target is recent enough. Sorry. rdar://20530049 llvm-svn: 258962
* [Bugfix] Fix ICE on constexpr vector splat. (George Burgess IV, 2016-01-13, 1 file, -14/+11)
  In {CG,}ExprConstant.cpp, we weren't treating vector splats properly. This patch makes us treat splats more properly. Additionally, this patch adds a new cast kind which allows a bool->int cast to result in -1 or 0, instead of 1 or 0 (for true and false, respectively), so we can sanely model OpenCL bool->int casts in the AST. Differential Revision: http://reviews.llvm.org/D14877 llvm-svn: 257559
* [TrailingObjects] Convert OffsetOfExpr. (James Y Knight, 2015-12-29, 1 file, -5/+5)
  That necessitated moving the OffsetOfNode class out of OffsetOfExpr. llvm-svn: 256590
* [CodeGen] Use llvm::CmpInst::Predicate instead of unsigned for parameter types in EmitCompare to eliminate some later explicit casts. NFC. (Craig Topper, 2015-12-16, 1 file, -18/+14)
  llvm-svn: 255753
* change an assert when generating fmuladd to an ordinary 'if' check (PR25719) (Sanjay Patel, 2015-12-03, 1 file, -11/+9)
  We don't want to generate fmuladd if there's a use of the fmul expression, but this shouldn't be an assert. The test case is derived from the commit message for r253337: http://reviews.llvm.org/rL253337 That commit reverted r253269: http://reviews.llvm.org/rL253269 ...but the bug exists independently of the default fp-contract setting. It just became easier to hit with that change. PR25719: https://llvm.org/bugs/show_bug.cgi?id=25719 Differential Revision: http://reviews.llvm.org/D15165 llvm-svn: 254573
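  A hedged sketch of the pattern at issue: once the multiply result has a second use, contraction into llvm.fmuladd is skipped, and with this commit that case is handled by an ordinary check instead of tripping an assert:

      // Compile with fp-contract enabled (e.g. -ffp-contract=on).
      float muladd_with_reuse(float a, float b, float c, float *saved) {
        float m = a * b;
        *saved = m;   // extra use of the fmul result blocks fmuladd formation
        return m + c; // stays a separate fmul + fadd
      }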
* CodeGen: Remove implicit ilist iterator conversions, NFC (Duncan P. N. Exon Smith, 2015-11-06, 1 file, -2/+2)
  Make ilist iterator conversions explicit in clangCodeGen. Eventually I'll remove them everywhere. llvm-svn: 252358
* Allow compound assignment expressions to be contracted when licensed by the language or pragma. (Stephen Canon, 2015-11-04, 1 file, -1/+1)
  llvm-svn: 252050
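  A small hedged illustration of what "licensed by the language or pragma" means in practice (clang also accepts -ffp-contract=on on the command line):

      float fma_accumulate(float acc, float a, float b) {
      #pragma STDC FP_CONTRACT ON
        acc += a * b; // after this change, eligible for contraction to fmuladd
        return acc;
      }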
* [DEBUG INFO] Emit debug info for type used in explicit cast only. (Alexey Bataev, 2015-10-20, 1 file, -6/+1)
  Currently debug info for types used in explicit cast only is not emitted. It happened after a patch for better alignment handling. This patch fixes this bug. Differential Revision: http://reviews.llvm.org/D13582 llvm-svn: 250795
* Fix crash in codegen on casting to `bool &`. (Alexey Bataev, 2015-10-07, 1 file, -1/+1)
  Currently codegen crashes trying to emit a cast to bool &. It happens because the bool type is converted to i1, and later the lvalue for the reference is converted to i1*. But when codegen tries to load this lvalue it crashes trying to load a value from this i1*. Differential Revision: http://reviews.llvm.org/D13325 llvm-svn: 249534
* [OpenCL] Fix casting a true boolean to an integer vector. (Anastasia Stulova, 2015-10-05, 1 file, -4/+22)
  OpenCL v1.1 s6.2.2: for the boolean value true, every bit in the result vector should be set. This change treats the i1 value as signed for the purposes of performing the cast to integer, and therefore sign-extends it into the result. Patch by Neil Hickey! http://reviews.llvm.org/D13349 llvm-svn: 249301
* Support __builtin_ms_va_list. (Charles Davis, 2015-09-17, 1 file, -2/+3)
  Summary: This change adds support for `__builtin_ms_va_list`, a GCC extension for variadic `ms_abi` functions. The existing `__builtin_va_list` support is inadequate for this because `va_list` is defined differently in the Win64 ABI vs. the System V/AMD64 ABI. Depends on D1622. Reviewers: rsmith, rnk, rjmccall CC: cfe-commits Differential Revision: http://reviews.llvm.org/D1623 llvm-svn: 247941
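  A hedged usage sketch of the builtin family (x86-64 only; the ms_abi attribute selects the Win64 calling convention, and the function name is illustrative):

      int __attribute__((ms_abi)) sum_ms(int count, ...) {
        __builtin_ms_va_list ap;
        __builtin_ms_va_start(ap, count);
        int total = 0;
        for (int i = 0; i < count; ++i)
          total += __builtin_ms_va_arg(ap, int);
        __builtin_ms_va_end(ap);
        return total;
      }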
* Compute and preserve alignment more faithfully in IR-generation. (John McCall, 2015-09-08, 1 file, -78/+41)
  Introduce an Address type to bundle a pointer value with an alignment. Introduce APIs on CGBuilderTy to work with Address values. Change core APIs on CGF/CGM to traffic in Address where appropriate. Require alignments to be non-zero. Update a ton of code to compute and propagate alignment information.
  As part of this, I've promoted CGBuiltin's EmitPointerWithAlignment helper function to CGF and made use of it in a number of places in the expression emitter.
  The end result is that we should now be significantly more correct when performing operations on objects that are locally known to be under-aligned. Since alignment is not reliably tracked in the type system, there are inherent limits to this, but at least we are no longer confused by standard operations like derived-to-base conversions and array-to-pointer decay.
  I've also fixed a large number of bugs where we were applying the complete-object alignment to a pointer instead of the non-virtual alignment, although most of these were hidden by the very conservative approach we took with member alignment.
  Also, because IRGen now reliably asserts on zero alignments, we should no longer be subject to an absurd but frustrating recurring bug where an incomplete type would report a zero alignment and then we'd naively do an alignmentAtOffset on it and emit code using an alignment equal to the largest power-of-two factor of the offset.
  We should also now be emitting much more aggressive alignment attributes in the presence of over-alignment. In particular, field access now uses alignmentAtOffset instead of min.
  Several times in this patch, I had to change the existing code-generation pattern in order to more effectively use the Address APIs. For the most part, this seems to be a strict improvement, like doing pointer arithmetic with GEPs instead of ptrtoint. That said, I've tried very hard to not change semantics, but it is likely that I've failed in a few places, for which I apologize.
  ABIArgInfo now always carries the assumed alignment of indirect and indirect byval arguments. In order to cut down on what was already a dauntingly large patch, I changed the code to never set align attributes in the IR on non-byval indirect arguments. That is, we still generate code which assumes that indirect arguments have the given alignment, but we don't express this information to the backend except where it's semantically required (i.e. on byvals). This is likely a minor regression for those targets that did provide this information, but it'll be trivial to add it back in a later patch.
  I partially punted on applying this work to CGBuiltin. Please do not add more uses of the CreateDefaultAligned{Load,Store} APIs; they will be going away eventually. llvm-svn: 246985
* Propagate SourceLocations through to get a Loc on float_cast_overflow (Filipe Cabecinhas, 2015-08-11, 1 file, -39/+49)
  Summary: float_cast_overflow is the only UBSan check without a source location attached. This patch propagates SourceLocations where necessary to get them to the EmitCheck() call. Reviewers: rsmith, ABataev, rjmccall Subscribers: cfe-commits Differential Revision: http://reviews.llvm.org/D11757 llvm-svn: 244568
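  For reference, the kind of code this check flags, as a hedged example; with this patch the runtime report can point at the cast's real source location:

      // Compile with: -fsanitize=float-cast-overflow
      int truncate_to_int(double d) {
        return static_cast<int>(d); // e.g. d = 1e100 does not fit in 'int'
      }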
* Don't repeat function names in comments. NFC. (Filipe Cabecinhas, 2015-08-05, 1 file, -19/+16)
  llvm-svn: 244018
* Fix invalid shufflevector operands (Simon Pilgrim, 2015-08-02, 1 file, -1/+12)
  This patch fixes bug 23800 ( https://llvm.org/bugs/show_bug.cgi?id=23800#c2 ). There existed a case where the index operand from extractelement was directly used to create a shufflevector mask. Since the index can be of any integral type but the mask must only contain 32-bit integers, a 64-bit index operand led to an assertion error later on. Committed on behalf of mpflanzer (Moritz Pflanzer) Differential Revision: http://reviews.llvm.org/D10838 llvm-svn: 243851
* [CodeGen] Simplify creation of shuffle masks. (Benjamin Kramer, 2015-07-28, 1 file, -6/+2)
  No functional change intended. llvm-svn: 243439
* [OPENMP] Introduced type trait "__builtin_omp_required_simd_align" for default simd alignment. (Alexey Bataev, 2015-07-02, 1 file, -0/+8)
  Adds type trait "__builtin_omp_required_simd_align" after discussions here http://reviews.llvm.org/D9894 Differential Revision: http://reviews.llvm.org/D10597 llvm-svn: 241237
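  A minimal usage sketch, assuming the trait is invoked like alignof with a parenthesized type argument:

      // Default SIMD alignment the target assumes for an OpenMP 'aligned'
      // clause when no explicit alignment is given.
      unsigned long defaultSimdAlignForDouble() {
        return __builtin_omp_required_simd_align(double);
      }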
* Implement diagnostic mode for -fsanitize=cfi*, -fsanitize=cfi-diag. (Peter Collingbourne, 2015-06-19, 1 file, -2/+6)
  This causes programs compiled with this flag to print a diagnostic when a control flow integrity check fails instead of aborting. Diagnostics are printed using UBSan's runtime library.
  The main motivation of this feature over -fsanitize=vptr is fidelity with the -fsanitize=cfi implementation: the diagnostics are printed under exactly the same conditions as those which would cause -fsanitize=cfi to abort the program. This means that the same restrictions apply regarding compiling all translation units with -fsanitize=cfi, cross-DSO virtual calls are forbidden, etc. Differential Revision: http://reviews.llvm.org/D10268 llvm-svn: 240109
* API update for streamlining of IRBuilder::CreateCall to just use ArrayRef/initializer_list+braced init (David Blaikie, 2015-05-18, 1 file, -5/+4)
  llvm-svn: 237625
* Unify sanitizer kind representation between the driver and the rest of the compiler. (Peter Collingbourne, 2015-05-11, 1 file, -5/+5)
  No functional change. Differential Revision: http://reviews.llvm.org/D9618 llvm-svn: 237055
* InstrProf: Stop using RegionCounter outside of CodeGenPGO (NFC) (Justin Bogner, 2015-04-23, 1 file, -18/+16)
  The RegionCounter type does a lot of legwork, but most of it is only meaningful within the implementation of CodeGenPGO. The uses elsewhere in CodeGen generally just want to increment or read counters, so do that directly. llvm-svn: 235664
* Unify the way we report overflow in increment/decrement operator. (Alexey Samsonov, 2015-04-23, 1 file, -31/+29)
  Summary: Make sure signed overflow in "x--" is checked with llvm.ssub.with.overflow intrinsic and is reported as: "-2147483648 - 1 cannot be represented in type 'int'" instead of: "-2147483648 + -1 cannot be represented in type 'int'", like we do for unsigned overflow. Test Plan: clang + compiler-rt regression test suite Reviewers: rsmith Subscribers: cfe-commits Differential Revision: http://reviews.llvm.org/D8236 llvm-svn: 235568
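  A hedged illustration of the behavior being unified:

      // Compile with: -fsanitize=signed-integer-overflow
      int decrement(int x) { // calling with x == INT_MIN overflows
        return x--;          // now reported as "-2147483648 - 1 cannot be
      }                      // represented in type 'int'"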
* [opaque pointer type] More GEP API migrations (David Blaikie, 2015-04-05, 1 file, -3/+3)
  Looks like the VTable code in particular will need some work to pass around the pointee type explicitly. llvm-svn: 234128
* [OPENMP] Codegen for 'atomic update' construct. (Alexey Bataev, 2015-03-30, 1 file, -8/+6)
  Adds atomic update codegen for the following forms of expressions:
    x binop= expr; x++; ++x; x--; --x; x = x binop expr; x = expr binop x;
  If x and expr are integer, and binop is associative or x is the LHS in the RHS of the assignment expression, and atomics are allowed for the type of x on the target platform, an atomicrmw instruction is emitted. Otherwise a compare-and-swap sequence is emitted:
    bb:
      ... atomic load <x>
    cont:
      <expected> = phi [ <x>, label %bb ], [ <new_failed>, %cont ]
      <desired> = <expected> binop <expr>
      <res> = cmpxchg atomic &<x>, desired, expected
      <new_failed> = <res>.field1
      br <res>.field2, label %exit, label %cont
    exit:
      ...
  Differential Revision: http://reviews.llvm.org/D8536 llvm-svn: 233513
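  The source-level construct this lowers, as a minimal hedged example (compile with -fopenmp):

      // One of the supported 'x binop= expr' forms; for a plain int on most
      // targets this becomes a single 'atomicrmw add', otherwise the
      // compare-and-swap loop shown in the commit message.
      void atomic_add(int &x, int v) {
      #pragma omp atomic update
        x += v;
      }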
* [CodeGen] Support native half inc/dec amounts. (Ahmed Bougacha, 2015-03-24, 1 file, -1/+6)
  We previously defaulted to long double, but it's also possible to have a half inc/dec amount, when LangOpts NativeHalfType is set. Currently, that's only true for OpenCL. llvm-svn: 233135
* [CodeGen] Properly support the half FP type with non-native operations. (Ahmed Bougacha, 2015-03-23, 1 file, -34/+60)
  On AArch64, the -fallow-half-args-and-returns option is the default. With it, the half type is considered legal (rather than the i16 used normally for __fp16), but no operation is, except conversions and load/stores and such.
  The previous behavior was tantamount to saying LangOpts.NativeHalfType was implied by LangOpts.HalfArgsAndReturns, which isn't true. Instead, teach the various parts of CodeGen that already know about half (using the intrinsics or not) about this weird in-between case, where the "half" type is legal, but operations on it aren't.
  This is a smaller intermediate step to the end-goal of removing the intrinsic, always using "half", and letting the backend legalize. Builds on r232968. rdar://20045970, rdar://17468714 Differential Revision: http://reviews.llvm.org/D8367 llvm-svn: 232971
* [CodeGen] Convert double -> __fp16 in one step. (Ahmed Bougacha, 2015-03-23, 1 file, -9/+18)
  Fix the CodeGen so that for types bigger than float, instead of converting to fp16 via the sequence "InTy -> float -> fp16", we perform conversions in just one step. This avoids the double rounding which potentially changes results from a natural IEEE-754 operation. rdar://17594379, rdar://17468714 Differential Revision: http://reviews.llvm.org/D4602 Part of: http://reviews.llvm.org/D8367 llvm-svn: 232968
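  A hedged example of the affected conversion (requires a target where clang provides __fp16, e.g. ARM/AArch64):

      // Previously lowered as double -> float -> half, rounding twice; after
      // this change it is a single direct conversion.
      void storeHalf(double d, __fp16 *out) {
        *out = static_cast<__fp16>(d);
      }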
* Implement bad cast checks using control flow integrity information. (Peter Collingbourne, 2015-03-14, 1 file, -0/+11)
  This scheme checks that pointer and lvalue casts are made to an object of the correct dynamic type; that is, the dynamic type of the object must be a derived class of the pointee type of the cast. The checks are currently only introduced where the class being casted to is a polymorphic class. Differential Revision: http://reviews.llvm.org/D8312 llvm-svn: 232241
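  A hedged sketch of the kind of downcast the new checks instrument; the -fsanitize=cfi-derived-cast spelling is the current flag name and is assumed here rather than taken from this commit:

      // Build with CFI enabled, e.g. -flto -fvisibility=hidden
      // -fsanitize=cfi-derived-cast (flag spelling assumed).
      struct Base { virtual ~Base(); };
      struct Derived : Base { int extra; };

      int readExtra(Base *b) {
        // Checked: *b must dynamically be a Derived (or a class derived from
        // it) for this static downcast to be legitimate.
        return static_cast<Derived *>(b)->extra;
      }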
* [UBSan] Split -fsanitize=shift into -fsanitize=shift-base and -fsanitize=shift-exponent. (Alexey Samsonov, 2015-03-09, 1 file, -26/+37)
  This is a recommit of r231150, reverted in r231409. Turns out that -fsanitize=shift-base check implementation only works if the shift exponent is valid, otherwise it contains undefined behavior itself. Make sure we check that exponent is valid before we proceed to check the base. Make sure that we actually report invalid values of base or exponent if -fsanitize=shift-base or -fsanitize=shift-exponent is specified, respectively. llvm-svn: 231711
* Revert "[UBSan] Split -fsanitize=shift into -fsanitize=shift-base and -fsanitize=shift-exponent." (Alexey Samsonov, 2015-03-05, 1 file, -32/+28)
  It's not that easy. If we're only checking -fsanitize=shift-base we still need to verify that exponent has sane value, otherwise UBSan-inserted checks for base will contain undefined behavior themselves. llvm-svn: 231409
* [UBSan] Split -fsanitize=shift into -fsanitize=shift-base and -fsanitize=shift-exponent. (Alexey Samsonov, 2015-03-03, 1 file, -28/+32)
  -fsanitize=shift is now a group that includes both these checks, so existing users should not be affected.
  This change introduces two new UBSan kinds that sanitize only the left-hand side and right-hand side of a shift operation. In practice, an invalid exponent value (negative or too large) tends to cause more portability problems, including inconsistencies between different compilers, crashes and inadequate results on non-x86 architectures, etc. That is, -fsanitize=shift-exponent failures should generally be addressed first.
  As a bonus, this change simplifies the CodeGen implementation for emitting left shift (separate checks for base and exponent are now merged by the existing generic logic in EmitCheck()), and the LLVM IR for these checks (the number of basic blocks is reduced). llvm-svn: 231150
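  A hedged example distinguishing the two new check groups:

      // -fsanitize=shift-exponent : flags n < 0 or n >= bit-width of the type
      // -fsanitize=shift-base     : flags shifts of a negative (or overflowing) base
      // -fsanitize=shift           enables both, matching the old behavior
      int shiftLeft(int x, int n) {
        return x << n;
      }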
* Sema: Parenthesized bound destructor member expressions can be called (David Majnemer, 2015-02-25, 1 file, -1/+1)
  We would wrongfully reject (a.~A)() in both the destructor and pseudo-destructor cases. This fixes PR22668. llvm-svn: 230512
* Fix typoo. (Richard Smith, 2015-02-12, 1 file, -1/+1)
  llvm-svn: 228963
* DebugInfo: Refactor default arg handling into a common place (instead of handling it repeatedly for aggregate, complex, and scalar types) (David Blaikie, 2015-02-09, 1 file, -8/+2)
  llvm-svn: 228591
* DebugInfo: Ensure calls to functions with default arguments which themselves have default arguments, still have locations. (David Blaikie, 2015-02-03, 1 file, -1/+2)
  To handle default arguments in C++ in the debug info, we disable code updating the debug location during the emission of default arguments. This code was buggy in the case of default arguments which, themselves, have default arguments - the inner default argument would re-enable debug info when it was finished, but before the outer default argument was finished. This was already a bug, but it got worse (a crasher instead of just a quality bug) with the recent improvements to debug info line quality, because...
  The ApplyDebugLocation scoped device would find the debug info disabled and not save any debug location. But then in ~ApplyDebugLocation it would find the debug info had been enabled and would then apply the no-location. Then the outer function call would be emitted without any location. That's bad.
  Arguably we could /also/ fix ApplyDebugLocation to assert on this situation (where debug info was disabled in the ctor and enabled in the dtor, or the other way around) but this is at least the necessary fix regardless. (Also, I imagine this disabling behavior might need to be in place for CGExprComplex and CGExprAgg too, maybe...?) And I seem to recall seeing some weird default arg stepping behavior recently which might be related to this too... I'll have to look into it. llvm-svn: 228053
* Address review feedback for r228003. (Adrian Prantl, 2015-02-03, 1 file, -1/+1)
  - use named constructors
  - get rid of MarkAsPrologue
  llvm-svn: 228021
* Merge ArtificialLocation into ApplyDebugLocation and make a clear distinction between the different use-cases. (Adrian Prantl, 2015-02-03, 1 file, -1/+1)
  With the previous default behavior we would occasionally emit empty debug locations in situations where they actually were strictly required (= on invoke insns). We now have a choice between defaulting to an empty location or an artificial location.
  Specifically, this fixes a bug caused by a missing debug location when emitting C++ EH cleanup blocks from within an artificial function, such as an ObjC destroy helper function. rdar://problem/19670595 llvm-svn: 228003
* DebugInfo: Use the preferred location rather than the start location for expression line info (David Blaikie, 2015-01-25, 1 file, -1/+1)
  This causes things like assignment to refer to the '=' rather than the LHS when attributing the store instruction, for example. There were essentially 3 options for this:
  - The beginning of an expression (this was the behavior prior to this commit). This meant that stepping through subexpressions would bounce around from subexpressions back to the start of the outer expression, etc. (eg: x + y + z would go x, y, x, z, x (the repeated 'x's would be where the actual addition occurred)).
  - The end of an expression. This seems to be what GCC does /mostly/, and certainly this for function calls. This has the advantage that progress is always 'forwards' (never jumping backwards - except for independent subexpressions if they're evaluated in interesting orders, etc). "x + y + z" would go "x y z" with the additions occurring at y and z after the respective loads. The problem with this is that the user would still have to think fairly hard about precedence to realize which subexpression is being evaluated or which operator overload is being called in, say, an asan backtrace.
  - The preferred location or 'exprloc'. In this case you get sort of what you'd expect, though it's a bit confusing in its own way due to going 'backwards'. In this case the locations would be: "x y + z +" in lovely postfix arithmetic order. But this does mean that if the op+ were an operator overload, say, and in a backtrace, the backtrace will point to the exact '+' that's being called, not to the end of one of its operands. (actually the operator overload case doesn't work yet for other reasons, but that's being fixed - but this at least gets scalar/complex assignments and other plain operators right)
  llvm-svn: 227027
* Reapply r225000 (reverted in r225555): DebugInfo: Generalize debug info location handling (and follow-up commits). (David Blaikie, 2015-01-14, 1 file, -1/+2)
  Several pieces of code were relying on implicit debug location setting which usually lead to incorrect line information anyway. So I've fixed those (in r225955 and r225845) separately which should pave the way for this commit to be cleanly reapplied.
  The reason these implicit dependencies resulted in crashes with this patch is that the debug location would no longer implicitly leak from one place to another, but be set back to invalid. Once a call with no/invalid location was emitted, if that call was ever inlined it could produce invalid debugloc chains and assert during LLVM's codegen. There may be further cases of such bugs in this patch - they're hard to flush out with regression testing, so I'll keep an eye out for reports and investigate/fix them ASAP if they come up.
  Original commit message: Reapply "DebugInfo: Generalize debug info location handling" Originally committed in r224385 and reverted in r224441 due to concerns this change might've introduced a crash. Turns out this change fixes the crash introduced by one of my earlier more specific location handling changes (those specific fixes are reverted by this patch, in favor of the more general solution).
  Recommitted in r224941 and reverted in r224970 after it caused a crash when building compiler-rt. Looks to be due to this change zeroing out the debug location when emitting default arguments (which were meant to inherit their outer expression's location) thus creating call instructions without locations - these create problems for inlining and must not be created. That is fixed and tested in this version of the change.
  Original commit message: This is a more scalable (fixed in mostly one place, rather than many places that will need constant improvement/maintenance) solution to several commits I've made recently to increase source fidelity for subexpressions.
  This resetting had to be done at the DebugLoc level (not the SourceLocation level) to preserve scoping information (if the resetting was done with CGDebugInfo::EmitLocation, it would've caused the tail end of an expression's codegen to end up in a potentially different scope than the start, even though it was at the same source location). The drawback to this is that it might leave CGDebugInfo out of sync. Ideally CGDebugInfo shouldn't have a duplicate sense of the current SourceLocation, but for now it seems it does... - I don't think I'm going to tackle removing that just now.
  I expect this'll probably cause some more buildbot fallout & I'll investigate that as it comes up.
  Also these sort of improvements might be starting to show a weakness/bug in LLVM's line table handling: we don't correctly emit is_stmt for statements, we just put it on every line table entry. This means one statement split over multiple lines appears as multiple 'statements' and two statements on one line (without column info) are treated as one statement. I don't think we have any IR representation of statements that would help us distinguish these cases and identify the beginning of each statement - so that might be something we need to add (possibly to the lexical scope chain - a scope for each statement). This does cause some problems for GDB and possibly other DWARF consumers. llvm-svn: 225956
* [mips] Fix va_arg() for pointer types on big-endian N32. (Daniel Sanders, 2015-01-13, 1 file, -2/+6)
  Summary: The MIPS ABIs treat pointers in the same way as integers. They are sign-extended to 32-bit for O32, and 64-bit for N32/N64. This doesn't matter for O32 and N64 where pointers are already the correct width, but it does matter for big-endian N32, where pointers are 32-bit and need promoting. The caller side is already passing pointers correctly. This patch corrects the callee. Reviewers: vmedic, atanasyan Reviewed By: atanasyan Subscribers: cfe-commits Differential Revision: http://reviews.llvm.org/D6812 llvm-svn: 225782