path: root/clang/lib/CodeGen/CodeGenFunction.h
Commit log, newest first. Each entry shows the subject (author, date; files changed, lines -/+), followed by the commit message.
* [OpenCL] Improve destructor support in C++ for OpenCL (Marco Antognini, 2019-07-22; 1 file, -6/+7)
  This re-applies r366422 with a fix for bug PR42665 and a new regression test.
  llvm-svn: 366670
* Revert r366422: [OpenCL] Improve destructor support in C++ for OpenCL (Ilya Biryukov, 2019-07-18; 1 file, -7/+6)
  Reason: this commit causes crashes in the clang compiler when building LLVM
  Support with libc++; see https://bugs.llvm.org/show_bug.cgi?id=42665 for
  details.
  llvm-svn: 366429
* [OpenCL] Improve destructor support in C++ for OpenCL (Marco Antognini, 2019-07-18; 1 file, -6/+7)
  Summary:
  This patch does mainly three things:
  1. It fixes a false positive error detection in Sema that is similar to
     D62156. The error happens when explicitly calling an overloaded
     destructor for different address spaces.
  2. It selects the correct destructor when multiple overloads for address
     spaces are available.
  3. It inserts the expected address space cast when invoking a destructor,
     if needed, and therefore fixes a crash due to the unmet assertion in
     llvm::CastInst::Create.

  The following is a reproducer of the three issues:

    struct MyType {
      ~MyType() {}
      ~MyType() __constant {}
    };
    __constant MyType myGlobal{};

    kernel void foo() {
      myGlobal.~MyType(); // 1 and 2.
      // 1. error: cannot initialize object parameter of type
      //    '__generic MyType' with an expression of type '__constant MyType'
      // 2. error: no matching member function for call to '~MyType'
    }

    kernel void bar() {
      // 3. The implicit call to the destructor crashes due to:
      //    Assertion `castIsValid(op, S, Ty) && "Invalid cast!"' failed.
      //    in llvm::CastInst::Create.
      MyType myLocal;
    }

  The added test depends on D62413 and covers a few more things than the
  above reproducer.
  Subscribers: yaxunl, Anastasia, cfe-commits
  Tags: #clang
  Differential Revision: https://reviews.llvm.org/D64569
  llvm-svn: 366422
* Fix unnamed bitfield issue and add tests for the __builtin_preserve_access_index intrinsic (Yonghong Song, 2019-07-16; 1 file, -0/+3)
  The original commit is r366076. It was temporarily reverted (r366155) due to
  a test failure. This resubmission makes the tests more robust by accepting
  regexes instead of hardcoded names/references in several places.

  This is a followup patch for https://reviews.llvm.org/D61809: handle unnamed
  bitfields properly and add more test cases. An unnamed bitfield is ignored
  by debug info, so such a struct/union member must also be skipped when
  computing the member index in the debug info. D61809 contains two test
  cases, but they are not enough: they do not check the generated IR at a
  fine-grained level, and there are no semantic-checking tests. This patch
  adds unit tests for both code generation and semantic checking of the new
  intrinsic.
  Signed-off-by: Yonghong Song <yhs@fb.com>
  llvm-svn: 366231
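  For illustration, a minimal sketch of the situation the fix addresses (the
  struct and field names are invented here, not taken from the commit's
  tests):

  ```c
  struct s {
    int a;    /* debug-info member index 0 */
    int :32;  /* unnamed bitfield: occupies storage in the IR layout,
                 but is skipped entirely by debug info */
    int b;    /* debug-info member index 1, even though it is the third
                 member in the source */
  };
  ```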
* Finish "Adapt -fsanitize=function to SANITIZER_NON_UNIQUE_TYPEINFO"Stephan Bergmann2019-07-161-1/+1
| | | | | | | | | | | | | | | | | | | | | i.e., recent 5745eccef54ddd3caca278d1d292a88b2281528b: * Bump the function_type_mismatch handler version, as its signature has changed. * The function_type_mismatch handler can return successfully now, so SanitizerKind::Function must be AlwaysRecoverable (like for SanitizerKind::Vptr). * But the minimal runtime would still unconditionally treat a call to the function_type_mismatch handler as failure, so disallow -fsanitize=function in combination with -fsanitize-minimal-runtime (like it was already done for -fsanitize=vptr). * Add tests. Differential Revision: https://reviews.llvm.org/D61479 llvm-svn: 366186
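  As a concrete consequence for users (a sketch, not output copied from the
  commit), the driver now rejects the combination, mirroring the existing
  -fsanitize=vptr restriction:

  ```
  $ clang++ -fsanitize=function -fsanitize-minimal-runtime test.cpp
  # rejected: -fsanitize=function is not allowed together with
  # -fsanitize-minimal-runtime
  ```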
* Temporarily revert "fix unnamed bitfield issue and add tests for __builtin_preserve_access_index intrinsic" (Eric Christopher, 2019-07-15; 1 file, -3/+0)
  The commit had tests that would only work with names in the IR.
  This reverts commit r366076.
  llvm-svn: 366155
* Fix unnamed bitfield issue and add tests for the __builtin_preserve_access_index intrinsic (Yonghong Song, 2019-07-15; 1 file, -0/+3)
  This is a followup patch for https://reviews.llvm.org/D61809: handle unnamed
  bitfields properly and add more test cases. An unnamed bitfield is ignored
  by debug info, so such a struct/union member must also be skipped when
  computing the member index in the debug info. D61809 contains two test
  cases, but they are not enough: they do not check the generated IR at a
  fine-grained level, and there are no semantic-checking tests. This patch
  adds unit tests for both code generation and semantic checking of the new
  intrinsic.
  Signed-off-by: Yonghong Song <yhs@fb.com>
  llvm-svn: 366076
* NFC: Convert large lambda into method (Vitaly Buka, 2019-07-10; 1 file, -0/+3)
  Reviewers: pcc, eugenis
  Reviewed By: eugenis
  Subscribers: cfe-commits, lldb-commits
  Tags: #clang, #lldb
  Differential Revision: https://reviews.llvm.org/D63854
  llvm-svn: 365708
* [BPF] Preserve debuginfo array/union/struct type/access index (Yonghong Song, 2019-07-09; 1 file, -0/+4)
  For background on the BPF CO-RE project, please refer to
  http://vger.kernel.org/bpfconf2019.html. In summary, BPF CO-RE intends to
  compile bpf programs so that they are adjustable to struct/union layout
  changes: the same program can run on multiple kernels, with adjustments made
  before loading based on native kernel structures.

  In order to do this, we need to keep track of the base and result debuginfo
  types of GEP (getelementptr) instructions, so we can adjust on the host
  based on kernel BTF info. Capturing such information as an IR optimization
  is hard, since various optimizations may have tweaked the GEPs, and because
  unions are replaced by structures it is impossible to track the field index
  for union member accesses. Three intrinsic functions,
  preserve_{array,union,struct}_access_index, are introduced:

    addr = preserve_array_access_index(base, index, dimension)
    addr = preserve_union_access_index(base, di_index)
    addr = preserve_struct_access_index(base, gep_index, di_index)

  Here:
    base: the base pointer for the array/union/struct access.
    index: the last access index for the array; the same for the IR and
      DebugInfo layouts.
    dimension: the array dimension.
    gep_index: the access index based on the IR layout.
    di_index: the access index based on user/debuginfo types.

  If these intrinsics are used blindly, i.e., all GEPs are transformed into
  these intrinsics and later reduced back to GEPs, we have seen up to 7% more
  instructions generated. To avoid such an overhead, a clang builtin is
  proposed:

    base = __builtin_preserve_access_index(base)

  so that the user wraps to-be-relocated GEPs in this builtin and the
  preserve_*_access_index intrinsics apply only to those GEPs. Such a builtin
  prevents performance degradation when people do not use CO-RE, even for
  programs that use bpf_probe_read(). For example:

    $ cat test.c
    struct sk_buff {
      int i;
      int b1:1;
      int b2:2;
      union {
        struct {
          int o1;
          int o2;
        } o;
        struct {
          char flags;
          char dev_id;
        } dev;
        int netid;
      } u[10];
    };
    static int (*bpf_probe_read)(void *dst, int size, const void *unsafe_ptr)
        = (void *) 4;
    #define _(x) (__builtin_preserve_access_index(x))
    int bpf_prog(struct sk_buff *ctx) {
      char dev_id;
      bpf_probe_read(&dev_id, sizeof(char), _(&ctx->u[5].dev.dev_id));
      return dev_id;
    }
    $ clang -target bpf -O2 -g -emit-llvm -S -mllvm -print-before-all \
        test.c >& log

  The generated IR looks like below:

    ...
    define dso_local i32 @bpf_prog(%struct.sk_buff*) #0 !dbg !15 {
      %2 = alloca %struct.sk_buff*, align 8
      %3 = alloca i8, align 1
      store %struct.sk_buff* %0, %struct.sk_buff** %2, align 8, !tbaa !45
      call void @llvm.dbg.declare(metadata %struct.sk_buff** %2, metadata !43, metadata !DIExpression()), !dbg !49
      call void @llvm.lifetime.start.p0i8(i64 1, i8* %3) #4, !dbg !50
      call void @llvm.dbg.declare(metadata i8* %3, metadata !44, metadata !DIExpression()), !dbg !51
      %4 = load i32 (i8*, i32, i8*)*, i32 (i8*, i32, i8*)** @bpf_probe_read, align 8, !dbg !52, !tbaa !45
      %5 = load %struct.sk_buff*, %struct.sk_buff** %2, align 8, !dbg !53, !tbaa !45
      %6 = call [10 x %union.anon]* @llvm.preserve.struct.access.index.p0a10s_union.anons.p0s_struct.sk_buffs(
             %struct.sk_buff* %5, i32 2, i32 3), !dbg !53, !llvm.preserve.access.index !19
      %7 = call %union.anon* @llvm.preserve.array.access.index.p0s_union.anons.p0a10s_union.anons(
             [10 x %union.anon]* %6, i32 1, i32 5), !dbg !53
      %8 = call %union.anon* @llvm.preserve.union.access.index.p0s_union.anons.p0s_union.anons(
             %union.anon* %7, i32 1), !dbg !53, !llvm.preserve.access.index !26
      %9 = bitcast %union.anon* %8 to %struct.anon.0*, !dbg !53
      %10 = call i8* @llvm.preserve.struct.access.index.p0i8.p0s_struct.anon.0s(
             %struct.anon.0* %9, i32 1, i32 1), !dbg !53, !llvm.preserve.access.index !34
      %11 = call i32 %4(i8* %3, i32 1, i8* %10), !dbg !52
      %12 = load i8, i8* %3, align 1, !dbg !54, !tbaa !55
      %13 = sext i8 %12 to i32, !dbg !54
      call void @llvm.lifetime.end.p0i8(i64 1, i8* %3) #4, !dbg !56
      ret i32 %13, !dbg !57
    }
    !19 = distinct !DICompositeType(tag: DW_TAG_structure_type, name: "sk_buff", file: !3, line: 1, size: 704, elements: !20)
    !26 = distinct !DICompositeType(tag: DW_TAG_union_type, scope: !19, file: !3, line: 5, size: 64, elements: !27)
    !34 = distinct !DICompositeType(tag: DW_TAG_structure_type, scope: !26, file: !3, line: 10, size: 16, elements: !35)

  Note that the @llvm.preserve.{struct,union}.access.index calls have
  llvm.preserve.access.index metadata attached to provide struct/union
  debuginfo type information. For &ctx->u[5].dev.dev_id:
  . "%6 = ..." represents struct member "u", with index 2 for the IR layout
    and index 3 for the DI layout.
  . "%7 = ..." represents array subscript "5".
  . "%8 = ..." represents union member "dev", with index 1 for the DI layout.
  . "%10 = ..." represents struct member "dev_id", with index 1 for both the
    IR and DI layouts.

  Basically, by traversing the use-def chain recursively for the third
  argument of bpf_probe_read() and examining all preserve_*_access_index
  calls, the debuginfo struct/union/array access indices can be recovered.
  The intrinsics also contain enough information to regenerate code for the
  IR layout: for the array and structure intrinsics, the proper GEP can be
  constructed; for the union intrinsics, replacing all uses of "addr" with
  "base" is enough.
  Signed-off-by: Yonghong Song <yhs@fb.com>
  Differential Revision: https://reviews.llvm.org/D61809
  llvm-svn: 365438
* Revert "[BPF] Preserve debuginfo array/union/struct type/access index"Yonghong Song2019-07-091-4/+0
| | | | | | | | | This reverts commit r365435. Forgot adding the Differential Revision link. Will add to the commit message and resubmit. llvm-svn: 365436
* [BPF] Preserve debuginfo array/union/struct type/access index (Yonghong Song, 2019-07-09; 1 file, -0/+4)
  Original landing of the change re-applied as r365438 above. The commit
  message is identical except that it lacks the Differential Revision link,
  which is the reason for the immediate revert and resubmit.
  Signed-off-by: Yonghong Song <yhs@fb.com>
  llvm-svn: 365435
* Ensure the target-features always_inline error also fires in C++ cases (Erich Keane, 2019-06-21; 1 file, -0/+1)
  A handful of C++ cases reported in PR42352 did not actually give an error
  when always_inlining a function with a different target-feature list. This
  resulted in broken IR.
  llvm-svn: 364109
* Rename CodeGenFunction::overlapFor* to getOverlapFor* (Richard Smith, 2019-06-20; 1 file, -5/+5)
  llvm-svn: 363980
* P0840R2: support for the [[no_unique_address]] attribute (Richard Smith, 2019-06-20; 1 file, -8/+1)
  Summary: Add support for the C++2a [[no_unique_address]] attribute for
  targets using the Itanium C++ ABI. This depends on D63371.
  Reviewers: rjmccall, aaron.ballman
  Subscribers: dschuff, aheejin, cfe-commits
  Tags: #clang
  Differential Revision: https://reviews.llvm.org/D63451
  llvm-svn: 363976
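  For readers unfamiliar with the attribute, a minimal sketch of what it
  enables (an illustration, not code from the patch):

  ```cpp
  struct Empty {};  // an empty class with no data members

  struct S {
    int i;
    [[no_unique_address]] Empty e;  // may occupy no storage of its own
  };

  // Typically holds on Itanium-ABI targets once the attribute is honored;
  // without the attribute, 'e' would need a distinct address and sizeof(S)
  // would grow.
  static_assert(sizeof(S) == sizeof(int));
  ```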
* Store a pointer to the return value in a static alloca and let the debugger use that as the variable address for NRVO variables (Amy Huang, 2019-06-20; 1 file, -0/+4)
  Subscribers: hiraditya, cfe-commits, llvm-commits
  Tags: #clang, #llvm
  Differential Revision: https://reviews.llvm.org/D63361
  llvm-svn: 363952
* Implement __builtin_LINE() et al. to support source location capture (Eric Fiselier, 2019-05-16; 1 file, -4/+18)
  Summary: This patch implements the source location builtins
  `__builtin_LINE()`, `__builtin_FUNCTION()`, `__builtin_FILE()` and
  `__builtin_COLUMN()`. These builtins are needed to implement
  std::experimental::source_location
  (https://rawgit.com/cplusplus/fundamentals-ts/v2/main.html#reflection.src_loc.creation).
  With the exception of `__builtin_COLUMN`, GCC also implements these
  builtins, and Clang's behavior is intended to match as closely as possible.
  Reviewers: rsmith, joerg, aaron.ballman, bogner, majnemer, shafik, martong
  Reviewed By: rsmith
  Subscribers: rnkovacs, loskutov, riccibruno, mgorny, kunitoki, alexr,
  majnemer, hfinkel, cfe-commits
  Differential Revision: https://reviews.llvm.org/D37035
  llvm-svn: 360937
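  The key property of these builtins is that, when used in a default argument,
  they are evaluated at the call site. A minimal sketch of a
  source_location-style type built on them (illustrative; the class name and
  layout are invented here):

  ```cpp
  struct SourceLoc {
    // Default arguments are evaluated at the caller, so each call captures
    // the caller's location rather than this definition's.
    static SourceLoc current(const char *file = __builtin_FILE(),
                             const char *func = __builtin_FUNCTION(),
                             int line = __builtin_LINE()) {
      return {file, func, line};
    }
    const char *file;
    const char *func;
    int line;
  };

  void log_event(const char *msg, SourceLoc loc = SourceLoc::current()) {
    // 'loc' describes the line on which log_event() was called.
  }
  ```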
* [PR41276] Fixed incorrect generation of address space cast for 'this' in C++ (Anastasia Stulova, 2019-04-04; 1 file, -5/+2)
  Improved classification of the address space cast when a qualification
  conversion is performed: prevent adding an address space cast for
  non-pointer and non-reference types, and take the address space correctly
  from the pointee. Also pass the correct address space from the 'this'
  object via AggValueSlot when generating an addrspacecast in the constructor
  call.
  Differential Revision: https://reviews.llvm.org/D59988
  llvm-svn: 357682
* [MS] Make __iso_volatile_* available on all targets (Reid Kleckner, 2019-03-28; 1 file, -3/+0)
  Future versions of MSVC make these intrinsics available on x86 & x64,
  according to:
  http://lists.llvm.org/pipermail/cfe-dev/2019-March/061711.html
  The purpose of these builtins is to emit plain, non-atomic, volatile stores
  when /volatile:ms (-cc1 -fms-volatile) is enabled.
  llvm-svn: 357220
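  A hedged sketch of the intent: under /volatile:ms, ordinary volatile
  accesses gain acquire/release semantics, and the __iso_volatile_* family
  opts a single access out of that. The widths follow MSVC's naming
  (load/store at 8/16/32/64 bits); treat the snippet as illustrative:

  ```c
  int read_plain(const volatile int *p) {
    /* A plain, non-atomic volatile load, even under /volatile:ms. */
    return __iso_volatile_load32((const volatile int *)p);
  }

  void write_plain(volatile int *p, int v) {
    /* A plain, non-atomic volatile store. */
    __iso_volatile_store32((volatile int *)p, v);
  }
  ```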
* IRGen: Remove StructorType; thread GlobalDecl through more code. NFCI. (Peter Collingbourne, 2019-03-22; 1 file, -3/+2)
  This should make it easier to add more structor variants.
  Differential Revision: https://reviews.llvm.org/D59724
  llvm-svn: 356822
* Revert "Add a new attribute, fortify_stdlib"Erik Pilkington2019-03-131-4/+0
| | | | | | | | | | This reverts commit r353765. After talking with our c stdlib folks, we decided to use the existing pass_object_size attribute to implement _FORTIFY_SOURCE wrappers, like Bionic does (I didn't realize that pass_object_size could be used for this purpose). Sorry for the flip/flop, and thanks to James Y. Knight for pointing this out to me. llvm-svn: 356103
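  For context, a sketch of the pass_object_size-based approach the revert
  message refers to (modeled on Bionic's FORTIFY style; the wrapper below is
  illustrative, not code from Bionic or this repository):

  ```c
  #include <string.h>

  /* clang's pass_object_size(0) makes the compiler evaluate
   * __builtin_object_size(dst, 0) at each call site and pass it along, so
   * the wrapper can forward a real bound to the checked variant. */
  __attribute__((overloadable))
  char *strcpy(char *const dst __attribute__((pass_object_size(0))),
               const char *src) {
    return __builtin___strcpy_chk(dst, src, __builtin_object_size(dst, 0));
  }
  ```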
* [CodeGenObjC] Emit [[X alloc] init] as objc_alloc_init(X) when available (Erik Pilkington, 2019-02-14; 1 file, -0/+2)
  This provides a code size win on the caller side, since the init message
  send is done in the runtime function.
  rdar://44987038
  Differential revision: https://reviews.llvm.org/D57936
  llvm-svn: 354056
* Add a new attribute, fortify_stdlib (Erik Pilkington, 2019-02-11; 1 file, -0/+4)
  This attribute applies to declarations of C stdlib functions (sprintf,
  memcpy, ...) that have known fortified variants (__sprintf_chk,
  __memcpy_chk, ...). When applied, clang will emit calls to the fortified
  variant functions instead of calls to the defaults.

  In GCC, this is done by adding gnu_inline-style wrapper functions, but that
  doesn't work for us for variadic functions because we don't support
  __builtin_va_arg_pack (and have no intention to).

  This attribute takes two arguments: the first is the 'type' argument passed
  through to __builtin_object_size, and the second is a flag argument that
  gets passed through to the variadic checking variants.
  rdar://47905754
  Differential revision: https://reviews.llvm.org/D57918
  llvm-svn: 353765
* [opaque pointer types] Pass through function types for TLS initialization and global destructor calls (James Y Knight, 2019-02-07; 1 file, -4/+4)
  Differential Revision: https://reviews.llvm.org/D57801
  llvm-svn: 353355
* [opaque pointer types] More trivial changes to pass FunctionType to CallInst (James Y Knight, 2019-02-05; 1 file, -3/+3)
  Change various functions to use FunctionCallee or Function*. Pass the
  function type through __builtin_dump_struct's dumpRecord helper.
  llvm-svn: 353199
* [opaque pointer types] Pass function types for runtime function calls (James Y Knight, 2019-02-05; 1 file, -15/+15)
  Emit{Nounwind,}RuntimeCall{,OrInvoke} have been modified to take a
  FunctionCallee as an argument, and CreateRuntimeFunction has been modified
  to return a FunctionCallee. All callers have been updated. Additionally,
  CreateBuiltinFunction is removed, as it was redundant with
  CreateRuntimeFunction after some previous changes.
  Differential Revision: https://reviews.llvm.org/D57668
  llvm-svn: 353184
* [opaque pointer types] Trivial changes towards CallInst requiring explicit function types (James Y Knight, 2019-02-03; 1 file, -2/+2)
  llvm-svn: 353009
* [Sanitizers] UBSan unreachable incompatible with ASan in the presence of `noreturn` calls (Julian Lettner, 2019-02-01; 1 file, -2/+2)
  Summary:
  UBSan wants to detect when unreachable code is actually reached, so it adds
  instrumentation before every unreachable instruction. However, the
  optimizer will remove code after calls to functions marked noreturn. To
  avoid this, UBSan removes noreturn from both the call instruction and the
  function itself.

  Unfortunately, ASan relies on this annotation to unpoison the stack by
  inserting calls to _asan_handle_no_return before noreturn functions. This
  is important for functions that do not return but access the stack memory,
  e.g., unwinder functions like longjmp (longjmp itself is actually
  "double-proofed" via its interceptor). The result is that when ASan and
  UBSan are combined, the noreturn attributes are missing and ASan cannot
  unpoison the stack, so it has false positives when stack unwinding is used.

  Changes: Clang CodeGen now directly inserts calls to
  `__asan_handle_no_return` when a call to a noreturn function is encountered
  and both UBSan-unreachable and ASan are enabled. This allows UBSan to
  continue removing the noreturn attribute from functions without any changes
  to the ASan pass.

  Previously generated code:
  ```
  call void @longjmp
  call void @__asan_handle_no_return
  call void @__ubsan_handle_builtin_unreachable
  ```
  Generated code (for now):
  ```
  call void @__asan_handle_no_return
  call void @longjmp
  call void @__asan_handle_no_return
  call void @__ubsan_handle_builtin_unreachable
  ```
  rdar://problem/40723397
  Reviewers: delcypher, eugenis, vsk
  Differential Revision: https://reviews.llvm.org/D57278
  > llvm-svn: 352690
  llvm-svn: 352829
* Revert "[Sanitizers] UBSan unreachable incompatible with ASan in the ↵Eric Liu2019-01-311-2/+2
| | | | | | | | | presence of `noreturn` calls" This reverts commit r352690. This causes clang to crash. Sent reproducer to the author in the orginal commit. llvm-svn: 352755
* [Sanitizers] UBSan unreachable incompatible with ASan in the presence of `noreturn` calls (Julian Lettner, 2019-01-30; 1 file, -2/+2)
  Original landing of the change re-applied as r352829 above, with an
  identical commit message.
  Differential Revision: https://reviews.llvm.org/D57278
  llvm-svn: 352690
* Add a new builtin: __builtin_dynamic_object_size (Erik Pilkington, 2019-01-30; 1 file, -2/+4)
  This builtin has the same UI as __builtin_object_size, but has the
  potential to be evaluated dynamically. It is meant to be used as a drop-in
  replacement for libraries that use __builtin_object_size when a dynamic
  checking mode is enabled. For instance, __builtin_object_size fails to
  provide any extra checking in the following function:

    void f(size_t alloc) {
      char* p = malloc(alloc);
      strcpy(p, "foobar"); // expands to __builtin___strcpy_chk(p, "foobar",
                           //   __builtin_object_size(p, 0))
    }

  This is an overflow if alloc < 7, but because LLVM can't fold the object
  size intrinsic statically, it folds __builtin_object_size to -1. With
  __builtin_dynamic_object_size, alloc is passed through to
  __builtin___strcpy_chk.
  rdar://32212419
  Differential revision: https://reviews.llvm.org/D56760
  llvm-svn: 352665
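  A short illustrative comparison of the two builtins (a sketch; the dynamic
  result assumes LLVM can track the allocation at run time):

  ```c
  #include <stdlib.h>

  void g(size_t alloc) {
    char *p = malloc(alloc);
    size_t s = __builtin_object_size(p, 0);          /* folds to (size_t)-1: unknown */
    size_t d = __builtin_dynamic_object_size(p, 0);  /* may evaluate to 'alloc' at run time */
    (void)s; (void)d;
    free(p);
  }
  ```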
* Cleanup: replace uses of CallSite with CallBase (James Y Knight, 2019-01-30; 1 file, -11/+10)
  llvm-svn: 352595
* [ubsan] Check the correct size when sanitizing array new (Richard Smith, 2019-01-23; 1 file, -2/+4)
  We previously forgot to multiply the element size by the array bound.
  llvm-svn: 351924
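  In other words (an illustrative sketch, not the patch's test):

  ```cpp
  // With -fsanitize=undefined, the size check for the allocation below must
  // cover n * sizeof(int), not just sizeof(int), to catch overflowing bounds.
  int *make(long n) { return new int[n]; }
  ```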
* Update the file headers across all of the LLVM projects in the monorepo to reflect the new license (Chandler Carruth, 2019-01-19; 1 file, -4/+3)
  We understand that people may be surprised that we're moving the header
  entirely to discuss the new license. We checked this carefully with the
  Foundation's lawyer and we believe this is the correct approach.

  Essentially, all code in the project is now made available by the LLVM
  project under our new license, so you will see that the license headers
  include that license only. Some of our contributors have contributed code
  under our old license, and accordingly, we have retained a copy of our old
  license notice in the top-level files in each project and repository.
  llvm-svn: 351636
* Fix cleanup registration for lambda captures (Richard Smith, 2019-01-17; 1 file, -3/+3)
  Lambda captures should be destroyed if an exception is thrown only if the
  construction of the complete lambda-expression has not completed. (If the
  lambda-expression has been fully constructed, any exception will invoke its
  destructor, which will destroy the captures.) This is directly modeled
  after how we handle the equivalent situation in InitListExprs.

  Note that EmitLambdaLValue was unreachable because, from C++11 onwards, the
  frontend never creates the awkward situation where a prvalue expression
  (such as a lambda) is used in an lvalue context (such as the left-hand side
  of a class member access).
  llvm-svn: 351487
* [clang][UBSan] Sanitization for alignment assumptions (Roman Lebedev, 2019-01-15; 1 file, -11/+22)
  Summary:
  UB isn't nice. It's cool and powerful, but not nice. Having a way to detect
  it is nice though. P1007R3: std::assume_aligned
  (https://wg21.link/p1007r3,
  http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p1007r2.pdf) says:
  ```
  We propose to add this functionality via a library function instead of a
  core language attribute. ... If the pointer passed in is not aligned to at
  least N bytes, calling assume_aligned results in undefined behaviour.
  ```
  This differential teaches clang to sanitize all the various variants of
  this assume-aligned attribute. Requires D54588 for LLVM IRBuilder changes.
  The compiler-rt part is D54590.

  This is a second commit; the original one was r351105, which was
  mass-reverted in r351159 because two compiler-rt tests were failing.
  Reviewers: ABataev, craig.topper, vsk, rsmith, rnk, #sanitizers,
  erichkeane, filcab, rjmccall
  Reviewed By: rjmccall
  Subscribers: chandlerc, ldionne, EricWF, mclow.lists, cfe-commits, bkramer
  Tags: #sanitizers
  Differential Revision: https://reviews.llvm.org/D54589
  llvm-svn: 351177
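  The "various variants" are the ways an alignment assumption can enter the
  IR. A hedged sketch of the user-visible ones (illustrative; the exact
  check wording belongs to the UBSan runtime):

  ```c
  /* Each of these creates an alignment assumption the sanitizer can verify: */
  void *f1(void *p) { return __builtin_assume_aligned(p, 32); }      /* builtin */
  void *f2(void *p) { return __builtin_assume_aligned(p, 32, 8); }   /* with offset */
  __attribute__((assume_aligned(32))) void *my_alloc(void);          /* on a return value */
  __attribute__((alloc_align(1))) void *my_aligned_alloc(int align); /* alignment from a parameter */
  ```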
* Revert alignment assumptions changes (Vlad Tsyrklevich, 2019-01-15; 1 file, -22/+11)
  Revert r351104-6, r351109, r351110, r351119, r351134, and r351153. These
  changes fail on the sanitizer bots.
  llvm-svn: 351159
* [clang][UBSan] Sanitization for alignment assumptions (Roman Lebedev, 2019-01-14; 1 file, -11/+22)
  Original landing of the change re-applied as r351177 above, with an
  essentially identical commit message.
  Differential Revision: https://reviews.llvm.org/D54589
  llvm-svn: 351105
* Convert some ObjC retain/release msgSends to runtime calls (Pete Cooper, 2018-12-21; 1 file, -0/+5)
  It is faster to directly call the ObjC runtime for methods such as
  retain/release instead of sending a message to those functions.
  Differential Revision: https://reviews.llvm.org/D55869
  Reviewed By: rjmccall
  llvm-svn: 349952
* Remove unused Args parameter from EmitFunctionBody, NFC (Reid Kleckner, 2018-12-13; 1 file, -1/+1)
  llvm-svn: 349001
* Move CodeGenOptions from Frontend to Basic (Richard Trieu, 2018-12-11; 1 file, -1/+1)
  Basic uses CodeGenOptions and should not depend on Frontend.
  llvm-svn: 348827
* Convert some ObjC msgSends to runtime calls (Pete Cooper, 2018-12-08; 1 file, -0/+4)
  It is faster to directly call the ObjC runtime for methods such as
  alloc/allocWithZone instead of sending a message to those functions. This
  patch adds support for converting messages to alloc/allocWithZone to their
  equivalent runtime calls.

  Tests are included for the positive case of applying this transformation,
  negative tests to ensure we only convert "alloc" to objc_alloc (not
  "alloc2"), and a driver test to ensure we enable this only for supported
  runtime versions.
  Reviewed By: rjmccall
  https://reviews.llvm.org/D55349
  llvm-svn: 348687
* [NFC] Move storage of dispatch-version to GlobalDecl (Erich Keane, 2018-11-13; 1 file, -4/+3)
  As suggested by Richard Smith, and initially put up for review at
  https://reviews.llvm.org/D53341, this patch removes a hack that was used to
  ensure that proper target-feature lists were used when emitting
  cpu-dispatch (and eventually, target-clones) implementations. As a part of
  this, the GlobalDecl object is proliferated to many more locations.

  Originally, this was put up for review (see above) to get acceptance on the
  approach; discussion with Richard in San Diego showed he approved of the
  approach taken here, so I believe this is acceptable for
  review-after-commit.
  Differential Revision: https://reviews.llvm.org/D53341
  Change-Id: I0a0bd673340d334d93feac789d653e03d9f6b1d5
  llvm-svn: 346757
* Fix a nondeterminism in the debug info for VLA size expressions (Adrian Prantl, 2018-11-09; 1 file, -0/+2)
  The artificial variable describing the array size is supposed to be called
  "__vla_expr", but this was implemented by retrieving the name of the
  associated alloca, which isn't a reliable source for the name, since
  non-assert compilers may drop names from LLVM IR.
  rdar://problem/45924808
  llvm-svn: 346542
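  For context (an illustrative sketch): a variable-length array's bound is
  described in debug info through an artificial variable, so a debugger can
  evaluate the array's size:

  ```c
  void f(int n) {
    int a[n];  /* the bound 'n' is captured in an artificial "__vla_expr"
                  variable that the DWARF type of 'a' refers to */
    a[0] = 0;
  }
  ```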
* [CodeGen] Move `emitConstant` from ScalarExprEmitter to CodeGenFunction. NFC. (Volodymyr Sapsai, 2018-11-01; 1 file, -0/+1)
  The goal is to use `emitConstant` in more places. Didn't move
  `ComplexExprEmitter::emitConstant` because it returns a different type.
  Reviewers: rjmccall, ahatanak
  Reviewed By: rjmccall
  Subscribers: dexonsmith, erik.pilkington, cfe-commits
  Differential Revision: https://reviews.llvm.org/D53725
  llvm-svn: 345897
* Part of PR39508: Emit an @llvm.invariant.start after storing to __tls_guard (Richard Smith, 2018-10-31; 1 file, -3/+6)
  __tls_guard can only ever transition from 0 to 1, and only once. This
  permits LLVM to remove repeated checks for TLS initialization and repeated
  initialization code in cases like:

    int g();
    thread_local int n = g();
    int a = n + n;

  where we could not prove that __tls_guard was still 'true' when checking it
  for the second reference to 'n' in the initializer of 'a'.
  llvm-svn: 345774
* Create ConstantExpr class (Bill Wendling, 2018-10-31; 1 file, -3/+5)
  A ConstantExpr class represents a full expression that's in a context where
  a constant expression is required. This class reflects the path the
  evaluator took to reach the expression, rather than the syntactic context
  in which the expression occurs. In the future, the class will be expanded
  to cache the result of the evaluated expression so that it's not needlessly
  re-evaluated.
  Reviewed By: rsmith
  Differential Revision: https://reviews.llvm.org/D53475
  llvm-svn: 345692
* Implement Function Multiversioning for Non-ELF Systems (Erich Keane, 2018-10-25; 1 file, -0/+1)
  Similar to how ICC handles cpu-dispatch on Windows, this patch uses the
  resolver function directly to forward the call to the proper function. This
  is not nearly as efficient as ifuncs, of course, but is still quite useful
  for large functions specifically developed for certain processors. This is
  unfortunately still limited to x86, since it depends on
  __builtin_cpu_supports and __builtin_cpu_is, which are x86 builtins.

  The naming of the resolver/forwarding function for cpu-dispatch was taken
  from ICC's implementation, which uses the unmodified name (no mangling
  additions). This is possible since cpu-dispatch uses '.A' for the 'default'
  version. In 'target' multiversioning, this function keeps the '.resolver'
  extension so that the default function keeps the default mangling.
  Change-Id: I4731555a39be26c7ad59a2d8fda6fa1a50f73284
  Differential Revision: https://reviews.llvm.org/D53586
  llvm-svn: 345298
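  For readers unfamiliar with the feature, a minimal cpu_dispatch example (a
  sketch; the CPU names are illustrative):

  ```c
  /* Multiple CPU-specific bodies plus a dispatcher declaration; on non-ELF
     targets the resolver now forwards the call instead of relying on an
     ifunc. */
  __attribute__((cpu_specific(generic)))   void compute(void) { /* baseline */ }
  __attribute__((cpu_specific(ivybridge))) void compute(void) { /* AVX path */ }
  __attribute__((cpu_dispatch(generic, ivybridge))) void compute(void);
  ```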
* Remove a pair of unused dispatch multiversion declarations (Erich Keane, 2018-10-24; 1 file, -15/+0)
  These declarations somehow survived a cleanup that combined them with the
  target multiversioning functions. This patch removes them, as they are no
  longer necessary or used.
  Change-Id: I318286401ace63bef1aa48018dabb25be0117ca0
  llvm-svn: 345145
* [X86] Add support for more than 32 features for __builtin_cpu_is (Craig Topper, 2018-10-20; 1 file, -2/+2)
  libgcc supports more than 32 features by adding a new 32-bit variable,
  __cpu_features2. This adds the clang support for checking these feature
  bits. Patches for compiler-rt and llvm to support this are coming as well.
  An additional patch will probably still be needed for target
  multiversioning in clang.
  Differential Revision: https://reviews.llvm.org/D53458
  llvm-svn: 344832
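  The user-facing builtins involved (a sketch; the feature and CPU names are
  examples, and which word of libgcc's feature bits a given feature lands in
  is an implementation detail):

  ```c
  int pick_impl(void) {
    /* __builtin_cpu_supports can now test features beyond the first 32
       bits, which libgcc stores in __cpu_features2. */
    if (__builtin_cpu_supports("avx512vbmi2"))
      return 2;
    if (__builtin_cpu_is("skylake"))
      return 1;
    return 0;
  }
  ```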
* Distinguish `__block` variables that are captured by escaping blocks from those that aren't (Akira Hatanaka, 2018-10-01; 1 file, -5/+6)
  This patch changes the way __block variables that aren't captured by
  escaping blocks are handled:

  - Since non-escaping blocks on the stack never get copied to the heap (see
    https://reviews.llvm.org/D49303), Sema shouldn't error out when the type
    of a non-escaping __block variable doesn't have an accessible copy
    constructor.
  - IRGen doesn't have to use the specialized byref structure (see
    https://clang.llvm.org/docs/Block-ABI-Apple.html#id8) for a non-escaping
    __block variable anymore. Instead, IRGen can emit the variable as a
    normal variable and copy the reference to the block literal. Byref
    copy/dispose helpers aren't needed either.

  This reapplies r343518 after fixing a use-after-free bug in
  Sema::ActOnBlockStmtExpr, where the BlockScopeInfo was dereferenced after
  it was popped and deleted.
  rdar://problem/39352313
  Differential Revision: https://reviews.llvm.org/D51564
  llvm-svn: 343542
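  A minimal illustration of the distinction (a sketch using the Blocks
  extension, compiled with -fblocks; not a test from the patch):

  ```c
  void use_nonescaping(void) {
    __block int counter = 0;
    /* This block never escapes the enclosing scope, so 'counter' no longer
       needs the heavyweight byref layout or copy/dispose helpers. */
    void (^bump)(void) = ^{ counter++; };
    bump();
  }

  void (^g_escape)(void);
  void use_escaping(void) {
    __block int counter = 0;
    /* The block escapes (here via a global, with a copy under ARC), so
       'counter' still gets the byref treatment. */
    g_escape = ^{ counter++; };
  }
  ```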