path: root/clang/lib/CodeGen/CodeGenFunction.h
Commit message (Author, Age; Files, Lines)
* Add builtins for aligning and checking alignment of pointers and integers (Alex Richardson, 2020-01-09; 1 file, -0/+5)

  This change introduces three new builtins, which work on both pointers
  and integers, that can be used instead of common bitwise arithmetic:
  __builtin_align_up(x, alignment), __builtin_align_down(x, alignment)
  and __builtin_is_aligned(x, alignment).

  I originally added these builtins to the CHERI fork of LLVM a few
  years ago to handle the slightly different C semantics that we use for
  CHERI [1]. Until recently these builtins (or sequences of other
  builtins) were required to generate correct code. I have since made
  changes to the default C semantics so that they are no longer strictly
  necessary (but using them does generate slightly more efficient code).
  However, based on our experience using them in various projects over
  the past few years, I believe that adding these builtins to clang
  would be useful.

  These builtins have the following benefits over bit-manipulation and
  casts via uintptr_t:

  - The named builtins clearly convey the semantics of the operation.
    While checking alignment using __builtin_is_aligned(x, 16) versus
    ((x & 15) == 0) is probably not a huge win in readability, I
    personally find __builtin_align_up(x, N) a lot easier to read than
    (x+(N-1))&~(N-1).
  - They preserve the type of the argument (including const qualifiers).
    When using casts via uintptr_t, it is easy to cast to the wrong type
    or strip qualifiers such as const.
  - If the alignment argument is a constant value, clang can check that
    it is a power of two and within the range of the type. Since the
    semantics of these builtins is well defined, compared to arbitrary
    bit-manipulation, it is possible to add a UBSan checker that the
    run-time value is a valid power of two. I intend to add this as a
    follow-up to this change.
  - The builtins avoid int-to-pointer casts both in C and LLVM IR. In
    the future (i.e. once most optimizations handle it), we could use
    the new llvm.ptrmask intrinsic to avoid the ptrtoint instruction
    that would normally be generated.
  - They can be used to round up/down to the next aligned value for both
    integers and pointers without requiring two separate macros.
  - In many projects the alignment operations are already wrapped in
    macros (e.g. roundup2 and rounddown2 in FreeBSD), so by replacing
    the macro implementation with a builtin call, we get improved
    diagnostics for many call sites while only having to change a few
    lines.
  - Finally, the builtins also emit assume_aligned metadata when used on
    pointers. This can improve code generation compared to the uintptr_t
    casts.

  [1] In our CHERI compiler we have a compilation mode where all
  pointers are implemented as capabilities (essentially unforgeable
  128-bit fat pointers). In our original model, casts from uintptr_t
  (which is a 128-bit capability) to an integer value returned the
  "offset" of the capability (i.e. the difference between the virtual
  address and the base of the allocation). This causes problems for
  cases such as checking the alignment: for example, the expression
  `if (((uintptr_t)ptr & 63) == 0)` is generally used to check if the
  pointer is aligned to a multiple of 64 bytes. The problem with offsets
  is that any pointer to the beginning of an allocation will have an
  offset of zero, so this check always succeeds in that case (even if
  the address is not correctly aligned). The same issues also exist when
  aligning up or down. Using the alignment builtins ensures that the
  address is used instead of the offset. While I have since changed the
  default C semantics to return the address instead of the offset when
  casting, this offset compilation mode can still be used by passing a
  command-line flag.

  Reviewers: rsmith, aaron.ballman, theraven, fhahn, lebedev.ri, nlopes, aqjune
  Reviewed By: aaron.ballman, lebedev.ri
  Differential Revision: https://reviews.llvm.org/D71499
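  A minimal usage sketch of the three builtins (the buffer and variable
  names here are hypothetical, not taken from the patch):

    #include <cstdint>
    #include <cstdio>

    int main() {
      alignas(64) char buf[256];
      const char *p = buf + 13;  // deliberately misaligned

      // Round down/up to a 16-byte boundary; the result keeps the
      // operand's type and qualifiers (const char * here).
      const char *down = __builtin_align_down(p, 16);
      const char *up = __builtin_align_up(p, 16);

      // The same builtins accept integer operands.
      uintptr_t v = 115;
      printf("%d %d %ju\n",
             __builtin_is_aligned(p, 16),            // 0
             __builtin_is_aligned(down, 16),         // 1
             (uintmax_t)__builtin_align_up(v, 8));   // 120
      return up == down + 16 ? 0 : 1;                // returns 0
    }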
* [OpenMP][NFCI] Introduce llvm/IR/OpenMPConstants.h (Johannes Doerfert, 2019-12-10; 1 file, -1/+1)

  Summary:
  The new OpenMPConstants.h is a location for all OpenMP related
  constants (and helpers) to live. This patch moves the directives there
  (the enum OpenMPDirectiveKind) and rewires Clang to use the new
  location. Initially part of D69785.

  Reviewers: kiranchandramohan, ABataev, RaviNarayanaswamy, gtbercea, grokos, sdmitriev, JonChesterfield, hfinkel, fghanim
  Subscribers: jholewinski, ppenzin, penzn, llvm-commits, cfe-commits, jfb, guansong, bollu, hiraditya, mgorny
  Tags: #clang, #llvm
  Differential Revision: https://reviews.llvm.org/D69853
* Avoid Attr.h includes, CodeGen edition (Reid Kleckner, 2019-12-09; 1 file, -5/+2)

  This saves around 20 includes of Attr.h. Not much.
* [OpenMP50] Add parallel master construct (cchen, 2019-12-05; 1 file, -0/+1)

  Reviewers: ABataev, jdoerfert
  Reviewed By: ABataev
  Subscribers: rnk, jholewinski, guansong, arphaman, jfb, cfe-commits, sandoval, dreachem
  Tags: #clang
  Differential Revision: https://reviews.llvm.org/D70726
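  For illustration only (not code from the patch), the new combined
  construct looks like:

    #include <cstdio>
    #include <omp.h>

    int main() {
      // OpenMP 5.0: a parallel region whose structured block is
      // executed only by the master thread of the team.
      #pragma omp parallel master
      {
        std::printf("run once, by thread %d\n", omp_get_thread_num());
      }
      return 0;
    }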
* Reapply af57dbf12e54 "Add support for options -frounding-math, -ftrapping-math, -ffp-model=, and -ffp-exception-behavior=" (Melanie Blower, 2019-12-05; 1 file, -0/+3)

  The patch was reverted because of
  https://bugs.llvm.org/show_bug.cgi?id=44048. The original patch is
  modified to set the strictfp IR attribute explicitly in CodeGen
  instead of as a side effect of IRBuilder. In the 2nd attempt to
  reapply there was a Windows lit test failure; the tests were fixed to
  use wildcard matching.

  Differential Revision: https://reviews.llvm.org/D62731
* Revert "[OpenMP50] Add parallel master construct, by Chi Chun Chen."Reid Kleckner2019-12-041-1/+0
| | | | | | This reverts commit 713dab21e27c987b9114547ce7136bac2e775de9. Tests do not pass on Windows.
* Revert " Reapply af57dbf12e54 "Add support for options ↵Melanie Blower2019-12-041-3/+0
| | | | | | | -frounding-math, ftrapping-math, -ffp-model=, and -ffp-exception-behavior="" This reverts commit cdbed2dd856c14687efd741c2d8321686102acb8. Build break on Windows (lit fail)
* [OpenMP50] Add parallel master construct, by Chi Chun Chen. (cchen, 2019-12-04; 1 file, -0/+1)

  Reviewers: ABataev, jdoerfert
  Reviewed By: ABataev
  Subscribers: jholewinski, guansong, arphaman, jfb, cfe-commits, sandoval, dreachem
  Tags: #clang
  Differential Revision: https://reviews.llvm.org/D70726
* Reapply af57dbf12e54 "Add support for options -frounding-math, -ftrapping-math, -ffp-model=, and -ffp-exception-behavior=" (Melanie Blower, 2019-12-04; 1 file, -0/+3)

  The patch was reverted because of
  https://bugs.llvm.org/show_bug.cgi?id=44048. The original patch is
  modified to set the strictfp IR attribute explicitly in CodeGen
  instead of as a side effect of IRBuilder.

  Differential Revision: https://reviews.llvm.org/D62731
* [OPENMP50] Add if clause in simd directive. (Alexey Bataev, 2019-11-19; 1 file, -0/+13)

  According to OpenMP 5.0, the if clause can be used in the simd
  directive. If the condition in the if clause is false, the
  non-vectorized version of the loop must be executed.
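  A short sketch of the clause (the function and variable names are
  invented for illustration):

    // If 'vectorize' is false at run time, OpenMP 5.0 requires the
    // non-vectorized version of the loop to execute.
    void scale(float *a, int n, bool vectorize) {
      #pragma omp simd if(vectorize)
      for (int i = 0; i < n; ++i)
        a[i] *= 2.0f;
    }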
* Temporarily Revert "Add support for options -frounding-math, ftrapping-math, ↵Eric Christopher2019-11-181-3/+0
| | | | | | | | -ffp-model=, and -ffp-exception-behavior=" and a follow-up NFC rearrangement as it's causing a crash on valid. Testcase is on the original review thread. This reverts commits af57dbf12e54f3a8ff48534bf1078f4de104c1cd and e6584b2b7b2de06f1e59aac41971760cac1e1b79
* Add support for options -frounding-math, -ftrapping-math, -ffp-model=, and -ffp-exception-behavior= (Melanie Blower, 2019-11-07; 1 file, -0/+3)

  Add options to control floating point behavior: trapping and exception
  behavior, rounding, and control of optimizations that affect floating
  point calculations. More details in UsersManual.rst.

  Reviewers: rjmccall
  Differential Revision: https://reviews.llvm.org/D62731
* [OPENMP50] Add support for parallel master taskloop simd directive. (Alexey Bataev, 2019-10-30; 1 file, -0/+2)

  Added full support for the parallel master taskloop simd directive.
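  For illustration, a minimal sketch of the combined directive (the
  function and names are hypothetical):

    // The master thread of the parallel region partitions the loop
    // into tasks; each task's loop body is additionally a simd
    // candidate.
    void saxpy(float a, const float *x, float *y, int n) {
      #pragma omp parallel master taskloop simd
      for (int i = 0; i < n; ++i)
        y[i] += a * x[i];
    }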
* [clang,ARM] Initial ACLE intrinsics for MVE. (Simon Tatham, 2019-10-24; 1 file, -1/+6)

  This commit sets up the infrastructure for auto-generating <arm_mve.h>
  and doing clang-side code generation for the builtins it relies on,
  and demonstrates that it works by implementing a representative sample
  of the ACLE intrinsics, more or less matching the ones introduced in
  LLVM IR by D67158, D68699, D68700.

  Like NEON, that header file will provide a set of vector types like
  uint16x8_t and C functions with names like vaddq_u32(). Unlike NEON,
  the ACLE spec for <arm_mve.h> includes a polymorphism system, so that
  you can write plain vaddq() and disambiguate by the vector types you
  pass to it.

  Unlike the corresponding NEON code, I've arranged to make every
  user-facing ACLE intrinsic into a clang builtin, and implement all the
  code generation inside clang. So <arm_mve.h> itself contains nothing
  but typedefs and function declarations, with the latter all using the
  new `__attribute__((__clang_builtin))` system to arrange that the
  user-facing function names correspond to the right internal
  BuiltinIDs.

  So the new MveEmitter tablegen system specifies the full sequence of
  IRBuilder operations that each user-facing ACLE intrinsic should
  translate into. Where possible, the ACLE intrinsics map to standard IR
  operations such as vector-typed `add` and `fadd`; where no standard
  representation exists, I call down to the sample IR intrinsics
  introduced in an earlier commit.

  Doing it like this means that you get the polymorphism for free just
  by using __attribute__((overloadable)): the clang overload resolution
  decides which function declaration is the relevant one, and _then_ its
  BuiltinID is looked up, so by the time we're doing code generation,
  that's all been resolved by the standard system. It also means that
  you get really nice error messages if the user passes the wrong
  combination of types: clang will show the declarations from the header
  file and explain why each one doesn't match.

  (The obvious alternative approach would be to have wrapper functions
  in <arm_mve.h> which pass their arguments to the underlying builtins.
  But that doesn't work in the case where one of the arguments has to be
  a constant integer: the wrapper function can't pass the constantness
  through. So you'd have to do that case using a macro instead, and then
  use C11 `_Generic` to handle the polymorphism. Then you have to add
  horrible workarounds because `_Generic` requires even the untaken
  branches to type-check successfully, and //then// if the user gets the
  types wrong, the error message is totally unreadable!)

  Reviewers: dmgreen, miyuki, ostannard
  Subscribers: mgorny, javed.absar, kristof.beyls, cfe-commits
  Tags: #clang
  Differential Revision: https://reviews.llvm.org/D67161
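  A short sketch of the polymorphism described above (assumes an
  MVE-with-FP target, e.g. -target arm-none-eabi
  -march=armv8.1-m.main+mve.fp):

    #include <arm_mve.h>

    uint32x4_t add_u32(uint32x4_t a, uint32x4_t b) {
      return vaddq(a, b);  // overload resolution selects vaddq_u32
    }

    float16x8_t add_f16(float16x8_t a, float16x8_t b) {
      return vaddq(a, b);  // ...and vaddq_f16 here
    }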
* [OPENMP50] Add support for master taskloop simd. (Alexey Bataev, 2019-10-18; 1 file, -0/+2)

  Added parsing/semantics/codegen for the combined construct master
  taskloop simd.

  llvm-svn: 375255
* [OPENMP50] Add support for 'parallel master taskloop' construct. (Alexey Bataev, 2019-10-14; 1 file, -0/+2)

  Added parsing/sema/codegen support for 'parallel master taskloop'
  constructs. Some of the clauses, like 'grainsize', 'num_tasks',
  'final' and 'priority', are not supported in full; only constant
  expressions can be used currently in these clauses.

  llvm-svn: 374791
* Reland r374450 with Richard Smith's comments and test fixed. (Erich Keane, 2019-10-11; 1 file, -6/+1)

  The behavior from the original patch has changed, since we're no
  longer allowing LLVM to just ignore the alignment. Instead, we're just
  assuming the maximum possible alignment.

  Differential Revision: https://reviews.llvm.org/D68824
  llvm-svn: 374562
* Revert 374450 "Fix __builtin_assume_aligned with too large values." (Nico Weber, 2019-10-10; 1 file, -1/+6)

  The test fails on Windows, with

    error: 'warning' diagnostics expected but not seen:
      File builtin-assume-aligned.c Line 62: requested alignment
      must be 268435456 bytes or smaller; assumption ignored
    error: 'warning' diagnostics seen but not expected:
      File builtin-assume-aligned.c Line 62: requested alignment
      must be 8192 bytes or smaller; assumption ignored

  llvm-svn: 374456
* Fix __builtin_assume_aligned with too large values. (Erich Keane, 2019-10-10; 1 file, -6/+1)

  Code to handle __builtin_assume_aligned was allowing larger values,
  but would convert this to unsigned along the way. This patch removes
  the EmitAssumeAligned overloads that take unsigned to do away with
  this problem. Additionally, it adds a warning that values greater than
  1 << 29 are ignored by LLVM.

  Differential Revision: https://reviews.llvm.org/D68824
  llvm-svn: 374450
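  For reference, a sketch of the builtin this patch constrains (the
  function name is hypothetical):

    // The returned pointer carries a 64-byte alignment assumption that
    // the optimizer may exploit; per this patch, constant alignments
    // greater than 1 << 29 draw a warning and the assumption is
    // ignored.
    float sum4(const float *p) {
      const float *ap = (const float *)__builtin_assume_aligned(p, 64);
      float s = 0;
      for (int i = 0; i < 4; ++i) s += ap[i];
      return s;
    }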
* [OPENMP50] Support for 'master taskloop' directive. (Alexey Bataev, 2019-10-10; 1 file, -0/+1)

  Added full support for the master taskloop directive.

  llvm-svn: 374437
* [BPF] do compile-once run-everywhere relocation for bitfields (Yonghong Song, 2019-10-08; 1 file, -0/+1)

  A bpf specific clang intrinsic is introduced:

    u32 __builtin_preserve_field_info(member_access, info_kind)

  Depending on info_kind, different information will be returned to the
  program. A relocation is also recorded for this builtin so that the
  bpf loader can patch the instruction on the target host. This clang
  intrinsic is used to get certain information to facilitate
  struct/union member relocations.

  The offset relocation is extended by 4 bytes to include the relocation
  kind. Currently supported relocation kinds are

    enum {
      FIELD_BYTE_OFFSET = 0,
      FIELD_BYTE_SIZE,
      FIELD_EXISTENCE,
      FIELD_SIGNEDNESS,
      FIELD_LSHIFT_U64,
      FIELD_RSHIFT_U64,
    };

  for __builtin_preserve_field_info. The old access offset relocation is
  covered by FIELD_BYTE_OFFSET = 0.

  An example:

    struct s {
      int a;
      int b1:9;
      int b2:4;
    };
    enum {
      FIELD_BYTE_OFFSET = 0,
      FIELD_BYTE_SIZE,
      FIELD_EXISTENCE,
      FIELD_SIGNEDNESS,
      FIELD_LSHIFT_U64,
      FIELD_RSHIFT_U64,
    };

    void bpf_probe_read(void *, unsigned, const void *);
    int field_read(struct s *arg) {
      unsigned long long ull = 0;
      unsigned offset = __builtin_preserve_field_info(arg->b2, FIELD_BYTE_OFFSET);
      unsigned size = __builtin_preserve_field_info(arg->b2, FIELD_BYTE_SIZE);
    #ifdef USE_PROBE_READ
      bpf_probe_read(&ull, size, (const void *)arg + offset);
      unsigned lshift = __builtin_preserve_field_info(arg->b2, FIELD_LSHIFT_U64);
    #if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
      lshift = lshift + (size << 3) - 64;
    #endif
    #else
      switch(size) {
      case 1:
        ull = *(unsigned char *)((void *)arg + offset); break;
      case 2:
        ull = *(unsigned short *)((void *)arg + offset); break;
      case 4:
        ull = *(unsigned int *)((void *)arg + offset); break;
      case 8:
        ull = *(unsigned long long *)((void *)arg + offset); break;
      }
      unsigned lshift = __builtin_preserve_field_info(arg->b2, FIELD_LSHIFT_U64);
    #endif
      ull <<= lshift;
      if (__builtin_preserve_field_info(arg->b2, FIELD_SIGNEDNESS))
        return (long long)ull >> __builtin_preserve_field_info(arg->b2, FIELD_RSHIFT_U64);
      return ull >> __builtin_preserve_field_info(arg->b2, FIELD_RSHIFT_U64);
    }

  There is a minor overhead for bpf_probe_read() on big endian.

  The code and relocation generated for field_read where
  bpf_probe_read() is used to access argument data on little endian
  mode:

    r3 = r1
    r1 = 0
    r1 = 4  <=== relocation (FIELD_BYTE_OFFSET)
    r3 += r1
    r1 = r10
    r1 += -8
    r2 = 4  <=== relocation (FIELD_BYTE_SIZE)
    call bpf_probe_read
    r2 = 51 <=== relocation (FIELD_LSHIFT_U64)
    r1 = *(u64 *)(r10 - 8)
    r1 <<= r2
    r2 = 60 <=== relocation (FIELD_RSHIFT_U64)
    r0 = r1
    r0 >>= r2
    r3 = 1  <=== relocation (FIELD_SIGNEDNESS)
    if r3 == 0 goto LBB0_2
    r1 s>>= r2
    r0 = r1
  LBB0_2:
    exit

  Compared to the above code between relocations FIELD_LSHIFT_U64 and
  FIELD_RSHIFT_U64, the code with big endian mode has four more
  instructions:

    r1 = 41 <=== relocation (FIELD_LSHIFT_U64)
    r6 += r1
    r6 += -64
    r6 <<= 32
    r6 >>= 32
    r1 = *(u64 *)(r10 - 8)
    r1 <<= r6
    r2 = 60 <=== relocation (FIELD_RSHIFT_U64)

  The code and relocation generated when using direct load:

    r2 = 0
    r3 = 4
    r4 = 4
    if r4 s> 3 goto LBB0_3
    if r4 == 1 goto LBB0_5
    if r4 == 2 goto LBB0_6
    goto LBB0_9
  LBB0_6:                 # %sw.bb1
    r1 += r3
    r2 = *(u16 *)(r1 + 0)
    goto LBB0_9
  LBB0_3:                 # %entry
    if r4 == 4 goto LBB0_7
    if r4 == 8 goto LBB0_8
    goto LBB0_9
  LBB0_8:                 # %sw.bb9
    r1 += r3
    r2 = *(u64 *)(r1 + 0)
    goto LBB0_9
  LBB0_5:                 # %sw.bb
    r1 += r3
    r2 = *(u8 *)(r1 + 0)
    goto LBB0_9
  LBB0_7:                 # %sw.bb5
    r1 += r3
    r2 = *(u32 *)(r1 + 0)
  LBB0_9:                 # %sw.epilog
    r1 = 51
    r2 <<= r1
    r1 = 60
    r0 = r2
    r0 >>= r1
    r3 = 1
    if r3 == 0 goto LBB0_11
    r2 s>>= r1
    r0 = r2
  LBB0_11:                # %sw.epilog
    exit

  Considering the verifier is able to do limited constant propagation
  following branches, the following is the code actually traversed:

    r2 = 0
    r3 = 4  <=== relocation
    r4 = 4  <=== relocation
    if r4 s> 3 goto LBB0_3
  LBB0_3:                 # %entry
    if r4 == 4 goto LBB0_7
  LBB0_7:                 # %sw.bb5
    r1 += r3
    r2 = *(u32 *)(r1 + 0)
  LBB0_9:                 # %sw.epilog
    r1 = 51 <=== relocation
    r2 <<= r1
    r1 = 60 <=== relocation
    r0 = r2
    r0 >>= r1
    r3 = 1
    if r3 == 0 goto LBB0_11
    r2 s>>= r1
    r0 = r2
  LBB0_11:                # %sw.epilog
    exit

  For the native load case, the load size is calculated to be the same
  as the size of load width LLVM otherwise used to load the value, which
  is then used to extract the bitfield value.

  Differential Revision: https://reviews.llvm.org/D67980
  llvm-svn: 374099
* [Alignment][Clang][NFC] Add CharUnits::getAsAlign (Guillaume Chatelet, 2019-10-03; 1 file, -1/+1)

  Summary:
  This is a prerequisite to removing
  `llvm::GlobalObject::setAlignment(unsigned)`. This patch is part of a
  series to introduce an Alignment type. See this thread for context:
  http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
  See this patch for the introduction of the type:
  https://reviews.llvm.org/D64790

  Reviewers: courbet
  Subscribers: jholewinski, cfe-commits
  Tags: #clang
  Differential Revision: https://reviews.llvm.org/D68274
  llvm-svn: 373592
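  A sketch of the call-site change this enables (assumes clang/LLVM
  internal headers and the post-series API; illustrative, not code from
  the patch):

    #include "clang/AST/CharUnits.h"
    #include "llvm/IR/GlobalObject.h"

    // Before: GO->setAlignment(Alignment.getQuantity()); // raw integer
    // After: the alignment travels as a typed llvm::Align value.
    void applyAlignment(llvm::GlobalObject *GO, clang::CharUnits Alignment) {
      GO->setAlignment(Alignment.getAsAlign());
    }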
* [OpenCL] Improve destructor support in C++ for OpenCL (Marco Antognini, 2019-07-22; 1 file, -6/+7)

  This re-applies r366422 with a fix for Bug PR42665 and a new
  regression test.

  llvm-svn: 366670
* Revert r366422: [OpenCL] Improve destructor support in C++ for OpenCL (Ilya Biryukov, 2019-07-18; 1 file, -7/+6)

  Reason: this commit causes crashes in the clang compiler when building
  LLVM Support with libc++; see
  https://bugs.llvm.org/show_bug.cgi?id=42665 for details.

  llvm-svn: 366429
* [OpenCL] Improve destructor support in C++ for OpenCL (Marco Antognini, 2019-07-18; 1 file, -6/+7)

  Summary:
  This patch does mainly three things:

  1. It fixes a false positive error detection in Sema that is similar
     to D62156. The error happens when explicitly calling an overloaded
     destructor for different address spaces.
  2. It selects the correct destructor when multiple overloads for
     address spaces are available.
  3. It inserts the expected address space cast when invoking a
     destructor, if needed, and therefore fixes a crash due to the unmet
     assertion in llvm::CastInst::Create.

  The following is a reproducer of the three issues:

    struct MyType {
      ~MyType() {}
      ~MyType() __constant {}
    };

    __constant MyType myGlobal{};

    kernel void foo() {
      myGlobal.~MyType(); // 1 and 2.
      // 1. error: cannot initialize object parameter of type
      //    '__generic MyType' with an expression of type '__constant MyType'
      // 2. error: no matching member function for call to '~MyType'
    }

    kernel void bar() {
      // 3. The implicit call to the destructor crashes due to:
      //    Assertion `castIsValid(op, S, Ty) && "Invalid cast!"' failed.
      //    in llvm::CastInst::Create.
      MyType myLocal;
    }

  The added test depends on D62413 and covers a few more things than the
  above reproducer.

  Subscribers: yaxunl, Anastasia, cfe-commits
  Tags: #clang
  Differential Revision: https://reviews.llvm.org/D64569
  llvm-svn: 366422
* Fix unnamed bitfield issue and add tests for __builtin_preserve_access_index intrinsic (Yonghong Song, 2019-07-16; 1 file, -0/+3)

  The original commit is r366076. It was temporarily reverted (r366155)
  due to a test failure. This resubmit makes the test more robust by
  accepting a regex instead of hardcoded names/references in several
  places.

  This is a followup patch for https://reviews.llvm.org/D61809. Handle
  the unnamed bitfield properly and add more test cases.

  Fixed the unnamed bitfield issue. The unnamed bitfield is ignored by
  debug info, so we need to ignore such a struct/union member when we
  try to get the member index in the debug info.

  D61809 contains two test cases, but they are not enough: it does not
  check the generated IRs at a fine-grained level, and it does not have
  semantics checking tests. This patch added unit tests for both code
  gen and semantics checking for the new intrinsic.

  Signed-off-by: Yonghong Song <yhs@fb.com>
  llvm-svn: 366231
* Finish "Adapt -fsanitize=function to SANITIZER_NON_UNIQUE_TYPEINFO"Stephan Bergmann2019-07-161-1/+1
| | | | | | | | | | | | | | | | | | | | | i.e., recent 5745eccef54ddd3caca278d1d292a88b2281528b: * Bump the function_type_mismatch handler version, as its signature has changed. * The function_type_mismatch handler can return successfully now, so SanitizerKind::Function must be AlwaysRecoverable (like for SanitizerKind::Vptr). * But the minimal runtime would still unconditionally treat a call to the function_type_mismatch handler as failure, so disallow -fsanitize=function in combination with -fsanitize-minimal-runtime (like it was already done for -fsanitize=vptr). * Add tests. Differential Revision: https://reviews.llvm.org/D61479 llvm-svn: 366186
* Temporarily Revert "fix unnamed fiefield issue and add tests for ↵Eric Christopher2019-07-151-3/+0
| | | | | | | | | | __builtin_preserve_access_index intrinsic" The commit had tests that would only work with names in the IR. This reverts commit r366076. llvm-svn: 366155
* Fix unnamed bitfield issue and add tests for __builtin_preserve_access_index intrinsic (Yonghong Song, 2019-07-15; 1 file, -0/+3)

  This is a followup patch for https://reviews.llvm.org/D61809. Handle
  the unnamed bitfield properly and add more test cases.

  Fixed the unnamed bitfield issue. The unnamed bitfield is ignored by
  debug info, so we need to ignore such a struct/union member when we
  try to get the member index in the debug info.

  D61809 contains two test cases, but they are not enough: it does not
  check the generated IRs at a fine-grained level, and it does not have
  semantics checking tests. This patch added unit tests for both code
  gen and semantics checking for the new intrinsic.

  Signed-off-by: Yonghong Song <yhs@fb.com>
  llvm-svn: 366076
* NFC: Convert large lambda into method (Vitaly Buka, 2019-07-10; 1 file, -0/+3)

  Reviewers: pcc, eugenis
  Reviewed By: eugenis
  Subscribers: cfe-commits, lldb-commits
  Tags: #clang, #lldb
  Differential Revision: https://reviews.llvm.org/D63854
  llvm-svn: 365708
* [BPF] Preserve debuginfo array/union/struct type/access index (Yonghong Song, 2019-07-09; 1 file, -0/+4)

  For background on the BPF CO-RE project, please refer to
  http://vger.kernel.org/bpfconf2019.html. In summary, BPF CO-RE intends
  to compile bpf programs adjustable on struct/union layout change so
  the same program can run on multiple kernels with adjustment before
  loading based on native kernel structures.

  In order to do this, we need to keep track of GEP (getelementptr)
  instruction base and result debuginfo types, so we can adjust on the
  host based on kernel BTF info. Capturing such information as an IR
  optimization is hard, as various optimizations may have tweaked the
  GEP, and once a union is replaced by a structure it is impossible to
  track the field index for union member accesses.

  Three intrinsic functions, preserve_{array,union,struct}_access_index,
  are introduced:

    addr = preserve_array_access_index(base, index, dimension)
    addr = preserve_union_access_index(base, di_index)
    addr = preserve_struct_access_index(base, gep_index, di_index)

  here,
    base: the base pointer for the array/union/struct access.
    index: the last access index for array, the same for IR/DebugInfo layout.
    dimension: the array dimension.
    gep_index: the access index based on IR layout.
    di_index: the access index based on user/debuginfo types.

  If using these intrinsics blindly, i.e., transforming all GEPs to
  these intrinsics and later on reducing them to GEPs, we have seen up
  to 7% more instructions generated. To avoid such an overhead, a clang
  builtin is proposed:

    base = __builtin_preserve_access_index(base)

  such that the user wraps to-be-relocated GEPs in this builtin and
  preserve_*_access_index intrinsics only apply to those GEPs. Such a
  builtin will prevent performance degradation if people do not use
  CO-RE, even for programs which use bpf_probe_read().

  For example, for the following example,

    $ cat test.c
    struct sk_buff {
      int i;
      int b1:1;
      int b2:2;
      union {
        struct {
          int o1;
          int o2;
        } o;
        struct {
          char flags;
          char dev_id;
        } dev;
        int netid;
      } u[10];
    };

    static int (*bpf_probe_read)(void *dst, int size, const void *unsafe_ptr)
        = (void *) 4;

    #define _(x) (__builtin_preserve_access_index(x))

    int bpf_prog(struct sk_buff *ctx) {
      char dev_id;
      bpf_probe_read(&dev_id, sizeof(char), _(&ctx->u[5].dev.dev_id));
      return dev_id;
    }

    $ clang -target bpf -O2 -g -emit-llvm -S -mllvm -print-before-all \
      test.c >& log

  The generated IR looks like below:

    ...
    define dso_local i32 @bpf_prog(%struct.sk_buff*) #0 !dbg !15 {
      %2 = alloca %struct.sk_buff*, align 8
      %3 = alloca i8, align 1
      store %struct.sk_buff* %0, %struct.sk_buff** %2, align 8, !tbaa !45
      call void @llvm.dbg.declare(metadata %struct.sk_buff** %2, metadata !43, metadata !DIExpression()), !dbg !49
      call void @llvm.lifetime.start.p0i8(i64 1, i8* %3) #4, !dbg !50
      call void @llvm.dbg.declare(metadata i8* %3, metadata !44, metadata !DIExpression()), !dbg !51
      %4 = load i32 (i8*, i32, i8*)*, i32 (i8*, i32, i8*)** @bpf_probe_read, align 8, !dbg !52, !tbaa !45
      %5 = load %struct.sk_buff*, %struct.sk_buff** %2, align 8, !dbg !53, !tbaa !45
      %6 = call [10 x %union.anon]* @llvm.preserve.struct.access.index.p0a10s_union.anons.p0s_struct.sk_buffs(
             %struct.sk_buff* %5, i32 2, i32 3), !dbg !53, !llvm.preserve.access.index !19
      %7 = call %union.anon* @llvm.preserve.array.access.index.p0s_union.anons.p0a10s_union.anons(
             [10 x %union.anon]* %6, i32 1, i32 5), !dbg !53
      %8 = call %union.anon* @llvm.preserve.union.access.index.p0s_union.anons.p0s_union.anons(
             %union.anon* %7, i32 1), !dbg !53, !llvm.preserve.access.index !26
      %9 = bitcast %union.anon* %8 to %struct.anon.0*, !dbg !53
      %10 = call i8* @llvm.preserve.struct.access.index.p0i8.p0s_struct.anon.0s(
             %struct.anon.0* %9, i32 1, i32 1), !dbg !53, !llvm.preserve.access.index !34
      %11 = call i32 %4(i8* %3, i32 1, i8* %10), !dbg !52
      %12 = load i8, i8* %3, align 1, !dbg !54, !tbaa !55
      %13 = sext i8 %12 to i32, !dbg !54
      call void @llvm.lifetime.end.p0i8(i64 1, i8* %3) #4, !dbg !56
      ret i32 %13, !dbg !57
    }

    !19 = distinct !DICompositeType(tag: DW_TAG_structure_type, name: "sk_buff", file: !3, line: 1, size: 704, elements: !20)
    !26 = distinct !DICompositeType(tag: DW_TAG_union_type, scope: !19, file: !3, line: 5, size: 64, elements: !27)
    !34 = distinct !DICompositeType(tag: DW_TAG_structure_type, scope: !26, file: !3, line: 10, size: 16, elements: !35)

  Note that @llvm.preserve.{struct,union}.access.index calls have
  metadata llvm.preserve.access.index attached to instructions to
  provide struct/union debuginfo type information.

  For &ctx->u[5].dev.dev_id:
  . The "%6 = ..." represents struct member "u" with index 2 for IR
    layout and index 3 for DI layout.
  . The "%7 = ..." represents array subscript "5".
  . The "%8 = ..." represents union member "dev" with index 1 for DI
    layout.
  . The "%10 = ..." represents struct member "dev_id" with index 1 for
    both IR and DI layout.

  Basically, traversing the use-def chain recursively for the 3rd
  argument of bpf_probe_read() and examining all preserve_*_access_index
  calls, the debuginfo struct/union/array access index can be achieved.

  The intrinsics also contain enough information to regenerate code for
  the IR layout. For array and structure intrinsics, the proper GEP can
  be constructed. For union intrinsics, replacing all uses of "addr"
  with "base" should be enough.

  Signed-off-by: Yonghong Song <yhs@fb.com>
  Differential Revision: https://reviews.llvm.org/D61809
  llvm-svn: 365438
* Revert "[BPF] Preserve debuginfo array/union/struct type/access index"Yonghong Song2019-07-091-4/+0
| | | | | | | | | This reverts commit r365435. Forgot adding the Differential Revision link. Will add to the commit message and resubmit. llvm-svn: 365436
* [BPF] Preserve debuginfo array/union/struct type/access index (Yonghong Song, 2019-07-09; 1 file, -0/+4)

  The original landing of the change resubmitted as r365438 above. The
  commit message is identical except that it omits the Differential
  Revision link, which is the reason for the revert and resubmit that
  follow it in this history.

  Signed-off-by: Yonghong Song <yhs@fb.com>
  llvm-svn: 365435
* Ensure Target Features always_inline error happens in C++ cases. (Erich Keane, 2019-06-21; 1 file, -0/+1)

  A handful of C++ cases as reported in PR42352 didn't actually give an
  error when always_inlining with a different target feature list. This
  resulted in broken IR.

  llvm-svn: 364109
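  A sketch of the kind of case now diagnosed (assumes an x86 target; the
  function names are invented):

    // 'widen' requires avx2; force-inlining it into a caller compiled
    // without avx2 cannot produce valid IR, so clang must error rather
    // than silently inline.
    __attribute__((always_inline, target("avx2")))
    inline int widen(int x) { return x + 1; }

    int caller(int x) { return widen(x); }  // now rejected: caller lacks 'avx2'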
* Rename CodeGenFunction::overlapFor* to getOverlapFor*. (Richard Smith, 2019-06-20; 1 file, -5/+5)

  llvm-svn: 363980
* P0840R2: support for [[no_unique_address]] attribute (Richard Smith, 2019-06-20; 1 file, -8/+1)

  Summary:
  Add support for the C++2a [[no_unique_address]] attribute for targets
  using the Itanium C++ ABI. This depends on D63371.

  Reviewers: rjmccall, aaron.ballman
  Subscribers: dschuff, aheejin, cfe-commits
  Tags: #clang
  Differential Revision: https://reviews.llvm.org/D63451
  llvm-svn: 363976
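  A small sketch of what the attribute permits (the layout shown holds
  under the Itanium C++ ABI targeted here; type names are invented):

    struct Empty {};

    struct Map {
      [[no_unique_address]] Empty comparator;  // may occupy no storage
      void *root;
    };

    // The empty member can share an address with another subobject,
    // so the wrapper need not grow.
    static_assert(sizeof(Map) == sizeof(void *));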
* Store a pointer to the return value in a static alloca and let the debugger use that as the variable address for NRVO variables (Amy Huang, 2019-06-20; 1 file, -0/+4)

  Subscribers: hiraditya, cfe-commits, llvm-commits
  Tags: #clang, #llvm
  Differential Revision: https://reviews.llvm.org/D63361
  llvm-svn: 363952
* Implement __builtin_LINE() et al. to support source location capture. (Eric Fiselier, 2019-05-16; 1 file, -4/+18)

  Summary:
  This patch implements the source location builtins `__builtin_LINE()`,
  `__builtin_FUNCTION()`, `__builtin_FILE()` and `__builtin_COLUMN()`.
  These builtins are needed to implement
  [`std::experimental::source_location`](https://rawgit.com/cplusplus/fundamentals-ts/v2/main.html#reflection.src_loc.creation).

  With the exception of `__builtin_COLUMN`, GCC also implements these
  builtins, and Clang's behavior is intended to match as closely as
  possible.

  Reviewers: rsmith, joerg, aaron.ballman, bogner, majnemer, shafik, martong
  Reviewed By: rsmith
  Subscribers: rnkovacs, loskutov, riccibruno, mgorny, kunitoki, alexr, majnemer, hfinkel, cfe-commits
  Differential Revision: https://reviews.llvm.org/D37035
  llvm-svn: 360937
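  A sketch of the intended use, as a hand-rolled stand-in for
  source_location (the names here are invented):

    #include <cstdio>

    // Default arguments are evaluated at the call site, so log()
    // captures its caller's location, not its own.
    void log(const char *msg,
             const char *file = __builtin_FILE(),
             const char *func = __builtin_FUNCTION(),
             int line = __builtin_LINE()) {
      std::printf("%s:%d (%s): %s\n", file, line, func, msg);
    }

    int main() {
      log("hello");  // reports this line, inside main()
    }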
* [PR41276] Fixed incorrect generation of addr space cast for 'this' in C++. (Anastasia Stulova, 2019-04-04; 1 file, -5/+2)

  Improved classification of address space cast when qualification
  conversion is performed: prevent adding addr space cast for
  non-pointer and non-reference types. Take address space correctly from
  the pointee.

  Also pass correct address space from 'this' object using AggValueSlot
  when generating addrspacecast in the constructor call.

  Differential Revision: https://reviews.llvm.org/D59988
  llvm-svn: 357682
* [MS] Make __iso_volatile_* available on all targets (Reid Kleckner, 2019-03-28; 1 file, -3/+0)

  Future versions of MSVC make these intrinsics available on x86 & x64,
  according to:
  http://lists.llvm.org/pipermail/cfe-dev/2019-March/061711.html

  The purpose of these builtins is to emit plain, non-atomic, volatile
  stores when /volatile:ms (-cc1 -fms-volatile) is enabled.

  llvm-svn: 357220
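  A sketch of the intrinsic now accepted on all targets (MMIO-style
  use; the function and register names are made up):

    // Emits a plain, non-atomic volatile 32-bit store even under
    // /volatile:ms, where ordinary volatile accesses are otherwise
    // given atomic ordering.
    void write_reg(volatile int *reg, int value) {
      __iso_volatile_store32(reg, value);
    }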
* IRGen: Remove StructorType; thread GlobalDecl through more code. NFCI. (Peter Collingbourne, 2019-03-22; 1 file, -3/+2)

  This should make it easier to add more structor variants.

  Differential Revision: https://reviews.llvm.org/D59724
  llvm-svn: 356822
* Revert "Add a new attribute, fortify_stdlib"Erik Pilkington2019-03-131-4/+0
| | | | | | | | | | This reverts commit r353765. After talking with our c stdlib folks, we decided to use the existing pass_object_size attribute to implement _FORTIFY_SOURCE wrappers, like Bionic does (I didn't realize that pass_object_size could be used for this purpose). Sorry for the flip/flop, and thanks to James Y. Knight for pointing this out to me. llvm-svn: 356103
* [CodeGenObjC] Emit [[X alloc] init] as objc_alloc_init(X) when available (Erik Pilkington, 2019-02-14; 1 file, -0/+2)

  This provides a code size win on the caller side, since the init
  message send is done in the runtime function.

  rdar://44987038

  Differential revision: https://reviews.llvm.org/D57936
  llvm-svn: 354056
* Add a new attribute, fortify_stdlib (Erik Pilkington, 2019-02-11; 1 file, -0/+4)

  This attribute applies to declarations of C stdlib functions (sprintf,
  memcpy...) that have known fortified variants (__sprintf_chk,
  __memcpy_chk, ...). When applied, clang will emit calls to the
  fortified variant functions instead of calls to the defaults.

  In GCC, this is done by adding gnu_inline-style wrapper functions, but
  that doesn't work for us for variadic functions because we don't
  support __builtin_va_arg_pack (and have no intention to).

  This attribute takes two arguments: the first is the 'type' argument
  passed through to __builtin_object_size, and the second is a flag
  argument that gets passed through to the variadic checking variants.

  rdar://47905754

  Differential revision: https://reviews.llvm.org/D57918
  llvm-svn: 353765
* [opaque pointer types] Pass through function types for TLS initialization and global destructor calls. (James Y Knight, 2019-02-07; 1 file, -4/+4)

  Differential Revision: https://reviews.llvm.org/D57801
  llvm-svn: 353355
* [opaque pointer types] More trivial changes to pass FunctionType to CallInst. (James Y Knight, 2019-02-05; 1 file, -3/+3)

  Change various functions to use FunctionCallee or Function*.

  Pass the function type through __builtin_dump_struct's dumpRecord
  helper.

  llvm-svn: 353199
* [opaque pointer types] Pass function types for runtime function calls. (James Y Knight, 2019-02-05; 1 file, -15/+15)

  Emit{Nounwind,}RuntimeCall{,OrInvoke} have been modified to take a
  FunctionCallee as an argument, and CreateRuntimeFunction has been
  modified to return a FunctionCallee. All callers have been updated.

  Additionally, CreateBuiltinFunction is removed, as it was redundant
  with CreateRuntimeFunction after some previous changes.

  Differential Revision: https://reviews.llvm.org/D57668
  llvm-svn: 353184
* [opaque pointer types] Trivial changes towards CallInst requiring explicit function types. (James Y Knight, 2019-02-03; 1 file, -2/+2)

  llvm-svn: 353009
* [Sanitizers] UBSan unreachable incompatible with ASan in the presence of `noreturn` calls (Julian Lettner, 2019-02-01; 1 file, -2/+2)

  Summary:
  UBSan wants to detect when unreachable code is actually reached, so it
  adds instrumentation before every unreachable instruction. However,
  the optimizer will remove code after calls to functions marked with
  noreturn. To avoid this, UBSan removes noreturn from both the call
  instruction as well as from the function itself.

  Unfortunately, ASan relies on this annotation to unpoison the stack by
  inserting calls to _asan_handle_no_return before noreturn functions.
  This is important for functions that do not return but access the
  stack memory, e.g., unwinder functions *like* longjmp (longjmp itself
  is actually "double-proofed" via its interceptor). The result is that
  when ASan and UBSan are combined, the noreturn attributes are missing
  and ASan cannot unpoison the stack, so it has false positives when
  stack unwinding is used.

  Changes: Clang-CodeGen now directly inserts calls to
  `__asan_handle_no_return` when a call to a noreturn function is
  encountered and both UBSan-unreachable and ASan are enabled. This
  allows UBSan to continue removing the noreturn attribute from
  functions without any changes to the ASan pass.

  Previously generated code:

    call void @longjmp
    call void @__asan_handle_no_return
    call void @__ubsan_handle_builtin_unreachable

  Generated code (for now):

    call void @__asan_handle_no_return
    call void @longjmp
    call void @__asan_handle_no_return
    call void @__ubsan_handle_builtin_unreachable

  rdar://problem/40723397

  Reviewers: delcypher, eugenis, vsk
  Differential Revision: https://reviews.llvm.org/D57278

  > llvm-svn: 352690
  llvm-svn: 352829
* Revert "[Sanitizers] UBSan unreachable incompatible with ASan in the ↵Eric Liu2019-01-311-2/+2
| | | | | | | | | presence of `noreturn` calls" This reverts commit r352690. This causes clang to crash. Sent reproducer to the author in the orginal commit. llvm-svn: 352755