path: root/clang/lib/CodeGen/CGExpr.cpp
Commit message | Author | Age | Files | Lines
* Revert 9007f06af0e "Revert "Allow system header to provide their own implementation of some builtin"" (Hans Wennborg, 2020-01-17, 1 file, -1/+8)
  This should no longer be necessary after cd4c65f91d5 "Add __warn_memset_zero_len builtin as a workaround for glibc issue".
* Revert "Allow system header to provide their own implementation of some builtin"Amy Huang2020-01-171-8/+1
| | | | | | | | | This reverts commit 921f871ac438175ca8fcfcafdfcfac4d7ddf3905 because it causes libc++ code to trigger __warn_memset_zero_len. See https://reviews.llvm.org/D71082. (cherry picked from commit 3d210ed3d1880c615776b07d1916edb400c245a6)
* Fix "pointer is null" static analyzer warnings. NFCI.Simon Pilgrim2020-01-111-9/+6
| | | | Use castAs<> instead of getAs<> since the pointer is dereferenced immediately below and castAs will perform the null assertion for us.
* Allow system header to provide their own implementation of some builtin (serge-sans-paille, 2020-01-10, 1 file, -1/+8)
  If a system header provides an (inline) implementation of some of its functions, clang still matches on the function name and generates the appropriate llvm builtin, e.g. memcpy. This behavior is in line with the glibc recommendation « users may not provide their own version of symbols », but it doesn't account for the fact that glibc itself can provide inline versions of some functions. This is the case for the memcpy function when -D_FORTIFY_SOURCE=1 is on: an inline version of memcpy calls __memcpy_chk, a function that performs extra runtime checks. Clang currently ignores the inline version and thus provides no runtime check. This change fixes the issue by detecting functions whose name is a builtin name but that also have an inline implementation.
  Differential Revision: https://reviews.llvm.org/D71082
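  To make the scenario concrete, here is a rough, simplified sketch (not taken from the patch or from glibc verbatim) of the kind of fortified inline wrapper a system header can provide when -D_FORTIFY_SOURCE=1 is enabled:

    // Hypothetical, simplified model of a fortified memcpy from a system header.
    extern "C" void *__memcpy_chk(void *dst, const void *src,
                                  __SIZE_TYPE__ n, __SIZE_TYPE__ dst_size);

    extern "C" inline __attribute__((always_inline))
    void *memcpy(void *dst, const void *src, __SIZE_TYPE__ n) {
      // The checked variant receives the destination object size so it can
      // diagnose overflowing copies at run time.
      return __memcpy_chk(dst, src, n, __builtin_object_size(dst, 0));
    }

  With the change above, clang should emit the call through this inline wrapper (and hence the runtime check) instead of silently lowering the call to its own memcpy builtin.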
* [OPENMP50]Support lastprivate conditional updates in inc/dec unary ops.Alexey Bataev2020-01-061-0/+3
| | | | | Added support for checking of updates of variables used in unary pre(pos) inc/dec expressions.
* [OPENMP50] Codegen for lastprivate conditional list items. (Alexey Bataev, 2020-01-02, 1 file, -0/+3)
  Added codegen support for lastprivate conditional. According to the standard, when the conditional modifier appears on the clause, if an assignment to a list item is encountered in the construct, then the original list item is assigned the value that is assigned to the new list item in the sequentially last iteration or lexically last section in which such an assignment is encountered. We look for the assignment operations and check whether the left side references a lastprivate conditional variable. Then the following code is emitted:
    if (last_iv_a <= iv) {
      last_iv_a = iv;
      last_a = lp_a;
    }
  At the end, an implicit barrier is generated to wait for all threads, and then, in the check for the last iteration, the private copy is assigned the last value:
    if (last_iter) {
      lp_a = last_a; // <--- new code
      a = lp_a;      // <--- store of private value to the original variable.
    }
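  For illustration (this source-level example is not part of the commit), a conditional lastprivate clause might be written as:

    // compile with -fopenmp (OpenMP 5.0 feature)
    void last_flagged(int n, const int *cond, int &result) {
      int a = -1;
    #pragma omp parallel for lastprivate(conditional: a)
      for (int i = 0; i < n; ++i)
        if (cond[i])
          a = i;  // the sequentially last iteration that assigns 'a' determines its final value
      result = a;
    }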
* [OPENMP50]Codegen for nontemporal clause.Alexey Bataev2019-12-231-5/+28
| | | | | | | | | | | | | | | | Summary: Basic codegen for the declarations marked as nontemporal. Also, if the base declaration in the member expression is marked as nontemporal, lvalue for member decl access inherits nonteporal flag from the base lvalue. Reviewers: rjmccall, hfinkel, jdoerfert Subscribers: guansong, arphaman, caomhin, kkwli0, cfe-commits Tags: #clang Differential Revision: https://reviews.llvm.org/D71708
* [CodeGen][ObjC] Emit a primitive store to store a __strong field inAkira Hatanaka2019-12-031-1/+5
| | | | | | | | | | ExpandTypeFromArgs This fixes a bug in IRGen where a call to `llvm.objc.storeStrong` was being emitted to initialize a __strong field of an uninitialized temporary struct, which caused crashes at runtime. rdar://problem/51807365
* [NFC] Pass a reference to CodeGenFunction to methods of LValue andAkira Hatanaka2019-12-031-53/+55
| | | | | | | | | | | | | | AggValueSlot This reapplies 8a5b7c35709d9ce1f44a99f0c5b084bf2696ea17 after a null dereference bug in CGOpenMPRuntime::emitUserDefinedMapper. Original commit message: This is needed for the pointer authentication work we plan to do in the near future. https://github.com/apple/llvm-project/blob/a63a81bd9911f87a0b5dcd5bdd7ccdda7124af87/clang/docs/PointerAuthentication.rst
* Revert "[NFC] Pass a reference to CodeGenFunction to methods of LValue and"Akira Hatanaka2019-12-031-55/+53
| | | | | This reverts commit 8a5b7c35709d9ce1f44a99f0c5b084bf2696ea17. This seems to have broken UBSan because of a null dereference.
* [NFC] Pass a reference to CodeGenFunction to methods of LValue andAkira Hatanaka2019-12-031-53/+55
| | | | | | | | | AggValueSlot This is needed for the pointer authentication work we plan to do in the near future. https://github.com/apple/llvm-project/blob/a63a81bd9911f87a0b5dcd5bdd7ccdda7124af87/clang/docs/PointerAuthentication.rst
* IRGen: Call SetLLVMFunctionAttributes{,ForDefinition} on __cfi_check_fail.Peter Collingbourne2019-11-251-0/+3
| | | | | | | | | | | | | | This has the main effect of causing target-cpu and target-features to be set on __cfi_check_fail, causing the function to become ABI-compatible with other functions in the case where these attributes affect ABI (e.g. reserve-x18). Technically we only need to call SetLLVMFunctionAttributes to get the target-* attributes set, but since we're creating a definition we probably ought to call the ForDefinition function as well. Fixes PR44094. Differential Revision: https://reviews.llvm.org/D70692
* [NFC] Refactor representation of materialized temporariesTyker2019-11-191-1/+1
| | | | | | | | | | | | | | | Summary: this patch refactor representation of materialized temporaries to prevent an issue raised by rsmith in https://reviews.llvm.org/D63640#inline-612718 Reviewers: rsmith, martong, shafik Reviewed By: rsmith Subscribers: thakis, sammccall, ilya-biryukov, rnkovacs, arphaman, cfe-commits Tags: #clang Differential Revision: https://reviews.llvm.org/D69360
* Revert "[NFC] Refactor representation of materialized temporaries"Nico Weber2019-11-171-1/+1
| | | | | | This reverts commit 08ea1ee2db5f9d6460fef1d79d0d1d1a5eb78982. It broke ./ClangdTests/FindExplicitReferencesTest.All on the bots, see comments on https://reviews.llvm.org/D69360
* [NFC] Refactor representation of materialized temporariesTyker2019-11-161-1/+1
| | | | | | | | | | | | | | | Summary: this patch refactor representation of materialized temporaries to prevent an issue raised by rsmith in https://reviews.llvm.org/D63640#inline-612718 Reviewers: rsmith, martong, shafik Reviewed By: rsmith Subscribers: rnkovacs, arphaman, cfe-commits Tags: #clang Differential Revision: https://reviews.llvm.org/D69360
* [BPF] Restrict preserve_access_index attribute to C onlyYonghong Song2019-11-141-31/+12
| | | | | | | | | | | | This patch is a follow-up for commit 4e2ce228ae79 [BPF] Add preserve_access_index attribute for record definition to restrict attribute for C only. A new test case is added to check for this restriction. Additional code polishing is done based on Aaron Ballman's suggestion in https://reviews.llvm.org/D69759/new/. Differential Revision: https://reviews.llvm.org/D70257
* [BPF] Add preserve_access_index attribute for record definitionYonghong Song2019-11-131-7/+68
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This is a resubmission for the previous reverted commit 943436040121 with the same subject. This commit fixed the segfault issue and addressed additional review comments. This patch introduced a new bpf specific attribute which can be added to struct or union definition. For example, struct s { ... } __attribute__((preserve_access_index)); union u { ... } __attribute__((preserve_access_index)); The goal is to simplify user codes for cases where preserve access index happens for certain struct/union, so user does not need to use clang __builtin_preserve_access_index for every members. The attribute has no effect if -g is not specified. When the attribute is specified and -g is specified, any member access defined by that structure or union, including array subscript access and inner records, will be preserved through __builtin_preserve_{array,struct,union}_access_index() IR intrinsics, which will enable relocation generation in bpf backend. The following is an example to illustrate the usage: -bash-4.4$ cat t.c #define __reloc__ __attribute__((preserve_access_index)) struct s1 { int c; } __reloc__; struct s2 { union { struct s1 b[3]; }; } __reloc__; struct s3 { struct s2 a; } __reloc__; int test(struct s3 *arg) { return arg->a.b[2].c; } -bash-4.4$ clang -target bpf -g -S -O2 t.c A relocation with access string "0:0:0:0:2:0" will be generated representing access offset of arg->a.b[2].c. forward declaration with attribute is also handled properly such that the attribute is copied and populated in real record definition. Differential Revision: https://reviews.llvm.org/D69759
* Revert "[BPF] Add preserve_access_index attribute for record definition"Yonghong Song2019-11-091-68/+7
| | | | | | This reverts commit 4a5aa1a7bf8b1714b817ede8e09cd28c0784228a. There are some other test failures. Investigate them first.
* [BPF] Add preserve_access_index attribute for record definitionYonghong Song2019-11-091-7/+68
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This patch introduced a new bpf specific attribute which can be added to struct or union definition. For example, struct s { ... } __attribute__((preserve_access_index)); union u { ... } __attribute__((preserve_access_index)); The goal is to simplify user codes for cases where preserve access index happens for certain struct/union, so user does not need to use clang __builtin_preserve_access_index for every members. The attribute has no effect if -g is not specified. When the attribute is specified and -g is specified, any member access defined by that structure or union, including array subscript access and inner records, will be preserved through __builtin_preserve_{array,struct,union}_access_index() IR intrinsics, which will enable relocation generation in bpf backend. The following is an example to illustrate the usage: -bash-4.4$ cat t.c #define __reloc__ __attribute__((preserve_access_index)) struct s1 { int c; } __reloc__; struct s2 { union { struct s1 b[3]; }; } __reloc__; struct s3 { struct s2 a; } __reloc__; int test(struct s3 *arg) { return arg->a.b[2].c; } -bash-4.4$ clang -target bpf -g -S -O2 t.c A relocation with access string "0:0:0:0:2:0" will be generated representing access offset of arg->a.b[2].c. forward declaration with attribute is also handled properly such that the attribute is copied and populated in real record definition. Differential Revision: https://reviews.llvm.org/D69759
* [opaque pointer types] Add element type argument to IRBuilder CreatePreserveStructAccessIndex and CreatePreserveArrayAccessIndex (Craig Topper, 2019-11-03, 1 file, -1/+2)
  Summary: These were the only remaining users of the GetElementPtrInst::getGEPReturnType method that gets the element type from the pointer type. Remove that method since it is now dead.
  Reviewers: jyknight, t.p.northover, arsenm
  Reviewed By: arsenm
  Subscribers: wdng, arsenm, arphaman, cfe-commits, llvm-commits
  Tags: #clang, #llvm
  Differential Revision: https://reviews.llvm.org/D69756
* [c++20] Add CXXRewrittenBinaryOperator to represent a comparison operator that is rewritten as a call to multiple other operators. (Richard Smith, 2019-10-19, 1 file, -0/+2)
  No functionality change yet: nothing creates these expressions.
  llvm-svn: 375305
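  For context, a sketch (not part of this commit, which only adds the AST node) of the C++20 source pattern such rewritten operators will eventually model:

    #include <compare>

    struct S {
      int x;
      auto operator<=>(const S &) const = default;
    };

    // 'a < b' has no declared operator<; under C++20 operator rewriting it is
    // evaluated as '(a <=> b) < 0'.
    bool less(S a, S b) { return a < b; }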
* Revert Tag CFI-generated data structures with "#pragma clang section" attributes. (Dmitry Mikulin, 2019-10-17, 1 file, -3/+0)
  This reverts r375022 (git commit e2692b3bc0327606748b6d291b9009d2c845ced5).
  llvm-svn: 375069
* Tag CFI-generated data structures with "#pragma clang section" attributes. (Dmitry Mikulin, 2019-10-16, 1 file, -0/+3)
  Differential Revision: https://reviews.llvm.org/D68808
  llvm-svn: 375022
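  For reference, the pragma in question looks roughly like this on ELF targets (the section names below are made up for illustration; how the CFI data picks them up is the subject of the patch, not shown here):

    #pragma clang section bss=".fast_bss" data=".fast_data" rodata=".fast_rodata"

    int counter;                  // placed in .fast_bss
    int table[4] = {1, 2, 3, 4};  // placed in .fast_data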
* [BPF] do compile-once run-everywhere relocation for bitfieldsYonghong Song2019-10-081-3/+13
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | A bpf specific clang intrinsic is introduced: u32 __builtin_preserve_field_info(member_access, info_kind) Depending on info_kind, different information will be returned to the program. A relocation is also recorded for this builtin so that bpf loader can patch the instruction on the target host. This clang intrinsic is used to get certain information to facilitate struct/union member relocations. The offset relocation is extended by 4 bytes to include relocation kind. Currently supported relocation kinds are enum { FIELD_BYTE_OFFSET = 0, FIELD_BYTE_SIZE, FIELD_EXISTENCE, FIELD_SIGNEDNESS, FIELD_LSHIFT_U64, FIELD_RSHIFT_U64, }; for __builtin_preserve_field_info. The old access offset relocation is covered by FIELD_BYTE_OFFSET = 0. An example: struct s { int a; int b1:9; int b2:4; }; enum { FIELD_BYTE_OFFSET = 0, FIELD_BYTE_SIZE, FIELD_EXISTENCE, FIELD_SIGNEDNESS, FIELD_LSHIFT_U64, FIELD_RSHIFT_U64, }; void bpf_probe_read(void *, unsigned, const void *); int field_read(struct s *arg) { unsigned long long ull = 0; unsigned offset = __builtin_preserve_field_info(arg->b2, FIELD_BYTE_OFFSET); unsigned size = __builtin_preserve_field_info(arg->b2, FIELD_BYTE_SIZE); #ifdef USE_PROBE_READ bpf_probe_read(&ull, size, (const void *)arg + offset); unsigned lshift = __builtin_preserve_field_info(arg->b2, FIELD_LSHIFT_U64); #if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__ lshift = lshift + (size << 3) - 64; #endif #else switch(size) { case 1: ull = *(unsigned char *)((void *)arg + offset); break; case 2: ull = *(unsigned short *)((void *)arg + offset); break; case 4: ull = *(unsigned int *)((void *)arg + offset); break; case 8: ull = *(unsigned long long *)((void *)arg + offset); break; } unsigned lshift = __builtin_preserve_field_info(arg->b2, FIELD_LSHIFT_U64); #endif ull <<= lshift; if (__builtin_preserve_field_info(arg->b2, FIELD_SIGNEDNESS)) return (long long)ull >> __builtin_preserve_field_info(arg->b2, FIELD_RSHIFT_U64); return ull >> __builtin_preserve_field_info(arg->b2, FIELD_RSHIFT_U64); } There is a minor overhead for bpf_probe_read() on big endian. The code and relocation generated for field_read where bpf_probe_read() is used to access argument data on little endian mode: r3 = r1 r1 = 0 r1 = 4 <=== relocation (FIELD_BYTE_OFFSET) r3 += r1 r1 = r10 r1 += -8 r2 = 4 <=== relocation (FIELD_BYTE_SIZE) call bpf_probe_read r2 = 51 <=== relocation (FIELD_LSHIFT_U64) r1 = *(u64 *)(r10 - 8) r1 <<= r2 r2 = 60 <=== relocation (FIELD_RSHIFT_U64) r0 = r1 r0 >>= r2 r3 = 1 <=== relocation (FIELD_SIGNEDNESS) if r3 == 0 goto LBB0_2 r1 s>>= r2 r0 = r1 LBB0_2: exit Compare to the above code between relocations FIELD_LSHIFT_U64 and FIELD_LSHIFT_U64, the code with big endian mode has four more instructions. r1 = 41 <=== relocation (FIELD_LSHIFT_U64) r6 += r1 r6 += -64 r6 <<= 32 r6 >>= 32 r1 = *(u64 *)(r10 - 8) r1 <<= r6 r2 = 60 <=== relocation (FIELD_RSHIFT_U64) The code and relocation generated when using direct load. 
r2 = 0 r3 = 4 r4 = 4 if r4 s> 3 goto LBB0_3 if r4 == 1 goto LBB0_5 if r4 == 2 goto LBB0_6 goto LBB0_9 LBB0_6: # %sw.bb1 r1 += r3 r2 = *(u16 *)(r1 + 0) goto LBB0_9 LBB0_3: # %entry if r4 == 4 goto LBB0_7 if r4 == 8 goto LBB0_8 goto LBB0_9 LBB0_8: # %sw.bb9 r1 += r3 r2 = *(u64 *)(r1 + 0) goto LBB0_9 LBB0_5: # %sw.bb r1 += r3 r2 = *(u8 *)(r1 + 0) goto LBB0_9 LBB0_7: # %sw.bb5 r1 += r3 r2 = *(u32 *)(r1 + 0) LBB0_9: # %sw.epilog r1 = 51 r2 <<= r1 r1 = 60 r0 = r2 r0 >>= r1 r3 = 1 if r3 == 0 goto LBB0_11 r2 s>>= r1 r0 = r2 LBB0_11: # %sw.epilog exit Considering verifier is able to do limited constant propogation following branches. The following is the code actually traversed. r2 = 0 r3 = 4 <=== relocation r4 = 4 <=== relocation if r4 s> 3 goto LBB0_3 LBB0_3: # %entry if r4 == 4 goto LBB0_7 LBB0_7: # %sw.bb5 r1 += r3 r2 = *(u32 *)(r1 + 0) LBB0_9: # %sw.epilog r1 = 51 <=== relocation r2 <<= r1 r1 = 60 <=== relocation r0 = r2 r0 >>= r1 r3 = 1 if r3 == 0 goto LBB0_11 r2 s>>= r1 r0 = r2 LBB0_11: # %sw.epilog exit For native load case, the load size is calculated to be the same as the size of load width LLVM otherwise used to load the value which is then used to extract the bitfield value. Differential Revision: https://reviews.llvm.org/D67980 llvm-svn: 374099
* Codegen - silence static analyzer getAs<> null dereference warnings. NFCI.Simon Pilgrim2019-10-071-3/+3
| | | | | | The static analyzer is warning about potential null dereferences, but in these cases we should be able to use castAs<> directly and if not assert will fire for us. llvm-svn: 373918
* [Alignment][Clang][NFC] Add CharUnits::getAsAlignGuillaume Chatelet2019-10-031-3/+3
| | | | | | | | | | | | | | | | | | Summary: This is a prerequisite to removing `llvm::GlobalObject::setAlignment(unsigned)`. This is patch is part of a series to introduce an Alignment type. See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html See this patch for the introduction of the type: https://reviews.llvm.org/D64790 Reviewers: courbet Subscribers: jholewinski, cfe-commits Tags: #clang Differential Revision: https://reviews.llvm.org/D68274 llvm-svn: 373592
* [Alignment][NFC] Remove AllocaInst::setAlignment(unsigned)Guillaume Chatelet2019-09-301-1/+1
| | | | | | | | | | | | | | | | | Summary: This is patch is part of a series to introduce an Alignment type. See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html See this patch for the introduction of the type: https://reviews.llvm.org/D64790 Reviewers: courbet Subscribers: jholewinski, arsenm, jvesely, nhaehnle, eraman, hiraditya, cfe-commits, llvm-commits Tags: #clang, #llvm Differential Revision: https://reviews.llvm.org/D68141 llvm-svn: 373207
* Improve code generation for thread_local variables: (Richard Smith, 2019-09-12, 1 file, -1/+1)
  Summary:
  * Don't bother using a thread wrapper when the variable is known to have constant initialization.
  * Emit the thread wrapper as discardable-if-unused in TUs that don't contain a definition of the thread_local variable.
  * Don't emit the thread wrapper at all if the thread_local variable is unused and discardable; it will be emitted by all TUs that need it.
  Reviewers: rjmccall, jdoerfert
  Subscribers: cfe-commits
  Tags: #clang
  Differential Revision: https://reviews.llvm.org/D67429
  llvm-svn: 371767
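  As an illustrative (not authoritative) sketch of the cases the bullets above distinguish:

    thread_local int counter = 0;        // constant initialization: access needs no thread-wrapper call
    extern thread_local int configured;  // defined in another TU: any wrapper emitted here can be
                                         // discarded if it ends up unused

    int snapshot() { return counter + configured; }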
* Fix for PR43175: compiler crash when trying to emit noncapturableAlexey Bataev2019-09-101-0/+5
| | | | | | | | | | | | | | constant. If the constexpr variable is partially initialized, the initializer can be emitted as the structure, not as an array, because of some early optimizations. The llvm variable gets the type from this constant and, thus, gets the type which is pointer to struct rather than pointer to an array. We need to convert this type to be truely array, otherwise it may lead to the compiler crash when trying to emit array subscript expression. llvm-svn: 371548
* Implement codegen for MSVC unions with reference members. (Aaron Ballman, 2019-08-27, 1 file, -14/+16)
  Currently, clang accepts a union with a reference member when given the -fms-extensions flag. This change fixes the codegen for this case. Patch by Dominic Ferreira.
  llvm-svn: 370052
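  A minimal illustration of the construct in question (example is not from the patch; requires -fms-extensions):

    // clang++ -fms-extensions
    union U {
      int &ref;  // reference member, accepted as a Microsoft extension
    };

    int read(U &u) { return u.ref; }  // this member access is what the codegen fix covers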
* hwasan, codegen: Keep more lifetime markers used for hwasanVitaly Buka2019-08-261-0/+1
| | | | | | | | | | | | Reviewers: eugenis Subscribers: cfe-commits Tags: #clang Differential Revision: https://reviews.llvm.org/D66697 llvm-svn: 369980
* msan, codegen, instcombine: Keep more lifetime markers used for msanVitaly Buka2019-08-261-4/+3
| | | | | | | | | | | | Reviewers: eugenis Subscribers: hiraditya, cfe-commits, #sanitizers, llvm-commits Tags: #clang, #sanitizers, #llvm Differential Revision: https://reviews.llvm.org/D66695 llvm-svn: 369979
* IR. Change strip* family of functions to not look through aliases.Peter Collingbourne2019-08-221-2/+1
| | | | | | | | | | | | | | | | | | | | | | | | I noticed another instance of the issue where references to aliases were being replaced with aliasees, this time in InstCombine. In the instance that I saw it turned out to be only a QoI issue (a symbol ended up being missing from the symbol table due to the last reference to the alias being removed, preventing HWASAN from symbolizing a global reference), but it could easily have manifested as incorrect behaviour. Since this is the third such issue encountered (previously: D65118, D65314) it seems to be time to address this common error/QoI issue once and for all and make the strip* family of functions not look through aliases. Includes a test for the specific issue that I saw, but no doubt there are other similar bugs fixed here. As with D65118 this has been tested to make sure that the optimization isn't load bearing. I built Clang, Chromium for Linux, Android and Windows as well as the test-suite and there were no size regressions. Differential Revision: https://reviews.llvm.org/D66606 llvm-svn: 369697
* [BPF] annotate DIType metadata for builtin preseve_array_access_index()Yonghong Song2019-08-021-3/+9
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Previously, debuginfo types are annotated to IR builtin preserve_struct_access_index() and preserve_union_access_index(), but not preserve_array_access_index(). The debug info is useful to identify the root type name which later will be used for type comparison. For user access without explicit type conversions, the previous scheme works as we can ignore intermediate compiler generated type conversions (e.g., from union types to union members) and still generate correct access index string. The issue comes with user explicit type conversions, e.g., converting an array to a structure like below: struct t { int a; char b[40]; }; struct p { int c; int d; }; struct t *var = ...; ... __builtin_preserve_access_index(&(((struct p *)&(var->b[0]))->d)) ... Although BPF backend can derive the type of &(var->b[0]), explicit type annotation make checking more consistent and less error prone. Another benefit is for multiple dimension array handling. For example, struct p { int c; int d; } g[8][9][10]; ... __builtin_preserve_access_index(&g[2][3][4].d) ... It would be possible to calculate the number of "struct p"'s before accessing its member "d" if array debug info is available as it contains each dimension range. This patch enables to annotate IR builtin preserve_array_access_index() with proper debuginfo type. The unit test case and language reference is updated as well. Signed-off-by: Yonghong Song <yhs@fb.com> Differential Revision: https://reviews.llvm.org/D65664 llvm-svn: 367724
* fix unnamed fiefield issue and add tests for __builtin_preserve_access_index intrinsic (Yonghong Song, 2019-07-16, 1 file, -2/+19)
  The original commit is r366076. It is temporarily reverted (r366155) due to test failure. This resubmit makes the tests more robust by accepting regexes instead of hardcoded names/references in several places.
  This is a followup patch for https://reviews.llvm.org/D61809. Handle unnamed bitfield properly and add more test cases. Fixed the unnamed bitfield issue. The unnamed bitfield is ignored by debug info, so we need to ignore such a struct/union member when we try to get the member index in the debug info.
  D61809 contains two test cases, but they are not enough as they do not check the generated IR at a fine-grained level, and they also lack semantics-checking tests. This patch added unit tests for both codegen and semantics checking for the new intrinsic.
  Signed-off-by: Yonghong Song <yhs@fb.com>
  llvm-svn: 366231
* Finish "Adapt -fsanitize=function to SANITIZER_NON_UNIQUE_TYPEINFO"Stephan Bergmann2019-07-161-1/+1
| | | | | | | | | | | | | | | | | | | | | i.e., recent 5745eccef54ddd3caca278d1d292a88b2281528b: * Bump the function_type_mismatch handler version, as its signature has changed. * The function_type_mismatch handler can return successfully now, so SanitizerKind::Function must be AlwaysRecoverable (like for SanitizerKind::Vptr). * But the minimal runtime would still unconditionally treat a call to the function_type_mismatch handler as failure, so disallow -fsanitize=function in combination with -fsanitize-minimal-runtime (like it was already done for -fsanitize=vptr). * Add tests. Differential Revision: https://reviews.llvm.org/D61479 llvm-svn: 366186
* Fix parameter name comments using clang-tidy. NFC.Rui Ueyama2019-07-161-6/+6
| | | | | | | | | | | | | | | | | | | | | This patch applies clang-tidy's bugprone-argument-comment tool to LLVM, clang and lld source trees. Here is how I created this patch: $ git clone https://github.com/llvm/llvm-project.git $ cd llvm-project $ mkdir build $ cd build $ cmake -GNinja -DCMAKE_BUILD_TYPE=Debug \ -DLLVM_ENABLE_PROJECTS='clang;lld;clang-tools-extra' \ -DCMAKE_EXPORT_COMPILE_COMMANDS=On -DLLVM_ENABLE_LLD=On \ -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ ../llvm $ ninja $ parallel clang-tidy -checks='-*,bugprone-argument-comment' \ -config='{CheckOptions: [{key: StrictMode, value: 1}]}' -fix \ ::: ../llvm/lib/**/*.{cpp,h} ../clang/lib/**/*.{cpp,h} ../lld/**/*.{cpp,h} llvm-svn: 366177
* Temporarily Revert "fix unnamed fiefield issue and add tests for ↵Eric Christopher2019-07-151-19/+2
| | | | | | | | | | __builtin_preserve_access_index intrinsic" The commit had tests that would only work with names in the IR. This reverts commit r366076. llvm-svn: 366155
* fix unnamed fiefield issue and add tests for __builtin_preserve_access_index intrinsic (Yonghong Song, 2019-07-15, 1 file, -2/+19)
  This is a followup patch for https://reviews.llvm.org/D61809. Handle unnamed bitfield properly and add more test cases. Fixed the unnamed bitfield issue. The unnamed bitfield is ignored by debug info, so we need to ignore such a struct/union member when we try to get the member index in the debug info.
  D61809 contains two test cases, but they are not enough as they do not check the generated IR at a fine-grained level, and they also lack semantics-checking tests. This patch added unit tests for both codegen and semantics checking for the new intrinsic.
  Signed-off-by: Yonghong Song <yhs@fb.com>
  llvm-svn: 366076
* [BPF] Preserve debuginfo array/union/struct type/access indexYonghong Song2019-07-091-4/+45
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | For background of BPF CO-RE project, please refer to http://vger.kernel.org/bpfconf2019.html In summary, BPF CO-RE intends to compile bpf programs adjustable on struct/union layout change so the same program can run on multiple kernels with adjustment before loading based on native kernel structures. In order to do this, we need keep track of GEP(getelementptr) instruction base and result debuginfo types, so we can adjust on the host based on kernel BTF info. Capturing such information as an IR optimization is hard as various optimization may have tweaked GEP and also union is replaced by structure it is impossible to track fieldindex for union member accesses. Three intrinsic functions, preserve_{array,union,struct}_access_index, are introducted. addr = preserve_array_access_index(base, index, dimension) addr = preserve_union_access_index(base, di_index) addr = preserve_struct_access_index(base, gep_index, di_index) here, base: the base pointer for the array/union/struct access. index: the last access index for array, the same for IR/DebugInfo layout. dimension: the array dimension. gep_index: the access index based on IR layout. di_index: the access index based on user/debuginfo types. If using these intrinsics blindly, i.e., transforming all GEPs to these intrinsics and later on reducing them to GEPs, we have seen up to 7% more instructions generated. To avoid such an overhead, a clang builtin is proposed: base = __builtin_preserve_access_index(base) such that user wraps to-be-relocated GEPs in this builtin and preserve_*_access_index intrinsics only apply to those GEPs. Such a buyin will prevent performance degradation if people do not use CO-RE, even for programs which use bpf_probe_read(). For example, for the following example, $ cat test.c struct sk_buff { int i; int b1:1; int b2:2; union { struct { int o1; int o2; } o; struct { char flags; char dev_id; } dev; int netid; } u[10]; }; static int (*bpf_probe_read)(void *dst, int size, const void *unsafe_ptr) = (void *) 4; #define _(x) (__builtin_preserve_access_index(x)) int bpf_prog(struct sk_buff *ctx) { char dev_id; bpf_probe_read(&dev_id, sizeof(char), _(&ctx->u[5].dev.dev_id)); return dev_id; } $ clang -target bpf -O2 -g -emit-llvm -S -mllvm -print-before-all \ test.c >& log The generated IR looks like below: ... 
define dso_local i32 @bpf_prog(%struct.sk_buff*) #0 !dbg !15 { %2 = alloca %struct.sk_buff*, align 8 %3 = alloca i8, align 1 store %struct.sk_buff* %0, %struct.sk_buff** %2, align 8, !tbaa !45 call void @llvm.dbg.declare(metadata %struct.sk_buff** %2, metadata !43, metadata !DIExpression()), !dbg !49 call void @llvm.lifetime.start.p0i8(i64 1, i8* %3) #4, !dbg !50 call void @llvm.dbg.declare(metadata i8* %3, metadata !44, metadata !DIExpression()), !dbg !51 %4 = load i32 (i8*, i32, i8*)*, i32 (i8*, i32, i8*)** @bpf_probe_read, align 8, !dbg !52, !tbaa !45 %5 = load %struct.sk_buff*, %struct.sk_buff** %2, align 8, !dbg !53, !tbaa !45 %6 = call [10 x %union.anon]* @llvm.preserve.struct.access.index.p0a10s_union.anons.p0s_struct.sk_buffs( %struct.sk_buff* %5, i32 2, i32 3), !dbg !53, !llvm.preserve.access.index !19 %7 = call %union.anon* @llvm.preserve.array.access.index.p0s_union.anons.p0a10s_union.anons( [10 x %union.anon]* %6, i32 1, i32 5), !dbg !53 %8 = call %union.anon* @llvm.preserve.union.access.index.p0s_union.anons.p0s_union.anons( %union.anon* %7, i32 1), !dbg !53, !llvm.preserve.access.index !26 %9 = bitcast %union.anon* %8 to %struct.anon.0*, !dbg !53 %10 = call i8* @llvm.preserve.struct.access.index.p0i8.p0s_struct.anon.0s( %struct.anon.0* %9, i32 1, i32 1), !dbg !53, !llvm.preserve.access.index !34 %11 = call i32 %4(i8* %3, i32 1, i8* %10), !dbg !52 %12 = load i8, i8* %3, align 1, !dbg !54, !tbaa !55 %13 = sext i8 %12 to i32, !dbg !54 call void @llvm.lifetime.end.p0i8(i64 1, i8* %3) #4, !dbg !56 ret i32 %13, !dbg !57 } !19 = distinct !DICompositeType(tag: DW_TAG_structure_type, name: "sk_buff", file: !3, line: 1, size: 704, elements: !20) !26 = distinct !DICompositeType(tag: DW_TAG_union_type, scope: !19, file: !3, line: 5, size: 64, elements: !27) !34 = distinct !DICompositeType(tag: DW_TAG_structure_type, scope: !26, file: !3, line: 10, size: 16, elements: !35) Note that @llvm.preserve.{struct,union}.access.index calls have metadata llvm.preserve.access.index attached to instructions to provide struct/union debuginfo type information. For &ctx->u[5].dev.dev_id, . The "%6 = ..." represents struct member "u" with index 2 for IR layout and index 3 for DI layout. . The "%7 = ..." represents array subscript "5". . The "%8 = ..." represents union member "dev" with index 1 for DI layout. . The "%10 = ..." represents struct member "dev_id" with index 1 for both IR and DI layout. Basically, traversing the use-def chain recursively for the 3rd argument of bpf_probe_read() and examining all preserve_*_access_index calls, the debuginfo struct/union/array access index can be achieved. The intrinsics also contain enough information to regenerate codes for IR layout. For array and structure intrinsics, the proper GEP can be constructed. For union intrinsics, replacing all uses of "addr" with "base" should be enough. Signed-off-by: Yonghong Song <yhs@fb.com> Differential Revision: https://reviews.llvm.org/D61809 llvm-svn: 365438
* Revert "[BPF] Preserve debuginfo array/union/struct type/access index"Yonghong Song2019-07-091-45/+4
| | | | | | | | | This reverts commit r365435. Forgot adding the Differential Revision link. Will add to the commit message and resubmit. llvm-svn: 365436
* [BPF] Preserve debuginfo array/union/struct type/access indexYonghong Song2019-07-091-4/+45
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | For background of BPF CO-RE project, please refer to http://vger.kernel.org/bpfconf2019.html In summary, BPF CO-RE intends to compile bpf programs adjustable on struct/union layout change so the same program can run on multiple kernels with adjustment before loading based on native kernel structures. In order to do this, we need keep track of GEP(getelementptr) instruction base and result debuginfo types, so we can adjust on the host based on kernel BTF info. Capturing such information as an IR optimization is hard as various optimization may have tweaked GEP and also union is replaced by structure it is impossible to track fieldindex for union member accesses. Three intrinsic functions, preserve_{array,union,struct}_access_index, are introducted. addr = preserve_array_access_index(base, index, dimension) addr = preserve_union_access_index(base, di_index) addr = preserve_struct_access_index(base, gep_index, di_index) here, base: the base pointer for the array/union/struct access. index: the last access index for array, the same for IR/DebugInfo layout. dimension: the array dimension. gep_index: the access index based on IR layout. di_index: the access index based on user/debuginfo types. If using these intrinsics blindly, i.e., transforming all GEPs to these intrinsics and later on reducing them to GEPs, we have seen up to 7% more instructions generated. To avoid such an overhead, a clang builtin is proposed: base = __builtin_preserve_access_index(base) such that user wraps to-be-relocated GEPs in this builtin and preserve_*_access_index intrinsics only apply to those GEPs. Such a buyin will prevent performance degradation if people do not use CO-RE, even for programs which use bpf_probe_read(). For example, for the following example, $ cat test.c struct sk_buff { int i; int b1:1; int b2:2; union { struct { int o1; int o2; } o; struct { char flags; char dev_id; } dev; int netid; } u[10]; }; static int (*bpf_probe_read)(void *dst, int size, const void *unsafe_ptr) = (void *) 4; #define _(x) (__builtin_preserve_access_index(x)) int bpf_prog(struct sk_buff *ctx) { char dev_id; bpf_probe_read(&dev_id, sizeof(char), _(&ctx->u[5].dev.dev_id)); return dev_id; } $ clang -target bpf -O2 -g -emit-llvm -S -mllvm -print-before-all \ test.c >& log The generated IR looks like below: ... 
define dso_local i32 @bpf_prog(%struct.sk_buff*) #0 !dbg !15 { %2 = alloca %struct.sk_buff*, align 8 %3 = alloca i8, align 1 store %struct.sk_buff* %0, %struct.sk_buff** %2, align 8, !tbaa !45 call void @llvm.dbg.declare(metadata %struct.sk_buff** %2, metadata !43, metadata !DIExpression()), !dbg !49 call void @llvm.lifetime.start.p0i8(i64 1, i8* %3) #4, !dbg !50 call void @llvm.dbg.declare(metadata i8* %3, metadata !44, metadata !DIExpression()), !dbg !51 %4 = load i32 (i8*, i32, i8*)*, i32 (i8*, i32, i8*)** @bpf_probe_read, align 8, !dbg !52, !tbaa !45 %5 = load %struct.sk_buff*, %struct.sk_buff** %2, align 8, !dbg !53, !tbaa !45 %6 = call [10 x %union.anon]* @llvm.preserve.struct.access.index.p0a10s_union.anons.p0s_struct.sk_buffs( %struct.sk_buff* %5, i32 2, i32 3), !dbg !53, !llvm.preserve.access.index !19 %7 = call %union.anon* @llvm.preserve.array.access.index.p0s_union.anons.p0a10s_union.anons( [10 x %union.anon]* %6, i32 1, i32 5), !dbg !53 %8 = call %union.anon* @llvm.preserve.union.access.index.p0s_union.anons.p0s_union.anons( %union.anon* %7, i32 1), !dbg !53, !llvm.preserve.access.index !26 %9 = bitcast %union.anon* %8 to %struct.anon.0*, !dbg !53 %10 = call i8* @llvm.preserve.struct.access.index.p0i8.p0s_struct.anon.0s( %struct.anon.0* %9, i32 1, i32 1), !dbg !53, !llvm.preserve.access.index !34 %11 = call i32 %4(i8* %3, i32 1, i8* %10), !dbg !52 %12 = load i8, i8* %3, align 1, !dbg !54, !tbaa !55 %13 = sext i8 %12 to i32, !dbg !54 call void @llvm.lifetime.end.p0i8(i64 1, i8* %3) #4, !dbg !56 ret i32 %13, !dbg !57 } !19 = distinct !DICompositeType(tag: DW_TAG_structure_type, name: "sk_buff", file: !3, line: 1, size: 704, elements: !20) !26 = distinct !DICompositeType(tag: DW_TAG_union_type, scope: !19, file: !3, line: 5, size: 64, elements: !27) !34 = distinct !DICompositeType(tag: DW_TAG_structure_type, scope: !26, file: !3, line: 10, size: 16, elements: !35) Note that @llvm.preserve.{struct,union}.access.index calls have metadata llvm.preserve.access.index attached to instructions to provide struct/union debuginfo type information. For &ctx->u[5].dev.dev_id, . The "%6 = ..." represents struct member "u" with index 2 for IR layout and index 3 for DI layout. . The "%7 = ..." represents array subscript "5". . The "%8 = ..." represents union member "dev" with index 1 for DI layout. . The "%10 = ..." represents struct member "dev_id" with index 1 for both IR and DI layout. Basically, traversing the use-def chain recursively for the 3rd argument of bpf_probe_read() and examining all preserve_*_access_index calls, the debuginfo struct/union/array access index can be achieved. The intrinsics also contain enough information to regenerate codes for IR layout. For array and structure intrinsics, the proper GEP can be constructed. For union intrinsics, replacing all uses of "addr" with "base" should be enough. Signed-off-by: Yonghong Song <yhs@fb.com> llvm-svn: 365435
* [C++2a] Add __builtin_bit_cast, used to implement std::bit_cast (Erik Pilkington, 2019-07-02, 1 file, -0/+1)
  This commit adds a new builtin, __builtin_bit_cast(T, v), which performs a bit_cast from a value v to a type T. This expression can be evaluated at compile time under specific circumstances.
  The compile time evaluation currently doesn't support bit-fields, but I'm planning on fixing this in a follow up (some of the logic for figuring this out is in CodeGen). I'm also planning follow-ups for supporting some more esoteric types that the constexpr evaluator supports, as well as extending __builtin_memcpy constexpr evaluation to use the same infrastructure.
  rdar://44987528
  Differential revision: https://reviews.llvm.org/D62825
  llvm-svn: 364954
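  A small usage sketch (assuming a target where float and unsigned are both 32 bits and floats use IEEE-754):

    // Reinterpret the bit pattern of 1.0f as an unsigned integer, at compile time.
    constexpr unsigned bits = __builtin_bit_cast(unsigned, 1.0f);
    static_assert(bits == 0x3f800000u, "IEEE-754 single-precision encoding of 1.0");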
* [clang] Add DISuprogram and DIE for a func declDjordje Todorovic2019-06-271-1/+13
| | | | | | | | | | | | | | | Attach a unique DISubprogram to a function declaration that will be used for call site debug info. ([7/13] Introduce the debug entry values.) Co-authored-by: Ananth Sowda <asowda@cisco.com> Co-authored-by: Nikola Prica <nikola.prica@rt-rk.com> Co-authored-by: Ivan Baev <ibaev@cisco.com> Differential Revision: https://reviews.llvm.org/D60714 llvm-svn: 364502
* Ensure Target Features always_inline error happens in C++ cases. (Erich Keane, 2019-06-21, 1 file, -11/+0)
  A handful of C++ cases as reported in PR42352 didn't actually give an error when always_inlining with a different target feature list. This resulted in broken IR.
  llvm-svn: 364109
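  A rough sketch (not taken from PR42352) of the shape of C++ code that should be rejected; assume an x86 compilation without -mavx2, and the diagnostic text in the comment is paraphrased:

    struct Widget {
      __attribute__((always_inline, target("avx2")))
      int fast() { return 0; }
    };

    int use(Widget w) {
      // Expected to be diagnosed: 'fast' requires the avx2 feature but must be
      // inlined into 'use', which is compiled without avx2 support.
      return w.fast();
    }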
* P0840R2: support for [[no_unique_address]] attribute (Richard Smith, 2019-06-20, 1 file, -0/+15)
  Summary: Add support for the C++2a [[no_unique_address]] attribute for targets using the Itanium C++ ABI. This depends on D63371.
  Reviewers: rjmccall, aaron.ballman
  Subscribers: dschuff, aheejin, cfe-commits
  Tags: #clang
  Differential Revision: https://reviews.llvm.org/D63451
  llvm-svn: 363976
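  An illustrative example of what the attribute permits under the Itanium C++ ABI (the sizes shown are typical, not guaranteed):

    struct Empty {};

    struct Plain  { int i; Empty e; };                        // typically sizeof == 8
    struct Packed { int i; [[no_unique_address]] Empty e; };  // typically sizeof == 4

    static_assert(sizeof(Packed) <= sizeof(Plain),
                  "the empty member may share storage with 'i'");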
* [OpenMP] Add support for handling declare target to clause when unified memory is required (Gheorghe-Teodor Bercea, 2019-06-20, 1 file, -6/+13)
  Summary: This patch adds support for the handling of the variables under the declare target to clause. The variables in this case are handled like link variables are. A pointer is created on the host and then mapped to the device. The runtime will then copy the address of the host variable in the device pointer.
  Reviewers: ABataev, AlexEichenberger, caomhin
  Reviewed By: ABataev
  Subscribers: guansong, jdoerfert, cfe-commits
  Tags: #clang
  Differential Revision: https://reviews.llvm.org/D63108
  llvm-svn: 363959
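  An illustrative (untested) sketch of the combination being handled:

    #pragma omp requires unified_shared_memory

    int table[64];
    #pragma omp declare target to(table)

    void touch() {
    #pragma omp target
      table[0] = 1;  // with unified memory, the device-side pointer receives the
                     // host address of 'table' rather than a separate copy
    }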
* C++ DR712 and others: handle non-odr-use resulting from an lvalue-to-rvalue conversion applied to a member access or similar not-quite-trivial lvalue expression. (Richard Smith, 2019-06-14, 1 file, -22/+87)
  Summary: When a variable is named in a context where we can't directly emit a reference to it (because we don't know for sure that it's going to be defined, or it's from an enclosing function and not captured, or the reference might not "work" for some reason), we emit a copy of the variable as a global and use that for the known-to-be-read-only access.
  This reinstates r363295, reverted in r363352, with a fix for PR42276: we now produce a proper name for a non-odr-use reference to a static constexpr data member. The name <mangled-name>.const is used in that case; such names are reserved to the implementation for cases such as this and should demangle nicely.
  Reviewers: rjmccall
  Subscribers: jdoerfert, cfe-commits
  Tags: #clang
  Differential Revision: https://reviews.llvm.org/D63157
  llvm-svn: 363428
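  A hedged sketch of the kind of member-access pattern involved (the emitted symbol name is an implementation detail and may differ):

    struct Limits {
      static constexpr int max = 64;
    };

    int clamp(Limits &l, int v) {
      // 'l.max' only undergoes an lvalue-to-rvalue conversion, so it is not an
      // odr-use; the compiler may emit a read-only copy of the value rather
      // than requiring an out-of-line definition of Limits::max.
      return v > l.max ? l.max : v;
    }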
* Revert 363295, it caused PR42276. Also revert follow-ups 363337, 363340.Nico Weber2019-06-141-87/+22
| | | | | | | | Revert 363340 "Remove unused SK_LValueToRValue initialization step." Revert 363337 "PR23833, DR2140: an lvalue-to-rvalue conversion on a glvalue of type" Revert 363295 "C++ DR712 and others: handle non-odr-use resulting from an lvalue-to-rvalue conversion applied to a member access or similar not-quite-trivial lvalue expression." llvm-svn: 363352
* C++ DR712 and others: handle non-odr-use resulting from an lvalue-to-rvalue conversion applied to a member access or similar not-quite-trivial lvalue expression. (Richard Smith, 2019-06-13, 1 file, -22/+87)
  Summary: When a variable is named in a context where we can't directly emit a reference to it (because we don't know for sure that it's going to be defined, or it's from an enclosing function and not captured, or the reference might not "work" for some reason), we emit a copy of the variable as a global and use that for the known-to-be-read-only access.
  Reviewers: rjmccall
  Subscribers: jdoerfert, cfe-commits
  Tags: #clang
  Differential Revision: https://reviews.llvm.org/D63157
  llvm-svn: 363295