path: root/llvm/docs/LangRef.rst
Commit message | Author | Age | Files | Lines
* Update docs for fast-math flags. | Jay Foad | 2019-10-18 | 1 | -2/+3
This adds fneg, phi and select to the list of operations that may use fast-math flags. llvm-svn: 375250
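For illustration, a minimal IR sketch with fast-math flags on all three of the newly listed operations::

  define float @fmf(i1 %c, float %a, float %b) {
  entry:
    %na = fneg fast float %a                       ; flags on fneg
    %s  = select nnan i1 %c, float %na, float %b   ; flags on select
    br label %exit
  exit:
    %p  = phi fast float [ %s, %entry ]            ; flags on phi
    ret float %p
  }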
* Reland: Dead Virtual Function Elimination | Oliver Stannard | 2019-10-17 | 1 | -0/+9
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Remove dead virtual functions from vtables with replaceNonMetadataUsesWith, so that CGProfile metadata gets cleaned up correctly. Original commit message: Currently, it is hard for the compiler to remove unused C++ virtual functions, because they are all referenced from vtables, which are referenced by constructors. This means that if the constructor is called from any live code, then we keep every virtual function in the final link, even if there are no call sites which can use it. This patch allows unused virtual functions to be removed during LTO (and regular compilation in limited circumstances) by using type metadata to match virtual function call sites to the vtable slots they might load from. This information can then be used in the global dead code elimination pass instead of the references from vtables to virtual functions, to more accurately determine which functions are reachable. To make this transformation safe, I have changed clang's code-generation to always load virtual function pointers using the llvm.type.checked.load intrinsic, instead of regular load instructions. I originally tried writing this using clang's existing code-generation, which uses the llvm.type.test and llvm.assume intrinsics after doing a normal load. However, it is possible for optimisations to obscure the relationship between the GEP, load and llvm.type.test, causing GlobalDCE to fail to find virtual function call sites. The existing linkage and visibility types don't accurately describe the scope in which a virtual call could be made which uses a given vtable. This is wider than the visibility of the type itself, because a virtual function call could be made using a more-visible base class. I've added a new !vcall_visibility metadata type to represent this, described in TypeMetadata.rst. The internalization pass and libLTO have been updated to change this metadata when linking is performed. This doesn't currently work with ThinLTO, because it needs to see every call to llvm.type.checked.load in the linkage unit. It might be possible to extend this optimisation to be able to use the ThinLTO summary, as was done for devirtualization, but until then that combination is rejected in the clang driver. To test this, I've written a fuzzer which generates random C++ programs with complex class inheritance graphs, and virtual functions called through object and function pointers of different types. The programs are spread across multiple translation units and DSOs to test the different visibility restrictions. I've also tried doing bootstrap builds of LLVM to test this. This isn't ideal, because only classes in anonymous namespaces can be optimised with -fvisibility=default, and some parts of LLVM (plugins and bugpoint) do not work correctly with -fvisibility=hidden. However, there are only 12 test failures when building with -fvisibility=hidden (and an unmodified compiler), and this change does not cause any new failures for either value of -fvisibility. On the 7 C++ sub-benchmarks of SPEC2006, this gives a geomean code-size reduction of ~6%, over a baseline compiled with "-O2 -flto -fvisibility=hidden -fwhole-program-vtables". The best cases are reductions of ~14% in 450.soplex and 483.xalancbmk, and there are no code size increases. 
I've also run this on a set of 8 mbed-os examples compiled for Armv7M, which show a geomean size reduction of ~3%, again with no size increases. I had hoped that this would have no effect on performance, which would allow it to always be enabled (when using -fwhole-program-vtables). However, the changes in clang to use the llvm.type.checked.load intrinsic are causing ~1% performance regression in the C++ parts of SPEC2006. It should be possible to recover some of this perf loss by teaching optimisations about the llvm.type.checked.load intrinsic, which would make it worth turning this on by default (though it's still dependent on -fwhole-program-vtables). Differential revision: https://reviews.llvm.org/D63932 llvm-svn: 375094
* [DebugInfo] Add a DW_OP_LLVM_entry_value operation | David Stenberg | 2019-10-15 | 1 | -12/+19
Summary: Internally in LLVM's metadata we use DW_OP_entry_value operations with the same semantics as DWARF; that is, its operand specifies the number of bytes that the entry value covers. At the time of emitting entry values we don't know the emitted size of the DWARF expression that the entry value will cover. Currently the size is hardcoded to 1 in DIExpression, and other values cause the verifier to fail. As the size is 1, that effectively means that we can only have valid entry values for registers that can be encoded in one byte, which are the registers with DWARF numbers 0 to 31 (as they can be encoded as single-byte DW_OP_reg0..DW_OP_reg31 rather than a multi-byte DW_OP_regx). It is a bit confusing, but it seems like llvm-dwarfdump will print an operation "correctly", even if the byte size is less than that, which may make it seem that we emit correct DWARF for registers with DWARF numbers > 31. If you instead use readelf for such cases, it will interpret the number of specified bytes as a DWARF expression. This seems like a limitation in llvm-dwarfdump. As suggested in D66746, a way forward would be to add an internal variant of DW_OP_entry_value, DW_OP_LLVM_entry_value, whose operand instead specifies the number of operations that the entry value covers, and we then translate that into the byte size at the time of emission. In this patch that internal operation is added. This patch keeps the limitation that an entry value can only be applied to simple register locations, but it will fix the issue with the size operand being incorrect for DWARF numbers > 31. Reviewers: aprantl, vsk, djtodoro, NikolaPrica Reviewed By: aprantl Subscribers: jyknight, fedor.sergeev, hiraditya, llvm-commits Tags: #debug-info, #llvm Differential Revision: https://reviews.llvm.org/D67492 llvm-svn: 374881
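For illustration, a fragment showing roughly how the internal operation appears in a DIExpression (assumed syntax; the operand 1 counts covered operations rather than bytes, and !10/!14 stand for the usual variable and location metadata)::

  ; Describe %param by its value at function entry (sketch).
  call void @llvm.dbg.value(metadata i32 %param, metadata !10,
                            metadata !DIExpression(DW_OP_LLVM_entry_value, 1)), !dbg !14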
* Revert "Dead Virtual Function Elimination"Jorge Gorbe Moya2019-10-141-9/+0
| | | | | | This reverts commit 9f6a873268e1ad9855873d9d8007086c0d01cf4f. llvm-svn: 374844
* Dead Virtual Function Elimination | Oliver Stannard | 2019-10-11 | 1 | -0/+9
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Currently, it is hard for the compiler to remove unused C++ virtual functions, because they are all referenced from vtables, which are referenced by constructors. This means that if the constructor is called from any live code, then we keep every virtual function in the final link, even if there are no call sites which can use it. This patch allows unused virtual functions to be removed during LTO (and regular compilation in limited circumstances) by using type metadata to match virtual function call sites to the vtable slots they might load from. This information can then be used in the global dead code elimination pass instead of the references from vtables to virtual functions, to more accurately determine which functions are reachable. To make this transformation safe, I have changed clang's code-generation to always load virtual function pointers using the llvm.type.checked.load intrinsic, instead of regular load instructions. I originally tried writing this using clang's existing code-generation, which uses the llvm.type.test and llvm.assume intrinsics after doing a normal load. However, it is possible for optimisations to obscure the relationship between the GEP, load and llvm.type.test, causing GlobalDCE to fail to find virtual function call sites. The existing linkage and visibility types don't accurately describe the scope in which a virtual call could be made which uses a given vtable. This is wider than the visibility of the type itself, because a virtual function call could be made using a more-visible base class. I've added a new !vcall_visibility metadata type to represent this, described in TypeMetadata.rst. The internalization pass and libLTO have been updated to change this metadata when linking is performed. This doesn't currently work with ThinLTO, because it needs to see every call to llvm.type.checked.load in the linkage unit. It might be possible to extend this optimisation to be able to use the ThinLTO summary, as was done for devirtualization, but until then that combination is rejected in the clang driver. To test this, I've written a fuzzer which generates random C++ programs with complex class inheritance graphs, and virtual functions called through object and function pointers of different types. The programs are spread across multiple translation units and DSOs to test the different visibility restrictions. I've also tried doing bootstrap builds of LLVM to test this. This isn't ideal, because only classes in anonymous namespaces can be optimised with -fvisibility=default, and some parts of LLVM (plugins and bugpoint) do not work correctly with -fvisibility=hidden. However, there are only 12 test failures when building with -fvisibility=hidden (and an unmodified compiler), and this change does not cause any new failures for either value of -fvisibility. On the 7 C++ sub-benchmarks of SPEC2006, this gives a geomean code-size reduction of ~6%, over a baseline compiled with "-O2 -flto -fvisibility=hidden -fwhole-program-vtables". The best cases are reductions of ~14% in 450.soplex and 483.xalancbmk, and there are no code size increases. I've also run this on a set of 8 mbed-os examples compiled for Armv7M, which show a geomean size reduction of ~3%, again with no size increases. I had hoped that this would have no effect on performance, which would allow it to awlays be enabled (when using -fwhole-program-vtables). 
However, the changes in clang to use the llvm.type.checked.load intrinsic are causing ~1% performance regression in the C++ parts of SPEC2006. It should be possible to recover some of this perf loss by teaching optimisations about the llvm.type.checked.load intrinsic, which would make it worth turning this on by default (though it's still dependent on -fwhole-program-vtables). Differential revision: https://reviews.llvm.org/D63932 llvm-svn: 374539
* [X86] Add new calling convention that guarantees tail call optimization | Reid Kleckner | 2019-10-07 | 1 | -4/+13
When the target option GuaranteedTailCallOpt is specified, calls with the fastcc calling convention will be transformed into tail calls if they are in tail position. This diff adds a new calling convention, tailcc, currently supported only on X86, which behaves the same way as fastcc, except that the GuaranteedTailCallOpt flag does not need to be enabled in order to enable tail call optimization. Patch by Dwight Guth <dwight.guth@runtimeverification.com>! Reviewed By: lebedev.ri, paquette, rnk Differential Revision: https://reviews.llvm.org/D67855 llvm-svn: 373976
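For illustration, a minimal sketch of the new convention in IR::

  declare tailcc i32 @callee(i32)

  define tailcc i32 @caller(i32 %x) {
    ; With tailcc this tail call is guaranteed to be lowered as a tail call,
    ; even without the GuaranteedTailCallOpt target option.
    %r = tail call tailcc i32 @callee(i32 %x)
    ret i32 %r
  }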
* Fix another sphinx warning. | Kevin P. Neal | 2019-10-07 | 1 | -2/+2
Differential Revision: https://reviews.llvm.org/D64746 llvm-svn: 373909
* Fix sphinx warnings. | Kevin P. Neal | 2019-10-07 | 1 | -2/+2
Differential Revision: https://reviews.llvm.org/D64746 llvm-svn: 373902
* [FPEnv] Add constrained intrinsics for lrint and lround | Kevin P. Neal | 2019-10-07 | 1 | -0/+172
Earlier in the year intrinsics for lrint, llrint, lround and llround were added to llvm. The constrained versions are now implemented here. Reviewed by: andrew.w.kaylor, craig.topper, cameron.mcinally Approved by: craig.topper Differential Revision: https://reviews.llvm.org/D64746 llvm-svn: 373900
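For illustration, a sketch of the constrained forms (signatures as assumed here: the lrint family takes rounding-mode and exception metadata, the lround family only exception metadata, since it rounds away from zero by definition)::

  declare i32 @llvm.experimental.constrained.lrint.i32.f64(double, metadata, metadata)
  declare i32 @llvm.experimental.constrained.lround.i32.f32(float, metadata)

  %a = call i32 @llvm.experimental.constrained.lrint.i32.f64(double %x,
                  metadata !"round.dynamic", metadata !"fpexcept.strict")
  %b = call i32 @llvm.experimental.constrained.lround.i32.f32(float %y,
                  metadata !"fpexcept.strict")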
* Fix doc for t inline asm constraints for ARM/Thumb | Pablo Barrio | 2019-09-30 | 1 | -12/+12
Summary: The constraint goes up to regs d15 and q7, not d16 and q8. Subscribers: kristof.beyls, llvm-commits Tags: #llvm Differential Revision: https://reviews.llvm.org/D68090 llvm-svn: 373228
* Fix breakage of sphinx builders. Sorry for leaving this broken over the weekend! | Kevin P. Neal | 2019-09-30 | 1 | -3/+0
llvm-svn: 373215
* Document requirement of function attributes with constrained floating point | Kevin P. Neal | 2019-09-26 | 1 | -1/+13
Reviewed by: andrew.w.kaylor, uweigand, efriedma Approved by: andrew.w.kaylor Differential Revision: https://reviews.llvm.org/D67839 llvm-svn: 373002
* [Verifier] add invariant check for callbr | Nick Desaulniers | 2019-09-25 | 1 | -3/+4
Summary: The list of indirect labels should ALWAYS have their blockaddresses as argument operands to the callbr (but not necessarily the other way around). Add an invariant that checks this. The verifier catches a bad test case that was added recently in r368478. I think that was a simple mistake, and the test was made less strict in regards to the precise addresses (as those weren't specifically the point of the test). This invariant will be used to find a reported bug. Link: https://www.spinics.net/lists/arm-kernel/msg753473.html Link: https://github.com/ClangBuiltLinux/linux/issues/649 Reviewers: craig.topper, void, chandlerc Reviewed By: void Subscribers: ychen, lebedev.ri, javed.absar, kristof.beyls, hiraditya, llvm-commits, srhines Tags: #llvm Differential Revision: https://reviews.llvm.org/D67196 llvm-svn: 372923
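For illustration, a well-formed example under the new invariant: the blockaddress of the indirect destination also appears in the argument list::

  define void @foo() {
  entry:
    callbr void asm sideeffect "# branch to $0", "X"(i8* blockaddress(@foo, %indirect))
        to label %fallthrough [label %indirect]
  fallthrough:
    ret void
  indirect:
    ret void
  }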
* [LangRef] Clarify absence of rounding guarantees for fmuladd. | Florian Hahn | 2019-09-25 | 1 | -6/+6
During the review of D67434, it was recommended to make fmuladd's behavior more explicit. D67434 depends on this interpretation. Reviewers: efriedma, jfb, reames, scanon, lebedev.ri, spatel Reviewed By: spatel Differential Revision: https://reviews.llvm.org/D67552 llvm-svn: 372892
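For illustration, the clarified reading is that a call such as the one below may be lowered either as a single fused multiply-add or as separate multiply and add steps, with no guarantee about intermediate rounding (schematic sketch)::

  %r = call double @llvm.fmuladd.f64(double %a, double %b, double %c)

  ; legal lowering 1: fused, single rounding
  ;   %r = call double @llvm.fma.f64(double %a, double %b, double %c)
  ; legal lowering 2: unfused, rounded after the multiply and again after the add
  ;   %t = fmul double %a, %b
  ;   %r = fadd double %t, %c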
* [IR] allow fast-math-flags on phi of FP values (2nd try) | Sanjay Patel | 2019-09-25 | 1 | -1/+7
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The changes here are based on the corresponding diffs for allowing FMF on 'select': D61917 <https://reviews.llvm.org/D61917> As discussed there, we want to have fast-math-flags be a property of an FP value because the alternative (having them on things like fcmp) leads to logical inconsistency such as: https://bugs.llvm.org/show_bug.cgi?id=38086 The earlier patch for select made almost no practical difference because most unoptimized conditional code begins life as a phi (based on what I see in clang). Similarly, I don't expect this patch to do much on its own either because SimplifyCFG promptly drops the flags when converting to select on a minimal example like: https://bugs.llvm.org/show_bug.cgi?id=39535 But once we have this plumbing in place, we should be able to wire up the FMF propagation and start solving cases like that. The change to RecurrenceDescriptor::AddReductionVar() is required to prevent a regression in a LoopVectorize test. We are intersecting the FMF of any FPMathOperator there, so if a phi is not properly annotated, new math instructions may not be either. Once we fix the propagation in SimplifyCFG, it may be safe to remove that hack. Differential Revision: https://reviews.llvm.org/D67564 llvm-svn: 372878
* Revert [IR] allow fast-math-flags on phi of FP values | Sanjay Patel | 2019-09-25 | 1 | -7/+1
This reverts r372866 (git commit dec03223a97af0e4dfcb23da55c0f7f8c9b62d00) llvm-svn: 372868
* [IR] allow fast-math-flags on phi of FP values | Sanjay Patel | 2019-09-25 | 1 | -1/+7
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The changes here are based on the corresponding diffs for allowing FMF on 'select': D61917 As discussed there, we want to have fast-math-flags be a property of an FP value because the alternative (having them on things like fcmp) leads to logical inconsistency such as: https://bugs.llvm.org/show_bug.cgi?id=38086 The earlier patch for select made almost no practical difference because most unoptimized conditional code begins life as a phi (based on what I see in clang). Similarly, I don't expect this patch to do much on its own either because SimplifyCFG promptly drops the flags when converting to select on a minimal example like: https://bugs.llvm.org/show_bug.cgi?id=39535 But once we have this plumbing in place, we should be able to wire up the FMF propagation and start solving cases like that. The change to RecurrenceDescriptor::AddReductionVar() is required to prevent a regression in a LoopVectorize test. We are intersecting the FMF of any FPMathOperator there, so if a phi is not properly annotated, new math instructions may not be either. Once we fix the propagation in SimplifyCFG, it may be safe to remove that hack. Differential Revision: https://reviews.llvm.org/D67564 llvm-svn: 372866
* [SVE][Inline-Asm] Add constraints for SVE predicate registers | Kerry McLaughlin | 2019-09-16 | 1 | -0/+2
| | | | | | | | | | | | | | | | | | | Summary: Adds the following inline asm constraints for SVE: - Upl: One of the low eight SVE predicate registers, P0 to P7 inclusive - Upa: SVE predicate register with full range, P0 to P15 Reviewers: t.p.northover, sdesmalen, rovka, momchil.velikov, cameron.mcinally, greened, rengolin Reviewed By: rovka Subscribers: javed.absar, tschuett, rkruppe, psnobl, cfe-commits, llvm-commits Tags: #llvm Differential Revision: https://reviews.llvm.org/D66524 llvm-svn: 371967
* [FPEnv] Document that constrained FP intrinsics cannot be mixed with non-constrained | Kevin P. Neal | 2019-09-13 | 1 | -7/+16
Reviewed by: andrew.w.kaylor, cameron.mcinally, uweigand Approved by: andrew.w.kaylor Differential Revision: https://reviews.llvm.org/D67360 llvm-svn: 371888
* Fix a few spellos in docs. | Nico Weber | 2019-09-13 | 1 | -4/+4
(Trying to debug an incremental build thing on a bot...) llvm-svn: 371860
* [LangRef] add link for fma intrinsic | Sanjay Patel | 2019-09-11 | 1 | -1/+3
llvm-svn: 371615
* [LangRef] fix punctuation; NFC | Sanjay Patel | 2019-09-11 | 1 | -1/+1
llvm-svn: 371612
* LangRef: mention MSan's problem with speculative conditional branches. | Evgeniy Stepanov | 2019-09-09 | 1 | -0/+11
| | | | | | | | | | | | | | | | | | | Summary: This short blurb aims to disallow optimizations like we had to revert (under MSan) in https://reviews.llvm.org/D21165 https://bugs.llvm.org/show_bug.cgi?id=28054 https://reviews.llvm.org/D67205 Reviewers: vitalybuka, efriedma Subscribers: llvm-commits Tags: #llvm Differential Revision: https://reviews.llvm.org/D67244 llvm-svn: 371461
* [Intrinsic] Add the llvm.umul.fix.sat intrinsic | Bjorn Pettersson | 2019-09-07 | 1 | -0/+67
Summary: Add an intrinsic that takes 2 unsigned integers with the scale of them provided as the third argument and performs fixed point multiplication on them. The result is saturated and clamped between the largest and smallest representable values of the first 2 operands. This is a part of implementing fixed point arithmetic in clang where some of the more complex operations will be implemented as intrinsics. Patch by: leonardchan, bjope Reviewers: RKSimon, craig.topper, bevinh, leonardchan, lebedev.ri, spatel Reviewed By: leonardchan Subscribers: ychen, wuzish, nemanjai, MaskRay, jsji, jdoerfert, Ka-Ka, hiraditya, rjmccall, llvm-commits Tags: #llvm Differential Revision: https://reviews.llvm.org/D57836 llvm-svn: 371308
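For illustration, a sketch assuming the same operand layout as llvm.smul.fix (the third operand is the fixed-point scale)::

  declare i32 @llvm.umul.fix.sat.i32(i32, i32, i32)
  declare i8  @llvm.umul.fix.sat.i8(i8, i8, i32)

  ; 2.0 * 3.0 with scale 2: (8 * 12) >> 2 = 24, i.e. 6.0 -- no saturation needed.
  %r = call i32 @llvm.umul.fix.sat.i32(i32 8, i32 12, i32 2)
  ; 200 * 200 with scale 0 overflows i8, so the result saturates to 255.
  %s = call i8 @llvm.umul.fix.sat.i8(i8 200, i8 200, i32 0)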
* [SVE][Inline-Asm] Support for SVE asm operands | Kerry McLaughlin | 2019-09-02 | 1 | -2/+3
| | | | | | | | | | | | | | | | | | | | | | | | Summary: Adds the following inline asm constraints for SVE: - w: SVE vector register with full range, Z0 to Z31 - x: Restricted to registers Z0 to Z15 inclusive. - y: Restricted to registers Z0 to Z7 inclusive. This change also adds the "z" modifier to interpret a register as an SVE register. Not all of the bitconvert patterns added by this patch are used, but they have been included here for completeness. Reviewers: t.p.northover, sdesmalen, rovka, momchil.velikov, rengolin, cameron.mcinally, greened Reviewed By: sdesmalen Subscribers: javed.absar, tschuett, rkruppe, psnobl, cfe-commits, llvm-commits Tags: #llvm Differential Revision: https://reviews.llvm.org/D66302 llvm-svn: 370673
* [LangRef] Update saturating examples for llvm.smul.fix.sat. NFC | Bjorn Pettersson | 2019-08-31 | 1 | -3/+3
Some saturation examples for llvm.smul.fix.sat were not showing the correct result. I've adjusted the operands to make sure that we actually trigger overflow in those examples. llvm-svn: 370566
* [FPEnv] Add fptosi and fptoui constrained intrinsics. | Kevin P. Neal | 2019-08-28 | 1 | -0/+66
This implements constrained floating point intrinsics for FP to signed and unsigned integers. Quoting from D32319: The purpose of the constrained intrinsics is to force the optimizer to respect the restrictions that will be necessary to support things like the STDC FENV_ACCESS ON pragma without interfering with optimizations when these restrictions are not needed. Reviewed by: Andrew Kaylor, Craig Topper, Hal Finkel, Cameron McInally, Roman Lebedev, Kit Barton Approved by: Craig Topper Differential Revision: http://reviews.llvm.org/D63782 llvm-svn: 370228
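For illustration, a sketch of the new intrinsics (signatures as assumed here; the conversions carry only exception-behavior metadata, since their rounding is toward zero by definition)::

  declare i32 @llvm.experimental.constrained.fptosi.i32.f64(double, metadata)
  declare i64 @llvm.experimental.constrained.fptoui.i64.f32(float, metadata)

  %i = call i32 @llvm.experimental.constrained.fptosi.i32.f64(double %x,
                  metadata !"fpexcept.strict")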
* Debug Info: Support for DW_AT_export_symbols for anonymous structs | Shafik Yaghmour | 2019-08-23 | 1 | -0/+5
This implements the DWARF 5 feature described in: http://dwarfstd.org/ShowIssue.php?issue=141212.1 To support recognizing anonymous structs: struct A { struct { // Anonymous struct int y; }; } a; This patch adds a new (DI)flag to LLVM metadata: ExportSymbols Differential Revision: https://reviews.llvm.org/D66352 llvm-svn: 369781
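For illustration, a metadata sketch using the new flag (assuming the IR spelling DIFlagExportSymbols; !3/!4/!6 stand for the usual file, scope and elements nodes)::

  ; Anonymous struct whose members are exported into the enclosing scope.
  !5 = distinct !DICompositeType(tag: DW_TAG_structure_type, scope: !4, file: !3,
                                 line: 2, size: 32, flags: DIFlagExportSymbols,
                                 elements: !6)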
* Add ptrmask intrinsic | Florian Hahn | 2019-08-15 | 1 | -0/+36
This patch adds a ptrmask intrinsic which allows masking out bits of a pointer that must be zero when accessing it, because of ABI alignment requirements or a restriction of the meaningful bits of a pointer through the data layout. This avoids doing a ptrtoint/inttoptr round trip in some cases (e.g. tagged pointers) and allows us to not lose information about the underlying object. Reviewers: nlopes, efriedma, hfinkel, sanjoy, jdoerfert, aqjune Reviewed by: sanjoy, jdoerfert Differential Revision: https://reviews.llvm.org/D59065 llvm-svn: 368986
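For illustration, a minimal sketch (the type-mangled intrinsic name is an assumption based on the usual overloading scheme)::

  declare i8* @llvm.ptrmask.p0i8.i64(i8*, i64)

  ; Strip the low four tag bits without a ptrtoint/inttoptr round trip.
  %untagged = call i8* @llvm.ptrmask.p0i8.i64(i8* %tagged, i64 -16)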
* [LangRef] Remove opening [ that was missing a closing ] from ↵Craig Topper2019-08-141-3/+3
| | | | | | | | | call/callbr/invoke syntax. It looks like this bracket was added when the addrspace was added. before it. So I think it can jut be removed. llvm-svn: 368861
* Add llvm.licm.disable metadata | Tim Corringham | 2019-08-08 | 1 | -2/+16
For some targets the LICM pass can result in sub-optimal code in some cases where it would be better not to run the pass, but it isn't always possible to suppress the transformations heuristically. Where the front-end has insight into such cases it is beneficial to attach loop metadata to disable the pass - this change adds the llvm.licm.disable metadata to enable that. Differential Revision: https://reviews.llvm.org/D64557 llvm-svn: 368296
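For illustration, a sketch of attaching the new metadata to a loop's latch branch::

  for.body:
    ; ... loop body ...
    br i1 %exitcond, label %for.end, label %for.body, !llvm.loop !0

  !0 = distinct !{!0, !1}
  !1 = !{!"llvm.licm.disable"}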
* [RISCV][NFC] Document RISC-V-specific assembly constraints | Sam Elliott | 2019-08-07 | 1 | -0/+11
llvm-svn: 368167
* [Attributor] Deduce the "no-return" attribute for functionsJohannes Doerfert2019-08-051-2/+3
| | | | | | | | | | | | | | | A function is "no-return" if we never reach a return instruction, either because there are none or the ones that exist are dead. Test have been adjusted: - either noreturn was added, or - noreturn was avoided by modifying the code. The new noreturn_{sync,async} test make sure we do handle invoke instructions with a noreturn (and potentially nowunwind) callee correctly, even in the presence of potential asynchronous exceptions. llvm-svn: 367948
* [BPF] annotate DIType metadata for builtin preserve_array_access_index() | Yonghong Song | 2019-08-02 | 1 | -0/+4
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Previously, debuginfo types are annotated to IR builtin preserve_struct_access_index() and preserve_union_access_index(), but not preserve_array_access_index(). The debug info is useful to identify the root type name which later will be used for type comparison. For user access without explicit type conversions, the previous scheme works as we can ignore intermediate compiler generated type conversions (e.g., from union types to union members) and still generate correct access index string. The issue comes with user explicit type conversions, e.g., converting an array to a structure like below: struct t { int a; char b[40]; }; struct p { int c; int d; }; struct t *var = ...; ... __builtin_preserve_access_index(&(((struct p *)&(var->b[0]))->d)) ... Although BPF backend can derive the type of &(var->b[0]), explicit type annotation make checking more consistent and less error prone. Another benefit is for multiple dimension array handling. For example, struct p { int c; int d; } g[8][9][10]; ... __builtin_preserve_access_index(&g[2][3][4].d) ... It would be possible to calculate the number of "struct p"'s before accessing its member "d" if array debug info is available as it contains each dimension range. This patch enables to annotate IR builtin preserve_array_access_index() with proper debuginfo type. The unit test case and language reference is updated as well. Signed-off-by: Yonghong Song <yhs@fb.com> Differential Revision: https://reviews.llvm.org/D65664 llvm-svn: 367724
* Reland "[DwarfDebug] Dump call site debug info"Djordje Todorovic2019-07-311-0/+4
| | | | | | | | | The build failure found after the rL365467 has been resolved. Differential Revision: https://reviews.llvm.org/D60716 llvm-svn: 367446
* [Clang] New loop pragma vectorize_predicate | Sjoerd Meijer | 2019-07-25 | 1 | -0/+15
This adds a new vectorize predication loop hint: #pragma clang loop vectorize_predicate(enable) that can be used to indicate to the vectoriser that all (load/store) instructions should be predicated (masked). This allows, for example, folding of the remainder loop into the main loop. This patch will be followed up with D64916 and D65197. The former is a refactoring in the loopvectorizer and the groundwork to make tail loop folding a more general concept, and in the latter the actual tail loop folding transformation will be implemented. Differential Revision: https://reviews.llvm.org/D64744 llvm-svn: 366989
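The pragma is expected to map onto loop metadata; a sketch, assuming the metadata spelling llvm.loop.vectorize.predicate.enable::

  br i1 %exitcond, label %for.end, label %for.body, !llvm.loop !0

  !0 = distinct !{!0, !1}
  !1 = !{!"llvm.loop.vectorize.predicate.enable", i1 1}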
* [IR][Verifier] Allow IntToPtrInst to be !dereferenceable | Ryan Taylor | 2019-07-23 | 1 | -15/+42
Summary: Allow IntToPtrInst to carry the !dereferenceable metadata tag. This is valid since !dereferenceable can only be applied to pointer type values. Change-Id: If8a6e3c616f073d51eaff52ab74535c29ed497b4 Subscribers: llvm-commits Tags: #llvm Differential Revision: https://reviews.llvm.org/D64954 llvm-svn: 366826
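For illustration, a minimal sketch of the now-permitted attachment::

  %p = inttoptr i64 %addr to i32*, !dereferenceable !0
  %v = load i32, i32* %p

  !0 = !{i64 4}   ; known dereferenceable for at least 4 bytes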
* ARM MTE stack sanitizer. | Evgeniy Stepanov | 2019-07-15 | 1 | -0/+4
Add "memtag" sanitizer that detects and mitigates stack memory issues using armv8.5 Memory Tagging Extension. It is similar in principle to HWASan, which is a software implementation of the same idea, but there are enough differences to warrant a new sanitizer type IMHO. It is also expected to have very different performance properties. The new sanitizer does not have a runtime library (it may grow one later, along with a "debugging" mode). Similar to SafeStack and StackProtector, the instrumentation pass (in a follow up change) will be inserted in all cases, but will only affect functions marked with the new sanitize_memtag attribute. Reviewers: pcc, hctim, vitalybuka, ostannard Subscribers: srhines, mehdi_amini, javed.absar, kristof.beyls, hiraditya, cryptoad, steven_wu, dexonsmith, cfe-commits, llvm-commits Tags: #clang, #llvm Differential Revision: https://reviews.llvm.org/D64169 llvm-svn: 366123
* [BPF] add unit tests for preserve_{array,union,struct}_access_index intrinsics | Yonghong Song | 2019-07-15 | 1 | -3/+9
| | | | | | | | | | | | | | | | | | | This is a followup patch for https://reviews.llvm.org/D61810/new/, which adds new intrinsics preserve_{array,union,struct}_access_index. Currently, only BPF backend utilizes preserve_{array,union,struct}_access_index intrinsics, so all tests are compiled with BPF target. https://reviews.llvm.org/D61524 already added some tests for these intrinsics, but some of them pretty complex. This patch added a few unit test cases focusing on individual intrinsic functions. Also made a few clarification on language reference for these intrinsics. Differential Revision: https://reviews.llvm.org/D64606 llvm-svn: 366038
* Revert "[DwarfDebug] Dump call site debug info"Djordje Todorovic2019-07-121-3/+0
| | | | | | | | A build failure was found on the SystemZ platform. This reverts commit 9e7e73578e54cd22b3c7af4b54274d743b6607cc. llvm-svn: 365886
* [Attributor] Deduce "nosync" function attribute.Stefan Stipanovic2019-07-111-0/+10
| | | | | | | | | | | | | | Introduce and deduce "nosync" function attribute to indicate that a function does not synchronize with another thread in a way that other thread might free memory. Reviewers: jdoerfert, jfb, nhaehnle, arsenm Subscribers: wdng, hfinkel, nhaenhle, mehdi_amini, steven_wu, dexonsmith, arsenm, uenoku, hiraditya, jfb, llvm-commits Differential Revision: https://reviews.llvm.org/D62766 llvm-svn: 365830
* [DwarfDebug] Dump call site debug info | Djordje Todorovic | 2019-07-09 | 1 | -0/+3
| | | | | | | | | | | | | | | | | | | Dump the DWARF information about call sites and call site parameters into debug info sections. The patch also provides an interface for the interpretation of instructions that could load values of a call site parameters in order to generate DWARF about the call site parameters. ([13/13] Introduce the debug entry values.) Co-authored-by: Ananth Sowda <asowda@cisco.com> Co-authored-by: Nikola Prica <nikola.prica@rt-rk.com> Co-authored-by: Ivan Baev <ibaev@cisco.com> Differential Revision: https://reviews.llvm.org/D60716 llvm-svn: 365467
* [BPF] add new intrinsics preserve_{array,union,struct}_access_index | Yonghong Song | 2019-07-09 | 1 | -0/+103
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | For background of BPF CO-RE project, please refer to http://vger.kernel.org/bpfconf2019.html In summary, BPF CO-RE intends to compile bpf programs adjustable on struct/union layout change so the same program can run on multiple kernels with adjustment before loading based on native kernel structures. In order to do this, we need keep track of GEP(getelementptr) instruction base and result debuginfo types, so we can adjust on the host based on kernel BTF info. Capturing such information as an IR optimization is hard as various optimization may have tweaked GEP and also union is replaced by structure it is impossible to track fieldindex for union member accesses. Three intrinsic functions, preserve_{array,union,struct}_access_index, are introducted. addr = preserve_array_access_index(base, index, dimension) addr = preserve_union_access_index(base, di_index) addr = preserve_struct_access_index(base, gep_index, di_index) here, base: the base pointer for the array/union/struct access. index: the last access index for array, the same for IR/DebugInfo layout. dimension: the array dimension. gep_index: the access index based on IR layout. di_index: the access index based on user/debuginfo types. For example, for the following example, $ cat test.c struct sk_buff { int i; int b1:1; int b2:2; union { struct { int o1; int o2; } o; struct { char flags; char dev_id; } dev; int netid; } u[10]; }; static int (*bpf_probe_read)(void *dst, int size, const void *unsafe_ptr) = (void *) 4; #define _(x) (__builtin_preserve_access_index(x)) int bpf_prog(struct sk_buff *ctx) { char dev_id; bpf_probe_read(&dev_id, sizeof(char), _(&ctx->u[5].dev.dev_id)); return dev_id; } $ clang -target bpf -O2 -g -emit-llvm -S -mllvm -print-before-all \ test.c >& log The generated IR looks like below: ... 
define dso_local i32 @bpf_prog(%struct.sk_buff*) #0 !dbg !15 { %2 = alloca %struct.sk_buff*, align 8 %3 = alloca i8, align 1 store %struct.sk_buff* %0, %struct.sk_buff** %2, align 8, !tbaa !45 call void @llvm.dbg.declare(metadata %struct.sk_buff** %2, metadata !43, metadata !DIExpression()), !dbg !49 call void @llvm.lifetime.start.p0i8(i64 1, i8* %3) #4, !dbg !50 call void @llvm.dbg.declare(metadata i8* %3, metadata !44, metadata !DIExpression()), !dbg !51 %4 = load i32 (i8*, i32, i8*)*, i32 (i8*, i32, i8*)** @bpf_probe_read, align 8, !dbg !52, !tbaa !45 %5 = load %struct.sk_buff*, %struct.sk_buff** %2, align 8, !dbg !53, !tbaa !45 %6 = call [10 x %union.anon]* @llvm.preserve.struct.access.index.p0a10s_union.anons.p0s_struct.sk_buffs( %struct.sk_buff* %5, i32 2, i32 3), !dbg !53, !llvm.preserve.access.index !19 %7 = call %union.anon* @llvm.preserve.array.access.index.p0s_union.anons.p0a10s_union.anons( [10 x %union.anon]* %6, i32 1, i32 5), !dbg !53 %8 = call %union.anon* @llvm.preserve.union.access.index.p0s_union.anons.p0s_union.anons( %union.anon* %7, i32 1), !dbg !53, !llvm.preserve.access.index !26 %9 = bitcast %union.anon* %8 to %struct.anon.0*, !dbg !53 %10 = call i8* @llvm.preserve.struct.access.index.p0i8.p0s_struct.anon.0s( %struct.anon.0* %9, i32 1, i32 1), !dbg !53, !llvm.preserve.access.index !34 %11 = call i32 %4(i8* %3, i32 1, i8* %10), !dbg !52 %12 = load i8, i8* %3, align 1, !dbg !54, !tbaa !55 %13 = sext i8 %12 to i32, !dbg !54 call void @llvm.lifetime.end.p0i8(i64 1, i8* %3) #4, !dbg !56 ret i32 %13, !dbg !57 } !19 = distinct !DICompositeType(tag: DW_TAG_structure_type, name: "sk_buff", file: !3, line: 1, size: 704, elements: !20) !26 = distinct !DICompositeType(tag: DW_TAG_union_type, scope: !19, file: !3, line: 5, size: 64, elements: !27) !34 = distinct !DICompositeType(tag: DW_TAG_structure_type, scope: !26, file: !3, line: 10, size: 16, elements: !35) Note that @llvm.preserve.{struct,union}.access.index calls have metadata llvm.preserve.access.index attached to instructions to provide struct/union debuginfo type information. For &ctx->u[5].dev.dev_id, . The "%6 = ..." represents struct member "u" with index 2 for IR layout and index 3 for DI layout. . The "%7 = ..." represents array subscript "5". . The "%8 = ..." represents union member "dev" with index 1 for DI layout. . The "%10 = ..." represents struct member "dev_id" with index 1 for both IR and DI layout. Basically, traversing the use-def chain recursively for the 3rd argument of bpf_probe_read() and examining all preserve_*_access_index calls, the debuginfo struct/union/array access index can be achieved. The intrinsics also contain enough information to regenerate codes for IR layout. For array and structure intrinsics, the proper GEP can be constructed. For union intrinsics, replacing all uses of "addr" with "base" should be enough. The test case ThinLTO/X86/lazyload_metadata.ll is adjusted to reflect the new addition of the metadata. Signed-off-by: Yonghong Song <yhs@fb.com> Differential Revision: https://reviews.llvm.org/D61810 llvm-svn: 365423
* Revert "[BPF] add new intrinsics preserve_{array,union,struct}_access_index"Yonghong Song2019-07-081-103/+0
| | | | | | | | | This reverts commit r365352. Test ThinLTO/X86/lazyload_metadata.ll failed. Revert the commit and at the same time to fix the issue. llvm-svn: 365360
* [BPF] add new intrinsics preserve_{array,union,struct}_access_index | Yonghong Song | 2019-07-08 | 1 | -0/+103
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | For background of BPF CO-RE project, please refer to http://vger.kernel.org/bpfconf2019.html In summary, BPF CO-RE intends to compile bpf programs adjustable on struct/union layout change so the same program can run on multiple kernels with adjustment before loading based on native kernel structures. In order to do this, we need keep track of GEP(getelementptr) instruction base and result debuginfo types, so we can adjust on the host based on kernel BTF info. Capturing such information as an IR optimization is hard as various optimization may have tweaked GEP and also union is replaced by structure it is impossible to track fieldindex for union member accesses. Three intrinsic functions, preserve_{array,union,struct}_access_index, are introducted. addr = preserve_array_access_index(base, index, dimension) addr = preserve_union_access_index(base, di_index) addr = preserve_struct_access_index(base, gep_index, di_index) here, base: the base pointer for the array/union/struct access. index: the last access index for array, the same for IR/DebugInfo layout. dimension: the array dimension. gep_index: the access index based on IR layout. di_index: the access index based on user/debuginfo types. For example, for the following example, $ cat test.c struct sk_buff { int i; int b1:1; int b2:2; union { struct { int o1; int o2; } o; struct { char flags; char dev_id; } dev; int netid; } u[10]; }; static int (*bpf_probe_read)(void *dst, int size, const void *unsafe_ptr) = (void *) 4; #define _(x) (__builtin_preserve_access_index(x)) int bpf_prog(struct sk_buff *ctx) { char dev_id; bpf_probe_read(&dev_id, sizeof(char), _(&ctx->u[5].dev.dev_id)); return dev_id; } $ clang -target bpf -O2 -g -emit-llvm -S -mllvm -print-before-all \ test.c >& log The generated IR looks like below: ... 
define dso_local i32 @bpf_prog(%struct.sk_buff*) #0 !dbg !15 { %2 = alloca %struct.sk_buff*, align 8 %3 = alloca i8, align 1 store %struct.sk_buff* %0, %struct.sk_buff** %2, align 8, !tbaa !45 call void @llvm.dbg.declare(metadata %struct.sk_buff** %2, metadata !43, metadata !DIExpression()), !dbg !49 call void @llvm.lifetime.start.p0i8(i64 1, i8* %3) #4, !dbg !50 call void @llvm.dbg.declare(metadata i8* %3, metadata !44, metadata !DIExpression()), !dbg !51 %4 = load i32 (i8*, i32, i8*)*, i32 (i8*, i32, i8*)** @bpf_probe_read, align 8, !dbg !52, !tbaa !45 %5 = load %struct.sk_buff*, %struct.sk_buff** %2, align 8, !dbg !53, !tbaa !45 %6 = call [10 x %union.anon]* @llvm.preserve.struct.access.index.p0a10s_union.anons.p0s_struct.sk_buffs( %struct.sk_buff* %5, i32 2, i32 3), !dbg !53, !llvm.preserve.access.index !19 %7 = call %union.anon* @llvm.preserve.array.access.index.p0s_union.anons.p0a10s_union.anons( [10 x %union.anon]* %6, i32 1, i32 5), !dbg !53 %8 = call %union.anon* @llvm.preserve.union.access.index.p0s_union.anons.p0s_union.anons( %union.anon* %7, i32 1), !dbg !53, !llvm.preserve.access.index !26 %9 = bitcast %union.anon* %8 to %struct.anon.0*, !dbg !53 %10 = call i8* @llvm.preserve.struct.access.index.p0i8.p0s_struct.anon.0s( %struct.anon.0* %9, i32 1, i32 1), !dbg !53, !llvm.preserve.access.index !34 %11 = call i32 %4(i8* %3, i32 1, i8* %10), !dbg !52 %12 = load i8, i8* %3, align 1, !dbg !54, !tbaa !55 %13 = sext i8 %12 to i32, !dbg !54 call void @llvm.lifetime.end.p0i8(i64 1, i8* %3) #4, !dbg !56 ret i32 %13, !dbg !57 } !19 = distinct !DICompositeType(tag: DW_TAG_structure_type, name: "sk_buff", file: !3, line: 1, size: 704, elements: !20) !26 = distinct !DICompositeType(tag: DW_TAG_union_type, scope: !19, file: !3, line: 5, size: 64, elements: !27) !34 = distinct !DICompositeType(tag: DW_TAG_structure_type, scope: !26, file: !3, line: 10, size: 16, elements: !35) Note that @llvm.preserve.{struct,union}.access.index calls have metadata llvm.preserve.access.index attached to instructions to provide struct/union debuginfo type information. For &ctx->u[5].dev.dev_id, . The "%6 = ..." represents struct member "u" with index 2 for IR layout and index 3 for DI layout. . The "%7 = ..." represents array subscript "5". . The "%8 = ..." represents union member "dev" with index 1 for DI layout. . The "%10 = ..." represents struct member "dev_id" with index 1 for both IR and DI layout. Basically, traversing the use-def chain recursively for the 3rd argument of bpf_probe_read() and examining all preserve_*_access_index calls, the debuginfo struct/union/array access index can be achieved. The intrinsics also contain enough information to regenerate codes for IR layout. For array and structure intrinsics, the proper GEP can be constructed. For union intrinsics, replacing all uses of "addr" with "base" should be enough. Signed-off-by: Yonghong Song <yhs@fb.com> Differential Revision: https://reviews.llvm.org/D61810 llvm-svn: 365352
* Add, and infer, a nofree function attribute | Brian Homerding | 2019-07-08 | 1 | -0/+8
This patch adds a function attribute, nofree, to indicate that a function does not, directly or indirectly, call a memory-deallocation function (e.g., free, C++'s operator delete). Reviewers: jdoerfert Differential Revision: https://reviews.llvm.org/D49165 llvm-svn: 365336
* Scalable Vector IR Type with further LTO fixes | Graham Hunter | 2019-07-05 | 1 | -15/+39
Reintroduces the scalable vector IR type from D32530, after it was reverted a couple of times due to increasing chromium LTO build times. This latest incarnation removes the walk over aggregate types from the verifier entirely, in favor of rejecting scalable vectors in the isValidElementType methods in ArrayType and StructType. This removes the 70% degradation observed with the second repro tarball from PR42210. Reviewers: thakis, hans, rengolin, sdesmalen Reviewed By: sdesmalen Differential Revision: https://reviews.llvm.org/D64079 llvm-svn: 365203
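For illustration, a minimal sketch of the type in use; the element count is a multiple of 4 scaled by the hardware-dependent runtime quantity vscale::

  define <vscale x 4 x i32> @add_nxv4i32(<vscale x 4 x i32> %a, <vscale x 4 x i32> %b) {
    %sum = add <vscale x 4 x i32> %a, %b
    ret <vscale x 4 x i32> %sum
  }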
* [LangRef] Clarify codegen expectations for intrinsics with fp/integer-only overloads. | Amara Emerson | 2019-06-27 | 1 | -0/+10
This change is a result of discussions on list: "GlobalISel: Ambiguous intrinsic semantics problem" Differential Revision: https://reviews.llvm.org/D59657 llvm-svn: 364610
* [Attr] Add "willreturn" function attributeJohannes Doerfert2019-06-271-0/+7
| | | | | | | | | | | | | | | | | | | | | | This patch introduces a new function attribute, willreturn, to indicate that a call of this function will either exhibit undefined behavior or comes back and continues execution at a point in the existing call stack that includes the current invocation. This attribute guarantees that the function does not have any endless loops, endless recursion, or terminating functions like abort or exit. Patch by Hideto Ueno (@uenoku) Reviewers: jdoerfert Subscribers: mehdi_amini, hiraditya, steven_wu, dexonsmith, lebedev.ri, llvm-commits Tags: #llvm Differential Revision: https://reviews.llvm.org/D62801 llvm-svn: 364555
* Revert r363658 "[SVE][IR] Scalable Vector IR Type with pr42210 fix" | Hans Wennborg | 2019-06-27 | 1 | -39/+15
| | | | | | | | | | | | | | | | | | | | | | | | We saw a 70% ThinLTO link time increase in Chromium for Android, see crbug.com/978817. Sounds like more of PR42210. > Recommit of D32530 with a few small changes: > - Stopped recursively walking through aggregates in > the verifier, so that we don't impose too much > overhead on large modules under LTO (see PR42210). > - Changed tests to match; the errors are slightly > different since they only report the array or > struct that actually contains a scalable vector, > rather than all aggregates which contain one in > a nested member. > - Corrected an older comment > > Reviewers: thakis, rengolin, sdesmalen > > Reviewed By: sdesmalen > > Differential Revision: https://reviews.llvm.org/D63321 llvm-svn: 364543