| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
| |
Reviewers: sylvestre.ledru, kcc
Reviewed By: sylvestre.ledru
Differential Revision: https://reviews.llvm.org/D66792
llvm-svn: 370035
|
|
|
|
|
|
|
|
|
| |
-dA was in the d_group, which is a preprocessor state dumping group.
However, -dA is a debug flag that causes verbose assembly output. It was
already implemented to do the same thing as -fverbose-asm, so make it just
an alias.
llvm-svn: 369926
|
|
|
|
|
|
| |
Differential Revision: https://reviews.llvm.org/D64418
llvm-svn: 369749
|
|
|
|
|
|
| |
This fixes some minor grammatical issues I noticed when reading the docs, and changes the recommended feature testing approach to use __has_attribute instead of __has_extension.
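For illustration (a minimal sketch, not part of this commit; the macro and
function names are made up), the recommended feature test looks like:
#if defined(__has_attribute)
# if __has_attribute(warn_unused_result)
#  define MY_WARN_UNUSED __attribute__((warn_unused_result))
# endif
#endif
#ifndef MY_WARN_UNUSED
# define MY_WARN_UNUSED
#endif
MY_WARN_UNUSED int parse_config(const char *path);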
llvm-svn: 369687
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
update 3 or newer"
This broke compiling some ASan tests with newer versions of MSVC/the Win
SDK, see https://crbug.com/996675
> MSVC 2017 update 3 (_MSC_VER 1911) enables /Zc:twoPhase by default, and
> so should clang-cl:
> https://docs.microsoft.com/en-us/cpp/build/reference/zc-twophase
>
> clang-cl takes the MSVC version it emulates from the -fmsc-version flag,
> or if that's not passed it tries to check what the installed version of
> MSVC is and uses that, and failing that it uses a default version that's
> currently 1911. So this changes the default if no -fmsc-version flag is
> passed and no installed MSVC is detected. (It also changes the default
> if -fmsc-version is passed or MSVC is detected, and either indicates
> _MSC_VER >= 1911.)
>
> As mentioned in the MSDN article, the Windows SDK header files in
> version 10.0.15063.0 (Creators Update or Redstone 2) and earlier
> versions do not work correctly with /Zc:twoPhase. If you need to use
> these old SDKs with a new clang-cl, explicitly pass /Zc:twoPhase- to get
> the old behavior.
>
> Fixes PR43032.
>
> Differential Revision: https://reviews.llvm.org/D66394
llvm-svn: 369647
|
|
|
|
|
|
|
| |
This looks like a combination of a copy-and-paste (and paste, and paste)
error and a commit made without first reviewing the diff.
llvm-svn: 369496
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
MSVC 2017 update 3 (_MSC_VER 1911) enables /Zc:twoPhase by default, and
so should clang-cl:
https://docs.microsoft.com/en-us/cpp/build/reference/zc-twophase
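For illustration only (an assumed example, not from this commit), code like
the following is rejected under conforming two-phase lookup but accepted by
the legacy behaviour:
template <typename T> void call_g() {
  // Dependent call: with /Zc:twoPhase only declarations visible here plus
  // ADL at the point of instantiation are considered, so this is an error;
  // the legacy behaviour defers all lookup and finds g() declared below.
  g(T());
}
void g(int) {}
int main() { call_g<int>(); }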
clang-cl takes the MSVC version it emulates from the -fmsc-version flag,
or if that's not passed it tries to check what the installed version of
MSVC is and uses that, and failing that it uses a default version that's
currently 1911. So this changes the default if no -fmsc-version flag is
passed and no installed MSVC is detected. (It also changes the default
if -fmsc-version is passed or MSVC is detected, and either indicates
_MSC_VER >= 1911.)
As mentioned in the MSDN article, the Windows SDK header files in
version 10.0.15063.0 (Creators Update or Redstone 2) and earlier
versions do not work correctly with /Zc:twoPhase. If you need to use
these old SDKs with a new clang-cl, explicitly pass /Zc:twoPhase- to get
the old behavior.
Fixes PR43032.
Differential Revision: https://reviews.llvm.org/D66394
llvm-svn: 369402
|
|
|
|
|
|
|
|
|
|
| |
Also fixes the documentation for SpaceBeforeAssignmentOperators.
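For reference, the option controls formatting like the following (an
illustrative sketch, not taken from the patch):
int a = 5;  // SpaceBeforeAssignmentOperators: true (the default)
int b= 5;   // SpaceBeforeAssignmentOperators: false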
See discussions at https://reviews.llvm.org/D66332
Differential Revision: https://reviews.llvm.org/D66384
llvm-svn: 369214
|
|
|
|
| |
llvm-svn: 369161
|
|
|
|
|
|
| |
Test commit after obtaining commit access.
llvm-svn: 369148
|
|
|
|
| |
llvm-svn: 369099
|
|
|
|
|
|
| |
Differential Revision: https://reviews.llvm.org/D60281
llvm-svn: 368979
|
|
|
|
|
|
| |
This patch adds a line in the release notes about the new clang-cpp library and the CMake option to force clang to link against it.
llvm-svn: 368874
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Summary:
Previously __has_builtin(__builtin_*) would return false for
__builtin_*s that we modeled as keywords rather than as functions
(because they take type arguments). With this patch, all builtins
that are called with function-call-like syntax return true from
__has_builtin (covering __builtin_* and also the __is_* and __has_* type
traits and the handful of similar builtins without such a prefix).
Update the documentation on __has_builtin and on type traits to match.
While doing this I noticed the type trait documentation was out of date
and incomplete; that's fixed here too.
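For example (a minimal sketch, assuming the __is_trivially_copyable type-trait
builtin), the following now takes the guarded branch:
#if __has_builtin(__is_trivially_copyable)
// __has_builtin used to return false here because the trait is modelled as a
// keyword rather than as a function.
static_assert(__is_trivially_copyable(int), "int should be trivially copyable");
#endif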
Reviewers: aaron.ballman
Subscribers: jfb, kristina, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D66100
llvm-svn: 368785
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Summary:
This change updates `isDerivedFrom` to support Objective-C classes by
converting it to a polymorphic matcher.
Notes:
The matching behavior for Objective-C classes is modeled to match the
behavior of `isDerivedFrom` with C++ classes. To that effect,
`isDerivedFrom` matches aliased types of derived Objective-C classes,
including compatibility aliases. To achieve this, the AST visitor has
been updated to map compatibility aliases to their underlying
Objective-C class.
`isSameOrDerivedFrom` also provides similar behaviors for C++ and
Objective-C classes. The behavior that
`cxxRecordDecl(isSameOrDerivedFrom("X"))` does not match
`class Y {}; typedef Y X;` is mirrored for Objective-C in that
`objcInterfaceDecl(isSameOrDerivedFrom("X"))` does not match either
`@interface Y @end typedef Y X;` or
`@interface Y @end @compatibility_alias X Y;`.
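For example (a small sketch with assumed class names, not from the patch):
using namespace clang::ast_matchers;
// Matches Objective-C interfaces derived, directly or transitively, from
// NSObject, including derivation spelled through a compatibility alias.
auto ObjCMatcher = objcInterfaceDecl(isDerivedFrom("NSObject"));
// The same polymorphic matcher continues to work for C++ classes:
auto CxxMatcher = cxxRecordDecl(isDerivedFrom("Base"));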
Test Notes:
Ran clang unit tests.
Reviewers: aaron.ballman, jordan_rose, rjmccall, klimek, alexfh, gribozavr
Reviewed By: aaron.ballman, gribozavr
Subscribers: cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D60543
llvm-svn: 368632
|
|
|
|
|
|
| |
The D65064, D64635 and D64638 patches solve the issue with macro expansion.
llvm-svn: 368562
|
|
|
|
|
|
|
|
| |
See PR40840
Differential Revision: https://reviews.llvm.org/D66059
llvm-svn: 368539
|
|
|
|
|
|
|
|
| |
See PR40840
Differential Revision: https://reviews.llvm.org/D65925
llvm-svn: 368507
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The default behavior of Clang's indirect function call checker will replace
the address of each CFI-checked function in the output file's symbol table
with the address of a jump table entry which will pass CFI checks. We refer
to this as making the jump table `canonical`. This property allows code that
was not compiled with ``-fsanitize=cfi-icall`` to take a CFI-valid address
of a function, but it comes with a couple of caveats that are especially
relevant for users of cross-DSO CFI:
- There is a performance and code size overhead associated with each
exported function, because each such function must have an associated
jump table entry, which must be emitted even in the common case where the
function is never address-taken anywhere in the program, and must be used
even for direct calls between DSOs, in addition to the PLT overhead.
- There is no good way to take a CFI-valid address of a function written in
assembly or a language not supported by Clang. The reason is that the code
generator would need to insert a jump table in order to form a CFI-valid
address for assembly functions, but there is no way in general for the
code generator to determine the language of the function. This may be
possible with LTO in the intra-DSO case, but in the cross-DSO case the only
information available is the function declaration. One possible solution
is to add a C wrapper for each assembly function, but these wrappers can
present a significant maintenance burden for heavy users of assembly in
addition to adding runtime overhead.
For these reasons, we provide the option of making the jump table non-canonical
with the flag ``-fno-sanitize-cfi-canonical-jump-tables``. When the jump
table is made non-canonical, symbol table entries point directly to the
function body. Any instances of a function's address being taken in C will
be replaced with a jump table address.
This scheme does have its own caveats, however. It does end up breaking
function address equality more aggressively than the default behavior,
especially in cross-DSO mode which normally preserves function address
equality entirely.
Furthermore, it is occasionally necessary for code not compiled with
``-fsanitize=cfi-icall`` to take a function address that is valid
for CFI. For example, this is necessary when a function's address
is taken by assembly code and then called by CFI-checking C code. The
``__attribute__((cfi_canonical_jump_table))`` attribute may be used to make
the jump table entry of a specific function canonical so that the external
code will end up taking an address for the function that will pass CFI checks.
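A minimal sketch (function name assumed for illustration):
// The address of this function is taken by hand-written assembly and later
// called indirectly from CFI-checked C code, so keep its jump table entry
// canonical even under -fno-sanitize-cfi-canonical-jump-tables.
__attribute__((cfi_canonical_jump_table)) void asm_callback(int status) {
  /* ... */
}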
Fixes PR41972.
Differential Revision: https://reviews.llvm.org/D65629
llvm-svn: 368495
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
`ExprWithCleanups`.
Summary:
The `ExprWithCleanups` node is added to the AST along with the elidable
CXXConstructExpr. If it is the outermost node of the node being matched, ignore
it as well.
Reviewers: gribozavr
Subscribers: cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D65944
llvm-svn: 368319
|
|
|
|
|
|
| |
Also updates corresponding html doc.
llvm-svn: 368188
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Summary:
This document includes the description of the ASTImporter from the user/client
perspective.
A subsequent patch will describe the development internals.
Reviewers: a_sidorin, shafik, gamesh411, balazske, a.sidorin
Subscribers: rnkovacs, dkrupp, arphaman, Szelethus, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D65573
llvm-svn: 368009
|
|
|
|
|
|
| |
The bots are sad that they're not documented.
llvm-svn: 367918
|
|
|
|
|
|
| |
The bots are sad that they're not documented.
llvm-svn: 367914
|
|
|
|
|
|
|
|
|
|
|
|
| |
Reviewers: aaron.ballman
Subscribers: jkorous, dexonsmith, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D65706
llvm-svn: 367889
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Summary: The minimum supported compilers all have alignas, and we don't use LLVM_ALIGNAS anywhere anymore. This also removes an MSVC diagnostic which, according to the comment above, isn't relevant anymore.
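For illustration (hypothetical type), the old macro and its replacement:
// Before: LLVM_ALIGNAS(8) struct CacheLinePad { char Bytes[8]; };
// After: the standard keyword is used directly.
struct alignas(8) CacheLinePad { char Bytes[8]; };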
Reviewers: rnk
Subscribers: mgorny, jkorous, dexonsmith, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D65458
llvm-svn: 367383
|
|
|
|
| |
llvm-svn: 367270
|
|
|
|
| |
llvm-svn: 367219
|
|
|
|
|
|
| |
Differential Revision: https://reviews.llvm.org/D65092
llvm-svn: 367010
|
|
|
|
|
|
|
|
|
|
|
|
| |
Rename lang mode flag to -cl-std=clc++/-cl-std=CLC++
or -std=clc++/-std=CLC++.
This aligns with the OpenCL C convention and removes ambiguity
with OpenCL C++.
Differential Revision: https://reviews.llvm.org/D65102
llvm-svn: 367008
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This adds a new vectorize predication loop hint:
#pragma clang loop vectorize_predicate(enable)
that can be used to indicate to the vectoriser that all (load/store)
instructions should be predicated (masked). This allows, for example, folding
of the remainder loop into the main loop.
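A minimal sketch of the new hint (names assumed for illustration):
void scale(float *a, float k, int n) {
  // Ask the vectoriser to predicate (mask) all loads/stores in this loop so
  // that the scalar remainder loop can be folded into the main vector loop.
#pragma clang loop vectorize_predicate(enable)
  for (int i = 0; i < n; ++i)
    a[i] *= k;
}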
This patch will be followed up with D64916 and D65197. The former is a
refactoring in the loopvectorizer and the groundwork to make tail loop folding
a more general concept, and in the latter the actual tail loop folding
transformation will be implemented.
Differential Revision: https://reviews.llvm.org/D64744
llvm-svn: 366989
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Summary:
Move the `-ftime-trace-granularity` option to the frontend options. Without
this patch, the option showed up in the help for any tool that links libSupport.
Reviewers: sammccall
Subscribers: hiraditya, cfe-commits, llvm-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D65202
llvm-svn: 366911
|
|
|
|
|
|
|
|
|
|
|
|
| |
Reviewers: dkrupp, a_sidorin, Szelethus, NoQ
Subscribers: whisperity, xazax.hun, baloghadamsoftware, szepet, rnkovacs, a.sidorin, mikhail.ramalho, donat.nagy, gamesh411, Charusso, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D64801
llvm-svn: 366439
|
|
|
|
|
|
| |
and clear the release notes.
llvm-svn: 366427
|
|
|
|
|
|
|
|
|
| |
Added documentation of C++ for OpenCL mode into Clang
User Manual and Language Extensions document.
Differential Revision: https://reviews.llvm.org/D64418
llvm-svn: 366351
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
i.e., recent 5745eccef54ddd3caca278d1d292a88b2281528b:
* Bump the function_type_mismatch handler version, as its signature has changed.
* The function_type_mismatch handler can return successfully now, so
SanitizerKind::Function must be AlwaysRecoverable (like for
SanitizerKind::Vptr).
* But the minimal runtime would still unconditionally treat a call to the
function_type_mismatch handler as failure, so disallow -fsanitize=function in
combination with -fsanitize-minimal-runtime (like it was already done for
-fsanitize=vptr).
* Add tests.
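For context, a small sketch (an assumed example, not part of the patch) of the
kind of indirect call the function_type_mismatch handler reports under
-fsanitize=function:
void takes_int(int) {}
int main() {
  // Call through a function pointer whose type does not match the callee;
  // UBSan's function check fires here via the function_type_mismatch handler.
  void (*fp)(double) = reinterpret_cast<void (*)(double)>(&takes_int);
  fp(1.0);
}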
Differential Revision: https://reviews.llvm.org/D61479
llvm-svn: 366186
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Summary:
Just like in https://reviews.llvm.org/D56803
for -dumpversion
Reviewers: rnk
Reviewed By: rnk
Subscribers: dexonsmith, lebedev.ri, hubert.reinterpretcast, xbolva00, fedor.sergeev, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D63048
llvm-svn: 366091
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Summary:
GCC supports named address spaces macros:
https://gcc.gnu.org/onlinedocs/gcc/Named-Address-Spaces.html
clang does as well with address spaces:
https://clang.llvm.org/docs/LanguageExtensions.html#memory-references-to-specified-segments
Add the __seg_fs and __seg_gs macros for compatibility with GCC.
<rdar://problem/52944935>
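A minimal x86-only sketch (offset and function name assumed for illustration):
// Loads through a __seg_gs-qualified pointer are addressed relative to the
// %gs segment base; __seg_fs works the same way for %fs.
unsigned long read_gs_slot(unsigned long offset) {
  const unsigned long __seg_gs *p = (const unsigned long __seg_gs *)offset;
  return *p;
}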
Subscribers: jkorous, dexonsmith, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D64676
llvm-svn: 366028
|
|
|
|
| |
llvm-svn: 366024
|
|
|
|
|
|
|
|
|
|
|
|
| |
Some targets such as Python 2.7.16 still use VERSION in
their builds. Without VERSION defined, the source code
has syntax errors.
Reverting as it will probably break many other things.
Noticed by Sterling Augustine
llvm-svn: 365992
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Summary:
It was introduced in 2011 for GCC compatibility:
https://github.com/llvm-mirror/clang/commit/ad1a4c6e89594e704775ddb6b036ac982fd68cad
It is probably time to remove it.
Reviewers: rnk, dexonsmith
Reviewed By: rnk
Subscribers: dschuff, aheejin, fedor.sergeev, arphaman, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D64062
llvm-svn: 365962
|
|
|
|
|
|
|
|
|
|
| |
This flag is analogous to other flags like -nostdlib or -nolibc
and can be used to disable linking of the profile runtime library.
This is useful in certain environments, like the kernel, where profile
instrumentation is still desirable but we cannot use the standard
runtime library.
llvm-svn: 365808
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Summary:
Header links should have some standard form so clang-tidy
docs can easily reference them. The form is as follows.
Start with the analyzer full name including packages.
Replace all periods with dashes and lowercase everything.
Ex: core.CallAndMessage -> core-callandmessage
Reviewers: JonasToth, aaron.ballman, NoQ, Szelethus
Reviewed By: aaron.ballman, Szelethus
Subscribers: nickdesaulniers, lebedev.ri, baloghadamsoftware, mgrang, a.sidorin, Szelethus, jfb, donat.nagy, dkrupp, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D64543
llvm-svn: 365797
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Summary:
Add a user documentation page. This is an empty page for now; later patches will add
the specific user documentation.
Reviewers: dkrupp
Subscribers: whisperity, xazax.hun, baloghadamsoftware, szepet, rnkovacs, a.sidorin, mikhail.ramalho, Szelethus, donat.nagy, gamesh411, Charusso, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D64494
llvm-svn: 365639
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
A short granule is a granule of size between 1 and `TG-1` bytes. The size
of a short granule is stored at the location in shadow memory where the
granule's tag is normally stored, while the granule's actual tag is stored
in the last byte of the granule. This means that in order to verify that a
pointer tag matches a memory tag, HWASAN must check for two possibilities:
* the pointer tag is equal to the memory tag in shadow memory, or
* the shadow memory tag is actually a short granule size, the value being loaded
is in bounds of the granule and the pointer tag is equal to the last byte of
the granule.
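The two cases can be summarised with a small sketch (names and a 16-byte
granule size are assumed for illustration; this is not the compiler-emitted
check sequence):
bool tag_check_passes(unsigned char ptr_tag, unsigned char shadow_byte,
                      const unsigned char *granule,
                      unsigned offset_of_byte_in_granule) {
  if (ptr_tag == shadow_byte)
    return true;                     // ordinary granule: shadow holds the tag
  if (shadow_byte >= 1 && shadow_byte <= 15)
    // Short granule: the shadow byte holds the size and the real tag is kept
    // in the last byte of the granule; the accessed byte must be in bounds.
    return offset_of_byte_in_granule < shadow_byte && ptr_tag == granule[15];
  return false;
}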
Pointer tags between 1 and `TG-1` are possible and are as likely as any other
tag. This means that these tags in memory have two interpretations: the full
tag interpretation (where the pointer tag is between 1 and `TG-1` and the
last byte of the granule is ordinary data) and the short tag interpretation
(where the pointer tag is stored in the granule).
When HWASAN detects an error near a memory tag between 1 and `TG-1`, it
will show both the memory tag and the last byte of the granule. Currently,
it is up to the user to disambiguate the two possibilities.
Because this functionality obsoletes the right-aligned heap feature of
the HWASAN memory allocator (and because we can no longer easily test
it), that feature is removed.
Also update the documentation to cover both short granule tags and
outlined checks.
Differential Revision: https://reviews.llvm.org/D63908
llvm-svn: 365551
|
|
|
|
| |
llvm-svn: 365446
|
|
|
|
| |
llvm-svn: 365445
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
For background on the BPF CO-RE project, please refer to
http://vger.kernel.org/bpfconf2019.html
In summary, BPF CO-RE intends to compile BPF programs so that they are
adjustable to struct/union layout changes, allowing the same program to run
on multiple kernels after being adjusted before loading based on the native
kernel structures.
In order to do this, we need to keep track of the GEP (getelementptr)
instruction's base and result debuginfo types, so we
can adjust on the host based on kernel BTF info.
Capturing such information as an IR optimization is hard, as various
optimizations may have tweaked the GEPs, and once a union is replaced by a
structure it is impossible to track the field index for union member accesses.
Three intrinsic functions, preserve_{array,union,struct}_access_index,
are introduced.
addr = preserve_array_access_index(base, index, dimension)
addr = preserve_union_access_index(base, di_index)
addr = preserve_struct_access_index(base, gep_index, di_index)
here,
base: the base pointer for the array/union/struct access.
index: the last access index for array, the same for IR/DebugInfo layout.
dimension: the array dimension.
gep_index: the access index based on IR layout.
di_index: the access index based on user/debuginfo types.
Using these intrinsics blindly, i.e., transforming all GEPs
to these intrinsics and later reducing them back to GEPs, we have
seen up to 7% more instructions generated. To avoid such an overhead,
a clang builtin is proposed:
base = __builtin_preserve_access_index(base)
so that the user wraps to-be-relocated GEPs in this builtin
and the preserve_*_access_index intrinsics apply only to
those GEPs. Such a builtin prevents performance degradation
if people do not use CO-RE, even for programs which use
bpf_probe_read().
For example, given the following code:
$ cat test.c
struct sk_buff {
int i;
int b1:1;
int b2:2;
union {
struct {
int o1;
int o2;
} o;
struct {
char flags;
char dev_id;
} dev;
int netid;
} u[10];
};
static int (*bpf_probe_read)(void *dst, int size, const void *unsafe_ptr)
= (void *) 4;
#define _(x) (__builtin_preserve_access_index(x))
int bpf_prog(struct sk_buff *ctx) {
char dev_id;
bpf_probe_read(&dev_id, sizeof(char), _(&ctx->u[5].dev.dev_id));
return dev_id;
}
$ clang -target bpf -O2 -g -emit-llvm -S -mllvm -print-before-all \
test.c >& log
The generated IR looks like below:
...
define dso_local i32 @bpf_prog(%struct.sk_buff*) #0 !dbg !15 {
%2 = alloca %struct.sk_buff*, align 8
%3 = alloca i8, align 1
store %struct.sk_buff* %0, %struct.sk_buff** %2, align 8, !tbaa !45
call void @llvm.dbg.declare(metadata %struct.sk_buff** %2, metadata !43, metadata !DIExpression()), !dbg !49
call void @llvm.lifetime.start.p0i8(i64 1, i8* %3) #4, !dbg !50
call void @llvm.dbg.declare(metadata i8* %3, metadata !44, metadata !DIExpression()), !dbg !51
%4 = load i32 (i8*, i32, i8*)*, i32 (i8*, i32, i8*)** @bpf_probe_read, align 8, !dbg !52, !tbaa !45
%5 = load %struct.sk_buff*, %struct.sk_buff** %2, align 8, !dbg !53, !tbaa !45
%6 = call [10 x %union.anon]* @llvm.preserve.struct.access.index.p0a10s_union.anons.p0s_struct.sk_buffs(
%struct.sk_buff* %5, i32 2, i32 3), !dbg !53, !llvm.preserve.access.index !19
%7 = call %union.anon* @llvm.preserve.array.access.index.p0s_union.anons.p0a10s_union.anons(
[10 x %union.anon]* %6, i32 1, i32 5), !dbg !53
%8 = call %union.anon* @llvm.preserve.union.access.index.p0s_union.anons.p0s_union.anons(
%union.anon* %7, i32 1), !dbg !53, !llvm.preserve.access.index !26
%9 = bitcast %union.anon* %8 to %struct.anon.0*, !dbg !53
%10 = call i8* @llvm.preserve.struct.access.index.p0i8.p0s_struct.anon.0s(
%struct.anon.0* %9, i32 1, i32 1), !dbg !53, !llvm.preserve.access.index !34
%11 = call i32 %4(i8* %3, i32 1, i8* %10), !dbg !52
%12 = load i8, i8* %3, align 1, !dbg !54, !tbaa !55
%13 = sext i8 %12 to i32, !dbg !54
call void @llvm.lifetime.end.p0i8(i64 1, i8* %3) #4, !dbg !56
ret i32 %13, !dbg !57
}
!19 = distinct !DICompositeType(tag: DW_TAG_structure_type, name: "sk_buff", file: !3, line: 1, size: 704, elements: !20)
!26 = distinct !DICompositeType(tag: DW_TAG_union_type, scope: !19, file: !3, line: 5, size: 64, elements: !27)
!34 = distinct !DICompositeType(tag: DW_TAG_structure_type, scope: !26, file: !3, line: 10, size: 16, elements: !35)
Note that @llvm.preserve.{struct,union}.access.index calls have metadata llvm.preserve.access.index
attached to instructions to provide struct/union debuginfo type information.
For &ctx->u[5].dev.dev_id,
. The "%6 = ..." represents struct member "u" with index 2 for IR layout and index 3 for DI layout.
. The "%7 = ..." represents array subscript "5".
. The "%8 = ..." represents union member "dev" with index 1 for DI layout.
. The "%10 = ..." represents struct member "dev_id" with index 1 for both IR and DI layout.
Basically, by traversing the use-def chain recursively for the 3rd argument of bpf_probe_read() and
examining all preserve_*_access_index calls, the debuginfo struct/union/array access indices
can be recovered.
The intrinsics also contain enough information to regenerate code for the IR layout.
For array and structure intrinsics, the proper GEP can be constructed.
For union intrinsics, replacing all uses of "addr" with "base" should be enough.
Signed-off-by: Yonghong Song <yhs@fb.com>
Differential Revision: https://reviews.llvm.org/D61809
llvm-svn: 365438
|
|
|
|
|
|
|
|
|
| |
This reverts commit r365435.
Forgot to add the Differential Revision link. Will add it to the
commit message and resubmit.
llvm-svn: 365436
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
For background on the BPF CO-RE project, please refer to
http://vger.kernel.org/bpfconf2019.html
In summary, BPF CO-RE intends to compile BPF programs so that they are
adjustable to struct/union layout changes, allowing the same program to run
on multiple kernels after being adjusted before loading based on the native
kernel structures.
In order to do this, we need to keep track of the GEP (getelementptr)
instruction's base and result debuginfo types, so we
can adjust on the host based on kernel BTF info.
Capturing such information as an IR optimization is hard, as various
optimizations may have tweaked the GEPs, and once a union is replaced by a
structure it is impossible to track the field index for union member accesses.
Three intrinsic functions, preserve_{array,union,struct}_access_index,
are introduced.
addr = preserve_array_access_index(base, index, dimension)
addr = preserve_union_access_index(base, di_index)
addr = preserve_struct_access_index(base, gep_index, di_index)
here,
base: the base pointer for the array/union/struct access.
index: the last access index for array, the same for IR/DebugInfo layout.
dimension: the array dimension.
gep_index: the access index based on IR layout.
di_index: the access index based on user/debuginfo types.
Using these intrinsics blindly, i.e., transforming all GEPs
to these intrinsics and later reducing them back to GEPs, we have
seen up to 7% more instructions generated. To avoid such an overhead,
a clang builtin is proposed:
base = __builtin_preserve_access_index(base)
so that the user wraps to-be-relocated GEPs in this builtin
and the preserve_*_access_index intrinsics apply only to
those GEPs. Such a builtin prevents performance degradation
if people do not use CO-RE, even for programs which use
bpf_probe_read().
For example, given the following code:
$ cat test.c
struct sk_buff {
int i;
int b1:1;
int b2:2;
union {
struct {
int o1;
int o2;
} o;
struct {
char flags;
char dev_id;
} dev;
int netid;
} u[10];
};
static int (*bpf_probe_read)(void *dst, int size, const void *unsafe_ptr)
= (void *) 4;
#define _(x) (__builtin_preserve_access_index(x))
int bpf_prog(struct sk_buff *ctx) {
char dev_id;
bpf_probe_read(&dev_id, sizeof(char), _(&ctx->u[5].dev.dev_id));
return dev_id;
}
$ clang -target bpf -O2 -g -emit-llvm -S -mllvm -print-before-all \
test.c >& log
The generated IR looks like below:
...
define dso_local i32 @bpf_prog(%struct.sk_buff*) #0 !dbg !15 {
%2 = alloca %struct.sk_buff*, align 8
%3 = alloca i8, align 1
store %struct.sk_buff* %0, %struct.sk_buff** %2, align 8, !tbaa !45
call void @llvm.dbg.declare(metadata %struct.sk_buff** %2, metadata !43, metadata !DIExpression()), !dbg !49
call void @llvm.lifetime.start.p0i8(i64 1, i8* %3) #4, !dbg !50
call void @llvm.dbg.declare(metadata i8* %3, metadata !44, metadata !DIExpression()), !dbg !51
%4 = load i32 (i8*, i32, i8*)*, i32 (i8*, i32, i8*)** @bpf_probe_read, align 8, !dbg !52, !tbaa !45
%5 = load %struct.sk_buff*, %struct.sk_buff** %2, align 8, !dbg !53, !tbaa !45
%6 = call [10 x %union.anon]* @llvm.preserve.struct.access.index.p0a10s_union.anons.p0s_struct.sk_buffs(
%struct.sk_buff* %5, i32 2, i32 3), !dbg !53, !llvm.preserve.access.index !19
%7 = call %union.anon* @llvm.preserve.array.access.index.p0s_union.anons.p0a10s_union.anons(
[10 x %union.anon]* %6, i32 1, i32 5), !dbg !53
%8 = call %union.anon* @llvm.preserve.union.access.index.p0s_union.anons.p0s_union.anons(
%union.anon* %7, i32 1), !dbg !53, !llvm.preserve.access.index !26
%9 = bitcast %union.anon* %8 to %struct.anon.0*, !dbg !53
%10 = call i8* @llvm.preserve.struct.access.index.p0i8.p0s_struct.anon.0s(
%struct.anon.0* %9, i32 1, i32 1), !dbg !53, !llvm.preserve.access.index !34
%11 = call i32 %4(i8* %3, i32 1, i8* %10), !dbg !52
%12 = load i8, i8* %3, align 1, !dbg !54, !tbaa !55
%13 = sext i8 %12 to i32, !dbg !54
call void @llvm.lifetime.end.p0i8(i64 1, i8* %3) #4, !dbg !56
ret i32 %13, !dbg !57
}
!19 = distinct !DICompositeType(tag: DW_TAG_structure_type, name: "sk_buff", file: !3, line: 1, size: 704, elements: !20)
!26 = distinct !DICompositeType(tag: DW_TAG_union_type, scope: !19, file: !3, line: 5, size: 64, elements: !27)
!34 = distinct !DICompositeType(tag: DW_TAG_structure_type, scope: !26, file: !3, line: 10, size: 16, elements: !35)
Note that @llvm.preserve.{struct,union}.access.index calls have metadata llvm.preserve.access.index
attached to instructions to provide struct/union debuginfo type information.
For &ctx->u[5].dev.dev_id,
. The "%6 = ..." represents struct member "u" with index 2 for IR layout and index 3 for DI layout.
. The "%7 = ..." represents array subscript "5".
. The "%8 = ..." represents union member "dev" with index 1 for DI layout.
. The "%10 = ..." represents struct member "dev_id" with index 1 for both IR and DI layout.
Basically, by traversing the use-def chain recursively for the 3rd argument of bpf_probe_read() and
examining all preserve_*_access_index calls, the debuginfo struct/union/array access indices
can be recovered.
The intrinsics also contain enough information to regenerate code for the IR layout.
For array and structure intrinsics, the proper GEP can be constructed.
For union intrinsics, replacing all uses of "addr" with "base" should be enough.
Signed-off-by: Yonghong Song <yhs@fb.com>
llvm-svn: 365435
|