| Commit message (Collapse) | Author | Age | Files | Lines |
| |
Summary:
Add __kernel_rt_sigreturn to the list of trap handlers for Linux (it's
used as such on aarch64 at least), and __restore_rt as well (used on
x86_64).
Skip decrement-and-recompute for trap handlers in
InitializeNonZerothFrame, as signal dispatch may point the child frame's
return address to the start of the return trampoline.
Parse the 'S' flag for signal handlers from eh_frame augmentation, and
propagate it to the unwind plan.
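As a rough illustration (names invented, not the actual LLDB code), the augmentation-string
handling amounts to:
  #include <string>
  // Scan a CIE augmentation string such as "zRS" and record the 'S'
  // (signal handler) flag so it can be propagated to the unwind plan.
  struct UnwindPlan {
    bool SignalTrapHandler = false;
  };
  void parseAugmentation(const std::string &Augmentation, UnwindPlan &Plan) {
    for (char C : Augmentation)
      if (C == 'S')
        Plan.SignalTrapHandler = true;
  }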
Reviewers: labath, jankratochvil, compnerd, jfb, jasonmolenda
Reviewed By: jasonmolenda
Subscribers: clayborg, MaskRay, wuzish, nemanjai, kbarton, jrtc27, atanasyan, jsji, javed.absar, kristof.beyls, lldb-commits
Tags: #lldb
Differential Revision: https://reviews.llvm.org/D63667
llvm-svn: 366580
| |
Without the link flags, the test always fails on Linux. For some reason,
however, it works on Darwin -- which is why it wasn't caught at first.
llvm-svn: 366579
| |
This is the more natural lowering, and presents more opportunities to
reduce 64-bit ops to 32-bit.
This should also help avoid issues graphics shaders have had with
64-bit values, and simplify argument lowering in globalisel.
llvm-svn: 366578
| |
Summary:
By exposing a callback that can guard code publishing results of the
'onMainAST' callback, in the same manner we guard diagnostics.
Reviewers: sammccall
Reviewed By: sammccall
Subscribers: javed.absar, MaskRay, jkorous, arphaman, kadircet, hokein, jvikstrom, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D64985
llvm-svn: 366577
| |
llvm-svn: 366576
| |
Summary:
Since background-index can perform disk writes, we don't want to turn
it on in tests that won't clean up after it.
Reviewers: sammccall
Subscribers: ilya-biryukov, MaskRay, jkorous, arphaman, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D64990
llvm-svn: 366575
| |
This was handled previously for arguments split due to not fitting in
an MVT. This was dropping the register for argument registers split
due to TLI::getRegisterTypeForCallingConv.
llvm-svn: 366574
| |
Also add test coverage for thin archives (which are the only way I could
come up with to test at least some of the diagnostic changes).
Differential Revision: https://reviews.llvm.org/D64927
llvm-svn: 366573
| |
Summary: include the proper headers prior to the use of the uint8_t typedef
and std::string.
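A minimal sketch of the kind of change this describes (the actual file and
declarations differ):
  #include <cstdint>  // uint8_t
  #include <string>   // std::string
  // Include the headers that provide these names directly instead of relying
  // on them being pulled in transitively.
  uint8_t decodeKind(const std::string &Name);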
Subscribers: llvm-commits
Reviewers: cherry
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64937
llvm-svn: 366572
| |
See bug 40820: https://bugs.llvm.org/show_bug.cgi?id=40820
Reviewers: artem.tamazov, arsenm
Differential Revision: https://reviews.llvm.org/D64629
llvm-svn: 366571
| |
Summary:
Currently, PRE hoists common computations into
CMBB = DT->findNearestCommonDominator(MBB, MBB1).
However, if CMBB is in a hot loop body, we might get a performance
degradation.
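A rough source-level illustration of the problem (invented example; the pass
itself works on MachineIR):
  // Both uses of a*b sit on rarely taken paths, but their nearest common
  // dominator is the loop body, so hoisting a*b there makes the multiply
  // execute on every iteration instead of only on the rare paths.
  int hotLoop(int n, int a, int b, bool rare1, bool rare2) {
    int sum = 0;
    for (int i = 0; i < n; ++i) {
      // PRE would hoist "a * b" to this point (the common dominator).
      if (rare1)
        sum += a * b;
      else if (rare2)
        sum -= a * b;
      sum += i;
    }
    return sum;
  }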
Differential Revision: https://reviews.llvm.org/D64394
llvm-svn: 366570
| |
Summary:
For split-stack, if the nested argument (i.e., R10) is not used, there is no need to save/restore it in the prologue.
Reviewers: thanm
Reviewed By: thanm
Subscribers: mstorsjo, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64673
llvm-svn: 366569
| |
llvm-svn: 366568
| |
Summary:
This is effectively a revert of r344616, which was a partial fix for
PR38964 (compilation of <string> with GCC in C++03 mode). However, that
configuration is explicitly not supported anymore and that partial fix
breaks compilation with Clang when per-TU insulation is provided.
PR42676
rdar://52899715
Reviewers: mclow.lists, EricWF
Subscribers: christof, jkorous, dexonsmith, libcxx-commits
Tags: #libc
Differential Revision: https://reviews.llvm.org/D64941
llvm-svn: 366567
| |
Summary:
Fixed SelectionTree bug for macros
- Fixed SelectionTree claimRange for macros and template instantiations
- Fixed SelectionTree unit tests
- Changed a breaking test in TweakTests
Reviewers: sammccall, kadircet
Subscribers: ilya-biryukov, MaskRay, jkorous, arphaman, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D64329
llvm-svn: 366566
| |
https://rise4fun.com/Alive/8Rp
https://bugs.llvm.org/show_bug.cgi?id=42673
llvm-svn: 366565
| |
If the legality check is `(shiftNbits-maskNbits) s>= 0`,
then we can simplify it to `shiftNbits u>= maskNbits`,
which is easier to check for.
However, currently, switching `dropRedundantMaskingOfLeftShiftInput()`
to `SimplifyICmpInst()` does not catch these cases and regresses
currently-handled cases, so I'll leave it as-is for now.
https://rise4fun.com/Alive/25P
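A standalone brute-force check of that equivalence for 8-bit shift amounts
(not part of the patch; shift amounts are assumed to be valid, i.e. less than
the bit width):
  #include <cassert>
  #include <cstdint>
  int main() {
    for (uint8_t m = 0; m < 8; ++m)        // maskNbits
      for (uint8_t s = 0; s < 8; ++s) {    // shiftNbits
        bool signedCheck = (int8_t)(s - m) >= 0; // (shiftNbits-maskNbits) s>= 0
        bool unsignedCheck = s >= m;             // shiftNbits u>= maskNbits
        assert(signedCheck == unsignedCheck);
      }
    return 0;
  }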
llvm-svn: 366564
| |
llvm-svn: 366563
| |
We'd like to remove this whole function, because these are properties of
functions, not the target as a whole. These two are easy to remove
because they are only used for emitting ARM build attributes, which
expects them to represent the defaults for the whole module, not just
the last function generated.
This is needed to get correct build attributes when using IPRA on ARM,
because IPRA causes resetTargetOptions to get called before
ARMAsmPrinter::emitAttributes.
Differential revision: https://reviews.llvm.org/D64929
llvm-svn: 366562
| |
llvm-svn: 366561
| |
llvm-svn: 366560
| |
This reverts commit 9c377105da0be7c2c9a3c70035ce674c71b846af.
[clangd][BackgroundIndexLoader] Directly store DependentTU while loading shard
Summary:
We were deferring the population of the DependentTU field in LoadedShard
until BackgroundIndexLoader was consumed. This actually triggers a
use-after-free, since the shards that FileToTU was pointing at could have
been moved while consuming the Loader.
Reviewers: sammccall
Subscribers: ilya-biryukov, MaskRay, jkorous, arphaman, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D64980
llvm-svn: 366559
| |
Fixes https://bugs.llvm.org/show_bug.cgi?id=42622.
(--hash-symbols switch is currently broken for 64-bit ELF files, due to r352630.)
Differential revision: https://reviews.llvm.org/D64788
llvm-svn: 366558
| |
If a function definition is not exact, then the linker could select a
differently-compiled version of it, which could use different registers.
https://reviews.llvm.org/D64909
llvm-svn: 366557
| |
Summary:
According to the new Armv8-M specification
https://static.docs.arm.com/ddi0553/bh/DDI0553B_h_armv8m_arm.pdf the
instructions SQRSHRL and UQRSHLL now have an additional immediate
operand <saturate>. The new assembly syntax is:
SQRSHRL<c> RdaLo, RdaHi, #<saturate>, Rm
UQRSHLL<c> RdaLo, RdaHi, #<saturate>, Rm
where <saturate> can be either 64 (the existing behavior) or 48, in
which case the result is saturated to 48 bits.
The new operand is encoded as follows:
#64 Encoded as sat = 0
#48 Encoded as sat = 1
sat is bit 7 of the instruction bit pattern.
This patch adds a new assembler operand class MveSaturateOperand which
implements parsing and encoding. Decoding is implemented in
DecodeMVEOverlappingLongShift.
Reviewers: ostannard, simon_tatham, t.p.northover, samparker, dmgreen, SjoerdMeijer
Reviewed By: simon_tatham
Subscribers: javed.absar, kristof.beyls, hiraditya, pbarrio, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64810
llvm-svn: 366555
| |
r366458 is causing test failures. r366467 and r366468 had to be reverted as
they were causing conflicts while reverting r366458.
r366468 [clangd] Remove dead code from BackgroundIndex
r366467 [clangd] BackgroundIndex stores shards to the closest project
r366458 [clangd] Refactor background-index shard loading
llvm-svn: 366551
| |
Defining CLK_NULL_EVENT with a `(void*)` cast has the (unintended?)
side-effect that the address space will be fixed (as generic in OpenCL
2.0 mode). The consequence is that any target specific address space
for the clk_event_t type will not be applied.
It is not clear why the void pointer cast was needed in the first
place, and it seems we can do without it.
Differential Revision: https://reviews.llvm.org/D63876
llvm-svn: 366546
| |
Summary:
The previous patch did not fix the end mark. D64789
fixes the second case of https://github.com/clangd/clangd/issues/93.
Patch by @lh123 !
Reviewers: sammccall, kadircet
Reviewed By: kadircet
Subscribers: MaskRay, ilya-biryukov, jkorous, arphaman, cfe-commits
Tags: #clang-tools-extra, #clang
Differential Revision: https://reviews.llvm.org/D64970
llvm-svn: 366545
| |
Summary:
This patch removes the `default` case from some switches on
`llvm::Triple::ObjectFormatType`, and cases for the missing enumerators
(`UnknownObjectFormat`, `Wasm`, and `XCOFF`) are then added.
For `UnknownObjectFormat`, the effect of the action for the `default`
case is maintained; otherwise, where `llvm_unreachable` is called,
`report_fatal_error` is used instead.
Where the `default` case returns a default value, `report_fatal_error`
is used for XCOFF as a placeholder. For `Wasm`, the effect of the action
for the `default` case is maintained.
The code is structured to avoid strongly implying that the `Wasm` case
is present for any reason other than to make the switch cover all
`ObjectFormatType` enumerator values.
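A sketch of the resulting switch shape (the function and return values here
are illustrative, not the patched code):
  #include "llvm/ADT/Triple.h"
  #include "llvm/Support/ErrorHandling.h"
  static const char *formatName(const llvm::Triple &T) {
    switch (T.getObjectFormat()) {
    case llvm::Triple::UnknownObjectFormat:
      return "unknown";      // keeps the behaviour of the old `default` case
    case llvm::Triple::COFF:
      return "coff";
    case llvm::Triple::ELF:
      return "elf";
    case llvm::Triple::MachO:
      return "macho";
    case llvm::Triple::Wasm:
      return "wasm";         // present only so the switch covers every value
    case llvm::Triple::XCOFF:
      llvm::report_fatal_error("XCOFF is not yet supported"); // placeholder
    }
    llvm_unreachable("covered switch");
  }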
Reviewers: sfertile, jasonliu, daltenty
Reviewed By: sfertile
Subscribers: hiraditya, aheejin, sunfish, llvm-commits, cfe-commits
Tags: #clang, #llvm
Differential Revision: https://reviews.llvm.org/D64222
llvm-svn: 366544
| |
Summary:
Change the scan algorithm to use only power-of-two shifts (1, 2, 4, 8,
16, 32) instead of starting off shifting by 1, 2 and 3 and then doing
a 3-way ADD, because:
1. It simplifies the compiler a little.
2. It minimizes vgpr pressure because each instruction is now of the
form vn = vn + vn << c.
3. It is more friendly to the DPP combiner, which currently can't
combine into an ADD3 instruction.
Because of #2 and #3 the end result is improved from this:
v_add_u32_dpp v4, v3, v3 row_shr:1 row_mask:0xf bank_mask:0xf bound_ctrl:0
v_mov_b32_dpp v5, v3 row_shr:2 row_mask:0xf bank_mask:0xf
v_mov_b32_dpp v1, v3 row_shr:3 row_mask:0xf bank_mask:0xf
v_add3_u32 v1, v4, v5, v1
s_nop 1
v_add_u32_dpp v1, v1, v1 row_shr:4 row_mask:0xf bank_mask:0xe
s_nop 1
v_add_u32_dpp v1, v1, v1 row_shr:8 row_mask:0xf bank_mask:0xc
s_nop 1
v_add_u32_dpp v1, v1, v1 row_bcast:15 row_mask:0xa bank_mask:0xf
s_nop 1
v_add_u32_dpp v1, v1, v1 row_bcast:31 row_mask:0xc bank_mask:0xf
To this:
v_add_u32_dpp v1, v1, v1 row_shr:1 row_mask:0xf bank_mask:0xf bound_ctrl:0
s_nop 1
v_add_u32_dpp v1, v1, v1 row_shr:2 row_mask:0xf bank_mask:0xf bound_ctrl:0
s_nop 1
v_add_u32_dpp v1, v1, v1 row_shr:4 row_mask:0xf bank_mask:0xe
s_nop 1
v_add_u32_dpp v1, v1, v1 row_shr:8 row_mask:0xf bank_mask:0xc
s_nop 1
v_add_u32_dpp v1, v1, v1 row_bcast:15 row_mask:0xa bank_mask:0xf
s_nop 1
v_add_u32_dpp v1, v1, v1 row_bcast:31 row_mask:0xc bank_mask:0xf
I.e. two fewer computational instructions, one extra nop where we could
schedule something else.
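The same shift structure, written as a plain loop for reference (a sketch of
the idea, not the DPP code):
  // Inclusive prefix sum over 64 lanes using only power-of-two offsets
  // (1, 2, 4, 8, 16, 32); every step has the shape "lane += lane shifted
  // by c", matching vn = vn + vn << c above.
  void scan64(unsigned v[64]) {
    for (int c = 1; c < 64; c <<= 1)
      for (int i = 63; i >= c; --i) // walk downward so reads see pre-step values
        v[i] += v[i - c];
  }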
Reviewers: arsenm, sheredom, critson, rampitec, vpykhtin
Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64411
llvm-svn: 366543
| |
Enable, by default, loop peeling of loops with multiple exits where all
non-latch exits end up in a deopt.
Reviewers: reames, fhahn
Reviewed By: reames
Subscribers: xbolva00, hiraditya, zzheng, llvm-commits
Differential Revision: https://reviews.llvm.org/D64619
llvm-svn: 366542
| |
main file.
Summary: We have variant implementations in the codebase; this patch unifies them.
Reviewers: ilya-biryukov, kadircet
Subscribers: MaskRay, jkorous, arphaman, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D64915
llvm-svn: 366541
| |
Summary:
If we have some pattern that leaves only some low bits set and then performs a
left-shift of those bits, and if none of the bits that are left after the final
shift are modified by the mask, then we can omit the mask.
There are many variants to this pattern:
f. `((x << MaskShAmt) a>> MaskShAmt) << ShiftShAmt`
All these patterns can be simplified to just:
`x << ShiftShAmt`
iff:
f. `(ShiftShAmt-MaskShAmt) s>= 0` (i.e. `ShiftShAmt u>= MaskShAmt`)
Normally, the inner pattern is sign-extend,
but for our purposes it's no different to other patterns:
alive proofs:
f: https://rise4fun.com/Alive/7U3
For now let's start with patterns where both shift amounts are variable,
with a trivial constant "offset" between them, since I believe this is
both simplest to handle and I think this is most common.
But again, there are likely other variants where we could use
ValueTracking/ConstantRange to handle more cases.
https://bugs.llvm.org/show_bug.cgi?id=42563
Differential Revision: https://reviews.llvm.org/D64524
llvm-svn: 366540
| |
Summary:
If we have some pattern that leaves only some low bits set and then performs a
left-shift of those bits, and if none of the bits that are left after the final
shift are modified by the mask, then we can omit the mask.
There are many variants to this pattern:
e. `((x << MaskShAmt) l>> MaskShAmt) << ShiftShAmt`
All these patterns can be simplified to just:
`x << ShiftShAmt`
iff:
e. `(ShiftShAmt-MaskShAmt) s>= 0` (i.e. `ShiftShAmt u>= MaskShAmt`)
alive proofs:
e: https://rise4fun.com/Alive/0FT
For now let's start with patterns where both shift amounts are variable,
with a trivial constant "offset" between them, since I believe this is
both simplest to handle and I think this is most common.
But again, there are likely other variants where we could use
ValueTracking/ConstantRange to handle more cases.
https://bugs.llvm.org/show_bug.cgi?id=42563
Differential Revision: https://reviews.llvm.org/D64521
llvm-svn: 366539
| |
Summary:
If we have some pattern that leaves only some low bits set and then performs a
left-shift of those bits, and if none of the bits that are left after the final
shift are modified by the mask, then we can omit the mask.
There are many variants to this pattern:
d. `(x & ((-1 << MaskShAmt) >> MaskShAmt)) << ShiftShAmt`
All these patterns can be simplified to just:
`x << ShiftShAmt`
iff:
d. `(ShiftShAmt-MaskShAmt) s>= 0` (i.e. `ShiftShAmt u>= MaskShAmt`)
alive proofs:
d: https://rise4fun.com/Alive/I5Y
For now let's start with patterns where both shift amounts are variable,
with a trivial constant "offset" between them, since I believe this is
both simplest to handle and I think this is most common.
But again, there are likely other variants where we could use
ValueTracking/ConstantRange to handle more cases.
https://bugs.llvm.org/show_bug.cgi?id=42563
Differential Revision: https://reviews.llvm.org/D64519
llvm-svn: 366538
| |
Summary:
If we have some pattern that leaves only some low bits set and then performs a
left-shift of those bits, and if none of the bits that are left after the final
shift are modified by the mask, then we can omit the mask.
There are many variants to this pattern:
c. `(x & (-1 >> MaskShAmt)) << ShiftShAmt`
All these patterns can be simplified to just:
`x << ShiftShAmt`
iff:
c. `(ShiftShAmt-MaskShAmt) s>= 0` (i.e. `ShiftShAmt u>= MaskShAmt`)
alive proofs:
c: https://rise4fun.com/Alive/RgJh
For now let's start with patterns where both shift amounts are variable,
with a trivial constant "offset" between them, since I believe this is
both simplest to handle and I think this is most common.
But again, there are likely other variants where we could use
ValueTracking/ConstantRange to handle more cases.
https://bugs.llvm.org/show_bug.cgi?id=42563
Differential Revision: https://reviews.llvm.org/D64517
llvm-svn: 366537
| |
Summary:
If we have some pattern that leaves only some low bits set and then performs a
left-shift of those bits, and if none of the bits that are left after the final
shift are modified by the mask, then we can omit the mask.
There are many variants to this pattern:
b. `(x & (~(-1 << maskNbits))) << shiftNbits`
All these patterns can be simplified to just:
`x << ShiftShAmt`
iff:
b. `(MaskShAmt+ShiftShAmt) u>= bitwidth(x)`
alive proof:
b: https://rise4fun.com/Alive/y8M
For now let's start with patterns where both shift amounts are variable,
with a trivial constant "offset" between them, since I believe this is
both simplest to handle and I think this is most common.
But again, there are likely other variants where we could use
ValueTracking/ConstantRange to handle more cases.
https://bugs.llvm.org/show_bug.cgi?id=42563
Differential Revision: https://reviews.llvm.org/D64514
llvm-svn: 366536
| |
Summary:
If we have some pattern that leaves only some low bits set and then performs a
left-shift of those bits, and if none of the bits that are left after the final
shift are modified by the mask, then we can omit the mask.
There are many variants to this pattern:
a. `(x & ((1 << MaskShAmt) - 1)) << ShiftShAmt`
All these patterns can be simplified to just:
`x << ShiftShAmt`
iff:
a. `(MaskShAmt+ShiftShAmt) u>= bitwidth(x)`
alive proof:
a: https://rise4fun.com/Alive/wi9
Indeed, not all of these patterns are canonical.
But since this fold will only produce a single instruction
I'm really interested in handling even uncanonical patterns,
since I have this general kind of pattern in hotpaths,
and it is not totally outlandish for bit-twiddling code.
For now let's start with patterns where both shift amounts are variable,
with a trivial constant "offset" between them, since I believe this is
both simplest to handle and I think this is most common.
But again, there are likely other variants where we could use
ValueTracking/ConstantRange to handle more cases.
https://bugs.llvm.org/show_bug.cgi?id=42563
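An exhaustive 8-bit check of variant (a) under its legality condition (a
standalone sketch, not part of the patch):
  #include <cassert>
  #include <cstdint>
  int main() {
    // (x & ((1 << MaskShAmt) - 1)) << ShiftShAmt  ==>  x << ShiftShAmt
    // whenever MaskShAmt + ShiftShAmt >= bitwidth(x); verified here for i8.
    for (unsigned x = 0; x < 256; ++x)
      for (unsigned m = 0; m < 8; ++m)
        for (unsigned s = 0; s < 8; ++s) {
          if (m + s < 8)
            continue; // legality condition not met; the fold does not apply
          uint8_t masked = (uint8_t)((x & ((1u << m) - 1u)) << s);
          uint8_t plain = (uint8_t)(x << s);
          assert(masked == plain);
        }
    return 0;
  }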
Reviewers: spatel, nikic, huihuiz, xbolva00
Reviewed By: xbolva00
Subscribers: efriedma, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64512
llvm-svn: 366535
| |
llvm-svn: 366534
| |
llvm-svn: 366533
| |
* Delete aarch64-tls-static.s: it is covered by aarch64-tlsdesc.c
* Add --no-show-raw-insn to llvm-objdump -d tests
* When linking an executable against %t.so, the path %t.so will be recorded in the DT_NEEDED entry if %t.so doesn't have a DT_SONAME, so the DT_NEEDED entry has varying lengths on different systems.
Add -soname to make the tests more robust. This issue will become more noticeable if we allow overlapping PT_LOAD (D64930).
llvm-svn: 366532
| |
In debug frame information, some fields, e.g., Length in a CIE/FDE and
Offset in an FDE, describe the structure of the CIE/FDE itself; they are
not related to the relaxed code. However, these fields are symbol
differences, so in the current design they are filled with zero and LLVM
generates relocations for them.
We only need to generate relocations for symbols in executable sections,
so if the symbols are not located in executable sections, we still
evaluate their values under relaxation.
Differential Revision: https://reviews.llvm.org/D61584
llvm-svn: 366531
| |
llvm-svn: 366530
| |
to the new copy.
llvm-svn: 366529
| |
llvm-svn: 366528
| |
Summary: The test case added in D62718 did not work unless the user was root, because write bits were not set for the output file. This change uses only the user-write permission (0200) to ensure the tests pass regardless of the user's permissions.
Reviewers: jhenderson, rupprecht, MaskRay, espindola, alexshap
Reviewed By: MaskRay
Subscribers: emaste, arichardson, jakehehrlich, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64302
llvm-svn: 366527
| |
llvm-svn: 366526
| |
Build libFuzzer for all supported Android architectures.
llvm-svn: 366525
| |
It is necessary to generate fixups in .debug_frame and .eh_frame when
relaxation is enabled, because the address deltas may change after
relaxation.
There is an opcode with 6 bits of data in the debug frame encoding, so we
also need 6-bit fixup types.
Differential Revision: https://reviews.llvm.org/D58335
llvm-svn: 366524
| |
Summary:
Inline asm doesn't use labels when compiled as an object file. Therefore, we
shouldn't create one for the (potential) callbr destination. Instead, use the
symbol for the MachineBasicBlock.
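For context, a minimal (hypothetical) asm goto function whose indirect
destination becomes such a callbr successor:
  int check(int x) {
    asm goto("testl %0, %0\n\t"
             "jz %l[fail]"
             : /* no outputs */
             : "r"(x)
             : "cc"
             : fail);
    return 0;                 // fallthrough destination
  fail:
    return 1;                 // indirect destination referenced via %l[fail]
  }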
Reviewers: nickdesaulniers, craig.topper
Reviewed By: nickdesaulniers
Subscribers: xbolva00, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64888
llvm-svn: 366523