Summary:
- Changed variable/function names to be more consistent
- Improved comments in test files
- Added more tests
- Fixed a few typos
- Misc. cosmetic changes
Reviewers: dschuff
Subscribers: sbc100, jgravelle-google, sunfish, llvm-commits
Differential Revision: https://reviews.llvm.org/D49087
llvm-svn: 336598
Add support for this builtin:
double __builtin_truncf128_round_to_odd(__float128)
Differential Revision: https://reviews.llvm.org/D48483
llvm-svn: 336595
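For illustration, a minimal C usage sketch of this builtin (assuming a compiler and target where __float128 and the builtin are available, e.g. POWER9; this is not code from the patch):

  #include <stdio.h>

  int main(void) {
    __float128 q = (__float128)1.0 / 3.0;            /* arbitrary value with extra precision */
    double d = __builtin_truncf128_round_to_odd(q);  /* narrow to double, rounding to odd */
    printf("%.17g\n", d);
    return 0;
  }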
Ensure that, when updating a tied operand pair, only that pair is
updated.
Differential Revision: https://reviews.llvm.org/D49052
llvm-svn: 336593
Summary:
This patch adds support for the atomicrmw instructions and the strong
cmpxchg instruction to the IRTranslator.
I've left out weak cmpxchg because LangRef.rst isn't entirely clear on what
difference it makes to the backend. As far as I can tell from the code, it
only matters to AtomicExpandPass which is run at the LLVM-IR level.
Reviewers: ab, t.p.northover, qcolombet, rovka, aditya_nandakumar, volkan, javed.absar
Reviewed By: qcolombet
Subscribers: kristof.beyls, javed.absar, igorb, llvm-commits
Differential Revision: https://reviews.llvm.org/D40092
llvm-svn: 336589
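For reference, a small C sketch of source-level operations that commonly lower to the IR instructions handled here (illustrative only; the function names are arbitrary and this is not code from the patch):

  #include <stdbool.h>

  int fetch_add(int *p, int v) {
    /* typically lowers to: atomicrmw add ... seq_cst */
    return __atomic_fetch_add(p, v, __ATOMIC_SEQ_CST);
  }

  bool cas_strong(int *p, int *expected, int desired) {
    /* weak = false requests a strong compare-exchange, i.e. a plain
       (non-weak) cmpxchg at the IR level */
    return __atomic_compare_exchange_n(p, expected, desired, false,
                                       __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
  }

Passing true for the weak flag would request the weak form, which this patch leaves unhandled.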
These won't work for the foreseeable future. These aren't allowed
from OpenCL, but IPO optimizations can make them appear.
Also, directly set the attributes on functions regardless of the
linkage, rather than cloning functions as before.
llvm-svn: 336587
Summary:
This adds a reverse transform for the instcombine canonicalizations
that were added in D47980, D47981.
As discussed later, that was worse at least for code size,
and potentially for performance, too.
https://rise4fun.com/Alive/Zmpl
Reviewers: craig.topper, RKSimon, spatel
Reviewed By: spatel
Subscribers: reames, llvm-commits
Differential Revision: https://reviews.llvm.org/D48768
llvm-svn: 336585
GCC has builtins for these round to odd instructions:
__float128 __builtin_sqrtf128_round_to_odd (__float128)
__float128 __builtin_{add,sub,mul,div}f128_round_to_odd (__float128, __float128)
__float128 __builtin_fmaf128_round_to_odd (__float128, __float128, __float128)
Differential Revision: https://reviews.llvm.org/D47550
llvm-svn: 336578
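For illustration, a hypothetical C sketch that exercises these builtins (assumes __float128 support on the target, e.g. POWER9; the function name and expression are arbitrary and not from the patch):

  __float128 round_to_odd_demo(__float128 a, __float128 b, __float128 c) {
    __float128 s = __builtin_sqrtf128_round_to_odd(a);      /* sqrt(a)   */
    __float128 m = __builtin_mulf128_round_to_odd(b, c);    /* b * c     */
    __float128 f = __builtin_fmaf128_round_to_odd(a, b, c); /* a * b + c */
    /* each intermediate result is rounded to odd rather than to nearest */
    return __builtin_addf128_round_to_odd(s, __builtin_subf128_round_to_odd(m, f));
  }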
Summary:
Add a bitcode compatibility test for 6.0. On top of the normal disassembly
test, this also runs the verifier to make sure simple 6.0 bitcode can pass
the current IR verifier.
Reviewers: vsk
Reviewed By: vsk
Subscribers: dexonsmith, llvm-commits
Differential Revision: https://reviews.llvm.org/D49086
llvm-svn: 336574
llvm-svn: 336570
expected type before creating the new FMA node.
Previously, we were creating malformed SDNodes, but nothing caught it because the type constraints prevented isel from noticing.
llvm-svn: 336566
Let the update script merge 32/64 tests where possible
llvm-svn: 336565
As discussed in D49047 / D48987, shift-by-undef produces poison,
so we can't use undef vector elements in that case.
Note that we need to extend this for poison-generating flags;
there's a proposal to create poison from FMF in D47963.
llvm-svn: 336562
When implementing the DWARF accelerator tables in dsymutil I ran into an
assertion in the assembler. Debugging these kinds of issues is a lot
easier when looking at the assembly instead of debugging the assembler
itself. Since it's only a matter of creating an AsmStreamer instead of an
MCObjectStreamer, it made sense to turn this into a (hidden) dsymutil
feature.
Differential revision: https://reviews.llvm.org/D49079
llvm-svn: 336561
Summary:
To allow bitcode built by an old compiler to pass the current verifier,
BitcodeReader needs to auto-infer the correct runtime preemption from
linkage and visibility for GlobalValues.
Since llvm-6.0 bitcode already contains the new field but can be
incorrect in some cases, the attribute needs to be recomputed all the
time in BitcodeReader. This makes all the GVs read from bitcode have
dso_local marked correctly, and it should still allow the verifier
to catch mistakes in optimization passes.
This should fix PR38009.
Reviewers: sfertile, vsk
Reviewed By: vsk
Subscribers: dexonsmith, llvm-commits
Differential Revision: https://reviews.llvm.org/D49039
llvm-svn: 336560
This patch adds support for the following instructions:
CNTB, CNTH, CNTW, CNTD - Determine the number of active elements implied by
  the named predicate constant, multiplied by an immediate, e.g.
    cnth x0, vl8, #16
CNTP - Count active predicate elements, e.g.
    cntp x0, p0, p1.b
  counts the number of active elements in p1, predicated by p0, and stores
  the result in x0.
llvm-svn: 336552
Added handling for select of f128.
Differential Revision: https://reviews.llvm.org/D48294
llvm-svn: 336548
This patch completes support for shifts, which include:
- LSL - Logical Shift Left
- LSLR - Logical Shift Left, Reversed form
- LSR - Logical Shift Right
- LSRR - Logical Shift Right, Reversed form
- ASR - Arithmetic Shift Right
- ASRR - Arithmetic Shift Right, Reversed form
- ASRD - Arithmetic Shift Right for Divide
In the following variants:
- Predicated shift by immediate - ASR, LSL, LSR, ASRD
    e.g. asr z0.h, p0/m, z0.h, #1
    (active lanes of z0 shifted by #1)
- Unpredicated shift by immediate - ASR, LSL*, LSR*
    e.g. asr z0.h, z1.h, #1
    (all lanes of z1 shifted by #1, stored in z0)
- Predicated shift by vector - ASR, LSL*, LSR*
    e.g. asr z0.h, p0/m, z0.h, z1.h
    (active lanes of z0 shifted by z1, stored in z0)
- Predicated shift by vector, reversed form - ASRR, LSLR, LSRR
    e.g. lslr z0.h, p0/m, z0.h, z1.h
    (active lanes of z1 shifted by z0, stored in z0)
- Predicated shift left/right by wide vector - ASR, LSL, LSR
    e.g. lsl z0.h, p0/m, z0.h, z1.d
    (active lanes of z0 shifted by wide elements of vector z1)
- Unpredicated shift left/right by wide vector - ASR, LSL, LSR
    e.g. lsl z0.h, z1.h, z2.d
    (all lanes of z1 shifted by wide elements of z2, stored in z0)
*Variants added in previous patches.
llvm-svn: 336547
As noted in D48987, there are many different ways for this transform to go wrong.
In particular, the poison potential for shifts means we have to be more careful with those ops.
I added tests to make that behavior visible for all of the different cases that I could find.
This is a partial fix. To make this review easier, I did not make changes for the single binop
pattern (handled in foldSelectShuffleWith1Binop()). I also left out some potential optimizations
noted with TODO comments. I'll follow up once we're confident that things are correct here.
The goal is to correct all marked FIXME tests to either avoid the shuffle transform or do it safely.
Note that distinguishing when the shuffle mask contains undefs and using getBinOpIdentity() allows
for some improvements to div/rem patterns, so there are wins along with the missed opportunities
and fixes.
Differential Revision: https://reviews.llvm.org/D49047
llvm-svn: 336546
Related to http://reviews.llvm.org/D15772
Depends on http://reviews.llvm.org/D16889
Adds [D]REM[U] instructions.
Patch By: Srdjan Obucina
Contributions from: Simon Dardis
Differential Revision: https://reviews.llvm.org/D17036
llvm-svn: 336545
Support for SVE's TBL instruction for programmable table
lookup/permute using a vector of element indices, e.g.
tbl z0.d, { z1.d }, z2.d
stores elements from z1, indexed by elements from z2, into z0.
llvm-svn: 336544
instruction.
This is a short-term fix for PR38093.
For now, we call llvm::report_fatal_error if the instruction builder finds an
unsupported instruction in the instruction stream.
We need to revisit this fix once we start addressing PR38101.
Essentially, we need a better framework for error handling.
llvm-svn: 336543
r335553 with the non-trivial unswitching of switches.
The code correctly updated most aspects of the CFG and analyses, but
missed some crucial aspects:
1) When multiple cases have the same successor, we unswitch that
a single time and replace the switch with a direct branch. The CFG
here is correct, but the target of this direct branch may have had
a PHI node with multiple entries in it.
2) When we still have to clone a successor of the switch into an
unswitched copy of the loop, we'll delete potentially multiple edges
entering this successor, not just one.
3) We also have to delete multiple edges entering the successors in the
original loop when they have to be retained.
4) When the "retained successor" *also* occurs as a case successor, we
simply hit assertion failures everywhere. This doesn't happen very easily
because it's always valid to simply drop the case -- the retained
successor for switches is always the default successor. However, it
is likely possible through some contrivance of different loop passes,
unrolling, and simplifying for this to occur in practice and
certainly there is nothing "invalid" about the IR so this pass needs
to handle it.
5) In the case of #4, we also will replace these multiple edges with
a direct branch much like in #1 and need to collapse the entries in
any PHI nodes to a single entry.
All of this stems from the delightful fact that the same successor can
show up in multiple parts of the switch terminator, and each of these
are considered a distinct edge for the purpose of PHI nodes (and
iterating the successors and predecessors) but not for unswitching
itself, the dominator tree, or many other things. For the record,
I intensely dislike this "feature" of the IR in large part because of
the complexity it causes in passes like this. We already have a ton of
logic building sets and handling duplicates, and we just had to add
a bunch more.
I've added a complex test case that covers all five of the above failure
modes. I've also added a variation on it where #4 and #5 occur in loop
exit, adding fun where we have an LCSSA PHI node with "multiple entries"
despite having dedicated exits. There were no additional issues found by
this, but it seems a useful corner case to cover with testing.
One thing that working on all of this code has made painfully clear for
me as well is how amazingly inefficient our PHI node representation is
(in terms of the in-memory data structures and the APIs used to update
them). This code has truly marvelous complexity bounds because every
time we remove an entry from a PHI node we do a linear scan to find it
and then a linear update to the data structure to remove it. We could in
theory batch all of the PHI node updates into a single linear walk of
the operands making this much more efficient, but the APIs fight hard
against this and the fact that we have to handle duplicates in the
peculiar manner we do (removing all but one in some cases) makes even
implementing that very tedious and annoying. Anyways, none of this is
new here or specific to loop unswitching. All code in LLVM that updates
PHI node operands suffers from these problems.
llvm-svn: 336536
Supporting various addressing modes:
- adr z0.s, [z0.s, z0.s]
- adr z0.s, [z0.s, z0.s, lsl #<shift>]
- adr z0.d, [z0.d, z0.d]
- adr z0.d, [z0.d, z0.d, lsl #<shift>]
- adr z0.d, [z0.d, z0.d, uxtw #<shift>]
- adr z0.d, [z0.d, z0.d, sxtw #<shift>]
Reviewers: rengolin, fhahn, SjoerdMeijer, samparker, javed.absar
Reviewed By: SjoerdMeijer
Differential Revision: https://reviews.llvm.org/D48870
llvm-svn: 336533
This patch adds support for:
UZP1 - Concatenate even elements from two vectors
UZP2 - Concatenate odd elements from two vectors
TRN1 - Interleave even elements from two vectors
TRN2 - Interleave odd elements from two vectors
With variants for both data and predicate vectors, e.g.
  uzp1 z0.b, z1.b, z2.b
  trn2 p0.s, p1.s, p2.s
llvm-svn: 336531
Summary:
PGOMemOPSize only modifies the CFG in a couple of places; thus we can preserve the DominatorTree with little effort.
When optimizing SQLite with -O3, this patch reduces the number of nodes traversed by DFS by 3.8% and the number of DominatorTreeBase recalculations by 5.7%.
Reviewers: kuhar, davide, dmgreen
Reviewed By: dmgreen
Subscribers: mzolotukhin, vsk, llvm-commits
Differential Revision: https://reviews.llvm.org/D48914
llvm-svn: 336522
llvm-svn: 336514
On pre-AVX512 targets (AVX512 can perform a quick extend/shift/truncate), extending to 2 x v8i16 for the PMULLW and then truncating is more performant than relying on the generic PBLENDVB vXi8 shift path, and uses a similar amount of mask constant pool data.
Differential Revision: https://reviews.llvm.org/D48963
llvm-svn: 336513
Reviewers: RKSimon, andreadb, courbet
Reviewed By: RKSimon
Subscribers: gbedwell, llvm-commits
Differential Revision: https://reviews.llvm.org/D48997
llvm-svn: 336510
In the 'detectCTLZIdiom' function, support has been added for loops that use the LSHR instruction instead of ASHR.
This supports creating ctlz from the following code.

  int lzcnt(int x) {
    int count = 0;
    while (x > 0) {
      count++;
      x = x >> 1;
    }
    return count;
  }
Patch by Olga Moldovanova
Differential Revision: https://reviews.llvm.org/D48354
llvm-svn: 336509
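For comparison, a variant of the same loop on an unsigned value; there the shift is a logical right shift (LSHR in the IR), which is the case this change adds support for (a sketch, not taken from the patch):

  int lzcnt_unsigned(unsigned x) {
    int count = 0;
    while (x > 0) {   /* unsigned x, so x >> 1 below is a logical shift (LSHR) */
      count++;
      x = x >> 1;
    }
    return count;
  }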
This allows us to handle masking in a very similar way to the default rounding version that uses llvm.fma.
I had to add new rounding mode CodeGenOnly instructions to support isel when we can't find a movss to grab the upper bits from to use the b_Int instruction.
Fast-isel tests have been updated to match new clang codegen.
We are currently having trouble folding fneg into the new intrinsic. I'm going to correct that in a follow up patch to keep the size of this one down.
A future patch will also remove the old intrinsics.
llvm-svn: 336506
tests to match clang test cases.
llvm-svn: 336505
llvm-svn: 336496
As discussed on PR37989, this patch adds EXTRACT_SUBVECTOR handling to TargetLowering::SimplifyDemandedVectorElts and calls it from DAGCombiner::visitEXTRACT_SUBVECTOR.
Differential Revision: https://reviews.llvm.org/D48825
llvm-svn: 336490
We penalize general SDIV/UDIV costs but don't do the same for SREM/UREM.
This patch makes general vector SREM/UREM 20x as costly as scalar, the same approach as we take for SDIV/UDIV. The patch also extends the existing SDIV/UDIV constant costs for SREM/UREM - at the moment this means the additional cost of a MUL+SUB (see D48975).
Differential Revision: https://reviews.llvm.org/D48980
llvm-svn: 336486
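As an illustration, the kind of scalar loop whose vectorization decision these remainder costs influence (a sketch with arbitrary names and constants, not taken from the patch):

  void rem_by_const(int *a, int n) {
    /* a loop of srem-by-constant; whether vectorizing it is profitable
       depends on the vector SREM cost this patch adjusts */
    for (int i = 0; i < n; ++i)
      a[i] = a[i] % 7;
  }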
Differential Revision: https://reviews.llvm.org/D48934
llvm-svn: 336484
This should bring the bots back to green state.
llvm-svn: 336482
after trivial unswitching.
This PR illustrates that a fundamental analysis update was not performed
with the new loop unswitch. This update is also somewhat fundamental to
the core idea of the new loop unswitch -- we actually *update* the CFG
based on the unswitching. In order to do that, we need to update the
loop nest in addition to the domtree.
For some reason, when writing trivial unswitching, I thought that the
loop nest structure cannot be changed by the transformation. But the PR
helps illustrate that it clearly can. I've expanded this to a number of
different test cases that try to cover the different cases of this. When
we unswitch, we move an exit edge of a loop out of the loop. If this
exit edge changes which loop reached by an exit is the innermost loop,
it changes the parent of the loop. Essentially, this transformation may
hoist the inner loop up the nest. I've added the simple logic to handle
this reliably in the trivial unswitching case. This just requires
updating LoopInfo and rebuilding LCSSA on the impacted loops. In the
trivial case, we don't even need to handle dedicated exits because we're
only hoisting the one loop and we just split its preheader.
I've also ported all of these tests to non-trivial unswitching and
verified that the logic already there correctly handles the loop nest
updates necessary.
Differential Revision: https://reviews.llvm.org/D48851
llvm-svn: 336477
This reverts commit r336140. Our tests show that an LSR assert fails with it.
llvm-svn: 336473
This diff adds support for handling static libraries
to llvm-objcopy and llvm-strip.
Test plan: make check-all
Differential revision: https://reviews.llvm.org/D48413
llvm-svn: 336455
llvm-svn: 336454
llvm-svn: 336453
Suppress the diagnostic for mis-sized dbg.values when a value operand is
narrower than the unsigned variable it describes. Assume that a debugger
would implicitly zero-extend these values.
llvm-svn: 336452
The replaceAllDbgUsesWith utility helps passes preserve debug info when
replacing one value with another.
This improves upon the existing insertReplacementDbgValues API by:
- Updating debug intrinsics in-place, while preventing use-before-def of
the replacement value.
- Falling back to salvageDebugInfo when a replacement can't be made.
- Moving the responsibility for rewriting llvm.dbg.* DIExpressions into
common utility code.
Along with the API change, this teaches replaceAllDbgUsesWith how to
create DIExpressions for three basic integer and pointer conversions:
- The no-op conversion. Applies when the values have the same width, or
have bit-for-bit compatible pointer representations.
- Truncation. Applies when the new value is wider than the old one.
- Zero/sign extension. Applies when the new value is narrower than the
old one.
Testing:
- check-llvm, check-clang, a stage2 `-g -O3` build of clang,
regression/unit testing.
- This resolves a number of mis-sized dbg.value diagnostics from
Debugify.
Differential Revision: https://reviews.llvm.org/D48676
llvm-svn: 336451
As discussed in D48987 and D48893, there are many different
ways to go wrong depending on the binop (and as shown here
we already do go wrong in some cases).
llvm-svn: 336450
Added statistics for the number of SMLAD instructions created, and
also renamed the pass to -arm-parallel-dsp.
Differential Revision: https://reviews.llvm.org/D48971
llvm-svn: 336441
http://green.lab.llvm.org/green/job/clang-stage1-cmake-RA-incremental/50537/testReport/junit/LLVM/CodeGen_AArch64/FoldRedundantShiftedMasking_ll/
This removes the comments on the function label that were causing this error.
llvm-svn: 336440
This adds:
- outer shareable TLB Maintenance instructions, and
- TLB range maintenance instructions.
llvm-svn: 336434
Now with the asm operand definition included.
llvm-svn: 336432
D48278
Allow reducing redundant shift masks.
For example:
  x1 = x & 0xAB00
  x2 = (x >> 8) & 0xAB
can be reduced to:
  x1 = x & 0xAB00
  x2 = x1 >> 8
It only allows folding when the masks and shift values are constants.
llvm-svn: 336426
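Written out in C, the equivalence the fold relies on (illustrative only; the mask and shift values are arbitrary constants, not from the patch):

  unsigned before(unsigned x, unsigned *lo) {
    unsigned x1 = x & 0xAB00u;
    *lo = (x >> 8) & 0xABu;   /* mask applied again after the shift */
    return x1;
  }

  unsigned after(unsigned x, unsigned *lo) {
    unsigned x1 = x & 0xAB00u;
    *lo = x1 >> 8;            /* (x & 0xAB00) >> 8 == (x >> 8) & 0xAB */
    return x1;
  }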
It's causing build errors.
llvm-svn: 336422