we don't create a BUILD_VECTOR with an illegal type after type legalization.
llvm-svn: 312621
performing a zext of a register.
On the PR there is discussion of how to more effectively handle this,
but this patch prevents us from miscompiling code.
Differential Revision: https://reviews.llvm.org/D37504
llvm-svn: 312620
This matches what we already do for AVX512. The peephole pass makes up for this in most if not all cases. But this makes isel behavior for these consistent with every other instruction.
llvm-svn: 312613
xscvdpspn was not introduced until the P8, so don't use it on the P7. Fixes a
regression introduced in r288152.
llvm-svn: 312612
Summary:
Most instructions in AVX work “in-lane”, that is, each source element is applied only to other
elements of the same lane, so a cross-lane permutation is costly and needs more than one instruction.
AVX2 includes instructions to perform any-to-any permutation of words over a 256-bit register
and vectorized table lookup.
This should also fix PR34369.
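As a hedged illustration (not part of this change), the kind of cross-lane permutation being discussed, written with AVX2 compiler intrinsics:

#include <immintrin.h>

// Illustration only: _mm256_permutevar8x32_ps (VPERMPS, AVX2) moves any of the
// eight floats to any position, crossing the 128-bit lane boundary in a single
// instruction. Without AVX2, this reversal generally needs more than one shuffle.
static __m256 reverse8(__m256 v) {
  const __m256i idx = _mm256_setr_epi32(7, 6, 5, 4, 3, 2, 1, 0);
  return _mm256_permutevar8x32_ps(v, idx); // any-to-any, cross-lane permute
}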
Differential Revision: https://reviews.llvm.org/D37388
llvm-svn: 312608
Summary:
Previously this would emit a warning whenever libxml2 is not installed. Now
the warning is only given if merging the manifest fails.
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D37240
llvm-svn: 312604
When the packet size equals the packet alignment and is a power of 2, transform
__read_pipe* and __write_pipe* into a specialized library function.
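A minimal sketch of the guarding condition described above (the helper name is ours, not from the patch):

#include "llvm/Support/MathExtras.h"

// Sketch only: the specialized __read_pipe_*/__write_pipe_* library calls are
// used only when the packet size equals the packet alignment and that value is
// a power of two.
static bool canUseSpecializedPipeCall(unsigned PacketSize, unsigned PacketAlign) {
  return PacketSize == PacketAlign && llvm::isPowerOf2_32(PacketSize);
}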
Differential Revision: https://reviews.llvm.org/D36831
llvm-svn: 312598
null constants
This is a preliminary step towards solving the remaining part of PR27145 - IR for isfinite():
https://bugs.llvm.org/show_bug.cgi?id=27145
In order to solve that one more generally, we need to add matching for and/or of fcmp ord/uno
with a constant operand.
But while looking at those patterns, I realized we were missing a canonicalization for nonzero
constants. Rather than limiting to just folds for constants, we're adding a general value
tracking method for this based on an existing DAG helper.
By transforming everything to 0.0, we can simplify the existing code in foldLogicOfFCmps()
and pick up missing vector folds.
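To make the canonicalization concrete, a hedged source-level illustration (not the InstCombine code itself): an ord/uno comparison against any constant that is known not to be NaN asks the same question as a comparison against 0.0.

#include <cmath>

// "fcmp ord %x, 42.0" asks "is %x not NaN and is 42.0 not NaN"; since 42.0 is
// known not to be NaN, this is the same as "fcmp ord %x, 0.0". The two
// functions below return the same value for every input x.
static bool ordWithNonNanConstant(float x) {
  return !std::isnan(x) && !std::isnan(42.0f); // fcmp ord x, 42.0
}

static bool ordCanonical(float x) {
  return !std::isnan(x);                       // fcmp ord x, 0.0
}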
Differential Revision: https://reviews.llvm.org/D37427
llvm-svn: 312591
Missing these could potentially screw up post-ra scheduling.
Issue found by inspection, so I don't have a real testcase. Included
test just verifies the expected operands after expansion.
Differential Revision: https://reviews.llvm.org/D35156
llvm-svn: 312589
This allows -run-pass etc. to refer to it.
(Split off from D35156.)
llvm-svn: 312587
S_UDT records are basically the "bridge" between the debugger's
expression evaluator and the type information. If you type
(Foo*)nullptr into the watch window, the debugger looks for an
S_UDT record named Foo. If it can find one, it displays your type.
Otherwise you get an error.
We have always understood this to mean that if you have code like
this:
struct A {
  int X;
};
struct B {
  typedef A AT;
  AT Member;
};
that you will get 3 S_UDT records: "A", "B", and "B::AT", because
if you were to type (B::AT*)nullptr into the debugger, it would
need to find an S_UDT record named "B::AT".
But "B::AT" is actually the S_UDT record that would be generated
if B were a namespace, not a struct. So the debugger needs to be
able to distinguish this case. So what it does is:
1. Look for an S_UDT named "B::AT". If it finds one, it knows
that AT is in a namespace.
2. If it doesn't find one, split at the scope resolution operator,
and look for an S_UDT named B. If it finds one, look up the type
for B, and then look for AT as one of its members.
With this algorithm, S_UDT records for nested typedefs are not just
unnecessary, but actually wrong!
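A rough, self-contained sketch of that two-step lookup (the table and names here are invented for illustration; the real PDB machinery is far richer):

#include <map>
#include <optional>
#include <string>

// Step 1: a direct S_UDT match (e.g. one literally named "B::AT") means the
// prefix is a namespace. Step 2: otherwise split at the last scope operator,
// find the outer type's S_UDT, and search its members for the trailing name.
using UdtTable = std::map<std::string, int>; // S_UDT name -> type index

static std::optional<int> resolveTypeName(const std::string &Name,
                                          const UdtTable &Udts) {
  if (auto It = Udts.find(Name); It != Udts.end())
    return It->second;                          // step 1
  std::size_t Pos = Name.rfind("::");
  if (Pos == std::string::npos)
    return std::nullopt;
  if (auto It = Udts.find(Name.substr(0, Pos)); It != Udts.end())
    return It->second; // step 2: a real debugger would now look up
                       // Name.substr(Pos + 2) among this type's members
  return std::nullopt;
}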
The results of implementing this in clang are dramatic. It cuts
our /DEBUG:FASTLINK PDB sizes by more than 50%, and we go from
being ~20% larger than MSVC PDBs on average, to ~40% smaller.
It also slightly speeds up link time. We get about 10% faster
links than without this patch.
Differential Revision: https://reviews.llvm.org/D37410
llvm-svn: 312583
This reverts commit r312526.
Revert "Fix test/DebugInfo/dwarfdump-decompression-invalid-size.test"
This reverts commit r312527.
It causes an ASan failure:
http://lab.llvm.org:8080/green/job/clang-stage2-cmake-RgSan_check/4150
llvm-svn: 312582
llvm-svn: 312575
Summary:
This intrinsic represents a label with a list of associated metadata
strings. It is modelled as reading and writing inaccessible memory so
that it won't be removed as dead code. I think the intention is that the
annotation strings should appear at most once in the debug info, so I
marked it noduplicate. We are allowed to inline code with annotations as
long as we strip the annotation, but that can be done later.
Reviewers: majnemer
Subscribers: eraman, llvm-commits, hiraditya
Differential Revision: https://reviews.llvm.org/D36904
llvm-svn: 312569
handles out of range truncations of the start and accum values
Summary:
When constructing the predicate P1 in ScalarEvolution::createAddRecFromPHIWithCastsImpl() it is possible
for the PHISCEV from which the predicate is constructed to be a SCEVConstant instead of a SCEVAddRec. If
this happens, then the cast<SCEVAddRec>(PHISCEV) in the code will assert.
Such a PHISCEV is possible if either the start value or the accumulator value is a constant value
that is not equal to its truncated value, and if the truncated value is zero.
This patch adds tests that demonstrate the cast<> assertion, and fixes this problem by checking
whether the PHISCEV is a constant before constructing the P1 predicate; if it is, then P1 is
equivalent to one of P2 or P3. Additionally, if we know that the start value or accumulator
value are constants then we check whether the P2 and/or P3 predicates are known false at compile
time; if either is, then we bail out of constructing the AddRec.
Reviewers: sanjoy, mkazantsev, silviu.baranga
Reviewed By: mkazantsev
Subscribers: mkazantsev, llvm-commits
Differential Revision: https://reviews.llvm.org/D37265
llvm-svn: 312568
It appears that a potential race between the cache client and the cache
pruner that I thought was unlikely actually happened in practice [1].
Try to avoid the race condition by opening the temporary file before
renaming it. Do this only on non-Windows platforms because we cannot
rename open files on Windows using the sys::fs::rename function.
[1] https://luci-logdog.appspot.com/v/?s=chromium%2Fbb%2Fchromium.memory%2FLinux_CFI%2F1610%2F%2B%2Frecipes%2Fsteps%2Fcompile%2F0%2Fstdout
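A minimal sketch of that ordering (names assumed, error handling trimmed; this is not the actual caching code):

#include "llvm/ADT/Twine.h"
#include "llvm/Support/FileSystem.h"
#include <system_error>

// Open the temporary file first, then rename it into place. Keeping the
// descriptor open across the rename means the client can still read the data
// even if the pruner removes the final path in between. This ordering is only
// used off Windows, where sys::fs::rename cannot rename an open file.
static std::error_code commitCacheEntry(const llvm::Twine &TempPath,
                                        const llvm::Twine &FinalPath,
                                        int &ResultFD) {
  if (std::error_code EC = llvm::sys::fs::openFileForRead(TempPath, ResultFD))
    return EC;
  return llvm::sys::fs::rename(TempPath, FinalPath);
}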
Differential Revision: https://reviews.llvm.org/D37410
llvm-svn: 312567
FR32X)))) patterns
We had already disabled the pattern for SSE4.1 and SSE4.2. But it got re-enabled for AVX and AVX512.
With SSE41 we rely on a separate (v4f32 (X86vzmovl VR128)) pattern to select blendps with an xorps to create the zeroes, and a separate (v4f32 (scalar_to_vector FR32X)) pattern to select a COPY_TO_REG_CLASS to move FR32 to VR128.
The same thing can happen for AVX with vblendps and those separate patterns already exist.
For AVX512, (v4f32 (X86vzmovl VR128)) will select a VMOVSS instruction instead of VBLENDPS due to there not being an EVEX VBLENDPS. This is what we were getting out of the larger pattern anyway, so the larger pattern is unneeded for AVX512 too.
For SSE1-SSSE3 we can rely on (v4f32 (X86vzmovl VR128)) selecting a MOVSS, similar to AVX512. Again, this is what the larger pattern did too.
So the only real change here is that AVX1/2 now properly outputs a VBLENDPS during isel instead of a VMOVSS to match SSE41. Most tests didn't notice because the two-address instruction pass knows how to turn VMOVSS into VBLENDPS to get an independent destination register.
llvm-svn: 312564
- Refactor SIMemOpInfo's constructors
- Allow construction of NotAtomic SIMemOpInfo
Differential Revision: https://reviews.llvm.org/D37396
llvm-svn: 312563
If the only call in a function is a tail call, the
function isn't considered to have a call since it's a
type of return.
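A hedged example of the situation described (whether the call is actually lowered as a tail call depends on the target and calling convention):

// The only call in `caller` can be lowered as a tail call, i.e. a jump that
// acts as the return, so for the purpose described above `caller` is treated
// as not containing a call.
int callee(int);
int caller(int x) {
  return callee(x);
}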
llvm-svn: 312561
more general.
Commit on behalf of Graham Yiu (gyiu@ca.ibm.com)
llvm-svn: 312547
(v4f32 (scalar_to_vector FR32X:)), (iPTR 0)))) and the same for v4f64.
We don't have this same pattern for AVX2 so I don't believe we should have it for AVX512. We also didn't have it for v16f32.
llvm-svn: 312543
- Make SIMemOpInfo a class
- Add accessor methods to SIMemOpInfo
- Move get*Info methods to SIMemOpInfo
Differential Revision: https://reviews.llvm.org/D37395
llvm-svn: 312541
- Rename MemOpInfo -> SIMemOpInfo
- Move SIMemOpInfo class out of SIMemoryLegalizer class
Differential Revision: https://reviews.llvm.org/D37394
llvm-svn: 312540
As suggested by @niravd : https://bugs.llvm.org/show_bug.cgi?id=34421#c2
Differential Revision: https://reviews.llvm.org/D37464
llvm-svn: 312534
llvm-svn: 312531
This patch adds graceful failure when running out of memory while
allocating a buffer for decompression.
This provides a work-around for:
https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=3224
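A minimal sketch of the idea (not the actual decompression code; names are ours):

#include "llvm/Support/Error.h"
#include <cstdint>
#include <memory>
#include <new>
#include <system_error>

// Treat a failed allocation of the output buffer as an ordinary Error instead
// of letting std::bad_alloc terminate the tool; a malformed input can claim an
// arbitrarily large decompressed size.
static llvm::Expected<std::unique_ptr<uint8_t[]>>
allocateDecompressed(uint64_t DecompressedSize) {
  std::unique_ptr<uint8_t[]> Buf(new (std::nothrow) uint8_t[DecompressedSize]);
  if (!Buf)
    return llvm::errorCodeToError(
        std::make_error_code(std::errc::not_enough_memory));
  return std::move(Buf);
}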
Differential revision: https://reviews.llvm.org/D37447
llvm-svn: 312526
Use the STI member of ARMInstructionSelector instead of
TII.getSubtarget() and also make use of STI's methods instead of
checking the object format manually.
llvm-svn: 312522
In RWPI code, globals that are not read-only are accessed relative to
the SB register (R9). This is achieved by explicitly generating an ADD
instruction between SB and an offset that we either load from a constant
pool or movw + movt into a register.
llvm-svn: 312521
had their patterns removed.
llvm-svn: 312520
enable reuse in a future patch.
llvm-svn: 312518
for sure we're going to use it and avoid an unnecessary call to m_APInt.
Instead of creating a Constant and then calling m_APInt with it (which will always return true), just create an APInt initially and use that for the checks in the isSelect01 function. If it turns out we do need the Constant, create it from the APInt.
This is a refactor for a future patch that will do some more checks of the constant values here.
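A hedged sketch of the shape of that refactor (the signature and checks here are ours, not the actual InstCombine code):

#include "llvm/ADT/APInt.h"

// Do the "is this the 0/1/-1 shape we can fold" style check directly on APInt
// values instead of wrapping them in Constants first; the Constant is only
// created later, from the APInt, if the fold actually fires.
static bool isSelect01(const llvm::APInt &C1, const llvm::APInt &C2) {
  if (!C1.isZero() && !C2.isZero())
    return false; // one side must be zero
  const llvm::APInt &NonZero = C1.isZero() ? C2 : C1;
  return NonZero.isOne() || NonZero.isAllOnes();
}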
llvm-svn: 312517
If multiple conditional branches are executed based on the same comparison, we can execute multiple conditional branches based on the result of one comparison on PPC. For example,
if (a == 0) { ... }
else if (a < 0) { ... }
can be executed by one compare and two conditional branches instead of two pairs of a compare and a conditional branch.
This patch identifies a code sequence of two pairs of a compare and a conditional branch and merges the compares if possible.
To maximize the opportunity, we canonicalize the code sequence before merging the compares.
For the above example, the input for this pass looks like:
cmplwi r3, 0
beq 0, .LBB0_3
cmpwi r3, -1
bgt 0, .LBB0_4
So, before merging two compares, we canonicalize it as
cmpwi r3, 0 ; cmplwi and cmpwi yield same result for beq
beq 0, .LBB0_3
cmpwi r3, 0 ; greater than -1 means greater than or equal to 0
bge 0, .LBB0_4
The generated code should be
cmpwi r3, 0
beq 0, .LBB0_3
bge 0, .LBB0_4
Differential Revision: https://reviews.llvm.org/D37211
llvm-svn: 312514
This patch introduces RemoteObjectClientLayer and RemoteObjectServerLayer,
which can be used to forward ORC object-layer operations from a JIT stack in
the client to a JIT stack (consisting only of object-layers) in the server.
This is a new way to support remote-JITing in LLVM. The previous approach
(supported by OrcRemoteTargetClient and OrcRemoteTargetServer) used a
remote-mapping memory manager that sat "beneath" the JIT stack and sent
fully-relocated binary blobs to the server. The main advantage of the new
approach is that relocatable objects can be cached on the server and re-used
(if the code that they represent hasn't changed), whereas fully-relocated blobs
can not (since the addresses they have been permanently bound to will change
from run to run).
llvm-svn: 312511
detect self-cycles of phi nodes. We also need to not ignore certain types of arguments when testing whether the phi has a backedge or was originally constant.
llvm-svn: 312510
aggregate value simplification
llvm-svn: 312509
llvm-svn: 312508
finding is done. Where we had it before, we would stop looking when we hit the original instruction, but skip it. Now we skip it and keep looking.
llvm-svn: 312507
forwarding""
This crashes on boringSSL on PPC (will send reduced testcase)
This reverts commit r312328.
llvm-svn: 312490
Avoid use of VPERMPS where we don't need it by instead using the variable mask version of VPERMILPS for unary shuffles.
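For illustration only (not from the patch): when the shuffle indices never cross a 128-bit lane, the AVX variable-mask VPERMILPS is enough and the cross-lane VPERMPS is unnecessary.

#include <immintrin.h>

// Reversing the elements within each 128-bit lane keeps every element in its
// own lane, so the in-lane _mm256_permutevar_ps (VPERMILPS with a variable
// mask, AVX) suffices; no VPERMPS is needed.
static __m256 reverseWithinLanes(__m256 v) {
  const __m256i idx = _mm256_setr_epi32(3, 2, 1, 0, 3, 2, 1, 0);
  return _mm256_permutevar_ps(v, idx); // per-lane selection only
}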
llvm-svn: 312486
range lists.
It solves an issue where the wrong section index was computed for ranges when
a base address is used.
Based on David Blaikie's patch D36097.
Differential revision: https://reviews.llvm.org/D37214
llvm-svn: 312477
llvm-svn: 312473
Summary:
Improve how MaxVF is computed while taking into account that MaxVF should not be larger than the loop's trip count.
Other than saving on compile time by pruning the possible MaxVF candidates, this patch fixes PR34438, which exposed the following flow:
1. Short trip count identified -> Don't bail out, set OptForSize:=True to avoid tail-loop and runtime checks.
2. Compute MaxVF returned 16 on a target supporting AVX512.
3. OptForSize -> choose VF:=MaxVF.
4. Bail out because TripCount = 8, VF = 16, and TripCount % VF != 0 means we need a tail loop.
With this patch, step 2 will choose MaxVF=8 based on TripCount.
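A paraphrased example of the kind of loop from PR34438 (not the exact test case):

// The trip count is a known 8, so even on an AVX512 target the vectorizer
// should now cap MaxVF at 8 rather than pick VF=16 and then bail out because
// a tail loop would be required under OptForSize.
void addEight(float *a, const float *b) {
  for (int i = 0; i < 8; ++i)
    a[i] += b[i];
}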
Reviewers: Ayal, dorit, mkuper, hfinkel
Reviewed By: hfinkel
Subscribers: hfinkel, llvm-commits
Differential Revision: https://reviews.llvm.org/D37425
llvm-svn: 312472
Debug information can be, and was, corrupted when the runtime
remainder loop was fully unrolled. This is because a !null node can
be created instead of a unique one describing the loop. In this case,
the original node gets incorrectly updated with the NewLoopID
metadata.
In the case when the remainder loop is going to be quickly fully
unrolled, there isn't the need to add loop metadata for it anyway.
Differential Revision: https://reviews.llvm.org/D37338
llvm-svn: 312471
This reorders some patterns to get tablegen to detect them as duplicates. Tablegen only detects duplicates when creating variants for commutable operations. It does not detect duplicates between the patterns as written in the td file. So we need to ensure all the FMA patterns in the td file are unique.
This also uses null_frag to remove some other unneeded patterns.
llvm-svn: 312470
patterns.
This uses the capability introduced in r312464 to make SDNode patterns commutable on the first two operands.
This allows us to remove some of the extra FMA patterns that have to put loads and mask operands in different places to cover all cases. This even includes patterns that were missing to support matching a load in the first operand with FMA4, and non-broadcast loads with masking for AVX512.
I believe this is causing us to generate some duplicate patterns because tablegen's isomorphism checks don't catch isomorphism between the patterns as written in the td file; it only detects isomorphism in the commuted variants it tries to create. The unmasked 231 and 132 memory forms are isomorphic as written in the td file, so we end up keeping both. I think we can pre-commute the 132 pattern to fix this.
We also need a follow-up patch to go back to the legacy FMA3 instructions and add patterns to the 231 and 132 forms, which we currently don't have.
llvm-svn: 312469
references in .text
Summary:
This is a re-roll of D36615, which uses PLT relocations in the back-end
for the call to __xray_CustomEvent() when building in -fPIC and
-fxray-instrument mode.
Reviewers: pcc, djasper, bkramer
Subscribers: sdardis, javed.absar, llvm-commits
Differential Revision: https://reviews.llvm.org/D37373
llvm-svn: 312466
together write the whole vector, but the starting vector isn't undef.
In this case we should replace the starting vector with undef.
llvm-svn: 312462
llvm-svn: 312461
X, Idx), Idx) into an insert of X into the larger zero vector.
llvm-svn: 312460
register that I missed in r312450.
llvm-svn: 312459