setcc), undef,))), C) into (bitcast (vXi1 (concat_vectors (vYi1 setcc), zero,)))
The legalization of v2i1->i2 or v4i1->i4 bitcasts followed by a setcc can create an and after the bitcast. If we're lucky enough that the input to the bitcast is a concat_vectors where the first operand is a setcc that can natively zero all the upper bits of a k-register, then we should replace the other operands of the concat_vectors with zero in order to remove the AND.
With the AND removed we might be able to use a kortest on the result.
Differential Revision: https://reviews.llvm.org/D69205
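A bit-level sketch of the difference, with the low bits of a uint8_t standing in for the i2/i4 bitcast result (function names here are illustrative, not from the patch):

    #include <cstdint>

    // concat(setcc, undef): the upper bits are garbage after the bitcast,
    // so legalization must AND them away before comparing against C.
    uint8_t bitcastUndefUpper(uint8_t setcc2, uint8_t junk) {
      return (uint8_t)((setcc2 | (junk << 2)) & 0x3); // AND strips the junk
    }

    // concat(setcc, zero): the upper bits are already zero, the AND folds
    // away, and the mask can feed KORTEST directly.
    uint8_t bitcastZeroUpper(uint8_t setcc2) {
      return setcc2; // assumes setcc2 only sets bits 0-1, like a v2i1 setcc
    }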
Detect scalar ISD::ZERO_EXTEND generated by memcmp lowering and convert
it to ISD::INSERT_SUBVECTOR.
https://reviews.llvm.org/D69464
PTEST and especially the MOVMSK instructions are slow on Knights Landing
or later. As a bonus, this patch increases instruction parallelism by
emitting:
KORTEST(PCMPNEQ(a, b), PCMPNEQ(c, d)) == 0
Instead of:
KORTEST(AND(PCMPEQ(a, b), PCMPEQ(c, d))) == ~0
https://reviews.llvm.org/D69157
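The rewrite is De Morgan's law on the compare masks; a scalar sketch with 16-bit integers standing in for k-registers (function names are mine):

    #include <cstdint>

    // New form: OR the "not equal" masks and test for zero; the two
    // PCMPNEQs are independent inputs to a single KORTEST.
    bool allEqualNew(uint16_t neq_ab, uint16_t neq_cd) {
      return (uint16_t)(neq_ab | neq_cd) == 0;
    }

    // Old form: AND the "equal" masks and test for all-ones; the extra
    // AND sits between the compares and the final test.
    bool allEqualOld(uint16_t eq_ab, uint16_t eq_cd) {
      return (uint16_t)(eq_ab & eq_cd) == 0xFFFF;
    }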
Without this, we can create a PSADBW node that isn't legal.
reduction combine
llvm-svn: 375463
This doesn't need to be just for bitops, but the ops do need to be fully associative.
llvm-svn: 375445
emitting X86ISD::FHADD in LowerUINT_TO_FP_i64.
This was a regression from r375341.
Fixes PR43729.
llvm-svn: 375381
'Zeroable' known undef/zero bits. NFCI.
Renamed 'resolveTargetShuffleAndZeroables' to 'resolveTargetShuffleFromZeroables' to match.
llvm-svn: 375348
tryToWidenViaDuplication lowers using the shuffle_v8i16(unpack_v16i8(shuffle_v8i16(x),shuffle_v8i16(x))) pattern, but the unpack only needs the even/odd v16i8 args if the original v16i8 shuffle mask references the even/odd elements, which isn't true for many extension-style shuffles.
llvm-svn: 375342
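A scalar model of the unpack step (a sketch, not the actual lowering code) shows why one side can be dead: even output bytes come only from the first source and odd bytes only from the second, so a final shuffle that never reads the odd lanes never needs the second input.

    #include <cstdint>

    // punpcklbw-style interleave: out[2*i] = a[i], out[2*i+1] = b[i].
    void unpackLoBytes(const uint8_t a[8], const uint8_t b[8], uint8_t out[16]) {
      for (int i = 0; i < 8; ++i) {
        out[2 * i] = a[i];     // even lanes: first source only
        out[2 * i + 1] = b[i]; // odd lanes: second source only
      }
    }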
We were always generating a single source HADDPD, but really we should only do this if shouldUseHorizontalOp says it's a good idea.
Differential Revision: https://reviews.llvm.org/D69175
llvm-svn: 375341
NFCI.
llvm-svn: 375253
https://reviews.llvm.org/D69111
llvm-svn: 375197
Add generic DAG combine for extending masked loads.
Allow us to generate sext/zext masked loads which can access v4i8,
v8i8 and v4i16 memory to produce v4i32, v8i16 and v4i32 respectively.
Differential Revision: https://reviews.llvm.org/D68337
llvm-svn: 375085
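A scalar model of what the combined node computes for the zero-extending v4i8 -> v4i32 case (a sketch; names are mine):

    #include <cstdint>

    // Zero-extending masked load: active lanes load a byte and widen it,
    // inactive lanes take the pass-through value.
    void maskedLoadZext4(const uint8_t *p, const bool mask[4],
                         const uint32_t passthru[4], uint32_t out[4]) {
      for (int i = 0; i < 4; ++i)
        out[i] = mask[i] ? (uint32_t)p[i] : passthru[i];
    }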
from the resolveTargetShuffleAndZeroables call.
Exposes an issue in getFauxShuffleMask where the OR(SHUFFLE,SHUFFLE) decode should always resolve zero/undef elements.
Part of the fix for PR43024, where ideally we shouldn't call resolveTargetShuffleAndZeroables for Depth == 0.
llvm-svn: 374928
llvm-svn: 374922
NFCI.
llvm-svn: 374878
VBROADCAST when trying to share broadcasts.
The only things a VBROADCAST_LOAD uses are an address and a
chain node; it has no vector inputs. So if it's a user of the
source of another broadcast, that can only mean one of two
things: the other broadcast is broadcasting the address of the
broadcast_load, or the source is a load and the use we're
seeing is the chain result from that load. Neither of these
cases makes sense to combine here.
This issue was reported post-commit r373871. Test case has not
been reduced yet.
llvm-svn: 374862
from the subtract directly during isel.
This prevents isel from emitting a TEST instruction that
optimizeCompareInstr will need to remove later.
In some of the modified tests, the SUB gets duplicated due to
the flags being needed in two places and being clobbered in
between. optimizeCompareInstr was able to optimize away the TEST
that was using the result of one of them, but optimizeCompareInstr
doesn't know to turn SUB into CMP after removing the TEST. It
only knows how to turn SUB into CMP if the result was already
dead.
With this change the TEST never exists, so optimizeCompareInstr
doesn't have to remove it. Then it can just turn the SUB into
CMP immediately.
Fixes PR43649.
llvm-svn: 374755
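The shape of source code this helps, sketched in C++ (illustrative only): both the difference and the flags are wanted, so on x86 one SUB can serve as the CMP.

    // With this change isel reuses the EFLAGS set by the SUB itself for the
    // "== 0" test instead of emitting SUB followed by a separate TEST.
    unsigned subAndTest(unsigned a, unsigned b, unsigned *diff) {
      unsigned d = a - b; // SUB: the result is needed below, so the node
                          // can't simply become a CMP
      *diff = d;
      return d == 0;      // previously lowered with an extra TEST d, d
    }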
well as KnownZero.
We were already controlling whether the KnownZero elements were being written to the target mask; this extends it to the KnownUndef elements as well, so we can prevent the target shuffle mask from being manipulated at all.
llvm-svn: 374732
This enables use of the saturating truncate instructions when the
result type is less than 128 bits. It also enables the use of
saturating truncate instructions on KNL when the input is less
than 512 bits. We can do this by widening the input and then
extracting the result.
llvm-svn: 374731
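The widening trick, modeled per element (a sketch assuming an unsigned i32 -> i8 saturating truncate; the helper name is mine):

    #include <algorithm>
    #include <cstdint>

    // KNL only has the 512-bit form, so widen v8i32 to v16i32 (the extra
    // lanes are don't-care), saturate-truncate all lanes, extract the low 8.
    void truncSatU8Widened(const uint32_t in[8], uint8_t out[8]) {
      uint32_t wide[16] = {}; // any value works for the upper lanes
      std::copy(in, in + 8, wide);
      uint8_t wideOut[16];
      for (int i = 0; i < 16; ++i)
        wideOut[i] = (uint8_t)std::min<uint32_t>(wide[i], 255);
      std::copy(wideOut, wideOut + 8, out);
    }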
getTargetShuffleInputs with KnownUndef/Zero results.
llvm-svn: 374725
Adjust SimplifyDemandedVectorEltsForTargetNode to use the known elts masks instead of recomputing them locally.
llvm-svn: 374724
in combineSelect.
This seems to improve std::midpoint code where we have a min and
a max with the same condition. If we split the setcc we can end
up with two compares if one of the operands is a constant, since
we aggressively canonicalize compares with constants.
For non-constants it can interfere with our ability to share
control flow if we need to expand cmovs into control flow.
I'm also not sure I understand this min/max canonicalization code.
The motivating case talks about comparing with 0. But we don't
check for 0 explicitly.
Removes one instruction from the codegen for PR43658.
llvm-svn: 374706
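The motivating pattern, roughly (modeled on std::midpoint; this sketch is mine, not from the patch): a min and a max guarded by the same comparison, which can now share a single compare feeding both CMOVs.

    #include <algorithm>

    unsigned midpoint(unsigned a, unsigned b) {
      unsigned lo = std::min(a, b); // same condition...
      unsigned hi = std::max(a, b); // ...selects the other way here
      return lo + (hi - lo) / 2;
    }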
instructions with avx512.
llvm-svn: 374705
llvm-svn: 374674
llvm-svn: 374669
llvm-svn: 374668
llvm-svn: 374667
This should go away once D66004 has landed and we can simplify shuffle chains using demanded elts.
llvm-svn: 374658
is the largest legal vector and the result type is at least 256 bits.
Since the input type is larger than 256 bits, we'll need to do
some concatenating to reassemble the results. The pack
instructions' ability to concatenate while packing makes this a
shorter/faster sequence.
llvm-svn: 374643
We already did this for VTRUNCUS with a specific combination of
types. This extends it to VTRUNCS and handles any types where
a truncating store is legal.
llvm-svn: 374615
llvm-svn: 374579
Now that it's used by isNegatibleForFree, we should try to avoid costly deep recursion.
llvm-svn: 374534
saturating truncating store.
llvm-svn: 374509
instructions in more situations.
If we don't have VLX we won't end up selecting a saturating
truncate for 256-bit or smaller vectors, so we should just use
the pack lowering.
llvm-svn: 374487
When handling the packus pattern for i32->i8 we do a two-step
process using a packss to i16 followed by a packus to i8. If the
final i8 type has fewer than 64 bits, the packus step will
return SDValue(), but the i32->i16 step might have succeeded.
This leaves the nodes from the middle step dangling.
Guard against this by pre-checking that the number of elements is
at least 8 before doing the middle step.
With that check in place, the only other case where the middle
step itself can fail is when SSE2 is disabled. So add an early
SSE2 check, then just assert that neither the middle nor the
final step ever fails.
llvm-svn: 374460
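Per-lane view of the two steps (a sketch; helper names are mine): PACKSS narrows i32 to i16 with signed saturation, then PACKUS narrows i16 to i8 with unsigned saturation. The nodes built for the first step are what would otherwise dangle if the second step bails out.

    #include <algorithm>
    #include <cstdint>

    // Step 1: PACKSSDW lane semantics (i32 -> i16, signed saturation).
    int16_t packssLane(int32_t x) {
      return (int16_t)std::clamp(x, (int32_t)INT16_MIN, (int32_t)INT16_MAX);
    }

    // Step 2: PACKUSWB lane semantics (i16 -> i8, unsigned saturation).
    uint8_t packusLane(int16_t x) {
      return (uint8_t)std::clamp<int32_t>(x, 0, 255);
    }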
inputs to be between 0 and 255 when zmm registers are disabled on SKX.
If we've disabled zmm registers, the v16i32 will need to be split. This split will propagate through the min/max and the truncate. This creates two sequences that need to be concatenated back to v16i8. We can instead use packusdw to do part of the clamping, truncating, and concatenating all at once. Then we can use a vpmovuswb to finish off the clamp.
Differential Revision: https://reviews.llvm.org/D68763
llvm-svn: 374431
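The per-lane equivalence being exploited (a sketch; function names are mine): the explicit clamp of an i32 to [0, 255] can be split into packusdw's i32 -> u16 unsigned saturation followed by vpmovuswb's u16 -> u8 unsigned saturation.

    #include <algorithm>
    #include <cstdint>

    // What the split min/max sequence computes per lane.
    uint8_t clampDirect(int32_t x) {
      return (uint8_t)std::clamp(x, 0, 255);
    }

    // Same result via the two saturating steps the patch emits.
    uint8_t clampViaPack(int32_t x) {
      uint16_t w = (uint16_t)std::clamp(x, 0, 65535); // packusdw lane
      return (uint8_t)std::min<uint16_t>(w, 255);     // vpmovuswb lane
    }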
Split off from D67557.
llvm-svn: 374356
Split off from D67557; fixes the compile-time regression mentioned in rL372756.
llvm-svn: 374351
placeholders. NFCI.
Continuing to undo the rL372756 reversion.
Differential Revision: https://reviews.llvm.org/D67557
llvm-svn: 374345
As background, starting in D66309, I'm working on supporting unordered atomics analogous to volatile flags on normal LoadSDNode/StoreSDNodes for X86.
As part of that, I spent some time going through usages of LoadSDNode and StoreSDNode looking for cases where we might have missed a volatility check or need an atomic check. I couldn't find any cases that clearly miscompile - i.e. no test cases - but a couple of pieces of code look suspicious, though I can't figure out how to exercise them.
This patch adds defensive checks and asserts in the places my manual audit found. If anyone has any ideas on how to either a) disprove any of the checks, or b) hit the bug they might be fixing, I welcome suggestions.
Differential Revision: https://reviews.llvm.org/D68419
llvm-svn: 374261
larger than i32.
Gather instructions can use i32 or i64 elements for indices. If
the index is zero extended from a type smaller than i32 to i64, we
can shrink the extend to just extend to i32.
llvm-svn: 373982
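Why the shrink is sound (a sketch): zero extension factors through i32, so an index that started narrower than i32 produces the same value either way.

    #include <cstdint>

    // zext i16 -> i64 equals zext (zext i16 -> i32) -> i64, so the gather
    // can take the cheaper i32 index form with no change in behavior.
    uint64_t indexViaI64(uint16_t i) { return (uint64_t)i; }
    uint64_t indexViaI32(uint16_t i) { return (uint64_t)(uint32_t)i; }
    // indexViaI64(i) == indexViaI32(i) for every i.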
When the target option GuaranteedTailCallOpt is specified, calls with
the fastcc calling convention will be transformed into tail calls if
they are in tail position. This diff adds a new calling convention,
tailcc, currently supported only on X86, which behaves the same way as
fastcc, except that the GuaranteedTailCallOpt flag does not need to
be enabled in order to enable tail call optimization.
Patch by Dwight Guth <dwight.guth@runtimeverification.com>!
Reviewed By: lebedev.ri, paquette, rnk
Differential Revision: https://reviews.llvm.org/D67855
llvm-svn: 373976
NFCI.
Stop all the callers from having to check the value type before calling getTargetShuffleInputs.
llvm-svn: 373915
load (PR43217)
If a fp scalar is loaded and then used as both a scalar and a vector broadcast, perform the load as a broadcast and then extract the scalar for 'free' from the 0th element.
This involved switching the order of the X86ISD::BROADCAST combines so we only convert to X86ISD::BROADCAST_LOAD once all other canonicalizations have been attempted.
Adds a DAGCombinerInfo::recursivelyDeleteUnusedNodes wrapper.
Fixes PR43217
Differential Revision: https://reviews.llvm.org/D68544
llvm-svn: 373871
llvm-svn: 373870
directly.
Move the resolveTargetShuffleInputsAndMask call to after the shuffle mask combine, before the undef/zero constant fold, instead.
llvm-svn: 373868
Replaces setTargetShuffleZeroElements with getTargetShuffleAndZeroables which reports the Zeroable elements but doesn't merge them into the decoded target shuffle mask (the merging has been moved up into getTargetShuffleInputs until we can get rid of it entirely).
This is part of the work to fix PR43024 and allow us to use SimplifyDemandedElts to simplify shuffle chains - we need to get to a point where the target shuffle mask isn't adjusted by its source inputs but instead we cache them in a parallel Zeroable mask.
llvm-svn: 373867