llvm-svn: 288045
Bit-shifts by a whole number of bytes can be represented as a shuffle mask suitable for combining.
Added a 'getFauxShuffleMask' function to allow us to create shuffle masks from other suitable operations.
llvm-svn: 288040
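
The byte-shift/shuffle-mask equivalence above can be sketched in a few lines. This is an illustrative model, not LLVM's code: `byte_shift_as_shuffle_mask` and `apply_shuffle` are hypothetical names, with -1 standing for "zero this byte" as in LLVM's shuffle masks.

```python
def byte_shift_as_shuffle_mask(num_bytes, shift_left, width=16):
    """Return a shuffle mask equivalent to a whole-byte vector shift.

    -1 means the result byte is zero (the bytes shifted in)."""
    mask = []
    for i in range(width):
        src = i - num_bytes if shift_left else i + num_bytes
        mask.append(src if 0 <= src < width else -1)
    return mask

def apply_shuffle(vec, mask):
    """Apply a shuffle mask to a byte vector; -1 lanes become zero."""
    return [0 if m == -1 else vec[m] for m in mask]

vec = list(range(1, 17))                      # bytes 1..16
mask = byte_shift_as_shuffle_mask(3, shift_left=True)
shifted = apply_shuffle(vec, mask)            # low 3 bytes zeroed, rest moved up
```

Once a PSLLDQ/PSRLDQ-style shift is expressed this way, combines that already understand shuffle masks can treat it like any other shuffle.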
commutable as operand 0 should pass its upper bits through to the output.
llvm-svn: 288011
patterns.
llvm-svn: 288010
llvm-svn: 288009
I don't think isel selects these today, favoring adding the register to itself instead. But the load folding tables shouldn't be concerned with what isel will use; they should just represent the relationships.
llvm-svn: 288007
PSLL/PSRL bit shifts
llvm-svn: 288006
llvm-svn: 288004
Moved most of the matching code into matchVectorShuffleAsShift to share it with target shuffle combines (in a future commit).
llvm-svn: 288003
instruction's load size is smaller than the register size.
If we were to unfold these, the load size would be increased to the register size. This is not safe to do since the enlarged load can do things like cross a page boundary into a page that doesn't exist.
I probably missed some instructions, but this should be a large portion of them.
llvm-svn: 288001
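
The page-boundary hazard described above is plain address arithmetic. A minimal sketch, assuming 4 KiB pages (`crosses_page` is a hypothetical helper, not an LLVM API):

```python
PAGE_SIZE = 4096  # assumed page size for illustration

def crosses_page(addr, size):
    """True if the byte range [addr, addr + size) spans a page boundary."""
    return addr // PAGE_SIZE != (addr + size - 1) // PAGE_SIZE

addr = PAGE_SIZE - 8           # a load of the last 8 bytes of a page
small = crosses_page(addr, 8)  # original folded load: stays inside the page
wide = crosses_page(addr, 16)  # widened register-sized load: spills into the next page
```

If the next page is unmapped, the widened load faults where the original could not, which is why these entries must not be unfolded.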
llvm-svn: 287995
instructions that don't have a restriction.
Most of these are the SSE4.1 PMOVZX/PMOVSX instructions, which all read less than 128 bits. The only other one was (V)MOVUPD, which by definition is an unaligned load.
llvm-svn: 287991
IsProfitableToFold.
llvm-svn: 287987
X86DAGToDAGISel::selectScalarSSELoad to pass the load node to IsProfitableToFold and IsLegalToFold.
Previously we were passing the SCALAR_TO_VECTOR node.
llvm-svn: 287986
llvm-svn: 287985
from being folded multiple times.
Summary: When selectScalarSSELoad is looking for a scalar_to_vector of a scalar load, it makes sure the load is only used by the scalar_to_vector. But it doesn't make sure the scalar_to_vector is only used once. This can cause the same load to be folded multiple times, which is bad for performance. It also causes the chain output to be duplicated but not connected to anything, so chain dependencies will not be satisfied.
Reviewers: RKSimon, zvi, delena, spatel
Subscribers: andreadb, llvm-commits
Differential Revision: https://reviews.llvm.org/D26790
llvm-svn: 287983
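
The single-use check this fix adds can be modeled with a toy node graph. All names here are hypothetical, not LLVM's API; the point is that both the load and the scalar_to_vector must each have exactly one user before folding is safe.

```python
class Node:
    """Toy DAG node that tracks its users."""
    def __init__(self, name, operands=()):
        self.name = name
        self.operands = list(operands)
        self.uses = []
        for op in operands:
            op.uses.append(self)

def profitable_to_fold_load(scalar_to_vector):
    load = scalar_to_vector.operands[0]
    # The load must feed only the scalar_to_vector, and the
    # scalar_to_vector must itself have exactly one user --
    # otherwise folding duplicates the load into every user.
    return len(load.uses) == 1 and len(scalar_to_vector.uses) == 1

load = Node("load")
s2v = Node("scalar_to_vector", [load])
user_a = Node("add", [s2v])
ok_single = profitable_to_fold_load(s2v)  # one user: folding is fine
user_b = Node("mul", [s2v])
ok_multi = profitable_to_fold_load(s2v)   # two users: folding would duplicate the load
```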
llvm-svn: 287975
folding tables.
llvm-svn: 287974
tables.
llvm-svn: 287972
available. Fix the avx512vl stack folding tests to clobber more registers; otherwise they use xmm16 after this change.
llvm-svn: 287971
llvm-svn: 287970
The W bit distinguishes which operand is the memory operand. But if the mod bits are 3, then the memory operand is a register and there are two possible encodings. We already did this correctly for several other XOP instructions.
llvm-svn: 287961
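
A heavily simplified decoder sketch of why two encodings collide. This is not an assembler model of real XOP encoding (which involves many more fields); `operands` is a hypothetical helper showing only that flipping W swaps which ModRM field supplies which source, so with mod == 3 two distinct bit patterns decode to the same register-register operation.

```python
def operands(w_bit, mod, reg, rm):
    """Return (src1, src2) for a two-source op, register forms only.

    W selects which ModRM field holds which source; with mod == 3
    the 'memory' slot is just another register."""
    if mod != 3:
        raise ValueError("memory forms not modeled in this sketch")
    return (reg, rm) if w_bit == 0 else (rm, reg)

# The same (src1, src2) pair is reachable via two encodings:
enc_a = operands(0, 3, "xmm1", "xmm2")
enc_b = operands(1, 3, "xmm2", "xmm1")
```

A disassembler therefore has to accept both encodings as the same instruction, which is what this change teaches the tables to do.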
tables for consistency.
Not sure this is truly needed, but we had the floating-point equivalents, the aligned equivalents, and the EVEX equivalents, so this just makes the set complete.
llvm-svn: 287960
alphabetical order. This is consistent with the older sections of the table. NFC
llvm-svn: 287956
llvm-svn: 287940
a vselect with 32-bit element size.
Summary:
Shuffle lowering may have widened the element size of an i32 shuffle to i64 before selecting X86ISD::SHUF128. If this shuffle was used by a vselect, this can prevent us from selecting masked operations.
This patch detects this and changes the element size to match the vselect.
I don't handle changing integer to floating point or vice versa, as it's not clear whether it's better to push such a bitcast to the inputs of the shuffle or to the user of the vselect, so I'm ignoring that case for now.
Reviewers: delena, zvi, RKSimon
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D27087
llvm-svn: 287939
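
The element-size mismatch can be illustrated with plain mask arithmetic. A sketch with hypothetical helpers (not the patch's code): widening is only legal when adjacent narrow lanes move together, and narrowing expands each wide lane back into two adjacent lanes so the shuffle matches the vselect's element count.

```python
def widen_mask(mask):
    """Narrow-to-wide (e.g. i32 -> i64): legal only when lane pairs move together."""
    assert len(mask) % 2 == 0
    wide = []
    for i in range(0, len(mask), 2):
        assert mask[i] % 2 == 0 and mask[i + 1] == mask[i] + 1
        wide.append(mask[i] // 2)
    return wide

def narrow_mask(mask):
    """Wide-to-narrow (e.g. i64 -> i32): each wide lane becomes two adjacent lanes."""
    out = []
    for m in mask:
        out.extend([2 * m, 2 * m + 1])
    return out

wide = [1, 0]                 # a v2i64-style lane swap
narrow = narrow_mask(wide)    # the equivalent v4i32 mask
```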
llvm-svn: 287937
llvm-svn: 287909
llvm-svn: 287908
llvm-svn: 287889
No functional change.
llvm-svn: 287888
Vectorize UINT_TO_FP v2i32 -> v2f64 instead of scalarization (albeit still on the SIMD unit).
The codegen matches that generated by legalization (and is in fact used by AVX for UINT_TO_FP v4i32 -> v4f64), but has to be done in the x86 backend to account for legalization via v4i32.
Differential Revision: https://reviews.llvm.org/D26938
llvm-svn: 287886
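
The arithmetic behind the legalization codegen mentioned above is the well-known uint32-to-double bit trick, shown here as a scalar Python sketch purely to illustrate the math (the real lowering does this lane-wise on the SIMD unit): OR the 32-bit value into the mantissa of 2^52 and subtract 2^52. The result is exact because every uint32 fits in a double's 52-bit mantissa.

```python
import struct

EXP_2_52 = 0x4330000000000000   # bit pattern of the double 2.0**52

def uint32_to_double(x):
    """Convert a uint32 to double without an int->fp instruction."""
    bits = EXP_2_52 | (x & 0xFFFFFFFF)           # 2**52 + x, built bitwise
    (as_double,) = struct.unpack("<d", struct.pack("<Q", bits))
    return as_double - 2.0 ** 52                 # exact subtraction

val = uint32_to_double(0xFFFFFFFF)               # largest uint32, converted exactly
```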
AVX512DQ-only targets
Use 512-bit instructions with subvector insertion/extraction, as we do in a number of similar circumstances.
llvm-svn: 287882
of upper 64-bits of xmm result
llvm-svn: 287878
llvm-svn: 287877
The bug arises during register allocation on i686 for the CMPXCHG8B instruction when a base pointer is needed. CMPXCHG8B needs four implicit registers (EAX, EBX, ECX, EDX) and a memory address, and ESI is reserved as the base pointer. With such constraints, the only way the register allocator can do its job successfully is when the addressing mode of the instruction requires only one register. If that is not the case, we emit an additional LEA instruction to compute the address.
This fixes PR28755.
Patch by Alexander Ivchenko <alexander.ivchenko@intel.com>
Differential Revision: https://reviews.llvm.org/D25088
llvm-svn: 287875
Move the definitions of three variables out of the switch.
Patch by Alexander Ivchenko <alexander.ivchenko@intel.com>
Differential Revision: https://reviews.llvm.org/D25192
llvm-svn: 287874
- It does not modify the input instruction.
- The second operand of any address is always an index register; make sure we actually check for that instead of checking for an immediate value.
Patch by Alexander Ivchenko <alexander.ivchenko@intel.com>
Differential Revision: https://reviews.llvm.org/D24938
llvm-svn: 287873
Replace the CVTTPD2DQ/CVTTPD2UDQ and CVTDQ2PD/CVTUDQ2PD opcodes with general versions.
This is an initial step towards similar FP_TO_SINT/FP_TO_UINT and SINT_TO_FP/UINT_TO_FP lowering to AVX512 CVTTPS2QQ/CVTTPS2UQQ and CVTQQ2PS/CVTUQQ2PS with illegal types.
Differential Revision: https://reviews.llvm.org/D27072
llvm-svn: 287870
upper 64-bits of xmm result
We've already added the equivalent for (v)cvttpd2dq (rL284459) and vcvttpd2udq.
llvm-svn: 287835
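
The fact these commits rely on can be modeled in a couple of lines: (v)cvttpd2dq truncates two doubles into two i32s in the low half of the xmm result and writes zeros to the upper 64 bits (the upper two i32 lanes). This sketch ignores out-of-range inputs, whose saturation behavior is not modeled here.

```python
import math

def cvttpd2dq(v2f64):
    """Model: truncate two doubles to i32, zeroing the upper two lanes."""
    assert len(v2f64) == 2
    return [math.trunc(x) for x in v2f64] + [0, 0]

res = cvttpd2dq([3.7, -1.9])   # low lanes truncated, upper lanes zeroed
```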
We did not support subregs in InlineSpiller::foldMemoryOperand() because targets
may not deal with them correctly.
This adds a target hook to let the spiller know that a target can handle
subregs, and actually enables it for x86 for the case of stack slot reloads.
This fixes PR30832.
Differential Revision: https://reviews.llvm.org/D26521
llvm-svn: 287792
AVX512DQ-only targets
Use 512-bit instructions with subvector insertion/extraction, as we do in a number of similar circumstances.
llvm-svn: 287762
llvm-svn: 287760
shuffles.
llvm-svn: 287744
Summary: This function is only called with integer VT arguments, so remove code that handles FP vectors.
Reviewers: RKSimon, craig.topper, delena, andreadb
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D26985
llvm-svn: 287743
Currently, XRay only supports emitting the XRay table (xray_instr_map) on ELF binaries. Let's add Mach-O support.
Differential Revision: https://reviews.llvm.org/D26983
llvm-svn: 287734
This occurs during UINT_TO_FP v2f64 lowering.
We can easily generalize this to other horizontal ops (FHSUB, PACKSS, PACKUS) as required - we are doing something similar with PACKUS in lowerV2I64VectorShuffle
llvm-svn: 287676
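
The pack ops mentioned above are straightforward to model: PACKUS narrows each input lane with unsigned saturation and PACKSS with signed saturation, concatenating the lanes of both source vectors. A toy lane-wise sketch (lane counts are flexible here; real PACKUSWB packs 8+8 i16 lanes into 16 u8 lanes):

```python
def packus_i16_to_u8(a, b):
    """PACKUS model: narrow i16 lanes to u8 with unsigned saturation."""
    clamp = lambda x: max(0, min(255, x))
    return [clamp(x) for x in a + b]

def packss_i16_to_i8(a, b):
    """PACKSS model: narrow i16 lanes to i8 with signed saturation."""
    clamp = lambda x: max(-128, min(127, x))
    return [clamp(x) for x in a + b]

res_us = packus_i16_to_u8([300, -5], [128, 7])   # out-of-range lanes saturate
```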
No one actually had a mangler handy when calling this function, and
getSymbol itself went most of the way towards getting its own mangler
(with a local TLOF variable), so forcing all callers to supply one was
just extra complication.
llvm-svn: 287645
llvm-svn: 287644
Summary: Splat vectors are canonicalized to BUILD_VECTORs, so the code can be simplified. NFC-ish.
Reviewers: craig.topper, delena, RKSimon, andreadb
Subscribers: RKSimon, llvm-commits
Differential Revision: https://reviews.llvm.org/D26678
llvm-svn: 287643