| Commit message | Author | Age | Files | Lines |
Rename functions to a consistent format to make it easier to track coverage.
llvm-svn: 312619
Some of the cases show missing patterns I intend to fix shortly.
llvm-svn: 312560
a 512-bit register
This enables the use of a smaller encoding by using a VEX instruction when possible.
Differential Revision: https://reviews.llvm.org/D37092
llvm-svn: 312100
This patch completely replaces the instruction scheduling information for the Haswell architecture target by modifying the file X86SchedHaswell.td located under the X86 Target.
We used the scheduling information retrieved from the Haswell architects in order to replace and modify the existing scheduling.
The patch continues the scheduling replacement effort started with the SNB target in r307529 and r310792.
The information includes the latency, number of micro-Ops, and ports used by each HSW instruction.
Please expect some performance fluctuations due to code alignment effects.
Reviewers: RKSimon, zvi, aymanmus, craig.topper, m_zuckerman, igorb, dim, chandlerc, aaboud
Differential Revision: https://reviews.llvm.org/D36663
llvm-svn: 311879
AVX512DQI is enabled.
There's no reason to switch instructions with and without DQI. It just creates extra isel patterns and test divergences.
There is however value in enabling the masked version of the instructions with DQI.
This required introducing some new multiclasses to enable this splitting.
Differential Revision: https://reviews.llvm.org/D36661
llvm-svn: 311091
Differential Revision: https://reviews.llvm.org/D35965
llvm-svn: 309926
tables.
llvm-svn: 309632
Differential Revision: https://reviews.llvm.org/D35839
llvm-svn: 309298
llvm-svn: 308110
llvm-svn: 306532
to include the following data:
• static latency
• number of uOps from which the instruction consists
• all ports used by the instruction
Reviewers: RKSimon, zvi, aymanmus, m_zuckerman
Differential Revision: https://reviews.llvm.org/D33897
llvm-svn: 306414
If we know that both operands of an unsigned integer vector comparison are non-negative,
then it's safe to directly use a signed-compare-greater-than instruction (the only non-equality
integer vector compare predicate provided by SSE/AVX).
We're intentionally not changing the condition code to signed in order to preserve the
existing transforms that use min/max/psubus below here.
This should solve PR33276:
https://bugs.llvm.org/show_bug.cgi?id=33276
Differential Revision: https://reviews.llvm.org/D33862
llvm-svn: 304909
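As an aside, a minimal standalone sketch of why this is sound (plain C++, not the LLVM code; the function name is made up for illustration): when the sign bit of both operands is known to be clear, the signed and unsigned orderings agree, so PCMPGT's signed greater-than answers the unsigned predicate.
#include <cassert>
#include <cstdint>

// If both operands are known non-negative, an unsigned greater-than can be
// answered with a signed compare, which is what PCMPGT provides per element.
bool unsignedGTviaSigned(uint32_t a, uint32_t b) {
  assert((a >> 31) == 0 && (b >> 31) == 0 && "operands must be non-negative");
  return static_cast<int32_t>(a) > static_cast<int32_t>(b);
}

int main() {
  for (uint32_t a = 0; a < 256; ++a)
    for (uint32_t b = 0; b < 256; ++b)
      assert(unsignedGTviaSigned(a, b) == (a > b));
  return 0;
}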
There's nothing darwin-specific in these tests, and using
that setting causes extra phantom diffs when the auto-generated
check lines are regenerated today.
llvm-svn: 304614
This patch defines the i1 type as illegal in the X86 backend for AVX512.
For DAG operations on <N x i1> types (build vector, extract vector element, ...) i8 is used, and should be truncated/extended.
This should produce better scalar code for i1 types since GPRs will be used instead of mask registers.
Differential Revision: https://reviews.llvm.org/D32273
llvm-svn: 303421
We've had several bugs (PR32256, PR32241) recently that resulted from usages of AH/BH/CH/DH either before or after a copy to/from a mask register.
This ultimately occurs because we create COPY_TO_REGCLASS with VK1 and GR8. Then in CopyToFromAsymmetricReg in X86InstrInfo we find a 32-bit super register for the GR8 to emit the KMOV with. But as these tests are demonstrating, it's possible for the GR8 register to be a high register, and we end up doing an accidental extract or insert from bits 15:8.
I think the best way forward is to stop making copies directly between mask registers and GR8/GR16. Instead I think we should restrict to only copies between mask registers and GR32/GR64 and use EXTRACT_SUBREG/INSERT_SUBREG to handle the conversion from GR32 to GR16/8 or vice versa.
Unfortunately, this complicates fastisel a bit more now to create the subreg extracts where we used to create GR8 copies. We can probably make a helper function to bring down the repetition.
This does result in KMOVD being used for copies when BWI is available because we don't know the original mask register size. This caused a lot of deltas on tests because we have to split the checks for KMOVD vs KMOVW based on BWI.
Differential Revision: https://reviews.llvm.org/D30968
llvm-svn: 298928
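A standalone sketch of the hazard being described (plain C++, not the backend code; the constants are made up): the KMOV writes the mask into bits 7:0 of the 32-bit super register, so if the GR8 operand happens to be a high-byte register such as AH, the consumer effectively reads bits 15:8 and sees shifted data.
#include <cassert>
#include <cstdint>

int main() {
  uint8_t mask = 0b00000101;           // value a KMOVD would place in bits 7:0
  uint32_t eax = 0xDEAD0000u;          // pretend the upper bits hold unrelated data
  eax = (eax & ~0xFFu) | mask;         // kmovd %k0, %eax writes the low byte
  uint8_t low  = eax & 0xFF;           // reading AL: the intended mask
  uint8_t high = (eax >> 8) & 0xFF;    // reading AH: bits 15:8, wrong data
  assert(low == mask);
  assert(high != mask);                // the accidental "extract from bits 15:8"
  return 0;
}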
Up until now, the vpmovm2 instruction described its destination operand size
by the source operand size. This patch adds a new pattern for the vpmovm2
instruction. The node describes a new expansion of the destination (from
{128|256} to 512).
Differential Revision: https://reviews.llvm.org/D30654
llvm-svn: 298586
VZEROUPPER should not be issued on Knights Landing (KNL), but on Skylake-avx512 it should be.
Differential Revision: https://reviews.llvm.org/D29874
llvm-svn: 296859
masks
During legalization we are often creating shuffles (via a build_vector scalarization stage) that are "any_extend_vector_inreg" style masks, and also other masks that are the equivalent of "truncate_vector_inreg" (if we had such a thing).
This patch is an attempt to match these cases to help undo the effects of just leaving shuffle lowering to handle it - which typically means we lose track of the undefined elements of the shuffles resulting in an unnecessary extension+truncation stage for widened illegal types.
The 2011-10-21-widen-cmp.ll regression will be fixed by making SIGN_EXTEND_VECTOR_IN_REG legal in SSE instead of lowering them to X86ISD::VSEXT (PR31712).
Differential Revision: https://reviews.llvm.org/D29454
llvm-svn: 295451
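To make the mask shapes concrete, a standalone sketch (plain C++, not the LLVM matcher; the helper names are invented): on little-endian x86, an in-register any-extend keeps each source element in the low part of its widened lane and leaves the high parts undefined, while the truncate equivalent keeps only every Scale-th element.
#include <cstdio>
#include <vector>

// Shuffle mask for an "any_extend_vector_inreg" style widening by Scale:
// result lane i starts with source element i, the rest of the lane is undef (-1).
std::vector<int> anyExtendInRegMask(int NumSrcElts, int Scale) {
  std::vector<int> Mask;
  for (int i = 0; i < NumSrcElts / Scale; ++i) {
    Mask.push_back(i);
    for (int j = 1; j < Scale; ++j)
      Mask.push_back(-1); // undef lane
  }
  return Mask;
}

// Shuffle mask for the "truncate_vector_inreg" equivalent: keep every Scale-th element.
std::vector<int> truncateInRegMask(int NumSrcElts, int Scale) {
  std::vector<int> Mask;
  for (int i = 0; i < NumSrcElts; i += Scale)
    Mask.push_back(i);
  return Mask;
}

int main() {
  for (int m : anyExtendInRegMask(8, 2)) std::printf("%d ", m); // 0 -1 1 -1 2 -1 3 -1
  std::printf("\n");
  for (int m : truncateInRegMask(8, 2)) std::printf("%d ", m);  // 0 2 4 6
  std::printf("\n");
  return 0;
}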
with VLX, but no DQ or BW support.
llvm-svn: 291747
when avx512vl is available, but not avx512dq.
llvm-svn: 291746
test to show some poor codegen examples.
We're definitely doing bad things when avx512vl is enabled without avx512dq. It looks like avx512vl/dq without avx512bw may also have some issues.
llvm-svn: 291744
The code emitted by Clang's intrinsics for (v)cvtsi2ss, (v)cvtsi2sd,
(v)cvtsd2ss and (v)cvtss2sd is lowered to a code sequence that includes
redundant (v)movss/(v)movsd instructions. This patch adds patterns for
optimizing these sequences.
Differential revision: https://reviews.llvm.org/D28455
llvm-svn: 291660
vselects of all ones and all zeros.
Previously we emitted a VPTERNLOG and a separate masked move.
llvm-svn: 291415
size by encoding EVEX AVX-512 instructions using the shorter VEX encoding when possible.
There are cases of AVX-512 instructions that have two possible encodings. This is the case with instructions that use vector registers with low indexes of 0 - 15 and do not use the zmm registers or the mask k registers.
The EVEX encoding prefix requires 4 bytes whereas the VEX prefix can take only up to 3 bytes. Consequently, using the VEX encoding for these instructions results in a code size reduction of ~2 bytes even though it is compiled with the AVX-512 features enabled.
Reviewers: Craig Topper, Zvi Rackover, Elena Demikhovsky
Differential Revision: https://reviews.llvm.org/D27901
llvm-svn: 290663
As discussed on D27692, the next step will be to allow cross-domain shuffles once the combined shuffle depth passes a certain point.
llvm-svn: 290064
llvm-svn: 288049
llvm-svn: 287877
llvm-svn: 287106
This patch helps avoid poor legalization of boolean vector results (e.g. 8f32 -> 8i1 -> 8i16) that feed into SINT_TO_FP by inserting an early SIGN_EXTEND, and so helps improve the truncation logic.
This is not necessary for AVX512 targets where boolean vectors are legal - AVX512 manages to lower ( sint_to_fp vXi1 ) into some form of ( select mask, 1.0f , 0.0f ) in most cases.
Fix for PR13248
Differential Revision: https://reviews.llvm.org/D26583
llvm-svn: 286979
vcvtpd2dq/vcvttpd2dq/vcvtpd2ps and similar instructions.
-Don't print the 'x' suffix for the 128-bit reg/mem VEX encoded instructions in Intel syntax. This is consistent with the EVEX versions.
-Don't print the 'y' suffix for the 256-bit reg/reg VEX encoded instructions in Intel or AT&T syntax. This is consistent with the EVEX versions.
-Allow the 'x' and 'y' suffixes to be used for the reg/mem forms when we're assembling using Intel syntax.
-Allow the 'x' and 'y' suffixes on the reg/reg EVEX encoded instructions in Intel or AT&T syntax. This is consistent with what VEX was already allowing.
This should fix at least some of PR28850.
llvm-svn: 286787
hasUndefRegUpdate.
llvm-svn: 282356
as sitofp
With D24253 we can now use SelectionDAG::SignBitIsZero with vector operations.
This patch uses SelectionDAG::SignBitIsZero to recognise that a zero sign bit means that we can use a sitofp instead of a uitofp (which is not directly supported on pre-AVX512 hardware).
While AVX512 does provide support for uitofp, the conversion to sitofp should not cause any regressions.
Differential Revision: https://reviews.llvm.org/D24343
llvm-svn: 281852
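A standalone sketch of the underlying fact (plain C++, not the DAG combine itself): when the sign bit of the integer is known to be zero, the signed and unsigned conversions produce the same value, which is why sitofp can stand in for uitofp.
#include <cassert>
#include <cstdint>
#include <initializer_list>

int main() {
  for (uint64_t v : {0ull, 1ull, 42ull, (1ull << 62) + 123ull, (1ull << 63) - 1ull}) {
    assert((v >> 63) == 0 && "sign bit must be known zero");
    double viaSigned   = static_cast<double>(static_cast<int64_t>(v)); // sitofp
    double viaUnsigned = static_cast<double>(v);                       // uitofp
    assert(viaSigned == viaUnsigned);
  }
  return 0;
}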
This patch helps avoid false dependencies on undef registers by updating the machine instructions' undef operand to use a register that the instruction is truly dependent on, or use a register with clearance higher than Pref.
Pseudo example:
loop:
xmm0 = ...
xmm1 = vcvtsi2sdl eax, xmm0<undef>
... = inst xmm0
jmp loop
In this example, selecting xmm0 as the undef register creates a false dependency between loop iterations.
This false dependency cannot be solved by inserting an xor before vcvtsi2sdl because xmm0 is alive at the point of the vcvtsi2sdl instruction.
Selecting a different register instead of xmm0, especially a register that is not used in the loop, will eliminate this problem.
Differential Revision: https://reviews.llvm.org/D22466
llvm-svn: 278321
This handles the case in:
https://llvm.org/bugs/show_bug.cgi?id=28895
...but we are not getting all of the possibilities yet.
E.g., we use 'X86::FANDN' for scalar FP select combines.
That enhancement is filed as:
https://llvm.org/bugs/show_bug.cgi?id=28925
Differential Revision: https://reviews.llvm.org/D23337
llvm-svn: 278270
llvm-svn: 277933
preventing VMOVDQU32/VMOVDQA32 from being recognized. Fix a bug in the code that stops execution dependency fix from turning operations on 32-bit integer element types into operations on 64-bit integer element types.
llvm-svn: 277327
It failed with an assertion before this patch.
Differential Revision: https://reviews.llvm.org/D22735
llvm-svn: 276648
vectors.
llvm-svn: 275045
An identity COPY like this:
%AL = COPY %AL, %EAX<imp-def>
has no semantic effect, but encodes liveness information: Further users
of %EAX only depend on this instruction even though it does not define
the full register.
Replace the COPY with a KILL instruction in those cases to maintain this
liveness information. (This reverts a small part of r238588 but this
time adds a comment explaining why a KILL instruction is useful).
llvm-svn: 274952
Fixed fp_to_uint instruction selection on KNL.
One pattern was missing for <4 x double> to <4 x i32>
Differential Revision: http://reviews.llvm.org/D18512
llvm-svn: 264701
vextracti64x4, vextracti64x2, vextracti32x8, vextracti32x4, vextractf64x4, vextractf64x2, vextractf32x8, vextractf32x4
Added tests for intrinsics and encoding.
Differential Revision: http://reviews.llvm.org/D11802
llvm-svn: 247276
,vextracti64x2, vextracti32x8, vextracti32x4, vextractf64x4, vextractf64x2, vextractf32x8, vextractf32x4 Added tests for intrinsics and encoding."
This reverts commit r247149, as it was breaking numerous buildbots of varied architectures.
llvm-svn: 247177
vextracti64x4, vextracti64x2, vextracti32x8, vextracti32x4, vextractf64x4, vextractf64x2, vextractf32x8, vextractf32x4
Added tests for intrinsics and encoding.
Differential Revision: http://reviews.llvm.org/D11802
llvm-svn: 247149
SKX supports conversion for all FP types. Integer types include doublewords and quadwords.
I added "Legal" status for these nodes and a bunch of tests.
I added "NoVLX" for AVX DAG selection to force VLX instructions selection when VLX is supported.
Differential Revision: http://reviews.llvm.org/D11255
llvm-svn: 242637
llvm-svn: 236955
load instruction
Essentially the same as the GEP change in r230786.
A similar migration script can be used to update test cases, though a few more
test case improvements/changes were required this time around: (r229269-r229278)
import fileinput
import sys
import re
pat = re.compile(r"((?:=|:|^)\s*load (?:atomic )?(?:volatile )?(.*?))(| addrspace\(\d+\) *)\*($| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$)")
for line in sys.stdin:
  sys.stdout.write(re.sub(pat, r"\1, \2\3*\4", line))
Reviewers: rafael, dexonsmith, grosser
Differential Revision: http://reviews.llvm.org/D7649
llvm-svn: 230794
Checked some corner cases, for example translation
of <8 x i1> to <8 x double>
llvm-svn: 221883
llvm-svn: 211164
llvm-svn: 205754
llvm-svn: 199016