Differential Revision: https://reviews.llvm.org/D39846
llvm-svn: 319013
operands have already been split.
llvm-svn: 319010
llvm-svn: 319009
The check is actually unnecessary since AVX512VBMI implies AVX512BW, which is the other part of the assert.
llvm-svn: 319006
llvm-svn: 319005
llvm-svn: 319000
llvm-svn: 318999
SCEVBackedgeConditionFolder, NFC.
Summary:
For a given loop, getLoopLatch returns a non-null value when the loop has
only one latch block. In the modified context, add an assertion to check
that the two outgoing branches of the latch's terminator instruction do not
target the same header, plus a few minor code reorganizations.
Reviewers: jbhateja
Reviewed By: jbhateja
Subscribers: sanjoy
Differential Revision: https://reviews.llvm.org/D40460
llvm-svn: 318997
Shadow stack solution introduces a new stack for return addresses only.
The HW has a Shadow Stack Pointer (SSP) that points to the next return address.
If we return to a different address, an exception is triggered.
The shadow stack is managed using a series of intrinsics that are introduced in this patch as well as the new register (SSP).
The intrinsics are mapped to a new instruction set that implements the CET mechanism.
The patch also includes initial infrastructure support for IBT.
For more information, please see the following:
https://software.intel.com/sites/default/files/managed/4d/2a/control-flow-enforcement-technology-preview.pdf
Differential Revision: https://reviews.llvm.org/D40223
Change-Id: I4daa1f27e88176be79a4ac3b4cd26a459e88fed4
llvm-svn: 318996
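As a purely conceptual aid, here is a toy, software-only C++ model of the shadow-stack check described above. It is based only on the behavior summarized in this message and the linked CET preview document; the class and function names are invented and are unrelated to the new intrinsics or the SSP register.

#include <cstdint>
#include <cstdio>
#include <stdexcept>
#include <vector>

// Toy model: a second stack that holds return addresses only.
struct ShadowStack {
  std::vector<uint64_t> Addrs;
  void onCall(uint64_t RetAddr) { Addrs.push_back(RetAddr); }
  void onReturn(uint64_t RetAddr) {
    // The hardware SSP points at the next expected return address;
    // returning to anything else raises an exception.
    if (Addrs.empty() || Addrs.back() != RetAddr)
      throw std::runtime_error("control-protection fault");
    Addrs.pop_back();
  }
};

int main() {
  ShadowStack SS;
  SS.onCall(0x401000);
  SS.onReturn(0x401000);           // matches the shadow copy: allowed
  SS.onCall(0x401040);
  try {
    SS.onReturn(0x402000);         // return address was tampered with: rejected
  } catch (const std::exception &E) {
    std::printf("%s\n", E.what());
  }
  return 0;
}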
Galois field arithmetic (GF(2^8)) insns:
gf2p8affineinvqb
gf2p8affineqb
gf2p8mulb
Differential Revision: https://reviews.llvm.org/D40373
llvm-svn: 318993
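For readers unfamiliar with GF(2^8) arithmetic, a scalar C++ sketch of a byte multiply may help. It assumes the reduction polynomial x^8 + x^4 + x^3 + x + 1 described in the GFNI documentation for gf2p8mulb; it illustrates only the math, not the new instructions or their lowering.

#include <cstdint>
#include <cstdio>

// Multiply two bytes as polynomials over GF(2), reducing modulo
// x^8 + x^4 + x^3 + x + 1 (0x11B). Addition in this field is XOR.
static uint8_t gf2p8_mul(uint8_t a, uint8_t b) {
  uint8_t result = 0;
  for (int i = 0; i < 8; ++i) {
    if (b & 1)
      result ^= a;
    bool carry = (a & 0x80) != 0;
    a <<= 1;
    if (carry)
      a ^= 0x1B;                   // fold the dropped x^8 term back in
    b >>= 1;
  }
  return result;
}

int main() {
  // Classic worked example from the AES specification: {57} * {83} = {c1}.
  std::printf("0x57 * 0x83 = 0x%02x\n", gf2p8_mul(0x57, 0x83));
  return 0;
}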
Summary:
For a given loop, getLoopLatch returns a non-null value
when a loop has only one latch block. In the modified
context, a check that both outgoing branches of the latch's terminator instruction target the same header is redundant.
Reviewers: jbhateja
Reviewed By: jbhateja
Subscribers: sanjoy
Differential Revision: https://reviews.llvm.org/D40460
llvm-svn: 318991
Summary:
These instructions zero the non-scalar part of the lower 128 bits, which makes them different from the FMA3 instructions, which pass through the non-scalar part of the lower 128 bits.
I've only added fmadd because we should be able to derive all other variants using operand negation in the intrinsic header like we do for AVX512.
I think there are still some missed negate folding opportunities with the FMA4 instructions in light of this behavior difference that I hadn't noticed before.
I've split the tests so that we can use different intrinsics for scalar testing between the two. I just copied the tests, split the RUN lines, and changed out the scalar intrinsics.
fma4-fneg-combine.ll is a new test to make sure we negate the fma4 intrinsics correctly though there are a couple TODOs in it.
Reviewers: RKSimon, spatel
Reviewed By: RKSimon
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D39851
llvm-svn: 318984
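A tiny scalar model of the lane behavior described above, assuming only what this message states (FMA4 scalar ops zero the non-scalar part of the low 128 bits, FMA3 ops pass it through from a source operand). The function names and the choice of pass-through source are illustrative, and the multiply-add below is not literally fused.

#include <array>
#include <cstdio>

using V4 = std::array<float, 4>;   // models the low 128 bits as four floats

// FMA3-style scalar fmadd: lane 0 gets the result, the other lanes are
// passed through from one of the sources (modeled here as 'a').
static V4 fma3_style_ss(V4 a, V4 b, V4 c) {
  return {a[0] * b[0] + c[0], a[1], a[2], a[3]};
}

// FMA4-style scalar fmadd per the description above: lane 0 gets the result,
// the remaining lanes of the low 128 bits are zeroed.
static V4 fma4_style_ss(V4 a, V4 b, V4 c) {
  return {a[0] * b[0] + c[0], 0.0f, 0.0f, 0.0f};
}

int main() {
  V4 a{1, 2, 3, 4}, b{5, 6, 7, 8}, c{9, 10, 11, 12};
  std::printf("lane1: fma3-style = %g, fma4-style = %g\n",
              fma3_style_ss(a, b, c)[1], fma4_style_ss(a, b, c)[1]);
  return 0;
}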
disabled. Allow gather on SKX/CNL/ICL when AVX512 is disabled by using AVX2 instructions.
Summary:
This adds a new fast gather feature bit to cover all CPUs that support fast gather that we can use independent of whether the AVX512 feature is enabled. I'm only using this new bit to qualify AVX2 codegen. AVX512 is still implicitly assuming fast gather to keep tests working and to match the scatter behavior.
Test command lines have been added for these two cases.
Reviewers: magabari, delena, RKSimon, zvi
Reviewed By: RKSimon
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D40282
llvm-svn: 318983
Summary:
Currently ScalarizeVecRes_SETCC checks for the result type being a vector and jumps to ScalarizeVecRes_VSETCC. But if we're scalarizing a vector result, aren't we guaranteed to be looking at a vector type?
This patch deletes the current ScalarizeVecRes_SETCC and renames ScalarizeVecRes_VSETCC to ScalarizeVecRes_SETCC.
Reviewers: RKSimon, arsenm, eladcohen, zvi
Reviewed By: RKSimon
Subscribers: wdng, llvm-commits
Differential Revision: https://reviews.llvm.org/D40452
llvm-svn: 318982
Differential Revision: https://reviews.llvm.org/D40124
llvm-svn: 318977
Make the condition for doing a std::swap simpler so we don't have to repeat the full checks.
llvm-svn: 318970
Other checks inside require a build_vector, but this lets us stop earlier and makes the code clearer.
llvm-svn: 318969
With SSE1 only, we emit FAND and FXOR nodes for v4f32.
llvm-svn: 318968
FAND/FOR/FXOR when only SSE1 is available.
v4i32 isn't a legal type with SSE1 only and would end up getting scalarized otherwise.
This isn't completely ideal as it doesn't handle cases like v8i32 that would get split to v4i32. But it at least helps with code written using the clang intrinsic header.
llvm-svn: 318967
division algorithm by default"
The previous commit had the condition in the do/while backwards.
Debug builds currently print out low level details of the Knuth division algorithm when -debug is used. This information isn't useful in most cases and just adds noise to the log.
This adds a new preprocessor flag to enable the prints in the Knuth division code in APInt.
Differential Revision: https://reviews.llvm.org/D40404
llvm-svn: 318966
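A minimal sketch of the general technique, gating noisy debug output behind a dedicated preprocessor flag; the macro names here are made up and are not the ones used in APInt.

#include <cstdio>

// Define ENABLE_DIVISION_DEBUG_PRINTS at build time to get the extra output;
// otherwise the calls compile away to nothing.
#ifdef ENABLE_DIVISION_DEBUG_PRINTS
#define DIVISION_DEBUG(...) std::fprintf(stderr, __VA_ARGS__)
#else
#define DIVISION_DEBUG(...) (void)0
#endif

int main() {
  DIVISION_DEBUG("quotient digit estimate: %u\n", 42u);
  std::puts("division done");
  return 0;
}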
This optimization can occur after type legalization and emit a vselect with v4i32 type. But that type is not legal with SSE1. This ultimately gets scalarized by the second type legalization that runs after vector op legalization, but that's really intended to handle the scalar types that might be introduced by legalizing vector ops.
For now just stop this from happening by disabling the optimization with SSE1.
llvm-svn: 318965
division algorithm by default"
I seem to have botched the logic when switching to push_macro
llvm-svn: 318964
by default
Debug builds currently print out low level details of the Knuth division algorithm when -debug is used. This information isn't useful in most cases and just adds noise to the log.
This adds a new preprocessor flag to enable the prints in the Knuth division code in APInt.
Differential Revision: https://reviews.llvm.org/D40404
llvm-svn: 318963
CodeGenPrepare sinks address computations from one basic block to another
and attempts to reuse address computations that have already been sunk. If
the same address computation appears twice with the first instance as an
operand of a load whose result is an operand to a simplifiable select,
CodeGenPrepare simplifies the select and recursively erases the now dead
instructions. CodeGenPrepare then attempts to use the erased address
computation for the second load.
Fix this by erasing the cached address value if it has zero uses before
looking for the address value in the sunken address map.
This partially resolves PR35209.
Thanks to Alexander Richardson for reporting the issue!
This fixed version relands r318032 which was reverted in r318049 due to
sanitizer buildbot failures.
Reviewers: john.brawn
Differential Revision: https://reviews.llvm.org/D39841
llvm-svn: 318956
See bug 33629: https://bugs.llvm.org//show_bug.cgi?id=33629
Reviewers: artem.tamazov, SamWot, arsenm
Differential Revision: https://reviews.llvm.org/D39488
llvm-svn: 318955
llvm-svn: 318953
This patch extends the recent work in optimizeMemoryInst to make it able to
combine more ExtAddrMode fields than just the BaseReg.
This fixes some benchmark regressions introduced by r309397, where GVN PRE is
hoisting a getelementptr such that it can no longer be combined into the
addressing mode of the load or store that uses it.
Differential Revision: https://reviews.llvm.org/D38133
llvm-svn: 318949
This patch fixes an issue where the microMIPS ASE flag is not set
when a function has the micromips attribute or when the .set micromips
directive is used.
Differential Revision: https://reviews.llvm.org/D40316
llvm-svn: 318948
with SP3
See bug 35329: https://bugs.llvm.org//show_bug.cgi?id=35329
Reviewers: arsenm, vpykhtin, artem.tamazov
Differential Revision: https://reviews.llvm.org/D40350
llvm-svn: 318947
instructions in optimizeCompareInstr.
The NewCC variable is calculated outside of the loop that processes jcc/setcc/cmovcc instructions. If we invert it during the loop it can cause an incorrect value to be used by a later iteration. Instead only read it during the loop and use a new variable to store the possibly inverted value.
Fixes PR35399.
llvm-svn: 318934
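The bug pattern is easy to state in isolation. The sketch below is not the LLVM code, just an illustration of why flipping a pre-computed value inside the loop is wrong and why a per-iteration local fixes it.

#include <cstdio>
#include <vector>

enum CondCode { CC_E, CC_NE };
static CondCode invert(CondCode CC) { return CC == CC_E ? CC_NE : CC_E; }

int main() {
  CondCode NewCC = CC_E;                         // computed once, before the loop
  std::vector<bool> NeedsInvert = {true, false, true};
  for (bool Invert : NeedsInvert) {
    // Buggy version: if (Invert) NewCC = invert(NewCC);
    // That mutation leaks into later iterations. Instead, derive a local value:
    CondCode ReplacementCC = Invert ? invert(NewCC) : NewCC;
    std::printf("rewrite to use %s\n", ReplacementCC == CC_E ? "E" : "NE");
  }
  return 0;
}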
register.
llvm-svn: 318933
We never reach here with these opcodes.
llvm-svn: 318932
instead of ISD::VSELECT
A later DAG combine will turn the VSELECT into an AND, but we have the other mask compare opcodes here so add this one too.
llvm-svn: 318931
llvm-svn: 318930
AVX512 code never reaches here so we don't need to handle X86ISD::CMPM as an opcode.
llvm-svn: 318929
llvm-svn: 318923
reductions (PR32841)
(V)PHMINPOSUW determines the UMIN element in a v8i16 input; with suitable bit flipping it can also be used for the SMAX/SMIN/UMAX cases.
This patch matches vXi16 SMAX/SMIN/UMAX/UMIN horizontal reductions and reduces the input down to a v8i16 vector before calling (V)PHMINPOSUW.
A later patch will use this for v16i8 reductions as well (PR32841).
Differential Revision: https://reviews.llvm.org/D39729
llvm-svn: 318917
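To make the bit-flipping idea concrete, here is a small C++ sketch using the SSE4.1 intrinsic _mm_minpos_epu16 directly (compile with -msse4.1). It derives a horizontal unsigned max from the unsigned-min instruction by complementing the inputs and the result; it illustrates the trick only and is not the code generated by this patch.

#include <smmintrin.h>
#include <cstdint>
#include <cstdio>

static uint16_t hmax_u16(__m128i v) {
  __m128i flipped = _mm_xor_si128(v, _mm_set1_epi16(-1)); // ~x, so min(~x) <-> max(x)
  __m128i minpos  = _mm_minpos_epu16(flipped);            // UMIN of the 8 lanes in lane 0
  return static_cast<uint16_t>(~_mm_extract_epi16(minpos, 0));
}

int main() {
  __m128i v = _mm_setr_epi16(3, 30000, 7, 9, 2, 11, 5, 8);
  std::printf("horizontal umax = %u\n", hmax_u16(v));      // prints 30000
  return 0;
}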
TableGen already generates code for selecting a G_FDIV, so we only need
to add a test.
For the legalizer and reg bank select, we do the same thing as for the
other floating point binary operations: either mark as legal if we have
a FP unit or lower to a libcall, and map to the floating point
registers.
llvm-svn: 318915
TableGen already generates code for selecting a G_FMUL, so we only need
to add a test for that part.
For the legalizer and reg bank select, we do the same thing as the other
floating point binary operators: either mark as legal if we have a FP
unit or lower to a libcall, and map to the floating point registers.
llvm-svn: 318910
The MIPS delay slot filler converts delay slot branches into compact
forms for the MIPS ISAs which support them. For branches that compare
(in)equality with zero, it converts them into branches with implicit
zero register operands. These branches have a slightly greater range
than normal two-register-operand branches.
Changing the branches at this point in the pipeline offers the long
branch pass the ability to make better judgements about whether a long branch
sequence is required.
Reviewers: atanasyan
Differential Revision: https://reviews.llvm.org/D40314
llvm-svn: 318908
2/3: vpshufbitqmb encoding
3/3: vpshufbitqmb intrinsics
Differential Revision: https://reviews.llvm.org/D40222
llvm-svn: 318904
MSan used to insert the shadow check of the store pointer operand
_after_ the shadow of the value operand has been written.
This happens to work in userspace, as the whole shadow range is
always mapped. However in the kernel the shadow page may not exist, so
the bug may cause a crash.
This patch moves the address check in front of the shadow access.
llvm-svn: 318901
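A toy model of the ordering fix, with made-up shadow structures standing in for MSan's real shadow mapping; the only point it illustrates is that the pointer's shadow is now checked before the value's shadow slot is touched.

#include <cstdio>
#include <unordered_map>

static std::unordered_map<const void *, bool> PtrShadowPoisoned;  // stand-in shadow state
static std::unordered_map<const void *, bool> ValueShadow;

static void reportUninitPointer(const void *P) { std::printf("uninitialized pointer %p\n", P); }

static void instrumentedStore(int *Addr, int Value, bool ValueIsPoisoned) {
  if (PtrShadowPoisoned[Addr])          // 1) check the store address first (the fix)
    reportUninitPointer(Addr);
  ValueShadow[Addr] = ValueIsPoisoned;  // 2) only then write the value's shadow
  *Addr = Value;                        // 3) and perform the store itself
}

int main() {
  int X = 0;
  instrumentedStore(&X, 42, /*ValueIsPoisoned=*/false);
  std::printf("X = %d\n", X);
  return 0;
}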
In a lambda where we expect the result to be within bounds, add the respective `nsw/nuw` flags to
help SCEV in case it fails to figure them out on its own.
Differential Revision: https://reviews.llvm.org/D40168
llvm-svn: 318898
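For readers less familiar with these flags, the sketch below shows how nuw/nsw are attached to an add through IRBuilder and why that helps: when the code building the expression already knows the value stays in bounds, recording it saves SCEV from having to prove it. This is a generic illustration, not the code from D40168.

#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

int main() {
  LLVMContext Ctx;
  Module M("nsw-demo", Ctx);
  Type *I64 = Type::getInt64Ty(Ctx);
  FunctionType *FT = FunctionType::get(I64, {I64, I64}, /*isVarArg=*/false);
  Function *F = Function::Create(FT, Function::ExternalLinkage, "bounded_add", M);
  IRBuilder<> B(BasicBlock::Create(Ctx, "entry", F));

  auto AI = F->arg_begin();
  Value *X = &*AI++;
  Value *Y = &*AI;
  // The caller guarantees the sum cannot wrap, so say so explicitly;
  // SCEV can then treat the expression as non-wrapping without re-deriving it.
  Value *Sum = B.CreateAdd(X, Y, "sum", /*HasNUW=*/true, /*HasNSW=*/true);
  B.CreateRet(Sum);

  M.print(outs(), nullptr);   // prints the function with "add nuw nsw i64"
  return 0;
}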
TableGen
llvm-svn: 318895
If Values.size() == 0, we should have returned 0 or undef earlier. If it was 1, it's a splat and we already handled that too.
llvm-svn: 318894
256-bit and 512-bit vectors were picked off earlier in the function. Lots of code between there and here already assumed 128-bit vectors.
llvm-svn: 318893
impossible case. NFC
llvm-svn: 318892
We are checking for AVX512 in an SSE1 only block.
llvm-svn: 318891
This patch reverts change to X86TargetLowering::getScalarShiftAmountTy in
rL318727 and move the logic to DAGTypeLegalizer::SplitInteger.
The reason is that getScalarShiftAmountTy returns a shift amount type that
is suitable for common use cases in CodeGen. DAGTypeLegalizer::SplitInteger
is a rare situation which requires a shift amount type larger than what
getScalarShiftAmountTy returns. In this case, it is more reasonable to do special
handling of the shift amount type in DAGTypeLegalizer::SplitInteger only. If
similar situations arise, the logic may be moved to a separate function.
Differential Revision: https://reviews.llvm.org/D40320
llvm-svn: 318890
Fix the modeling of some loads and stores.
llvm-svn: 318884