path: root/llvm/lib
Commit message | Author | Age | Files | Lines
...
* Update BTVER2 sched numbers for SSE42 string instructions. (Andrew V. Tischenko, 2017-11-27; 1 file changed, -24/+30)
  Differential Revision: https://reviews.llvm.org/D39846
  llvm-svn: 319013
* [SelectionDAG] Teach SplitVecRes_SETCC to call GetSplitVector if the operands have already been split. (Craig Topper, 2017-11-27; 1 file changed, -3/+12)
  llvm-svn: 319010
* [SelectionDAG] Fix function name in comment. NFC (Craig Topper, 2017-11-27; 1 file changed, -2/+2)
  llvm-svn: 319009
* [X86] Fix an assert that was incorrectly checking for BMI instead of AVX512VBMI. (Craig Topper, 2017-11-26; 1 file changed, -2/+1)
  The check is actually unnecessary since AVX512VBMI implies AVX512BW, which is the other part of the assert.
  llvm-svn: 319006
* [X86][3DNow] Add 3DNow! instruction itinerary and scheduling classes (Simon Pilgrim, 2017-11-26; 2 files changed, -37/+84)
  llvm-svn: 319005
* [X86][3DNow] Remove unused I3DNow_binop_rm/I3DNow_conv_rm templates. NFCI (Simon Pilgrim, 2017-11-26; 1 file changed, -11/+0)
  llvm-svn: 319000
* [X86][MMX] Add IIC_MMX_MOVMSK instruction itinerary class (Simon Pilgrim, 2017-11-26; 3 files changed, -2/+4)
  llvm-svn: 318999
* [SCEV] Adding a check on outgoing branches of a terminator instr for SCEVBackedgeConditionFolder, NFC. (Jatin Bhateja, 2017-11-26; 1 file changed, -10/+13)
  Summary: For a given loop, getLoopLatch returns a non-null value only when the loop has a single latch block. In that context, this adds an assertion checking that the two outgoing branches of the latch's terminator instruction do not both target the same header, plus a few minor code reorganizations.
  Reviewers: jbhateja
  Reviewed By: jbhateja
  Subscribers: sanjoy
  Differential Revision: https://reviews.llvm.org/D40460
  llvm-svn: 318997
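For readers unfamiliar with the check being described, here is a minimal hedged sketch of such a latch-successor assertion; the function name and exact placement are assumptions for illustration, not the patch's actual code, though the LoopInfo/BranchInst calls are the standard LLVM APIs.

```cpp
#include "llvm/Analysis/LoopInfo.h"
#include "llvm/IR/Instructions.h"
#include <cassert>
using namespace llvm;

// Sketch only: with a unique latch, a conditional branch in the latch must
// not target the loop header on both of its outgoing edges.
void checkLatch(const Loop *L) {
  if (BasicBlock *Latch = L->getLoopLatch())
    if (auto *BI = dyn_cast<BranchInst>(Latch->getTerminator()))
      if (BI->isConditional()) {
        BasicBlock *Header = L->getHeader();
        assert(!(BI->getSuccessor(0) == Header &&
                 BI->getSuccessor(1) == Header) &&
               "Both latch successors branch to the loop header");
      }
}
```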
* Control-Flow Enforcement Technology - Shadow Stack support (LLVM side) (Oren Ben Simhon, 2017-11-26; 10 files changed, -13/+98)
  The shadow stack solution introduces a new stack for return addresses only. The hardware has a Shadow Stack Pointer (SSP) that points to the next return address; if we return to a different address, an exception is triggered. The shadow stack is managed using a series of intrinsics introduced in this patch, along with the new register (SSP). The intrinsics are mapped to a new instruction set that implements the CET mechanism. The patch also includes initial infrastructure support for IBT.
  For more information, please see the following: https://software.intel.com/sites/default/files/managed/4d/2a/control-flow-enforcement-technology-preview.pdf
  Differential Revision: https://reviews.llvm.org/D40223
  Change-Id: I4daa1f27e88176be79a4ac3b4cd26a459e88fed4
  llvm-svn: 318996
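As a rough conceptual model of the hardware check the commit describes (everything below is illustrative only; the real mechanism is implemented in hardware and driven by the SSP register and the new intrinsics, not by code like this):

```cpp
#include <cstdint>
#include <vector>

// Toy model: a second stack that holds return addresses only.
struct ShadowStack {
  std::vector<uint64_t> slots;

  void onCall(uint64_t retAddr) { slots.push_back(retAddr); }

  // On return, the address taken from the normal stack must match the one
  // recorded on the shadow stack; a mismatch raises a control-protection
  // fault on real hardware.
  bool onReturn(uint64_t retAddr) {
    uint64_t expected = slots.back();
    slots.pop_back();
    return retAddr == expected;  // false => exception in the real mechanism
  }
};
```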
* [x86][icelake] GFNI (Coby Tayree, 2017-11-26; 11 files changed, -3/+175)
  Galois field arithmetic (GF(2^8)) instructions: gf2p8affineinvqb, gf2p8affineqb, gf2p8mulb.
  Differential Revision: https://reviews.llvm.org/D40373
  llvm-svn: 318993
* [SCEV] NFC: Removing unnecessary check on outgoing branches of a branch instr. (Jatin Bhateja, 2017-11-26; 1 file changed, -2/+1)
  Summary: For a given loop, getLoopLatch returns a non-null value only when the loop has a single latch block. In that context, checking whether both outgoing branches of the latch's terminator target the same header is redundant.
  Reviewers: jbhateja
  Reviewed By: jbhateja
  Subscribers: sanjoy
  Differential Revision: https://reviews.llvm.org/D40460
  llvm-svn: 318991
* [X86] Add separate intrinsics for scalar FMA4 instructions. (Craig Topper, 2017-11-25; 8 files changed, -24/+56)
  Summary: These instructions zero the non-scalar part of the lower 128 bits, which makes them different from the FMA3 instructions, which pass through the non-scalar part of the lower 128 bits. I've only added fmadd because we should be able to derive all other variants using operand negation in the intrinsic header, like we do for AVX512. I think there are still some missed negate-folding opportunities with the FMA4 instructions in light of this behavior difference that I hadn't noticed before.
  I've split the tests so that we can use different intrinsics for scalar testing between the two: I copied the tests, split the RUN lines, and changed out the scalar intrinsics. fma4-fneg-combine.ll is a new test to make sure we negate the FMA4 intrinsics correctly, though there are a couple of TODOs in it.
  Reviewers: RKSimon, spatel
  Reviewed By: RKSimon
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D39851
  llvm-svn: 318984
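A scalar model of the behavioral difference the summary describes may help; the type and function names below are purely illustrative, and the element semantics are paraphrased from the commit message rather than taken from the ISA manual.

```cpp
#include <array>
#include <cmath>

using Vec4f = std::array<float, 4>;

// FMA4 scalar form as described: compute the scalar result in element 0 and
// zero the remaining elements of the lower 128 bits.
Vec4f fma4_ss(const Vec4f &a, const Vec4f &b, const Vec4f &c) {
  return {std::fma(a[0], b[0], c[0]), 0.0f, 0.0f, 0.0f};
}

// FMA3 scalar form as described: compute the scalar result in element 0 and
// pass the non-scalar part of one source operand through unchanged.
Vec4f fma3_ss(const Vec4f &a, const Vec4f &b, const Vec4f &c) {
  return {std::fma(a[0], b[0], c[0]), a[1], a[2], a[3]};
}
```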
* [X86] Don't report gather is legal on Skylake CPUs when AVX2/AVX512 is disabled. Allow gather on SKX/CNL/ICL when AVX512 is disabled by using AVX2 instructions. (Craig Topper, 2017-11-25; 4 files changed, -14/+35)
  Summary: This adds a new fast gather feature bit to cover all CPUs that support fast gather that we can use independent of whether the AVX512 feature is enabled. I'm only using this new bit to qualify AVX2 codegen. AVX512 is still implicitly assuming fast gather to keep tests working and to match the scatter behavior. Test command lines have been added for these two cases.
  Reviewers: magabari, delena, RKSimon, zvi
  Reviewed By: RKSimon
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D40282
  llvm-svn: 318983
* [SelectionDAG] Remove some dead code from vector scalarization (Craig Topper, 2017-11-25; 1 file changed, -16/+1)
  Summary: Currently ScalarizeVecRes_SETCC checks for the result type being a vector and jumps to ScalarizeVecRes_VSETCC. But if we're scalarizing a vector result, aren't we guaranteed to be looking at a vector type? This patch deletes the current ScalarizeVecRes_SETCC and renames ScalarizeVecRes_VSETCC to ScalarizeVecRes_SETCC.
  Reviewers: RKSimon, arsenm, eladcohen, zvi
  Reviewed By: RKSimon
  Subscribers: wdng, llvm-commits
  Differential Revision: https://reviews.llvm.org/D40452
  llvm-svn: 318982
* Add BTVER2 sched support for SHLD/SHRD. (Andrew V. Tischenko, 2017-11-25; 1 file changed, -0/+24)
  Differential Revision: https://reviews.llvm.org/D40124
  llvm-svn: 318977
* [X86] Simplify some code in combineSetCC. NFCI (Craig Topper, 2017-11-25; 1 file changed, -10/+6)
  Make the condition for doing a std::swap simpler so we don't have to repeat the full checks.
  llvm-svn: 318970
* [X86] Qualify some vector-specific code with VT.isVector(). NFCI (Craig Topper, 2017-11-25; 1 file changed, -2/+2)
  Other checks inside require a build_vector, but this lets us stop earlier and makes the code clearer.
  llvm-svn: 318969
* [X86] Support folding to andnps with SSE1 only. (Craig Topper, 2017-11-25; 1 file changed, -1/+4)
  With SSE1 only, we emit FAND and FXOR nodes for v4f32.
  llvm-svn: 318968
* [X86] Add some early DAG combines to turn v4i32 AND/OR/XOR into FAND/FOR/FXOR when only SSE1 is available. (Craig Topper, 2017-11-25; 1 file changed, -6/+31)
  v4i32 isn't a legal type with SSE1 only and would otherwise end up getting scalarized. This isn't completely ideal as it doesn't handle cases like v8i32 that would get split to v4i32, but it at least helps with code written using the clang intrinsic header.
  llvm-svn: 318967
* Recommit r318963 "[APInt] Don't print debug messages from the APInt Knuth division algorithm by default" (Craig Topper, 2017-11-24; 1 file changed, -0/+10)
  The previous commit had the condition in the do/while backwards.
  Debug builds currently print out low-level details of the Knuth division algorithm when -debug is used. This information isn't useful in most cases and just adds noise to the log. This adds a new preprocessor flag to enable the prints in the Knuth division code in APInt.
  Differential Revision: https://reviews.llvm.org/D40404
  llvm-svn: 318966
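A minimal sketch of the general technique (gating verbose traces behind an opt-in preprocessor flag) is below; the flag name, macro name, and function are hypothetical illustrations, not the identifiers used in the actual patch. The comment also notes the do/while pitfall the recommit message refers to.

```cpp
#include <cstdio>

// Hypothetical names: traces compile to nothing unless the opt-in flag is
// defined at build time.  Pitfall: the wrapper must end in "while (false)";
// ending it in "while (true)" turns the statement into an infinite loop.
#ifdef ENABLE_APINT_KNUTH_DEBUG
#define KNUTH_DEBUG(X) do { X; } while (false)
#else
#define KNUTH_DEBUG(X) do { } while (false)
#endif

void knuthDivStep(unsigned m, unsigned n) {
  KNUTH_DEBUG(std::fprintf(stderr, "KnuthDiv: m=%u n=%u\n", m, n));
  // ... the actual division work would go here ...
}
```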
* [X86] Prevent using X * rsqrt(X) to approximate sqrt when only SSE1 is enabled. (Craig Topper, 2017-11-24; 1 file changed, -1/+4)
  This optimization can occur after type legalization and emit a vselect with v4i32 type. But that type is not legal with SSE1. This ultimately gets scalarized by the second type legalization that runs after vector op legalization, but that's really intended to handle the scalar types that might be introduced by legalizing vector ops. For now just stop this from happening by disabling the optimization with SSE1.
  llvm-svn: 318965
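For context, the optimization being restricted is the standard reciprocal-square-root approximation. A hedged SSE1 sketch of the idea follows (the function name is illustrative and this is not the code the backend emits; the intrinsics themselves are the usual SSE1 ones from xmmintrin.h):

```cpp
#include <xmmintrin.h>  // SSE1 intrinsics

// sqrt(x) ~= x * rsqrt(x), with one Newton-Raphson refinement of rsqrt:
//   y' = y * (1.5 - 0.5 * x * y * y)
// Note: this sketch does not special-case x == 0, where the approximation
// produces NaN.
__m128 approx_sqrt_ps(__m128 x) {
  __m128 y = _mm_rsqrt_ps(x);
  __m128 half_x_yy = _mm_mul_ps(_mm_mul_ps(_mm_set1_ps(0.5f), x),
                                _mm_mul_ps(y, y));
  y = _mm_mul_ps(y, _mm_sub_ps(_mm_set1_ps(1.5f), half_x_yy));
  return _mm_mul_ps(x, y);  // X * rsqrt(X)
}
```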
* Revert 318963 "[APInt] Don't print debug messages from the APInt Knuth division algorithm by default" (Craig Topper, 2017-11-24; 1 file changed, -10/+0)
  I seem to have botched the logic when switching to push_macro.
  llvm-svn: 318964
* [APInt] Don't print debug messages from the APInt Knuth division algorithm by default (Craig Topper, 2017-11-24; 1 file changed, -0/+10)
  Debug builds currently print out low-level details of the Knuth division algorithm when -debug is used. This information isn't useful in most cases and just adds noise to the log. This adds a new preprocessor flag to enable the prints in the Knuth division code in APInt.
  Differential Revision: https://reviews.llvm.org/D40404
  llvm-svn: 318963
* [CodeGenPrepare] Check that erased sunken addresses are not reused (Simon Dardis, 2017-11-24; 1 file changed, -5/+14)
  CodeGenPrepare sinks address computations from one basic block to another and attempts to reuse address computations that have already been sunk. If the same address computation appears twice, with the first instance as an operand of a load whose result is an operand to a simplifiable select, CodeGenPrepare simplifies the select and recursively erases the now-dead instructions. CodeGenPrepare then attempts to use the erased address computation for the second load.
  Fix this by erasing the cached address value if it has zero uses before looking for the address value in the sunken address map.
  This partially resolves PR35209. Thanks to Alexander Richardson for reporting the issue!
  This fixed version relands r318032, which was reverted in r318049 due to sanitizer buildbot failures.
  Reviewers: john.brawn
  Differential Revision: https://reviews.llvm.org/D39841
  llvm-svn: 318956
* [AMDGPU][MC][GFX9] Added v_interp_p2_f16 and v_interp_p2_legacy_f16 (Dmitry Preobrazhensky, 2017-11-24; 1 file changed, -2/+18)
  See bug 33629: https://bugs.llvm.org//show_bug.cgi?id=33629
  Reviewers: artem.tamazov, SamWot, arsenm
  Differential Revision: https://reviews.llvm.org/D39488
  llvm-svn: 318955
* Make helpers static. NFC. (Benjamin Kramer, 2017-11-24; 4 files changed, -12/+14)
  llvm-svn: 318953
* [CGP] Make optimizeMemoryInst able to combine more kinds of ExtAddrMode fields (John Brawn, 2017-11-24; 1 file changed, -12/+94)
  This patch extends the recent work in optimizeMemoryInst to make it able to combine more ExtAddrMode fields than just the BaseReg. This fixes some benchmark regressions introduced by r309397, where GVN PRE is hoisting a getelementptr such that it can no longer be combined into the addressing mode of the load or store that uses it.
  Differential Revision: https://reviews.llvm.org/D38133
  llvm-svn: 318949
* [mips] Set microMIPS ASE flag (Aleksandar Beserminji, 2017-11-24; 2 files changed, -1/+4)
  This patch fixes an issue where the microMIPS ASE flag is not set when a function has the micromips attribute or when the .set micromips directive is used.
  Differential Revision: https://reviews.llvm.org/D40316
  llvm-svn: 318948
* [AMDGPU][MC][GFX9] Added support of 'inst_offset' modifier for compatibility with SP3 (Dmitry Preobrazhensky, 2017-11-24; 1 file changed, -2/+5)
  See bug 35329: https://bugs.llvm.org//show_bug.cgi?id=35329
  Reviewers: arsenm, vpykhtin, artem.tamazov
  Differential Revision: https://reviews.llvm.org/D40350
  llvm-svn: 318947
* [X86] Don't invert the NewCC variable while processing the jcc/setcc/cmovcc instructions in optimizeCompareInstr. (Craig Topper, 2017-11-23; 1 file changed, -7/+9)
  The NewCC variable is calculated outside of the loop that processes jcc/setcc/cmovcc instructions. If we invert it during the loop it can cause an incorrect value to be used by a later iteration. Instead, only read it during the loop and use a new variable to store the possibly inverted value.
  Fixes PR35399.
  llvm-svn: 318934
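The bug pattern here is the classic one of mutating a loop-invariant value inside the loop. A simplified, self-contained illustration of the fixed shape is below; only the variable name NewCC is taken from the commit message, everything else (types, helpers) is invented for the sketch.

```cpp
#include <vector>

enum CondCode { CC_E, CC_NE };
static CondCode invert(CondCode CC) { return CC == CC_E ? CC_NE : CC_E; }

// Buggy shape: NewCC is computed once, before the loop, but was inverted in
// place inside the loop, so the inversion leaked into later iterations.
// Fixed shape (shown here): invert a per-iteration copy and leave NewCC
// untouched.
void rewriteUsers(const std::vector<bool> &NeedsInversion, CondCode NewCC,
                  std::vector<CondCode> &Out) {
  for (bool Invert : NeedsInversion) {
    CondCode ReplacementCC = Invert ? invert(NewCC) : NewCC;  // local copy
    Out.push_back(ReplacementCC);                             // NewCC unchanged
  }
}
```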
* [X86] Teach isel that X86ISD::CMPM_RND zeros the upper bits of the mask register. (Craig Topper, 2017-11-23; 2 files changed, -1/+3)
  llvm-svn: 318933
* [X86] Remove some unneeded opcodes from getVectorMaskingNode. NFC (Craig Topper, 2017-11-23; 1 file changed, -3/+0)
  We never reach here with these opcodes.
  llvm-svn: 318932
* [X86] Add X86ISD::CMPM_RND to getVectorMaskingNode to select ISD::AND instead of ISD::VSELECT (Craig Topper, 2017-11-23; 1 file changed, -0/+1)
  A later DAG combine will turn the VSELECT into an AND, but we have the other mask compare opcodes here so add this one too.
  llvm-svn: 318931
* [X86] Remove some dead code leftover from when i1 was a legal type. NFCI (Craig Topper, 2017-11-23; 1 file changed, -22/+3)
  llvm-svn: 318930
* [X86] Remove some dead code. NFC (Craig Topper, 2017-11-23; 1 file changed, -4/+2)
  AVX512 code never reaches here so we don't need to handle X86ISD::CMPM as an opcode.
  llvm-svn: 318929
* MSan: remove an unnecessary cast. NFC for userspace instrumentation. (Alexander Potapenko, 2017-11-23; 1 file changed, -3/+3)
  llvm-svn: 318923
* [X86][SSE] Use (V)PHMINPOSUW for vXi16 SMAX/SMIN/UMAX/UMIN horizontal reductions (PR32841) (Simon Pilgrim, 2017-11-23; 5 files changed, -7/+77)
  (V)PHMINPOSUW determines the UMIN element in a v8i16 input; with suitable bit flipping it can also be used for the SMAX/SMIN/UMAX cases as well. This patch matches vXi16 SMAX/SMIN/UMAX/UMIN horizontal reductions and reduces the input down to a v8i16 vector before calling (V)PHMINPOSUW. A later patch will use this for v16i8 reductions as well (PR32841).
  Differential Revision: https://reviews.llvm.org/D39729
  llvm-svn: 318917
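The bit-flipping trick can be seen directly with the corresponding SSE4.1 intrinsic: PHMINPOSUW computes an unsigned v8i16 minimum, and XOR-ing with a suitable constant maps the other orderings onto it. A hedged sketch follows (function names are illustrative; this is not the backend's code, just the underlying identities):

```cpp
#include <smmintrin.h>  // SSE4.1: _mm_minpos_epu16
#include <cstdint>

// Unsigned minimum of eight u16 lanes: PHMINPOSUW leaves it in lane 0.
uint16_t umin_v8i16(__m128i v) {
  return (uint16_t)_mm_extract_epi16(_mm_minpos_epu16(v), 0);
}

// Signed minimum: XOR-ing with 0x8000 maps signed order onto unsigned order,
// so the unsigned-minimum instruction can be reused and the bias undone.
int16_t smin_v8i16(__m128i v) {
  __m128i biased = _mm_xor_si128(v, _mm_set1_epi16((short)0x8000));
  return (int16_t)(umin_v8i16(biased) ^ 0x8000);
}

// Unsigned maximum via umax(x) == ~umin(~x).
uint16_t umax_v8i16(__m128i v) {
  __m128i inverted = _mm_xor_si128(v, _mm_set1_epi16((short)0xFFFF));
  return (uint16_t)~umin_v8i16(inverted);
}
```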
* [ARM GlobalISel] Support G_FDIV for s32 and s64 (Diana Picus, 2017-11-23; 3 files changed, -3/+8)
  TableGen already generates code for selecting a G_FDIV, so we only need to add a test. For the legalizer and reg bank select, we do the same thing as for the other floating point binary operations: either mark as legal if we have a FP unit or lower to a libcall, and map to the floating point registers.
  llvm-svn: 318915
* [ARM GlobalISel] Support G_FMUL for s32 and s64 (Diana Picus, 2017-11-23; 3 files changed, -3/+8)
  TableGen already generates code for selecting a G_FMUL, so we only need to add a test for that part. For the legalizer and reg bank select, we do the same thing as the other floating point binary operators: either mark as legal if we have a FP unit or lower to a libcall, and map to the floating point registers.
  llvm-svn: 318910
* [mips] Use the delay slot filler to convert branches for microMIPSR6. (Simon Dardis, 2017-11-23; 1 file changed, -10/+8)
  The MIPS delay slot filler converts delay slot branches into compact forms for the MIPS ISAs which support them. For branches that compare (in)equality with zero, it converts them into branches with implicit zero register operands. These branches have a slightly greater range than normal two-register-operand branches.
  Converting the branches at this point in the pipeline lets the long branch pass make better judgements about whether a long branch sequence is required.
  Reviewers: atanasyan
  Differential Revision: https://reviews.llvm.org/D40314
  llvm-svn: 318908
* [x86][icelake] BITALG (Coby Tayree, 2017-11-23; 5 files changed, -0/+45)
  2/3: vpshufbitqmb encoding
  3/3: vpshufbitqmb intrinsics
  Differential Revision: https://reviews.llvm.org/D40222
  llvm-svn: 318904
* [MSan] Move the access address check before the shadow access for that address (Alexander Potapenko, 2017-11-23; 1 file changed, -2/+1)
  MSan used to insert the shadow check of the store pointer operand _after_ the shadow of the value operand had been written. This happens to work in userspace, as the whole shadow range is always mapped. However, in the kernel the shadow page may not exist, so the bug may cause a crash. This patch moves the address check in front of the shadow access.
  llvm-svn: 318901
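Conceptually, the instrumented store changes from "write the shadow, then check the address" to "check the address, then write the shadow." A toy sketch of the new ordering is below; none of these names correspond to MSan's real runtime API, and the shadow map is a stand-in for real shadow memory.

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>

// Placeholder model of the instrumentation around "*p = v".
static std::unordered_map<uintptr_t, int> Shadow;  // toy shadow memory

static void check_access(void *p) { assert(p != nullptr && "bad address"); }
static int &shadow_for(void *p) { return Shadow[(uintptr_t)p]; }

void instrumented_store(int *p, int v, int v_shadow) {
  check_access(p);           // validate the address first (the fix)...
  shadow_for(p) = v_shadow;  // ...then touch its shadow, which in the kernel
  *p = v;                    //    may live on an unmapped page
}
```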
* [IRCE][NFC] Add no-wrap flags to no-wrapping SCEV calculation (Max Kazantsev, 2017-11-23; 1 file changed, -2/+3)
  In a lambda where we expect the result to be within bounds, add the respective `nsw`/`nuw` flags to help SCEV, in case it fails to figure them out on its own.
  Differential Revision: https://reviews.llvm.org/D40168
  llvm-svn: 318898
* Add backend name to AVR Target to enable runtime info to be fed back into TableGen (Leslie Zhai, 2017-11-23; 1 file changed, -1/+1)
  llvm-svn: 318895
* [X86] Turn an if condition that should always be true into an assert. NFCI (Craig Topper, 2017-11-23; 1 file changed, -44/+43)
  If Values.size() == 0, we should have returned 0 or undef earlier. If it was 1, it's a splat and we already handled that too.
  llvm-svn: 318894
* [X86] Remove unnecessary check for is128BitVector. NFC (Craig Topper, 2017-11-23; 1 file changed, -1/+1)
  256 and 512 bit vectors were picked off earlier in the function. Lots of code between there and here already assumed 128-bit vectors.
  llvm-svn: 318893
* [X86] Simplify some bitmasking and use llvm_unreachable to mark an impossible case. NFC (Craig Topper, 2017-11-23; 1 file changed, -2/+2)
  llvm-svn: 318892
* [X86] Remove a ternary operator that can only ever be false. NFC (Craig Topper, 2017-11-23; 1 file changed, -2/+1)
  We are checking for AVX512 in an SSE1-only block.
  llvm-svn: 318891
* [NFC] CodeGen: Handle shift amount type in DAGTypeLegalizer::SplitInteger (Yaxun Liu, 2017-11-23; 2 files changed, -13/+9)
  This patch reverts the change to X86TargetLowering::getScalarShiftAmountTy in rL318727 and moves the logic to DAGTypeLegalizer::SplitInteger.
  The reason is that getScalarShiftAmountTy returns a shift amount type that is suitable for common use cases in CodeGen. DAGTypeLegalizer::SplitInteger is a rare situation which requires a shift amount type larger than what getScalarShiftAmountTy returns. In this case, it is more reasonable to handle the shift amount type specially in DAGTypeLegalizer::SplitInteger only. If similar situations arise, the logic may be moved to a separate function.
  Differential Revision: https://reviews.llvm.org/D40320
  llvm-svn: 318890
* [AArch64] Adjust the cost model for Exynos M1 and M2 (Evandro Menezes, 2017-11-22; 1 file changed, -12/+17)
  Fix the modeling of some loads and stores.
  llvm-svn: 318884