path: root/llvm/lib/Target/X86
Commit message · Author · Age · Files · Lines
...
* [X86] Fix f128->i16 fptosi to promote the i16 to i32 before trying to form a libcall. (Craig Topper, 2019-11-20; 1 file, -15/+16)
    Previously one of the test cases added here gave an error.
* [X86] Mark vector STRICT_FP_ROUND as Legal instead of Custom. (Craig Topper, 2019-11-20; 1 file, -3/+9)
    The Custom handler doesn't do anything for these nodes anyway.
    SelectionDAGISel won't mutate them if they are Legal or Custom. X86
    has custom code for mutating them due to missing isel patterns. When
    the isel patterns are added, Legal will be the right answer, so go
    ahead and change it now since that's where we'll end up.
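    A minimal sketch of the kind of registration this flips, assuming the
    usual X86TargetLowering constructor pattern (the type shown is
    illustrative, not the exact list from the patch):

        // In the X86TargetLowering constructor: the strict node's action
        // changes from Custom to Legal; SelectionDAGISel's mutation (or,
        // later, real isel patterns) handles the rest.
        setOperationAction(ISD::STRICT_FP_ROUND, MVT::v4f32, Legal); // was Custom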
* [SelectionDAG][X86] Mutate strictFP nodes to non-strict in DoInstructionSelection when the node is marked Expand rather than when it is not Legal. (Craig Topper, 2019-11-20; 1 file, -0/+7)
    This allows operations that are marked Custom, but have some type
    combinations that are legal, to get past this code. Add custom
    mutation code to X86's Select function for the nodes that don't have
    isel patterns yet.
* [musttail] Don't forward AL on Win64 (Reid Kleckner, 2019-11-19; 1 file, -2/+2)
    AL is only used for varargs on SysV platforms. Don't forward it on
    Windows. This allows control flow guard to set up an extra hidden
    parameter in RAX, as described in PR44049. This also has the effect
    of freeing up RAX for use in virtual member pointer thunks, which may
    also be a nice little code size improvement on Win64.

    Fixes PR44049

    Reviewers: ajpaverd, efriedma, hans
    Differential Revision: https://reviews.llvm.org/D70413
* [LegalizeDAG][X86] Enable STRICT_FP_TO_SINT/UINT to be promoted (Craig Topper, 2019-11-19; 1 file, -4/+7)
    Differential Revision: https://reviews.llvm.org/D70220
* [X86] Add custom type legalization and lowering for scalar STRICT_FP_TO_SINT/UINT (Craig Topper, 2019-11-19; 2 files, -30/+119)
    This is a first pass at Custom lowering for these operations. I also
    updated some of the vector code where it was obviously easy and
    straightforward. More work is needed in follow-ups.

    This enables these operations to be handled with X87, where special
    rounding control adjustments are needed to perform a truncate.

    Still need to fix Promotion in the target-independent code in
    LegalizeDAG. llrint/llround are split into a separate test file
    because we can't make a strict libcall properly yet either, and we
    need to do that when i64 isn't a legal type.

    This does not include any isel support, so we still rely on the
    mutation in SelectionDAGISel to remove the strict from this stuff
    later, except for the X87 handling, which goes through custom nodes
    that already had chains.

    Differential Revision: https://reviews.llvm.org/D70214
* DAG: Add function context to isFMAFasterThanFMulAndFAdd (Matt Arsenault, 2019-11-19; 2 files, -3/+4)
    AMDGPU needs to know the FP mode for the function to answer this
    correctly when this is removed from the subtarget. AArch64 had to
    make this more complicated by using this from an IR hook, so add an
    IR-typed overload.
* [X86][SSE] Remove XFormVExtractWithShuffleIntoLoad to prevent legalization infinite loops (PR43971) (Simon Pilgrim, 2019-11-19; 1 file, -122/+2)
    As detailed in PR43971/D70267, the use of
    XFormVExtractWithShuffleIntoLoad causes issues where we end up in
    infinite loops of extract(targetshuffle(vecload)) ->
    extract(shuffle(vecload)) -> extract(vecload) ->
    extract(targetshuffle(vecload)). There are just too many legalization
    checks at every stage for us to guarantee that the
    extract(shuffle(vecload)) -> scalarload fold can occur.

    At the moment we see a number of minor regressions as we don't fold
    extract(shuffle(vecload)) -> scalarload before legal ops; these can
    be addressed in future patches and by extension of X86ISelLowering's
    combineExtractWithShuffle.
* [X86] Add a 'break;' to the end of the last case in a switch to avoid surprising the next person to add a case after this one. NFC (Craig Topper, 2019-11-18; 1 file, -0/+2)
* [SVE][CodeGen] Scalable vector MVT size queries (Graham Hunter, 2019-11-18; 1 file, -4/+5)
    * Implements scalable size queries for MVTs, split out from D53137.
    * Contains a fix for FindMemType to avoid using a scalable vector
      type to contain non-scalable types.
    * Explicit casts for several places where implicit integer sign
      changes or promotion from 32 to 64 bits caused problems.
    * CodeGenDAGPatterns will treat scalable and non-scalable vector
      types as different.

    Reviewers: greened, cameron.mcinally, sdesmalen, rovka
    Reviewed By: rovka
    Differential Revision: https://reviews.llvm.org/D66871
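    A hedged illustration of what such a size query changes: the size of
    a scalable MVT is a (minimum width, scalable) pair rather than a
    plain integer, so raw bit counts can no longer be compared across the
    two kinds. The exact API shape below (a TypeSize returned by
    getSizeInBits) is my reading of the patch, so treat it as an
    assumption:

        #include "llvm/Support/MachineValueType.h"
        using namespace llvm;

        bool sameSize() {
          TypeSize Fixed = MVT(MVT::v4i32).getSizeInBits();      // 128 bits, fixed
          TypeSize Scalable = MVT(MVT::nxv4i32).getSizeInBits(); // vscale x 128 bits
          return Fixed == Scalable; // false: the scalable flag differs
        }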
* [WinEH] Fix wrong alignment orientation when calculating the EH frame. (Wang, Pengfei, 2019-11-15; 1 file, -1/+1)
    Summary: This is a bug fix for further issues in PR43585.

    Reviewers: rnk, RKSimon, craig.topper, andrew.w.kaylor
    Subscribers: hiraditya, llvm-commits, annita.zhang
    Tags: #llvm
    Differential Revision: https://reviews.llvm.org/D70224
* Sink all InitializePasses.h includes (Reid Kleckner, 2019-11-13; 3 files, -0/+3)
    This file lists every pass in LLVM, and is included by Pass.h, which
    is very popular. Every time we add, remove, or rename a pass in LLVM,
    it causes lots of recompilation.

    I found this fact by looking at this table, which is sorted by the
    number of times a file was changed over the last 100,000 git commits
    multiplied by the number of object files that depend on it in the
    current checkout:

      recompiles  touches  affected_files  header
      342380      95       3604            llvm/include/llvm/ADT/STLExtras.h
      314730      234      1345            llvm/include/llvm/InitializePasses.h
      307036      118      2602            llvm/include/llvm/ADT/APInt.h
      213049      59       3611            llvm/include/llvm/Support/MathExtras.h
      170422      47       3626            llvm/include/llvm/Support/Compiler.h
      162225      45       3605            llvm/include/llvm/ADT/Optional.h
      158319      63       2513            llvm/include/llvm/ADT/Triple.h
      140322      39       3598            llvm/include/llvm/ADT/StringRef.h
      137647      59       2333            llvm/include/llvm/Support/Error.h
      131619      73       1803            llvm/include/llvm/Support/FileSystem.h

    Before this change, touching InitializePasses.h would cause 1345
    files to recompile. After this change, touching it only causes 550
    compiles in an incremental rebuild.

    Reviewers: bkramer, asbirlea, bollu, jdoerfert
    Differential Revision: https://reviews.llvm.org/D70211
* [X86] Don't treat mxcsr as a register name when parsing MS inline assembly. (Craig Topper, 2019-11-13; 1 file, -2/+3)
    No instruction takes mxcsr as an operand, so we should always treat
    it as an identifier name.
* [X86] Don't set the operation action for i16 SINT_TO_FP to Promote just because SSE1 is enabled. (Craig Topper, 2019-11-13; 1 file, -3/+9)
    Instead do custom promotion in the handler so that we can still allow
    i16 to be used with fp80, and with f64 without SSE2.
* [X86] Fix typo in comment. NFC (Craig Topper, 2019-11-13; 1 file, -1/+1)
* [X86] Move all the FP_TO_XINT/XINT_TO_FP setOperationActions into the same !useSoftFloat block. Qualify all of the Promote actions for these with !useSoftFloat too. NFCI (Craig Topper, 2019-11-13; 1 file, -41/+28)
    The Promote action doesn't apply until LegalizeDAG. By the time we
    get there, we would have already softened all the FP operations if
    useSoftFloat was true. So there wouldn't be any operation left to
    Promote.
* [X86][AVX] Add plausible schedule classes to MASKPAIR/VP2INTERSECT/VDPBF16PS instructions (Simon Pilgrim, 2019-11-13; 1 file, -20/+24)
    These are really just placeholders that use approximately the right
    resources - once we have CPU scheduler models that support these
    instructions they will need revisiting. In the meantime this means
    that all instructions have a class of some kind, meaning models can
    be more easily flagged as complete.
* [X86] Remove setOperationAction for FP_TO_SINT v8i16. (Craig Topper, 2019-11-12; 2 files, -8/+3)
    This is no longer needed after widening legalization as we custom
    legalize v8i8 ourselves.

    Added entries to the cost model, but bumped the cost slightly to
    account for the truncate shuffle that wasn't costed before.
* [X86] Don't consider v64i1 as a legal type unless v64i8 is also a legal type. (Craig Topper, 2019-11-12; 1 file, -25/+47)
    This avoids some nasty issues with argument passing and lowering of
    arbitrary v64i8 shuffles.
* [X86] Only pass v64i8/v32i16 as v16i32 on non-avx512bw targets if the v16i32 type won't be split by prefer-vector-width=256 (Craig Topper, 2019-11-12; 1 file, -4/+4)
    Otherwise just let the v64i8/v32i16 types be split to v32i8/v16i16.
    In reality this shouldn't happen because it means we have a 512-bit
    vector argument, but min-legal-vector-width says a value less than
    512. But a 512-bit argument should have been factored into the
    preferred vector width.
* [X86] Update stale comment. NFC (Craig Topper, 2019-11-11; 1 file, -2/+2)
* [X86] Remove setOperationAction lines that say to promote MVT::i1 (Craig Topper, 2019-11-11; 1 file, -6/+0)
    MVT::i1 should be removed by type legalization before we reach any
    code that would act on the promote action. Mainly to avoid
    replicating this for strict FP versions of these operations.
* [X86] Remove some else branches after checking for !useSoftFloat() that set operations to Expand. (Craig Topper, 2019-11-11; 1 file, -9/+0)
    If we're using soft floats, then these operations should be softened
    during type legalization. They'll never get to LegalizeVectorOps or
    LegalizeDAG, so they don't need to be Expanded there.
* Use MCRegister in copyPhysReg (Matt Arsenault, 2019-11-11; 2 files, -3/+3)
* [X86] Handle MO_ConstantPoolIndex in X86AsmPrinter::PrintOperand (Craig Topper, 2019-11-09; 1 file, -0/+1)
    Fixes PR43952
* [AArch64][X86] Don't assume __powidf2 is available on Windows. (Eli Friedman, 2019-11-08; 1 file, -0/+6)
    We had some code for this for 32-bit ARM, but this doesn't really
    need to be in target-specific code; generalize it.

    (I think this started showing up recently because we added an
    optimization that converts pow to powi.)

    Differential Revision: https://reviews.llvm.org/D69013
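    For context, llvm.powi is an integer-exponent power, and when the
    __powidf2 libcall isn't available it has to be expanded inline. A
    self-contained sketch of the square-and-multiply semantics the
    libcall implements (illustrative, not the backend's actual expansion
    code):

        // powi(x, n): O(log n) multiplies, matching __powidf2's semantics.
        double powi(double X, int N) {
          bool Neg = N < 0;
          unsigned UN = Neg ? 0u - (unsigned)N : (unsigned)N;
          double R = 1.0;
          for (double B = X; UN; UN >>= 1, B *= B)
            if (UN & 1)
              R *= B;               // multiply in the set bits of the exponent
          return Neg ? 1.0 / R : R; // negative exponent: reciprocal
        }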
* Reland: [TII] Use optional destination and source pair as a return value; NFC (Djordje Todorovic, 2019-11-08; 2 files, -13/+9)
    Refactor usage of the isCopyInstrImpl, isCopyInstr and isAddImmediate
    methods to return an optional machine-operand pair of destination and
    source registers.

    Patch by Nikola Prica
    Differential Revision: https://reviews.llvm.org/D69622
* X86FrameLowering - fix bool-to-unsigned cast static analyzer warnings. NFCI. (Simon Pilgrim, 2019-11-07; 1 file, -7/+7)
* X86CondBrFolding - remove non-existent fixBranchProb function. NFC. (Simon Pilgrim, 2019-11-07; 1 file, -2/+0)
* [X86] Remove unused variable. NFC (Craig Topper, 2019-11-06; 1 file, -1/+0)
* [X86] Remove dead code from combineStore. (Craig Topper, 2019-11-06; 1 file, -44/+10)
    Leftovers from before we switched to widening legalization.

    Fixes PR43919.
* [X86] Clamp large constant shift amounts for MMX shift intrinsics to 8 bits. (Craig Topper, 2019-11-06; 1 file, -2/+5)
    The MMX intrinsics for shift by immediate take a 32-bit shift amount,
    but the hardware for shifting by immediate only encodes 8 bits. For
    the intrinsic we don't require the shift amount to fit in 8 bits in
    the frontend, because we don't check that it's an immediate in the
    frontend. If it is not an immediate, we move it to an MMX register
    and use the shift by register. But if it is an immediate, we'll use
    the shift by immediate instruction and we need to change the shift
    amount to 8 bits.

    We were previously doing this accidentally by masking it in the
    encoder. But this can turn a large shift amount into a small
    in-bounds shift amount. Instead we should clamp larger shift amounts
    to 255 so that they don't become in-bounds.

    Fixes PR43922
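    A small, self-contained illustration of why masking was wrong and
    clamping is right (the intrinsic is real; the pre-fix behavior shown
    in the comment is my reading of the report):

        #include <mmintrin.h>

        // Shift each 32-bit lane left by 256: any count >= 32 must zero
        // the lanes. Masking 256 to 8 bits gave 0, i.e. an in-bounds
        // shift-by-zero that returned the input unchanged; clamping to
        // 255 keeps the count out of bounds so the lanes are zeroed.
        __m64 shift_by_256(__m64 V) {
          return _mm_slli_pi32(V, 256);
        }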
* [X86TargetTransformInfo] Fixed warning: Expression 'ISD == ISD::UREM' is always true. NFCI. (Dávid Bolvanský, 2019-11-06; 1 file, -1/+1)
* [X86] Fix SLM v2i64 ADD/SUB/CMPEQ instruction schedules (Simon Pilgrim, 2019-11-06; 1 file, -0/+16)
    Noticed while fixing the reduction costs for D59710 - the SLM model
    doesn't account for the poor throughput of v2i64 ops.

    Numbers taken from Intel AOM (+ checked against Agner).
* [X86] Fix SLM v2f64 ADD/MUL + FP BLEND/HADD instruction schedules (Simon Pilgrim, 2019-11-06; 1 file, -7/+7)
    Noticed while fixing the reduction costs for D59710 - the SLM model
    doesn't account for the poor throughput of v2f64/v2i64 ops.
* [X86ISelLowering] Fixed typo in assert. NFCI. (Dávid Bolvanský, 2019-11-06; 1 file, -1/+1)
* [CostModel][X86] Improve add vXi64 + fadd vXf64 reduction tests for SLM (Simon Pilgrim, 2019-11-06; 1 file, -0/+26)
    As noted on D59710 we weren't handling the high costs of these
    operations on SLM.
* [x86] avoid crashing when splitting AVX stores with non-simple type (PR43916) (Sanjay Patel, 2019-11-06; 1 file, -3/+5)
    The store splitting transform was assuming a simple type (MVT), but
    that's not necessarily the case as shown in the test.
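    A hedged sketch of the shape of the fix: the splitting logic needs an
    MVT (a "simple" type) to compute the half-size pieces, so it should
    bail out first when the EVT has no MVT equivalent. The names below
    are illustrative, not the actual combineStore code:

        // Guard before splitting a 256-bit store into two 128-bit halves.
        static bool canSplitStore(llvm::EVT StoredVT) {
          if (!StoredVT.isSimple()) // no MVT equivalent -> don't split
            return false;
          return StoredVT.isVector() && StoredVT.getSizeInBits() == 256;
        }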
* [X86] Fix uninitialized variable warnings. NFCI. (Simon Pilgrim, 2019-11-06; 7 files, -26/+26)
* [X86] LowerAVXExtend - fix dodgy self-comparison assert. (Simon Pilgrim, 2019-11-06; 1 file, -1/+1)
    PVS Studio noticed that we were asserting "VT.getVectorNumElements()
    == VT.getVectorNumElements()" instead of "VT.getVectorNumElements()
    == InVT.getVectorNumElements()".
* [X86] Gate select->fmin/fmax transform on NoSignedZeros instead of UnsafeFPMath (Benjamin Kramer, 2019-11-05; 1 file, -8/+7)
* [X86/Atomics] (Semantically) revert G246098, switch back to the old atomic example (Philip Reames, 2019-11-05; 1 file, -1/+1)
    When writing an email for a follow up proposal, I realized one of the
    diffs in the committed change was incorrect. Digging into it revealed
    that the fix is complicated enough to require some thought, so
    reverting in the meantime.

    The problem is visible in this diff (from the revert):

      ; X64-SSE-LABEL: store_fp128:
      ; X64-SSE:       # %bb.0:
      -; X64-SSE-NEXT:    movaps %xmm0, (%rdi)
      +; X64-SSE-NEXT:    subq $24, %rsp
      +; X64-SSE-NEXT:    .cfi_def_cfa_offset 32
      +; X64-SSE-NEXT:    movaps %xmm0, (%rsp)
      +; X64-SSE-NEXT:    movq (%rsp), %rsi
      +; X64-SSE-NEXT:    movq {{[0-9]+}}(%rsp), %rdx
      +; X64-SSE-NEXT:    callq __sync_lock_test_and_set_16
      +; X64-SSE-NEXT:    addq $24, %rsp
      +; X64-SSE-NEXT:    .cfi_def_cfa_offset 8
      ; X64-SSE-NEXT:    retq
        store atomic fp128 %v, fp128* %fptr unordered, align 16
        ret void

    The problem here is threefold:
    1) x86-64 doesn't guarantee atomicity of anything larger than 8
       bytes. Some platforms observably break this guarantee, others
       don't, but the codegen isn't considering this, so it's wrong on at
       least some platforms.
    2) When I started to track down the problem, I discovered that
       DAGCombiner had stripped the atomicity off the store entirely.
       This comes down to idiomatic usage of DAG.getStore passing all MMO
       components separately as opposed to just passing the MMO.
    3) On x86 (not -64), there are cases where 8-byte atomicity is
       supported, but only for floating point operations. This would seem
       to imply that operation typing matters for correctness, and
       DAGCombine happily folds away bitcasts. I'm not 100% sure there's
       a problem here, but I'm not entirely sure there isn't either.

    I plan on returning to each issue in turn; sorry for the churn here.
* [X86] Specifically limit fmin/fmax commutativity to NoNaNs + NoSignedZeros (Benjamin Kramer, 2019-11-05; 1 file, -2/+3)
    The backend UnsafeFPMath flag is not a superset of all the others, so
    limit it to the exact bits needed.
* [globalisel] Rename G_GEP to G_PTR_ADD (Daniel Sanders, 2019-11-05; 3 files, -8/+8)
    Summary: G_GEP is rather poorly named. It's a simple pointer+scalar
    addition and doesn't support any of the complexities of
    getelementptr. I therefore propose that we rename it. There's a
    G_PTR_MASK so let's follow that convention and go with G_PTR_ADD.

    Reviewers: volkan, aditya_nandakumar, bogner, rovka, arsenm
    Subscribers: sdardis, jvesely, wdng, nhaehnle, hiraditya, jrtc27,
    atanasyan, arphaman, Petar.Avramovic, llvm-commits
    Tags: #llvm
    Differential Revision: https://reviews.llvm.org/D69734
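    A before/after sketch at the MIR level; whether the C++
    MachineIRBuilder helper was renamed in this same patch is my
    assumption, so treat the helper name as illustrative:

        // old MIR: %2:_(p0) = G_GEP %0:_(p0), %1:_(s64)
        // new MIR: %2:_(p0) = G_PTR_ADD %0:_(p0), %1:_(s64)
        //
        // Builder-side, a plain pointer + byte-offset addition:
        auto PtrPlusOff = MIRBuilder.buildPtrAdd(PtrTy, BasePtr, OffsetReg);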
* [X86] Lower the cost of avx512 horizontal bool and/or reductions to 2*log2(bitwidth)+1 for legal types. (Craig Topper, 2019-11-04; 1 file, -0/+21)
    This better represents the kshift+binop we'd get for each stage
    before the final extract. It's likely we'll do even better by doing a
    kmov and a cmp with a GPR, but this is a good start.

    The default handling was costing a worst case single source permute
    shuffle of the vector before the binop. This worst case assumes the
    shuffle might have to be emulated with extracts and inserts. But
    since we know we're doing a reduction we can assume we'll get kshift
    lowering.

    There's still some room for improvement here, but this is much better
    than it was.
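    A quick, self-contained sanity check of the formula (illustrative,
    not the cost model code): each of the log2(N) stages is a kshift plus
    a mask binop, and the final bit still has to be moved out of the mask
    register.

        // 2*log2(N) + 1 for an N-element bool reduction of a legal type.
        unsigned boolReductionCost(unsigned NumElts) {
          unsigned Stages = 0;
          while (NumElts > 1) { // log2 for power-of-two element counts
            NumElts >>= 1;
            ++Stages;
          }
          return 2 * Stages + 1; // v16i1 -> 9, v64i1 -> 13
        }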
* [X86] Teach X86MCInstLower to swap operands of commutable instructions to enable 2-byte VEX encoding. (Craig Topper, 2019-11-04; 1 file, -0/+46)
    Summary: Instructions with two commutable sources encode one source
    in the VEX.VVVV field and the other in the r/m field of the MODRM
    byte plus the VEX.B field. The VEX.B field is missing from the 2-byte
    VEX encoding. If the VEX.VVVV source is register 0-7 and the other
    source is register 8-15, we can swap them to avoid needing the VEX.B
    field. This works as long as the VEX.W, VEX.mmmmm, and VEX.X fields
    are also not needed.

    Fixes PR36706.

    Reviewers: RKSimon, spatel
    Reviewed By: RKSimon
    Subscribers: hiraditya, llvm-commits
    Tags: #llvm
    Differential Revision: https://reviews.llvm.org/D68550
* [X86] Add support for -mvzeroupper and -mno-vzeroupper to match gcc (Craig Topper, 2019-11-04; 4 files, -53/+68)
    -mvzeroupper will force the vzeroupper insertion pass to run on CPUs
    that normally wouldn't. -mno-vzeroupper disables it on CPUs where it
    normally runs.

    To support this with the default feature handling in clang, we need a
    vzeroupper feature flag in X86.td. Since this flag has the opposite
    polarity of the fast-partial-ymm-or-zmm-write flag we used to use to
    disable the pass, we now need to add this new flag to every CPU
    except KNL/KNM and BTVER2 to keep identical behavior.

    Remove -fast-partial-ymm-or-zmm-write which is no longer used.

    Differential Revision: https://reviews.llvm.org/D69786
* [X86] Fix uninitialized variable warnings. NFCI. (Simon Pilgrim, 2019-11-04; 10 files, -42/+42)
* [X86] Convert ShrinkMode to scoped enum class. NFCI. (Simon Pilgrim, 2019-11-04; 1 file, -11/+15)
* [X86] SimplifyDemandedVectorElts - attempt to recombine target shuffle using DemandedElts mask (REAPPLIED) (Simon Pilgrim, 2019-11-04; 1 file, -0/+17)
    If we don't demand all elements, then attempt to combine to a simpler
    shuffle.

    At the moment we can only do this if Depth == 0 as
    combineX86ShufflesRecursively uses Depth to track whether the shuffle
    has really changed or not - we'll need to change this before we can
    properly start merging combineX86ShufflesRecursively into
    SimplifyDemandedVectorElts (see D66004).

    This reapplies rL368307 (reverted at rL369167) after the fix for the
    infinite loop reported at PR43024 was applied at
    rG3f087e38a2e7b87a5adaaac1c1b61e51220e7ff3.