path: root/llvm/lib/Target/AMDGPU/SIISelLowering.cpp
Commit log (newest first): commit message (author, date; files changed, lines -/+)
...
* [AMDGPU] Widened vector length for global/constant address space. (Farhana Aleen, 2018-03-07; 1 file, -2/+8)
    llvm-svn: 326904
* [TargetLowering] Rename DAGCombinerInfo::isAfterLegalizeVectorOps to DAGCombiner::isAfterLegalizeDAG since that's what it checks. NFC (Craig Topper, 2018-03-06; 1 file, -1/+1)
    The code checks Level == AfterLegalizeDAG, which is the fourth and last of the possible DAG combine stages that we have. There is a Level called AfterLegalVectorOps, but that is the third DAG combine and it doesn't always run. A function called isAfterLegalVectorOps should imply that it returns true in either of the DAG combines that run after the legalize-vector-ops stage, but that's not what this function does.
    llvm-svn: 326832
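    For context, the four stages referenced here correspond to LLVM's CombineLevel enum; a short sketch of the enum and the renamed predicate as the message describes them (treat the exact shape as illustrative):

        // llvm/include/llvm/CodeGen/DAGCombine.h: the four DAG combine stages.
        enum CombineLevel {
          BeforeLegalizeTypes,    // first DAG combine
          AfterLegalizeTypes,     // second
          AfterLegalizeVectorOps, // third; does not always run
          AfterLegalizeDAG        // fourth and last
        };

        // The renamed predicate on TargetLowering::DAGCombinerInfo checks
        // specifically for the last stage:
        //   bool isAfterLegalizeDAG() const { return Level == AfterLegalizeDAG; }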
* Pass Divergence Analysis data to Selection DAG to drive divergence-dependent instruction selection. (Alexander Timofeev, 2018-03-05; 1 file, -2/+2)
    Differential revision: https://reviews.llvm.org/D35267
    llvm-svn: 326703
* AMDGPU/GCN: Promote i16 ctpop (Jan Vesely, 2018-03-02; 1 file, -0/+1)
    i16-capable ASICs do not support i16 operands for this instruction. Add a tablegen pattern to merge chained i16 additions.
    Differential Revision: https://reviews.llvm.org/D43985
    llvm-svn: 326535
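    Given the one-line diff, the ctpop change is plausibly a single operation-action entry in the SITargetLowering constructor; a sketch using the standard TargetLowering API (the exact diff content is an assumption):

        // Legalize i16 ctpop by promoting it to the 32-bit operation,
        // since i16-capable ASICs have no i16 form of this instruction.
        setOperationAction(ISD::CTPOP, MVT::i16, Promote);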
* AMDGPU/SI: Turn off GPR Indexing Mode immediately after the interested instruction. (Changpeng Fang, 2018-02-16; 1 file, -38/+23)
    Summary: In the current implementation of GPR Indexing Mode, when the index is non-uniform, the s_set_gpr_idx_off instruction is incorrectly inserted after the loop. This can lead instructions with VGPR operands (v_readfirstlane, for example) to read an incorrect VGPR. In this patch, we fix the issue by inserting s_set_gpr_idx_on/off immediately around the interested instruction.
    Reviewers: rampitec
    Differential Revision: https://reviews.llvm.org/D43297
    llvm-svn: 325355
* [AMDGPU] Remove non-temporal flag from argument loads (Stanislav Mekhanoshin, 2018-02-14; 1 file, -1/+0)
    Kernel arguments are likely read by all workitems and should not bypass the cache. Fixes a performance hit in sub-dword argument loads.
    Differential Revision: https://reviews.llvm.org/D43249
    llvm-svn: 325146
* Reapply "AMDGPU: Add 32-bit constant address space"Matt Arsenault2018-02-091-9/+21
| | | | | | This reverts r324494 and reapplies r324487. llvm-svn: 324747
* AMDGPU: Fix layering issue (Matt Arsenault, 2018-02-09; 1 file, -1/+1)
    Move a utility function that depends on codegen. Fixes the build with r324487 reapplied.
    llvm-svn: 324746
* Revert "AMDGPU: Add 32-bit constant address space"Rafael Espindola2018-02-071-21/+9
| | | | | | | | This reverts commit r324487. It broke clang tests. llvm-svn: 324494
* AMDGPU: Add 32-bit constant address space (Marek Olsak, 2018-02-07; 1 file, -9/+21)
    Note: This is a candidate for LLVM 6.0, because it was planned to be in that release but was delayed due to a long review period.
    Merge conflict in release_60; resolution: add "-p6:32:32" into the second (non-amdgiz) string.
    Only scalar loads support 32-bit pointers. An address in a VGPR will fail to compile. That's OK because the results of loads will only be used in places where VGPRs are forbidden.
    Updated AMDGPUAliasAnalysis and used SReg_64_XEXEC. The tests cover all use cases we need for Mesa.
    Reviewers: arsenm, nhaehnle
    Subscribers: kzhuravl, wdng, yaxunl, dstuttard, tpr, t-tye, llvm-commits
    Differential Revision: https://reviews.llvm.org/D41651
    llvm-svn: 324487
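    The "-p6:32:32" component declares address space 6 as 32-bit. A minimal, self-contained C++ sketch of what that data layout component means; the reduced layout string here is hypothetical, the real AMDGPU string has many more components:

        #include "llvm/IR/DataLayout.h"
        #include <cassert>

        int main() {
          // "p6:32:32": pointers in address space 6 are 32 bits wide with
          // 32-bit ABI alignment; the default address space stays 64-bit.
          llvm::DataLayout DL("e-p:64:64-p6:32:32");
          assert(DL.getPointerSizeInBits(6) == 32);
          assert(DL.getPointerSizeInBits(0) == 64);
          return 0;
        }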
* AMDGPU: Add intrinsics llvm.amdgcn.cvt.{pknorm.i16, pknorm.u16, pk.i16, pk.u16} (Marek Olsak, 2018-01-31; 1 file, -4/+46)
    Reviewers: arsenm, nhaehnle
    Subscribers: kzhuravl, wdng, yaxunl, dstuttard, tpr, t-tye
    Differential Revision: https://reviews.llvm.org/D41663
    llvm-svn: 323908
* [AMDGPU] fix LDS f32 intrinsics (Daniil Fukalov, 2018-01-26; 1 file, -12/+12)
    - use a qualified pointer addrspace in the intrinsics class to avoid .f32 mangling
    - change the overly common atomic mangling to ds
    - add missing intrinsics to AMDGPUTTIImpl::getTgtMemIntrinsic
    Reviewed by: b-sumner
    Differential Revision: https://reviews.llvm.org/D42383
    llvm-svn: 323516
* AMDGPU/SI: Add d16 support for image intrinsics. (Changpeng Fang, 2018-01-18; 1 file, -24/+320)
    Summary: This patch implements d16 support for the image load, image store, and image sample intrinsics.
    Reviewers: Matt, Brian.
    Differential Revision: https://reviews.llvm.org/D3991
    llvm-svn: 322903
* [AMDGPU] add LDS f32 intrinsics (Daniil Fukalov, 2018-01-17; 1 file, -6/+36)
    Added llvm.amdgcn.atomic.{add|min|max}.f32 intrinsics to allow generating the ds_{add|min|max}[_rtn]_f32 instructions needed for OpenCL float atomics in LDS.
    Reviewed by: arsenm
    Differential Revision: https://reviews.llvm.org/D37985
    llvm-svn: 322656
* AMDGPU/SI: Add d16 support for buffer intrinsics. (Changpeng Fang, 2018-01-12; 1 file, -20/+131)
    Differential Revision: https://reviews.llvm.org/D38906
    Reviewers: Matt and Brian.
    llvm-svn: 322402
* AMDGPU: Use unique PSVs for buffer resources (Matt Arsenault, 2017-12-29; 1 file, -32/+78)
    Also fixes using the wrong memory type for some intrinsics when custom lowering them.
    llvm-svn: 321557
* AMDGPU: Implement getTgtMemIntrinsic for images (Matt Arsenault, 2017-12-29; 1 file, -15/+154)
    Currently all images are lowered to have a single image PseudoSourceValue. Image stores happen to have overly strict mayLoad/mayStore/hasSideEffects flags set on them, so this happens to work. When these are fixed to be correct, the scheduler breaks this because the identical PSVs are assumed to be the same address. These need to be unique to the image resource value.
    llvm-svn: 321555
* MachineFunction: Return reference from getFunction(); NFC (Matthias Braun, 2017-12-15; 1 file, -15/+15)
    The Function can never be nullptr, so we can return a reference.
    llvm-svn: 320884
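    The shape of this API change at a call site, as a hedged sketch (the helper and attribute query are illustrative, not from the actual diff):

        #include "llvm/CodeGen/MachineFunction.h"
        #include "llvm/IR/Function.h"

        using namespace llvm;

        // Before this commit, getFunction() returned 'const Function *' even
        // though it could never be null; now callers bind a reference directly.
        static bool isOptNone(const MachineFunction &MF) {
          const Function &F = MF.getFunction(); // no null check needed
          return F.hasFnAttribute(Attribute::OptimizeNone);
        }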
* TLI: Allow using PSV for intrinsic mem operands (Matt Arsenault, 2017-12-14; 1 file, -0/+1)
    llvm-svn: 320756
* DAG: Expose all MMO flags in getTgtMemIntrinsic (Matt Arsenault, 2017-12-14; 1 file, -3/+4)
    Rather than adding more bits to express every MMO flag you could want, just use the MMO flags directly. Also fixes using a bunch of bool arguments to getMemIntrinsicNode.
    On AMDGPU, buffer and image intrinsics should always have MODereferenceable set, but currently there is no way to do that directly during the initial intrinsic lowering.
    llvm-svn: 320746
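    A minimal sketch of what this enables: a target's getTgtMemIntrinsic can now place MachineMemOperand flags into IntrinsicInfo directly, instead of going through vol/readMem/writeMem booleans. The helper below is hypothetical; only the flag values are the point:

        #include "llvm/CodeGen/MachineMemOperand.h"

        using namespace llvm;

        // Flags a load-like target intrinsic would put into
        // TargetLowering::IntrinsicInfo::flags (the field added by this change).
        // MODereferenceable could not be expressed via the old booleans.
        static MachineMemOperand::Flags loadIntrinsicFlags(bool IsVolatile) {
          MachineMemOperand::Flags F =
              MachineMemOperand::MOLoad | MachineMemOperand::MODereferenceable;
          if (IsVolatile)
            F |= MachineMemOperand::MOVolatile;
          return F;
        }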
* AMDGPU: Partially fix disassembly of MIMG instructions (Matt Arsenault, 2017-12-13; 1 file, -1/+2)
    Stores failed to decode at all since they didn't have a DecoderNamespace set. Loads worked, but did not change the displayed register width to match the number of enabled channels.
    The number of printed registers for vaddr is still wrong, but I don't think that's encoded in the instruction, so there's not much we can do about that.
    Image atomics are still broken. MIMG is the same encoding for SI/VI, but the image atomic classes are split up into encoding-specific versions, unlike every other MIMG instruction. They have isAsmParserOnly set on them for some reason. dmask is also special for these, so we probably should not have it as an explicit operand as it is now.
    llvm-svn: 320614
* AMDGPU: image_getlod and image_getresinfo do not read memory (Matt Arsenault, 2017-12-08; 1 file, -13/+32)
    llvm-svn: 320187
* AMDGPU: Preserve MMO in adjustWritemask (Matt Arsenault, 2017-12-08; 1 file, -0/+2)
    Follow-up to r319705. Currently the MMO is produced after this in the custom inserter, so this doesn't change anything yet.
    llvm-svn: 320186
* AMDGPU: Fix creating invalid copy when adjusting dmask (Matt Arsenault, 2017-12-04; 1 file, -46/+42)
    Move the entire optimization to one place. Before, it was possible to adjust the dmask without changing the register class of the output instruction, since the two updates were done in separate places. Fix all lane sizes and move all of the optimization into the DAG folding.
    llvm-svn: 319705
* AMDGPU: Use gfx9 carry-less add/sub instructions (Matt Arsenault, 2017-11-30; 1 file, -0/+8)
    llvm-svn: 319491
* DAG: Add nuw when splitting loads and stores (Matt Arsenault, 2017-11-29; 1 file, -6/+5)
    The object can't straddle the address space wraparound, so I think it's OK to assume any offsets added to the base object pointer can't overflow. Similar logic already appears to be applied in SelectionDAGBuilder when lowering aggregate returns.
    llvm-svn: 319272
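    A minimal sketch of the pattern being described, assuming the usual SelectionDAG splitting context (the helper name and parameters are hypothetical):

        #include "llvm/CodeGen/SelectionDAG.h"

        using namespace llvm;

        // When a wide load/store is split, the high half's address is
        // Base + Offset. Since an in-bounds object cannot straddle the end of
        // the address space, the add cannot wrap, so it is tagged nuw.
        static SDValue getHighHalfPtr(SelectionDAG &DAG, const SDLoc &DL,
                                      SDValue BasePtr, uint64_t Offset) {
          EVT PtrVT = BasePtr.getValueType();
          SDNodeFlags Flags;
          Flags.setNoUnsignedWrap(true); // offset stays within the object
          return DAG.getNode(ISD::ADD, DL, PtrVT, BasePtr,
                             DAG.getConstant(Offset, DL, PtrVT), Flags);
        }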
* [CodeGen] Print register names in lowercase in both MIR and debug output (Francis Visoiu Mistrih, 2017-11-28; 1 file, -1/+1)
    As part of the unification of the debug format and the MIR format, always print registers as lowercase.
    * Only debug printing is affected. It now follows MIR.
    Differential Revision: https://reviews.llvm.org/D40417
    llvm-svn: 319187
* [AMDGPU] Fix SITargetLowering::LowerCall for pointer info of byval argument (Yaxun Liu, 2017-11-22; 1 file, -2/+3)
    SITargetLowering::LowerCall uses dummy pointer info for byval arguments, which causes a flat load instead of a buffer load. This patch fixes that.
    Differential Revision: https://reviews.llvm.org/D40040
    llvm-svn: 318844
* Fix a bunch more layering of CodeGen headers that are in Target (David Blaikie, 2017-11-17; 1 file, -2/+2)
    All these headers already depend on CodeGen headers, so moving them into CodeGen fixes the layering (since CodeGen depends on Target, not the other way around).
    llvm-svn: 318490
* AMDGPU: Replace i64 add/sub lowering (Matt Arsenault, 2017-11-15; 1 file, -4/+48)
    Use VOP3 add/addc like usual. This has some tradeoffs. Inline immediates fold a little better, but other constants are worse off. SIShrinkInstructions could be made smarter to handle these cases.
    This allows us to avoid selecting scalar adds where we need to track the carry in scc and replace its users. This makes it easier to use the carryless VALU adds.
    llvm-svn: 318340
* AMDGPU: Don't use MUBUF vaddr if address may overflow (Matt Arsenault, 2017-11-15; 1 file, -1/+35)
    Effectively reverts r263964. Before, we would not allow this if vaddr was not known to be positive.
    llvm-svn: 318240
* AMDGPU: Handle or in multi-use shl ptr combine (Matt Arsenault, 2017-11-14; 1 file, -2/+2)
    llvm-svn: 318223
* AMDGPU: Preserve nuw in shl add ptr combine (Matt Arsenault, 2017-11-13; 1 file, -1/+6)
    llvm-svn: 318017
* AMDGPU: Fix multi-use shl/add combine (Matt Arsenault, 2017-11-13; 1 file, -31/+14)
    This was using a custom function that didn't handle the addressing modes properly for private. Use isLegalAddressingMode to avoid duplicating this.
    Additionally, skip the combine if there is only one use, since the standard combine will handle it.
    llvm-svn: 318013
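    A hedged sketch of the replacement check described here; the helper and its parameters stand in for values available at the combine site, and the field settings are illustrative:

        #include "llvm/CodeGen/TargetLowering.h"

        using namespace llvm;

        // Ask the target whether base-register + constant-offset addressing is
        // legal for this type and address space, instead of duplicating the
        // per-address-space rules in a custom function.
        static bool isLegalOffset(const TargetLowering &TLI, const DataLayout &DL,
                                  Type *Ty, int64_t Offset, unsigned AddrSpace) {
          TargetLowering::AddrMode AM;
          AM.HasBaseReg = true; // reg + imm form
          AM.BaseOffs = Offset;
          return TLI.isLegalAddressingMode(DL, AM, Ty, AddrSpace);
        }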
* AMDGPU: Lower buffer store and atomic intrinsics manually (Marek Olsak, 2017-11-09; 1 file, -0/+113)
    Summary: Without this, SIMemoryLegalizer inserts s_waitcnt vmcnt(0) before every buffer store and atomic instruction.
    Reviewers: arsenm, nhaehnle
    Subscribers: kzhuravl, wdng, yaxunl, dstuttard, tpr, llvm-commits, t-tye
    Differential Revision: https://reviews.llvm.org/D39060
    llvm-svn: 317754
* AMDGPU: Select v_mad_u64_u32 and v_mad_i64_i32 (Matt Arsenault, 2017-11-06; 1 file, -4/+43)
    llvm-svn: 317492
* [AMDGPU] Fix assertion due to assuming pointer in default addr space is 32 bit (Yaxun Liu, 2017-11-06; 1 file, -5/+10)
    The backend assumes a pointer in the default address space is 32 bits, which is not true for the new address space mapping and causes an assertion for unresolved functions. This patch fixes that.
    Differential Revision: https://reviews.llvm.org/D39643
    llvm-svn: 317476
* AMDGPU: Add new intrinsic llvm.amdgcn.kill(i1) (Marek Olsak, 2017-10-24; 1 file, -3/+4)
    Summary: Kill the thread if operand 0 == false. llvm.amdgcn.wqm.vote can be applied to the operand. Also allow kill in all shader stages.
    Reviewers: arsenm, nhaehnle
    Subscribers: kzhuravl, wdng, yaxunl, dstuttard, tpr, t-tye, llvm-commits
    Differential Revision: https://reviews.llvm.org/D38544
    llvm-svn: 316427
* Use the return value of UpdateNodeOperands() (Mark Searles, 2017-10-16; 1 file, -2/+1)
    In some cases, UpdateNodeOperands() modifies the node in place, and using the return value isn't strictly necessary. However, it does not necessarily modify the node: it may instead return a resultant node if an identical one already exists in the DAG (see the comments on UpdateNodeOperands()). In that case, the return value must be used to avoid scenarios such as an infinite loop: the node is assumed to have been updated, so it is added back to the worklist and re-processed; but since the node hasn't actually changed, it is once again passed to UpdateNodeOperands(), assumed modified, and added back to the worklist, and the cycle repeats indefinitely.
    Differential Revision: https://reviews.llvm.org/D38466
    llvm-svn: 315957
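    A minimal sketch of the safe pattern, assuming the surrounding combine context (the wrapper is hypothetical; only the use of the return value is the point):

        #include "llvm/CodeGen/SelectionDAG.h"

        using namespace llvm;

        // UpdateNodeOperands may CSE: if a node with the new operands already
        // exists, it returns that node and leaves N untouched. Always continue
        // with the returned node, or an unchanged N can be re-queued forever.
        static SDNode *updateOperands(SelectionDAG &DAG, SDNode *N,
                                      ArrayRef<SDValue> NewOps) {
          SDNode *Updated = DAG.UpdateNodeOperands(N, NewOps);
          return Updated; // may be N itself, or a pre-existing equivalent node
        }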
* AMDGPU: Implement hasBitPreservingFPLogic (Matt Arsenault, 2017-10-13; 1 file, -0/+4)
    llvm-svn: 315754
* [AMDGPU] For amdpal, widen interpolation mode workaround (Tim Renouf, 2017-10-12; 1 file, -8/+25)
    Summary: The interpolation mode workaround ensures that at least one interpolation mode is enabled in PSInputAddr. It does not also check PSInputEna, on the basis that the user might enable bits in that depending on run-time state. However, for the amdpal OS type, the user does not enable some bits after compilation based on run-time state; the register values being generated here are the final ones set in the hardware. Therefore, apply the workaround to PSInputAddr and PSInputEnable together.
    (The case where a bit is set in PSInputAddr but not in PSInputEnable is where the frontend set up an input arg for a particular interpolation mode, but nothing uses that input arg. Really we should have an earlier pass that removes such an arg.)
    Reviewers: arsenm, nhaehnle, dstuttard
    Subscribers: kzhuravl, wdng, yaxunl, t-tye, llvm-commits
    Differential Revision: https://reviews.llvm.org/D37758
    llvm-svn: 315591
* AMDGPU: Set v2i32 any_extend to expand (Matt Arsenault, 2017-10-05; 1 file, -0/+1)
    llvm-svn: 314993
* AMDGPU: VALU carry-in and v_cndmask condition cannot be EXEC (Nicolai Haehnle, 2017-09-29; 1 file, -2/+5)
    The hardware will only forward EXEC_LO; the high 32 bits will be zero.
    Additionally, inline constants do not work. At least,
        v_addc_u32_e64 v0, vcc, v0, v1, -1
    which could conceivably be used to combine (v0 + v1 + 1) into a single instruction, acts as if all carry-in bits are zero.
    The llvm.amdgcn.ps.live test is adjusted; it would be nice to combine
        s_mov_b64 s[0:1], exec
        v_cndmask_b32_e64 v0, v1, v2, s[0:1]
    into
        v_mov_b32 v0, v3
    but it's not particularly high priority.
    Fixes dEQP-GLES31.functional.shaders.helper_invocation.value.*
    llvm-svn: 314522
* AMDGPU: Start selecting v_mad_mixhi_f16 (Matt Arsenault, 2017-09-20; 1 file, -1/+45)
    llvm-svn: 313814
* AMDGPU: Stop modifying SP in call sequences (Matt Arsenault, 2017-09-14; 1 file, -3/+3)
    Because stack growth and addressing go in the same direction, modifying SP at the beginning of the call sequence was incorrect. If we had a stack-passed argument, we would end up skipping that number of bytes before pushing arguments, leaving unused/inconsistent space.
    The callee creates fixed stack objects in its frame, so the space necessary for these is already logically allocated in the callee; we just let the callee increment SP if it really requires it.
    llvm-svn: 313279
* AMDGPU: Make frame register caller preserved (Matt Arsenault, 2017-09-14; 1 file, -0/+15)
    Using SplitCSR for the frame register was very broken. Often the copies in the prolog and epilog were optimized out, in addition to being inserted after the true prolog, where the FP was clobbered.
    I have a hacky solution that works and continues to use split CSR, but for now this is simpler and gets us to working programs.
    llvm-svn: 313274
* AMDGPU: Don't legalize i16 extloads to i32 with legal i16 (Matt Arsenault, 2017-09-07; 1 file, -0/+3)
    Keeping non-i16 extloads makes it easier to match some new gfx9 load instructions.
    llvm-svn: 312699
* AMDGPU: Select clamp pattern with v2f16 (Matt Arsenault, 2017-08-30; 1 file, -15/+30)
    llvm-svn: 312087
* AMDGPU: Start adding tail call support (Matt Arsenault, 2017-08-11; 1 file, -19/+185)
    Handle the sibling call cases.
    llvm-svn: 310753
* [AMDGPU] Add support for Whole Wavefront Mode (Connor Abbott, 2017-08-04; 1 file, -0/+5)
    Summary: Whole Wavefront Mode (WWM) is similar to WQM, except that all of the lanes are always enabled, regardless of control flow. This is required for implementing wavefront reductions in non-uniform control flow, where we need to use the inactive lanes to propagate intermediate results, so they need to be enabled. We need to propagate WWM to uses (unless they're explicitly marked as exact) so that they also propagate intermediate results correctly. We do the analysis and exec mask munging during the WQM pass, since there are interactions with WQM for things that require both WQM and WWM.
    For simplicity, WWM is entirely block-local: blocks are never WWM on entry or exit, and WWM is not propagated to the block level. This means that computations involving WWM cannot involve control flow, but we only ever plan to use WWM for a few limited purposes (none of which involve control flow) anyway.
    Shaders can ask for WWM using the @llvm.amdgcn.wwm intrinsic. There isn't yet a way to turn WWM off; that will be added in a future change.
    Finally, it turns out that turning on inactive lanes causes a number of problems with register allocation. While the best long-term solution seems to be teaching LLVM's register allocator about predication, for now we need to add some hacks to prevent ourselves from getting into trouble due to constraints that aren't currently expressed in LLVM. For the gory details, see the comments at the top of SIFixWWMLiveness.cpp.
    Reviewers: arsenm, nhaehnle, tpr
    Subscribers: kzhuravl, wdng, mgorny, yaxunl, dstuttard, t-tye, llvm-commits
    Differential Revision: https://reviews.llvm.org/D35524
    llvm-svn: 310087