path: root/llvm/test/CodeGen/AMDGPU/fmul-2-combine-multi-use.ll
Commit history for this file (newest first; each entry lists author, date, and files/lines changed):
* [AMDGPU] Come back patch for the 'Assign register class for cross block values according to the divergence.' (Alexander Timofeev, 2019-10-14, 1 file changed, -1/+1)
  Detailed description: After the https://reviews.llvm.org/D59990 submit, several issues were
  discovered. Changes in common code were preserved, but the AMDGPU-specific part was reverted to
  keep the backend working correctly. The discovered issues were addressed in the following commits:
  https://reviews.llvm.org/D67662, https://reviews.llvm.org/D67101, https://reviews.llvm.org/D63953,
  https://reviews.llvm.org/D63731. This change brings back the AMDGPU-specific changes.
  Reviewed by: rampitec, arsenm
  Differential Revision: https://reviews.llvm.org/D68635
  llvm-svn: 374767
* [AMDGPU] gfx10 v_fmac_f16 operand folding (Stanislav Mekhanoshin, 2019-09-25, 1 file changed, -5/+5)
  Fold immediates into v_fmac_f16.
  Differential Revision: https://reviews.llvm.org/D68037
  llvm-svn: 372906
* [AMDGPU] more gfx1010 tests. NFC. (Stanislav Mekhanoshin, 2019-06-12, 1 file changed, -42/+62)
  llvm-svn: 363190
* DAG: Enhance isKnownNeverNaN (Matt Arsenault, 2018-08-03, 1 file changed, -12/+12)
  Add a parameter for testing specifically for sNaNs - at least one instruction pattern on AMDGPU
  needs to check specifically for this. Also handle more cases, and add a target hook for custom
  nodes, similar to the hooks for known bits.
  llvm-svn: 338910
* AMDGPU: Fix test check line bugs (Matt Arsenault, 2018-07-31, 1 file changed, -3/+3)
  llvm-svn: 338374
* AMDGPU: Add pass to lower kernel arguments to loads (Matt Arsenault, 2018-06-26, 1 file changed, -4/+4)
  This replaces most argument uses with loads, but for now not all. The code in SelectionDAG for
  calling convention lowering is actively harmful for amdgpu_kernel. It attempts to split the
  argument types into register legal types, which results in low quality code for arbitrary types.
  Since all kernel arguments are passed in memory, we just want the raw types. I've tried a couple
  of methods of mitigating this in SelectionDAG, but it's easier to just bypass this problem
  altogether. It's possible to hack around the problem in the initial lowering, but the real
  problem is that the DAG then expects to be able to use CopyToReg/CopyFromReg for uses of the
  arguments outside the block.
  Exposing the argument loads in the IR also has the advantage that the LoadStoreVectorizer can
  merge them.
  I'm not sure what the best approach to dealing with the IR argument list is. The patch as-is
  just leaves the IR arguments in place, so all the existing code will still compute the same
  kernarg size and pointlessly lowers the arguments. Arguably the frontend should emit kernels
  with an empty argument list in the first place. Alternatively a dummy array could be inserted
  as a single argument just to reserve space.
  This does have some disadvantages. Local pointer kernel arguments can no longer have AssertZext
  placed on them, as the equivalent !range metadata is not valid on pointer-typed loads. This is
  mostly bad for SI, which needs to know about the known bits in order to use the DS instruction
  offset, so in this case this is not done.
  More importantly, this skips noalias arguments, since this pass does not yet convert them to the
  equivalent !alias.scope and !noalias metadata. Producing this metadata correctly seems to be
  tricky, although logically this is the same as inlining into a function which doesn't exist.
  Additionally, exposing these loads to the vectorizer may result in degraded aliasing information
  if a pointer load is merged with another argument load.
  I'm also not entirely sure this is preserving the current clover ABI, although I would greatly
  prefer if it would stop widening arguments and match the HSA ABI. As-is I think it is extending
  < 4-byte arguments to 4 bytes but doesn't align them to 4 bytes.
  llvm-svn: 335650
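  To illustrate the idea behind that pass, here is a minimal hand-written IR sketch (not the
  pass's actual output; the function name and the assumption that the argument sits at offset 0
  of the kernarg segment are made up for illustration): the scalar argument is materialized as an
  explicit load from the kernarg segment pointer, which passes like the LoadStoreVectorizer can
  then see and merge with neighboring argument loads.

      ; Sketch only: a kernel argument expressed as an explicit kernarg load.
      declare ptr addrspace(4) @llvm.amdgcn.kernarg.segment.ptr()

      define amdgpu_kernel void @arg_as_load(float %arg, ptr addrspace(1) %out) {
        %kernarg = call ptr addrspace(4) @llvm.amdgcn.kernarg.segment.ptr()
        ; assume %arg lives at offset 0 of the kernarg segment in this sketch
        %arg.load = load float, ptr addrspace(4) %kernarg, align 16
        store float %arg.load, ptr addrspace(1) %out
        ret void
      }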
* AMDGPU: Mark all unspecified CC functions in tests as amdgpu_kernel (Matt Arsenault, 2017-03-21, 1 file changed, -12/+12)
  Currently the default C calling convention functions are treated the same as compute kernels.
  Make this explicit so the default calling convention can be changed to a non-kernel.
  Converted with perl -pi -e 's/define void/define amdgpu_kernel void/' on the relevant test
  directories (and undoing it in one place that actually wanted a non-kernel).
  llvm-svn: 298444
* AMDGPU: Check if users of fneg can fold mods (Matt Arsenault, 2017-02-02, 1 file changed, -5/+5)
  In multi-use cases this can save a few instructions.
  llvm-svn: 293962
* Enable FeatureFlatForGlobal on Volcanic Islands (Matt Arsenault, 2017-01-24, 1 file changed, -2/+2)
  This switches to the workaround that HSA defaults to for the mesa path.
  This should be applied to the 4.0 branch.
  Patch by Vedran Miletić <vedran@miletic.net>
  llvm-svn: 292982
* AMDGPU: Combine fp16/fp64 subtarget features (Matt Arsenault, 2017-01-23, 1 file changed, -6/+19)
  The same control register controls both, and both are set to the same defaults. Keep the old
  names around as aliases.
  llvm-svn: 292837
* AMDGPU: Fold fneg into fmul (Matt Arsenault, 2017-01-12, 1 file changed, -4/+4)
  Patch mostly by Fiona Glaser
  llvm-svn: 291732
* AMDGPU: Pull fneg/fabs out of a select (Matt Arsenault, 2017-01-11, 1 file changed, -2/+2)
  Allows better source modifier usage.
  llvm-svn: 291729
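  As a rough sketch of the kind of pattern involved (a hypothetical function, not taken from the
  test file): with both select operands negated, the negation can be pulled outside the select,
  and the remaining single fneg can often be absorbed as a source modifier by the instruction
  that uses it.

      ; Before the combine (sketch): both select inputs are negated.
      define float @select_of_fneg(i1 %c, float %a, float %b, float %x) {
        %na = fneg float %a
        %nb = fneg float %b
        %sel = select i1 %c, float %na, float %nb
        ; after the combine this becomes fmul (fneg (select %c, %a, %b)), %x,
        ; where the fneg can fold into the multiply's source modifier
        %r = fmul float %sel, %x
        ret float %r
      }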
* AMDGPU: Swap order of operands in fadd/fsub combine (Matt Arsenault, 2016-12-22, 1 file changed, -8/+8)
  FMA is canonicalized to have the constant in the middle operand. Do the same so fmad matches
  and an extra combine step is avoided.
  llvm-svn: 290313
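  For reference, the canonical shape mentioned there would look roughly like this in IR (a sketch
  based only on this commit message; the function name and constant are made up):

      declare float @llvm.fma.f32(float, float, float)

      ; constant multiplicand kept in the middle (second) operand of the fma
      define float @fma_const_middle(float %x, float %y) {
        %r = call float @llvm.fma.f32(float %x, float 4.0, float %y)
        ret float %r
      }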
* AMDGPU: Enable some f32 fadd/fsub combines for f16 (Matt Arsenault, 2016-12-22, 1 file changed, -1/+111)
  llvm-svn: 290308
* AMDGPU: Run fp combine tests on VI (Matt Arsenault, 2016-12-20, 1 file changed, -18/+27)
  llvm-svn: 290192
* AMDGPU: Support commuting with immediate in src0 (Matt Arsenault, 2016-09-08, 1 file changed, -1/+1)
  llvm-svn: 280970
* AMDGPU: Add volatile to test loads and stores (Matt Arsenault, 2016-04-12, 1 file changed, -8/+8)
  When the memory vectorizer is enabled, these tests break. These tests don't really care about
  the memory instructions, and it's easier to write check lines with the unmerged loads.
  llvm-svn: 266071
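  A standalone sketch of that idiom (the function name and types are illustrative, not the actual
  test contents): marking the loads and stores volatile keeps the vectorizer from merging them,
  so the check lines can match the individual, unmerged memory operations.

      define amdgpu_kernel void @unmerged_loads(ptr addrspace(1) %in, ptr addrspace(1) %out) {
        %a = load volatile float, ptr addrspace(1) %in    ; stays a separate load
        %b = load volatile float, ptr addrspace(1) %in    ; not merged with %a
        %sum = fadd float %a, %b
        store volatile float %sum, ptr addrspace(1) %out
        ret void
      }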
* Only do fmul (fadd x, x), c combine if the fadd only has one use (Matt Arsenault, 2015-07-17, 1 file changed, -0/+102)
  This was increasing the instruction count if the fadd has multiple uses.
  llvm-svn: 242498
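  A minimal sketch of the guarded pattern (a hypothetical function, not copied from the test):
  here the fadd has a second use, so rewriting the fmul as x * (2*c) would not make the fadd go
  away, and the combine is skipped.

      define float @fmul2_multi_use(float %x, ptr %p) {
        %add = fadd float %x, %x      ; 2*x
        store float %add, ptr %p      ; second use keeps %add alive
        %mul = fmul float %add, 4.0   ; single-use case could instead fold to fmul %x, 8.0
        ret float %mul
      }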