| author | Stanislav Mekhanoshin <Stanislav.Mekhanoshin@amd.com> | 2019-10-21 19:25:27 +0000 |
|---|---|---|
| committer | Stanislav Mekhanoshin <Stanislav.Mekhanoshin@amd.com> | 2019-10-21 19:25:27 +0000 |
| commit | 33092194f2cefecc75b0fd90ea21843e3550d206 (patch) | |
| tree | 25beacd6a29df5d0c41f666fb4ced49e695162bd /llvm/test/CodeGen/AMDGPU/mfma-loop.ll | |
| parent | 7c15c4fb1745eb80d034f1ce3e2313b4c900bd23 (diff) | |
[AMDGPU] Select AGPR in PHI operand legalization
If a PHI defines an AGPR, legalize its operands to AGPR.
At the moment we can get an AGPR PHI with VGPR operands.
I am not aware of any problems, as this seems to be handled
gracefully in RA, but it is not right anyway.
It also slightly decreases VGPR pressure in some cases
because we no longer have to copy via a VGPR.
Differential Revision: https://reviews.llvm.org/D69206
llvm-svn: 375446
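For illustration only, the sketch below shows the kind of legalization the commit message describes: when a PHI result is constrained to an AGPR class, each VGPR incoming value is copied into a fresh AGPR register in its predecessor block and the PHI operand is rewritten. This is not the code from D69206; the helper name `legalizeAGPRPhiInputs` and its placement are assumptions, while the LLVM/AMDGPU types and calls used (`MachineRegisterInfo`, `BuildMI`, `SIRegisterInfo::hasAGPRs`/`hasVGPRs`, `TargetOpcode::COPY`) are existing APIs.

```cpp
// Hypothetical sketch, not the actual D69206 change.
#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/MachineInstrBuilder.h"
#include "llvm/CodeGen/MachineRegisterInfo.h"
// SIInstrInfo.h / SIRegisterInfo.h are AMDGPU target-internal headers.
#include "SIInstrInfo.h"
#include "SIRegisterInfo.h"

using namespace llvm;

// If the PHI defines an AGPR-class value, rewrite any VGPR incoming value so
// the PHI is fed only by AGPRs, inserting a COPY in the predecessor block.
static void legalizeAGPRPhiInputs(MachineInstr &PHI, MachineRegisterInfo &MRI,
                                  const SIRegisterInfo &TRI,
                                  const SIInstrInfo &TII) {
  assert(PHI.isPHI() && "expected a PHI");

  const TargetRegisterClass *DefRC =
      MRI.getRegClass(PHI.getOperand(0).getReg());
  if (!TRI.hasAGPRs(DefRC))
    return; // Only PHIs that define an AGPR need their inputs adjusted.

  // PHI operands come in (value, predecessor block) pairs starting at index 1.
  for (unsigned I = 1, E = PHI.getNumOperands(); I != E; I += 2) {
    MachineOperand &Op = PHI.getOperand(I);
    Register In = Op.getReg();
    if (!In.isVirtual() || !TRI.hasVGPRs(MRI.getRegClass(In)))
      continue; // Already an AGPR (or not a vreg); nothing to do.

    // Copy the VGPR value into an AGPR before the predecessor's terminators,
    // so the PHI no longer mixes register banks across its operands.
    MachineBasicBlock *Pred = PHI.getOperand(I + 1).getMBB();
    Register NewReg = MRI.createVirtualRegister(DefRC);
    BuildMI(*Pred, Pred->getFirstTerminator(), PHI.getDebugLoc(),
            TII.get(TargetOpcode::COPY), NewReg)
        .addReg(In);
    Op.setReg(NewReg);
  }
}
```

Materializing the copy ahead of the predecessor's terminators is the usual way PHI inputs are produced before register allocation; the copy itself is what avoids an AGPR PHI with VGPR operands and the extra round trip through a VGPR mentioned above.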
Diffstat (limited to 'llvm/test/CodeGen/AMDGPU/mfma-loop.ll')
| Mode | File | Lines |
|---|---|---|
| -rw-r--r-- | llvm/test/CodeGen/AMDGPU/mfma-loop.ll | 53 |

1 file changed, 52 insertions, 1 deletion
```diff
diff --git a/llvm/test/CodeGen/AMDGPU/mfma-loop.ll b/llvm/test/CodeGen/AMDGPU/mfma-loop.ll
index 02f7c9bcee7..a67aadfcd27 100644
--- a/llvm/test/CodeGen/AMDGPU/mfma-loop.ll
+++ b/llvm/test/CodeGen/AMDGPU/mfma-loop.ll
@@ -1,13 +1,64 @@
 ; RUN: llc -march=amdgcn -mcpu=gfx908 -verify-machineinstrs < %s | FileCheck -check-prefix=GCN %s
 
 ; GCN-LABEL: {{^}}test_mfma_loop_zeroinit:
-; GCN-COUNT32: v_accvgpr_write_b32
+
+; Check that we do not use 32 temp vgprs, but rotate 3 vgprs only.
+; 3 vgprs are needed to avoid wait states between writes.
+
+; FIXME: We should not be using and temporary registers at all.
+; At the moment we initialize an sgpr, then copy it via vgprs.
+
+; GCN: v_accvgpr_write_b32 a{{[0-9]+}}, [[TMP2:v[0-9]+]]
+; GCN: v_accvgpr_write_b32 a{{[0-9]+}}, [[TMP3:v[0-9]+]]
+
+; GCN: v_accvgpr_write_b32 a{{[0-9]+}}, [[TMP1:v[0-9]+]]
+; GCN: v_accvgpr_write_b32 a{{[0-9]+}}, [[TMP2]]
+; GCN: v_accvgpr_write_b32 a{{[0-9]+}}, [[TMP3]]
+
+; GCN: v_accvgpr_write_b32 a{{[0-9]+}}, [[TMP1]]
+; GCN: v_accvgpr_write_b32 a{{[0-9]+}}, [[TMP2]]
+; GCN: v_accvgpr_write_b32 a{{[0-9]+}}, [[TMP3]]
+
+; GCN: v_accvgpr_write_b32 a{{[0-9]+}}, [[TMP1]]
+; GCN: v_accvgpr_write_b32 a{{[0-9]+}}, [[TMP2]]
+; GCN: v_accvgpr_write_b32 a{{[0-9]+}}, [[TMP3]]
+
+; GCN: v_accvgpr_write_b32 a{{[0-9]+}}, [[TMP1]]
+; GCN: v_accvgpr_write_b32 a{{[0-9]+}}, [[TMP2]]
+; GCN: v_accvgpr_write_b32 a{{[0-9]+}}, [[TMP3]]
+
+; GCN: v_accvgpr_write_b32 a{{[0-9]+}}, [[TMP1]]
+; GCN: v_accvgpr_write_b32 a{{[0-9]+}}, [[TMP2]]
+; GCN: v_accvgpr_write_b32 a{{[0-9]+}}, [[TMP3]]
+
+; GCN: v_accvgpr_write_b32 a{{[0-9]+}}, [[TMP1]]
+; GCN: v_accvgpr_write_b32 a{{[0-9]+}}, [[TMP2]]
+; GCN: v_accvgpr_write_b32 a{{[0-9]+}}, [[TMP3]]
+
+; GCN: v_accvgpr_write_b32 a{{[0-9]+}}, [[TMP1]]
+; GCN: v_accvgpr_write_b32 a{{[0-9]+}}, [[TMP2]]
+; GCN: v_accvgpr_write_b32 a{{[0-9]+}}, [[TMP3]]
+
+; GCN: v_accvgpr_write_b32 a{{[0-9]+}}, [[TMP1]]
+; GCN: v_accvgpr_write_b32 a{{[0-9]+}}, [[TMP2]]
+; GCN: v_accvgpr_write_b32 a{{[0-9]+}}, [[TMP3]]
+
+; GCN: v_accvgpr_write_b32 a{{[0-9]+}}, [[TMP1]]
+; GCN: v_accvgpr_write_b32 a{{[0-9]+}}, [[TMP2]]
+; GCN: v_accvgpr_write_b32 a{{[0-9]+}}, [[TMP3]]
+
+; GCN: v_accvgpr_write_b32 a{{[0-9]+}}, [[TMP1]]
+; GCN: v_accvgpr_write_b32 a{{[0-9]+}}, [[TMP2]]
+; GCN: v_accvgpr_write_b32 a{{[0-9]+}}, [[TMP3]]
+
+; Check that we do not copy agprs to vgprs and back inside the loop.
+
 ; GCN: [[LOOP:BB[0-9_]+]]:
 ; GCN-NOT: v_accvgpr
 ; GCN: v_mfma_f32_32x32x1f32
 ; GCN-NOT: v_accvgpr
 ; GCN: s_cbranch_scc1 [[LOOP]]
+
+; Final result should be read only once after the loop.
+
 ; GCN-COUNT32: v_accvgpr_read_b32
+
 define amdgpu_kernel void @test_mfma_loop_zeroinit(<32 x float> addrspace(1)* %arg) {
 entry:
   br label %for.cond.preheader
```

