| author | Matt Arsenault <Matthew.Arsenault@amd.com> | 2019-03-27 17:31:29 +0000 |
|---|---|---|
| committer | Matt Arsenault <Matthew.Arsenault@amd.com> | 2019-03-27 17:31:29 +0000 |
| commit | 7b14b2425d607a97629643fd6d84ee2e64f5554d (patch) | |
| tree | b675fc195667d734ec92569123300c61ca239e71 /llvm/test | |
| parent | 86e4fc050441dc3338b9d2f54d7263f14dc1a8be (diff) | |
| download | bcm5719-llvm-7b14b2425d607a97629643fd6d84ee2e64f5554d.tar.gz bcm5719-llvm-7b14b2425d607a97629643fd6d84ee2e64f5554d.zip | |
Reapply "AMDGPU: Scavenge register instead of findUnusedReg"
This reapplies r356149, using the correct overload of scavengeRegister,
i.e. the one that is passed the current iterator.
The iterator-less form worked most of the time, because the scavenger's
iterator is advanced at the end of the frame index loop in PEI. It failed
when the spill was the first instruction of the block. The bug was further
hidden by the fact that the scavenger wasn't passed in for normal frame
index elimination.
llvm-svn: 357098
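For context, here is a minimal sketch of the distinction the commit message describes. It assumes the 2019-era RegScavenger interface (an iterator-taking scavengeRegister overload plus an iterator-less one that uses the scavenger's current position); the helper name and call site below are hypothetical and are not taken from the patch.

```cpp
// Hypothetical illustration of the two ways to ask the scavenger for a free
// register while eliminating a frame index (assumed 2019-era API shape).
#include "llvm/CodeGen/RegisterScavenging.h"

using namespace llvm;

static unsigned pickOffsetSGPR(RegScavenger &RS, const TargetRegisterClass &RC,
                               MachineBasicBlock::iterator MI, int SPAdj) {
  // Broken variant (what the reverted r356149 effectively did): the
  // iterator-less overload computes liveness at the scavenger's *current*
  // position, which PEI only advances at the end of its per-instruction
  // loop. If MI is the first instruction of the block, that position has
  // not been set yet.
  //
  //   unsigned Reg = RS.scavengeRegister(&RC, SPAdj);
  //
  // Reapplied fix: hand the scavenger the iterator of the instruction whose
  // frame index is being rewritten, so liveness is computed at MI itself.
  return RS.scavengeRegister(&RC, MI, SPAdj);
}
```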
Diffstat (limited to 'llvm/test')
| -rw-r--r-- | llvm/test/CodeGen/AMDGPU/pei-reg-scavenger-position.mir | 44 |
1 file changed, 44 insertions, 0 deletions
```diff
diff --git a/llvm/test/CodeGen/AMDGPU/pei-reg-scavenger-position.mir b/llvm/test/CodeGen/AMDGPU/pei-reg-scavenger-position.mir
new file mode 100644
index 00000000000..817ffc82f31
--- /dev/null
+++ b/llvm/test/CodeGen/AMDGPU/pei-reg-scavenger-position.mir
@@ -0,0 +1,44 @@
+# NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py
+# RUN: llc -mtriple=amdgcn-amd-amdhsa -verify-machineinstrs -run-pass=prologepilog %s -o - | FileCheck %s
+
+# The wrong form of scavengeRegister was used, so it wasn't accounting
+# for the iterator passed to eliminateFrameIndex. It was instead using
+# the current iterator in the scavenger, which was not yet set if the
+# spill was the first instruction in the block.
+
+---
+name: scavenge_register_position
+tracksRegLiveness: true
+
+# Force a frame larger than the immediate field with a large alignment.
+stack:
+  - { id: 0, type: default, offset: 4096, size: 4, alignment: 8192 }
+
+machineFunctionInfo:
+  isEntryFunction: true
+  scratchRSrcReg: $sgpr0_sgpr1_sgpr2_sgpr3
+  scratchWaveOffsetReg: $sgpr5
+  frameOffsetReg: $sgpr5
+
+body: |
+  ; CHECK-LABEL: name: scavenge_register_position
+  ; CHECK: bb.0:
+  ; CHECK:   successors: %bb.1(0x80000000)
+  ; CHECK:   liveins: $sgpr4, $sgpr0_sgpr1_sgpr2_sgpr3
+  ; CHECK:   $sgpr5 = COPY $sgpr4
+  ; CHECK:   $sgpr6 = S_ADD_U32 $sgpr5, 524288, implicit-def $scc
+  ; CHECK:   $vgpr0 = BUFFER_LOAD_DWORD_OFFSET $sgpr0_sgpr1_sgpr2_sgpr3, killed $sgpr6, 0, 0, 0, 0, implicit $exec :: (load 4 from %stack.0, align 8192, addrspace 5)
+  ; CHECK:   S_BRANCH %bb.1
+  ; CHECK: bb.1:
+  ; CHECK:   liveins: $sgpr5, $sgpr0_sgpr1_sgpr2_sgpr3
+  ; CHECK:   $sgpr4 = S_ADD_U32 $sgpr5, 524288, implicit-def $scc
+  ; CHECK:   $vgpr0 = BUFFER_LOAD_DWORD_OFFSET $sgpr0_sgpr1_sgpr2_sgpr3, killed $sgpr4, 0, 0, 0, 0, implicit $exec :: (load 4 from %stack.0, align 8192, addrspace 5)
+  ; CHECK:   S_ENDPGM 0, implicit $vgpr0
+  bb.0:
+    $vgpr0 = SI_SPILL_V32_RESTORE %stack.0, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr5, 0, implicit $exec :: (load 4 from %stack.0, addrspace 5)
+    S_BRANCH %bb.1
+
+  bb.1:
+    $vgpr0 = SI_SPILL_V32_RESTORE %stack.0, $sgpr0_sgpr1_sgpr2_sgpr3, $sgpr5, 0, implicit $exec :: (load 4 from %stack.0, addrspace 5)
+    S_ENDPGM 0, implicit $vgpr0
+...
```
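As a sanity check on the constant the test expects, here is a rough back-of-the-envelope sketch. It assumes the swizzled scratch addressing used for entry functions at the time (the SGPR soffset is the per-lane frame offset scaled by the wavefront size) and the 12-bit MUBUF immediate offset field; the slot placement and numbers are reconstructed from the CHECK lines, not from the patch itself.

```cpp
// Rough arithmetic behind the expected S_ADD_U32 ..., 524288 (assumptions
// noted in the text above; this is not code from the patch).
#include <cassert>
#include <cstdint>

int main() {
  const int64_t PerLaneOffset = 8192; // stack.0 presumably lands on its 8192-byte alignment
  const int64_t WavefrontSize = 64;   // wave64 lanes share one scratch segment
  const int64_t SOffset = PerLaneOffset * WavefrontSize;

  assert(SOffset == 524288);          // matches the constant in the CHECK lines
  assert(SOffset > 4095);             // exceeds the 12-bit MUBUF offset field,
                                      // so PEI must add it into a scavenged SGPR
  return 0;
}
```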

