author     Tim Renouf <tpr.llvm@botech.co.uk>    2018-01-09 21:34:43 +0000
committer  Tim Renouf <tpr.llvm@botech.co.uk>    2018-01-09 21:34:43 +0000
commit     6eaad1e5397dc84c7dbb78be4fa433bcd6fb137f (patch)
tree       913b778f0fc2c78591c1a0fe7460de17ba98ff1b /llvm/test/CodeGen/AMDGPU/skip-if-dead.ll
parent     f0ef137bd0c34f984090638df8c62ad55d790694 (diff)
download   bcm5719-llvm-6eaad1e5397dc84c7dbb78be4fa433bcd6fb137f.tar.gz
           bcm5719-llvm-6eaad1e5397dc84c7dbb78be4fa433bcd6fb137f.zip
[AMDGPU] Fixed incorrect uniform branch condition
Summary:
I had a case where multiple nested uniform ifs resulted in code that did
v_cmp comparisons, combining the results with s_and_b64, s_or_b64 and
s_xor_b64, and using the resulting mask in s_cbranch_vccnz, without first
ensuring that bits for inactive lanes were clear.

There was already code for inserting an "s_and_b64 vcc, exec, vcc" to clear
bits for inactive lanes in the case that the branch is instruction selected
as s_cbranch_scc1 and is then changed to s_cbranch_vccnz in SIFixSGPRCopies.
I have added the same code into SILowerControlFlow for the case that the
branch is instruction selected as s_cbranch_vccnz.

This de-optimizes the code in some cases where the s_and is not needed,
because vcc is the result of a v_cmp, or multiple v_cmp instructions
combined by s_and/s_or. We should add a pass to re-optimize those cases.

Reviewers: arsenm, kzhuravl

Subscribers: wdng, yaxunl, t-tye, llvm-commits, dstuttard, timcorringham, nhaehnle

Differential Revision: https://reviews.llvm.org/D41292

llvm-svn: 322119
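To make the described fix concrete, here is a minimal illustrative sketch of the pattern; the registers (v0, v1), the branch label BB0_2 and the particular compare are hypothetical and not taken from the commit or the test:

    v_cmp_lt_f32_e32 vcc, v0, v1     ; per-lane compare; per the commit message, a lone v_cmp
                                     ; leaves inactive-lane bits clear, so it needs no s_and
    s_xor_b64 vcc, vcc, -1           ; negating the mask can set bits for inactive lanes too
    s_cbranch_vccnz BB0_2            ; may branch because of a lane that is not executing

With the fix, the lowering inserts the masking s_and before such branches:

    s_and_b64 vcc, exec, vcc         ; clear bits for inactive lanes
    s_cbranch_vccnz BB0_2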
Diffstat (limited to 'llvm/test/CodeGen/AMDGPU/skip-if-dead.ll')
-rw-r--r--  llvm/test/CodeGen/AMDGPU/skip-if-dead.ll  6
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/llvm/test/CodeGen/AMDGPU/skip-if-dead.ll b/llvm/test/CodeGen/AMDGPU/skip-if-dead.ll
index 9ae36b0a06c..54fa93ae9c8 100644
--- a/llvm/test/CodeGen/AMDGPU/skip-if-dead.ll
+++ b/llvm/test/CodeGen/AMDGPU/skip-if-dead.ll
@@ -267,7 +267,7 @@ exit:
; CHECK: [[PHIBB]]:
; CHECK: v_cmp_eq_f32_e32 vcc, 0, [[PHIREG]]
-; CHECK-NEXT: s_cbranch_vccz [[ENDBB:BB[0-9]+_[0-9]+]]
+; CHECK: s_cbranch_vccz [[ENDBB:BB[0-9]+_[0-9]+]]
; CHECK: ; %bb10
; CHECK: v_mov_b32_e32 v{{[0-9]+}}, 9
@@ -302,14 +302,14 @@ end:
; CHECK-LABEL: {{^}}no_skip_no_successors:
; CHECK: v_cmp_nge_f32
-; CHECK-NEXT: s_cbranch_vccz [[SKIPKILL:BB[0-9]+_[0-9]+]]
+; CHECK: s_cbranch_vccz [[SKIPKILL:BB[0-9]+_[0-9]+]]
; CHECK: ; %bb6
; CHECK: s_mov_b64 exec, 0
; CHECK: [[SKIPKILL]]:
; CHECK: v_cmp_nge_f32_e32 vcc
-; CHECK-NEXT: %bb.3: ; %bb5
+; CHECK: %bb.3: ; %bb5
; CHECK-NEXT: .Lfunc_end{{[0-9]+}}
define amdgpu_ps void @no_skip_no_successors(float inreg %arg, float inreg %arg1) #0 {
bb:
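The relaxation of CHECK-NEXT to CHECK in the hunks above is consistent with the change described in the commit message: an "s_and_b64 vcc, exec, vcc" may now be emitted between the v_cmp and the conditional branch, so the branch is no longer guaranteed to be the immediately following instruction. A hypothetical output shape that the relaxed checks still accept (the label BB4_3 and register v2 are invented for illustration):

    v_cmp_eq_f32_e32 vcc, 0, v2      ; matched by the 'v_cmp_eq_f32_e32 vcc, 0, [[PHIREG]]' check
    s_and_b64 vcc, exec, vcc         ; newly inserted mask; would have broken a CHECK-NEXT
    s_cbranch_vccz BB4_3             ; still matched by the relaxed 'CHECK: s_cbranch_vccz'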