From 6eaad1e5397dc84c7dbb78be4fa433bcd6fb137f Mon Sep 17 00:00:00 2001
From: Tim Renouf
Date: Tue, 9 Jan 2018 21:34:43 +0000
Subject: [AMDGPU] Fixed incorrect uniform branch condition

Summary:
I had a case where multiple nested uniform ifs resulted in code that
performed v_cmp comparisons, combined the results with s_and_b64,
s_or_b64 and s_xor_b64, and used the resulting mask in
s_cbranch_vccnz, without first ensuring that bits for inactive lanes
were clear.

There was already code for inserting an "s_and_b64 vcc, exec, vcc" to
clear bits for inactive lanes in the case that the branch is
instruction selected as s_cbranch_scc1 and is then changed to
s_cbranch_vccnz in SIFixSGPRCopies. I have added the same code into
SILowerControlFlow for the case that the branch is instruction
selected as s_cbranch_vccnz.

This de-optimizes the code in some cases where the s_and is not
needed, because vcc is the result of a v_cmp, or of multiple v_cmp
instructions combined by s_and/s_or. We should add a pass to
re-optimize those cases.

Reviewers: arsenm, kzhuravl

Subscribers: wdng, yaxunl, t-tye, llvm-commits, dstuttard, timcorringham, nhaehnle

Differential Revision: https://reviews.llvm.org/D41292

llvm-svn: 322119
---
 llvm/test/CodeGen/AMDGPU/cf-loop-on-constant.ll | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/llvm/test/CodeGen/AMDGPU/cf-loop-on-constant.ll b/llvm/test/CodeGen/AMDGPU/cf-loop-on-constant.ll
index 1e0af2611b0..1e04544d2cb 100644
--- a/llvm/test/CodeGen/AMDGPU/cf-loop-on-constant.ll
+++ b/llvm/test/CodeGen/AMDGPU/cf-loop-on-constant.ll
@@ -95,7 +95,7 @@ for.body:
 
 ; GCN-LABEL: {{^}}loop_arg_0:
 ; GCN: v_and_b32_e32 v{{[0-9]+}}, 1, v{{[0-9]+}}
-; GCN: v_cmp_eq_u32_e32 vcc, 1,
+; GCN: v_cmp_eq_u32{{[^,]*}}, 1,
 
 ; GCN: [[LOOPBB:BB[0-9]+_[0-9]+]]
 ; GCN: s_add_i32 s{{[0-9]+}}, s{{[0-9]+}}, 0x80
-- 
cgit v1.2.3
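
The failure mode described in the commit message can be sketched roughly as follows. This is an illustrative sequence, not taken from the actual miscompiled output; the register numbers, compare operands, and branch target are hypothetical, while the instruction names (v_cmp, s_xor_b64, s_cbranch_vccnz, and the inserted s_and_b64) come from the commit message itself:

```
; A v_cmp result has clear bits for inactive lanes, but combining
; masks with SALU ops such as s_xor_b64 can set those bits:
v_cmp_eq_u32_e64 s[0:1], 1, v2   ; per-lane compare result
s_xor_b64 vcc, s[0:1], s[2:3]    ; xor may set inactive-lane bits
s_cbranch_vccnz BB0_2            ; can branch on inactive-lane garbage

; The fix (now also applied in SILowerControlFlow) masks with exec
; before the branch:
s_and_b64 vcc, exec, vcc         ; clear bits for inactive lanes
s_cbranch_vccnz BB0_2            ; branch depends only on active lanes
```

This also explains the test update above: after the change, the compare may be selected in its `_e64` form writing an SGPR pair rather than `_e32` writing vcc directly, so the CHECK line is loosened from `v_cmp_eq_u32_e32 vcc, 1,` to `v_cmp_eq_u32{{[^,]*}}, 1,`.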