path: root/llvm/lib/Target/AMDGPU/AMDGPUISelDAGToDAG.cpp
author     Tim Renouf <tpr.llvm@botech.co.uk>   2018-01-09 21:34:43 +0000
committer  Tim Renouf <tpr.llvm@botech.co.uk>   2018-01-09 21:34:43 +0000
commit     6eaad1e5397dc84c7dbb78be4fa433bcd6fb137f (patch)
tree       913b778f0fc2c78591c1a0fe7460de17ba98ff1b /llvm/lib/Target/AMDGPU/AMDGPUISelDAGToDAG.cpp
parent     f0ef137bd0c34f984090638df8c62ad55d790694 (diff)
download   bcm5719-llvm-6eaad1e5397dc84c7dbb78be4fa433bcd6fb137f.tar.gz
           bcm5719-llvm-6eaad1e5397dc84c7dbb78be4fa433bcd6fb137f.zip
[AMDGPU] Fixed incorrect uniform branch condition
Summary:
I had a case where multiple nested uniform ifs resulted in code that did
v_cmp comparisons, combining the results with s_and_b64, s_or_b64 and
s_xor_b64 and using the resulting mask in s_cbranch_vccnz, without first
ensuring that bits for inactive lanes were clear.

There was already code for inserting an "s_and_b64 vcc, exec, vcc" to clear
bits for inactive lanes in the case that the branch is instruction selected
as s_cbranch_scc1 and is then changed to s_cbranch_vccnz in SIFixSGPRCopies.
I have added the same code into SILowerControlFlow for the case that the
branch is instruction selected as s_cbranch_vccnz.

This de-optimizes the code in some cases where the s_and is not needed,
because vcc is the result of a v_cmp, or multiple v_cmp instructions
combined by s_and/s_or. We should add a pass to re-optimize those cases.

Reviewers: arsenm, kzhuravl

Subscribers: wdng, yaxunl, t-tye, llvm-commits, dstuttard, timcorringham, nhaehnle

Differential Revision: https://reviews.llvm.org/D41292

llvm-svn: 322119
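For context, the pre-existing mechanism mentioned above (SIInstrInfo::moveToVALU
inserting the S_AND when SIFixSGPRCopies rewrites the branch) amounts to emitting
an "s_and_b64 vcc, exec, vcc" ahead of the branch so that VCC bits for inactive
lanes are cleared. The following is a minimal MachineIR-level sketch of that idea
only; the helper name, parameters and insertion point are illustrative assumptions
and not code from this patch:

    #include "SIInstrInfo.h"
    #include "llvm/CodeGen/MachineBasicBlock.h"
    #include "llvm/CodeGen/MachineInstrBuilder.h"

    // Illustrative sketch, not part of this patch: clear the VCC bits of lanes
    // disabled by EXEC just before a branch that tests VCC, i.e. emit
    // "s_and_b64 vcc, exec, vcc". Helper name and parameters are hypothetical.
    static void maskVCCWithExec(llvm::MachineBasicBlock &MBB,
                                llvm::MachineBasicBlock::iterator BranchI,
                                const llvm::DebugLoc &DL,
                                const llvm::SIInstrInfo *TII) {
      using namespace llvm;
      // After this, VCC bits for inactive lanes are guaranteed to be 0, so
      // s_cbranch_vccnz only reacts to results from active lanes.
      BuildMI(MBB, BranchI, DL, TII->get(AMDGPU::S_AND_B64), AMDGPU::VCC)
          .addReg(AMDGPU::EXEC)
          .addReg(AMDGPU::VCC);
    }

The diff below applies the equivalent masking at SelectionDAG time, in
AMDGPUDAGToDAGISel::SelectBRCOND.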
Diffstat (limited to 'llvm/lib/Target/AMDGPU/AMDGPUISelDAGToDAG.cpp')
-rw-r--r--   llvm/lib/Target/AMDGPU/AMDGPUISelDAGToDAG.cpp   20
1 file changed, 20 insertions(+), 0 deletions(-)
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUISelDAGToDAG.cpp b/llvm/lib/Target/AMDGPU/AMDGPUISelDAGToDAG.cpp
index f4776adb069..3c166199d44 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUISelDAGToDAG.cpp
+++ b/llvm/lib/Target/AMDGPU/AMDGPUISelDAGToDAG.cpp
@@ -1674,6 +1674,26 @@ void AMDGPUDAGToDAGISel::SelectBRCOND(SDNode *N) {
   unsigned CondReg = UseSCCBr ? AMDGPU::SCC : AMDGPU::VCC;
   SDLoc SL(N);
+  if (!UseSCCBr) {
+    // This is the case that we are selecting to S_CBRANCH_VCCNZ. We have not
+    // analyzed what generates the vcc value, so we do not know whether vcc
+    // bits for disabled lanes are 0. Thus we need to mask out bits for
+    // disabled lanes.
+    //
+    // For the case that we select S_CBRANCH_SCC1 and it gets
+    // changed to S_CBRANCH_VCCNZ in SIFixSGPRCopies, SIFixSGPRCopies calls
+    // SIInstrInfo::moveToVALU which inserts the S_AND.
+    //
+    // We could add an analysis of what generates the vcc value here and omit
+    // the S_AND when it is unnecessary. But it would be better to add a separate
+    // pass after SIFixSGPRCopies to do the unnecessary S_AND removal, so it
+    // catches both cases.
+    Cond = SDValue(CurDAG->getMachineNode(AMDGPU::S_AND_B64, SL, MVT::i1,
+                       CurDAG->getRegister(AMDGPU::EXEC, MVT::i1),
+                       Cond),
+                   0);
+  }
+
   SDValue VCC = CurDAG->getCopyToReg(N->getOperand(0), SL, CondReg, Cond);
   CurDAG->SelectNodeTo(N, BrOp, MVT::Other,
                        N->getOperand(2), // Basic Block
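The comment in the added block notes that the S_AND is redundant when vcc is
already produced by v_cmp instructions (possibly combined with s_and_b64 or
s_or_b64), and suggests a later pass to remove it. The sketch below shows the
kind of MachineIR check such a pass could use; the helper name, the reliance on
SIInstrInfo::isVOPC, and the traversal details are assumptions, not code from
this patch or from any existing pass:

    #include "SIInstrInfo.h"
    #include "llvm/CodeGen/MachineRegisterInfo.h"

    // Rough sketch only: returns true if MI's result already has 0 in the bits
    // of all lanes disabled by EXEC, in which case a following
    // "s_and_b64 vcc, exec, vcc" on that value would be redundant.
    static bool isLaneMaskedDef(const llvm::MachineInstr &MI,
                                const llvm::MachineRegisterInfo &MRI) {
      using namespace llvm;
      // A VOPC compare (v_cmp_*) writes 0 for inactive lanes, so its result is
      // already masked by EXEC.
      if (SIInstrInfo::isVOPC(MI))
        return true;
      // s_and_b64 / s_or_b64 of two already-masked values stays masked.
      if (MI.getOpcode() == AMDGPU::S_AND_B64 ||
          MI.getOpcode() == AMDGPU::S_OR_B64) {
        const MachineOperand &Op0 = MI.getOperand(1);
        const MachineOperand &Op1 = MI.getOperand(2);
        if (!Op0.isReg() || !Op1.isReg() ||
            !Op0.getReg().isVirtual() || !Op1.getReg().isVirtual())
          return false;
        const MachineInstr *Def0 = MRI.getVRegDef(Op0.getReg());
        const MachineInstr *Def1 = MRI.getVRegDef(Op1.getReg());
        return Def0 && Def1 && isLaneMaskedDef(*Def0, MRI) &&
               isLaneMaskedDef(*Def1, MRI);
      }
      return false;
    }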