author    cdevadas <cdevadas@amd.com>  2020-01-10 22:23:27 +0530
committer cdevadas <cdevadas@amd.com>  2020-01-15 15:18:16 +0530
commit    0dc6c249bffac9f23a605ce4e42a84341da3ddbd (patch)
tree      113cc776987199087010ef82f8fd4728b06d0c8b /llvm/test/CodeGen/AMDGPU/smrd_vmem_war.ll
parent    064859bde79ccd221fd5196fd2d889014c5435c4 (diff)
[AMDGPU] Invert the handling of skip insertion.
The current implementation of skip insertion (SIInsertSkip) makes it a
mandatory pass required for correctness, although the initial idea was
to have an optional pass. This patch instead inserts the
s_cbranch_execz upfront, during SILowerControlFlow, to skip over
sections of code when no lanes are active. Later,
SIRemoveShortExecBranches removes the skips over short branches, unless
there is a side effect and the skip branch is really necessary.
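
For illustration only (a minimal sketch of the general pattern, not
code from this commit; the register choices s[0:1], v[2:3], v4 and the
label names are assumed), a lowered divergent branch with the skip
emitted upfront would look roughly like:

    s_and_saveexec_b64 s[0:1], vcc      ; mask off lanes that fail the condition
    s_cbranch_execz .LBB0_2             ; inserted upfront: skip the body if exec == 0
    ; %bb.1:                            ; divergent body, may have side effects
    global_store_dword v[2:3], v4, off
    .LBB0_2:
    s_or_b64 exec, exec, s[0:1]         ; rejoin: restore the exec mask

If the body is short and has no side effects, removing the
s_cbranch_execz is safe, since executing the body with exec == 0 is
then harmless and cheaper than taking the branch.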
This new pass will replace the handling of skip insertion in the
existing SIInsertSkip pass.
Differential revision: https://reviews.llvm.org/D68092
Diffstat (limited to 'llvm/test/CodeGen/AMDGPU/smrd_vmem_war.ll')
-rw-r--r--  llvm/test/CodeGen/AMDGPU/smrd_vmem_war.ll | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/llvm/test/CodeGen/AMDGPU/smrd_vmem_war.ll b/llvm/test/CodeGen/AMDGPU/smrd_vmem_war.ll
index 4ba16b4eb30..c376886a3e8 100644
--- a/llvm/test/CodeGen/AMDGPU/smrd_vmem_war.ll
+++ b/llvm/test/CodeGen/AMDGPU/smrd_vmem_war.ll
@@ -1,6 +1,6 @@
 ; RUN: llc -mtriple=amdgcn -mcpu=gfx900 -verify-machineinstrs < %s | FileCheck %s -check-prefix=GCN
-; GCN-LABEL: BB0_1
+; GCN-LABEL: ; %bb.0:
 ; GCN: s_load_dword s{{[0-9]+}}, s{{\[}}[[ADDR_LO:[0-9]+]]{{\:}}[[ADDR_HI:[0-9]+]]{{\]}}, 0x0
 ; GCN: s_waitcnt lgkmcnt(0)
 ; GCN: global_store_dword v{{\[}}[[ADDR_LO]]{{\:}}[[ADDR_HI]]{{\]}}, v{{[0-9]+}}, off