| field | value | date |
|---|---|---|
| author | Hans Wennborg <hans@hanshq.net> | 2019-08-12 14:23:13 +0000 |
| committer | Hans Wennborg <hans@hanshq.net> | 2019-08-12 14:23:13 +0000 |
| commit | a45f301f7a5d0f62910d0ed93c96d221555697c9 | |
| tree | 41fa455129e4bddd79433bca19e28dc6e27b844e /llvm/test/CodeGen/AMDGPU/optimize-negated-cond.ll | |
| parent | e011a5b4edf828dcaaa4ab5552b71d2bacaaecab | |
Revert r368339 "[MBP] Disable aggressive loop rotate in plain mode"
It caused assertions to fire when building Chromium:
lib/CodeGen/LiveDebugValues.cpp:331: bool
{anonymous}::LiveDebugValues::OpenRangesSet::empty() const: Assertion
`Vars.empty() == VarLocs.empty() && "open ranges are inconsistent"' failed.
See https://crbug.com/992871#c3 for how to reproduce.
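For context, the assertion guards a consistency invariant between two containers that are kept in sync: the set of variables with an open location range and the corresponding set of recorded locations. The sketch below is illustrative only (the struct name, member types, and driver are assumptions, not the real LLVM data structures); it shows the shape of the check that fired:

```cpp
// Minimal sketch (not the actual LiveDebugValues code) of a paired-container
// invariant like the one the failing assertion enforces.
#include <cassert>
#include <set>

struct OpenRangesSketch {
  std::set<unsigned> Vars;    // hypothetical: variables with an open range
  std::set<unsigned> VarLocs; // hypothetical: the matching location records

  bool empty() const {
    // If one container drains without the other, the bookkeeping has
    // diverged; that is the "open ranges are inconsistent" condition.
    assert(Vars.empty() == VarLocs.empty() && "open ranges are inconsistent");
    return Vars.empty();
  }
};

int main() {
  OpenRangesSketch OR;
  OR.Vars.insert(1);
  OR.VarLocs.insert(1);
  OR.Vars.erase(1);   // updating only one side breaks the invariant
  // OR.empty();      // would now abort with the assertion above
  return 0;
}
```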
> Patch https://reviews.llvm.org/D43256 introduced a more aggressive loop layout optimization which depends on profile information. If profile information is not available, the statically estimated profile information (generated by BranchProbabilityInfo.cpp) is used. If the user program doesn't behave as BranchProbabilityInfo.cpp expects, the layout may be worse.
>
> To be conservative, this patch restores the original layout algorithm in plain mode. But the user can still try the aggressive layout optimization with -force-precise-rotation-cost=true.
>
> Differential Revision: https://reviews.llvm.org/D65673
llvm-svn: 368579
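The quoted message above points out that the aggressive layout can still be requested explicitly. Assuming the option is reachable as a codegen flag on llc (the input and output file names here are placeholders), an opt-in run would look roughly like:

```
llc -O2 -force-precise-rotation-cost=true input.ll -o output.s
```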
Diffstat (limited to 'llvm/test/CodeGen/AMDGPU/optimize-negated-cond.ll')
| mode | path | lines changed |
|---|---|---|
| -rw-r--r-- | llvm/test/CodeGen/AMDGPU/optimize-negated-cond.ll | 8 |

1 file changed, 4 insertions, 4 deletions
```diff
diff --git a/llvm/test/CodeGen/AMDGPU/optimize-negated-cond.ll b/llvm/test/CodeGen/AMDGPU/optimize-negated-cond.ll
index be5d8d47205..2be99267c4e 100644
--- a/llvm/test/CodeGen/AMDGPU/optimize-negated-cond.ll
+++ b/llvm/test/CodeGen/AMDGPU/optimize-negated-cond.ll
@@ -3,11 +3,11 @@
 ; GCN-LABEL: {{^}}negated_cond:
 ; GCN: BB0_1:
 ; GCN: v_cmp_eq_u32_e64 [[CC:[^,]+]],
-; GCN: BB0_2:
+; GCN: BB0_3:
 ; GCN-NOT: v_cndmask_b32
 ; GCN-NOT: v_cmp
 ; GCN: s_andn2_b64 vcc, exec, [[CC]]
-; GCN: s_cbranch_vccnz BB0_4
+; GCN: s_cbranch_vccnz BB0_2
 define amdgpu_kernel void @negated_cond(i32 addrspace(1)* %arg1) {
 bb:
   br label %bb1
@@ -36,11 +36,11 @@ bb4:
 
 ; GCN-LABEL: {{^}}negated_cond_dominated_blocks:
 ; GCN: v_cmp_eq_u32_e64 [[CC:[^,]+]],
-; GCN: BB1_1:
+; GCN: %bb4
 ; GCN-NOT: v_cndmask_b32
 ; GCN-NOT: v_cmp
 ; GCN: s_andn2_b64 vcc, exec, [[CC]]
-; GCN: s_cbranch_vccz BB1_3
+; GCN: s_cbranch_vccnz BB1_1
 define amdgpu_kernel void @negated_cond_dominated_blocks(i32 addrspace(1)* %arg1) {
 bb:
   br label %bb2
```
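The hunks above only adjust FileCheck expectations: `GCN-LABEL` anchors each group of checks at its kernel, the `GCN:` lines must match the llc output in order, and the `GCN-NOT:` lines forbid matches in between, so the revert changes which machine basic-block labels and branch opcodes the test expects. The file's RUN line sits outside these hunks; a typical driver line for this kind of GCN test (shown here only as an assumed example, not the file's exact contents) looks like:

```
; RUN: llc -march=amdgcn -verify-machineinstrs < %s | FileCheck -check-prefix=GCN %s
```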

