path: root/llvm/test/CodeGen/X86/lsr-loop-exit-cond.ll
author	James Molloy <james.molloy@arm.com>	2016-08-15 07:53:03 +0000
committer	James Molloy <james.molloy@arm.com>	2016-08-15 07:53:03 +0000
commit	196ad0823e67bffef39983fbd9d7c13fb25911b6 (patch)
tree	9bdc1e2cbd6c94f8f75a6304847c91a206b6b7fe /llvm/test/CodeGen/X86/lsr-loop-exit-cond.ll
parent	a5c8a685356a5c5e956723429f7c2c39f8fbcd37 (diff)
[LSR] Don't try and create post-inc expressions on non-rotated loops
If a loop is not rotated (for example when optimizing for size), the latch is not the backedge. If we promote an expression to post-inc form, we not only increase register pressure and add a COPY for that IV expression but for all IVs!

Motivating testcase:

    void f(float *a, float *b, float *c, int n) {
      while (n-- > 0)
        *c++ = *a++ + *b++;
    }

It's imperative that the pointer increments be located in the latch block and not the header block; if not, we cannot use post-increment loads and stores, and we have to keep both the post-inc and pre-inc values around until the end of the latch, which bloats register usage.

llvm-svn: 278658
Diffstat (limited to 'llvm/test/CodeGen/X86/lsr-loop-exit-cond.ll')
-rw-r--r--	llvm/test/CodeGen/X86/lsr-loop-exit-cond.ll | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/llvm/test/CodeGen/X86/lsr-loop-exit-cond.ll b/llvm/test/CodeGen/X86/lsr-loop-exit-cond.ll
index 2e3929be31c..240026164e6 100644
--- a/llvm/test/CodeGen/X86/lsr-loop-exit-cond.ll
+++ b/llvm/test/CodeGen/X86/lsr-loop-exit-cond.ll
@@ -3,12 +3,12 @@
; CHECK-LABEL: t:
; CHECK: movl (%r9,%rax,4), %e{{..}}
-; CHECK-NEXT: decq
+; CHECK-NEXT: testq
; CHECK-NEXT: jne
; ATOM-LABEL: t:
; ATOM: movl (%r9,%r{{.+}},4), %e{{..}}
-; ATOM-NEXT: decq
+; ATOM-NEXT: testq
; ATOM-NEXT: jne
@Te0 = external global [256 x i32] ; <[256 x i32]*> [#uses=5]