author     Chandler Carruth <chandlerc@gmail.com>   2013-01-07 14:41:08 +0000
committer  Chandler Carruth <chandlerc@gmail.com>   2013-01-07 14:41:08 +0000
commit     26c59fa87037f2b9e137fac5c9518417ad863726 (patch)
tree       d1c38c4f1c6f78f78fd4a2902fd7f2d932642d36 /llvm/test
parent     3f9093753585be93f77ad6a52b47b873e9699934 (diff)
download   bcm5719-llvm-26c59fa87037f2b9e137fac5c9518417ad863726.tar.gz
           bcm5719-llvm-26c59fa87037f2b9e137fac5c9518417ad863726.zip
Switch the SCEV expander and LoopStrengthReduce to use TargetTransformInfo rather than TargetLowering, removing one of the primary instances of the layering violation of Transforms depending directly on Target.

This is a really big deal because LSR used to be a "special" pass that could only be tested fully using llc and by looking at the full output of it. It also couldn't run with any other loop passes because it had to be created by the backend. No longer is this true. LSR is now just a normal pass and we should probably lift the creation of LSR out of lib/CodeGen/Passes.cpp and into the PassManagerBuilder. =] I've not done this, or updated all of the tests to use opt and a triple, because I suspect someone more familiar with LSR would do a better job. This change should be essentially without functional impact for normal compilations, and should only change the behavior of targetless compilations.

The conversion required changing all of the LSR code to refer to the TTI interfaces, which fortunately are very similar to TargetLowering's interfaces. However, it also allowed us to *always* expect to have some implementation around. I've pushed that simplification through the pass, and leveraged it to simplify the code somewhat. It required some test updates for one of two reasons: either we used to skip some checks altogether but now we get the default "no" answer for them, or we used to have no information about the target and now we do have some.

I've also started the process of removing AddrMode, as the TTI interface doesn't use it any longer. In some cases this simplifies code, and in others it adds some complexity, but I think it's not a bad tradeoff even there. Subsequent patches will try to clean this up even further and use other (more appropriate) abstractions.

Yet again, almost all of the formatting changes brought to you by clang-format. =]

llvm-svn: 171735
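As a rough sketch of the testing model this enables (a hypothetical test, not part of this patch; the function name is made up for illustration): LSR can now be exercised directly through opt as a normal pass, with target information supplied by an explicit -mtriple rather than by running the whole backend via llc.

; RUN: opt < %s -loop-reduce -S -mtriple=x86_64-unknown-unknown | FileCheck %s
; A trivial counting loop; we only check that the function survives LSR.
; CHECK: define void @count_up
define void @count_up(i32 %n) nounwind {
entry:
  br label %loop

loop:                                             ; preds = %loop, %entry
  %iv = phi i32 [ 0, %entry ], [ %iv.next, %loop ]
  %iv.next = add nsw i32 %iv, 1
  %exitcond = icmp eq i32 %iv.next, %n
  br i1 %exitcond, label %exit, label %loop

exit:                                             ; preds = %loop
  ret void
}

Like the renamed tests below, a test that pins down a specific triple this way would live under an X86/ subdirectory so it only runs when the X86 backend is built.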
Diffstat (limited to 'llvm/test')
-rw-r--r--  llvm/test/Transforms/LoopStrengthReduce/2012-07-18-LimitReassociate.ll | 13
-rw-r--r--  llvm/test/Transforms/LoopStrengthReduce/X86/2008-08-14-ShadowIV.ll (renamed from llvm/test/Transforms/LoopStrengthReduce/2008-08-14-ShadowIV.ll) |  2
-rw-r--r--  llvm/test/Transforms/LoopStrengthReduce/X86/2011-07-20-DoubleIV.ll (renamed from llvm/test/Transforms/LoopStrengthReduce/2011-07-20-DoubleIV.ll) |  2
-rw-r--r--  llvm/test/Transforms/LoopStrengthReduce/post-inc-icmpzero.ll |  8
4 files changed, 13 insertions, 12 deletions
diff --git a/llvm/test/Transforms/LoopStrengthReduce/2012-07-18-LimitReassociate.ll b/llvm/test/Transforms/LoopStrengthReduce/2012-07-18-LimitReassociate.ll
index 0aa27876207..53da4627162 100644
--- a/llvm/test/Transforms/LoopStrengthReduce/2012-07-18-LimitReassociate.ll
+++ b/llvm/test/Transforms/LoopStrengthReduce/2012-07-18-LimitReassociate.ll
@@ -5,16 +5,17 @@
; PR13361: LSR + SCEV "hangs" on reasonably sized test with sequence of loops
;
; Without limits on CollectSubexpr, we have thousands of formulae for
-; the use that crosses loops. With limits we have five.
+; the use that crosses loops. With limits we have six.
; CHECK: LSR on loop %bb221:
; CHECK: After generating reuse formulae:
; CHECK: LSR is examining the following uses:
; CHECK: LSR Use: Kind=Special
-; CHECK: {{.*reg\(\{\{\{\{\{\{\{\{\{}}
-; CHECK: {{.*reg\(\{\{\{\{\{\{\{\{\{}}
-; CHECK: {{.*reg\(\{\{\{\{\{\{\{\{\{}}
-; CHECK: {{.*reg\(\{\{\{\{\{\{\{\{\{}}
-; CHECK: {{.*reg\(\{\{\{\{\{\{\{\{\{}}
+; CHECK: {{.*reg\(\{.*\{.*\{.*\{.*\{.*\{.*\{.*\{.*\{}}
+; CHECK: {{.*reg\(\{.*\{.*\{.*\{.*\{.*\{.*\{.*\{.*\{}}
+; CHECK: {{.*reg\(\{.*\{.*\{.*\{.*\{.*\{.*\{.*\{.*\{}}
+; CHECK: {{.*reg\(\{.*\{.*\{.*\{.*\{.*\{.*\{.*\{.*\{}}
+; CHECK: {{.*reg\(\{.*\{.*\{.*\{.*\{.*\{.*\{.*\{.*\{}}
+; CHECK: {{.*reg\(\{.*\{.*\{.*\{.*\{.*\{.*\{.*\{.*\{}}
; CHECK-NOT:reg
; CHECK: Filtering for use
target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128"
diff --git a/llvm/test/Transforms/LoopStrengthReduce/2008-08-14-ShadowIV.ll b/llvm/test/Transforms/LoopStrengthReduce/X86/2008-08-14-ShadowIV.ll
index c650d8cf76d..9a7f4865c59 100644
--- a/llvm/test/Transforms/LoopStrengthReduce/2008-08-14-ShadowIV.ll
+++ b/llvm/test/Transforms/LoopStrengthReduce/X86/2008-08-14-ShadowIV.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -loop-reduce -S | grep "phi double" | count 1
+; RUN: opt < %s -loop-reduce -S -mtriple=x86_64-unknown-unknown | grep "phi double" | count 1
define void @foobar(i32 %n) nounwind {
entry:
diff --git a/llvm/test/Transforms/LoopStrengthReduce/2011-07-20-DoubleIV.ll b/llvm/test/Transforms/LoopStrengthReduce/X86/2011-07-20-DoubleIV.ll
index 5d9ed64ef42..a932b479258 100644
--- a/llvm/test/Transforms/LoopStrengthReduce/2011-07-20-DoubleIV.ll
+++ b/llvm/test/Transforms/LoopStrengthReduce/X86/2011-07-20-DoubleIV.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -loop-reduce -S | FileCheck %s
+; RUN: opt < %s -loop-reduce -S -mtriple=x86_64-unknown-unknown | FileCheck %s
;
; Test LSR's OptimizeShadowIV. Handle a floating-point IV with a
; nonzero initial value.
diff --git a/llvm/test/Transforms/LoopStrengthReduce/post-inc-icmpzero.ll b/llvm/test/Transforms/LoopStrengthReduce/post-inc-icmpzero.ll
index 96904c66e64..9e02d92a6f4 100644
--- a/llvm/test/Transforms/LoopStrengthReduce/post-inc-icmpzero.ll
+++ b/llvm/test/Transforms/LoopStrengthReduce/post-inc-icmpzero.ll
@@ -4,12 +4,12 @@
; LSR should properly handle the post-inc offset when folding the
; non-IV operand of an icmp into the IV.
-; CHECK: %4 = sub i64 %sub.ptr.lhs.cast, %sub.ptr.rhs.cast
-; CHECK: %5 = lshr i64 %4, 1
-; CHECK: %6 = mul i64 %5, 2
+; CHECK: %3 = sub i64 %sub.ptr.lhs.cast, %sub.ptr.rhs.cast
+; CHECK: %4 = lshr i64 %3, 1
+; CHECK: %5 = mul i64 %4, 2
; CHECK: br label %for.body
; CHECK: for.body:
-; CHECK: %lsr.iv2 = phi i64 [ %lsr.iv.next, %for.body ], [ %6, %for.body.lr.ph ]
+; CHECK: %lsr.iv2 = phi i64 [ %lsr.iv.next, %for.body ], [ %5, %for.body.lr.ph ]
; CHECK: %lsr.iv.next = add i64 %lsr.iv2, -2
; CHECK: %lsr.iv.next3 = inttoptr i64 %lsr.iv.next to i16*
; CHECK: %cmp27 = icmp eq i16* %lsr.iv.next3, null