author     Silviu Baranga <silviu.baranga@arm.com>  2016-02-08 10:45:50 +0000
committer  Silviu Baranga <silviu.baranga@arm.com>  2016-02-08 10:45:50 +0000
commit     a35fadc7c49fa6949342739f2e75c833a5966fdd (patch)
tree       d2f4032ca0d6a0493cbe4b3984a4212907a4e25a /llvm/test/Transforms/LoopVectorize
parent     25779e404b08eb92ff409a8f38080f3e46575bb7 (diff)
[SCEV][LAA] Add no wrap SCEV predicates and use them to improve strided pointer detection
Summary:
This change adds no wrap SCEV predicates with:
  - support for runtime checking
  - support for expression rewriting:
      sext({x,+,y}) -> {sext(x),+,sext(y)}
      zext({x,+,y}) -> {zext(x),+,sext(y)}

Note that we are sign extending the increment of the SCEV, even for the
zext case. This is needed to cover the fairly common case where y would
be a (small) negative integer. In order to do this, this change adds two
new flags, nusw and nssw, that are applicable to AddRecExprs and permit
the transformations above.

We also change isStridedPtr in LAA to be able to make use of these
predicates. With this feature we should now always be able to work
around overflow issues in the dependence analysis.

Reviewers: mzolotukhin, sanjoy, anemet

Subscribers: mzolotukhin, sanjoy, llvm-commits, rengolin, jmolloy, hfinkel

Differential Revision: http://reviews.llvm.org/D15412

llvm-svn: 260085
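A minimal, self-contained C++ sketch of why the increment is sign-extended even in the zext rewrite. This uses plain integer arithmetic, not LLVM's SCEV API, and the start value, step, and trip count are invented for illustration: with a small negative step y, zero-extending the step would add 255 per iteration instead of -1, while the sign-extended step keeps the wide recurrence in sync with the narrow one whenever the narrow AddRec does not wrap, which is the condition the nusw/nssw predicates (or their runtime check) provide.

#include <cassert>
#include <cstdint>

int main() {
  // Narrow AddRec {x,+,y} over i8 with a small negative step, the case the
  // commit message calls out. Concrete values are made up for illustration.
  uint8_t x = 10;
  int8_t  y = -1;

  uint64_t wide = x;   // zext(x): start of the rewritten wide AddRec
  int64_t  step = y;   // sext(y) == -1; zext(y) would be 255, which is wrong

  uint8_t narrow = x;
  for (int i = 0; i < 5; ++i) {
    narrow = static_cast<uint8_t>(narrow + y);  // the original narrow recurrence
    wide += step;                               // {zext(x),+,sext(y)}
    // As long as the narrow recurrence does not wrap (exactly what the new
    // nusw/nssw predicates assert, or what the emitted runtime check verifies),
    // the wide rewrite tracks the zero-extended narrow value.
    assert(wide == static_cast<uint64_t>(narrow));
  }
  return 0;
}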
Diffstat (limited to 'llvm/test/Transforms/LoopVectorize')
-rw-r--r--  llvm/test/Transforms/LoopVectorize/same-base-access.ll | 10
1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/llvm/test/Transforms/LoopVectorize/same-base-access.ll b/llvm/test/Transforms/LoopVectorize/same-base-access.ll
index 31cff0ee653..53fad8afdad 100644
--- a/llvm/test/Transforms/LoopVectorize/same-base-access.ll
+++ b/llvm/test/Transforms/LoopVectorize/same-base-access.ll
@@ -62,11 +62,9 @@ define i32 @kernel11(double* %x, double* %y, i32 %n) nounwind uwtable ssp {
}
-
-; We don't vectorize this function because A[i*7] is scalarized, and the
-; different scalars can in theory wrap around and overwrite other scalar
-; elements. At the moment we only allow read/write access to arrays
-; that are consecutive.
+; A[i*7] is scalarized, and the different scalars can in theory wrap
+; around and overwrite other scalar elements. However we can still
+; vectorize because we can version the loop to avoid this case.
;
; void foo(int *a) {
; for (int i=0; i<256; ++i) {
@@ -78,7 +76,7 @@ define i32 @kernel11(double* %x, double* %y, i32 %n) nounwind uwtable ssp {
; }
; CHECK-LABEL: @func2(
-; CHECK-NOT: <4 x i32>
+; CHECK: <4 x i32>
; CHECK: ret
define i32 @func2(i32* nocapture %a) nounwind uwtable ssp {
br label %1
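The added test comment says the loop can now be vectorized because it can be versioned. Below is a hedged, conceptual C++ sketch of what that versioning amounts to; it is not the vectorizer's actual output, the function names are invented, and body() is a placeholder because the test's real loop body is not reproduced in the hunk above.

#include <limits>

static void body(int *a, int i) { a[i * 7] += 1; }  // placeholder loop body

static void foo_scalar(int *a) {
  for (int i = 0; i < 256; ++i)
    body(a, i);                      // original scalar loop
}

static void foo_vectorized(int *a) {
  // Stand-in for the <4 x i32> vector body checked by the updated test.
  for (int i = 0; i < 256; i += 4)
    for (int lane = 0; lane < 4; ++lane)
      body(a, i + lane);
}

void foo_versioned(int *a) {
  // Guard materialized from the no-wrap predicate on the stride i*7. With the
  // trip count known to be 256, 255*7 provably fits in a 32-bit int, so the
  // check is trivially true here; in general it is a real runtime test.
  bool no_wrap = 255LL * 7 <= std::numeric_limits<int>::max();
  if (no_wrap)
    foo_vectorized(a);   // versioned, vectorizable copy
  else
    foo_scalar(a);       // fall back to the original loop
}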