From 01a057a0c4752940f4ba32b84bf209e85564e424 Mon Sep 17 00:00:00 2001
From: David L Kreitzer
Date: Fri, 14 Oct 2016 18:20:41 +0000
Subject: Add a pass to optimize patterns of vectorized interleaved memory
 accesses for X86.

The pass optimizes as a unit the entire wide load + shuffles pattern
produced by interleaved vectorization. This initial patch optimizes one
pattern (64-bit elements interleaved by a factor of 4). Future patches
will generalize to additional patterns.

Patch by Farhana Aleen

Differential revision: http://reviews.llvm.org/D24681

llvm-svn: 284260
---
 llvm/lib/CodeGen/InterleavedAccessPass.cpp | 5 +++++
 1 file changed, 5 insertions(+)

(limited to 'llvm/lib/CodeGen/InterleavedAccessPass.cpp')

diff --git a/llvm/lib/CodeGen/InterleavedAccessPass.cpp b/llvm/lib/CodeGen/InterleavedAccessPass.cpp
index eec282d53b0..362f61744f7 100644
--- a/llvm/lib/CodeGen/InterleavedAccessPass.cpp
+++ b/llvm/lib/CodeGen/InterleavedAccessPass.cpp
@@ -29,6 +29,9 @@
 // It could be transformed into a ld2 intrinsic in AArch64 backend or a vld2
 // intrinsic in ARM backend.
 //
+// In X86, this can be further optimized into a set of target
+// specific loads followed by an optimized sequence of shuffles.
+//
 // E.g. An interleaved store (Factor = 3):
 //   %i.vec = shuffle <8 x i32> %v0, <8 x i32> %v1,
 //                    <0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11>
@@ -37,6 +40,8 @@
 // It could be transformed into a st3 intrinsic in AArch64 backend or a vst3
 // intrinsic in ARM backend.
 //
+// Similarly, a set of interleaved stores can be transformed into an optimized
+// sequence of shuffles followed by a set of target specific stores for X86.
 //===----------------------------------------------------------------------===//
 
 #include "llvm/CodeGen/Passes.h"
--
cgit v1.2.3
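
For illustration, the one pattern this initial patch handles (64-bit elements
de-interleaved by a factor of 4) takes roughly the following shape in IR. This
sketch is not taken from the patch itself, and the value names (%ptr,
%wide.vec, %v0..%v3) are hypothetical:

  ; Wide load of 4 interleaved vectors of <4 x i64> each.
  %wide.vec = load <16 x i64>, <16 x i64>* %ptr, align 8
  ; Four stride-4 shuffles extract the interleaved member vectors.
  %v0 = shufflevector <16 x i64> %wide.vec, <16 x i64> undef, <4 x i32> <i32 0, i32 4, i32 8, i32 12>
  %v1 = shufflevector <16 x i64> %wide.vec, <16 x i64> undef, <4 x i32> <i32 1, i32 5, i32 9, i32 13>
  %v2 = shufflevector <16 x i64> %wide.vec, <16 x i64> undef, <4 x i32> <i32 2, i32 6, i32 10, i32 14>
  %v3 = shufflevector <16 x i64> %wide.vec, <16 x i64> undef, <4 x i32> <i32 3, i32 7, i32 11, i32 15>

On X86, the pass would treat this whole group as a unit, replacing the wide
load plus the four stride-4 shuffles with a set of target-specific loads
followed by an optimized shuffle sequence, as the added comments describe.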