| author | Quentin Colombet <qcolombet@apple.com> | 2013-07-30 00:24:09 +0000 |
|---|---|---|
| committer | Quentin Colombet <qcolombet@apple.com> | 2013-07-30 00:24:09 +0000 |
| commit | 6bf4baa408c7042df5791adce351b317e5635e21 | |
| tree | b3013d49a8af3a28c099013010b514ec1c22f47d | /llvm/test/CodeGen/ARM |
| parent | 6e10f149c4d0ec6fd8b0c6012b6c4d70dccb586d | |
[DAGCombiner] insert_vector_elt: Avoid building a vector twice.
This patch prevents the following combine when the input vector is used more
than once.
```
insert_vector_elt (build_vector elt0, ..., eltN), NewEltIdx, idx
  =>
build_vector elt0, ..., NewEltIdx, ..., eltN
```
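As a concrete instance (illustrative operand names, not from the commit): with a four-element vector and idx = 2, the fold substitutes the new value at position 2 and drops the insert node entirely:

```
insert_vector_elt (build_vector a, b, c, d), x, 2
  =>
build_vector a, b, x, d
```

The patch does not remove this fold; it only stops it from firing when the build_vector result has other users.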
The reasons are:
- Building a vector may be expensive, so try to reuse the existing part of a
vector instead of creating a new one (think big vectors).
- elt0 to eltN now have two users instead of one. This may prevent some other
optimizations.
llvm-svn: 187396
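To make the multi-use case concrete, here is a minimal IR sketch (hypothetical function and value names, modeled on the @t5 test added below). %vec has two users, so when this lowers to a DAG, the build_vector-like node feeding both inserts would have to be rebuilt twice, and %a would pick up an extra user:

```llvm
; Hypothetical reduction: %vec is used by two inserts, so the combine
; must leave both insertelement nodes in place rather than folding
; either one into a fresh build_vector.
define <2 x i32> @multi_use(i32 %a, i32 %b, i32 %c) {
entry:
  %vec = insertelement <2 x i32> undef, i32 %a, i32 1
  ; Two users of %vec, each overwriting lane 0 with a different value.
  %use0 = insertelement <2 x i32> %vec, i32 %b, i32 0
  %use1 = insertelement <2 x i32> %vec, i32 %c, i32 0
  %sum = add <2 x i32> %use0, %use1
  ret <2 x i32> %sum
}
```

This is exactly the shape of @t5 in the diff below, where %vec (holding the lane-1 load) feeds both %vecinit1 and %vecinit2.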
Diffstat (limited to 'llvm/test/CodeGen/ARM')
| mode | path | lines changed |
|---|---|---|
| -rw-r--r-- | llvm/test/CodeGen/ARM/vector-DAGCombine.ll | 26 |

1 file changed, 26 insertions(+), 0 deletions(-)
```diff
diff --git a/llvm/test/CodeGen/ARM/vector-DAGCombine.ll b/llvm/test/CodeGen/ARM/vector-DAGCombine.ll
index 3e138199e6f..4221c98424a 100644
--- a/llvm/test/CodeGen/ARM/vector-DAGCombine.ll
+++ b/llvm/test/CodeGen/ARM/vector-DAGCombine.ll
@@ -198,3 +198,29 @@ entry:
   %vmull.i = tail call <8 x i16> @llvm.arm.neon.vmullu.v8i16(<8 x i8> %0, <8 x i8> %0)
   ret <8 x i16> %vmull.i
 }
+
+; Make sure vector load is used for all three loads.
+; Lowering to build vector was breaking the single use property of the load of
+; %pix_sp0.0.copyload.
+; CHECK: t5
+; CHECK: vld1.32 {[[REG1:d[0-9]+]][1]}, [r0]
+; CHECK: vorr [[REG2:d[0-9]+]], [[REG1]], [[REG1]]
+; CHECK: vld1.32 {[[REG1]][0]}, [r1]
+; CHECK: vld1.32 {[[REG2]][0]}, [r2]
+; CHECK: vmull.u8 q{{[0-9]+}}, [[REG1]], [[REG2]]
+define <8 x i16> @t5(i8* nocapture %sp0, i8* nocapture %sp1, i8* nocapture %sp2) {
+entry:
+  %pix_sp0.0.cast = bitcast i8* %sp0 to i32*
+  %pix_sp0.0.copyload = load i32* %pix_sp0.0.cast, align 1
+  %pix_sp1.0.cast = bitcast i8* %sp1 to i32*
+  %pix_sp1.0.copyload = load i32* %pix_sp1.0.cast, align 1
+  %pix_sp2.0.cast = bitcast i8* %sp2 to i32*
+  %pix_sp2.0.copyload = load i32* %pix_sp2.0.cast, align 1
+  %vec = insertelement <2 x i32> undef, i32 %pix_sp0.0.copyload, i32 1
+  %vecinit1 = insertelement <2 x i32> %vec, i32 %pix_sp1.0.copyload, i32 0
+  %vecinit2 = insertelement <2 x i32> %vec, i32 %pix_sp2.0.copyload, i32 0
+  %0 = bitcast <2 x i32> %vecinit1 to <8 x i8>
+  %1 = bitcast <2 x i32> %vecinit2 to <8 x i8>
+  %vmull.i = tail call <8 x i16> @llvm.arm.neon.vmullu.v8i16(<8 x i8> %0, <8 x i8> %1)
+  ret <8 x i16> %vmull.i
+}
```

