| author | Matthias Braun <matze@braunis.de> | 2015-07-17 01:44:31 +0000 |
|---|---|---|
| committer | Matthias Braun <matze@braunis.de> | 2015-07-17 01:44:31 +0000 |
| commit | 2d8315f8066bbce201e9b2c28b7d24915dcbe5f0 (patch) | |
| tree | 6de810000cfdd7af9f988349786d0d911cf9932f | /llvm/test/CodeGen/ARM/vector-load.ll |
| parent | fb2398d0c43405a6b654c80560e38fb3ccd134b9 (diff) | |
ARM: Enable MachineScheduler and disable PostRAScheduler for swift.
This is mostly done to disable the PostRAScheduler, which optimizes for
instruction latencies and therefore isn't a good fit for out-of-order
architectures. This also allows us to leave out the itinerary table for
swift in favor of the SchedModel ones.
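For context, LLVM targets make this kind of scheduling-policy switch by overriding hooks on TargetSubtargetInfo. The standalone C++ sketch below is illustrative only: the simplified Subtarget struct and the IsSwift flag are assumptions for the example, and only the hook names enableMachineScheduler/enablePostRAScheduler mirror the real LLVM interface; it is not the code from this patch.

```cpp
#include <iostream>

// Simplified stand-in for a CPU subtarget; only the two scheduling hooks
// mirrored from TargetSubtargetInfo are shown (illustrative, not LLVM code).
struct Subtarget {
  bool IsSwift; // assumption: in a real target this would come from -mcpu=swift

  // Out-of-order swift: drive scheduling from the per-CPU SchedModel via
  // the pre-RA MachineScheduler.
  bool enableMachineScheduler() const { return IsSwift; }

  // ...and skip the PostRAScheduler, whose latency-oriented list scheduling
  // fits in-order cores better than an out-of-order design.
  bool enablePostRAScheduler() const { return !IsSwift; }
};

int main() {
  Subtarget Swift{true};
  Subtarget InOrderCPU{false};
  std::cout << "swift:    MI sched=" << Swift.enableMachineScheduler()
            << " post-RA=" << Swift.enablePostRAScheduler() << '\n';
  std::cout << "in-order: MI sched=" << InOrderCPU.enableMachineScheduler()
            << " post-RA=" << InOrderCPU.enablePostRAScheduler() << '\n';
}
```

With this split, an out-of-order CPU gets the SchedModel-driven pre-RA scheduler while in-order CPUs keep the latency-oriented post-RA pass.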
This change leads to performance improvements and regressions of as much as
10% in some benchmarks; overall we lose 0.4% performance across the
llvm-testsuite, for reasons that appear to be unknown or outside the
compiler's control. rdar://20803802 documents the investigation of
these effects.
While it is probably a good idea to make the same switch for the other
out-of-order ARM CPUs, I limited this change to swift because I cannot
run the benchmark verification on the other CPUs.
Differential Revision: http://reviews.llvm.org/D10513
llvm-svn: 242500
Diffstat (limited to 'llvm/test/CodeGen/ARM/vector-load.ll')
-rw-r--r-- | llvm/test/CodeGen/ARM/vector-load.ll | 4 |
1 file changed, 2 insertions, 2 deletions
diff --git a/llvm/test/CodeGen/ARM/vector-load.ll b/llvm/test/CodeGen/ARM/vector-load.ll
index 17f134f458a..a638c2bdb9b 100644
--- a/llvm/test/CodeGen/ARM/vector-load.ll
+++ b/llvm/test/CodeGen/ARM/vector-load.ll
@@ -238,12 +238,12 @@ define <4 x i32> @zextload_v8i8tov8i32(<4 x i8>** %ptr) {
 define <4 x i32> @zextload_v8i8tov8i32_fake_update(<4 x i8>** %ptr) {
 ;CHECK-LABEL: zextload_v8i8tov8i32_fake_update:
-;CHECK: ldr.w r[[PTRREG:[0-9]+]], [r0]
+;CHECK: ldr r[[PTRREG:[0-9]+]], [r0]
 ;CHECK: vld1.32 {{{d[0-9]+}}[0]}, [r[[PTRREG]]:32]
 ;CHECK: add.w r[[INCREG:[0-9]+]], r[[PTRREG]], #16
-;CHECK: str.w r[[INCREG]], [r0]
 ;CHECK: vmovl.u8 {{q[0-9]+}}, {{d[0-9]+}}
 ;CHECK: vmovl.u16 {{q[0-9]+}}, {{d[0-9]+}}
+;CHECK: str r[[INCREG]], [r0]
 %A = load <4 x i8>*, <4 x i8>** %ptr
 %lA = load <4 x i8>, <4 x i8>* %A, align 4
 %inc = getelementptr <4 x i8>, <4 x i8>* %A, i38 4