author    Tim Northover <tnorthover@apple.com>  2015-07-29 21:34:32 +0000
committer Tim Northover <tnorthover@apple.com>  2015-07-29 21:34:32 +0000
commit    2a9d801fd58d2b6407662d76e95fcd21957282a6 (patch)
tree      80326e3ed6fb3e793fe263bfd282498fd4fa5491 /llvm/test/CodeGen/AArch64/bitfield.ll
parent    a6f9a37d92bbecaec799e02f84d6bf2cb866e91b (diff)
AArch64: use 32-bit MOV rather than UBFX to truncate registers.
It's potentially more efficient on Cyclone, and judging from the optimization guides & schedulers it has no effect on Cortex-A53 or A57. In general you'd expect a MOV to be about the most efficient instruction with its semantics, even though the official "UXTW" alias is really a UBFX.

llvm-svn: 243576
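As an illustration (not part of the original commit message), here is a minimal stand-alone sketch of the pattern this change affects: an i32 argument zero-extended to i64, as in the test updated below. The function name is made up for illustration; the before/after lines mirror the CHECK update in the diff.

; Hypothetical reproduction of the test's zero-extension pattern.
@var64 = global i64 0

define void @trunc_example(i32 %var) {
  ; Zero-extending i32 to i64 only needs the low 32 bits; on AArch64,
  ; writing a W register already clears the upper 32 bits of the X register.
  %uxt64 = zext i32 %var to i64
  store volatile i64 %uxt64, i64* @var64
  ; Before this commit: ubfx x0, x0, #0, #32
  ; After this commit:  mov  w0, w0
  ret void
}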
Diffstat (limited to 'llvm/test/CodeGen/AArch64/bitfield.ll')
-rw-r--r--  llvm/test/CodeGen/AArch64/bitfield.ll  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/llvm/test/CodeGen/AArch64/bitfield.ll b/llvm/test/CodeGen/AArch64/bitfield.ll
index 78399c80b5d..e1e4f62f662 100644
--- a/llvm/test/CodeGen/AArch64/bitfield.ll
+++ b/llvm/test/CodeGen/AArch64/bitfield.ll
@@ -60,7 +60,7 @@ define void @test_extendw(i32 %var) {
%uxt64 = zext i32 %var to i64
store volatile i64 %uxt64, i64* @var64
-; CHECK: ubfx {{x[0-9]+}}, {{x[0-9]+}}, #0, #32
+; CHECK: mov {{w[0-9]+}}, w0
ret void
}