path: root/llvm/test/CodeGen/AMDGPU/insert_vector_elt.ll
author    Matt Arsenault <Matthew.Arsenault@amd.com>  2016-11-01 22:55:07 +0000
committer Matt Arsenault <Matthew.Arsenault@amd.com>  2016-11-01 22:55:07 +0000
commit    3d463193a961c891ec1b49d07de4729794cc5b14 (patch)
tree      6084e14c2d67c2a90ab6d3d40c288fdd8180ad52 /llvm/test/CodeGen/AMDGPU/insert_vector_elt.ll
parent    8a99f120bc831d424185b65809903a1212989348 (diff)
AMDGPU: Default to using scalar mov to materialize immediate
This is the conservatively correct approach because it is easy to move or replace a scalar immediate. The previous behavior was incorrect when the register class was not known from the static instruction definition but still needed to be an SGPR; the main example of this is inline asm with an SGPR constraint. Also start verifying the register classes of inline asm operands.

llvm-svn: 285762
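As a hedged illustration of the inline asm case described above (a hypothetical function, not a test from this commit): an immediate passed to an "s"-constrained inline asm operand must live in an SGPR, so the backend has to materialize it with s_mov_b32 rather than v_mov_b32_e32.

; Hypothetical example: the "s" constraint requires an SGPR operand, so the
; immediate 1234567 must be materialized into a scalar register before the
; asm statement can use it.
define void @sgpr_constraint_imm() {
  call void asm sideeffect "; use $0", "s"(i32 1234567)
  ret void
}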
Diffstat (limited to 'llvm/test/CodeGen/AMDGPU/insert_vector_elt.ll')
-rw-r--r--  llvm/test/CodeGen/AMDGPU/insert_vector_elt.ll  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/llvm/test/CodeGen/AMDGPU/insert_vector_elt.ll b/llvm/test/CodeGen/AMDGPU/insert_vector_elt.ll
index 0cdb1c9fb3a..37da9c5d5ad 100644
--- a/llvm/test/CodeGen/AMDGPU/insert_vector_elt.ll
+++ b/llvm/test/CodeGen/AMDGPU/insert_vector_elt.ll
@@ -15,7 +15,7 @@
; GCN-DAG: v_mov_b32_e32 v{{[0-9]+}}, s{{[0-9]+}}
; GCN-DAG: v_mov_b32_e32 v{{[0-9]+}}, s{{[0-9]+}}
; GCN-DAG: v_mov_b32_e32 v{{[0-9]+}}, s{{[0-9]+}}
-; GCN-DAG: v_mov_b32_e32 [[CONSTREG:v[0-9]+]], 0x40a00000
+; GCN-DAG: s_mov_b32 [[CONSTREG:s[0-9]+]], 0x40a00000
; GCN-DAG: v_mov_b32_e32 v[[LOW_REG:[0-9]+]], [[CONSTREG]]
; GCN: buffer_store_dwordx4 v{{\[}}[[LOW_REG]]:
define void @insertelement_v4f32_0(<4 x float> addrspace(1)* %out, <4 x float> %a) nounwind {
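The body of this test function is elided by the hunk. A minimal sketch of the pattern being checked, assuming the 0x40a00000 constant (5.0f) is the value inserted at element 0, could look like the following; the suffixed function name marks it as illustrative rather than copied from the test file.

; Sketch (assumed shape, not taken from the test file): insert a float
; constant (5.0 == 0x40a00000) into element 0 of the vector and store it.
; That constant is what the s_mov_b32 check above now expects to be
; materialized in an SGPR and then copied into a VGPR for the store.
define void @insertelement_v4f32_0_sketch(<4 x float> addrspace(1)* %out, <4 x float> %a) nounwind {
  %vecins = insertelement <4 x float> %a, float 5.0, i32 0
  store <4 x float> %vecins, <4 x float> addrspace(1)* %out
  ret void
}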