| author | Nadav Rotem <nrotem@apple.com> | 2012-12-24 09:40:33 +0000 |
|---|---|---|
| committer | Nadav Rotem <nrotem@apple.com> | 2012-12-24 09:40:33 +0000 |
| commit | dc0ad92b64aa4775a82a4ed6c960ba4b3762cc66 (patch) | |
| tree | 2f3c1393e610059721e894cb42b1500bcee4a599 /llvm/test/CodeGen/X86 | |
| parent | 5f7c12cfbde275cd87c2ba33aa17d6ff6f5de436 (diff) | |
Some x86 instructions can use a memory location for one of their operands. With SSE encodings, this memory operand must be aligned; when the same instructions are VEX-encoded (on AVX), there is no such requirement. This change updates the folding tables and removes the alignment restrictions from VEX-encoded instructions.
llvm-svn: 171024
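The distinction described above can be sketched in LLVM IR (a hedged illustration of my own — the function name and comments below are not part of this commit): an `and` whose vector memory operand is underaligned can still be folded into the VEX-encoded `vandps`, whereas the SSE encoding would require 16-byte alignment and force a separate load.

```llvm
; Hypothetical sketch, not from the commit. Compiled with AVX enabled,
; llc can fold the underaligned load into the VEX instruction, roughly:
;   vandps (%rdi), %xmm0, %xmm0
; With plain SSE, andps demands a 16-byte-aligned memory operand, so the
; load must stay separate (e.g. movups followed by andps).
define <4 x i32> @and_fold(<4 x i32>* %p, <4 x i32> %v) nounwind {
entry:
  %m = load <4 x i32>* %p, align 1   ; underaligned vector load
  %r = and <4 x i32> %m, %v
  ret <4 x i32> %r
}
```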
Diffstat (limited to 'llvm/test/CodeGen/X86')
| -rw-r--r-- | llvm/test/CodeGen/X86/fold-vex.ll | 16 |
1 file changed, 16 insertions(+), 0 deletions(-)
```diff
diff --git a/llvm/test/CodeGen/X86/fold-vex.ll b/llvm/test/CodeGen/X86/fold-vex.ll
new file mode 100644
index 00000000000..60e500b419e
--- /dev/null
+++ b/llvm/test/CodeGen/X86/fold-vex.ll
@@ -0,0 +1,16 @@
+; RUN: llc < %s -mcpu=corei7-avx -march=x86-64 | FileCheck %s
+
+;CHECK: @test
+; No need to load from memory. The operand will be loaded as part of th AND instr.
+;CHECK-NOT: vmovaps
+;CHECK: vandps
+;CHECK: ret
+
+define void @test1(<8 x i32>* %p0, <8 x i32> %in1) nounwind {
+entry:
+  %in0 = load <8 x i32>* %p0, align 2
+  %a = and <8 x i32> %in0, %in1
+  store <8 x i32> %a, <8 x i32>* undef
+  ret void
+}
```

