author | Evan Cheng <evan.cheng@apple.com> | 2011-01-07 19:35:30 +0000 |
---|---|---|
committer | Evan Cheng <evan.cheng@apple.com> | 2011-01-07 19:35:30 +0000 |
commit | a048c83fe476e47bd549d70f423e1949a520f7af (patch) | |
tree | f77e6756fe59890824ef9aec93e0b43cf256aa0b /llvm/test/CodeGen/X86/memcpy.ll | |
parent | 2cd32a025139abd759ca51e970c8ad10310e2137 (diff) | |
Revert r122955. It seems that using movups to lower memcpy can cause a massive regression (even on Nehalem) in edge cases. I also did not see any real performance benefit.
llvm-svn: 123015
Diffstat (limited to 'llvm/test/CodeGen/X86/memcpy.ll')
-rw-r--r-- | llvm/test/CodeGen/X86/memcpy.ll | 42 |
1 file changed, 17 insertions(+), 25 deletions(-)
diff --git a/llvm/test/CodeGen/X86/memcpy.ll b/llvm/test/CodeGen/X86/memcpy.ll
index 4af93ad3682..72342cbacb4 100644
--- a/llvm/test/CodeGen/X86/memcpy.ll
+++ b/llvm/test/CodeGen/X86/memcpy.ll
@@ -37,34 +37,26 @@ entry:
   tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %A, i8* %B, i64 64, i32 1, i1 false)
   ret void
 ; LINUX: test3:
-; LINUX-NOT: memcpy
-; LINUX: movups
-; LINUX: movups
-; LINUX: movups
-; LINUX: movups
-; LINUX: movups
-; LINUX: movups
-; LINUX: movups
-; LINUX: movups
+; LINUX: memcpy
 ; DARWIN: test3:
 ; DARWIN-NOT: memcpy
-; DARWIN: movups
-; DARWIN: movups
-; DARWIN: movups
-; DARWIN: movups
-; DARWIN: movups
-; DARWIN: movups
-; DARWIN: movups
-; DARWIN: movups
-; DARWIN: movups
-; DARWIN: movups
-; DARWIN: movups
-; DARWIN: movups
-; DARWIN: movups
-; DARWIN: movups
-; DARWIN: movups
-; DARWIN: movups
+; DARWIN: movq
+; DARWIN: movq
+; DARWIN: movq
+; DARWIN: movq
+; DARWIN: movq
+; DARWIN: movq
+; DARWIN: movq
+; DARWIN: movq
+; DARWIN: movq
+; DARWIN: movq
+; DARWIN: movq
+; DARWIN: movq
+; DARWIN: movq
+; DARWIN: movq
+; DARWIN: movq
+; DARWIN: movq
 }
 
 ; Large constant memcpy's should be inlined when not optimizing for size.