author    Evan Cheng <evan.cheng@apple.com>  2011-01-07 19:35:30 +0000
committer Evan Cheng <evan.cheng@apple.com>  2011-01-07 19:35:30 +0000
commit    a048c83fe476e47bd549d70f423e1949a520f7af
tree      f77e6756fe59890824ef9aec93e0b43cf256aa0b
parent    2cd32a025139abd759ca51e970c8ad10310e2137
Revert r122955. It seems using movups to lower memcpy can cause massive regression (even on Nehalem) in edge cases. I also didn't see any real performance benefit.
llvm-svn: 123015
Diffstat (limited to 'llvm/lib')
-rw-r--r--  llvm/lib/Target/X86/X86ISelLowering.cpp | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/llvm/lib/Target/X86/X86ISelLowering.cpp b/llvm/lib/Target/X86/X86ISelLowering.cpp
index ddec78bfff3..f871b5a7701 100644
--- a/llvm/lib/Target/X86/X86ISelLowering.cpp
+++ b/llvm/lib/Target/X86/X86ISelLowering.cpp
@@ -1063,8 +1063,12 @@ X86TargetLowering::getOptimalMemOpType(uint64_t Size,
   // linux. This is because the stack realignment code can't handle certain
   // cases like PR2962. This should be removed when PR2962 is fixed.
   const Function *F = MF.getFunction();
-  if (NonScalarIntSafe && !F->hasFnAttr(Attribute::NoImplicitFloat)) {
+  if (NonScalarIntSafe &&
+      !F->hasFnAttr(Attribute::NoImplicitFloat)) {
     if (Size >= 16 &&
+        (Subtarget->isUnalignedMemAccessFast() ||
+         ((DstAlign == 0 || DstAlign >= 16) &&
+          (SrcAlign == 0 || SrcAlign >= 16))) &&
         Subtarget->getStackAlignment() >= 16) {
       if (Subtarget->hasSSE2())
         return MVT::v4i32;