path: root/llvm/test/CodeGen/X86/2008-10-27-StackRealignment.ll
* X86: Enable SSE memory intrinsics even when stack alignment is less than
  16 bytes.
  Benjamin Kramer, 2012-11-14 (1 file changed, -22/+0 lines)

  The stack realignment code was fixed to work when there is stack
  realignment and a dynamic alloca is present, so this shouldn't cause
  correctness issues anymore.

  Note that this also enables generation of AVX instructions for memset
  under the assumptions:
  - Unaligned loads/stores are always fast on CPUs supporting AVX
  - AVX is not slower than SSE
  We may need some tweaked heuristics if one of those assumptions turns
  out not to be true.

  Effectively reverts r58317. Part of PR2962.

  llvm-svn: 167967
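  A hedged sketch of the behavior this enables (the RUN line, CHECK pattern,
  and function are illustrative assumptions, not the contents of this test):

      ; RUN: llc < %s -mtriple=i386-pc-linux-gnu -mattr=+sse2 | FileCheck %s
      declare void @llvm.memset.p0i8.i32(i8* nocapture, i8, i32, i32, i1)

      define void @zero_slot() {
      entry:
        %buf = alloca [64 x i8]
        %p = getelementptr inbounds [64 x i8], [64 x i8]* %buf, i32 0, i32 0
        ; With this change the backend may zero the slot with XMM stores even
        ; though the 32-bit Linux stack is not guaranteed 16-byte aligned.
        ; CHECK: xmm
        call void @llvm.memset.p0i8.i32(i8* %p, i8 0, i32 64, i32 4, i1 false)
        ret void
      }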
* Rip out a ton of intrinsic modernization logic from AutoUpgrade.cpp, which
  is for pre-2.9 bitcode files.
  Chris Lattner, 2011-06-18 (1 file changed, -4/+4 lines)

  We keep x86 unaligned loads, movnt, crc32, and the target-independent
  prefetch change. As usual, updating the testsuite is a PITA.

  llvm-svn: 133337
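  The "target-independent prefetch change" likely refers to llvm.prefetch
  gaining a fourth operand selecting the instruction vs. data cache; a sketch
  of the upgrade AutoUpgrade keeps performing, with the signature assumed
  from that era:

      ; Pre-change bitcode used the 3-operand form:
      ;   call void @llvm.prefetch(i8* %p, i32 0, i32 3)
      ; AutoUpgrade rewrites such calls to the 4-operand form, where the
      ; last operand is the cache type (0 = instruction, 1 = data):
      declare void @llvm.prefetch(i8* nocapture, i32, i32, i32)

      define void @warm(i8* %p) {
        call void @llvm.prefetch(i8* %p, i32 0, i32 3, i32 1)
        ret void
      }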
* Experiment with changing the default 32-bit Linux stack alignment to
  16 bytes for PR8969. Update all testcases accordingly.
  Eric Christopher, 2011-01-13 (1 file changed, -2/+2 lines)

  llvm-svn: 123367
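  For context, a minimal IR sketch (function name assumed) of why the default
  matters: with a 4-byte default stack alignment this function needs dynamic
  stack realignment to satisfy the 16-byte slot, whereas a 16-byte default
  makes the alignment free:

      define void @needs_xmm_slot() {
      entry:
        ; A 16-byte-aligned stack slot, e.g. for an SSE vector.
        %slot = alloca <4 x float>, align 16
        store <4 x float> zeroinitializer, <4 x float>* %slot, align 16
        ret void
      }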
* Eliminate more uses of llvm-as and llvm-dis.
  Dan Gohman, 2009-09-08 (1 file changed, -2/+2 lines)

  llvm-svn: 81290
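  A sketch of what this kind of RUN-line cleanup typically looked like (the
  exact lines in this test are assumptions): llc reads .ll files directly, so
  the llvm-as pipe is redundant.

      ; Before:
      ;   RUN: llvm-as < %s | llc -mtriple=i386-pc-linux-gnu | not grep movaps
      ; After:
      ;   RUN: llc < %s -mtriple=i386-pc-linux-gnu | not grep movaps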
* Fix a nasty miscompilation of 176.gcc on linux/x86 where we synthesized a
  memset using 16-byte XMM stores, but where the stack realignment code
  didn't work.
  Chris Lattner, 2008-10-28 (1 file changed, -0/+22 lines)

  Until it does (PR2962), disable use of xmm regs in memcpy and memset
  formation for linux and other targets with insufficiently aligned stacks.
  This is part of PR2888.

  llvm-svn: 58317
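  A hedged reconstruction of the failure mode (sizes and names illustrative):
  a local buffer on a stack that is only 4-byte aligned gets zeroed with
  16-byte movaps stores on the assumption that stack realignment will provide
  the alignment; since the realignment code didn't handle this case, the
  aligned stores could fault.

      declare void @llvm.memset.p0i8.i32(i8* nocapture, i8, i32, i32, i1)

      define void @local_zero() {
      entry:
        ; Old 32-bit Linux ABIs only guarantee 4-byte stack alignment here.
        %buf = alloca [64 x i8]
        %p = getelementptr inbounds [64 x i8], [64 x i8]* %buf, i32 0, i32 0
        ; Lowering this to movaps is only safe if the frame is actually
        ; realigned to 16 bytes; until PR2962 that realignment was broken.
        call void @llvm.memset.p0i8.i32(i8* %p, i8 0, i32 64, i32 4, i1 false)
        ret void
      }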