path: root/llvm/test/CodeGen/ARM/2011-10-26-memset-with-neon.ll
Commit message | Author | Age | Files | Lines
* Revert "Change memcpy/memset/memmove to have dest and source alignments."Pete Cooper2015-11-191-2/+2
    This reverts commit r253511. This likely broke the bots in:
    http://lab.llvm.org:8011/builders/clang-ppc64-elf-linux2/builds/20202
    http://bb.pgr.jp/builders/clang-3stage-i686-linux/builds/3787

    llvm-svn: 253543
* Change memcpy/memset/memmove to have dest and source alignments. (Pete Cooper, 2015-11-18, 1 file, -2/+2)
    Note: this was reviewed, and more details are in,
    http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312083.html

    These intrinsics currently have an explicit alignment argument, which is
    required to be a constant integer. It represents the alignment of both the
    source and the dest, and so must be the minimum of the two. This change
    allows the source and dest to each have their own alignment by using the
    alignment attribute on their arguments; the alignment argument itself is
    removed.

    There are a few places where the code needs to be checked by an expert as
    to whether using only src/dest alignment is safe. Those places currently
    take the minimum of the src/dest alignments, which matches the previous
    behaviour.

    For example, code which used to read:

      call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dest, i8* %src, i32 500, i32 8, i1 false)

    will now read:

      call void @llvm.memcpy.p0i8.p0i8.i32(i8* align 8 %dest, i8* align 8 %src, i32 500, i1 false)

    For out-of-tree owners, I was able to strip the alignment from calls using
    sed by replacing:

      (call.*llvm\.memset.*)i32\ [0-9]*\,\ i1 false\)

    with:

      $1i1 false)

    and similarly for memmove and memcpy. I then added alignment back to the
    test cases which needed it.

    A similar commit will be made to clang, which actually has many differences
    in alignment now that IRBuilder can generate different source/dest
    alignments on calls.

    In IRBuilder itself, a new argument was added. Instead of calling:

      CreateMemCpy(Dst, Src, getInt64(Size), DstAlign, /* isVolatile */ false)

    you now call:

      CreateMemCpy(Dst, Src, getInt64(Size), DstAlign, SrcAlign, /* isVolatile */ false)

    There is a temporary class (IntegerAlignment) which takes the source
    alignment and rejects implicit conversion from bool. This prevents
    isVolatile from passing its default parameter to the source alignment.

    Changes can now be made to codegen as a follow-up. Nothing is changed
    there here, but this change should enable better memcpy code sequences.

    Reviewed by Hal Finkel.

    llvm-svn: 253511
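    The same rewrite applies to memset. A minimal sketch, assuming a
    hypothetical function and constants and following the memcpy pattern
    shown in the message above:

      ; before: alignment is the trailing i32 argument
      declare void @llvm.memset.p0i8.i64(i8* nocapture, i8, i64, i32, i1)
      define void @zero(i8* %dest) {
        call void @llvm.memset.p0i8.i64(i8* %dest, i8 0, i64 64, i32 16, i1 false)
        ret void
      }

      ; after: the alignment argument is gone, and the alignment becomes
      ; an attribute on the pointer operand
      declare void @llvm.memset.p0i8.i64(i8* nocapture, i8, i64, i1)
      define void @zero(i8* %dest) {
        call void @llvm.memset.p0i8.i64(i8* align 16 %dest, i8 0, i64 64, i1 false)
        ret void
      }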
* ARM: fixup more tests to specify the target more explicitly (Saleem Abdulrasool, 2014-04-03, 1 file, -1/+1)
    This changes the tests that were targeting ARM EABI to explicitly specify
    the environment rather than relying on the default. This breaks with the
    new Windows on ARM support when running the tests on Windows, where the
    default environment is no longer EABI.

    Take the opportunity to avoid a pointless redirect (which helps when
    providing a command-line invocation that can be copied and pasted for
    debugging) and to remove a few greps in favour of FileCheck.

    llvm-svn: 205541
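    An illustrative RUN-line before/after for a test of this kind (the exact
    flags in this particular diff may differ; this only sketches the shape of
    the change described above):

      ; before: the environment is implied by the default triple, and the
      ; output is piped through grep
      ; RUN: llc -march=arm -mattr=+neon < %s | grep vst1.64
      ; after: the triple spells out the EABI environment, the redirect is
      ; gone, and FileCheck replaces grep
      ; RUN: llc -mtriple=armv7-eabi -mattr=+neon %s -o - | FileCheck %s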
* CriticalAntiDepBreaker is no longer needed for armv7 scheduling. (Andrew Trick, 2013-09-25, 1 file, -2/+2)
    This is being disabled because it is no longer needed for performance. It
    is only used by the postRA scheduler, which is also planned for removal,
    and it is implemented with an outdated view of register liveness: it
    considers aliases instead of register units, assumes valid kill flags, and
    assumes implicit uses on partial register defs. Kill flags and implicit
    operands are error prone and impossible to verify. We should gradually
    eliminate dependence on them in the postRA phases.

    Targets that still benefit from this should move to the MI scheduler. If
    that doesn't solve the problem, then we should add a hook to regalloc to
    optimize reload placement.

    llvm-svn: 191348
* Some enhancements for memcpy / memset inline expansion. (Evan Cheng, 2012-12-10, 1 file, -8/+0)
    1. Teach it to use overlapping unaligned load / store to copy / set the
       trailing bytes. e.g. On x86, use two pairs of movups / movaps for
       17 - 31 byte copies.
    2. Use f64 for memcpy / memset on targets where i64 is not legal but f64
       is. e.g. x86 and ARM.
    3. When copying from a constant string with memcpy, do *not* replace the
       load with a constant if it's not possible to materialize an integer
       immediate with a single instruction (required a new target hook:
       TLI.isIntImmLegal()).
    4. Use unaligned load / stores more aggressively if target hooks indicate
       they are "fast".
    5. Update ARM target hooks to use unaligned load / stores. e.g. vld1.8 /
       vst1.8. Also increase the threshold to something reasonable (8 for
       memset, 4 pairs for memcpy).

    This significantly improves Dhrystone, up to 50% on ARM iOS devices.

    rdar://12760078

    llvm-svn: 169791
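    As a sketch of point 1, in the style of this test file (names and sizes
    hypothetical): a 7-byte copy can be expanded as two overlapping 4-byte
    accesses (bytes 0-3, then bytes 3-6) instead of 4-byte + 2-byte + 1-byte
    operations:

      declare void @llvm.memcpy.p0i8.p0i8.i32(i8* nocapture, i8* nocapture, i32, i32, i1)
      define void @copy7(i8* %dst, i8* %src) {
        ; the 3 trailing bytes are covered by a second, overlapping
        ; unaligned 4-byte access rather than separate 2- and 1-byte ops
        call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dst, i8* %src, i32 7, i32 1, i1 false)
        ret void
      }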
* Use vld1 / vst1 for unaligned v2f64 load / store. e.g. Use vld1.16 for 2-byte aligned address. (Evan Cheng, 2012-09-18, 1 file, -2/+2)
    Based on patch by David Peixotto. Also use vld1.64 / vst1.64 with 128-bit
    alignment to take advantage of alignment hints.

    rdar://12090772, rdar://12238782

    llvm-svn: 164089
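    A sketch of what this means for a v2f64 load, written as an .ll test in
    the era's typed-pointer syntax (function names and CHECK lines are
    illustrative, not the test's actual output):

      define <2 x double> @unaligned(<2 x double>* %p) {
      ; CHECK: vld1.16
        %v = load <2 x double>* %p, align 2
        ret <2 x double> %v
      }
      define <2 x double> @aligned16(<2 x double>* %p) {
      ; CHECK: vld1.64 {{.*}}:128]
        %v = load <2 x double>* %p, align 16
        ret <2 x double> %v
      }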
* ARM VLDR/VSTR instructions don't need a size suffix. (Jim Grosbach, 2011-11-14, 1 file, -1/+1)
    Canonicalize on the non-suffixed form, but continue to accept assembly
    that has any correctly sized type suffix.

    llvm-svn: 144583
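    For example (a sketch: both spellings assemble to the same instruction;
    the first is the canonical form, the second remains accepted on input):

      vldr    d1, [r2]      @ canonical, non-suffixed form
      vldr.64 d1, [r2]      @ accepted: .64 is the correctly sized suffix for d1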
* Try to lower memset/memcpy/memmove to vector instructions on ARM where the alignment permits. (Lang Hames, 2011-11-02, 1 file, -0/+20)
    llvm-svn: 143582
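    A hypothetical example of the kind of IR this enables lowering for (the
    2011-era intrinsic still carries the explicit alignment argument):

      declare void @llvm.memset.p0i8.i64(i8* nocapture, i8, i64, i32, i1)
      define void @zero32(i8* %buf) {
        ; 32 zero bytes at a 16-byte aligned address can be emitted as NEON
        ; stores (e.g. vmov.i32 to zero a q register, then two vst1.64s)
        ; rather than a call to memset
        call void @llvm.memset.p0i8.i64(i8* %buf, i8 0, i64 32, i32 16, i1 false)
        ret void
      }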