path: root/llvm/test/Transforms/ScalarRepl/2008-06-22-LargeArray.ll
* Remove the ScalarReplAggregates pass
  David Majnemer, 2016-06-15 (1 file changed, -17/+0 lines)

  Nearly all the changes to this pass have been done while maintaining and
  updating other parts of LLVM. LLVM has had another pass, SROA, which has
  superseded ScalarReplAggregates for quite some time.

  Differential Revision: http://reviews.llvm.org/D21316
  llvm-svn: 272737
* Revert "Change memcpy/memset/memmove to have dest and source alignments."Pete Cooper2015-11-191-3/+3
| | | | | | | | | | This reverts commit r253511. This likely broke the bots in http://lab.llvm.org:8011/builders/clang-ppc64-elf-linux2/builds/20202 http://bb.pgr.jp/builders/clang-3stage-i686-linux/builds/3787 llvm-svn: 253543
* Change memcpy/memset/memmove to have dest and source alignments.
  Pete Cooper, 2015-11-18 (1 file changed, -3/+3 lines)

  Note: this was reviewed in (and more details are in)
  http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312083.html

  These intrinsics currently have an explicit alignment argument, which is
  required to be a constant integer. It represents the alignment of both the
  source and the dest, and so must be the minimum of the two. This change
  allows source and dest to each have their own alignment by using the
  alignment attribute on their arguments; the explicit alignment argument is
  removed.

  There are a few places in the code which need to be checked by an expert as
  to whether using only the src/dest alignment is safe. For those places, the
  code currently takes the minimum of the src/dest alignments, which matches
  the previous behaviour.

  For example, code which used to read:
    call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dest, i8* %src, i32 500, i32 8, i1 false)
  will now read:
    call void @llvm.memcpy.p0i8.p0i8.i32(i8* align 8 %dest, i8* align 8 %src, i32 500, i1 false)

  For out-of-tree owners, I was able to strip alignment from calls using sed
  by replacing:
    (call.*llvm\.memset.*)i32\ [0-9]*\,\ i1 false\)
  with:
    $1i1 false)
  and similarly for memmove and memcpy. I then added alignment back to the
  test cases which needed it.

  A similar commit will be made to clang, which actually has many differences
  in alignment, as IRBuilder can now generate different source/dest alignments
  on calls.

  In IRBuilder itself, a new argument was added. Instead of calling:
    CreateMemCpy(Dst, Src, getInt64(Size), DstAlign, /* isVolatile */ false)
  you now call:
    CreateMemCpy(Dst, Src, getInt64(Size), DstAlign, SrcAlign, /* isVolatile */ false)
  There is a temporary class (IntegerAlignment) which takes the source
  alignment and rejects implicit conversion from bool. This prevents
  isVolatile from being passed, via its default parameter, as the source
  alignment.

  Note: changes can now be made to codegen in the future. I didn't change
  anything here, but this change should enable better memcpy code sequences.

  Reviewed by Hal Finkel.

  llvm-svn: 253511
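A minimal sketch of the corresponding memset change, which the sed pattern
above targets; the destination, size, and alignment values here are assumed,
not taken from this test:

    ; before: one explicit alignment argument on the call
    call void @llvm.memset.p0i8.i64(i8* %dst, i8 0, i64 1024, i32 16, i1 false)
    ; after: the alignment becomes an attribute on the destination pointer
    call void @llvm.memset.p0i8.i64(i8* align 16 %dst, i8 0, i64 1024, i1 false)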
* Convert all tests using TCL-style quoting to use shell-style quoting.
  Chandler Carruth, 2012-07-02 (1 file changed, -1/+1 lines)

  This was done through the aid of a terrible Perl creation. I will not paste
  any of the horrors here. Suffice to say, it required multiple staged rounds
  of replacements, state carried between them, and a few
  nested-construct-parsing hacks that I'm not proud of. It happens, by luck,
  to be able to deal with all the TCL-quoting patterns in evidence in the LLVM
  test suite.

  If anyone is maintaining large out-of-tree test trees, feel free to poke me
  and I'll send you the steps I used to convert things, as well as answer any
  painful questions etc. IRC works best for this type of thing, I find.

  Once converted, switch the LLVM lit config to use ShTests, the same as
  Clang. Besides allowing large amounts of Python code to be deleted from
  'lit', this will also simplify the entire test suite and some of lit's
  architecture.

  Finally, the test suite runs 33% faster on Linux now. ;] For my
  16-hardware-thread machine (2x 4-core Xeon E5520): 36s -> 24s.

  llvm-svn: 159525
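A minimal sketch of the kind of one-line RUN change this conversion made; the
pass flag and grep pattern shown here are assumptions, not the test's actual
ones:

    ; before: TCL-style quoting of the grep pattern
    ; RUN: opt < %s -scalarrepl -S | grep {call.*llvm.memcpy}
    ; after: shell-style quoting
    ; RUN: opt < %s -scalarrepl -S | grep "call.*llvm.memcpy"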
* Rip out a ton of intrinsic modernization logic from AutoUpgrade.cpp, which
  is for pre-2.9 bitcode files
  Chris Lattner, 2011-06-18 (1 file changed, -8/+8 lines)

  We keep the x86 unaligned loads, movnt, crc32, and the target-independent
  prefetch change. As usual, updating the testsuite is a PITA.

  llvm-svn: 133337
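A minimal sketch of the sort of update that removing the auto-upgrade logic
forced on old tests; the exact intrinsic, pointer types, and widths used in
this test are assumptions:

    ; old pre-2.9 spelling that AutoUpgrade used to rewrite on load:
    declare void @llvm.memcpy.i64(i8*, i8*, i64, i32)
    ; modern overloaded spelling the test now has to use directly:
    declare void @llvm.memcpy.p0i8.p0i8.i64(i8*, i8*, i64, i32, i1)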
* Change tests from "opt %s" to "opt < %s"
  Dan Gohman, 2009-09-11 (1 file changed, -1/+1 lines)

  This way opt doesn't see the input filename, so it doesn't print that
  filename in the output, and grep lines in the tests don't unintentionally
  match strings in the input filename.

  llvm-svn: 81537
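A minimal sketch of the RUN-line change; the pass flag and grep pattern are
assumptions:

    ; before: opt is given the filename, which can leak into its output
    ; RUN: opt %s -scalarrepl -S | grep {call.*llvm.memcpy}
    ; after: the test is fed on stdin, so the filename never appears
    ; RUN: opt < %s -scalarrepl -S | grep {call.*llvm.memcpy}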
* Use opt -S instead of piping bitcode output through llvm-dis
  Dan Gohman, 2009-09-08 (1 file changed, -1/+1 lines)

  llvm-svn: 81257
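A minimal sketch of the RUN-line change; the pass flag and grep pattern are
assumptions:

    ; before: opt emits bitcode, which llvm-dis turns back into text
    ; RUN: opt %s -scalarrepl | llvm-dis | grep {call.*llvm.memcpy}
    ; after: opt emits textual IR directly
    ; RUN: opt %s -scalarrepl -S | grep {call.*llvm.memcpy}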
* Change these tests to feed the assembly files to opt directly, instead of
  using llvm-as, now that opt supports this
  Dan Gohman, 2009-09-08 (1 file changed, -1/+1 lines)

  llvm-svn: 81226
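A minimal sketch of the RUN-line change; the pass flag and grep pattern are
assumptions:

    ; before: the .ll file is assembled to bitcode before opt sees it
    ; RUN: llvm-as < %s | opt -scalarrepl | llvm-dis | grep {call.*llvm.memcpy}
    ; after: opt parses the assembly file itself
    ; RUN: opt %s -scalarrepl | llvm-dis | grep {call.*llvm.memcpy}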
* Enhance SROA to "promote to scalar" allocas which are memcpy/memmove'd into
  or out of
  Chris Lattner, 2009-03-08 (1 file changed, -6/+5 lines)

  This fixes a serious perf issue that Nate ran into.

  llvm-svn: 66366
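A minimal sketch of the pattern this enhancement handles; the names, types,
and sizes are illustrative only, and the intrinsic is written in its later
overloaded spelling rather than the 2009-era one:

    define i64 @read_through_alloca(i8* %src) {
      ; %tmp is only ever memcpy'd into and then loaded from, so SROA can now
      ; forward the copied value as a scalar and delete the alloca entirely.
      %tmp = alloca i64
      %tmp.i8 = bitcast i64* %tmp to i8*
      call void @llvm.memcpy.p0i8.p0i8.i64(i8* %tmp.i8, i8* %src, i64 8, i32 1, i1 false)
      %val = load i64* %tmp
      ret i64 %val
    }
    declare void @llvm.memcpy.p0i8.p0i8.i64(i8*, i8*, i64, i32, i1)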
* Fix PR2369 by making scalarrepl more careful about promoting structures
  Chris Lattner, 2008-06-22 (1 file changed, -0/+18 lines)

  Its default threshold is to promote things that are smaller than 128 bytes,
  which is sane. However, it is not sane to do this for things that turn into
  128 *registers*. Add a cap on the number of registers introduced, defaulting
  to 128/4 = 32.

  llvm-svn: 52611
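A minimal sketch of the kind of regression test this commit added; the real
2008-06-22-LargeArray.ll differs in its RUN line, names, and sizes:

    ; RUN: llvm-as < %s | opt -scalarrepl | llvm-dis | grep {call.*llvm.memcpy}
    ; The alloca below is under the 128-byte size threshold, but exploding it
    ; would introduce one scalar value per element, far beyond the new
    ; 32-register cap, so scalarrepl must leave the memcpy (and the alloca) alone.
    define void @test(i8* %src) {
      %buf = alloca [100 x i8]
      %buf.i8 = bitcast [100 x i8]* %buf to i8*
      call void @llvm.memcpy.i64(i8* %buf.i8, i8* %src, i64 100, i32 1)
      ret void
    }
    declare void @llvm.memcpy.i64(i8*, i8*, i64, i32)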