path: root/llvm/test/Transforms/MemCpyOpt
* Add address space mangling to lifetime intrinsics (Matt Arsenault, 2017-04-10; 4 files, -21/+21)
  In preparation for allowing allocas to have non-0 addrspace. llvm-svn: 299876
* [MemCpyOpt] Only replace memcpy with bitcast if address spaces match (Matt Arsenault, 2017-04-10; 1 file, -0/+13)
  Patch by James Price. llvm-svn: 299866
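  The guard exists because bitcast cannot change address spaces. A minimal sketch of a copy the transform must now skip (hypothetical values; casts and declarations elided, five-argument intrinsic signature of that era):

    ; %src lives in addrspace(1), %dst in addrspace(0). Rewriting the copy
    ; by bitcasting %src to i8* would be invalid IR, so MemCpyOpt bails.
    call void @llvm.memcpy.p0i8.p1i8.i64(i8* %dst, i8 addrspace(1)* %src, i64 16, i32 1, i1 false)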
* MemCpyOptimizer: don't create new addrspace casts (Fiona Glaser, 2017-03-14; 1 file, -0/+15)
  This isn't safe on all targets, and since we don't have a way to know it's safe, avoid doing it for now. llvm-svn: 297788
* [MemCpyOpt] Don't sink LoadInst below possible clobber. (Bryant Wong, 2016-12-27; 1 file, -0/+36)
  Differential Revision: https://reviews.llvm.org/D26811 llvm-svn: 290611
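  The hazard, as a minimal sketch (hypothetical names; @clobber stands for any call that may write %src):

    %v = load i32, i32* %src
    call void @clobber(i32* %src)   ; may overwrite %src
    store i32 %v, i32* %dst
    ; Sinking the load down next to the store (to pair them into a memcpy)
    ; would read the post-clobber value of %src - a miscompile.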
* [MemCpyOpt] Don't emit IR in an unspecified order (Benjamin Kramer, 2016-11-07; 1 file, -28/+28)
  Argument evaluation order is one of the edge cases where Clang differs from GCC, yielding different IR depending on which compiler LLVM was built with. Make the order deterministic and tune the test to actually verify the order instead of trying to hide it. llvm-svn: 286126
* [MemCpy] Add comments for r279769 (Tim Shen, 2016-08-25; 1 file, -0/+1)
  Differential Revision: https://reviews.llvm.org/D23846 llvm-svn: 279778
* [MemCpy] Check for alias in performMemCpyToMemSetOptzn, instead of the identity of two operands (Tim Shen, 2016-08-25; 1 file, -0/+38)
  Summary: This fixes PR29105. The reason is that lifetime markers create new pointers that alias the original ones, but before this patch aliasing was not checked in performMemCpyToMemSetOptzn. Subscribers: llvm-commits Differential Revision: https://reviews.llvm.org/D23846 llvm-svn: 279769
* [AliasAnalysis] Treat invariant.start as read-memory (Anna Thomas, 2016-08-09; 1 file, -0/+49)
  Summary: We teach alias analysis that invariant.start is readonly. This helps with GVN and memcpy optimizations that currently treat invariant.start as a clobber. We need to treat this as readonly, so that DSE does not incorrectly remove stores prior to the invariant.start. Reviewers: sanjoy, reames, majnemer, dberlin Subscribers: llvm-commits Differential Revision: https://reviews.llvm.org/D23214 llvm-svn: 278138
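  A minimal sketch of the intended model (hypothetical names; the unmangled intrinsic signature is the one in use at that time):

    declare {}* @llvm.invariant.start(i64, i8* nocapture)

    store i8 42, i8* %p
    %inv = call {}* @llvm.invariant.start(i64 1, i8* %p)
    %v = load i8, i8* %p
    ; Modeled as readonly, invariant.start no longer clobbers %p, so GVN can
    ; forward 42 to %v; and since it still reads %p, DSE keeps the store.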
* [PM] Port MemCpyOpt to the new PM. (Sean Silva, 2016-06-14; 1 file, -0/+1)
  The need for all these Lookup* functions is just because of calls to getAnalysis inside methods (i.e. not at the top level of runOnFunction). They should be straightforward to clean up when the old PM is gone. llvm-svn: 272615
* [MemCpyOpt] Do not exchange llvm.lifetime.start and llvm.memcpy (Tim Shen, 2016-06-08; 1 file, -0/+25)
  Reviewers: iteratee Subscribers: llvm-commits Differential Revision: http://reviews.llvm.org/D21087 llvm-svn: 272192
* [MemCpyOpt] Don't perform callslot optimization across may-throw calls (David Majnemer, 2016-05-26; 2 files, -1/+35)
  An exception could prevent a store from occurring but MemCpyOpt's callslot optimization would fire anyway, causing the store to occur. This fixes PR27849. llvm-svn: 270892
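  The hazard, as a minimal sketch (hypothetical names; casts and declarations elided, five-argument intrinsic signature of that era):

    %tmp = alloca [16 x i8]
    call void @fill(i8* %tmpc)      ; writes the 16-byte temporary
    call void @may_throw()          ; may unwind before the copy happens
    call void @llvm.memcpy.p0i8.p0i8.i64(i8* %dstc, i8* %tmpc, i64 16, i32 1, i1 false)
    ; Call slot optimization would have @fill write %dst directly; if
    ; @may_throw unwinds, %dst was never supposed to be written, yet the
    ; rewritten code has already modified it.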
* [MemCpyOpt] Use MaxIntSize in bytes instead of bits (Jun Bum Lim, 2016-05-13; 1 file, -0/+20)
  Summary: This change fixes the bug in isProfitableToUseMemset() where MaxIntSize should be in bytes, not bits. Reviewers: arsenm, joker.eph, mcrosier Subscribers: mcrosier, llvm-commits Differential Revision: http://reviews.llvm.org/D20176 llvm-svn: 269433
* Revert "MemCpyOpt: combine local load/store sequences into memcpy."Tim Northover2016-05-101-353/+0
| | | | | | | This reverts commit r269125. It was in my tree when I ran "git svn dcommit". It's really still under review. llvm-svn: 269127
* MemCpyOpt: combine local load/store sequences into memcpy. (Tim Northover, 2016-05-10; 1 file, -0/+353)
  Sort of the BB-local equivalent to idiom-recognizer: if we have a basic block that really implements a memcpy operation, backends can benefit from seeing this. llvm-svn: 269125
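  Although reverted the same day (see the revert entry above), the idea is easy to sketch (hypothetical names; %src1/%dst1 stand for GEPs 8 bytes into %src0/%dst0, i8* casts elided):

    %v0 = load i64, i64* %src0
    store i64 %v0, i64* %dst0
    %v1 = load i64, i64* %src1
    store i64 %v1, i64* %dst1
    ; => recognized as one 16-byte copy:
    call void @llvm.memcpy.p0i8.p0i8.i64(i8* %dstc, i8* %srcc, i64 16, i32 8, i1 false)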
* Improve load to store => memcpy (Amaury Sechet, 2016-03-14; 1 file, -7/+29)
  Summary: This now tries to reorder instructions in order to help create the optimizable pattern. Reviewers: craig.topper, spatel, dexonsmith, Prazek, chandlerc, joker.eph, majnemer Differential Revision: http://reviews.llvm.org/D16523 llvm-svn: 263503
* Fix PR26051: Memcpy optimization should introduce a call to memcpy before the store destination position (Mehdi Amini, 2016-01-06; 1 file, -0/+14)
  This is a conservative fix; I expect Amaury to relax this. Follow-up for r256923. From: Mehdi Amini <mehdi.amini@apple.com> llvm-svn: 256999
* Promote aggregate store to memset when possible (Amaury Sechet, 2016-01-06; 1 file, -4/+6)
  Summary: As per title. This will allow the optimizer to pick up on it. Reviewers: craig.topper, spatel, dexonsmith, Prazek, chandlerc, joker.eph, majnemer Subscribers: llvm-commits Differential Revision: http://reviews.llvm.org/D15923 llvm-svn: 256969
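  A sketch of the promotion (cast to i8* elided; five-argument memset signature of that era):

    store { i32, i32 } zeroinitializer, { i32, i32 }* %p
    ; => (conceptually)
    call void @llvm.memset.p0i8.i64(i8* %pc, i8 0, i64 8, i32 4, i1 false)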
* Improve load/store to memcpy for aggregate (Amaury Sechet, 2016-01-06; 1 file, -0/+25)
  Summary: It turns out that if we don't try to do it at the store location, we can do it before any operation that aliases the load, as long as no operation aliases the store. Reviewers: craig.topper, spatel, dexonsmith, Prazek, chandlerc, joker.eph Subscribers: llvm-commits Differential Revision: http://reviews.llvm.org/D15903 llvm-svn: 256923
* Implement load to store => memcpy in MemCpyOpt for aggregates (Amaury Sechet, 2016-01-05; 1 file, -0/+47)
  Summary: Most of the toolchain is able to optimize scalar and memcpy-like operations efficiently, while it isn't that good with aggregates. In order to improve the support of aggregates, we try to change aggregate manipulation into either scalar or memcpy-like operations whenever possible without losing information. This is one such opportunity. Reviewers: craig.topper, spatel, dexonsmith, Prazek, chandlerc Subscribers: llvm-commits Differential Revision: http://reviews.llvm.org/D15894 llvm-svn: 256868
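  A sketch of the transform (hypothetical names; casts elided, five-argument memcpy signature of that era):

    %v = load { i64, i64 }, { i64, i64 }* %src
    store { i64, i64 } %v, { i64, i64 }* %dst
    ; => (conceptually)
    call void @llvm.memcpy.p0i8.p0i8.i64(i8* %dstc, i8* %srcc, i64 16, i32 8, i1 false)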
* Revert "Change memcpy/memset/memmove to have dest and source alignments."Pete Cooper2015-11-1917-182/+182
| | | | | | | | | | This reverts commit r253511. This likely broke the bots in http://lab.llvm.org:8011/builders/clang-ppc64-elf-linux2/builds/20202 http://bb.pgr.jp/builders/clang-3stage-i686-linux/builds/3787 llvm-svn: 253543
* Change memcpy/memset/memmove to have dest and source alignments. (Pete Cooper, 2015-11-18; 17 files, -182/+182)
  Note, this was reviewed (and more details are in) http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312083.html

  These intrinsics currently have an explicit alignment argument which is required to be a constant integer. It represents the alignment of the source and dest, and so must be the minimum of those. This change allows source and dest to each have their own alignments by using the alignment attribute on their arguments. The alignment argument itself is removed.

  There are a few places in the code where the code needs to be checked by an expert as to whether using only src/dest alignment is safe. Those places currently take the minimum of the src/dest alignments, which matches the current behaviour. For example, code which used to read:

    call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dest, i8* %src, i32 500, i32 8, i1 false)

  will now read:

    call void @llvm.memcpy.p0i8.p0i8.i32(i8* align 8 %dest, i8* align 8 %src, i32 500, i1 false)

  For out-of-tree owners, I was able to strip alignment from calls using sed by replacing:

    (call.*llvm\.memset.*)i32\ [0-9]*\,\ i1 false\)

  with:

    $1i1 false)

  and similarly for memmove and memcpy. I then added back in alignment to test cases which needed it. A similar commit will be made to clang, which actually has many differences in alignment, as now IRBuilder can generate different source/dest alignments on calls.

  In IRBuilder itself, a new argument was added. Instead of calling:

    CreateMemCpy(Dst, Src, getInt64(Size), DstAlign, /* isVolatile */ false)

  you now call:

    CreateMemCpy(Dst, Src, getInt64(Size), DstAlign, SrcAlign, /* isVolatile */ false)

  There is a temporary class (IntegerAlignment) which takes the source alignment and rejects implicit conversion from bool. This is to prevent isVolatile here from passing its default parameter to the source alignment.

  Note, changes in future can now be made to codegen. I didn't change anything here, but this change should enable better memcpy code sequences. Reviewed by Hal Finkel. llvm-svn: 253511
* Sort the enums in Attributes.h in case-insensitive alphabetical order. (Akira Hatanaka, 2015-11-11; 1 file, -1/+1)
  Sort the enums in preparation for moving the attributes to a TableGen file. rdar://problem/19836465 llvm-svn: 252692
* [MemCpyOpt] Fix wrong merging of adjacent nontemporal stores into memset calls. (Andrea Di Biagio, 2015-10-09; 1 file, -0/+49)
  Pass MemCpyOpt doesn't check if a store instruction is nontemporal. As a consequence, adjacent nontemporal stores are always merged into a memset call. Example:

    ;;;
    define void @foo(<4 x float>* nocapture %p) {
    entry:
      store <4 x float> zeroinitializer, <4 x float>* %p, align 16, !nontemporal !0
      %p1 = getelementptr inbounds <4 x float>, <4 x float>* %p, i64 1
      store <4 x float> zeroinitializer, <4 x float>* %p1, align 16, !nontemporal !0
      ret void
    }
    !0 = !{i32 1}
    ;;;

  In this example, the two nontemporal stores are combined into a memset of zero, which does not preserve the nontemporal hint. Later on the backend (tested on an x86-64 corei7) expands that memset call into a sequence of two normal 16-byte-aligned vector stores:

    opt -memcpyopt example.ll -S -o - | llc -mcpu=corei7 -o -

  Before:
    xorps %xmm0, %xmm0
    movaps %xmm0, 16(%rdi)
    movaps %xmm0, (%rdi)

  With this patch, we no longer merge nontemporal stores into calls to memset. In this example, llc correctly expands the two stores into two movntps:
    xorps %xmm0, %xmm0
    movntps %xmm0, 16(%rdi)
    movntps %xmm0, (%rdi)

  In theory, we could extend the usage of !nontemporal metadata to memcpy/memset calls. However, a change like that would only have the effect of forcing the backend to expand !nontemporal memsets back to sequences of store instructions. A memset library call would not have exactly the same semantics as a builtin !nontemporal memset call, so SelectionDAG would have to conservatively expand it back to a sequence of !nontemporal stores (effectively undoing the merging). Differential Revision: http://reviews.llvm.org/D13519 llvm-svn: 249820
* Emit argmemonly attribute for intrinsics. (Igor Laevsky, 2015-08-13; 1 file, -2/+3)
  Differential Revision: http://reviews.llvm.org/D11352 llvm-svn: 244920
* [MemCpyOpt] Do move the memset, but look at its dest's dependencies. (Ahmed Bougacha, 2015-05-21; 1 file, -1/+21)
  In effect a partial revert of r237858, which was a dumb shortcut. Looking at the dependencies of the destination should be the proper fix: if the new memset would depend on anything other than itself, the transformation isn't correct. llvm-svn: 237874
* [MemCpyOpt] Don't move the memset when optimizing memset+memcpy. (Ahmed Bougacha, 2015-05-20; 1 file, -6/+6)
  Fixes PR23599, another miscompile introduced by r235232: when there is another dependency on the destination of the created memset (i.e., the part of the original destination that the memcpy doesn't depend on) between the memcpy and the original memset, we would insert the created memset after the memcpy, and thus after the other dependency. Instead, insert the created memset right after the old one. llvm-svn: 237858
* [MemCpyOpt] Turn memcpy from just-memset'd source into memset. (Ahmed Bougacha, 2015-05-16; 3 files, -2/+104)
  There's no point in copying around constants, so, when all else fails, we can still transform memcpy of memset into two independent memsets. To quote the example, we can turn:

    memset(dst1, c, dst1_size);
    memcpy(dst2, dst1, dst2_size);

  into:

    memset(dst1, c, dst1_size);
    memset(dst2, c, dst2_size);

  when dst2_size <= dst1_size. Like r235232 for copy constructors, this can occur in move constructors. Differential Revision: http://reviews.llvm.org/D9682 llvm-svn: 237506
* Remove dead code in testcase. NFC. (Ahmed Bougacha, 2015-05-16; 1 file, -4/+0)
  llvm-svn: 237501
* [MemCpyOpt] Look at any dependency - not just source - for memset+memcpy. (Ahmed Bougacha, 2015-05-11; 1 file, -0/+18)
  This fixes another miscompile introduced by r235232: when there was a dependency on the memcpy destination other than the memset, we would ignore it, because we only looked at the source dependency. It was a mistake to use SrcDepInfo. Instead, just use DepInfo. llvm-svn: 237066
* [MemCpyOpt] Use the raw i8* dest when optimizing memset+memcpy. (Ahmed Bougacha, 2015-04-21; 1 file, -0/+16)
  MemIntrinsic::getDest() looks through pointer casts, and using it directly when building the new GEP+memset results in stuff like:

    %0 = getelementptr i64* %p, i32 16
    %1 = bitcast i64* %0 to i8*
    call ..memset(i8* %1, ...)

  instead of the correct:

    %0 = bitcast i64* %p to i8*
    %1 = getelementptr i8* %0, i32 16
    call ..memset(i8* %1, ...)

  Instead, use getRawDest, which just gives you the i8* value. While there, use the memcpy's dest, as it's live anyway. In most cases, when the optimization triggers, the memset and memcpy sizes are the same, so the built memset is 0-sized and eliminated. The problem occurs when they're different. Fixes a regression caused by r235232: PR23300. llvm-svn: 235419
* [MemCpyOpt] Don't force i64 when promoting memset/memcpy sizes. (Ahmed Bougacha, 2015-04-18; 1 file, -0/+32)
  Harden r235258 to support any integer bitwidth. The quick glance at the reference made me think only i32 and i64 were valid types, but they're not special, so any overload is legal. Thanks to David Majnemer for noticing! llvm-svn: 235261
* [MemCpyOpt] Promote both memset/memcpy sizes if differently typed. (Ahmed Bougacha, 2015-04-18; 1 file, -3/+35)
  Followup to r235232, which caused PR23278. We can't assume the memset and memcpy sizes have the same type, as nothing in the language reference prevents that. Instead, zext both to i64 if they disagree. While there, robustify tests by using i8 %c rather than i8 0 for the memset character. llvm-svn: 235258
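  The mismatch being handled, sketched with the explicit-alignment signatures of that era (hypothetical names):

    call void @llvm.memset.p0i8.i32(i8* %dst, i8 %c, i32 %dst_size, i32 1, i1 false)
    call void @llvm.memcpy.p0i8.p0i8.i64(i8* %dst, i8* %src, i64 %src_size, i32 1, i1 false)
    ; Computing the leftover tail dst_size - src_size mixes i32 and i64;
    ; after this patch both sizes are zext'd to i64 before subtracting.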
* [MemCpyOpt] Optimize double-storing by memset+memcpy. (Ahmed Bougacha, 2015-04-17; 1 file, -0/+54)
  A common idiom in some code is to do the following:

    memset(dst, 0, dst_size);
    memcpy(dst, src, src_size);

  Some of the memset is redundant; instead, we can do:

    memcpy(dst, src, src_size);
    memset(dst + src_size, 0, dst_size <= src_size ? 0 : dst_size - src_size);

  Original patch by: Joel Jones. Differential Revision: http://reviews.llvm.org/D498 llvm-svn: 235232
* [opaque pointer type] Add textual IR support for explicit type parameter to the call instruction (David Blaikie, 2015-04-16; 1 file, -1/+1)
  See r230786 and r230794 for similar changes to gep and load respectively.

  Call is a bit different because it often doesn't have a single explicit type - usually the type is deduced from the arguments, and just the return type is explicit. In those cases there's no need to change the IR. When that's not the case, the IR usually contains the pointer type of the first operand - but since typed pointers are going away, that representation is insufficient, so I'm just stripping the "pointerness" of the explicit type away.

  This does make the IR a bit weird - it /sort of/ reads like the type of the first operand: "call void () %x(" but %x is actually of type "void ()*" and will eventually be just of type "ptr". But this seems not too bad and I don't think it would benefit from repeating the type ("void (), void () * %x(" and then eventually "void (), ptr %x(") as has been done with gep and load.

  This also has a side benefit: since the explicit type is no longer a pointer, there's no ambiguity between an explicit type and a function that returns a function pointer. Previously this case needed an explicit type (eg: a function returning a void() function was written as "call void () () * @x(" rather than "call void () * @x(" because of the ambiguity between a function returning a pointer to a void() function and a function returning void). No ambiguity means even function pointer return types can just be written alone, without writing the whole function's type. This leaves /only/ the varargs case where the explicit type is required.

  Given the special type syntax in call instructions, the regex-fu used for migration was a bit more involved in its own unique way (as every one of these is), so here it is. Use it in conjunction with the apply.sh script and associated find/xargs commands I've provided in r230786 to migrate your out-of-tree tests. Do let me know if any of this doesn't cover your cases & we can iterate on a more general script/regexes to help others with out-of-tree tests.

  About 9 test cases couldn't be automatically migrated - half of those were functions returning function pointers, where I just had to manually delete the function argument types now that we didn't need an explicit function type there. The other half were typedefs of function types used in calls - just had to manually drop the * from those.

    import fileinput
    import sys
    import re
    pat = re.compile(r'((?:=|:|^|\s)call\s(?:[^@]*?))(\s*$|\s*(?:(?:\[\[[a-zA-Z0-9_]+\]\]|[@%](?:(")?[\\\?@a-zA-Z0-9_.]*?(?(3)"|)|{{.*}}))(?:\(|$)|undef|inttoptr|bitcast|null|asm).*$)')
    addrspace_end = re.compile(r"addrspace\(\d+\)\s*\*$")
    func_end = re.compile("(?:void.*|\)\s*)\*$")
    def conv(match, line):
        if not match or re.search(addrspace_end, match.group(1)) or not re.search(func_end, match.group(1)):
            return line
        return line[:match.start()] + match.group(1)[:match.group(1).rfind('*')].rstrip() + match.group(2) + line[match.end():]
    for line in sys.stdin:
        sys.stdout.write(conv(re.search(pat, line), line))

  llvm-svn: 235145
* [opaque pointer type] Add textual IR support for explicit type parameter to the gep operator (David Blaikie, 2015-03-13; 4 files, -21/+21)
  Similar to gep (r230786) and load (r230794) changes. A similar migration script can be used to update test cases, which successfully migrated all of LLVM and Polly, but about 4 test cases needed manual changes in Clang. (This script will read the contents of stdin and massage it into stdout - wrap it in the 'apply.sh' script shown in previous commits + xargs to apply it over a large set of test cases.)

    import fileinput
    import sys
    import re
    rep = re.compile(r"(getelementptr(?:\s+inbounds)?\s*\()((<\d*\s+x\s+)?([^@]*?)(|\s*addrspace\(\d+\))\s*\*(?(3)>)\s*)(?=$|%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|zeroinitializer|<|\[\[[a-zA-Z]|\{\{)", re.MULTILINE | re.DOTALL)
    def conv(match):
        line = match.group(1)
        line += match.group(4)
        line += ", "
        line += match.group(2)
        return line
    line = sys.stdin.read()
    off = 0
    for match in re.finditer(rep, line):
        sys.stdout.write(line[off:match.start()])
        sys.stdout.write(conv(match))
        off = match.end()
    sys.stdout.write(line[off:])

  llvm-svn: 232184
* [opaque pointer type] Add textual IR support for explicit type parameter to the load instruction (David Blaikie, 2015-02-27; 6 files, -9/+9)
  Essentially the same as the GEP change in r230786. A similar migration script can be used to update test cases, though a few more test case improvements/changes were required this time around: (r229269-r229278)

    import fileinput
    import sys
    import re
    pat = re.compile(r"((?:=|:|^)\s*load (?:atomic )?(?:volatile )?(.*?))(| addrspace\(\d+\) *)\*($| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$)")
    for line in sys.stdin:
        sys.stdout.write(re.sub(pat, r"\1, \2\3*\4", line))

  Reviewers: rafael, dexonsmith, grosser Differential Revision: http://reviews.llvm.org/D7649 llvm-svn: 230794
* [opaque pointer type] Add textual IR support for explicit type parameter to the getelementptr instruction (David Blaikie, 2015-02-27; 15 files, -121/+121)
  One of several parallel first steps to remove the target type of pointers, replacing them with a single opaque pointer type. This adds an explicit type parameter to the gep instruction so that when the first parameter becomes an opaque pointer type, the type to gep through is still available to the instructions.

  * This doesn't modify gep operators, only instructions (operators will be handled separately).
  * Textual IR changes only. Bitcode (including upgrade) and changing the in-memory representation will be in separate changes.
  * geps of vectors are transformed as:
      getelementptr <4 x float*> %x, ...
      -> getelementptr float, <4 x float*> %x, ...
    Then, once the opaque pointer type is introduced, this will ultimately look like:
      getelementptr float, <4 x ptr> %x
    with the unambiguous interpretation that it is a vector of pointers to float.
  * Address spaces remain on the pointer, not the type:
      getelementptr float addrspace(1)* %x
      -> getelementptr float, float addrspace(1)* %x
    Then, eventually:
      getelementptr float, ptr addrspace(1) %x

  Importantly, the massive amount of test case churn has been automated by the same crappy python code. I had to manually update a few test cases that wouldn't fit the script's model (r228970, r229196, r229197, r229198). The python script just massages stdin and writes the result to stdout; I then wrapped that in a shell script to handle replacing files, then used the usual find+xargs to migrate all the files.

  update.py:

    import fileinput
    import sys
    import re
    ibrep = re.compile(r"(^.*?[^%\w]getelementptr inbounds )(((?:<\d* x )?)(.*?)(| addrspace\(\d\)) *\*(|>)(?:$| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$))")
    normrep = re.compile(r"(^.*?[^%\w]getelementptr )(((?:<\d* x )?)(.*?)(| addrspace\(\d\)) *\*(|>)(?:$| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$))")
    def conv(match, line):
        if not match:
            return line
        line = match.groups()[0]
        if len(match.groups()[5]) == 0:
            line += match.groups()[2]
        line += match.groups()[3]
        line += ", "
        line += match.groups()[1]
        line += "\n"
        return line
    for line in sys.stdin:
        if line.find("getelementptr ") == line.find("getelementptr inbounds"):
            if line.find("getelementptr inbounds") != line.find("getelementptr inbounds ("):
                line = conv(re.match(ibrep, line), line)
        elif line.find("getelementptr ") != line.find("getelementptr ("):
            line = conv(re.match(normrep, line), line)
        sys.stdout.write(line)

  apply.sh:

    for name in "$@"
    do
      python3 `dirname "$0"`/update.py < "$name" > "$name.tmp" && mv "$name.tmp" "$name"
      rm -f "$name.tmp"
    done

  The actual commands. From llvm/src:
    find test/ -name *.ll | xargs ./apply.sh
  From llvm/src/tools/clang:
    find test/ -name *.mm -o -name *.m -o -name *.cpp -o -name *.c | xargs -I '{}' ../../apply.sh "{}"
  From llvm/src/tools/polly:
    find test/ -name *.ll | xargs ./apply.sh

  After that, check-all (with llvm, clang, clang-tools-extra, lld, compiler-rt, and polly all checked out). The extra 'rm' in the apply.sh script is due to a few files in clang's test suite using interesting unicode stuff that my python script was throwing exceptions on. None of those files needed to be migrated, so it seemed sufficient to ignore those cases.

  Reviewers: rafael, dexonsmith, grosser Differential Revision: http://reviews.llvm.org/D7636 llvm-svn: 230786
* ValueTracking: Make isBytewiseValue simpler and more powerful at the same time. (Benjamin Kramer, 2015-02-07; 1 file, -0/+15)
  Turns out there is a simpler way of checking that all bytes in a word are equal than binary decomposition. llvm-svn: 228503
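  In MemCpyOpt terms, isBytewiseValue decides whether a stored constant can serve as a memset fill byte. A sketch of what it enables (hypothetical names; %p1 is 4 bytes past %p0, casts elided, era memset signature with the alignment argument):

    store i32 -1, i32* %p0   ; bytes: ff ff ff ff
    store i32 -1, i32* %p1
    ; => both values are splats of the byte 0xFF, so:
    call void @llvm.memset.p0i8.i64(i8* %pc, i8 -1, i64 8, i32 4, i1 false)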
* Properly update AA metadata when performing call slot optimization (Bjorn Steinbrink, 2015-02-07; 1 file, -0/+22)
  Subscribers: llvm-commits Differential Revision: http://reviews.llvm.org/D7482 llvm-svn: 228500
* Allow call-slot optzn for destinations with a suitable dereferenceable attribute (Bjorn Steinbrink, 2014-10-16; 1 file, -0/+29)
  Summary: Currently, call slot optimization requires that if the destination is an argument, the argument has the sret attribute. This is to ensure that the memory access won't trap. In addition to sret, we can also allow the optimization to happen for arguments that have the new dereferenceable attribute, which gives the same guarantee. Subscribers: llvm-commits Differential Revision: http://reviews.llvm.org/D5832 llvm-svn: 219950
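  A sketch of the newly permitted case (hypothetical @init; %tmpc stands for the i8* cast of %tmp, declarations elided):

    define void @f(i8* dereferenceable(16) %dst) {
      %tmp = alloca [16 x i8]
      call void @init(i8* %tmpc)    ; writes all 16 bytes of the temporary
      call void @llvm.memcpy.p0i8.p0i8.i64(i8* %dst, i8* %tmpc, i64 16, i32 1, i1 false)
      ret void
    }
    ; dereferenceable(16) guarantees a write through %dst cannot trap, so
    ; call slot optimization may rewrite this to: call void @init(i8* %dst)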
* Fix a really bad miscompile introduced in r216865 - the else-if logic (Chandler Carruth, 2014-09-01; 1 file, -9/+36)
  chain became completely broken here as *all* intrinsic users ended up being skipped, and the ones that seemed to be singled out were actually the exact wrong set. This is a great example of why long else-if chains can be easily confusing. Switch the entire code to use early exits and early continues to have simpler (and more importantly, correct) logic here, as well as fixing the reversed logic for detecting and continuing on lifetime intrinsics. I've also significantly cleaned up the test case and added another test case demonstrating an example where the optimization is not (trivially) safe to perform. llvm-svn: 216871
* Ignore lifetime intrinsics in use list for MemCpyOptimizer. Patch by Luqman Aden, review by Hal Finkel. (Nick Lewycky, 2014-09-01; 1 file, -0/+28)
  llvm-svn: 216865
* Don't eliminate memcpy's when the address of the pointer may itself be relevant. Fixes PR18304. Patch by David Wiberg! (Nick Lewycky, 2014-07-14; 6 files, -7/+29)
  llvm-svn: 212970
* Treat lifetime.start'd memory like we treat freshly alloca'd memory. Patch by Björn Steinbrink! (Nick Lewycky, 2014-03-26; 1 file, -0/+21)
  llvm-svn: 204876
* MemCpyOpt: When merging memsets also merge the trivial case of two memsets with the same destination. (Benjamin Kramer, 2014-03-10; 1 file, -0/+12)
  The testcase is from PR19092, but I think the bug described there is actually a clang issue. llvm-svn: 203489
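  The trivial case, as a sketch (era memset signature with the alignment argument):

    call void @llvm.memset.p0i8.i64(i8* %p, i8 0, i64 16, i32 1, i1 false)
    call void @llvm.memset.p0i8.i64(i8* %p, i8 0, i64 32, i32 1, i1 false)
    ; The second memset rewrites everything the first wrote, so the pair
    ; collapses into the single 32-byte memset.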
* A memcpy out of a fresh alloca is a no-op; delete it. Patch by Patrick Walton! (Nick Lewycky, 2014-02-06; 1 file, -0/+25)
  llvm-svn: 200907
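  A sketch (hypothetical names; %tmpc stands for the i8* cast of %tmp): nothing ever writes the alloca, so its contents are undef and the copy transfers no defined bytes:

    %tmp = alloca [16 x i8]
    call void @llvm.memcpy.p0i8.p0i8.i64(i8* %dst, i8* %tmpc, i64 16, i32 1, i1 false)
    ; deleted outright by MemCpyOpt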
* Handle an addrspacecast case in memcpyopt (Matt Arsenault, 2014-01-22; 1 file, -0/+17)
  llvm-svn: 199836
* [tests] Cleanup initialization of test suffixes. (Daniel Dunbar, 2013-08-16; 1 file, -1/+0)
  - Instead of setting the suffixes in a bunch of places, just set one master list in the top-level config. We now only modify the suffix list in a few suites that have one particular unique suffix (.ml, .mc, .yaml, .td, .py).
  - Aside from removing the need for a bunch of lit.local.cfg files, this enables 4 tests that were inadvertently being skipped (one in Transforms/BranchFolding, a .s file each in DebugInfo/AArch64 and CodeGen/PowerPC, and one in CodeGen/SI which is now failing and has been XFAILED).
  - This commit also fixes a bunch of config files to use config.root instead of older copy-pasted code.
  llvm-svn: 188513
* Update Transforms tests to use CHECK-LABEL for easier debugging. No functionality change. (Stephen Lin, 2013-07-14; 6 files, -23/+23)
  This update was done with the following bash script:

    find test/Transforms -name "*.ll" | \
    while read NAME; do
      echo "$NAME"
      if ! grep -q "^; *RUN: *llc" $NAME; then
        TEMP=`mktemp -t temp`
        cp $NAME $TEMP
        sed -n "s/^define [^@]*@\([A-Za-z0-9_]*\)(.*$/\1/p" < $NAME | \
        while read FUNC; do
          sed -i '' "s/;\(.*\)\([A-Za-z0-9_]*\):\( *\)@$FUNC\([( ]*\)\$/;\1\2-LABEL:\3@$FUNC(/g" $TEMP
        done
        mv $TEMP $NAME
      fi
    done

  llvm-svn: 186268
* Fix a potential bug in r183584. (Shuxin Yang, 2013-06-08; 1 file, -22/+0)
  r183584 tries to derive some info from the code *AFTER* a call and apply this derived info to the code *BEFORE* the call, which is not always safe, as the call in question may never return, and in that case the derived info is invalid. Thanks to Duncan for pointing out this potential bug. rdar://14073661 llvm-svn: 183606