Commit message | Author | Age | Files | Lines | |
---|---|---|---|---|---|
* | [Analysis] Make LocationSize pretty-printing more descriptive | George Burgess IV | 2018-10-10 | 6 | -223/+223 |
| | | | | | | | | | | | | | This is the third patch in a series intended to make https://reviews.llvm.org/D44748 more easily reviewable; the second was r344013. Please see that patch for more context. The intent is to make the output of printing a LocationSize more precise. The main motivation is that we plan to add a bit distinguishing whether a given LocationSize is an upper bound or a precise size; making that information available in pretty-printing is nice. llvm-svn: 344108 | ||||
* | [AST] Add test coverage of memsets | Philip Reames | 2018-09-10 | 1 | -0/+48 |
| | | | | | | Immediately after posting https://reviews.llvm.org/D51895, I noticed a small bug. These tests would have caught that. llvm-svn: 341880 | ||||
* | [AST] Visit memtransfer arguments in order | Philip Reames | 2018-09-10 | 1 | -8/+8 |
| | | | | | | | | | The only point of this change is the test diffs. When I remove this code entirely (in favor of the recently added generic handling), I don't want there to be any confusion due to spurious test diffs. As an aside, the fact that our tests are AST-construction-order dependent is not great. I thought about fixing that, but the reasonable schemes I might want (e.g. sort by name) would produce the same test diffs anyway. Philip llvm-svn: 341841 | ||||
* | [AST] Generalize argument specific aliasing | Philip Reames | 2018-09-07 | 1 | -36/+28 |
| | | | | | | | | | | | | | AliasSetTracker has special-case handling for memset, memcpy, and memmove which predates argmemonly on functions and readonly/writeonly on arguments. This patch generalizes it, using the AA infrastructure, to any correctly annotated call. The motivation here is to cut down on confusion, not performance per se. For most instructions, there is a direct mapping to an alias set. However, this is not guaranteed by the interface and was not in fact true for these three intrinsics *and only these three intrinsics*. I kept getting myself confused about this invariant, so I figured it would be good to clearly distinguish between instructions and alias sets. Calls happened to be an easy target. The nice side effect is that custom implementations of memset/memcpy/memmove - including wrappers discovered by IPO - can now be optimized by LICM the same as the builtins. Note: The actual removal of the memset/memtransfer-specific handling will happen in a follow-on NFC patch. It was originally part of this one, but was separated out for ease of review and rebase. Differential Revision: https://reviews.llvm.org/D50730 llvm-svn: 341713 | ||||
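As a rough illustration of the generalized handling, here is a hedged IR sketch; the `@my_memcpy` wrapper and its caller are invented for this example, not taken from the patch. With `argmemonly` on the function and `readonly`/`writeonly` on the pointer arguments, AA can prove the call only writes through `%dst` and only reads through `%src`, so the tracker can fold it into alias sets the way it previously could only for the three intrinsics.

```llvm
; Hypothetical custom memcpy wrapper, annotated so AA can reason about it:
; the call only touches its pointer arguments (argmemonly), writing through
; %dst and reading through %src.
declare void @my_memcpy(i8* nocapture writeonly %dst,
                        i8* nocapture readonly %src, i64 %len) argmemonly nounwind

define void @caller(i8* noalias %a, i8* noalias %b) {
  ; With the generalized handling, this call is treated like the memcpy
  ; intrinsic: %a lands in a Mod set, %b in a Ref set.
  call void @my_memcpy(i8* %a, i8* %b, i64 32)
  ret void
}
```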
* | [AST] Add a test for attribute intersection | Philip Reames | 2018-08-22 | 1 | -0/+18 |
| | | | | | | Already works, but I initially convinced myself it doesn't, so add a test which shows it does. :) llvm-svn: 340453 | ||||
* | [AST] Remove notion of volatile from alias sets [NFCI] | Philip Reames | 2018-08-21 | 1 | -2/+2 |
| | | | | | | | | | Volatility is not an aliasing property. We used to model volatile as if it had extremely conservative aliasing implications, but that hasn't been true for several years now, so it doesn't make sense to track it in AliasSet. It also turns out the code is entirely a noop. Outside of the AST code to update it, there was only one user: load/store promotion in LICM. L/S promotion doesn't need the check since it walks all the users of the address anyway. It already checks each load or store via !isUnordered, which causes us to bail for volatile accesses. (Look at the lines immediately following the two remove asserts.) There is the possibility of some small compile-time impact here, but the only case which will get noticeably slower is a loop with a large number of loads and stores to the same address where only the last one we inspect is volatile. This is sufficiently rare that it's not worth optimizing for. llvm-svn: 340312 | ||||
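A minimal IR sketch of the point being made (invented example, not from the commit): promotion still refuses to touch the volatile access below, but the reason is LICM's per-access `!isUnordered()` check, not anything recorded in the alias set.

```llvm
define void @no_promote(i32* %p, i64 %n) {
entry:
  br label %loop

loop:
  %i = phi i64 [ 0, %entry ], [ %i.next, %loop ]
  %v = load i32, i32* %p
  ; LICM's load/store promotion bails on this access via !isUnordered(),
  ; so alias sets no longer need to carry a volatile bit for it.
  store volatile i32 %v, i32* %p
  %i.next = add i64 %i, 1
  %done = icmp eq i64 %i.next, %n
  br i1 %done, label %exit, label %loop

exit:
  ret void
}
```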
* | [AST] Clarify printing of unknown size locations [NFC] | Philip Reames | 2018-08-17 | 1 | -0/+22 |
| | | | | | Printing "unknown" is much clearer than an arbitrarily large integer. llvm-svn: 340108 | ||||
* | [AST][Tests] Clarify what each test is doing | Philip Reames | 2018-08-17 | 1 | -20/+23 |
| | | | | llvm-svn: 340100 | ||||
* | [AST][Tests] Shorten tests using noalias params | Philip Reames | 2018-08-17 | 1 | -12/+4 |
| | | | | llvm-svn: 340099 | ||||
* | [AST] Add tests for argmemonly calls [NFC] | Philip Reames | 2018-08-17 | 1 | -0/+130 |
| | | | | | First step towards building a test set to rebase D50730 on top of. Starting with a clone of the memtransfer tests; more to come. llvm-svn: 340095 | ||||
* | [AliasSetTracker] Do not treat experimental_guard intrinsic as memory ↵ | Max Kazantsev | 2018-08-15 | 2 | -48/+94 |
| | | | | | | | | | | | | | | | writing instruction The `experimental_guard` intrinsic has memory write semantics to model the thread-exiting logic, but does not do any actual writes to memory. Currently, `AliasSetTracker` treats it as a normal memory write. As a result, a loop-invariant load cannot be hoisted out of the loop because the guard may alias with it. This patch changes `AliasSetTracker` so that it no longer treats guards as memory writes. Differential Revision: https://reviews.llvm.org/D50497 Reviewed By: reames llvm-svn: 339753 | ||||
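A minimal IR sketch of the situation the patch fixes (example invented for illustration): the guard is declared as writing memory so it cannot be reordered freely, but since it performs no real store, the tracker can now let LICM hoist the loop-invariant load past it.

```llvm
declare void @llvm.experimental.guard(i1, ...)

define i32 @hoist_invariant_load(i32* %p, i1 %cond, i64 %n) {
entry:
  br label %loop

loop:
  %i = phi i64 [ 0, %entry ], [ %i.next, %loop ]
  call void (i1, ...) @llvm.experimental.guard(i1 %cond) [ "deopt"() ]
  ; Loop-invariant load: with the guard no longer treated as a memory
  ; write, %p's alias set is not marked Mod by it, so LICM can hoist this.
  %v = load i32, i32* %p
  %i.next = add i64 %i, 1
  %done = icmp eq i64 %i.next, %n
  br i1 %done, label %exit, label %loop

exit:
  ret i32 %v
}
```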
* | [NFC] Add comprehensive test of AliasSetTracker with guards | Max Kazantsev | 2018-08-14 | 1 | -0/+1550 |
| | | | | llvm-svn: 339643 | ||||
* | [AliasSet] Teach the alias set how to handle atomic memcpy/memmove/memset | Daniel Neilson | 2018-05-30 | 1 | -0/+77 |
| | | | | | | | | Summary: The atomic variants of the memcpy/memmove/memset intrinsics can be treated the same way as the regular forms with respect to aliasing. Update the AliasSetTracker to treat the atomic forms the same way as the regular forms. llvm-svn: 333551 | ||||
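For illustration, a small invented test in the spirit of the ones added: the element-wise atomic form gets the same Mod/Ref treatment as plain `@llvm.memcpy` (destination modified, source only read).

```llvm
; Element-wise atomic memcpy: the last operand is the element size, and the
; pointer arguments must carry at least that much alignment.
declare void @llvm.memcpy.element.unordered.atomic.p0i8.p0i8.i32(
    i8* nocapture writeonly, i8* nocapture readonly, i32, i32)

define void @atomic_copy(i8* noalias %dst, i8* noalias %src) {
  ; Treated like the plain intrinsic: %dst is Mod, %src is Ref.
  call void @llvm.memcpy.element.unordered.atomic.p0i8.p0i8.i32(
      i8* align 4 %dst, i8* align 4 %src, i32 16, i32 4)
  ret void
}
```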
* | Remove alignment argument from memcpy/memmove/memset in favour of alignment ↵ | Daniel Neilson | 2018-01-19 | 1 | -10/+10 |
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | attributes (Step 1) Summary: This is a resurrection of work first proposed and discussed in Aug 2015 (http://lists.llvm.org/pipermail/llvm-dev/2015-August/089384.html) and initially landed, but then backed out, in Nov 2015 (http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312083.html). The @llvm.memcpy/memmove/memset intrinsics currently have an explicit argument which is required to be a constant integer. It represents the alignment of the dest (and source), and so must be the minimum of the two actual alignments. This change is the first in a series that allows source and dest to each have their own alignment by using the alignment attribute on their arguments. In this change we: 1) remove the alignment argument, and 2) add alignment attributes to the source & dest arguments. We temporarily require that the alignments for source & dest be equal. For example, code which used to read: call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dest, i8* %src, i32 100, i32 4, i1 false) will now read: call void @llvm.memcpy.p0i8.p0i8.i32(i8* align 4 %dest, i8* align 4 %src, i32 100, i1 false) Downstream users may have to update their lit tests that check for @llvm.memcpy/memmove/memset call/declaration patterns. The following extended sed script may help with updating the majority of your tests, but it does not catch all possible patterns, so some manual checking and updating will be required (one command per line):
s~declare void @llvm\.mem(set|cpy|move)\.p([^(]*)\((.*), i32, i1\)~declare void @llvm.mem\1.p\2(\3, i1)~g
s~call void @llvm\.memset\.p([^(]*)i8\(i8([^*]*)\* (.*), i8 (.*), i8 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i8(i8\2* \3, i8 \4, i8 \5, i1 \6)~g
s~call void @llvm\.memset\.p([^(]*)i16\(i8([^*]*)\* (.*), i8 (.*), i16 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i16(i8\2* \3, i8 \4, i16 \5, i1 \6)~g
s~call void @llvm\.memset\.p([^(]*)i32\(i8([^*]*)\* (.*), i8 (.*), i32 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i32(i8\2* \3, i8 \4, i32 \5, i1 \6)~g
s~call void @llvm\.memset\.p([^(]*)i64\(i8([^*]*)\* (.*), i8 (.*), i64 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i64(i8\2* \3, i8 \4, i64 \5, i1 \6)~g
s~call void @llvm\.memset\.p([^(]*)i128\(i8([^*]*)\* (.*), i8 (.*), i128 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i128(i8\2* \3, i8 \4, i128 \5, i1 \6)~g
s~call void @llvm\.memset\.p([^(]*)i8\(i8([^*]*)\* (.*), i8 (.*), i8 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i8(i8\2* align \6 \3, i8 \4, i8 \5, i1 \7)~g
s~call void @llvm\.memset\.p([^(]*)i16\(i8([^*]*)\* (.*), i8 (.*), i16 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i16(i8\2* align \6 \3, i8 \4, i16 \5, i1 \7)~g
s~call void @llvm\.memset\.p([^(]*)i32\(i8([^*]*)\* (.*), i8 (.*), i32 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i32(i8\2* align \6 \3, i8 \4, i32 \5, i1 \7)~g
s~call void @llvm\.memset\.p([^(]*)i64\(i8([^*]*)\* (.*), i8 (.*), i64 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i64(i8\2* align \6 \3, i8 \4, i64 \5, i1 \7)~g
s~call void @llvm\.memset\.p([^(]*)i128\(i8([^*]*)\* (.*), i8 (.*), i128 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i128(i8\2* align \6 \3, i8 \4, i128 \5, i1 \7)~g
s~call void @llvm\.mem(cpy|move)\.p([^(]*)i8\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i8 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i8(i8\3* \4, i8\5* \6, i8 \7, i1 \8)~g
s~call void @llvm\.mem(cpy|move)\.p([^(]*)i16\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i16 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i16(i8\3* \4, i8\5* \6, i16 \7, i1 \8)~g
s~call void @llvm\.mem(cpy|move)\.p([^(]*)i32\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i32 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i32(i8\3* \4, i8\5* \6, i32 \7, i1 \8)~g
s~call void @llvm\.mem(cpy|move)\.p([^(]*)i64\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i64 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i64(i8\3* \4, i8\5* \6, i64 \7, i1 \8)~g
s~call void @llvm\.mem(cpy|move)\.p([^(]*)i128\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i128 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i128(i8\3* \4, i8\5* \6, i128 \7, i1 \8)~g
s~call void @llvm\.mem(cpy|move)\.p([^(]*)i8\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i8 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i8(i8\3* align \8 \4, i8\5* align \8 \6, i8 \7, i1 \9)~g
s~call void @llvm\.mem(cpy|move)\.p([^(]*)i16\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i16 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i16(i8\3* align \8 \4, i8\5* align \8 \6, i16 \7, i1 \9)~g
s~call void @llvm\.mem(cpy|move)\.p([^(]*)i32\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i32 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i32(i8\3* align \8 \4, i8\5* align \8 \6, i32 \7, i1 \9)~g
s~call void @llvm\.mem(cpy|move)\.p([^(]*)i64\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i64 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i64(i8\3* align \8 \4, i8\5* align \8 \6, i64 \7, i1 \9)~g
s~call void @llvm\.mem(cpy|move)\.p([^(]*)i128\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i128 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i128(i8\3* align \8 \4, i8\5* align \8 \6, i128 \7, i1 \9)~g
The remaining changes in the series will: Step 2) Expand the IRBuilder API to allow creation of memcpy/memmove with differing source and dest alignments. Step 3) Update Clang to use the new IRBuilder API. Step 4) Update Polly to use the new IRBuilder API. Step 5) Update LLVM passes that create memcpy/memmove calls to use the new IRBuilder API, and those that use MemIntrinsicInst::[get|set]Alignment() to use getDestAlignment() and getSourceAlignment() instead. Step 6) Remove the single-alignment IRBuilder API for memcpy/memmove, and the MemIntrinsicInst::[get|set]Alignment() methods. Reviewers: pete, hfinkel, lhames, reames, bollu Reviewed By: reames Subscribers: niosHD, reames, jholewinski, qcolombet, jfb, sanjoy, arsenm, dschuff, dylanmckay, mehdi_amini, sdardis, nemanjai, david2050, nhaehnle, javed.absar, sbc100, jgravelle-google, eraman, aheejin, kbarton, JDevlieghere, asb, rbar, johnrusso, simoncook, jordy.potman.lists, apazos, sabuasal, llvm-commits Differential Revision: https://reviews.llvm.org/D41675 llvm-svn: 322965 | ||||
* | Use WeakVH instead of WeakTrackingVH in AliasSetTracker's UnknownInsts | Sanjoy Das | 2017-05-01 | 1 | -0/+25 |
| | | | | | | | | | | | | | In cases where an instruction (a call site, say) is RAUW'ed with some other value (this is possible via the `returned` attribute, for instance), we want the slot in UnknownInsts to point to the original Instruction we wanted to track, not the value it got replaced by. Fixes PR32587. This relands r301426. llvm-svn: 301814 | ||||
* | Reverts commit r301424, r301425 and r301426 | Sanjoy Das | 2017-04-26 | 1 | -25/+0 |
| | | | | | | | | | | | Commits were: "Use WeakVH instead of WeakTrackingVH in AliasSetTracker's UnknownInsts" "Add a new WeakVH value handle; NFC" "Rename WeakVH to WeakTrackingVH; NFC" The changes assumed pointers are 8-byte aligned on all architectures. llvm-svn: 301429 | ||||
* | Use WeakVH instead of WeakTrackingVH in AliasSetTracker's UnknownInsts | Sanjoy Das | 2017-04-26 | 1 | -0/+25 |
| | | | | | | | | | | | | | | | | | | | Summary: In cases where an instruction (a call site, say) is RAUW'ed with some other value (this is possible via the `returned` attribute, amongst other things), we want the slot in UnknownInsts to point to the original Instruction we wanted to track, not the value it got replaced by. Fixes PR32587. Reviewers: davide Subscribers: mcrosier, llvm-commits Differential Revision: https://reviews.llvm.org/D32268 llvm-svn: 301426 | ||||
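A short invented IR example of the RAUW-via-`returned` scenario (the `@passthrough` function is hypothetical): because the callee promises to return its argument, optimizers may replace uses of `%q` with `%p`. A plain WeakVH slot in UnknownInsts keeps pointing at the original call, rather than silently following the replacement value the way WeakTrackingVH would.

```llvm
; 'returned' promises the callee returns its argument, so passes may
; replace uses of the call's result (%q) with the argument (%p).
declare i8* @passthrough(i8* returned %p)

define void @g(i8* %p) {
  %q = call i8* @passthrough(i8* %p)
  store i8 0, i8* %q
  ret void
}
```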
* | [AliasSetTracker] Make AST smarter about assume intrinsics that don't ↵ | Chad Rosier | 2016-11-07 | 1 | -0/+19 |
| | | | | | | | | actually affect memory. Differential Revision: https://reviews.llvm.org/D26252 llvm-svn: 286108 | ||||
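A minimal invented example of the case being handled: `@llvm.assume` is modeled as writing memory (so it stays pinned in place), but it stores nothing, so it should not force `%p`'s alias set to Mod.

```llvm
declare void @llvm.assume(i1)

define i32 @read_after_assume(i32* %p, i32 %x) {
  %c = icmp sgt i32 %x, 0
  ; No real memory write happens here; the tracker can ignore it.
  call void @llvm.assume(i1 %c)
  %v = load i32, i32* %p
  ret i32 %v
}
```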
* | Revert "[AliasSetTracker] Make AST smarter about intrinsics that don't ↵ | Chad Rosier | 2016-10-26 | 1 | -54/+0 |
| | | | | | | | | | | | actually affect memory." This reverts commit r285191. LICM appears to rely on the Alias Set Tracker hitting lifetime markers to prevent code from being moved outside of the original scope. llvm-svn: 285227 | ||||
* | [AliasSetTracker] Make AST smarter about intrinsics that don't actually ↵ | Chad Rosier | 2016-10-26 | 1 | -0/+54 |
| | | | | | | | | affect memory. Differential Revision: https://reviews.llvm.org/D25969 llvm-svn: 285191 | ||||
* | [AliasSetTracker] Add support for memcpy and memmove. | Chad Rosier | 2016-10-19 | 1 | -0/+114 |
| | | | | | | Differential Revision: https://reviews.llvm.org/D25776 llvm-svn: 284630 | ||||
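For illustration, an invented test in the style this change enables, using the memcpy signature of the time (explicit alignment argument): the destination pointer should end up in a Mod set and the source pointer in a Ref set.

```llvm
declare void @llvm.memcpy.p0i8.p0i8.i64(i8* nocapture, i8* nocapture readonly, i64, i32, i1)

define void @copy(i8* noalias %d, i8* noalias %s) {
  ; %d is modified by the copy, %s is only read.
  call void @llvm.memcpy.p0i8.p0i8.i64(i8* %d, i8* %s, i64 64, i32 4, i1 false)
  ret void
}
```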
* | [AliasSetTracker] Degrade AliasSetTracker when may-alias sets get too large. | Michael Kuperstein | 2016-08-19 | 1 | -0/+53 |
Repeated inserts into AliasSetTracker have quadratic behavior: inserting a pointer into the AST is linear, since it requires walking over all "may" alias sets and running an alias check against every pointer in the set. We can avoid this by tracking the total number of pointers in "may" sets and, when that number exceeds a threshold, declaring the tracker "saturated". This lumps all pointers into a single "may" set that aliases every other pointer. (This is a stop-gap solution until we migrate to MemorySSA.) This fixes PR28832. Differential Revision: https://reviews.llvm.org/D23432 llvm-svn: 279274