path: root/llvm/lib/Transforms
* [LV] Move VectorParts allocation and mapping into PHI widening (NFC) (Matthew Simpson, 2016-09-01; 1 file, -29/+38)

  This patch moves the allocation of VectorParts for PHI nodes into the actual PHI widening code. Previously, we allocated these VectorParts in vectorizeBlockInLoop, and passed them by reference to widenPHIInstruction. Upon returning, we would then map the VectorParts in VectorLoopValueMap.

  This behavior is problematic for the cases where we only want to generate a scalar version of a PHI node. For example, if in the future we only generate a scalar version of an induction variable, we would end up inserting an empty vector entry into the map once we return to vectorizeBlockInLoop. We now no longer need to pass VectorParts to the various PHI widening functions, and we can keep VectorParts allocation as close as possible to the point at which they are actually mapped in VectorLoopValueMap.

  llvm-svn: 280390
* [EarlyCSE] Change C API pass interface for EarlyCSE w/ MemorySSA (Geoff Berry, 2016-09-01; 1 file, -2/+6)

  Previous change broke the C API for creating an EarlyCSE pass w/ MemorySSA by adding a bool parameter to control whether MemorySSA was used or not. This broke the OCaml bindings. Instead, change the old C API entry point back and add a new one to request an EarlyCSE pass with MemorySSA.

  llvm-svn: 280379
* [InstCombine] remove fold of an icmp pattern that should never happen (Sanjay Patel, 2016-09-01; 1 file, -15/+0)

  While removing a scalar shackle from an icmp fold, I noticed that I couldn't find any tests to trigger this code path. The 'and' shrinking transform should be handled by InstCombiner::foldCastedBitwiseLogic() or eliminated with InstSimplify. The icmp narrowing is part of InstCombiner::foldICmpWithCastAndCast().

  Differential Revision: https://reviews.llvm.org/D24031

  llvm-svn: 280370
* [SimplifyCFG] Handle tail-sinking of more than 2 incoming branches (James Molloy, 2016-09-01; 1 file, -28/+90)

  This was a real restriction in the original version of SinkIfThenCodeToEnd. Now it's been rewritten, the restriction can be lifted.

  As part of this, we handle a very common and useful case where one of the incoming branches is actually conditional. Consider:

    if (a)
      x(1);
    else if (b)
      x(2);

  This produces the following CFG:

           [if]
          /    \
      [x(1)]   [if]
         |      |  \
         |      |   \
         |   [x(2)]  |
          \     |   /
           [ end ]

  [end] has two unconditional predecessor arcs and one conditional. The conditional refers to the implicit empty 'else' arc. This same pattern can also be caused by an empty default block in a switch.

  We can't sink the call to x() down to end because no call to x() happens on the third incoming arc (assume that x() has side effects for the sake of argument; if something is safe to speculate we could indeed sink nevertheless, but this cannot happen in the general case and causes many extra selects).

  We are now able to detect this case and split off the unconditional arcs to a common successor:

           [if]
          /    \
      [x(1)]   [if]
         |      |  \
         |      |   \
         |   [x(2)]  |
          \    /     |
      [sink.split]   |
             \      /
             [ end ]

  Now we can sink the call to x() into %sink.split. This can cause significant code simplification in many testcases.

  llvm-svn: 280364
* [SimplifyCFG] Change the algorithm in SinkThenElseCodeToEnd (James Molloy, 2016-09-01; 1 file, -90/+149)

  r279460 rewrote this function to be able to handle more than two incoming edges and took pains to ensure this didn't regress anything.

  This time we change the logic for determining if an instruction should be sunk. Previously we used a single-pass greedy algorithm - sink instructions until one requires more than one PHI node or we run out of instructions to sink.

  This had the problem that sinking instructions that had non-identical but trivially the same operands needed extra logic, so we sunk them aggressively. For example:

    %a = load i32* %b
    %d = load i32* %b
    %c = gep i32* %a, i32 0
    %e = gep i32* %d, i32 1

  Sinking %c and %e would naively require two PHI merges as %a != %d. But the loads are obviously equivalent (and maybe can't be hoisted because there is no common predecessor). This is why we implemented the fairly complex function areValuesTriviallySame(), to look through trivial differences like this. However it's just not clever enough.

  Instead, throw areValuesTriviallySame away, use pointer equality to check equivalence of operands and switch to a two-stage algorithm.

  In the "scan" stage, we look at every sinkable instruction in isolation from end of block to front. If it's sinkable, we keep track of all operands that required PHI merging.

  In the "sink" stage, we iteratively sink the last non-terminator in the source blocks. But when calculating how many PHIs are actually required to be inserted (to work out if we should stop or not) we remove any values that have already been sunk from the set of PHI-merges required, which allows us to be more aggressive.

  This turns an algorithm with potentially recursive lookahead (looking through GEPs, casts, loads and any other instruction potentially not CSE'd) into two linear scans.

  llvm-svn: 280351
* [SimplifyCFG] Fix nondeterministic iteration order (James Molloy, 2016-09-01; 1 file, -2/+2)

  We iterate over the result from SafeToMergeTerminators, so make it a SmallSetVector instead of a SmallPtrSet. Should fix stage3 convergence builds.

  llvm-svn: 280342
* [SimplifyCFG] Improve FoldValueComparisonIntoPredecessors to handle more cases (James Molloy, 2016-09-01; 1 file, -6/+21)

  A very important case is not handled here: multiple arcs to a single block with a PHI. Consider:

    a:
      %1 = icmp %b, 1
      br %1, label %c, label %e
    c:
      %2 = icmp %b, 2
      br %2, label %d, label %e
    d:
      br %e
    e:
      phi [0, %a], [1, %c], [2, %d]

  FoldValueComparisonIntoPredecessors will refuse to fold this, as it doesn't know how to deal with two arcs to a common destination with different PHI values. The answer is obvious - just split all conflicting arcs.

  llvm-svn: 280338
* Add cast to appease windows builder. Fixes build break introduced in r280306. (Nick Lewycky, 2016-08-31; 1 file, -1/+1)

  llvm-svn: 280311
* Add -fprofile-dir= to clang. (Nick Lewycky, 2016-08-31; 1 file, -13/+30)

  -fprofile-dir=path allows the user to specify where .gcda files should be emitted when the program is run. In particular, this is the first flag that causes the .gcno and .o files to have different paths, and LLVM is extended to support this. -fprofile-dir= does not change the file name in the .gcno (and thus where lcov looks for the source) but it does change the name in the .gcda (and thus where the runtime library writes the .gcda file). It's different from GCOV_PREFIX because a user can observe that GCOV_PREFIX_STRIP will strip paths off of -fprofile-dir= but not off of a supplied GCOV_PREFIX.

  To implement this we split -coverage-file into -coverage-data-file and -coverage-notes-file to specify the two different names. The !llvm.gcov metadata node grows from a 2-element form {string coverage-file, node dbg.cu} to 3 elements, {string coverage-notes-file, string coverage-data-file, node dbg.cu}. In the 3-element form, the file name is already "mangled" with .gcno/.gcda suffixes, while the 2-element form left that to the middle-end pass.

  llvm-svn: 280306
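  For illustration of the metadata change (an editor's sketch, not from the commit; the file names and the !1 compile-unit node are hypothetical):

    ; Old 2-element form: one name; the pass appended .gcno/.gcda itself.
    ;   !llvm.gcov = !{!0}
    ;   !0 = !{!"foo.c", !1}                            ; {coverage-file, dbg.cu}

    ; New 3-element form: notes and data names arrive pre-suffixed and may
    ; point into different directories.
    !llvm.gcov = !{!0}
    !0 = !{!"obj/foo.gcno", !"/prof/dir/foo.gcda", !1}  ; {notes, data, dbg.cu}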
* [InstCombine] allow icmp (shr exact X, C2), C fold for splat constant vectors (Sanjay Patel, 2016-08-31; 1 file, -5/+0)

  The enhancement to foldICmpDivConstant (http://llvm.org/viewvc/llvm-project?view=revision&revision=280299) allows us to remove the ConstantInt check; no other changes needed.

  llvm-svn: 280300
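  A hedged sketch (not from the commit; the function name and constants are invented) of the splat-vector shape this now folds:

    define <2 x i1> @shr_exact_splat(<2 x i32> %x) {
      %s = ashr exact <2 x i32> %x, <i32 3, i32 3>
      %c = icmp eq <2 x i32> %s, <i32 5, i32 5>
      ret <2 x i1> %c
    }

  Because the shift is exact, no set bits are discarded, so the compare can fold to roughly icmp eq <2 x i32> %x, <i32 40, i32 40>, as in the scalar case.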
* [InstCombine] allow icmp (div X, Y), C folds for splat constant vectors (Sanjay Patel, 2016-08-31; 1 file, -37/+26)

  Converting all of the overflow ops to APInt looked risky, so I've left that as a TODO.

  llvm-svn: 280299
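  For illustration only (names and constants invented), a splat-vector divide-then-compare that foldICmpDivConstant can now handle like its scalar counterpart:

    define <2 x i1> @div_icmp_splat(<2 x i32> %x) {
      %d = sdiv <2 x i32> %x, <i32 7, i32 7>
      %c = icmp sgt <2 x i32> %d, <i32 3, i32 3>
      ret <2 x i1> %c
    }

  The division can be removed by comparing %x against an adjusted bound directly (here, roughly icmp sgt <2 x i32> %x, <i32 27, i32 27>).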
* [InstCombine] change insertRangeTest() to use APInt instead of Constant; NFCI (Sanjay Patel, 2016-08-31; 3 files, -20/+29)

  This is prep work before changing the callers to also use APInt, which will allow folds for splat vectors. Currently, the callers have ConstantInt guards in place, so no functional change intended with this commit.

  llvm-svn: 280282
* [LoopInfo] Add verification by recomputation. (Michael Zolotukhin, 2016-08-31; 1 file, -1/+1)

  Summary:
  The current implementation of the LI verifier isn't ideal and fails to detect some cases when LI is incorrect. For instance, it checks that all recorded loops are in a correct form, but it has no way to check whether there are other loops (unrecorded in LI) in the function. This patch adds a way to detect such bugs.

  Reviewers: chandlerc, sanjoy, hfinkel

  Subscribers: llvm-commits, silvas, mzolotukhin

  Differential Revision: https://reviews.llvm.org/D23437

  llvm-svn: 280280
* [EarlyCSE] Optionally use MemorySSA. NFC. (Geoff Berry, 2016-08-31; 2 files, -19/+128)

  Summary:
  Use MemorySSA, if requested, to do less conservative memory dependency checking. This change doesn't enable the MemorySSA-enhanced EarlyCSE in the default pipelines, so it should be NFC.

  Reviewers: dberlin, sanjoy, reames, majnemer

  Subscribers: mcrosier, llvm-commits

  Differential Revision: http://reviews.llvm.org/D19821

  llvm-svn: 280279
* [EarlyCSE] Allow forwarding a non-invariant load into an invariant load. (Geoff Berry, 2016-08-31; 1 file, -5/+5)

  Reviewers: sanjoy

  Subscribers: mcrosier, llvm-commits

  Differential Revision: https://reviews.llvm.org/D23935

  llvm-svn: 280265
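  A minimal sketch (invented for this note) of the case this enables: the later load tagged !invariant.load can reuse the value of the earlier plain load from the same pointer:

    define i32 @forward(i32* %p) {
      %v1 = load i32, i32* %p
      %v2 = load i32, i32* %p, !invariant.load !0
      %s = add i32 %v1, %v2      ; EarlyCSE may now replace %v2 with %v1
      ret i32 %s
    }
    !0 = !{}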
* [SLP] Update the debug based on Michael's suggestion. (Chad Rosier, 2016-08-31; 1 file, -2/+3)

  Passing the types/opcode check still doesn't guarantee we'll actually vectorize. Therefore, just make it clear we're attempting to vectorize.

  llvm-svn: 280263
* [SLP] Sink debug after checking for matching types/opcode. (Chad Rosier, 2016-08-31; 1 file, -2/+2)

  Differential Revision: https://reviews.llvm.org/D24090

  llvm-svn: 280260
* s/static inline/static/ for headers I have changed in r279475. NFC. (Tim Shen, 2016-08-31; 1 file, -5/+3)

  llvm-svn: 280257
* [statepoints][experimental] Add support for live-in semantics of values in deopt bundles (Philip Reames, 2016-08-31; 1 file, -0/+26)

  This is a first step towards supporting deopt value lowering and reporting entirely with the register allocator. I hope to build on this in the near future to support live-on-return semantics, but I have a use case which allows me to test and investigate code quality with just the live-in semantics, so I've chosen to start there.

  For those curious, my use case is our implementation of the "__llvm_deoptimize" function we bind to @llvm.deoptimize. I'm choosing not to hard code that fact in the patch and instead make it configurable via function attributes.

  The basic approach here is modelled on what is done for the "Live In" values on stackmaps and patchpoints. (A secondary goal here is to remove one of the last barriers to merging the pseudo instructions.) We start by adding the operands directly to the STATEPOINT SDNode. Once we've lowered to MI, we extend the remat logic used by the register allocator to fold virtual register uses into StackMap::Indirect entries as needed. This does rely on the fact that the register allocator rematerializes. If it didn't along some code path, we could end up with more vregs than physical registers and fail to allocate.

  Today, we *only* fold in the register allocator. This can create some weird effects when combined with arguments passed on the stack because we don't fold them appropriately. I have an idea how to fix that, but it needs this patch in place to work on that effectively. (There's some weird interaction with the scheduler as well, more investigation needed.)

  My near-term plan is to land this patch off-by-default, experiment in my local tree to identify any correctness issues, and then start fixing codegen problems one by one as I find them. Once I have the live-in lowering fully working (both correctness and code quality), I'm hoping to move on to the live-on-return semantics.

  Note: I don't have any *known* miscompiles with this patch enabled, but I'm pretty sure I'll find at least a couple. Thus, the "experimental" tag and the fact it's off by default.

  Differential Revision: https://reviews.llvm.org/D24000

  llvm-svn: 280250
* [SLP] Arguments should be camel case, and start with an upper case letter. NFC. (Chad Rosier, 2016-08-31; 1 file, -2/+2)

  llvm-svn: 280248
* Revert "[SimplifyCFG] Improve FoldValueComparisonIntoPredecessors to handle ↵James Molloy2016-08-311-20/+6
| | | | | | | | more cases" This reverts commit r280218. This *also* causes buildbot errors. Sigh. Not a successful day all around! llvm-svn: 280239
* Revert "[SimplifyCFG] Change the algorithm in SinkThenElseCodeToEnd"James Molloy2016-08-311-127/+86
| | | | | | This reverts commit r280216 - it caused buildbot failures. llvm-svn: 280234
* Revert "[SimplifyCFG] Handle tail-sinking of more than 2 incoming branches"James Molloy2016-08-311-89/+27
| | | | | | This reverts commit r280217. r280216 caused buildbot failures - backing out the entire chain. llvm-svn: 280233
* Revert "[SimplifyCFG] Add a workaround to fix PR30188"James Molloy2016-08-311-10/+0
| | | | | | This reverts commit r280219. r280216 caused buildbot failures - backing out the entire chain. llvm-svn: 280232
* Revert "[SimplifyCFG] Fix bootstrap failure after r280220"James Molloy2016-08-311-21/+5
| | | | | | This reverts commit r280228. r280216 caused buildbot failures - backing out the entire sequence. llvm-svn: 280231
* [SimplifyCFG] Fix bootstrap failure after r280220 (James Molloy, 2016-08-31; 1 file, -5/+21)

  We check that a sinking candidate is used by only one PHI node during our legality checks. However, for instructions that are used by other sinking candidates our heuristic is less conservative. This can result in a candidate actually being illegal when we come to sink it because of how we sunk a predecessor. Do the used-by-only-one-PHI checks again during sinking to ensure we don't crash.

  llvm-svn: 280228
* [SimplifyCFG] Add a workaround to fix PR30188 (James Molloy, 2016-08-31; 1 file, -0/+10)

  We're sinking stores, which is a good thing, but in the process creating selects for the store address operand, which SROA/Mem2Reg can't look through, which caused serious regressions.

  The real fix is in SROA, which I'll be looking into.

  llvm-svn: 280219
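  A sketch of the shape that regressed (invented here, not from the commit): sinking two stores to different allocas yields one store through a select of addresses, which SROA/Mem2Reg cannot look through:

    ; before sinking
    if.then:
      store i32 0, i32* %a
      br label %end
    if.else:
      store i32 0, i32* %b
      br label %end

    ; after sinking (the problematic form)
    end:
      %addr = select i1 %cond, i32* %a, i32* %b
      store i32 0, i32* %addr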
* [SimplifyCFG] Improve FoldValueComparisonIntoPredecessors to handle more cases (James Molloy, 2016-08-31; 1 file, -6/+20)

  A very important case is not handled here: multiple arcs to a single block with a PHI. Consider:

    a:
      %1 = icmp %b, 1
      br %1, label %c, label %e
    c:
      %2 = icmp %b, 2
      br %2, label %d, label %e
    d:
      br %e
    e:
      phi [0, %a], [1, %c], [2, %d]

  FoldValueComparisonIntoPredecessors will refuse to fold this, as it doesn't know how to deal with two arcs to a common destination with different PHI values. The answer is obvious - just split all conflicting arcs.

  llvm-svn: 280218
* [SimplifyCFG] Handle tail-sinking of more than 2 incoming branches (James Molloy, 2016-08-31; 1 file, -27/+89)

  This was a real restriction in the original version of SinkIfThenCodeToEnd. Now it's been rewritten, the restriction can be lifted.

  As part of this, we handle a very common and useful case where one of the incoming branches is actually conditional. Consider:

    if (a)
      x(1);
    else if (b)
      x(2);

  This produces the following CFG:

           [if]
          /    \
      [x(1)]   [if]
         |      |  \
         |      |   \
         |   [x(2)]  |
          \     |   /
           [ end ]

  [end] has two unconditional predecessor arcs and one conditional. The conditional refers to the implicit empty 'else' arc. This same pattern can also be caused by an empty default block in a switch.

  We can't sink the call to x() down to end because no call to x() happens on the third incoming arc (assume that x() has side effects for the sake of argument; if something is safe to speculate we could indeed sink nevertheless, but this cannot happen in the general case and causes many extra selects).

  We are now able to detect this case and split off the unconditional arcs to a common successor:

           [if]
          /    \
      [x(1)]   [if]
         |      |  \
         |      |   \
         |   [x(2)]  |
          \    /     |
      [sink.split]   |
             \      /
             [ end ]

  Now we can sink the call to x() into %sink.split. This can cause significant code simplification in many testcases.

  llvm-svn: 280217
* [SimplifyCFG] Change the algorithm in SinkThenElseCodeToEnd (James Molloy, 2016-08-31; 1 file, -86/+127)

  r279460 rewrote this function to be able to handle more than two incoming edges and took pains to ensure this didn't regress anything.

  This time we change the logic for determining if an instruction should be sunk. Previously we used a single-pass greedy algorithm - sink instructions until one requires more than one PHI node or we run out of instructions to sink.

  This had the problem that sinking instructions that had non-identical but trivially the same operands needed extra logic, so we sunk them aggressively. For example:

    %a = load i32* %b
    %d = load i32* %b
    %c = gep i32* %a, i32 0
    %e = gep i32* %d, i32 1

  Sinking %c and %e would naively require two PHI merges as %a != %d. But the loads are obviously equivalent (and maybe can't be hoisted because there is no common predecessor). This is why we implemented the fairly complex function areValuesTriviallySame(), to look through trivial differences like this. However it's just not clever enough.

  Instead, throw areValuesTriviallySame away, use pointer equality to check equivalence of operands and switch to a two-stage algorithm.

  In the "scan" stage, we look at every sinkable instruction in isolation from end of block to front. If it's sinkable, we keep track of all operands that required PHI merging.

  In the "sink" stage, we iteratively sink the last non-terminator in the source blocks. But when calculating how many PHIs are actually required to be inserted (to work out if we should stop or not) we remove any values that have already been sunk from the set of PHI-merges required, which allows us to be more aggressive.

  This turns an algorithm with potentially recursive lookahead (looking through GEPs, casts, loads and any other instruction potentially not CSE'd) into two linear scans.

  llvm-svn: 280216
* [SimplifyCFG] Tail-merge calls with sideeffects (James Molloy, 2016-08-31; 1 file, -7/+3)

  This was deliberately disabled during my rewrite of SinkIfThenToEnd to keep behaviour at least vaguely consistent with the previous version and keep it as close to NFC as I could.

  There's no real reason not to merge sideeffect calls though, so let's do it! Small fixup along the way to ensure we don't create indirect calls.

  Should fix PR28964.

  llvm-svn: 280215
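  An invented illustration of the newly permitted merge: two side-effecting calls that differ only in an argument sink behind a PHI:

    if.then:
      call void @f(i32 1)
      br label %end
    if.else:
      call void @f(i32 2)
      br label %end

    ; after tail merging
    end:
      %arg = phi i32 [ 1, %if.then ], [ 2, %if.else ]
      call void @f(i32 %arg)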
* [Coroutines] Part 10: Add coroutine promise support. (Gor Nishanov, 2016-08-31; 5 files, -9/+127)

  Summary:
  1) CoroEarly now lowers the llvm.coro.promise intrinsic, which allows obtaining a coroutine promise pointer from a coroutine frame and vice versa.
  2) CoroFrame now interprets the Promise argument of llvm.coro.begin to place the CoroutinePromise alloca at a deterministic offset from the coroutine frame.

  Now the coroutine promise example from docs/Coroutines.rst compiles and produces the expected result (see test/Transforms/Coroutines/ex4.ll).

  Reviewers: majnemer

  Subscribers: llvm-commits, mehdi_amini

  Differential Revision: https://reviews.llvm.org/D23993

  llvm-svn: 280184
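  A hedged sketch of the intrinsic's two directions, following the signature given in docs/Coroutines.rst; the %hdl value and the alignment of 8 are invented:

    declare i8* @llvm.coro.promise(i8*, i32, i1)

    ; coroutine handle -> promise pointer (from = false)
    %promise = call i8* @llvm.coro.promise(i8* %hdl, i32 8, i1 false)

    ; promise pointer -> coroutine handle (from = true)
    %hdl.again = call i8* @llvm.coro.promise(i8* %promise, i32 8, i1 true)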
* [InstCombine] clean up InsertRangeTest; NFCI (Sanjay Patel, 2016-08-31; 1 file, -35/+15)

  It's much less code and easier to read if we don't duplicate everything between the 'Inside' and not 'Inside' cases.

  As noted with the FIXME, the goal is to make this vector-friendly in a follow-up patch.

  llvm-svn: 280183
* [LoadStoreVectorizer] Change VectorSet to Vector to match head and tail positions. (Alina Sbirlea, 2016-08-30; 1 file, -7/+7)

  Resolves PR29148.

  Summary:
  LSV was using two vector sets (heads and tails) to track pairs of adjacent positions to vectorize. A recent optimization is trying to obtain the longest chain to vectorize and assumes the positions in heads (H) and tails (T) match, which is not the case if there are multiple tails for the same head. For example:

    i1: store a[0]
    i2: store a[1]
    i3: store a[1]

  leads to:

    H: i1
    T: i2 i3

  instead of:

    H: i1 i1
    T: i2 i3

  So the positions for instructions that follow i3 will have different indexes in H/T. This patch resolves PR29148.

  This issue also surfaced the fact that if the chain is too long, and TLI returns a "not-fast" answer, the whole chain will be abandoned for vectorization, even though a smaller one would be beneficial. Added a testcase and FIXME for this.

  Reviewers: tstellarAMD, arsenm, jlebar

  Subscribers: mzolotukhin, wdng, llvm-commits

  Differential Revision: https://reviews.llvm.org/D24057

  llvm-svn: 280179
* [LoopVectorizer] Predicate instructions in blocks with several incoming edges (Michael Kuperstein, 2016-08-30; 1 file, -12/+4)

  We don't need to limit predication to blocks that have a single incoming edge, we just need to use the right mask. This fixes PR30172.

  Differential Revision: https://reviews.llvm.org/D24009

  llvm-svn: 280148
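  The "right mask" is sketched below (names invented): a predicated block's mask is the union of its incoming edge masks, so extra predecessors simply OR in:

    ; with VF = 4: each edge mask is the predecessor's block mask ANDed
    ; with the (possibly negated) branch condition on that edge
    %edge.mask.0 = and <4 x i1> %pred0.mask, %cond0.splat
    %edge.mask.1 = and <4 x i1> %pred1.mask, %cond1.splat
    %block.mask  = or <4 x i1> %edge.mask.0, %edge.mask.1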
* [InstCombine] replace divide-by-constant checks with asserts; NFC (Sanjay Patel, 2016-08-30; 1 file, -19/+16)

  These folds already have tests for scalar and vector types, except for the vector div-by-0 case, so I'm adding tests for that.

  llvm-svn: 280115
* [InstCombine] clean up foldICmpDivConstant; NFCI (Sanjay Patel, 2016-08-30; 1 file, -16/+20)

  1. Fix comments to match variable names
  2. Remove redundant CmpRHS variable
  3. Add FIXME to replace some checks with asserts

  llvm-svn: 280112
* [Reassociate] Add additional debug output. NFC. (Chad Rosier, 2016-08-30; 1 file, -0/+2)

  llvm-svn: 280090
* [SimplifyCFG] Properly CSE metadata in SinkThenElseCodeToEnd (James Molloy, 2016-08-30; 1 file, -0/+5)

  This was missing, meaning the metadata in sunk instructions was potentially bogus and could cause miscompiles.

  llvm-svn: 280072
* [RewriteStatepointsForGC] Update comment for same PHI node check. NFC (Anna Thomas, 2016-08-30; 1 file, -1/+2)

  llvm-svn: 280052
* [sanitizer-coverage] add two more modes of instrumentation: trace-div and trace-gep, mostly useful for value-profile-based fuzzing; llvm part (Kostya Serebryany, 2016-08-30; 1 file, -6/+75)

  llvm-svn: 280043
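  A sketch of what the two modes insert (operands invented; the callback names follow compiler-rt's sanitizer-coverage interface, but treat the exact signatures as an assumption):

    ; trace-div: report the divisor of integer divisions
    call void @__sanitizer_cov_trace_div4(i32 %divisor)
    %q = udiv i32 %num, %divisor

    ; trace-gep: report variable GEP indices
    call void @__sanitizer_cov_trace_gep(i64 %idx)
    %elt = getelementptr inbounds i32, i32* %base, i64 %idx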
* ADT: Give ilist<T>::reverse_iterator a handle to the current node (Duncan P. N. Exon Smith, 2016-08-30; 3 files, -12/+12)

  Reverse iterators to doubly-linked lists can be simpler (and cheaper) than std::reverse_iterator. Make it so.

  In particular, change ilist<T>::reverse_iterator so that it is *never* invalidated unless the node it references is deleted. This matches the guarantees of ilist<T>::iterator.

  (Note: MachineBasicBlock::iterator is *not* an ilist iterator, but a MachineInstrBundleIterator<MachineInstr>. This commit does not change MachineBasicBlock::reverse_iterator, but it does update MachineBasicBlock::reverse_instr_iterator. See note at end of commit message for details on bundle iterators.)

  Given the list (with the Sentinel showing twice for simplicity):

    [Sentinel] <-> A <-> B <-> [Sentinel]

  the following is now true:
  1. begin() represents A.
  2. begin() holds the pointer for A.
  3. end() represents [Sentinel].
  4. end() holds the pointer for [Sentinel].
  5. rbegin() represents B.
  6. rbegin() holds the pointer for B.
  7. rend() represents [Sentinel].
  8. rend() holds the pointer for [Sentinel].

  The changes are #6 and #8. Here are some properties from the old scheme (which used std::reverse_iterator):
  - rbegin() held the pointer for [Sentinel] and rend() held the pointer for A;
  - operator*() cost two dereferences instead of one;
  - converting from a valid iterator to its valid reverse_iterator involved a confusing increment; and
  - "RI++->erase()" left RI invalid. The unintuitive replacement was "RI->erase(), RE = end()".

  With vector-like data structures these properties are hard to avoid (since past-the-beginning is not a valid pointer), and don't impose a real cost (since there's still only one dereference, and all iterators are invalidated on erase). But with lists, this was a poor design.

  Specifically, the following code (which obviously works with normal iterators) now works with ilist::reverse_iterator as well:

    for (auto RI = L.rbegin(), RE = L.rend(); RI != RE;)
      fooThatMightRemoveArgFromList(*RI++);

  Converting between iterator and reverse_iterator for the same node uses the getReverse() function.

    reverse_iterator iterator::getReverse();
    iterator reverse_iterator::getReverse();

  Why doesn't iterator <=> reverse_iterator conversion use constructors?

  In order to catch and update old code, reverse_iterator does not even have an explicit conversion from iterator. It wouldn't be safe because there would be no reasonable way to catch all the bugs from the changed semantic (see the changes at call sites that are part of this patch).

  Old code used this API:

    std::reverse_iterator::reverse_iterator(iterator);
    iterator std::reverse_iterator::base();

  Here's how to update from old code to new (that incorporates the semantic change), assuming I is an ilist<>::iterator and RI is an ilist<>::reverse_iterator:

    [Old]                      ==>  [New]
    reverse_iterator(I)             (--I).getReverse()
    reverse_iterator(I)             ++I.getReverse()
    --reverse_iterator(I)           I.getReverse()
    reverse_iterator(++I)           I.getReverse()
    RI.base()                       (--RI).getReverse()
    RI.base()                       ++RI.getReverse()
    --RI.base()                     RI.getReverse()
    (++RI).base()                   RI.getReverse()
    delete &*RI, RE = end()         delete &*RI++
    RI->erase(), RE = end()         RI++->erase()

  =======================================
  Note: bundle iterators are out of scope
  =======================================

  MachineBasicBlock::iterator, also known as MachineInstrBundleIterator<MachineInstr>, is a wrapper to represent MachineInstr bundles. The idea is that each operator++ takes you to the beginning of the next bundle. Implementing a sane reverse iterator for this is harder than ilist. Here are the options:

  - Use std::reverse_iterator<MBB::i>. Store a handle to the beginning of the next bundle. A call to operator*() runs a loop (usually operator--() will be called 1 time, for unbundled instructions). Increment/decrement just works. This is the status quo.
  - Store a handle to the final node in the bundle. A call to operator*() still runs a loop, but it iterates one time fewer (usually operator--() will be called 0 times, for unbundled instructions). Increment/decrement just works.
  - Make the ilist_sentinel<MachineInstr> *always* store that it's the sentinel (instead of just in asserts mode). Then the bundle iterator can sniff the sentinel bit in operator++().

  I initially tried implementing the end() option as part of this commit, but updating iterator/reverse_iterator conversion call sites was error-prone. I have a WIP series of patches that implements the final option.

  llvm-svn: 280032
* [ThinLTO] Indirect call promotion fixes for promoted local functions (Teresa Johnson, 2016-08-29; 1 file, -3/+14)

  Summary:
  Fix a couple issues limiting the application of indirect call promotion in ThinLTO mode:
  - Invoke indirect call promotion before globalopt, since it may eliminate imported functions which appear unreferenced.
  - Invoke indirect call promotion with InLTO=true so that the PGOFuncName metadata is used to get the name for locals which would have been renamed during promotion.

  Reviewers: davidxl, mehdi_amini

  Subscribers: Prazek, llvm-commits, mehdi_amini

  Differential Revision: https://reviews.llvm.org/D24004

  llvm-svn: 280024
* [SLP] Return a boolean value for these static helpers. NFC. (Chad Rosier, 2016-08-29; 1 file, -17/+16)

  Differential Revision: https://reviews.llvm.org/D24008

  llvm-svn: 280020
* [LV] Move insertelement sequence after scalar definitions (Matthew Simpson, 2016-08-29; 1 file, -0/+11)

  After r279649, when getting a vector value from VectorLoopValueMap, we create an insertelement sequence on-demand if the value has been scalarized instead of vectorized. We previously inserted this insertelement sequence before the value's first vector user. However, this insert location is problematic if that user is the phi node of a first-order recurrence. With this patch, we move the insertelement sequence after the last scalar instruction we created when scalarizing the value. Thus, the value's vector definition in the new loop will immediately follow its scalar definitions.

  This should fix PR30183.
  Reference: https://llvm.org/bugs/show_bug.cgi?id=30183

  llvm-svn: 280001
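  An invented sketch of the resulting layout for a scalarized value with VF = 2: the on-demand vector form is built right after the last scalar lane, not down at its first vector user:

    %v.lane0 = add i32 %a.lane0, 1
    %v.lane1 = add i32 %a.lane1, 1
    ; insertelement sequence follows the scalar definitions immediately
    %v.ins0 = insertelement <2 x i32> undef, i32 %v.lane0, i32 0
    %v.vec  = insertelement <2 x i32> %v.ins0, i32 %v.lane1, i32 1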
* [asan] Enable new stack poisoning with store instruction by default (Vitaly Buka, 2016-08-29; 1 file, -1/+1)

  Reviewers: eugenis

  Subscribers: llvm-commits

  Differential Revision: https://reviews.llvm.org/D23968

  llvm-svn: 279993
* ASan: remove variable only used in assertions build (Tim Northover, 2016-08-29; 1 file, -2/+1)

  llvm-svn: 279990
* Use store operation to poison allocas for lifetime analysis. (Vitaly Buka, 2016-08-29; 1 file, -59/+106)

  Summary:
  Calling __asan_poison_stack_memory and __asan_unpoison_stack_memory for small variables is too expensive.

  Code is disabled by default and can be enabled by -asan-experimental-poisoning.

  PR27453

  Reviewers: eugenis

  Subscribers: llvm-commits

  Differential Revision: https://reviews.llvm.org/D23947

  llvm-svn: 279984
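  Conceptually (an invented sketch; the real shadow computation and magic values live in the pass), poisoning a small variable becomes a direct shadow store instead of a runtime call:

    ; instead of:
    ;   call void @__asan_poison_stack_memory(i64 %addr, i64 %size)
    ; emit something along the lines of:
    %shadow = inttoptr i64 %shadow.addr to i8*
    store i8 -8, i8* %shadow   ; 0xf8, an example poison byte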
* [asan] Separate calculation of ShadowBytes from calculating ASanStackFrameLayout (Vitaly Buka, 2016-08-29; 2 files, -33/+57)

  Summary:
  No functional changes, just refactoring to make D23947 simpler.

  Reviewers: eugenis

  Subscribers: llvm-commits

  Differential Revision: https://reviews.llvm.org/D23954

  llvm-svn: 279982
* [SimplifyCFG] Hoisting invalidates metadata (David Majnemer, 2016-08-29; 1 file, -2/+8)

  We forgot to remove optimization metadata when performing hoisting during FoldTwoEntryPHINode.

  This fixes PR29163.

  llvm-svn: 279980